Legislators on the General Law Committee want the state to evaluate the growing role of Artificial Intelligence (AI) in state agency decision-making. AI is a broad term for powerful mathematical tools that can analyze vast amounts of data far more quickly than humans. AI created a fake picture of Pope Francis as a fashion influencer, and another AI system is being sued for defamation. As a tool, AI is neither good nor evil; it’s all about how it is used.

Badly done, AI can deny appropriate access to healthcare. But done well, it has the potential to improve care by removing individual biases, reducing disparities, and promoting fairness. Legislators are right to monitor AI’s use in state services. 

Despite AI’s bad reputation and scary name, healthcare has embraced it across the system to improve care and save money. Last year, the FDA approved 91 new AI-enabled medical devices, up from five in 2015. An AI system is showing promise in detecting breast cancers missed by doctors. I recently got hearing aids that are trained on millions of real-world sounds. They learn from experience to help my brain better understand what I’m hearing.

It’s estimated that AI could save up to $360 billion each year in healthcare costs by making care safer, improving quality, and easing administrative burdens on burnt-out clinicians. We could really use those savings to make care more affordable and accessible.

Despite the potential, AI systems have also raised serious concerns about biases and discrimination. In 2019, researchers found that an AI system, widely used by hospitals to determine patients’ needs, was inappropriately denying care to Black patients. It wasn’t designed to discriminate, but it was developed using data that reflected, and therefore perpetuated, historic underservice for Black patients. That system and others like it had been used to identify the care needs of 150 to 200 million patients. As soon as it was found, the bias was removed, and the system was corrected. While this case has a happy ending, it makes clear the need to regularly monitor, test, and correct biased AI systems. It’s also important to remember that there is great variation in care by race without AI, due to provider and institutional biases and stereotyping. Done well, AI could reduce these biases significantly. 

AI has a history of inappropriately limiting care for people with disabilities. Here in Connecticut, a new AI system inappropriately denied a Medicaid patient with a dangerous medical condition the home health care she’d been receiving. The patient experiences drop seizures that cause her to fall to the ground without warning, so there are things she cannot safely do alone, such as cooking at a stove. Unfortunately, the AI system the state purchased didn’t understand that although she could sometimes complete those tasks alone, doing so could quickly become very dangerous. Sheldon Toubman of Disability Rights Connecticut helped the patient appeal the state’s decision, and it was overturned. According to Mr. Toubman, the state has come to appreciate that AI can be helpful in making care decisions less subjective, but it shouldn’t be the final decider.

Concerned about civil rights, the District of Columbia’s Attorney General has called for banning the use of AI entirely. But that would miss AI’s potential to improve safety and equity while freeing up resources to improve access to care. A report from the Connecticut Advisory Committee to the US Commission on Civil Rights includes some smart recommendations to keep AI safe.

Data security should be solid, and data should never, ever be sold. AI systems must be tested before implementation, audited regularly, and fixed when problems are found. AI should be a tool to help the state and others assess needs, but humans should make the final decisions. This bill should pass.

Ellen Andrews, Ph.D.

Ellen Andrews, Ph.D., is the executive director of the CT Health Policy Project. Follow her on Twitter @CTHealthNotes.

The views, opinions, positions, or strategies expressed by the author are theirs alone, and do not necessarily reflect the views, opinions, or positions of CTNewsJunkie.com or any of the author's other employers.