Connecticut lawmakers are looking to take stock of the use of artificial intelligence and algorithms by state agencies to ensure that automated systems are not making critical decisions based on biases or discriminatory processes.
Last month, legislators on the General Law Committee voted unanimously to advance a bill that would create two new positions in state government — an artificial intelligence officer at the Office of Policy and Management and an artificial intelligence implementation officer at the Department of Administrative Services.
The goal is to inventory automated systems used by state agencies by the end of this year, draft policies for how they should be used, and ensure compliance.
Without the bill, lawmakers worry about what metrics an AI might employ if it were permitted to make critical decisions concerning resident eligibility for things like jobs, health care programs, housing or utility assistance.
During a recent meeting, Sen. James Maroney, a Milford Democrat who co-chairs the general law panel, recalled an example from Indiana that employed a fully automated process to determine whether residents were eligible for Medicaid coverage.
“A woman was trying to get cancer treatment,” Maroney said. “She was getting denied for her treatment by this algorithm. By the time she was able to get a human to intervene so she could get her treatment, which she was eligible for, it was too late and she died.”
Maroney said the example was extreme and not what he expected to find, but illustrated the concerns of the bill’s proponents.
“We want to try to make sure we’re preventing problems and that we’re ethically implementing AI,” he said. “We can see a lot of uses where it will be more efficient, and where we can help streamline processes. We’re not saying not to use it… We’re just saying we need to test it to make sure.”
The legislative committee is not alone in those concerns. Earlier this year, a Connecticut Advisory Committee to the U.S. Commission on Civil Rights released a preliminary report on the use of algorithms by state agencies after testimony and research on the issue raised misgivings about automated decision-making.
“[T]he Committee is concerned that algorithms and the use of computers for decision making may limit individuals’ opportunities such as for employment or credit; prevent access to critical resources or services such as housing; reflect and reproduce existing inequities in highly policed neighborhoods; and/or embed new harmful bias and discrimination through inaccurate language translation for example,” the report read.
The bill advanced by the General Law Committee has bipartisan support. House Minority Leader Vincent Candelora testified in favor of the proposal during a public hearing in February. Candelora lauded sections of the bill that extend recently adopted data privacy policies to state agencies in an effort to curtail identity theft.
“On a daily basis, state agencies process sensitive information about our residents and businesses,” Candelora wrote. “It’s imperative that agencies and the many vendors doing business on behalf of our residents do everything possible to protect consumer data.”
However, the bill was opposed by OPM and DAS, the two agencies it most directly impacts. In written testimony, Mark Raymond, chief information officer at DAS, and Adel Ebeid, a senior policy advisor at OPM, said the bill would have unintended consequences and “exceeds what is possible to achieve at this point in time.”
“We believe that OPM and DAS can establish policies and procedures necessary to ensure AI is integrated in an ethical and equitable manner to protect the rights of our state residents and those doing business in the state without the need for such a task force,” they wrote.