
When artificial intelligence is making decisions for state agencies, the public should know. That’s what the Connecticut Advisory Committee to the U.S. Commission on Civil Rights declared Thursday in a memorandum. 

“The state of Connecticut makes thousands of decisions that impact the lives and civil rights of residents every day,” David McGuire, chair of the Advisory Committee, said. “Through our briefings we learned that some of these decisions are made with the use of algorithms. When the state uses an algorithm, residents should know which agency is using the algorithm, the reason it is being used, and assurances that the algorithm is fair.” 

McGuire said Connecticut’s Advisory Committee heard testimony last fall about the lack of transparency from some state agencies regarding the algorithms they use, the ways those algorithms may create or perpetuate discrimination, and regulatory approaches to address bias and discrimination. 

Governments, including the state of Connecticut, use algorithms for everything from screening job applications to assigning students to magnet schools, setting bail, and allocating social welfare benefits of all types. However, critics say that without proper testing and ongoing evaluation, algorithms can function improperly or perpetuate historic biases reflected in the algorithm’s code or, for a machine-learning algorithm, embedded in the data used to train it. 

Researchers from Yale Law School presented a white paper at one of the committee’s briefings showing how difficult it is to hold the state accountable for its use of algorithms. 

The researchers sent Freedom of Information Act (FOIA) requests to three state agencies: the Department of Children and Families (DCF), the Department of Education (DOE), and the Department of Administrative Services (DAS). 

Each agency responded differently; DAS did not respond at all. 

“DCF provided the only complete FOIA response, producing documents on its use of an algorithm intended to reduce the incidence of children suffering a life-threatening episode,” Yale researchers found. “It disclosed basic information about the algorithm but not its source code, which DCF did not possess and claimed to be protected as a trade secret. The production indicated that DCF had not performed a robust evaluation of the algorithm’s efficacy or bias before implementing it or during the three years it was used.”

When it came to the Department of Education, “DOE made a partial production concerning its use of an algorithm to assign students to schools, an issue that has raised substantial disparate racial impact questions in the past. DOE’s disclosure did not reveal how its school-assignment algorithm worked, apart from noting that it implemented the ‘Gale-Shapley deferred acceptance algorithm’ and had no mechanism allowing parents to challenge its determinations.”
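For readers unfamiliar with the method DOE named, deferred acceptance matches students to schools by having students repeatedly propose to their most-preferred remaining school while each school tentatively keeps only its highest-priority applicants up to capacity, bumping the rest. The sketch below is a minimal, hypothetical illustration of that textbook algorithm in Python; the student names, preference lists, and code are assumptions for illustration only and do not represent DOE’s undisclosed implementation.

```python
# Illustrative sketch of student-proposing deferred acceptance (Gale-Shapley).
# Hypothetical data; NOT the DOE's actual code, which was not disclosed.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: dict student -> ordered list of schools
       school_prefs:  dict school  -> ordered list of students (priority order)
       capacities:    dict school  -> number of seats"""
    rank = {s: {stu: i for i, stu in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # next school each student will propose to
    tentative = {s: [] for s in school_prefs}         # students tentatively holding a seat
    unassigned = list(student_prefs)

    while unassigned:
        stu = unassigned.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue                                  # student has exhausted their list
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        tentative[school].append(stu)
        # Keep only the highest-priority students up to capacity; bump the rest.
        tentative[school].sort(key=lambda x: rank[school][x])
        while len(tentative[school]) > capacities[school]:
            unassigned.append(tentative[school].pop())  # lowest-priority tentative admit

    return tentative

# Hypothetical example: two magnet schools with one seat each, three students.
students = {"Ava": ["Magnet A", "Magnet B"],
            "Ben": ["Magnet A", "Magnet B"],
            "Cal": ["Magnet B", "Magnet A"]}
schools = {"Magnet A": ["Ben", "Ava", "Cal"],
           "Magnet B": ["Ava", "Cal", "Ben"]}
print(deferred_acceptance(students, schools, {"Magnet A": 1, "Magnet B": 1}))
# -> {'Magnet A': ['Ben'], 'Magnet B': ['Ava']}; Cal is bumped from both and ends up unassigned
```

Running the sketch on this toy data assigns Ben to Magnet A and Ava to Magnet B, while Cal is bumped from both schools and left unassigned, which illustrates why the absence of a mechanism for parents to challenge the algorithm’s determinations drew the researchers’ attention.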

DAS “provided no documents in response to our request for information concerning a new algorithm used in hiring state employees and contractors.”

Sen. James Maroney, D-Milford, said the data privacy task force has been looking into the issue and will include algorithmic bias as part of its data privacy legislation. 

Maroney said the task force is also considering creating a position for someone to oversee the state’s use of AI. 

He said the state should maintain an inventory that tells the public where AI is being used in decision-making, and should establish a common definition of the technology. 

House Minority Leader Vincent Candelora said the fact that the state even uses AI to make decisions would startle people.

“The idea that government decisions could be made by a computer program rather than humans sitting behind desks would likely surprise a lot of people,” Candelora said. “Considering their potential impact on public policy, there’s certainly value in learning more about how the state uses algorithms and what’s in them.”