
State agencies in Connecticut will be required to inventory their use of artificial intelligence and conduct impact assessments before implementing new AI systems under legislation signed into law last week by Gov. Ned Lamont. 

The bill, which passed with unanimous support in both the House and Senate, aims to scrutinize government use of AI and algorithms with an eye toward preventing automated systems from making discriminatory decisions. 

In an interview Wednesday, the bill’s proponent, Sen. James Maroney, D-Milford, said it was important for state government to set regulations on AI early in its development. 

“Before we get too far along the road it’s important to have some stakes in the ground,” he said. “Look at it like product viability, in that we’re doing testing commensurate with the risk for the process.”

Beginning in February, state agencies will be prohibited from putting a new AI system into use without first assessing its potential impact.

The law’s inventory requirements will apply to Connecticut’s judicial and executive branches, which must begin accounting for any AI systems in use by the end of this year. 

Inventories will be publicly accessible and include descriptions of the systems in use and their vendors as well as an assessment of whether the system had independently made or informed any state decisions.

Beginning next year, the Department of Administrative Services will be required to conduct ongoing assessments of automated systems to ensure they do not produce discriminatory decisions. 

Automated systems are already at work in state agencies. A 2022 report by the Media Freedom & Information Access Clinic found that the state Department of Education had spent more than $650,000 to acquire an algorithm called the Gale–Shapley deferred acceptance algorithm, which it used to assign students to schools in Hartford. 
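For readers unfamiliar with it, the Gale–Shapley deferred acceptance algorithm mentioned above is a standard matching procedure: students apply to schools in order of preference, and each school tentatively holds its highest-ranked applicants up to capacity, rejecting the rest to try again elsewhere. The sketch below illustrates the general technique only; the student names, preference lists, and seat capacities are hypothetical and have no connection to Hartford's actual data or implementation.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale–Shapley).
# All names, preferences, and capacities are hypothetical illustrations.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Match students to schools; returns {school: sorted list of students}."""
    next_choice = {s: 0 for s in student_prefs}   # index of each student's next proposal
    held = {sc: [] for sc in school_prefs}        # students a school tentatively holds
    free = list(student_prefs)                    # students not yet placed

    def rank(school, student):
        # Lower index in the school's preference list = more preferred
        return school_prefs[school].index(student)

    while free:
        student = free.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue  # student has exhausted every choice; remains unmatched
        school = prefs[next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        # School keeps only its top applicants up to capacity,
        # releasing the worst-ranked held student back into the pool.
        held[school].sort(key=lambda s: rank(school, s))
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())
    return {sc: sorted(students) for sc, students in held.items()}

# Hypothetical example: three students, two schools with one seat each
students = {"ava": ["north", "south"],
            "ben": ["north", "south"],
            "cy":  ["south", "north"]}
schools = {"north": ["ben", "ava", "cy"],
           "south": ["ava", "cy", "ben"]}
caps = {"north": 1, "south": 1}
print(deferred_acceptance(students, schools, caps))
# → {'north': ['ben'], 'south': ['ava']}  (cy is left unmatched)
```

A key property of deferred acceptance is that the resulting match is stable: no student and school both prefer each other over their assigned outcome.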

The new law also establishes a 21-member working group to inform future regulation on issues including the drafting of a “Connecticut AI bill of rights” as well as policies to govern the use of AI by the private sector.

“[The new law] was a first step,” Maroney said, “leading by example with the government testing the AI before we employ it at our different agencies. The next step is to go toward private industry.” 

Connecticut is one of dozens of states grappling with the emergent technology through proposed legislation this year. 

Meanwhile, states have recently pushed for the federal government to develop more oversight of AI. On Monday, Connecticut Attorney General William Tong led a bipartisan group of 22 attorneys general in calling on the National Telecommunications and Information Administration to advance artificial intelligence transparency and testing policies. 

In a press release Tuesday, Tong said that AI was being developed and deployed faster than the government’s ability to understand or regulate it.

“At a minimum, any use of AI should be clearly disclosed—there should be zero confusion as to when and whether we are dealing with real people or AI,” Tong said. 

On Wednesday, Maroney said policymakers would likely be working to regulate artificial intelligence for the foreseeable future. He pointed to the rapid adoption of the AI chatbot ChatGPT, which amassed 100 million active users in its first two months of operation. 

“It’s something that, as a legislature, we’ll be back at every year because it is a rapidly evolving field and it’s being adopted so rapidly as well,” Maroney said.