
U.S. Sen. Richard Blumenthal arrived late to his own press conference Monday, produced a cell phone and played an audio recording in which a facsimile of his voice reported being broken down on the road, stranded unless someone sent money.
“Hi, it’s me. I’m in trouble and I need money,” a digital approximation of Blumenthal’s voice said. “The car broke down on 95 in New London. I don’t know what’s wrong with it and the auto shops won’t help me unless I pay them a $4,000 deposit right away. I’m going to be late to my next press conference if you don’t wire me $4,000 in the next five minutes.”
In reality, Blumenthal was late because he’d been stuck in traffic. An aide produced the recording using artificial intelligence software. He said it took her less than three minutes to create the plea for cash in his voice.
The ease with which his voice could be reproduced and manipulated suggests an alarming potential for scams and malfeasance, Blumenthal said. He made the same point last week during a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, when his opening remarks were written by ChatGPT and delivered in a cloned voice indistinguishable from his own.
That hearing, which included testimony from Sam Altman, CEO of OpenAI, the company behind ChatGPT, came as the federal government explores oversight of emerging AI technology. In its Blumenthal-inspired remarks, ChatGPT warned of the pitfalls that arise when “technology outpaces regulation.”
On Monday, Blumenthal said the voice-cloning software had nearly limitless applications, especially when coupled with “deep fake” visual technology capable of creating images and video using someone’s likeness.
“They could portray me, for example, with Vladimir Putin, offering to endorse him for the Nobel Peace Prize, or urging Ukraine to surrender,” Blumenthal said. “The implications here are deeply serious.”
Blumenthal said he has asked the companies producing audio clones to explain what safeguards they have employed to deter abuse. He said his subcommittee would explore how new technology might be used to detect AI-generated recordings. Ultimately, he expected lawmakers would back regulations to prevent fraudulent use of AI technologies.
State legislators have taken steps to begin the process of regulating the use of artificial intelligence in Connecticut’s government.
Earlier this month, the Senate unanimously advanced a bill requiring an inventory of automated systems used by state agencies and establishing oversight of how they are used in the future. The proposal, championed by Sen. James Maroney, D-Milford, seeks to prevent the technology from making discriminatory decisions on behalf of state agencies.
During the press conference, Maroney and Blumenthal suggested that AI offered the potential for scientific breakthroughs in areas like cancer research and climate change. Even the voice-cloning software could be put to good use, Maroney said.
“If you have Lou Gehrig’s [Disease] and at the end, when you’re losing your voice, if you can then communicate with your loved ones using the actual sound of your voice,” he said.
The technology could also be used to tailor public service announcements, Blumenthal said. “I can imagine some good uses, but obviously the potential for evil is tremendous,” he said.
If adopted by the House and signed by the governor, Maroney’s bill could serve as a first step toward informing potential federal regulation, Blumenthal said. Eventually, there may be safeguards against fraudulent uses of voice cloning, he said.
In the meantime, don’t believe every familiar voice you hear, especially if it’s asking for wired money, gift cards or credit card information.
“Don’t fall for it. Verify,” Blumenthal said. “Because anyone who is using these kinds of means of payment very likely is engaged in a fraud.”