FCC Fines Robocaller $6M for Using AI to Clone Biden’s Voice in Scam
The proposed $6 million fine is substantial, though the actual amount paid may be less due to various mitigating factors.
The Federal Communications Commission (FCC) has proposed a $6 million fine against the scammer who used AI-generated voice technology to impersonate President Joe Biden in illegal robocalls ahead of the New Hampshire primary election. The case, which combines robocalling with AI misuse, serves as a stark warning to would-be high-tech fraudsters, according to a news report by TechCrunch.
In January, many New Hampshire voters received a robocall featuring a cloned voice of President Biden, falsely instructing them not to vote in the upcoming primary. The voice clone was created using generative AI technology, which has made it easier than ever to replicate voices with just a small sample of speech.
The FCC, along with other law enforcement agencies, quickly moved to address this misuse of AI and telecommunications networks. Loyaan Egal, Chief of the FCC’s Enforcement Bureau, emphasized the agency's commitment to preventing such high-tech scams, stating, "We will act swiftly and decisively to ensure that bad actors cannot use U.S. telecommunications networks to facilitate the misuse of generative AI technology to interfere with elections, defraud consumers, or compromise sensitive data."
The primary perpetrator, political consultant Steve Kramer, worked with Life Corporation, a firm previously charged over illegal robocalls, and routed the calls through the telecom company Lingo (which also operates under several other names, including Americatel, BullsEyeComm, and Clear Choice Communications).
While the FCC can propose fines and declare violations, it relies on local or federal law enforcement to pursue criminal charges, and no criminal proceedings are currently underway against Kramer or his collaborators. Kramer is expected to respond to the allegations, and the FCC is also pursuing action against Lingo, which could face further fines or the loss of its licenses.
In response to this incident, the FCC officially declared the use of AI-generated voices in robocalls illegal in February, ruling that such voices count as "artificial" under existing law and closing a potential loophole.
This case highlights the ongoing challenges in regulating emerging technologies and underscores the importance of robust legal frameworks to protect consumers and ensure the integrity of democratic processes.