
Voters face a ‘significant threat’ from the surge of AI-generated fraud as experts scramble to stop election manipulation.

New Hampshire voters were recently targeted by a phony robocall impersonating Joe Biden, asking them not to cast their votes.
Following the robocall that targeted New Hampshire residents with a faked message from President Biden, experts warn that voters may be swamped with AI-generated content, potentially interfering with the 2024 primary and presidential elections.

While threat actors use AI to circumvent existing security measures and launch larger, faster, and more covert attacks, researchers are now utilizing AI techniques to develop new defensive capabilities. Meanwhile, Optiv Vice President of Cyber Risk, Strategy, and Board Relations James Turgal told Fox News Digital that generative AI is a “significant threat.”

“I believe the greatest impact could be AI’s capacity to disrupt the security of party election offices, volunteers and state election systems,” he stated.

Turgal, an FBI veteran, pointed out that a threat actor’s objectives can include manipulating vote tallies, eroding public trust in election results, or inciting violence. Worse, they can now do it at scale.

“In the end, the threat posed by AI to the American election system is no different than the use of malware and ransomware deployed by nation-states and organized crime groups against our personal and corporate networks on a daily basis,” he stated. “The battle to mitigate these threats can and should be fought by both the United States government and the private sector.”

Turgal recommended that election offices implement policies to guard against social engineering attacks and that employees take part in deepfake video training that educates them on attack vectors like email, text, and social media platforms, as well as in-person and telephone-based attempts, to lessen the threat.

Additionally, he emphasized that private-sector enterprises developing AI tools, such as large language model chatbots, must ensure that those chatbots deliver correct election-related information. To accomplish this, businesses must verify that their AI models are trained to inform consumers of their limits and redirect them to reliable sources, such as official election websites.

In response to a question from Fox News Digital about whether voters would see more AI-generated voices that could influence their choices, Chris Mattmann, Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory (JPL), stated, “The secret is out.”

The New Hampshire attorney general’s office is investigating the spoofed AI call of Biden, which came from an unidentified source. Experts said it is practically impossible to identify the software that generated a voice imitation, since such tools are freely available as web services and applications.

Voters in New Hampshire were informed by the digitally altered voice of Joe Biden that sending in their ballots on Tuesday, January 23, would aid Republicans in their “quest” to reelect Trump.

The voice further asserted that their vote would count in November but not in the primary.

Although deliberate attempts to restrict voting or influence voting choices are punishable by federal statutes, regulations regarding the fraudulent use of artificial intelligence have not yet been implemented.

Mattmann noted that the situation worsens when these audio snippets, produced quickly and convincingly, are combined with AI-generated video trained on millions of recordings from the public domain.

“When they become virtually indiscernible, even to computer systems, we run into serious issues and cannot do specific tasks, such as attribution and detection. That’s when you hear everyone, myself included, becoming anxious,” he remarked.

Spam and spoof calls were once typically produced using a mix of statistical techniques, audio dubbing, and clipping. Because well-known public figures like Joe Biden have been recorded speaking countless common words, it can be challenging to distinguish real from fake audio, even when using software and paying close attention to details like background noise and voice tone.

These AI voice models can learn to mimic a human voice accurately in two to eight hours, even on a low-end machine. Voice cloning now takes a fraction of that time, however, thanks to Microsoft’s new text-to-speech technology, VALL-E.

“This can replicate your voice in just three seconds. And by ‘clone it,’ I mean that it is possible to get Joe Biden to say things that he has never said,” Mattmann added.

Data collection by virtual voice assistants has contributed significantly to the rapid development of voice cloning software. For years, devices such as Apple’s Siri, Google Assistant, and Amazon Alexa have recorded voice clips, which are used to train AI models and improve speech replication.

Tech companies have occasionally even retained voice recordings of their customers without getting their legally required informed consent. Amazon was forced to pay $25 million to resolve a lawsuit in May after regulators claimed the business had breached privacy regulations by retaining voice recordings of minors “forever.”

“Basically, they’ve been listening to our data for years, whether we clicked yes or not,” Mattmann stated.

Threat actors are not the only ones using AI to manipulate information. Several political organizations and politicians have used the technology to target their rivals within the past year.

Florida Governor Ron DeSantis was the first US presidential candidate to use the technology in a political attack ad. In June, DeSantis’ now-dissolved campaign released AI-generated images of Dr. Anthony Fauci and former President Donald Trump embracing. In response to worries that the images would be used to sway voters, Twitter quickly gave users access to a “context” bubble.

“Through the looking glass, that is. It was a scene out of Alice in Wonderland. It’s like, oh my goodness, they may reply with this the very same day,” Mattmann remarked.

A second AI-powered political advertisement was released in December, this one by the House Republicans’ campaign arm. The ad used AI to create images of migrant camps scattered across some of the country’s most well-known national parks and landmarks.

According to Mattmann, a seasoned AI specialist, the federal government has asked the Federal Election Commission and other institutions to test labels that could be placed on AI-generated campaign materials to tell voters about the content’s source. No such rule has been implemented yet, though.

Although methods and software exist to determine whether political content is AI-generated, he said that many research groups lack access to them and that very few, if any, campaigns are aware of the technical details.

“This will take us into an audio-wise and slightly beyond that, audio, text, and multimodal video realm in political campaigns and things like that, where they’re going to have tools in not this election cycle but the next one ready for this. But here is the challenge,” Mattmann continued.
