
AI tools undermine trust in our own eyes and ears

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections around the world.

Have you ever seen a deepfake? More importantly, can you tell the difference between these AI-generated images, audio clips, and videos and the real thing?

As more than 2 billion voters in 50 countries prepare for national elections in 2024, this question – and the ability of these deepfakes to distort the decisions of potential voters at the polls – has never been more critical.

Case in point: In recent months, people have increasingly reported AI-powered deepfake images, audio and videos on X (formerly Twitter), according to a Brookings Institution review of the platform’s so-called community notes, a crowdsourced fact-checking initiative.

POLITICO has decided to put you to the test.

Using Midjourney, an AI research lab whose technology can create realistic images from simple text prompts typed into the company’s online platform, POLITICO compiled a series of real images alongside images generated by artificial intelligence. Repeated trials around the world have shown that, on average, people can distinguish digital fakes from legitimate images about 60% of the time, according to tech company executives POLITICO spoke with.

Although the technology is still a work in progress, the ability of anyone — including POLITICO journalists — to create such realistic images with just a few clicks on a keyboard worries politicians, policymakers and disinformation experts.

If AI puts such power in the hands of anyone with a laptop, an internet connection, and $50 to access these powerful tools, such fake political content could flood social media feeds in the months to come.

How successful will you be?

Who wants to be a clone?

Among potential deepfake threats this year, cybersecurity and disinformation experts are most concerned about audio.

So far, almost all controversial AI-generated images have been debunked within hours, mainly thanks to social media’s ability to quickly spot flaws in these photos that would otherwise go unnoticed. Big tech companies and independent fact-checkers have also prioritized finding and removing these harmful, politically motivated lies.

But audio – particularly the grainy AI-powered clips that were unsuccessfully used to smear UK Labour Party leader Keir Starmer – remains uncharted territory. The disconnect between what people hear and what they see can often lead them to believe that an inflammatory deepfake audio clip is legitimate.

To test this theory, POLITICO used commercially available technology – costing less than $50 in total to purchase – to see how easy it was to generate a deepfake audio clip. Initially, we planned to clone real politicians’ voices. But because such fakes are both legally dubious and a direct threat to this year’s election cycle, we decided to clone the voices of POLITICO journalists instead.

You can judge for yourself whether these AI-generated clips are good enough to fool you.

AI Biden versus AI Trump

The next frontier of AI deepfakes is video, particularly content that can interact with humans in real time.

And when it comes to politically motivated, AI-generated content, a Soviet-era office building near the German-Polish border has become a proving ground for how far this technology has come.

There, researchers from a group of activists known as the Singularity Group created a real-time online video debate between an AI-powered Joe Biden and an AI-generated Donald Trump.

The project, which has been running for almost nine months, uses so-called open-source technology – AI models freely available to the public. Anyone can submit a debate question via the Amazon-owned streaming service Twitch; the Biden and Trump bots then compute an answer through Singularity’s AI systems and deliver it, mimicking the politicians’ voices and likenesses.

“Deepfakes are a real concern,” said Reese Leysen, one of the activists behind the project, which, importantly, is labeled a parody on Twitch. “We wanted to focus on the politicians so people would notice.”

POLITICO put several real-world debate questions to the fake Biden and Trump. Most of the answers were either too racy or too crude to publish – not surprising, given that the AI system was trained on questions submitted by random people on the internet for almost a year.

But here are the two least graphic videos. Is the technology perfect? Definitely not. But it’s a glimpse of where things are headed.

Emma Krstic, Olivia Martin, Saga Ringmar and Aoife White contributed reporting.

