‘Feel-good measure’: Google to require visible disclosure in political ads using AI for images and audio

Google is set to require that political advertising that uses artificial intelligence to generate images or audio be accompanied by a disclosure visible to users.

“AI-generated content absolutely must be disclosed in political ads. Failure to do so leaves the American people exposed to misleading and predatory campaign ads,” Ziven Havens, political director of the Bull Moose Project, told Fox News Digital. “In the absence of government action, we support the creation of new rules to manage the new technological frontier before it becomes a major problem.”

Havens’ comments come after Google revealed last week that it would begin requiring disclosure of the use of AI to alter images in political ads starting in November, just over a year before the 2024 elections, according to a PBS report. The search giant will require that the disclosure attached to such ads be “clear and conspicuous” and located in a part of the ad that users are likely to notice.


Google headquarters in Mountain View, California. (Marlena Sloss/Bloomberg via Getty Images)

The move comes as political campaigns have increased their use of AI technology in advertising this cycle, including ads from 2024 GOP hopeful Florida Gov. Ron DeSantis and the Republican National Committee.

In a DeSantis ad in June targeting former President Donald Trump, the campaign used realistic-looking fake images depicting the former president hugging Dr. Anthony Fauci. The ad took aim at Trump for failing to fire Fauci at the height of the pandemic, noting that the former president “became a household name” by firing people on television, but failed to get rid of the controversial infectious disease expert.

A version of the ad included the fabricated images of Trump hugging and kissing Fauci.


Florida Governor Ron DeSantis. (Fox News)

Such warnings could now become commonplace in ads served on Google, although some experts say such labels are unlikely to make much difference.


“I think it’s a feel-good measure that doesn’t accomplish anything,” Christopher Alexander, director of analytics at Pioneer Development Group, told Fox News Digital. “An AI with erroneous content or a human who deliberately lies? Unless you want to start suing politicians for lying, that’s like regulating Colt gun T-shirts as a gun control measure. This kind of complacency and alarmism about AI is irresponsible and simply stifles innovation without accomplishing anything useful.”

Last month, the Federal Election Commission (FEC) unveiled plans to potentially regulate AI-generated content in political ads ahead of the 2024 election, according to the PBS report, while lawmakers such as Senate Majority Leader Chuck Schumer, D-N.Y., have expressed interest in pushing legislation to create regulations for AI-generated content.

But Jonathan D. Askonas, an assistant professor of politics and a member of the Center for the Study of Political Policy at the Catholic University of America, questioned how effective such rules would be.


“The real problem is that Google has such a monopolistic grip on the advertising industry that its dictates matter more than the FEC’s,” Askonas told Fox News Digital. “Some sort of disclaimer or labeling seems rather harmless and amounts to little. What matters more is that policies are implemented without bias. Big Tech’s record there is not encouraging.”

A Google spokesperson told Fox News Digital that the new policy adds to the company’s election ad transparency efforts, which in the past have included “paid for by” disclosures and a public ad library allowing users to view more information.

“Given the increasing prevalence of tools producing synthetic content, we are further expanding our policies to require advertisers to disclose when their election ads include material that has been altered or digitally generated,” the spokesperson said. “This update builds on our existing transparency efforts: it will help further support responsible political advertising and provide voters with the information they need to make informed decisions.”

Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, or CAPTRS, told Fox News Digital that the “intent” behind Google’s new rule is good, but warned that how it will be implemented, and whether it will prove valuable to users, remain key questions. He suggested that the FEC could add rules to the current “Stand by Your Ad” provision that would require candidates to disclose the use of AI-generated content.


The Google logo is displayed on a carpet in the entrance hall of Google France in Paris on November 18, 2019. (AP Photo/Michel Euler, file)

“For example, ‘I approved this message, and it contained AI-generated content,’” Siegel said.


Without such a requirement, Siegel said, the value of Google’s rule would likely be “minimal.”

“The value of the disclosure to the viewer seems minimal, except perhaps in illustrative cases where someone might mistake it for the literal truth,” he said. “It remains to be seen whether campaigns respond to the label by using AI less, or lean into it to create even more misleading ads. They could do the latter and claim it doesn’t matter because it was identified as AI-generated.”

