
Why deepfakes aren’t just a problem for celebrities


American actress Scarlett Johansson at the photocall for the film Asteroid City at the 2023 Cannes Film Festival. Cannes, France, May 24, 2023.

Mondadori Portfolio | Getty Images

Movie star Scarlett Johansson is taking legal action against an AI app that used her name and an AI-generated version of her voice in an ad without her permission, according to Variety.

The 22-second ad was posted to X, formerly Twitter, on October 28 by AI image generation app Lisa AI: 90s Yearbook & Avatar, according to Variety. The ad featured images of Johansson and an AI-generated voice similar to her own promoting the app. However, the fine print below the ad stated that the AI-generated content “has nothing to do with this person.”

Representatives for Johansson confirmed to Variety that she is not a spokesperson for the app, and her lawyer told the publication that legal action is pending. CNBC has not seen the ad and it appears to have been removed. Lisa AI and a representative for Johansson did not respond to CNBC Make It’s request for comment.

While many celebrities have been the subject of deepfakes, they can also create problems for ordinary people. Here’s what you need to know.

The word deepfake comes from “deep learning,” a subset of machine learning in which algorithms are trained to identify patterns in large data sets, then apply that pattern recognition to new data or produce output resembling the original data.

Here’s a simplified example: An AI model could receive audio clips of a person speaking and learn to identify their speech patterns, tone, and other unique aspects of their voice. The AI model could then create a synthetic version of the voice.
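To make that “learn the patterns, then generate” loop concrete, here is a purely illustrative toy sketch. It is not a real deepfake system: instead of a deep-learning model, it “learns” a made-up speaker’s two characteristic frequencies with a naive DFT scan, then synthesizes a new clip from them. The sample rate, frequencies, and function names are all assumptions chosen for the example.

```python
import math

SAMPLE_RATE = 8000  # hypothetical sample rate for the toy "voice"
N = 800             # 0.1-second clip

def speaker_clip():
    # Stand-in for recordings of a speaker: two characteristic
    # frequencies (made-up values) acting as the "voice".
    return [0.6 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
            + 0.4 * math.sin(2 * math.pi * 660 * t / SAMPLE_RATE)
            for t in range(N)]

def learn_features(clip, top_k=2):
    # Naive DFT magnitude scan: find the strongest frequency bins.
    # This plays the role of "learning the speaker's patterns".
    mags = []
    for k in range(1, N // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / N) for t, x in enumerate(clip))
        im = sum(x * math.sin(2 * math.pi * k * t / N) for t, x in enumerate(clip))
        mags.append((math.hypot(re, im), k * SAMPLE_RATE / N))
    mags.sort(reverse=True)
    return [freq for _, freq in mags[:top_k]]

def synthesize(freqs, n=N):
    # "Generate" a new clip from the learned features alone.
    return [sum(math.sin(2 * math.pi * f * t / SAMPLE_RATE) for f in freqs) / len(freqs)
            for t in range(n)]

learned = learn_features(speaker_clip())
fake = synthesize(learned)
print(sorted(round(f) for f in learned))  # the speaker's frequencies, recovered
```

A real system replaces the DFT scan with a neural network trained on hours of audio, but the shape of the pipeline is the same: analyze recordings of a target, extract a compact representation of their voice, then generate new audio from that representation.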

The problem is that the technology can be used in harmful ways, says Jamyn Edis, an adjunct professor at New York University with more than 25 years of experience in the technology and media industries.

“Deepfakes are simply a new vector for identity theft and fraud and, as such, can be used in similar malicious ways whether or not one is a celebrity,” he told CNBC Make It. “Examples could be your likeness, or that of your loved ones, being used to generate pornography, for extortion, or to bypass security by hijacking your identity.”

What’s even more concerning is that it’s becoming increasingly difficult to tell the difference between what’s real and what’s fake as deepfake technology rapidly evolves, Edis says.

There are several things you can do if you suspect something you’re watching might be a deepfake.

First, ask yourself if the images you see seem to match reality, says Edis. Since celebrities are required to disclose when they are paid to promote products, if you see an ad featuring a celebrity pushing something obscure, it’s a good idea to check their other social media accounts for a disclosure.

Major tech companies, including Meta, Google and Microsoft, are also developing tools to help people detect deepfakes.

President Biden recently signed the first executive order on AI, which calls for watermarking to clearly label AI-generated content, among other safety measures.

However, technology has always been one step ahead of regulation and attempts to rein it in, Edis says.

“Over time, social norms and legal regulations generally correct humanity’s worst instincts,” he says.
“Until then, we will continue to see deepfake technology being weaponized to achieve negative outcomes.”

DON’T MISS: Want to be smarter and more successful with your money, your job and your life? Subscribe to our new newsletter!

CNBC will host its Your Money virtual event on November 9 at 12 p.m. ET, with experts including Jim Cramer, Ben McKenzie and Farnoosh Torabi. Learn how to boost your finances, invest for the future, and mitigate risk amid record inflation. Free registration here.

CHECK OUT: This new tool lets artists “poison” their work to deter AI companies from scraping it

