Technology

With memes, social media users have become the red team for half-baked AI features

“Running with scissors is a cardio exercise that can increase your heart rate and requires focus and concentration,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”

Google’s AI feature pulled this answer from a website called Little Old Lady Comedy, which, as the name suggests, is a comedy blog. But the gaffe is so ridiculous that it’s circulating on social media, along with other obviously incorrect AI overviews on Google. In effect, everyday users are now red teaming these products on social media.

In cybersecurity, some companies hire “red teams” – ethical hackers – who attempt to hack their products as if they were bad actors. If a red team discovers a vulnerability, the company can patch it before shipping the product. Google surely conducted some form of red teaming before launching an AI product on Google Search, which is estimated to process billions of queries per day.

So it’s surprising that a company with Google’s considerable resources still ships products with obvious flaws. That’s why clowning on the failures of AI products has become something of a meme, especially at a time when AI is becoming more ubiquitous. We’ve seen it with bad spelling on ChatGPT, video generators failing to understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google’s, don’t understand satire. But these memes could actually serve as useful feedback for companies developing and testing AI.

Despite how high-profile these gaffes are, tech companies often downplay their impact.

“The examples we’ve seen are generally very uncommon queries and are not representative of most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience and will use these isolated examples as we continue to refine our overall systems.”

Not all users see the same AI results, and by the time a particularly bad suggestion circulates, the problem has often already been fixed. In a more recent case that went viral, Google suggested that if the cheese isn’t sticking to your pizza, you can add about an eighth of a cup of glue to the sauce to “give it more stickiness.” It turns out the AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”

Beyond being an egregious mistake, this also signals that AI content deals may be overvalued. Google has a $60 million deal with Reddit to license its content for training AI models, for example. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.com and Tumblr are reportedly in talks to sell data to Midjourney and OpenAI.

To be fair, many of the errors circulating on social media come from unconventional searches designed to trip up the AI. At least, I hope no one is seriously searching for the “health benefits of running with scissors”. But some of these errors are more serious. Science journalist Erin Ross posted on X that Google was spitting out incorrect information about what to do in the event of a rattlesnake bite.

Ross’ post, which received more than 13,000 likes, shows that Google’s AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. According to the US Forest Service, these are all things you should not do if you are bitten. Meanwhile, over on Bluesky, author T Kingfisher amplified a post showing Google’s Gemini misidentifying a poisonous mushroom as a common button mushroom – screenshots of the post spread to other platforms as a warning.

When a bad AI answer goes viral, the AI can get even more confused by the new content around the topic that crops up as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X showing a query asking whether a dog has ever played in the NHL. The AI’s answer was yes – for some reason, the AI called Calgary Flames player Martin Pospisil a dog. Now, when you make the same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking dogs play sports. The AI is being fed its own mistakes, poisoning it further.

That’s the inherent problem with training these large-scale AI models on the internet: sometimes, people lie. But just as there is no rule against a dog playing basketball, there is unfortunately no rule against big tech companies shipping bad AI products.

As the saying goes: garbage in, garbage out.



Source: techcrunch.com