Google’s AI makes a mistake in the first demo — RT World News

The company hopes its fledgling chatbot can compete with ChatGPT, which critics say is too politicized

Google’s Bard chatbot made a factual error in a demo video posted days before a high-profile launch event in Paris on Wednesday. While the AI bot is still in the testing phase, it is touted as a competitor to Microsoft-backed ChatGPT, a hugely popular AI with its own set of problems.

In a promotional video posted by Google on Monday, a user asks Bard, “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?” The AI returns a number of responses, including one claiming that the telescope “took the first-ever images of a planet outside our own solar system.”

As astrophysicist Grant Tremblay pointed out on Twitter, that answer was wrong. The first such image was taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, he wrote, adding that although “terribly impressive,” AI chatbots are “often wrong with much confidence.”

The error was spotted just before Google unveiled Bard at the Paris event on Wednesday morning, and the company’s market value dropped 8% as news of the mistake spread.

AIs like Bard do not provide accurate results for every question. By sifting through trillions of pages of human-created words and numbers, they predict the most statistically likely answer to a question or prompt. Microsoft acknowledged as much when announcing on Tuesday that its Bing search engine would come with ChatGPT, developed by the Microsoft-funded firm OpenAI, built in.
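The prediction-by-frequency idea described above can be illustrated with a toy bigram model. This is a deliberately minimal sketch, not how Bard or ChatGPT is actually built; real systems use large neural networks trained on vastly bigger corpora, but the failure mode is the same: the model outputs the most likely continuation, which is not necessarily the true one.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the "trillions of pages" real models train on.
corpus = (
    "the telescope took the first image of "
    "the planet the telescope observed "
    "the star the telescope moved"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, right or wrong."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "telescope" -- the most frequent continuation
```

Because “telescope” follows “the” more often than any other word in this corpus, the model always emits it, regardless of whether it is factually appropriate in context.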

“Bing is powered by AI, so surprises and mistakes are possible,” a disclaimer from the company reads.

The development of conversational AI has also been dogged by accusations of political bias among programmers. Tech enthusiasts recently noticed that ChatGPT would refuse to say anything positive about fossil fuels, or even about former US President Donald Trump, while it extols the virtues of a meatless diet and writes poems honoring Trump’s Democratic successor, Joe Biden.

When presented with a hypothetical scenario in which it was asked to utter a racial slur in order to disarm a nuclear bomb, the AI declared that it would condemn millions of people to nuclear annihilation before using “racist language.”

Bard is likely to be hampered by similar politicized constraints, as Google CEO Sundar Pichai said on Monday that it would follow the company’s “responsible” AI principles. These rules state that Google’s AI products will “avoid unfair impacts on people, especially those related to…race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious beliefs.”
