Google Unveils AI Overviews Feature for Search at 2024 I/O Conference

Last May, Google CEO Sundar Pichai said the company would use artificial intelligence to reinvent all of its products.

But because new generative AI technology posed risks, such as spreading misinformation, Google was cautious about applying the technology to its search engine, which is used by more than two billion people and generated $175 billion in revenue last year.

On Tuesday, at Google’s annual conference in Mountain View, Calif., Mr. Pichai showed how the company’s aggressive work on AI had finally trickled down to the search engine. Starting this week, he said, U.S. users will see a feature, AI Overviews, that generates summaries of information on top of traditional search results. By the end of the year, more than a billion people will have access to this technology.

AI Overviews are likely to heighten fears that web publishers will see less traffic from Google Search, putting more pressure on an industry already shaken by changes at other technology platforms. On Google, users will see longer summaries on a topic, which could reduce the need to visit another website, although Google has downplayed these concerns.

“Links included in AI Overviews get more clicks” from users than if they were presented as traditional search results, Liz Reid, vice president of search at Google, wrote in a blog post. “We will continue to focus on sending valuable traffic to publishers and creators.”

The company also unveiled a host of other initiatives, including a lightweight AI model, new chips and so-called agents that help users complete tasks, in a bid to gain the upper hand in an AI competition with Microsoft and OpenAI, the maker of ChatGPT.

“We’re in the very early days of AI platform change,” Pichai said Tuesday at Google’s I/O Developer Conference. “We want everyone to benefit from what Gemini can do,” including developers, start-ups and the public.

When ChatGPT launched in late 2022, some tech industry insiders saw it as a serious threat to Google’s search engine, the most popular way to get information online. Since then, Google has worked aggressively to regain its edge in AI, launching a family of technologies called Gemini, including new AI models for developers and a chatbot for consumers. It has also integrated the technology into YouTube, Gmail and Docs, helping users create videos, emails and drafts with less effort.

Meanwhile, the competition between Google and OpenAI and its partner Microsoft has continued. The day before the Google conference, OpenAI presented a new version of ChatGPT that acts more like a voice assistant.

(The New York Times sued OpenAI and Microsoft in December for copyright infringement over news content related to AI systems.)

At its Silicon Valley event, Google showed how it would integrate AI more deeply into users’ lives. It featured Project Astra, an experiment to see how AI could act like an agent, chatting with users by voice and responding to images and videos. Some of these capabilities will be available to users of Google’s Gemini chatbot later this year, Demis Hassabis, the head of Google DeepMind, the company’s AI lab, wrote in a blog post.

DeepMind also introduced Gemini 1.5 Flash, an AI model designed to be fast and efficient but lighter than Gemini 1.5 Pro, the midtier model that Google has rolled out for several of its consumer services. Dr. Hassabis wrote that the new model was “very capable” of reasoning and was good at summarizing information, chatting, and captioning images and videos.

The company announced another AI model, Veo, which generates high-definition videos based on simple text prompts, similar to OpenAI’s Sora system. Google said some creators can preview Veo and others can join a waitlist for access. Later this year, the company plans to bring some of Veo’s capabilities to YouTube Shorts, the video platform’s TikTok competitor, and other products.

Google also showed off the latest versions of its music generation tool, Lyria, and its image generator, Imagen 3. In February, Google’s Gemini chatbot was criticized by social media users for refusing to generate images of white people and for presenting inaccurate images of historical figures. The company said it would disable the chatbot’s ability to generate images of people until it fixed the issue.

Over the past three months, more than 1 million users have signed up for Gemini Advanced, the version of Google’s chatbot available through a $20 monthly subscription, the company said.

In the coming months, Google will add Gemini Live, which will give users a way to talk to the chatbot via voice commands. The chatbot will respond with natural voices, Google said, and users will be able to interrupt Gemini to ask clarifying questions. Later this year, users will be able to use their cameras to show Gemini Live the physical world around them and discuss it with the chatbot.

In addition to AI Overviews, Google’s search engine will feature AI-organized search results pages, with generated headlines highlighting different types of content. The feature will start with meal and recipe results, and will then be offered for shopping, travel and entertainment queries.

Ms. Reid, the head of search, said in an interview before the conference that she expected the search updates to save users time because Google “can do more of the work for you.”

Mr. Pichai said he expected a large majority of people to interact with Gemini AI technology through Google’s search engine.

“We’re going to make it increasingly seamless for people to interact with Gemini,” Mr. Pichai said in a briefing before the conference.
