Google’s AI Revolution: Voice Queries for Images, Videos
- Google is integrating more AI into its search engine, allowing users to voice questions about images and videos.
- The AI-driven makeover began with AI Overviews, summaries written by the technology at the top of Google’s results page.
- Google’s Lens feature, which processes queries about objects in a picture, will be expanded to include voice-activated search features.
- Despite past inaccuracies, Google is confident in its AI’s ability to decide what types of information to feature on the results page.
Google, the tech giant based in Mountain View, California, is set to revolutionize its search engine by integrating more artificial intelligence (AI) into its system. This move is seen as the next step in an AI-driven makeover that Google initiated in mid-May. The new AI capabilities will allow users to voice questions about images and videos, and occasionally, the AI will organize an entire page of search results.
The AI-driven makeover began with Google responding to some queries with summaries written by the technology at the top of its influential results page. These summaries, known as AI Overviews, sparked concerns among publishers. They feared that fewer people would click on search links to their websites, which could potentially undercut the traffic needed to sell digital ads that finance their operations.
In response to these concerns, Google is inserting more links to other websites within the AI Overviews. This move is expected to drive more traffic back to the websites. An analysis released last month by search traffic specialist BrightEdge revealed that the AI Overviews have already been reducing visits to general news publishers such as The New York Times and technology review specialists such as TomsGuide.com.
AI Overviews: A Game Changer
However, the same study found that the citations within AI Overviews are driving more traffic to highly specialized sites such as Bloomberg.com and the National Institutes of Health. Google’s decision to inject more AI into its search engine, the crown jewel of its $2 trillion empire, signifies the company’s commitment to a technology that is propelling the biggest industry shift since Apple unveiled the first iPhone 17 years ago. This move leaves little doubt that Google is tethering its future to AI.
The next phase of Google’s AI evolution builds upon its 7-year-old Lens feature, which processes queries about objects in a picture. Lens now handles more than 20 billion queries per month and is particularly popular among users aged 18 to 24. This younger demographic is a key target for Google as it faces competition from AI alternatives powered by ChatGPT and Perplexity, which are positioning themselves as answer engines.
The new AI capabilities will allow users to point Lens at something through their camera and ask a question about it in English. Users signed up for tests of the new voice-activated search features in Google Labs will also be able to take video of moving objects, such as fish swimming around an aquarium, pose a conversational question, and be presented with an answer through an AI Overview.
Addressing AI’s Blind Spots
Rajan Patel, Google’s vice president of search engineering and a co-founder of the Lens feature, stated, “The whole goal is can we make search simpler to use for people, more effortless to use and make it more available so people can search any way, anywhere they are.” Despite the potential of AI to make search more convenient, the technology also sometimes produces inaccurate information. This risk threatens to damage the credibility of Google’s search engine if the inaccuracies become too frequent.
Google has already had some embarrassing episodes with its AI Overviews, including advising people to put glue on pizza and to eat rocks. The company blamed these missteps on data voids and online troublemakers deliberately trying to steer its AI technology in a wrong direction. Google is now confident that it has fixed some of its AI’s blind spots and will rely on the technology to decide what types of information to feature on the results page.
Despite its previous bad culinary advice about pizza and rocks, AI will initially be used to organize the results for queries in English about recipes and meal ideas entered on mobile devices. The AI-organized results are supposed to be broken down into clusters of photos, videos, and articles about the subject.
This move by Google is reminiscent of the historical shift in the tech industry when Apple unveiled the first iPhone 17 years ago. Just as the iPhone revolutionized the way we use mobile devices, Google’s infusion of AI into its search engine could change the way we search for information online. This could be the beginning of a new era in the tech industry, in which AI plays a more significant role in our daily lives. However, as with any new technology, it will be crucial for Google to address potential issues and concerns to preserve the credibility and reliability of its search engine.