Google adds AI-powered overviews for multisearch in Lens

Paulo Boaventura

In addition to a new gesture-powered search feature for Android devices, Google today also introduced an AI-powered addition to its visual search capabilities in Google Lens. Starting today, users will be able to point their camera at something, or upload a photo or screenshot to Lens, then ask a question about what they're seeing to get answers via generative AI.

The feature is an update to the multisearch capability in Lens, which lets users search with both text and images at the same time. Previously, these searches would only surface other visual matches; with today's launch, they'll also return AI-powered results that offer insights.


As one example, Google suggests the feature could be used to learn more about a plant: snap a photo of the plant, then ask "When do I water this?" Instead of just showing the user other images of the plant, Lens identifies the plant and tells the user how often it should be watered, e.g. "every two weeks." These answers draw on information pulled from the web, including websites, product sites, and videos.

The feature also works with Google’s new search gestures, dubbed Circle to Search. That means you can kick off these generative AI queries with a gesture, then ask a question about the item you’ve circled, scribbled on, or otherwise indicated you’re interested in learning more about.

However, Google clarified that while the Lens multisearch feature offers generative AI insights, it's not the same product as Google's experimental genAI search, the Search Generative Experience (SGE), which remains opt-in only.


The AI-powered overviews for multisearch in Lens are launching for everyone in the U.S. in English, starting today. Unlike some of Google’s other AI experiments, it’s not limited to Google Labs. To use the feature, you’ll just tap on the Lens camera icon in the Google search app for iOS or Android, or in the search box on your Android phone.

Similar to Circle to Search, the addition aims to maintain Google Search's relevance in the age of AI. While today's web is cluttered with SEO-optimized garbage, Circle to Search and this adjacent AI-powered capability in Lens aim to improve search results by tapping into a web of knowledge, including many web pages in Google's index, while delivering the results in a different format.

Still, leaning on AI means that the answers may not always be accurate or relevant. Web pages are not an encyclopedia, so the answers are only as accurate as the underlying source material and the AI's ability to answer a question without "hallucinating" (coming up with false answers when actual answers aren't available).

Google notes that its genAI products, like the Search Generative Experience, will cite their sources so users can fact-check the answers. And though SGE will remain in Labs, Google said it will begin to introduce generative AI advances more broadly, when relevant, as it's doing now with multisearch results.

The AI overviews for multisearch in Lens arrive today, while the gesture-based Circle to Search arrives on Jan. 31.

Source: TechCrunch
