Google multisearch is Google's latest search feature that lets you search by image and then add text to that specific image search. Google says this lets searchers "go beyond the search box and ask questions about what you see."

Google multisearch lets you use your phone's camera to search by an image, powered by Google Lens, and then add a text query on top of that image search. Google will then use both the image and the text query to show you visual search results.

Here is how it works: open the Google app on Android or iOS and tap the Google Lens camera icon on the right side of the search box. Then point the camera at something nearby, use a photo from your camera roll, or take a picture of something on your screen. Swipe up on the results to bring them up and tap the "+ Add to your search" button. In that box you can add text to your photo query. Google shared a GIF of this in action and a static image of the flow, but you should be able to try it yourself in English, in the United States.

Google said this feature can help you narrow down your searches. Here are some examples of how multisearch can be helpful:

- Screenshot a stylish orange dress and add the query "green" to find it in another color.
- Snap a photo of your dining set and add the query "coffee table" to find a matching table.
- Take a picture of your rosemary plant and add the query "care instructions".
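If you want to experiment with the same image-plus-text idea in your own tooling, note that Google does not expose the Lens multisearch pipeline as a public API. You can, however, approximate the concept with the public Google Cloud Vision API: let the image produce web entities, then combine the best match with the refinement text to build an ordinary text query. The sketch below is only a minimal illustration of that idea under those assumptions, not Google's actual implementation; the file name and refinement text are placeholders.

```python
# Conceptual sketch only: approximates the "image + text" multisearch idea
# with the public Google Cloud Vision API (pip install google-cloud-vision).
# This is NOT how Google Lens multisearch works internally; the file name
# and refinement text below are placeholder assumptions.
from google.cloud import vision


def build_multisearch_query(image_path: str, refinement: str) -> str:
    """Describe the image with web entities, then append the text refinement."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns entities and visually similar images for the photo.
    response = client.web_detection(image=image)
    entities = response.web_detection.web_entities

    # Take the highest-scoring entity description as the "what is this?" part.
    best = max(
        (e for e in entities if e.description),
        key=lambda e: e.score,
        default=None,
    )
    subject = best.description if best else "unknown item"

    # Combine the visual subject with the user's text refinement,
    # e.g. "orange floral dress" + "green" -> "orange floral dress green".
    return f"{subject} {refinement}".strip()


if __name__ == "__main__":
    # Placeholder inputs: a screenshot of a dress, refined with a color.
    print(build_multisearch_query("dress_screenshot.jpg", "green"))
```

In your own application you would feed the combined query into whatever product or site search you already have; the sketch simply illustrates the "narrow down your search" behavior described above.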
Google made a comment in its blog post saying "this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways. We're also exploring ways in which this feature might be enhanced by MUM – our latest AI model in Search – to improve results for all the questions you could imagine asking." I asked Google if Google multisearch currently uses MUM and Google said no. For more on where Google uses MUM, see our story on how Google uses artificial intelligence in search.

Multisearch is available in English in the US. The feature is live now for me and should be available as a "beta feature in English in the U.S.," Google said. Google also recommended you try it with shopping searches.

As Google releases new ways for consumers to search, your customers may access your content on your website in new ways as well. How consumers access your content – be it desktop search, mobile search, voice search, image search and now multisearch – may matter to you in terms of how likely that customer is to convert, where the searcher is in their buying cycle and more.

Google is also rolling out a new way to use Google Lens on the desktop. The company has been working to better integrate its visual search tools from Google Lens into its browser to enable new types of searches that can identify what you see, not just search for things you type. Instead of opening a new tab to perform a search, you'll be able to use Lens on the same page in your Chrome browser to do things like translating an image's text, identifying an object in an image or getting the original source of an image.

Previously, Google had offered Lens capabilities in Image search and Google Photos on the web, but its fullest offering was on mobile devices. This April, Google also rolled out Lens-powered multisearch capabilities on mobile, allowing users to search with both text and images combined, hinting at the company's broader plans to further invest in Lens technology to make searches feel more natural. Prior to this, the company had announced it would be integrating Lens with Chrome on the desktop as well in the "coming months."
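The Lens abilities coming to Chrome (reading text out of an image, identifying an object, tracing an image back to its source) map loosely onto capabilities Google already exposes to developers through the Cloud Vision API. The snippet below is a rough, hedged approximation of those three lookups for your own images; it is not the Lens or Chrome integration itself, and the file path is a placeholder.

```python
# Rough approximation of the three Lens-on-desktop capabilities using the
# public Google Cloud Vision API (not the Lens/Chrome feature itself).
from google.cloud import vision


def describe_image(image_path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # 1. "Translate an image's text" starts with extracting the text (OCR);
    #    translating it would be a separate step, e.g. the Cloud Translation API.
    text = client.text_detection(image=image).text_annotations
    if text:
        print("Text found in image:", text[0].description.strip())

    # 2. "Identify an object in an image" -> label detection.
    labels = client.label_detection(image=image).label_annotations
    print("Likely subjects:", [label.description for label in labels[:5]])

    # 3. "Get the original source of an image" -> pages with matching images.
    web = client.web_detection(image=image).web_detection
    print("Pages using this image:", [p.url for p in web.pages_with_matching_images[:3]])


if __name__ == "__main__":
    describe_image("example_photo.jpg")  # placeholder path
```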