Technology giant Google announced on September 29, 2021, that it is applying artificial intelligence (AI) advancements, including a new technology called Multitask Unified Model (MUM), in a bid to improve Google Search. This was revealed during the company’s Search On event, where it introduced various new features, including those that leverage MUM, to better connect web searchers with the content they’re looking for, while also making web search feel more natural and intuitive.
One of the new features is called “Things to Know,” and it aims to make it easier for individuals to comprehend new topics they’re looking into. This feature analyzes how people research different topics and then shows web users the aspects of a topic that people are most likely to explore first. The company explained that with this feature, when a person searches for a particular item or activity, it may suggest “Things to know” such as tips, step-by-step instructions, styles, and more.
Google stated that this feature will be available in the coming months, and that it may later be expanded using MUM to help internet users gain insight into a topic beyond what they may have thought to search for. The company is also developing new methods to help internet users refine and expand their searches without having to start from scratch.
To supplement this new feature, Google may recommend links to information about related topics, allowing a person to zoom in on one of them and see a visually rich page of search results and ideas from across the web, including articles, images, videos, and more.
These pages appear to be designed to compete more effectively with Pinterest, as they can help visitors get inspired by their search, similar to how an image-heavy Pinterest board aims to convert people’s visual inspiration into actions, such as visiting a website or making an online purchase. The pages, according to Google, can be useful for queries seeking inspiration, such as “Halloween decorating concepts” or “indoor vertical backyard concepts” and other ideas to try. This feature is currently available for testing on mobile devices.
Google is improving video search as well. The company already uses artificial intelligence to identify significant moments in videos. It will take this even further with a new feature that can discern the topics in a video, even when a topic isn’t expressly discussed in it, and then display links that let viewers delve deeper and learn more.
MUM can be used to figure out what a YouTube video is about and surface related topics as you’re watching it. For example, a video about macaroni penguins could direct viewers to a variety of related videos, such as those explaining how macaroni penguins find their family members.
Even if these topics aren’t expressly addressed in the video, MUM can figure out what to look for. This feature will be available in a beta version on YouTube Search in the coming weeks and will be updated in the coming months to include more visual changes, according to Google. By tapping into YouTube’s massive reach, this transition may also help drive more search traffic to Google. According to surveys, many Gen Z buyers are already searching for online content in a different way than previous generations: they tend to use a variety of social media platforms, have a mobile-first mindset, and are interested in video.
According to “Think with Google” research, 85% of Gen Z teens regularly use YouTube to seek out content, and 80% said YouTube videos had effectively taught them something. Other research has shown that Gen Z likes to learn about new concepts and products through video as well, rather than text, native advertisements, or other content formats. This kind of addition may be necessary for Google because the move to mobile is affecting its search dominance.
More Updates From Google
Additional AI-powered language functions have been added to Google Lens, Google’s visual search tool. With the upgrade, users will be able to refine their searches further by adding text. So, if you use Google Lens to capture a photo of a polka dot dress in order to search for similar items online, you can add the query “dresses with this pattern” to narrow down the results.
In addition, Google is introducing a new “Lens mode” option in its iOS Google app, which allows users to search the web using any image that appears on screen. This will be accessible “soon,” but only in the United States. Google Lens is also coming to the Chrome browser on desktop, allowing users to get visual search results by selecting any image or video while browsing the web. This will be available “soon” all across the world.
These changes are part of Google’s latest drive to improve its search tools by incorporating artificial intelligence (AI) language processing.
Google hopes that these Lens upgrades will make its world-scanning AI more useful. The company gives the example of someone attempting to repair a bicycle but having no idea what the rear wheel mechanism is called. They take a picture with Lens, type “how to fix this” into the search bar, and Google returns relevant results.