Google has recently outlined a series of upcoming Search updates designed to streamline product discovery and combine multiple input types to deliver better contextual results.
The new implementation, built on MUM (Multitask Unified Model), will enable users to search with a mix of inputs, including images as search parameters.
“In the coming months, we’ll introduce a new way to search visually, with the ability to ask questions about what you see.” – Google
As Google’s example shows, the upcoming search update will let users treat an image as a reference point. If you’re looking for socks with a certain design pattern, you could use that image as the trigger for a search for the same pattern in a different product category.
Users could also turn to visual search when they don’t know what something is called (e.g., identifying a specific part of a bicycle).
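Google has not published MUM or any public API for it, but the general technique behind this kind of query, matching an image against candidates from a different product category, resembles multimodal embedding retrieval. Below is a minimal, hypothetical sketch (not Google’s implementation; the model choice, file name, and candidate descriptions are all illustrative) using an open-source CLIP model via the sentence-transformers library:

```python
# Hypothetical sketch of image-to-text product matching, NOT Google's MUM.
# CLIP maps images and text into a shared vector space, so a photo of
# patterned socks can be compared directly against text descriptions of
# products in another category.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # open-source CLIP checkpoint

# The visual reference point (illustrative file name).
query_image = Image.open("patterned_socks.jpg")
image_vec = model.encode(query_image)

# Candidate products from a different category, described as text.
candidates = [
    "plain white crew-neck shirt",
    "argyle-pattern button-up shirt",
    "floral-pattern short-sleeve shirt",
]
candidate_vecs = model.encode(candidates)

# Rank candidates by cosine similarity to the reference image.
scores = util.cos_sim(image_vec, candidate_vecs)[0]
for text, score in sorted(zip(candidates, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.3f}  {text}")
```

A production system would also need to handle the “ask questions about what you see” part, which is where a multitask model like MUM goes beyond a simple embedding lookup.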
In combination with the rollout of a new element called “Things to Know”, MUM will also facilitate broader contextual searches, using machine learning to guide searchers toward the most relevant aspects of a topic.
“If you search for ‘acrylic painting,’ Google understands how people typically explore this topic, and shows the aspects people are likely to look at first. For example, we can identify more than 350 topics related to acrylic painting, and help you find the right path to take.” – Google
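Google has not disclosed how “Things to Know” selects and orders those 350-plus related topics. As a loose conceptual sketch only (the subtopic list here is hand-written; a real system would mine it from search data), related aspects of a query can be surfaced by ranking candidate subtopics by semantic similarity:

```python
# Loose conceptual sketch of surfacing related subtopics for a query,
# in the spirit of "Things to Know". Not Google's system: the subtopics
# are hand-written and the model is an open-source sentence encoder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "acrylic painting"
subtopics = [
    "step-by-step acrylic painting for beginners",
    "how to clean acrylic paint brushes",
    "acrylic pouring techniques",
    "best canvases for acrylic paint",
    "acrylic vs. oil painting",
]

query_vec = model.encode(query)
topic_vecs = model.encode(subtopics)
scores = util.cos_sim(query_vec, topic_vecs)[0]

# Show the most closely related aspects first, as an exploration panel might.
for topic, score in sorted(zip(subtopics, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.3f}  {topic}")
```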