Amazon continues to tiptoe into visual search. The latest effort lets users search for items on Amazon using images instead of text – or, rather, in addition to text. The new feature lets shoppers use images to find visually similar products, then refine results with text qualifiers.
If this sounds familiar, it's because Google offers something similar in its Multisearch feature. Announced a few years ago, it lets searchers use images (or a live camera feed) to contextualize objects they encounter in the physical world. Text can then be entered to zero in on desired attributes (e.g., “the same shirt in green”).
All the above can involve physical objects – everything from pets to flowers to landmarks. But where the rubber really meets the road (read: monetization) is with physical goods such as style items. That’s where Google is headed with visual search, and it’s obviously where Amazon’s interests lie.
Amazon, meanwhile, continues to push the ball forward with visual search’s cousin, AR. Building on past features that let users virtually try on clothes and place furniture and decor in home spaces, Amazon this week announced that the capability now extends to tabletop items like lamps and coffee makers.
Best of Both Worlds
Back to visual search and its UX particulars: Amazon users can take a picture of a physical object and use it as a search input to find similar items. But unlike Google Lens’ live camera feed, this requires a static image – either taken on the spot or pulled from the user’s camera roll – which adds an extra step.
But the real magic in the feature, as noted, is the ability to add text descriptors to the mix. This offers the best of both worlds: images are often better for conveying physical attributes that are hard to put into words, while text handles qualifiers that aren’t visible in a given image, such as the product’s manufacturer.
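To make that concrete, here’s a minimal sketch of how image-plus-text retrieval can work under the hood, assuming a CLIP-style model that embeds images and text in a shared vector space. Amazon hasn’t disclosed its implementation, so the model choice, file names, and 50/50 blending weight below are all illustrative assumptions.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP-style model that maps images and text into one vector space.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical product catalog: in practice these embeddings would be
# precomputed offline and stored in a vector index.
catalog_paths = ["shirt_red.jpg", "shirt_green.jpg", "lamp.jpg"]
catalog_embs = model.encode([Image.open(p) for p in catalog_paths])

# The query: a static photo of the spotted item, plus a text qualifier.
image_emb = model.encode(Image.open("street_photo.jpg"))
text_emb = model.encode("the same shirt in green")

# Blend the two modalities into one query vector.
# (The 50/50 weighting is an assumption, not Amazon's method.)
query_emb = 0.5 * image_emb + 0.5 * text_emb

# Rank catalog items by cosine similarity to the blended query.
scores = util.cos_sim(query_emb, catalog_embs)[0]
best = int(scores.argmax())
print(f"Top match: {catalog_paths[best]} (score: {float(scores[best]):.3f})")
```

In a production system, the catalog embeddings would sit in an approximate-nearest-neighbor index rather than a list, but the blending step is the essence of multimodal search: the image anchors what the item looks like, and the text steers toward attributes the image can’t show.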
Fitting use cases for this mix of capabilities include replacement parts for appliances, or style items where the tag or brand isn’t visible in a given image. The latter offers opportunities to identify apparel spotted in the wild – a potentially attractive use case for fashionistas.
The point of all the above is to expand the surface area of search. By meeting users halfway with more search inputs and modalities, Amazon can boost overall query volume in various contexts. As with Google, query volume is the tip of the spear or the top of the funnel (choose your analogy).
Future Proofing
Speaking of surface area, Amazon’s visual search is currently available on its mobile app, but it’s unclear where else it could go. Google, by comparison, has expanded Lens to several user touch points to incubate and expose the product – everywhere from the Google home page to the Chrome address bar.
This makes sense, as visual search is closer to Google’s core product than Amazon’s. For Google, visual search is one way – along with voice search – to future-proof its core business and expand query volume. The latter applies to Amazon too, as noted, but perhaps less urgently, given that search is a non-core function.
That relative lack of urgency can be seen in Amazon’s history with visual search, which brings us back to the ‘tiptoeing’ comment in the intro. The company has moved surprisingly slowly with a feature that seems well aligned with its eCommerce evolution. But better late than never.
Past visual search efforts include partnering with Snap to provide the product database that sits behind Snap’s visual search play, Snap Scan. At the time, we saw this as Amazon’s first experimental step into visual search, to be followed by its own branded offering. That continues to unfold slowly but surely.