What’s Driving Google’s Interest in Visual Search & Navigation?

Google Lens Gets More Intelligent

Yesterday we reported on the latest updates to Google Maps. The most attention-grabbing of those updates was the expansion of the Live View 3D navigation feature to indoor use. The AR overlays that have so far guided pedestrians in urban areas will now be available for wayfinding in malls & airports.

While digesting these new features (which include more than just visual navigation), it occurred to us that a deeper dive into Google's overall AR and visual-search interests is in order. The company continues to invest heavily in these areas to future-proof its core search business, given Gen Z's affinity for the camera.

Snap is doing something similar, as Snap's Alex Dao told us at Localogy 20/20. It and other social players like Instagram have conditioned millennial and Gen-Z behavior around camera-based experiences. Could this carry over to the future of search, where the camera, along with voice, becomes a go-to input?

Google Maps Gets More Intelligent and Visual

Internet of Places

Much of the above happens within the field of AR. As you likely know, one of AR’s foundational principles is to fuse the digital and physical. The real world is a key part of that formula… and real-world relevance is often defined by location. That principle is at the heart of local search and ad targeting.

Synthesizing these variables, we believe one of AR's battlegrounds will be augmenting the world in location-relevant ways. That could mean wayfinding with Google Live View or visual search with Google Lens: point your phone at places and objects to contextualize them. Google calls it "search what you see."

As you can tell from these examples, Google will have a key stake in this “Internet of Places,” as we’re calling it, but it’s not alone (more on that in a bit). As noted, Google wants to future-proof its core business where the camera will be one of many search inputs. And it’s well-positioned to do so, given existing assets.

For example, and as examined yesterday, Google utilizes imagery from Street View and Google Images as a visual database for object recognition. This lets AR devices “localize” themselves and know what they’re looking at. That forms the basis for its storefront recognition in Google Lens and navigation in Live View.
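For the technically curious, here's that idea in miniature. The sketch below uses OpenCV's ORB feature matching to decide which stored storefront image a camera frame most resembles. It's our own toy illustration of feature-based visual matching, not Google's actual pipeline, which runs against a planet-scale index of imagery with far more sophistication.

```python
# A toy sketch of feature-based visual matching (not Google's pipeline):
# compare a camera frame against a small database of reference images and
# return the best match. Requires opencv-python.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_reference_match(frame_path, reference_paths):
    """Return the reference image whose ORB features best match the frame."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    _, frame_desc = orb.detectAndCompute(frame, None)

    best_ref, best_score = None, 0
    for ref_path in reference_paths:
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        if frame_desc is None or ref_desc is None:
            continue  # no features detected in one of the images
        matches = matcher.match(frame_desc, ref_desc)
        # Count only strong correspondences (low Hamming distance).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_ref, best_score = ref_path, score
    return best_ref, best_score
```

Real systems layer geometric verification and GPS priors on top, but the core idea is the same: the camera frame becomes a query against a visual index.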

Google Upgrades Live View AR Navigation, Part II

Not Alone

As mentioned above, Google isn’t alone. Apple signals interest in location-relevant AR through its geo-anchors. These evoke AR’s location-based underpinnings by letting users plant and discover spatially-anchored graphics. And Apple’s continued efforts to map the world in 3D will be a key puzzle piece.

Meanwhile, Facebook is similarly building “Live Maps.” As explained by Facebook Reality Labs’ chief scientist Michael Abrash at the OC6 conference, this involves building indexes (geometry) and ontologies (meaning) of the physical world. This will be the data backbone for Facebook’s AR ambitions.

Then there's Snapchat, the reigning champion of consumer mobile AR. Snap was propelled early on by selfie lenses, but its larger AR ambitions flip the focus to the rear-facing camera to augment the broader canvas of the physical world. This is the thinking behind its Local Lenses, as we discussed with Snapchat's Dao.

Beyond tech giants, there are compelling startups positioning themselves at the intersection of AR and geolocation. Most notably, Gowalla’s rebirth brings the company’s location-based social UX chops to a new world of geo-relevant AR. And Niantic’s Real World Platform aims to offer robust geo-located AR as a service.

Localogy 20/20: The Intersection of Local & Social

The AR Cloud

As technical background for much of the above, AR devices must understand a scene and localize themselves before they can integrate AR graphics believably. That happens with a combination of mapping the contours of a scene and tapping into previously-mapped metadata. This is known as the AR cloud.
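To make that flow concrete, here's a minimal, in-memory sketch of the handshake: a GPS hint narrows the search to candidate locales, the device's scene features are matched against stored geometry to localize, and previously-planted anchors come back for rendering. Every name here (the AR_CLOUD store, the Anchor type, the match threshold) is hypothetical, standing in for real services such as cloud-anchor APIs.

```python
# A hypothetical, in-memory stand-in for an AR cloud service.
from dataclasses import dataclass

@dataclass
class Anchor:
    anchor_id: str
    pose: tuple      # (x, y, z) position within the locale's local map
    payload: dict    # AR content pinned at that pose

# Hypothetical shared map: locale id -> (stored scene fingerprint, anchors).
AR_CLOUD = {
    "mall-atrium-7": (
        {101, 102, 103, 104},
        [Anchor("a1", (1.0, 0.0, 2.5), {"layer": "commerce", "text": "Sale: 20% off"})],
    ),
}

def localize_and_fetch(scene_features, candidate_locales):
    """Match the device's scene features against stored maps; return anchors."""
    best_locale, best_overlap = None, 0.0
    for locale in candidate_locales:          # coarse step: GPS narrows candidates
        stored_features, _ = AR_CLOUD[locale]
        overlap = len(scene_features & stored_features) / max(len(stored_features), 1)
        if overlap > best_overlap:
            best_locale, best_overlap = locale, overlap
    if best_locale is None or best_overlap < 0.5:
        return []                             # fine step failed: no confident match
    return AR_CLOUD[best_locale][1]

# A frame whose features mostly overlap the stored map of the mall atrium:
print(localize_and_fetch({101, 103, 104, 200}, ["mall-atrium-7"]))
```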

But as all of the above efforts make clear, it won't be just one "cloud," as the moniker suggests. These geo-located AR ambitions will compete. Much like the web today, the AR cloud will ideally have standards and protocols for interoperability, while allowing for proprietary content and networks.

But instead of websites, these proprietary points of value will be in “layers.” The thought is that AR devices can reveal certain layers based on user intent and authentication. View the social layer for geo-relevant friend recommendations and the commerce layer to find products when you’re out shopping.
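In code terms, a "layer" could be as simple as a tag on geo-anchored content that the device filters by the user's current intent and entitlements. A toy illustration, with hypothetical layer names and a deliberately simplistic permission model:

```python
# Hypothetical geo-anchored content, tagged by layer.
anchors = [
    {"layer": "social",   "text": "3 friends liked this cafe"},
    {"layer": "commerce", "text": "In stock at this store"},
]

def visible_content(anchors, intent, entitlements):
    """Filter geo-anchored content to the layer the user wants and may see."""
    return [a for a in anchors
            if a["layer"] == intent and a["layer"] in entitlements]

# Out shopping and authenticated for the commerce layer:
print(visible_content(anchors, "commerce", {"commerce"}))
```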

There are of course several moving parts. 5G will help deliver the low latency and location precision that geolocated AR demands, and there's ample spatial mapping still required. How will it all come together? And who's best positioned? Among the above players, Google is likely ahead, but it's still early innings and we'll be watching closely.
