One of the things that stood out in last week’s Apple event was the latest round of updates to Apple Intelligence. Its full market arrival is delayed, but Apple can get away with that, given its classic wait-and-see approach to bringing new technology to market: it launches slow, then lifts all boats with its halo effect.
That pattern is evident in its current AI endeavors, right down to the rebranding of the tech to “Apple Intelligence.” Rather than launching a standalone app or feature, Apple is integrating a layer of intelligence throughout its operating system and apps, including generative and conversational AI.
The “conversational AI” part was Siri’s job, at which it has largely failed to date. Now Siri is getting a GPT-based brain transplant as part of the Apple Intelligence overhaul. This should give it the boost it needs to live up to its central positioning in a product line known for elegance and excellence.
Sleeping Giant
Much of the above was examined in our coverage of Apple’s Glowtime event last week. But going one level deeper, one particular flavor of AI stood out to us: visual search. Specifically, Apple’s Visual Intelligence (again, it has rebranded the technology) will be a valuable part of the Apple Intelligence mix.
Backing up, visual search is a technology we’ve considered a sleeping giant for years. It contextualizes items you point your phone at, using a combination of computer vision and machine learning. It’s potentially a valuable utility with inherently broad appeal and high frequency… sort of like web search.
Speaking of web search, Google has been keen on visual search for many of these reasons. It’s a natural extension of its core search business, and it can meet core objectives like growing search query volume. With several search modalities – text, voice, and visual – Google can maximize engagement.
But despite those theoretical benefits, Google Lens has been slow to get off the ground. Users are set in their ways, so introducing another search UI takes a while. Beyond habitual factors, visual search involves some “activation energy,” such as tapping a specific icon and holding up your phone.
Chance to Shine
Back to Apple: the thought is that it can accelerate visual search’s cultural learning curve. That will happen as an inherent byproduct of Apple simply launching it, but traction can be furthered by a physical shortcut: Apple’s new Camera Control button can be used to quickly launch visual searches.
The combination of those factors – Apple’s halo effect and front-and-center positioning – could be the nudge that visual search needs. Beyond the utilitarian appeal noted above – everything from identifying fashion items to travel assistance – there’s a certain fun factor and magic to visual search.
There’s also a strong local search angle. The vision we’ve always had for visual search is as a local discovery engine. For example, point your phone at a new restaurant in your neighborhood – or an old restaurant in a new neighborhood you visit – to see its details, menus, and reservation options.
In fact, that’s the very use case Apple gave us in last week’s keynote. So the ingredients are all there, and the underlying object recognition continues to improve. Now with Apple in the mix – and its collaboration with Google to ensure good results – visual search may finally get its chance to shine.