By putting AI into everything, Google wants to make it invisible
Its annual I/O showcase demonstrates that the frontier is no longer about AI models’ capabilities. It’s about turning them into slick products.

If you want to know where AI is headed, this year’s Google I/O has you covered. The company’s annual showcase of next-gen products, which kicked off yesterday, has all of the pomp and pizzazz, the sizzle reels and celebrity walk-ons, that you’d expect from a multimillion-dollar marketing event.
But it also shows us just how fast this still-experimental technology is being subsumed into a lineup designed to sell phones and subscription tiers. Never before have I seen this thing we call artificial intelligence appear so normal.
Yes, Google’s roster of consumer-facing products is the slickest on offer. The firm is bundling most of its multimodal models into its Gemini app, including the new Imagen 4 image generator and the new Veo 3 video generator. That means you can now access Google’s full range of generative models via a single chatbot. It also announced Gemini Live, a feature that lets you share your phone’s screen or your camera’s view with the chatbot and ask it about what it can see.
Those features were previously only seen in demos of Project Astra, a “universal AI assistant” that Google DeepMind is working on. Now, Google is inching toward putting Project Astra into the hands of anyone with a smartphone.
Google is also rolling out AI Mode, an LLM-powered front end to search. This can now pull in personal information from Gmail or Google Docs to tailor searches to users. It will include Deep Search, which can break a query down into hundreds of individual searches and then summarize the results; a version of Project Mariner, Google DeepMind’s browser-using agent; and Search Live, which lets you hold up your camera and ask it what it sees.
This is the new frontier. It’s no longer about who has the most powerful models, but who can spin them into the best products. OpenAI’s ChatGPT offers many of the same features as Gemini. But with its established ecosystem of consumer services and billions of existing users, Google has a clear advantage. Power users who want access to the latest versions of everything on display can now sign up for Google AI Ultra for $250 a month.
When OpenAI released ChatGPT in late 2022, Google was caught on the back foot and forced to shift into a higher gear to catch up. With this year’s product lineup, it looks as if Google has stuck the landing.
On a preview call, CEO Sundar Pichai claimed that AI Overviews, a precursor to AI Mode that provides LLM-generated summaries of search results, had turned out to be popular with hundreds of millions of users. He speculated that many of them may not even know (or care) that they were using AI—it was just a cool new way to search. Google I/O gives a broader glimpse of that future, one where AI is invisible.
“More intelligence is available, for everyone, everywhere,” Pichai told his audience. I think we are expected to marvel. But by putting AI in everything, Google is turning AI into a technology we won’t notice and may not even bother to name.