Google I/O just wrapped up, and one thing was impossible to miss — “AI” came up well over a hundred times during the keynote. This wasn’t a developer conference with some AI sprinkled on top. This was an AI conference that happened to be branded as Google I/O. Having watched these events for decades, I can tell you: Google is scared, and that fear is producing some genuinely impressive engineering.
## PaLM 2: The Foundation
The headline announcement is PaLM 2, Google’s next-generation large language model. What’s interesting isn’t just the model itself — it’s the strategy. Google announced four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is small enough to run on mobile devices, which tells you everything about where Google thinks this is heading.
PaLM 2 powers over 25 Google products now, including the upgraded Bard. The technical improvements are notable: better multilingual capabilities across over 100 languages, improved reasoning, and significantly better coding abilities. Google claims it was trained on a dataset that included a much larger proportion of multilingual text and source code compared to PaLM 1.
From a developer perspective, the PaLM API is now generally available through Google Cloud’s Vertex AI platform. The MakerSuite tool for rapid prototyping is a clear play to capture developers who are currently flocking to the OpenAI API. Having spent a few hours with the documentation already, the developer experience feels more polished than I expected.
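To make the developer story concrete, here’s a minimal sketch of what a call against the Vertex AI text endpoint looks like at the REST level. The project ID and prompt are placeholders, and I’m only building the request body here — actually sending it requires Google Cloud credentials, which I’ve left out:

```python
import json

# Placeholders — substitute your own project and region.
PROJECT = "my-project"
MODEL = "text-bison@001"  # the PaLM 2 "Bison" text model
ENDPOINT = (
    f"https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/us-central1/publishers/google/models/{MODEL}:predict"
)

def build_request(prompt: str, temperature: float = 0.2,
                  max_tokens: int = 256) -> str:
    """Build the JSON body for a Vertex AI :predict call."""
    body = {
        "instances": [{"prompt": prompt}],
        "parameters": {
            "temperature": temperature,
            "maxOutputTokens": max_tokens,
        },
    }
    return json.dumps(body)

payload = build_request("Summarize the PaLM 2 announcement in one sentence.")
print(payload)
```

In practice you’d go through the `google-cloud-aiplatform` SDK rather than raw REST, but the shape of the request — a list of instances plus generation parameters — is the mental model worth having when comparing this against the OpenAI API.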
## Bard Gets Serious
Let’s be honest — when Google launched Bard in March, it felt rushed. The demo famously included a factual error, and the product itself was underwhelming compared to ChatGPT. Two months later, Bard is getting a significant upgrade powered by PaLM 2.
The most interesting additions are the visual capabilities. Bard can now accept images as input (powered by Google Lens integration), and it’s getting integrations with Google Sheets, Maps, and other Workspace products. Google also announced Bard would support coding assistance in over 20 programming languages, with the ability to export code directly to Google Colab.
What I find strategically significant is the Workspace integration. This is Google leveraging its existing distribution advantage — hundreds of millions of Workspace users who could get AI capabilities without switching platforms. It’s the same playbook Microsoft is running with Copilot, and it’s going to make the next twelve months very interesting.
## Duet AI: The Enterprise Play
Buried beneath the consumer announcements was something that should matter a lot more to those of us building software professionally. Duet AI for Google Cloud is Google’s answer to GitHub Copilot and Amazon CodeWhisperer.
Duet AI promises code completion, generation, and chat-based assistance directly within Google Cloud’s IDE integrations. It also extends to infrastructure — helping write Terraform configurations, troubleshoot GKE clusters, and manage Cloud SQL databases through natural language.
I’ve been running workloads on Google Cloud for several projects, and the infrastructure assistance angle is genuinely compelling. Anyone who’s ever wrestled with IAM policies or VPC networking configurations knows that context-aware suggestions could save hours of documentation diving. The question is whether Google can execute — they have a habit of announcing impressive products at I/O that take a year to become genuinely useful.
## The Broader Platform Shift
What struck me most about this I/O wasn’t any single announcement — it was the totality. Every single product team at Google seems to have been given the mandate to integrate AI. Android 14 gets AI-generated wallpapers. Google Photos gets a “Magic Editor” powered by generative AI. Google Maps gets “Immersive View” routes using AI and Street View data.
For developers, this means the Google Cloud Platform is making a very aggressive play to be the default AI development platform. The combination of PaLM 2 API access, Vertex AI’s MLOps tooling, Duet AI for development assistance, and TPU v5e for training creates a full-stack AI development environment that directly competes with Azure OpenAI and AWS Bedrock.
## My Take
I’ve been skeptical about Google’s ability to translate research excellence into product execution. They invented the Transformer architecture, after all, and then watched OpenAI run away with the market. But this I/O felt different. There’s an urgency that wasn’t there before, and the PaLM 2 technical improvements suggest the research teams are finally getting the product support they need.
The developer tooling play is smart. If Google can capture even a fraction of the developers currently building on OpenAI’s API by offering competitive models with better cloud integration, the economics could shift quickly. PaLM 2’s multi-size approach — especially Gecko for on-device inference — also addresses a real gap that OpenAI hasn’t filled yet.
That said, I’d temper expectations. Google announced a lot today, and historically a meaningful share of what’s shown at I/O never ships as demoed. The proof will be in the execution over the coming months. For now, though, the AI race just got meaningfully more competitive, and that’s good for all of us building software.
