
Google’s AI Takeover, Microsoft’s Agent Army, and Claude 4 Drops the Mic

AI Highlights

My top-3 picks of AI news this week.

Sundar Pichai at Google I/O 2025

Google
1. Google’s AI Takeover

Google's annual I/O conference showcased how the company is rapidly transforming AI research into practical, widely available tools and features.

  • Google Search: AI Mode is rolling out in Google Search, beginning with all users in the US. It provides a Perplexity-like answer interface, and perhaps marks the end of traditional search engines once and for all. This is an interesting move given ~70% of Google’s revenue comes from ads. Nonetheless, answer engines are the future.

  • Real-time meeting translation: Google Meet now offers near real-time, low-latency speech translation whilst preserving the speaker’s tone and expression. This is a huge win for education and business.

  • Gemini Diffusion: 10-15x faster than current “autoregressive” models. Instead of predicting tokens one at a time, left to right, diffusion starts from noise and refines all positions in parallel. This diffusion process is how most image/video generation models work today.

  • Generative content: Add soundtracks to videos you make with Google’s new “Veo 3” AI video model. Create talking characters, include sound effects, and develop videos in a range of cinematic styles using “Flow”, Google’s new AI filmmaking tool. Also, their Imagen 4 is now the second-best overall AI image generation model on the market. It is, however, the fastest, and the best at typography, of the models I’ve seen so far.

  • Gemini 2.5 advancements: Introduced Deep Think, an enhanced reasoning mode for Gemini 2.5 Pro using parallel thinking techniques to explore multiple hypotheses before responding.

  • Project Astra: Camera and screen-sharing capabilities from Project Astra are now integrated into Gemini Live, available to all Android users and rolling out to iOS users.
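
To make the Gemini Diffusion bullet concrete, here is a toy sketch of why diffusion-style decoding can be faster: an autoregressive model spends one forward pass per token, whilst a diffusion model starts from a fully masked sequence and fills in many positions per pass. This is a conceptual illustration only, not Gemini Diffusion’s actual algorithm; the "model" here is a stub that already knows its target output.

```python
# Toy comparison: autoregressive decoding (one token per model call)
# vs diffusion-style decoding (refine all positions in parallel).
# TARGET stands in for the output a real model would produce.
TARGET = ["the", "cat", "sat", "on", "the", "mat"]

def autoregressive_decode():
    out = []
    for i in range(len(TARGET)):
        out.append(TARGET[i])  # one forward pass per token, left to right
    return out, len(TARGET)   # model calls == sequence length

def diffusion_decode(steps=3):
    out = ["<mask>"] * len(TARGET)   # start from pure "noise" (all masked)
    masked = list(range(len(TARGET)))
    per_step = -(-len(TARGET) // steps)  # ceil division: positions per pass
    calls = 0
    while masked:
        calls += 1                    # one parallel pass refines many slots
        for i in masked[:per_step]:
            out[i] = TARGET[i]
        masked = masked[per_step:]
    return out, calls

seq_a, calls_a = autoregressive_decode()
seq_d, calls_d = diffusion_decode()
print(calls_a, calls_d)  # 6 model calls vs 3
```

Both decoders reach the same sequence, but the diffusion sketch gets there in far fewer model calls, which is where the claimed 10-15x speed-up comes from.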

Alex’s take: This was perhaps one of the longest AI updates I’ve written about to date. I picked only a few of my favourites—you can find the full list of 100 things Google announced here. They’re even now processing 480 trillion tokens monthly, up 50x from last year. It’s safe to say that the pace of AI progress at Google is staggering.

Microsoft
2. Microsoft’s Agent Army

At Microsoft Build 2025 in Seattle this week, I saw first-hand Microsoft unveil its comprehensive vision for AI-powered agents across its entire product ecosystem, positioning itself at the centre of what it calls the “agentic web”:

  • GitHub Copilot: Evolving from an in-editor assistant to a fully autonomous coding agent that can asynchronously add features, fix bugs, refactor code, and improve tests.

  • Multi-agent orchestration: New capabilities in Microsoft 365 Copilot, Copilot Studio, and Azure AI Foundry enable specialised AI agents to collaborate on complex business tasks with human oversight; agents can even appear in Teams chats or join meetings.

  • Model Context Protocol (MCP): Microsoft contributed identity and registry standards to this open protocol that allows agents to securely communicate with apps and services, now integrated across GitHub, Teams, Azure, Windows, and Dynamics 365.

  • Microsoft Discovery: Accelerates research and development (R&D) by bringing AI to scientists and engineers and transforming the entire discovery process. Microsoft Chemistry Product Lead John Link led a team of AI agents to discover a forever-chemical-free immersion coolant using this platform.
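
For a sense of what the MCP bullet means in practice: MCP is built on JSON-RPC 2.0, so an agent invokes a server-exposed tool by sending a `tools/call` request. The sketch below shows the shape of such a message on the wire; the tool name and arguments are hypothetical, invented for illustration.

```python
import json

# Minimal sketch of an MCP tool-invocation request (JSON-RPC 2.0).
# "search_tickets" is a hypothetical tool a server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # MCP method for invoking a server-side tool
    "params": {
        "name": "search_tickets",
        "arguments": {"query": "open incidents", "limit": 5},
    },
}
wire = json.dumps(request)
print(wire)
```

Microsoft’s contributions of identity and registry standards sit on top of this plumbing, governing which agents may send such requests and how servers are discovered.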

Alex’s take: It feels like we’re ascending OpenAI’s roadmap from Level 3 Agents (systems that can take actions) to Level 4 Innovators (AI that can aid in invention), especially with the dawn of the Microsoft Discovery platform. The agents surfaced a material “unknown to humans” in hours, not months, and the team subsequently synthesised it in the lab. Remarkably impressive.

Anthropic
3. Claude 4 Drops the Mic

Anthropic unveiled Claude 4 at their "Code with Claude" event, introducing two flagship models that represent a major leap in AI capabilities.

  • Claude Opus 4: The world's best coding model, leading SWE-bench with 72.5% and designed for sustained performance on complex, multi-hour tasks that require thousands of steps.

  • Claude Sonnet 4: A significant upgrade delivering superior coding and reasoning while maintaining efficiency, achieving 72.7% on SWE-bench.

  • Extended thinking with tool use: Both models can alternate between reasoning and using tools like web search during their thought processes, dramatically improving response quality.

  • New API capabilities for agents: Code execution tool, MCP connector, Files API, and extended prompt caching enable developers to build more powerful AI agents.

  • Claude Code general availability: Now includes native IDE integrations with VS Code and JetBrains, plus GitHub Actions support for seamless pair programming.
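
The “extended thinking with tool use” pattern can be sketched as a simple loop: the model alternates between private reasoning steps and tool calls until it is ready to answer. The model and tool below are stubs for illustration; real Claude 4 tool use goes through Anthropic’s Messages API, not this loop.

```python
# Conceptual sketch of extended thinking interleaved with tool use.
# Both model_step and web_search are stubs, not real API calls.
def web_search(query):
    # Stand-in tool: a real agent would query a search backend here.
    return f"results for: {query}"

def model_step(context):
    # Stand-in "model": call a tool if no results yet, else answer.
    if not any(kind == "tool_result" for kind, _ in context):
        return ("tool_call", "Claude 4 SWE-bench score")
    return ("answer", "Claude Opus 4 scores 72.5% on SWE-bench.")

def run_agent(max_steps=5):
    context = [("thinking", "I should verify this before answering.")]
    for _ in range(max_steps):
        kind, payload = model_step(context)
        if kind == "tool_call":
            context.append(("tool_result", web_search(payload)))
            context.append(("thinking", "Incorporate the search result."))
        else:
            return payload
    return "step budget exhausted"

print(run_agent())
```

The quality gain comes from letting the model pause mid-reasoning, fetch fresh evidence, and then continue thinking with that evidence in context.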

Alex’s take: Anthropic has implemented ASL-3 safety protections for Claude Opus 4. This means enhanced security measures and CBRN weapon guardrails. I think this sets an important precedent: as AI capabilities approach potentially dangerous thresholds, companies should err on the side of caution rather than wait for clear evidence of risk. It’s essentially an insurance policy to protect against catastrophic misuse whilst we’re still figuring out just how powerful these models really are.

Today’s Signal is brought to you by Together AI.

Unleash hyperscale AI via Together AI. Spin up DeepSeek-R1, Llama 4 or Qwen 3 on serverless or dedicated endpoints, auto-scaling to millions. Pay as you go, continuously refine on user data—and keep full control.

Content I Enjoyed

Alex Banks visiting Microsoft’s top-secret quantum research lab

Microsoft's Quantum Lab Visit

This week, I got the opportunity to visit Microsoft's secret quantum computing lab, where they've been quietly working on their Majorana 1 chip—potentially the biggest computing breakthrough of our generation.

While Google, IBM and others have been building increasingly powerful but error-prone quantum systems, Microsoft took the road less travelled by developing “topoconductors”, a new state of matter which carries electricity with zero resistance.

What makes Microsoft’s approach impressive is how it takes on quantum computing’s biggest challenge: error correction.

In computer science, a bit (binary digit) is the fundamental unit of digital information, representing a logical state with one of two values: 0 or 1. Qubits (quantum bits) go beyond regular bits because they can hold a blend of both states (0 and 1) at once; this is called superposition.
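
In the standard notation, a qubit’s state is a weighted combination of the two basis states, with the weights constrained so that measurement probabilities sum to one:

```latex
% A qubit in superposition of the basis states |0> and |1>:
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% n qubits require 2^n such amplitudes to describe, which is why
% qubit counts scale computational power so dramatically.
```

This exponential scaling (a register of n qubits is described by 2^n amplitudes) is what sits behind the bold claims about a million-qubit chip below.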

Instead of storing qubits in single points that are extremely vulnerable to disruption, Microsoft's system distributes the information across the material, making it dramatically more stable.

Satya Nadella, CEO of Microsoft said: “We believe this breakthrough will allow us to create a truly meaningful quantum computer not in decades, as some have predicted, but in years.”

One million qubits on a single chip could, for certain problems, be more powerful than every computer on Earth combined.

The applications they're exploring go far beyond the cryptography use cases we typically hear about. Think self-healing materials that could automatically repair cracks in infrastructure, perfect enzyme design that could help address global food challenges, and pollution conversion systems that transform waste into useful resources.

The team has been pursuing this approach for nearly 20 years, building an entirely new material atom by atom when everyone else was taking more conventional routes. It's a powerful reminder that sometimes betting on the most difficult technical path is exactly what leads to the biggest breakthroughs.

I made a short video of my experience at the lab, where you can see firsthand the Majorana 1 chip and the incredible environment these researchers have built to manipulate matter at the atomic level.

Idea I Learned

Jony Ive and Sam Altman. OpenAI’s $6.5 billion acquisition of io.


The End of Apple?

This week, OpenAI announced a shocking $6.5 billion acquisition of io, the secretive AI device startup co-founded by Apple's legendary designer Jony Ive. The timing couldn't be more strategic: announced during Google I/O week, effectively hijacking search results and the tech conversation.

For decades, Apple defined what "good taste" in technology looked like, with Jony Ive as the high priest of minimalist design thinking. Now that very same design philosophy and talent pool is migrating to AI.

The acquisition brings together OpenAI's AI capabilities with the design team responsible for iconic products like the iPhone. The partnership aims to reimagine the entire human-computer interface for the AI era. As Ive himself put it: “I moved to America, drawn by the exhilarating optimism of San Francisco and Silicon Valley. We are sitting at the beginning of what I believe will be the greatest technological revolution of our lifetimes.”

Apple's design advantage allowed it to dominate mobile despite not inventing the smartphone. Similarly, OpenAI is now betting that beautiful, intuitive hardware will be the differentiator that cements their AI leadership.

Perhaps this is the most expensive acqui-hire ever. Jony and his team will assume design and creative responsibilities across OpenAI and io, whilst io merges into OpenAI.

At the end of it all, Jony keeps control of his main company and is now steering the creative direction of the next consumer frontier in AI hardware. What a deal—and it makes me think how Apple will respond.

With ~$65 billion of cash on their balance sheet, could we see Apple acquiring someone in this space? Say, OpenAI? Never say never.

Quote to Share

Asha Sharma on the future of AI collaboration:

“Everyone's in the single-player mode, and I think there's a multiplayer mode on the horizon.”

This week, I had a great conversation with Asha Sharma, Head of AI Platform at Microsoft. Currently, most AI interactions happen in isolation: you ask ChatGPT a question, get an answer, and move on. But Sharma sees this changing dramatically. Instead of working alone with AI, she imagines scenarios where you could collaborate with an AI agent on shared documents, have it join your Teams calls to spot insights you might miss, or work together on complex projects in real-time.

This shift from "single-player" to "multiplayer" AI represents a fundamental change in how we think about artificial intelligence, from tool to collaborator. Instead of one single agent, there will be teams of agents, each with their own specialised capabilities to assist with collective problem-solving.

In the near future, we’ll be working alongside agentic colleagues that know our company’s entire knowledge base and can help draft proposals, find financial data and strategise during meetings, all in real-time. We’re already seeing this begin to happen today.

Source: Interview with Asha Sharma at Microsoft Build

Question to Ponder

Satya Nadella and Alex Banks

This week, I got to sit down with Satya Nadella, the CEO of Microsoft, and ask my own question.

“How is Microsoft ensuring that everyone benefits from increasingly advancing AI systems in the future?”

The goal is to empower every person and every organisation on the planet to achieve more—but now in what he calls the “agentic era.”

For example, in healthcare, Stanford is using AI agent frameworks to transform tumour board meetings. What once took hours of complex workflow now happens in minutes, freeing clinicians to focus on actual treatment decisions rather than administrative overhead.

In education, the World Bank and Peru partnership showed a quantifiable impact when teachers were given access to Copilot. AI generates learning materials instantly, turning 15 days of prep work into 3 seconds of prompting.

I think it’s all too easy in this current age of AI to get starry-eyed over the latest benchmark results. Benchmarks can be gamed by models overfitting the training data, showing promise in theory yet underperforming in reality. Real-world applications are where these systems are truly put to the test: in the hands of real people performing real tasks.

That’s why it’s more important than ever to ensure these AI advancements don’t create new forms of inequality. The organisations that can afford to implement these sophisticated AI systems first will have massive advantages.

You can watch the full interview here.

How was the signal this week?


💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?