OpenAI’s $3B Windsurf Play, Gemini 2.5 Pro Came Early, and Mistral AI “Medium Is the New Large”


AI Highlights
My top-3 picks of AI news this week.
OpenAI
1. OpenAI’s $3B Windsurf Play
OpenAI has agreed to acquire Windsurf (formerly Codeium), an AI-assisted coding editor, for approximately $3 billion, marking OpenAI’s largest acquisition to date.
Rapid valuation growth: Windsurf was valued at $1.25 billion just last August following a $150 million funding round led by General Catalyst, representing a 2.4x increase in valuation in less than a year.
Strategic expansion: This acquisition is designed to complement ChatGPT’s existing coding capabilities and compete against the likes of Cursor (currently valued at $9 billion).
Market positioning: The AI coding market is moving too quickly for OpenAI to wait on internal solutions, and ChatGPT’s user base surged past 400 million weekly active users in February.
Alex’s take: Time-to-market now trumps all. As we covered last month, AI is now a motion market, not a moat market. Most incumbents are approaching AI the same way they approached mobile—late, reactive, and lacking taste. The real winners will be those who can iterate quickly, collapse feedback loops, and move with conviction. Execution velocity is everything.
Google
2. Gemini 2.5 Pro Came Early
Google has released Gemini 2.5 Pro Preview (I/O edition) ahead of schedule, bringing significant improvements to coding capabilities and web development performance.
#1 on WebDev Arena: Ranks first on the WebDev Arena leaderboard, which measures human preference for a model’s ability to build aesthetically pleasing and functional web apps.
Advanced UI capabilities: Delivers improved front-end and UI development, making it easier to implement new features and create responsive designs with subtle animations.
Early release decision: Originally planned for Google I/O later this month, Google released the model early due to "overwhelming enthusiasm" to get it into developers' hands sooner.
Alex’s take: Google really is covering all bases when it comes to output generation. Text, image and code generation with Gemini 2.5 Pro. Video generation with Veo 2. Plus, being able to predict the structure and interactions of all of life’s molecules with AlphaFold 3. Google has a firm footing to capitalise on the application layer, powered by their very own TPU hardware. In my opinion, they are now a front-runner in the AI race.
Mistral AI
3. Medium Is the New Large
Mistral AI has launched Mistral Medium 3, a breakthrough model that redefines efficiency and accessibility in enterprise AI deployment.
Performance at scale: Delivers 90% of Claude 3.7 Sonnet's capabilities at 8x lower cost ($0.4 input / $2 output per million tokens).
Enterprise ready: Can be deployed on any cloud or self-hosted environment requiring only four GPUs, making it significantly more accessible for organisations.
Domain dominance: Particularly excels in professional use cases like coding and STEM tasks, outperforming larger competitors like Llama 4 Maverick and Cohere Command A.
Alex’s take: Just two months after releasing Mistral Small, they've delivered a “medium” model that outperforms most flagship offerings at a fraction of the cost. It's refreshing to see a European company challenging Silicon Valley's dominance. I suspect we're witnessing a paradigm shift where performance-per-resource becomes more important than raw model size.
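For a rough sense of what that pricing means in practice, here is a minimal cost sketch. The per-million-token rates are the ones quoted above; the token counts in the example are made up for illustration.

```python
# Back-of-envelope cost estimate using Mistral Medium 3's listed pricing:
# $0.40 per million input tokens, $2.00 per million output tokens.

INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens (from the announcement)
OUTPUT_PRICE_PER_M = 2.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical 2,000-token prompt with a 500-token reply:
cost = request_cost(2_000, 500)
print(f"${cost:.4f}")  # prints $0.0018
```

At these rates, even a long prompt costs a fraction of a cent, which is the point of the "8x lower cost" comparison.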
Today’s Signal is brought to you by 1440.
Daily News for Curious Minds
Be the smartest person in the room by reading 1440! Dive into 1440, where 4 million Americans find their daily, fact-based news fix. We navigate through 100+ sources to deliver a comprehensive roundup from every corner of the internet – politics, global events, business, and culture, all in a quick, 5-minute newsletter. It's completely free and devoid of bias or political influence, ensuring you get the facts straight. Subscribe to 1440 today.
Content I Enjoyed
Fiverr CEO’s Harsh Truth About AI Adoption
I came across an internal memo from Micha Kaufman, CEO of Fiverr, that’s been doing the rounds online. His message to his team was actually quite refreshing in a sea of CEO AI statements we’ve seen of late: “AI is coming for your jobs. Heck, it's coming for my job too.”
Rather than sugar-coating reality and sprinkling buzzwords about how the organisation is making a conscious effort to adopt AI and be a frontrunner, he uses simple language and offers his colleagues practical advice for staying relevant.
Among the list:
Master AI tools in your field
Learn from knowledgeable colleagues
Become a prompt engineer
In his words: “Google is dead. LLM and GenAI are the new basics.” I think that’s the right framing, given we are now shifting from search engines to answer engines.
Kaufman’s perspective pairs nicely with Nvidia CEO Jensen Huang's take that “AI will not take your job. AI used by somebody else will take your job.” It's about becoming the person who skillfully wields these new tools to do meaningful work and 10x your output.
The future doesn't belong to those waiting for permission to adapt. As Kaufman puts it, “Stop waiting for the world or your place of work to hand you opportunities to learn and grow—create those opportunities yourself.”
Idea I Learned
OpenAI’s Moat Is Not Their API, It’s ChatGPT
I've been studying the latest Ramp AI Index this week, and the numbers are staggering.
OpenAI subscriptions are held by 32.4% of U.S. businesses, while paid AI subscriptions of any kind sit at 40.1%. That works out to roughly 81% market share, practically a monopoly.
I’ve found this is highly representative of the real world. Whenever I’m at a conference or meeting with an enterprise client, the conversation inevitably revolves around "ChatGPT this" or "ChatGPT that." Rarely do I hear mentions of Claude, Grok, or Gemini.
Let’s understand OpenAI’s business model a little better to unveil what this means.
There are two main ways companies can use AI like ChatGPT. The first is through the consumer-facing website (ChatGPT.com) that we're all familiar with—where you sign up, type questions, and get answers.
The second is through something called an API, which stands for Application Programming Interface. Think of an API as a behind-the-scenes connector that lets developers build ChatGPT’s intelligence directly into their own apps and websites.
For example, when you use Copilot on Windows, that's often OpenAI's technology working through an API—the same brain as ChatGPT, just wearing Microsoft’s outfit.
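To make the distinction concrete, here is a minimal sketch of the kind of JSON payload a developer’s app would send through such an API. The field names follow OpenAI’s chat completions format; the model name and message contents are illustrative.

```python
# Sketch of the JSON body an app would POST to a chat completions API.
# Field names follow OpenAI's chat completions format; values are examples.
import json

def build_chat_request(user_message: str, model: str = "gpt-4o") -> dict:
    """Assemble the request payload a developer's app sends to the API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarise this week's AI news.")
print(json.dumps(payload, indent=2))
```

The consumer website builds and sends this payload for you behind a chat interface; with the API, the developer’s own app does it, which is why the same "brain" can show up wearing someone else’s outfit.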
While the API business is critically important (and where many predicted the real money would be), it's becoming a commodity with razor-thin margins. Anyone can offer API access to AI models, and prices keep dropping. It’s a race to the bottom.
What I’ve come to realise is that OpenAI’s true advantage lies in its consumer product, ChatGPT itself, and not, in fact, its API.
By being first to market in November 2022, they've established "ChatGPT" as a household name that's now arguably more recognisable than "OpenAI".
What’s more, they're systematically turning potential competitors into features: Perplexity becomes Deep Research, Midjourney becomes GPT Image 1, Windsurf becomes CodeGPT.
Perhaps there is a first-mover advantage in AI after all, but it's not where we thought it would be. Brand recognition and consumer habit matter more than ever before.
Alex Albert on education’s AI vulnerability:
Was chatting with a friend who went to university in Denmark and he was explaining how AI isn't disrupting education there as much as it is in the US.
Denmark barely assigns homework. They have longer school days packed with collaborative work, discussions, and projects.
— Alex Albert (@alexalbert__), 9:56 PM, May 7, 2025
I thought this was a thought-provoking post from Alex highlighting the contrast between two educational models.
The Danish system is built around in-person collaborative learning, whereas the American system relies heavily on individual, asynchronous assignments.
The pandemic and AI disruption created a perfect storm. First, we had COVID which pushed American education further toward remote, asynchronous work in 2020. Then, when consumer-facing AI tools like ChatGPT emerged in late 2022, they found an educational landscape already optimised for the very tasks they excel at automating.
As Charlie Munger once famously said, “Show me the incentive, I'll show you the outcome.” It perfectly captures what's happening in education today.
The US system incentivises efficiency, standardisation, and individual assessment, naturally leading to take-home assignments that can be evaluated at scale. These incentives weren't created with AI in mind, but they have inadvertently produced an educational model that is extraordinarily vulnerable to AI automation.
The Danish approach, built around different incentives—prioritising collaboration, social learning, and in-person interaction—has accidentally created a system far more resilient to AI disruption. Their model values precisely the skills that remain uniquely human: critical thinking in real-time discussion, interpersonal skills, and collaborative problem-solving.
This contrast offers a valuable lesson in system design. When we focus solely on efficiency and standardisation, we may optimise for metrics that are easily measurable but miss the deeper, more human elements of learning.
As AI continues to evolve, we need to reconsider our educational incentives to focus on what humans do best, letting technology handle the personalisation and mechanical aspects, while we cultivate the creative, social, and critical thinking skills that make us distinctly human.
Source: Alex Albert on X
Question to Ponder
“How might public perception and fear of AI transform if we openly described deep learning as 'cultivating digital organisms' rather than 'training models'?”
I think this is an interesting question, because we've got to understand that language has profound consequences in human interpretation of concepts.
When we hear “training models,” we think of something mechanical and controlled—a process we direct. But “cultivating digital organisms” evokes something alive, evolving, and perhaps beyond our complete understanding.
Today’s language models aren’t “organisms”. They’re just sophisticated prediction systems that require our tools and instructions to take action. They don't have intrinsic drives or goals.
Yet we’re rapidly approaching a threshold.
As we integrate these systems with robotics and real-world perception—creating what many call “embodied AI”—we may witness capabilities emerge that we didn't explicitly program or expect.
Therefore, I'm not convinced that biological metaphors are the most accurate or helpful for today. They unnecessarily amplify fears while missing the fundamental difference: even our most advanced AI systems lack the intrinsic agency and self-preservation that drives biological life.
However, we are in the process of bridging the gap between what is digital and what is physical—and this might require us to rethink our language as these systems grow increasingly complex and autonomous.

How was the signal this week?

See you next week, Alex Banks