U.S.-Saudi AI Alliance, OpenAI Multiplies Coders, and Google’s AlphaEvolve Breakthrough


AI Highlights
My top-3 picks of AI news this week.
Middle East AI Partnerships
1. U.S.-Saudi AI Alliance Takes Shape
This week's U.S.-Saudi Investment Forum in Riyadh showcased a growing partnership in artificial intelligence between the two nations, alongside Saudi Arabia’s launch of Humain, a new AI venture backed by its $940 billion Public Investment Fund (PIF).
Strategic investment: DataVolt committed $20 billion to U.S. AI data centres, while Saudi Arabia’s new AI firm Humain plans to deploy 500 MW each of AMD and Nvidia systems (roughly $20+ billion) over five years, billed as the largest bilateral deal in history.
Ecosystem expansion: Humain signed a $5 billion accord with AWS to build an “AI Zone” running GenAI services, and a non-binding MOU with Qualcomm for edge-to-cloud AI silicon.
Broader impact: Trump’s AI advisor David Sacks emphasised building “the biggest partner ecosystem” for AI, specifically naming Saudi Arabia as a crucial strategic partner to “win the AI race and shift the balance of power” in global AI infrastructure.
Alex’s take: The specific focus on AI data centres signals Saudi Arabia’s ambition to diversify beyond oil into the “oil of the 21st century”—data and computation. Desert solar will now power next-generation AI infrastructure. It reminds me of the quote from Dune: “He who controls the spice, controls the universe.”
OpenAI
2. OpenAI Multiplies Coders
OpenAI has launched Codex, a cloud-based software engineering agent that can work on multiple tasks simultaneously, each in its own sandbox environment.
Multi-tasking capability: Codex can write features, answer codebase questions, fix bugs, and propose pull requests for review in parallel.
Enterprise adoption: Companies like Cisco, Temporal, Superhuman, and Kodiak are already using Codex to ship faster, refactor codebases, and write better tests.
GPT-4.1 integration: Alongside Codex, OpenAI has rolled out its coding-optimised GPT-4.1 and GPT-4.1 mini models to ChatGPT Plus, Pro, and Team users, with GPT-4.1 mini available to free users as well.
Alex’s take: OpenAI is taking a steady march toward what they're calling “agent-native software development,” aligning closely with their AGI roadmap. Only last week, they announced they’re acquiring the AI-assisted coding editor Windsurf in a $3 billion deal. I suspect we’ll see more and more AI-coding announcements from OpenAI as they go up against Google and Anthropic’s rising tools.
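Codex’s internals aren’t public, but its headline pattern—fanning out independent coding tasks, each in its own isolated environment—can be sketched generically. In this toy version (the tasks and the throwaway-directory “sandbox” are illustrative assumptions, not OpenAI’s actual API), each snippet runs in parallel in its own temporary directory:

```python
import concurrent.futures
import subprocess
import sys
import tempfile

def run_task(snippet: str) -> str:
    """Run one coding task in its own throwaway directory (a crude sandbox)."""
    with tempfile.TemporaryDirectory() as sandbox:
        # Stand-in for handing the task to a coding agent.
        result = subprocess.run(
            [sys.executable, "-c", snippet],
            capture_output=True, text=True, cwd=sandbox,
        )
        return result.stdout.strip()

# Independent tasks run concurrently, mirroring the one-sandbox-per-task model.
tasks = ["print(2 + 2)", "print('tests passed')"]
with concurrent.futures.ThreadPoolExecutor() as pool:
    outputs = list(pool.map(run_task, tasks))
```

Because each task touches only its own directory, failures stay contained and results can be reviewed independently—the property that makes parallel agent runs practical.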
Google DeepMind
3. Google's AlphaEvolve Breakthrough
Google has released AlphaEvolve, a Gemini-powered coding agent designed to discover and optimise algorithms for complex problems in mathematics and computing.
Evolution meets AI: AlphaEvolve combines Gemini models with automated evaluators in an evolutionary framework that iteratively improves promising solutions.
Real-world impact: Already deployed across Google's computing ecosystem, AlphaEvolve has optimised data centre scheduling, enhanced hardware design for TPUs, and cut Gemini’s overall training time by 1%.
Mathematical breakthroughs: The system has improved upon Strassen's 1969 algorithm for multiplying 4x4 complex-valued matrices and advanced the 300-year-old kissing number problem with a new configuration of 593 outer spheres in 11 dimensions.
Alex’s take: AI systems are now optimising the very infrastructure used to train themselves, creating a fascinating feedback loop where each improvement compounds. What also struck me is that the system (now in production for over a year) recovers 0.7% of Google’s worldwide compute by orchestrating its vast data centres more efficiently, via a simple scheduling heuristic that human engineers had missed.
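AlphaEvolve’s actual loop pairs Gemini-proposed code edits with automated evaluators inside an evolutionary framework. A toy skeleton of that idea—with a stand-in mutator and scorer rather than Google’s implementation, and plain numbers standing in for candidate programs—might look like this:

```python
import random

def evolve(seed, propose_edit, evaluate, generations=300, pool_size=8):
    """Toy evolutionary loop in the spirit of AlphaEvolve: a proposer
    mutates candidate solutions, an automated evaluator scores them,
    and only the most promising survive to seed the next round."""
    pool = [(evaluate(seed), seed)]
    for _ in range(generations):
        # Tournament selection: take the best of a small random sample.
        _, parent = max(random.sample(pool, min(3, len(pool))))
        child = propose_edit(parent)            # stand-in for a Gemini call
        pool.append((evaluate(child), child))
        pool = sorted(pool, reverse=True)[:pool_size]  # keep the fittest
    return pool[0]

# Demo: "programs" are integers, mutation nudges by +/-1,
# and the evaluator rewards proximity to a target value of 42.
random.seed(0)
best_score, best = evolve(
    0,
    propose_edit=lambda p: p + random.choice([-1, 1]),
    evaluate=lambda p: -abs(p - 42),
)
```

The real system evaluates whole programs against verifiable metrics (runtime, scheduling efficiency, mathematical correctness), which is what lets improvements compound safely: a candidate only survives if the evaluator can prove it better.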
Today’s Signal is brought to you by HoneyBook.
Less of a platform, more of a partner
Take your independent business to new heights with the behind-the-scenes partner that manages clients, projects, payments, and more.
Plus, HoneyBook’s AI tools summarize project details, generate email drafts, take meeting notes, and predict high-value leads.
Content I Enjoyed
Elon Musk's Vision for Humanoid Robots
Musk dropped what might be his most audacious prediction yet this week when he spoke at the Saudi-U.S. investment forum.
He claims humanoid robots will become “the biggest product ever of any kind” and envisions a world with “tens of billions” of humanoid robots—multiple units for every human on Earth.
I think there’s real weight to this argument. Humanoid robots have arguably the broadest possible use case of any product ever made. They can do the dishes, walk your dog, and tidy up, all whilst keeping their temper.
What’s more, the makers of these products will offer enough customisation to make them appealing as domestic objects.
Everyone, he suggests, will want their own personal robot. This sounds about right with a total addressable market (TAM) of roughly every household on planet Earth (and beyond).
Musk believes these robots could expand the global economy to 10x its current size and effectively end scarcity-based economics altogether. This isn't just universal basic income he’s talking about, but universal “high income” in a world “where no one wants for anything.”
What fascinates me most is how my inner economist sees this vision challenging our fundamental economic assumptions. Our entire system was built around human labour as the primary means of distributing resources. When AI eventually drives the cost of both intelligence and labour toward zero, we will need to think carefully about how to navigate such a society.
Whether it happens in five years or fifty, the trajectory is clear, and right now it seems we’re collectively unprepared for the societal transformation it would bring.
Idea I Learned
The AI Curve Is Steeper Than We Ever Imagined
One of the most fascinating paradoxes I’ve observed recently is how we simultaneously overestimate and underestimate AI progress.
When ChatGPT launched in November 2022, it was a basic text-only interface that captured the world's imagination. Fast forward to May 2025, and we now have Claude with MCP integrations—AI that can directly embed into your workspace, manipulate applications, and execute complex tasks autonomously. What was science fiction 30 months ago is now available as a consumer product today.
This phenomenon reminds me of Amara's Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Nowhere is this more evident than in AI.
Consider what's happened in just the past year: AI agents navigate websites autonomously, order pizza, and handle multi-step reasoning chains requiring domain expertise; real-time voice conversations with digital assistants are near-indistinguishable from talking to a human; and code generation produces entire applications from natural-language descriptions.
What's most interesting is the invisibility of this progress. As capabilities become normalised, our benchmarks shift, and yesterday's magic becomes today’s mundane.
This creates a curious blind spot. Corporate clients I speak with simultaneously believe “AI will change everything” while also dismissing specific applications as “just autocomplete” or “just pattern matching.” The reality is that once a capability becomes reliable, we no longer perceive it as AI at all.
This invisible progress is precisely why the market leadership in AI remains so fluid, and changes nearly every week with the deluge of model releases.
It reminds me a lot of a recent tool I started using (FYI not affiliated/sponsored) called Granola. I now use it 5x/day+ for all my meetings. Why? Because it’s frictionless in my workflow. Because it’s invisible yet provides resounding utility to my life.
We should track AI progress not by what’s beating benchmarks but by what becomes invisible in our lives because it simply works.
For those interested in exploring this topic further, I recommend you check out Tim Urban’s 2015 blog article, “The AI Revolution: The Road to Superintelligence”.
Paul Graham on AGI and prompt engineering:
It seems to me that AGI would mean the end of prompt engineering. Moderately intelligent humans can figure out what you want without elaborate prompts. So by definition so would AGI.
— Paul Graham (@paulg)
10:29 AM • May 14, 2025
While today’s models require carefully crafted instructions to perform optimally, true AGI would inherently understand our needs with minimal direction, just as humans do.
There’s an important question that’s raised here about the nature of intelligence itself. Is the ability to infer unstated intent a defining characteristic of general intelligence? If so, our current need for prompt engineering highlights the gap between our present AI capabilities and what true AGI might look like.
Humans have this innate ability to adapt to vague requests through context, shared understanding, and intuition. When we ask a colleague a question, we don’t deliver a meticulously detailed prompt for everyday tasks—we rely on their intelligence to fill in the gaps.
“True” AGI would likely have this same intuition, rendering today’s prompt-engineering skills obsolete. Does that mean you should skip learning how to craft great prompts today? Absolutely not. The key trait of those who succeed in this AI race is the ability to adapt their skill set as tools and techniques develop.
So until we achieve AGI, we must build our know-how to maximise the output of these sophisticated tools. For now, they will continue to require expert guidance; they do not yet understand us as effortlessly as we understand each other.
Source: Paul Graham on X
Question to Ponder
“Will all code eventually be written by AI?”
This shift is already happening faster than many realise.
We’re currently witnessing a fundamental transformation in how engineering work gets done. Some tech founders already report that 90% of their shipped code is AI-generated.
That is to say, the value of engineering is rapidly shifting away from manual implementation and toward higher-level thinking. Clearly articulating product requirements and getting AI to plan before it acts are two high-leverage skills you can practise today to get into that 90% category.
In the future, I believe a lot of the complexity will be hidden. Integrated development environments will shift from what we see as “code” today to a universal language: plain English.
This will fundamentally reshape business dynamics. Anyone will be able to become an engineer, and being “technical” will no longer be a prerequisite.
Those who cling to memorising formulas and functions and to manual implementation will find themselves outpaced by competitors who embrace AI to the Nth degree.
So, if you ask whether all code will eventually be written by AI: we’re already much closer to that reality than most people realise.

How was the signal this week?
See you next week,
Alex Banks

P.S. New moves from Tesla Optimus, a bad take, and some excellent rebuttals (1) (2).