Apple’s ChatGPT Boost, OpenAI’s $200 Reasoning Model, and Mistral Thinks it Through


AI Highlights
My top-3 picks of AI news this week.
Apple
1. Apple’s AI Gets a ChatGPT Boost
Apple unveiled several AI improvements at WWDC 2025, but the standout move was doubling down on its ChatGPT partnership to salvage struggling features.
ChatGPT integration with Image Playground: Apple's previously criticised image generation app now offers new styles like anime, oil painting, and watercolour through ChatGPT, moving beyond its limited emoji-like creations.
Enhanced Visual Intelligence: Apple's AI-powered image analysis tool can now interact with content on your iPhone screen, conducting searches through Google and ChatGPT based on what you're viewing.
Foundation Models framework: Developers can now access Apple's AI models offline, though benchmarks show these models underperform compared to rivals like OpenAI's GPT-4o and Meta's Llama 4 Scout.
Alex’s take: Apple fumbled the bag hard with “Apple Intelligence”, and Siri is a different question entirely, with its promised improvements delayed until next year. What’s more, their heavy reliance on ChatGPT integration tells us a lot about their current position in the AI race. Are they sitting patiently on the sidelines preparing to pounce, are they debunking the AI myth (their paper last week got a lot of attention), or is there something we just don’t know? I’m genuinely intrigued to see how this plays out vs Google’s relentless pace across their AI ecosystem—this week they updated Android 16, and I’ve never been more tempted to switch to a Pixel than today.
OpenAI
2. OpenAI Launches o3-Pro
OpenAI has launched o3-pro, their most capable AI model yet, designed specifically for challenging problems where extended thought beats speed.
Enhanced reasoning: Replaces o1-pro with a model that thinks longer and provides more reliable responses, excelling in math, science, and coding domains.
Tool integration: Unlike o1-pro, o3-pro has access to web search, file analysis, visual reasoning, Python execution, and memory personalisation capabilities.
Superior performance: Expert evaluations show a 64% win-rate over o3 across all categories, with higher ratings for clarity, comprehensiveness, and accuracy, while outperforming Anthropic's Claude 4 Opus on key benchmarks.
Alex’s take: It must be highlighted that o3-Pro is reserved behind the $200/mo paywall for ChatGPT “Pro” users. “Plus” users only have access to the standard o3 model. However, we must retain a critical eye and thoroughly review the benchmark results: o3-Pro is in fact only 3-5% better than o3 across key evaluations. I’ll let you decide if you’re willing to pay $180/mo more for a 3-5% improvement—which still lags behind Google’s Gemini 2.5 Pro model (free to use).
Mistral AI
3. Mistral Thinks it Through
Mistral AI has launched Magistral, its first reasoning model designed to tackle complex, multi-step problems with transparent, step-by-step thinking across multiple languages and professional domains.
Dual release strategy: Magistral Small (a 24B-parameter open-source version) and Magistral Medium (an enterprise version), with impressive AIME 2024 scores of 70.7% and 73.6% respectively.
Transparent multilingual reasoning: Chain-of-thought processing works natively across global languages including English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.
10x speed advantage: Flash Answers in Le Chat delivers reasoning responses up to 10x faster than most competitors, enabling real-time reasoning at scale.
Alex’s take: I like that Mistral is placing a large focus on transparency and traceability, especially in regulated industries and sectors like finance, healthcare, and legal. Users can follow the AI's reasoning process step-by-step back to the source of truth. I hope this auditability will help enable adoption within enterprise—especially to actively work against hallucination in today’s models.
Today’s Signal is brought to you by GrowHub.
"Great post!" and "Thanks for sharing!" are the fastest way to become invisible on LinkedIn.
GrowHub's Comment Generator turns every LinkedIn post into a networking opportunity.
It’s completely free.
No more staring at a blank comment box wondering what to write.
While everyone else drops generic comments into the void, create meaningful engagement that gets replies, profile views, and real connections.
Content I Enjoyed
The Gentle Singularity
This week, Sam Altman released a new blog titled “The Gentle Singularity”. It covers Sam’s perspective on where humanity stands in relation to the advancements of artificial intelligence.
Five key highlights from the post:
The “AI takeoff” has already started
ChatGPT is already more powerful than any human in history
Hundreds of millions depend on it for critical tasks daily
Scientists already 2-3x more productive with AI
AI systems now help build better AI systems
I thought the analogies highlighted in the post gave some clarity to what’s happening with the cost of intelligence as it becomes practically free. A single ChatGPT query uses just 0.34 watt-hours (about what an oven consumes in one second) and roughly one-fifteenth of a teaspoon of water. When the cost of intelligence approaches the cost of electricity, we need to think carefully about who controls access to the grid.
Which leads me to perhaps the most important part of the post—Altman emphasising the need to “focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country.”
There’s an irony here that this warning comes from the CEO of the company currently leading the race. I’ve also been watching public opinion on Altman shift over the past year, as people grapple with whether he’s a thoughtful steward of this technology or simply an articulate salesman for inevitable centralisation.
His vision of a “gentle singularity” suggests we'll adapt gradually to each new capability, but I suspect the societal implications will feel anything but gentle for those who find themselves on the wrong side of the intelligence divide.
Idea I Learned
Why I'm bullish on European AI
This week at NVIDIA GTC and VivaTech Paris, I had the opportunity to chat with Arkady Volozh, CEO of Nebius, a company that's completely reshaping how I think about Europe's position in the AI race.
Nebius provides the full-stack infrastructure for AI model development from chip to chatbot. They cover infrastructure requirements from a single GPU to big GPU clusters, letting you train, test, and deploy large language models in your applications.
Why was this week special?
Nebius is the first NVIDIA cloud partner headquartered in Europe. They just announced the first general availability of NVIDIA's GB200 Grace Blackwell superchips in Europe. These are the world's most advanced AI chips, and Europe is getting them now.
Nebius is only 10 months old, yet they're already operating at hyperscale. Back in October last year I visited one of their data centres in Mäntsälä, Finland to see how they capture 70% of the heat generated by the GPUs and feed it back into the grid to heat local homes.
They’re not just another GPU reseller or cloud provider. Nebius actually designs its racks and servers from the ground up. They're essentially combining the power of a supercomputer with the accessibility of hyperscale cloud infrastructure. They already have two supercomputers in the HPC Top 500, ranking as the number two commercially available supercomputer globally.
With over $1 billion invested in European AI infrastructure and "AI factories" launching in London, France, and Finland, they're building world-class infrastructure on European soil.
It’s great to see that Europe’s AI moment is finally arriving.
Brett Adcock on Figure 02 humanoid progress:
UPDATE: This video was from last Saturday - robot speed was 4.05 seconds/package
Yesterday, I saw it running at 3.54 seconds/package
That’s a 13% speed-up in just 6 days 🤯
— Brett Adcock (@adcock_brett)
7:30 PM • Jun 14, 2025
Just six days after Figure AI showcased their 60-minute uninterrupted package-sorting demo, we’re already seeing a 13% speed improvement.
While conventional robots would require manual reprogramming and extensive testing for such improvements, Figure's Helix neural network can learn and optimise continuously. What's particularly exciting is the network effect: when one robot achieves 3.54 seconds per package, every robot in the fleet running Helix benefits from this enhancement.
The trajectory from their initial 6.3 seconds per package to now 3.54 seconds represents a 44% improvement through AI training alone.
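Both percentages are easy to sanity-check. A quick back-of-the-envelope calculation (a minimal sketch using only the seconds-per-package figures quoted above and in the tweet) confirms the reported numbers:

```python
def speedup_pct(before: float, after: float) -> float:
    """Percentage reduction in time per package."""
    return (before - after) / before * 100

# Last Saturday (4.05 s/package) -> yesterday (3.54 s/package)
week_gain = speedup_pct(4.05, 3.54)
# Initial demo (6.3 s/package) -> yesterday (3.54 s/package)
total_gain = speedup_pct(6.30, 3.54)

print(f"Six-day gain: {week_gain:.1f}%")   # ~12.6%, reported as 13%
print(f"Overall gain: {total_gain:.1f}%")  # ~43.8%, reported as 44%
```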
I think the humanoid form is especially important for smaller companies who can’t afford ~$1M sorting machines and instead can leverage humanoids, which can also be used for different tasks—acting as a universal worker, much like a human.
Figure will soon not only be sorting packages, but also climbing stairs, lifting crates, driving forklifts, and turning into a truly adaptable, general-purpose worker. It will learn and improve at machine speed while maintaining the versatility to handle human-designed environments.
If Figure can achieve 13% gains in six days, where will they be in six months?
Source: Brett Adcock on X
Question to Ponder
“With AI progress accelerating, are we approaching an ‘end of work’ scenario where humans become economically unnecessary under our current model?”
AI researchers predict that by 2100 all human jobs will be fully automated, although this could happen much sooner.
Our economic system was built around human labour as the primary means of distributing resources. When that foundation shifts, everything must adapt.
With AI eventually driving the cost of intelligence and the cost of labour to near-zero, we need to think about how we navigate a society like this.
This trajectory raises an important question: will we need basic income? I believe it’s one that is increasingly difficult to ignore. Workers competing with automation would need the security of some form of basic income.
Currently, routine, low-creativity jobs like trucking or data entry are prime candidates for disruption. Jobs needing creativity, emotional depth, or social finesse—like artists, therapists, or nurse anaesthetists—are far more difficult for AI to compete against as it ultimately can’t (yet) “feel”.
However, the WEF expects 39% of core skills to shift by 2030. Therefore, focusing on upskilling and retraining will be paramount.
Much like the Industrial Revolution transformed work rather than abolished it, I think we’ll see the same with the rise of AI. New roles will be created, old ones will fade, and our definition of “work” will shift over time.

How was the signal this week?
See you next week, Alex Banks