
Musk’s Robotaxi Reality, ElevenLabs’ Triple Threat, and Google’s DNA Bombshell

AI Highlights

My top-3 picks of AI news this week.

Tesla
1. Tesla Rolls Out Robotaxis

Tesla has officially launched its long-awaited robotaxi service in Austin, Texas, marking a pivotal moment for the company's AI-driven future.

  • Limited launch: Starting with up to 20 Tesla Model Y vehicles operating autonomously, with safety monitors in the passenger seat and no human drivers behind the wheel.

  • FSD technology: Uses Tesla's Full Self-Driving software powered by eight cameras, the same setup as consumer Model Y vehicles but without driver oversight requirements.

  • Future fleet: Plans to introduce purpose-built Cybercab sedans and Robovan vehicles by 2026, designed without steering wheels or pedals.

  • Market ambitions: Musk projects autonomous vehicles could add $5-10 trillion to Tesla's current $1 trillion market cap, with ARK Invest estimating robotaxis could comprise 90% of Tesla's earnings by 2029.

Alex’s take: While Waymo already handles 250,000+ paid trips weekly across multiple cities, Tesla's entry definitely validates the robotaxi market's potential. I loved Dan O’Dowd’s “experiments”: yanking a test dummy in front of a Tesla, letting it plough into the mannequin, and claiming Tesla's full self-driving (FSD) doesn’t work. He asks, “Why isn’t somebody in jail?” I’ll tell you why: because 100 out of 100 humans would have hit the dummy if they were ambushed by this staged ‘test’.

ElevenLabs
2. ElevenLabs’ Triple Threat

ElevenLabs made three major voice AI announcements this week, pushing voice AI from novelty to necessity.

  • 11.ai Voice Assistant: The first voice-first AI assistant that actually takes meaningful action through Model Context Protocol integration, connecting to tools like Linear, Slack, and Notion to complete real workflows.

  • Voice Design v3: Generate any imaginable voice from a simple text prompt, from gruff cowboys to mythical gods, with customisable tone, accent, age, and delivery for infinite character creation.

  • ElevenLabs Mobile App: Brings their most powerful AI voice tools to iOS and Android, featuring the expressive Eleven v3 model for on-the-go content creation and seamless integration with social media workflows.
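For the technically curious: Model Context Protocol tool calls are JSON-RPC 2.0 messages under the hood. Here's a minimal sketch of what an assistant's request to a connected tool might look like, with the tool name and arguments being purely hypothetical examples:

```python
import json

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A hypothetical Slack connector tool posting a message on your behalf
request = mcp_tool_call(
    "slack_post_message",
    {"channel": "#general", "text": "Standup notes are ready"},
)
print(request)
```

The point is that the assistant doesn't need bespoke integrations per app; any tool that speaks this protocol can be plugged in.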

Alex’s take: Voice is becoming the new interface for AI. While everyone's been focused on text-based AI, ElevenLabs is quietly building the infrastructure for a voice-first world. It feels like the early days of the iPhone—simple concepts that will fundamentally change how we interact with technology. I predict we'll see voice prompting become as common as typing within the next two years.

Google DeepMind
3. Google's DNA Bombshell

Google DeepMind has unveiled AlphaGenome, a groundbreaking AI model that deciphers how genetic variants impact biological processes across the human genome's regulatory landscape.

  • Million-letter precision: Analyses up to 1 million DNA base-pairs at single-letter resolution, offering unprecedented context and detail compared to previous models that had to trade off sequence length for resolution.

  • Comprehensive multimodal prediction: Simultaneously predicts thousands of molecular properties including gene expression, RNA splicing patterns, chromatin accessibility, and protein binding sites across hundreds of cell types and tissues.

  • State-of-the-art performance: Outperformed existing models on 22 out of 24 sequence prediction tasks and 24 out of 26 variant effect evaluations, now available via API for non-commercial research.

Alex’s take: What excites me most is that AlphaGenome tackles the genome's "dark matter"—the 98% of non-coding DNA that orchestrates gene activity but remains largely mysterious. While we've made incredible progress with protein-coding variants through tools like AlphaMissense, the regulatory regions hold the keys to understanding complex diseases and traits. This feels like the moment genomics gets its GPT moment.

Today’s Signal is brought to you by Together AI.

Unleash hyperscale AI via Together AI. Spin up DeepSeek-R1, Llama 4 or Qwen 3 on serverless or dedicated endpoints, auto-scaling to millions. Pay as you go, continuously refine on user data—and keep full control.

Content I Enjoyed

Darren Aronofsky and Eliza McNitt on Humans, Hearts, and Storytelling in the age of AI

Darren Aronofsky / Collider

Make Soup, Not Slop

This week, Darren Aronofsky appeared at Tribeca Film Festival to discuss “Ancestra,” the first film from his new AI storytelling venture, Primordial Soup, in collaboration with Google DeepMind.

Aronofsky introduced a key principle: “make soup, not slop.” He explained how AI-generated content can be visually stunning yet emotionally hollow, grabbing your attention briefly but leaving no lasting impact. The difference here is in the storytelling and the infusion of human emotion.

Director Eliza McNitt embodied this principle by centring her film on the deeply personal story of her own birth, using her actual baby photos to train the AI model that generated the infant scenes.

Single shots often combined 15+ different AI generations, requiring traditional VFX expertise to blend seamlessly. They even pioneered an "AI unit", a new filmmaking department mixing prompt engineers with traditional artists.

Something that stood out to me from the conversation was Aronofsky's chess analogy. More people play chess now, despite computers being unbeatable. As the panel demonstrated, the future of filmmaking is not about choosing between AI usage and human creativity. It revolves around how artists can guide these powerful tools to make something truly meaningful.

Idea I Learned

Anthropic wins key ruling in AI authors' copyright lawsuit

The First Major AI Copyright Ruling Is Here

This week, a federal judge delivered the first major ruling on AI training and copyright law.

The case involved Anthropic and three authors who sued the company for using their books to train Claude without permission.

Judge William Alsup made a split decision that will shape how AI companies operate going forward.

The good news for Anthropic? The judge ruled that training AI models on copyrighted books constitutes "fair use" and is therefore perfectly legal. He compared it to "any reader aspiring to be a writer" who studies existing works to create something new and transformative.

The bad news? Anthropic got hammered for downloading more than 7 million pirated books from sites like Library Genesis and Pirate Library Mirror. The judge was scathing, calling this "inherently, irredeemably infringing" even if the books were used for legitimate training purposes.

This ruling establishes a crucial precedent: you can legally train on copyrighted content, but you better make sure you're getting it through legitimate channels.

For Anthropic, this could mean hundreds of millions—possibly billions—in damages for the piracy aspect alone, with statutory damages reaching up to $150,000 per work.

There’s a clear takeaway for the AI industry: the acquisition method matters just as much as the use case. Whilst fair use protects AI training, cutting corners on how you get the data will cost you dearly.

This ruling essentially gives AI companies a roadmap for staying on the right side of copyright law—follow it, and you'll likely be fine.

Quote to Share

Sam Altman weighed in on startup lawsuit tactics this week.

Altman is responding to a trademark lawsuit from iyO, a Google-backed hardware startup that makes custom-moulded earpieces. The company is suing OpenAI over the name "io", which is what OpenAI and Jony Ive are calling their hardware collaboration.

What makes this particularly absurd is that iyO is essentially claiming ownership over "io", one of the most fundamental concepts in computer science (input/output). It's like trying to trademark "CPU" or "RAM", or, as my partner highlighted, a cooking concept like "stir fry".

Having the inside scoop on emails like this from CEOs is actually quite refreshing. Zero use of capital letters, ubiquitous references to “man” and “brother”, alongside the occasional emoji sprinkled on top for good measure.

To clarify the timeline: iyO's product didn’t work, its founder gave a TED talk and asked OpenAI to acquire the company, Sam said they were working on something competitive, the founder asked to be acquired again, was rejected, and sued days later because he didn’t get his way.

This looks less like legitimate trademark protection and more like litigation as a last-resort negotiation tactic. If entrepreneurs can weaponise lawsuits whenever they don't get the deals they want, it creates a chilling effect in the entire ecosystem.

The real lesson? Build better products, not better lawsuits.

Question to Ponder

“If robots have feelings, do they need rights?”

If AI becomes conscious, what moral dilemmas would present themselves?

Does this mean robots should have rights, too?

We can begin by understanding our current position.

If an AI says it's sad today, it’s just a neural network predicting the next word in a sequence based on the previous words.

AI doesn’t have a heart.

It can’t portray real, genuine human emotion.

It doesn’t have real idiosyncrasies and quirks that make humans human.

However, we can look to the industry and see this narrative developing.

In October last year, Anthropic quietly hired its first “AI welfare” researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection.

I believe humans will treat life-like humanoids differently than faceless metallic droids because of our human tendency to anthropomorphise, even if their underlying architecture is similar.

It's those life-like features—those visual cues that mirror humanity—that could gradually shape our moral intuitions about this topic.

Whilst robots and AI don’t have feelings today, if they do tomorrow, we need to be ready.

How was the Signal this week?


💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?