ChatGPT Gets Connected, ElevenLabs Drops the Mic, and Cursor’s Coding Conquest

AI Highlights

My top-3 picks of AI news this week.

OpenAI releases ChatGPT Connectors

OpenAI
1. ChatGPT Gets Connected

OpenAI has announced a major expansion of ChatGPT's capabilities, allowing it to connect directly to internal sources and pull real-time context while maintaining existing user permissions.

  • Deep Research Connectors: Available to Plus & Pro users (excluding EEA, CH, UK) and to Team, Enterprise & Edu users, with connectors including Outlook, Teams, Google Drive, Gmail, Linear, and more.

  • Enterprise Integration: Additional connectors like SharePoint, Dropbox, and Box are available specifically for Team, Enterprise, and Education users.

  • Custom Connectors: Workspace admins can now build custom deep research connectors using Model Context Protocol (MCP) in beta, enabling connection to proprietary systems and apps.

  • Record Mode: Rolling out to Team users on macOS, this feature captures meetings, brainstorms, or voice notes, transcribing and transforming them into actionable follow-ups, plans, or code.
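
To make the custom-connector bullet concrete: the Model Context Protocol essentially lets a server expose named tools, each with a machine-readable input schema, that a client can discover and call. The sketch below is a hypothetical pure-Python toy of that shape, not the real MCP SDK (which speaks JSON-RPC over stdio or HTTP), and the `search_tickets` tool is an invented example.

```python
# Toy illustration of the MCP idea behind custom connectors: a server
# registers named "tools" with JSON-schema-style metadata that a client
# (e.g. deep research) can list and invoke. NOT the real MCP SDK.

class ToyMCPServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description, schema):
        """Register a callable under a name, with its input schema."""
        def decorator(fn):
            self._tools[name] = {"description": description,
                                 "input_schema": schema, "fn": fn}
            return fn
        return decorator

    def list_tools(self):
        """What a connecting client sees during tool discovery."""
        return [{"name": n, "description": t["description"],
                 "input_schema": t["input_schema"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, arguments):
        return self._tools[name]["fn"](**arguments)

server = ToyMCPServer()

@server.tool("search_tickets", "Search the internal ticket system",
             {"type": "object", "properties": {"query": {"type": "string"}}})
def search_tickets(query):
    # Stand-in for a real lookup against a proprietary system.
    return [f"TICKET-42: results for '{query}'"]

print(server.call_tool("search_tickets", {"query": "billing"}))
```

The point of the protocol is exactly this separation: the connector owner defines the tools and schemas once, and any MCP-aware client can use them without bespoke integration code.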

Alex’s take: This feels like OpenAI's biggest step toward becoming a true workplace operating system. While meeting transcription has been commoditised, as Olivia Moore highlighted, the real differentiator will be UI choices and how seamlessly these tools integrate into existing workflows. With 500M+ weekly active users, ChatGPT has the distribution advantage, but as Zak Kukoff suggests, this raises serious platform risk questions for existing AI tools. It’s not just about who has the best models anymore. The race to control the interface between AI and our daily work has begun.

ElevenLabs
2. ElevenLabs Drops the Mic

ElevenLabs has launched Eleven v3 (alpha), their most expressive text-to-speech model ever, pushing the boundaries of what artificial voices can achieve.

  • Multi-lingual mastery: Supports 70+ languages with human-like nuance and natural delivery across diverse linguistic contexts.

  • Multi-speaker dialogue: Generates realistic conversations where speakers share contextual understanding and emotional awareness.

  • Granular audio control: Precise direction through audio tags like [excited], [whispers], [laughs], and [sighs] for film-director level control over delivery.

  • Emotional intelligence: Handles interruptions and emotional shifts seamlessly, creating truly dynamic speech experiences.
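
To show how those audio tags sit inline in a script, here is a small hypothetical helper (my own, not part of any ElevenLabs SDK) that splits a v3-style script into delivery-tagged segments using the tag format from the announcement:

```python
import re

# Eleven v3 scripts mix plain text with inline audio tags such as
# [excited] or [whispers]. This toy helper splits a script into
# (tag, text) pairs so the embedded delivery cues are visible.
# A trailing standalone tag like [laughs] acts as an effect with
# no spoken text, so it produces no segment here.
TAG_PATTERN = re.compile(r"\[(\w+)\]\s*")

def split_script(script):
    segments = []
    tag = None
    pos = 0
    for match in TAG_PATTERN.finditer(script):
        text = script[pos:match.start()].strip()
        if text:
            segments.append((tag, text))
        tag = match.group(1)
        pos = match.end()
    tail = script[pos:].strip()
    if tail:
        segments.append((tag, tail))
    return segments

script = "[excited] We just launched! [whispers] Don't tell anyone yet."
print(split_script(script))
```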

Alex’s take: I was blown away by their demo video. We are now witnessing the death of robotic voices. Imagine when we equip humanoid robots or your personal AI assistant with this—the anthropomorphisation of artificial intelligence continues. I’d also recommend checking out ElevenLabs’ prompting guide to get the most out of this model. The gap between human and artificial communication continues to narrow at an extraordinary pace.

Anysphere
3. Cursor's Coding Conquest

Anysphere has delivered a massive one-two punch this week with the launch of Cursor 1.0 alongside a staggering $900 million funding round at a $9.9 billion valuation, cementing its position as the leader in AI coding assistants.

  • BugBot integration: Automatic code review that catches potential bugs in PRs and allows developers to fix issues directly in Cursor with pre-filled prompts.

  • Background Agent for all: Their remote coding agent is now available to every user, expanding beyond the early access program with cloud-based processing capabilities.

  • Memories feature: Cursor can now remember facts from conversations and reference them in future interactions, stored per project on an individual level.

  • Explosive growth: ARR has surpassed $500 million and is doubling approximately every two months, with the company now expanding into enterprise licenses.

Alex’s take: I use Cursor weekly—it’s been instrumental in teaching myself full-stack development. You can ask unlimited “whys” in the chat pane and really uncover the ground truths of what’s actually going on. This would never be possible without LLMs, and Cursor’s effortlessly simple UI makes it a sticky product. Doubling ARR every two months is no joke.

Today’s Signal is brought to you by INBOUND.

INBOUND 2025 is heading to San Francisco, Sept. 3–5, for a one-time-only West Coast edition. Join AI leaders like Anthropic CEO Dario Amodei, Synthesia's Victor Riparbelli, & AI thought leader Dwarkesh Patel for bold insights, real strategy, and next-gen networking at the heart of the AI revolution.

Content I Enjoyed

Figure 02 sorting packages in logistics using Helix AI

Figure Clocks In

This week, Figure AI dropped a 60-minute, uninterrupted demo of their Figure 02 robot sorting packages.

This type of performance would have been nearly impossible to achieve with traditional robotic programming approaches.

The robots run an AI system called Helix, detailed in their latest article. It’s a Vision-Language-Action model that enables robots to think like humans. Helix runs on a two-system approach.

System 2 “thinks slow” about high-level goals, handling scene understanding and language comprehension. System 1 “thinks fast”, executing and adjusting actions in real time by translating System 2’s intent into continuous robot actions.
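
The two-system split can be sketched as a toy control loop. This is my simplification of Figure's description, not their code: a slow planner updates the goal at a fraction of the rate of a fast controller that tracks it.

```python
# Toy sketch of a Helix-style two-system loop (an illustrative
# simplification, not Figure's actual architecture): a slow "System 2"
# updates a high-level goal every N ticks, while a fast "System 1"
# nudges the robot toward that goal on every tick.

def system2_plan(tick):
    """Slow, deliberate: pick a target from scene/language understanding."""
    return 10.0 if tick < 50 else 0.0  # e.g. "reach the parcel, then return"

def system1_act(position, goal):
    """Fast, reactive: close 20% of the remaining gap each tick."""
    return position + 0.2 * (goal - position)

position, goal = 0.0, 0.0
for tick in range(100):
    if tick % 10 == 0:   # System 2 runs at a tenth of System 1's rate
        goal = system2_plan(tick)
    position = system1_act(position, goal)

print(round(position, 2))
```

The design point is decoupled frequencies: the fast loop keeps the robot responsive between slow, expensive planning updates.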

Helix now handles packages 20% faster and achieves 35% higher barcode scanning success. Scaling their training data from 10 to 60 hours dropped processing time per package from 6.3 to 4.3 seconds while boosting barcode accuracy from 88% to 95%.

As Figure highlighted in their post, the robot can even feel when it first touches an object and recover from mistakes by re-planning on the fly.

I love that they showed an entire 60-minute run, failures and everything in between. It makes it far more authentic than a cherry-picked demo, or, dare I say, teleoperation. I'm excited for the 10-hour version to satisfy my robot ASMR obsession.

Idea I Learned

Apple’s new research paper, “The Illusion of Thinking”

The Illusion of Thinking

Apple just dropped a research paper that's causing quite a stir in the AI community.

“The Illusion of Thinking” challenges everything we thought we knew about reasoning models like Claude and ChatGPT.

The findings are pretty sobering. These models don't actually reason. Instead, they're sophisticated pattern matchers that completely collapse when faced with unfamiliar problems.

Here’s what caught my attention specifically: LLMs behave inconsistently across different puzzles. They can nail over 100 steps in Tower of Hanoi but fail after just 4 steps in River Crossing. This suggests performance correlates more with training data familiarity than inherent problem complexity.
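
Worth noting that a 100+ step Tower of Hanoi solution is algorithmically trivial, which is part of why long correct traces aren't strong evidence of reasoning. The puzzle needs exactly 2**n − 1 moves for n disks, so "over 100 steps" corresponds to just 7 disks. A minimal recursive solver:

```python
# Tower of Hanoi: move n disks from src to dst via aux.
# The optimal solution always takes 2**n - 1 moves.

def hanoi(n, src, dst, aux, moves):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # clear the top n-1 out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # stack the n-1 back on top

moves = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))  # 2**7 - 1 = 127
```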

Josh Wolfe, Co-Founder & Partner at Lux Capital, highlighted something fascinating too. These models “overthink” easy problems, exploring wrong answers even after finding the right one. But when things get harder, they give up early despite having plenty of compute left. Even when you hand them the exact algorithm, they still mess up the execution.

I think the real insight here is perhaps about human cognition, not just AI limitations. As one observer noted, most of what airline pilots and doctors do is pattern matching; true reasoning only kicks in when something genuinely new appears.

This connects to Geoffrey Hinton's idea, which we explored back in April, about how humans are “analogy machines” who think by resonance rather than deduction.

The paper raises critical questions about whether current “reasoning” models can ever achieve generalisable thinking, or if they're destined to remain very expensive autocomplete systems.

Worth keeping this in mind next time you're impressed by an AI's “reasoning” abilities.

Quote to Share

Ethan Mollick on the Chief AI Officer hype:

I thought this was a brilliant quote that immediately cuts through it:

“The horrible realisation you have fairly quickly is that nobody knows anything.”

Most organisations are still figuring out AI basics while tech companies push increasingly advanced capabilities.

There's a massive disconnect happening. AI labs are racing toward level 8 and 9 capabilities while most workers are still trying to master level 1—how to get genuine value from the AI tools already available.

Everyone wants answers, but the uncomfortable truth is that very few people actually know how to effectively integrate AI into real workflows. The expertise isn't going to come from hiring expensive consultants or creating new C-suite positions.

The real AI champions are already in your organisation—they're the curious "weirdos" experimenting with these tools, finding what works and what doesn't. These are the people who will drive meaningful adoption, not top-down mandates.

I also think organisational context matters enormously here. One-size-fits-all AI strategies miss the mark. Real progress comes from empowering internal champions to test, iterate, and build trust in AI systems organically.

Question to Ponder

“High school students often feel lost choosing careers due to many options and unclear future paths. What exactly does it mean for a student’s interest and future if they choose to study computer science at university?”

I totally get why this feels overwhelming.

When I was choosing my path, I felt the exact same uncertainty about what direction to take.

Currently, I'm making a conscious effort to continue my education in mathematics, economics, physics, and computer science. I find this helps tremendously with problem-solving and thinking from first principles. Because at the end of the day, it's all bits and atoms.

Computer science goes beyond just coding or getting a tech job. It's about building an understanding of how information systems work at their core. It teaches you to break down complex problems into manageable parts, think algorithmically, and build solutions systematically.

What I love about CS, much like economics, mathematics, and physics, is that these subjects are grounded in fundamental truths that cannot be argued away. Once you understand those axioms, you have firm footing for making decisions and carving your own path.

For me, I initially chose economics and now I'm pursuing technology head-first, but that economic foundation gave me a crucial understanding of how society, business, and the world operate. So many of these concepts apply directly to building technology today. The same is true if you're doing it the other way around.

Building a foundational understanding of how the world works (through math, econ, CS and physics) provides essential grounding for problem-solving. It enables you to ask great questions and arrive at your own truths.

The beautiful thing about studying computer science is that it opens doors you didn't even know existed. It’s excellent preparation for today’s job market, all while building thinking tools that will serve you regardless of where technology takes us next.

How was the signal this week?

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?