Claude Connects Your Tech Stack, Meet Neuralink’s Third Patient, and Inside Meta’s LlamaCon

AI Highlights

My top-3 picks of AI news this week.

Anthropic
1. Claude connects your tech stack

Anthropic has released Integrations, allowing Claude to connect directly with popular workplace tools and enabling developers to create custom connections to virtually any platform.

  • Pre-built connections: Claude can now integrate with Asana, Intercom, Linear, Zapier, Atlassian, Square, and PayPal, gaining deep context about projects, tasks, and organisational knowledge.

  • Custom extensibility: Developers can create their own integrations in as little as 30 minutes using Anthropic's MCP standard, connecting Claude to any tool or data source (see the sketch after this list).

  • Enhanced capabilities: Advanced Research mode is also launching on all paid plans; it can search across connected apps and web sources for up to 45 minutes to deliver comprehensive reports.
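
For a sense of how lightweight a custom integration can be, here’s a minimal sketch of an MCP server built with the FastMCP helper from the official Python SDK. The server name, the tool, and the stubbed task data are hypothetical, not any real product's API; a real integration would call your platform's API and expose the server over a remote transport so Claude can reach it.

```python
# pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# Name the server; this is what shows up once the integration is connected.
mcp = FastMCP("task-tracker")

@mcp.tool()
def list_open_tasks(project: str) -> list[str]:
    """Return open task titles for a project (stubbed for illustration)."""
    # A real integration would query your project-management tool's API here.
    return [f"{project}: draft launch plan", f"{project}: review pricing page"]

if __name__ == "__main__":
    # stdio is fine for local testing; hosted integrations use an HTTP-based transport.
    mcp.run()
```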

Alex’s take: This is the API-ification of AI assistants. By opening Claude to third-party integrations, Anthropic has transformed it from a standalone chatbot into a central hub that can orchestrate across your entire digital workspace. What's particularly clever is how they've taken the complex Model Context Protocol (MCP) and made it accessible to everyone in just a few clicks. This removal of friction matters more than we think.

Neuralink
2. Meet Neuralink’s third patient

Brad Smith has become the third person to receive a Neuralink brain implant, marking a significant milestone as the first nonverbal ALS patient to undergo the procedure.

  • AI-powered communication: Neuralink developed a chat app using Grok 3 AI that suggests contextual responses in Brad’s pre-ALS voice, helping him stay engaged in conversations.

  • Breaking barriers: Brad controls his MacBook Pro cursor via brain signals, allowing him to communicate outdoors and in varying lighting conditions, unlike his previous eye-gaze system, which only worked in the dark.

  • Precision breakthroughs: Using tongue movements for cursor control and jaw clenching for clicks (after hand-based training proved ineffective), Brad achieved a Webgrid score of 5, up from less than 1 with eye-gaze technology.

Alex’s take: Brad himself has made a video showcasing Neuralink’s capabilities, the first known one edited entirely using a brain-computer interface. I highly recommend you give it a watch.

Importantly, this demonstrates how these advances are restoring agency to ALS patients. Brad can now communicate outdoors, build his own applications with AI assistance, and engage with the world in ways that were impossible just months ago, which I think is wonderful.

I tried playing the Webgrid game myself and scored 9. Considering I spend a lot of my day with a mouse in hand, Brad’s jump from less than 1 to 5 is just incredible. Grok even helped Brad, who has no coding experience, build his own keyboard training app.

Meta
3. Inside Meta's first LlamaCon

Meta kicked off their first-ever LlamaCon conference with major announcements spanning consumer apps to developer tools, bringing together developers worldwide to celebrate the Llama ecosystem.

  • Meta AI app: Stand-alone ChatGPT competitor that leverages your Facebook/Instagram data for personalised responses, plus a social discovery feed for sharing AI interactions.

  • Llama API preview: Developer platform offering one-click API creation, custom model fine-tuning with Llama 3.3 8B, and full portability—your models stay yours.

  • Llama Protection Suite: New security tools including Llama Guard 4, LlamaFirewall, and CyberSecEval 4 for building safer AI applications.

  • Speed partnerships: Collaborations with Cerebras and Groq bringing dramatically faster inference speeds to Llama 4 models.

Alex’s take: Meta's inaugural LlamaCon feels a bit like a coming-of-age party for open-source AI. On the one hand, OpenAI is locking developers into its ecosystem; on the other, Meta is offering the best of both worlds: API creation without the handcuffs. I also think Meta has a serious distribution advantage in its ecosystem, so leveraging social graph data should make for far more meaningful experiences when interfacing with an assistant.

Today’s Signal is brought to you by Athyna.

Athyna 2025 Salary Report

Curious how global hiring gives you a competitive edge?

  • Discover salary insights for engineers, data scientists, product managers, and more.

  • Explore top-tier talent with experience at AWS, Google, PwC, and beyond.

  • Learn how to save up to 70% on salaries while hiring top global talent.

Content I Enjoyed

Mark Zuckerberg on Dwarkesh Patel podcast – Meta’s AGI Plan

Only two weeks ago, Zuck was in Washington, defending Meta in federal court against the US government, which argues that Meta should unwind its acquisition of Instagram on the grounds that the purchase was made to eliminate competition and create a social networking monopoly.

This week, Mark has been doing the media rounds discussing Meta’s AI strategy. I particularly enjoyed this conversation with Dwarkesh Patel, which offered a real glimpse into how the social media giant plans to compete in the AGI race.

Meta AI now has nearly 1 billion monthly active users, with most interaction happening through WhatsApp rather than Instagram or Facebook.

Zuck cited the striking statistic that the average American has fewer than 3 friends while desiring around 15. He believes AI agents could help fill this gap in human connection through natural conversation and emotional support, eventually becoming seamlessly integrated through AR glasses like Orion.

Whilst we’re still a way off truly personal conversations with AI assistants (they’re still just predicting the next word given a previous sequence of words), it highlights an interesting point about today’s loneliness epidemic.

As one commenter remarked, Zuck’s Meta glasses might already be running a real-time search on Dwarkesh’s questions during the interview to optimise his responses.

It wouldn't surprise me if he's already testing the future he’s selling to the rest of us.

Idea I Learned

Cursor AI model coding benchmarks

This Is the Real Benchmark

Forget MMLU scores and Elo ratings for a second.

This week, I came across an X post by Ryo Lu, Head of Design at Cursor.

It highlights a telling metric for AI model quality—what developers are actually choosing to use daily.

The data reveals an interesting picture: Claude 3.7 Sonnet leads the pack, followed by Gemini 2.5 Pro and Claude 3.5 Sonnet.

Meanwhile, OpenAI’s o3 and o4-mini are the fastest-growing models developers are adopting.

This matters because developers are among the most demanding AI users. They need models that can handle complex reasoning, understand nuanced instructions, and produce reliable output.

The disconnect between benchmark scores and real-world usage is becoming impossible to ignore. Only last month, we highlighted how these models overfit to benchmark data, making them unreliable when put to the test in the real world.

Some models that perform well on traditional coding benchmarks see far less real-world adoption. Therefore, we need to rethink how we evaluate AI progress, ideally through third-party, independently verifiable methods.

It's much like when Hao AI Lab used Super Mario Bros to test leading frontier models and found Claude 3.7 outperforming reasoning models like OpenAI's o1, even though those models are generally stronger on most benchmarks.

At the end of the day, real-world validation can’t be gamed.

Quote to Share

Luis von Ahn on Duolingo’s AI transformation:

Duolingo CEO Luis von Ahn sent an all-hands email announcing the company's AI-first transition.

The announcement reveals how "being 100% mobile" in 2012 made Duolingo successful. Now the company is declaring itself 100% AI-focused, requiring employees to demonstrate how they're using AI to reduce costs and improve efficiency.

Two key points stand out from the email.

The intro highlighted how Duolingo “will remain a company that deeply cares about its employees.” Yet the end of the email stated that “headcount will only be given if a team cannot automate more of their work”.

There’s a rather stark contradiction at play here.

Removing the human element from the workplace is the polar opposite of showing that you care deeply for your employees.

Being AI-first means empowering and educating your people to use these tools to 10x their output, not replace humans entirely.

Something else I wanted to pick up on was the fact that Duolingo felt the need to make a formal announcement about adopting AI, which is quite telling.

As one commenter noted: “People who understand AI just change their processes over time, quietly.” Is this a PR stunt, and are Duolingo actually playing catch-up here, rather than leading?

Most revealing is the email's distinctive ChatGPT writing style, especially its overuse of phrases like "It's not just X, it's also Y", which several observers immediately recognised.

The irony of using AI to write an announcement about becoming AI-first wasn't lost on anyone.

Unfortunately, in my eyes, this one didn’t quite pass muster.

Question to Ponder

“If memory becomes standard across LLMs, what prevents users from easily copying their interaction history from one AI assistant to another? Unlike social media's network effects, wouldn't it be technically simple to transfer years of conversations to train a new LLM instance?”

In the age of AI, rich, personalised context is everything.

Building up memory of a user raises their cost of switching to another provider, as the incumbent model “knows” the user better than an empty chatbot would.

In principle, frictionlessly transferring memory from one LLM provider to another would be a nice idea. But model providers like OpenAI, Anthropic, and xAI make it intentionally hard to do.

Why? Because in a world where LLMs are becoming commoditised, it’s their central moat (for the time being).

Memory increases the “lock-in” for users. An LLM becomes infinitely more valuable to me if it knows me, my life, my work, and any surrounding context. This augments the calibre of conversation and, in turn, the insights I then receive. That’s how a chatbot evolves into a truly useful assistant.

In practice, unless you request a copy of your data profile, you’d have to manually feed this information into a new LLM. That friction is already too high for many people to consider it an option.
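
To make that friction concrete, here’s a rough sketch of what a manual transfer might look like, assuming you’ve already exported your memories as a plain JSON list of strings (the file name and format are hypothetical, not any provider’s actual export): you fold them into the system prompt you hand to the new assistant.

```python
import json

def build_system_prompt(memory_file: str) -> str:
    """Fold exported memory snippets into a system prompt for a new assistant."""
    with open(memory_file, encoding="utf-8") as f:
        # Assumed format: ["Works in fintech", "Prefers concise answers", ...]
        memories = json.load(f)
    bullets = "\n".join(f"- {m}" for m in memories)
    return (
        "You are my assistant. Background context carried over from a "
        "previous assistant:\n" + bullets
    )

if __name__ == "__main__":
    # "memories.json" is a hypothetical export file.
    print(build_system_prompt("memories.json"))
```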

But all hope is not lost.

As we covered earlier, Anthropic now lets users integrate external apps directly into their Claude chat interface.

This means you, as a user, can connect apps like your Gmail, Calendar, and Drive, which Claude can then reference during the conversation.

Whilst this ticks the box for email conversations and cloud context, I consider the “memories” these large language models pick up over time, from the conversations and interactions we have with AI assistants, to be far more nuanced (and, perhaps, far more personal).

That’s why I believe there is a real race among LLM providers to implement memory and build a “home” for users, and why it is accompanied by a real reluctance to share those memories with anyone but themselves.

How was the signal this week?

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?