Inside the race to recreate Claude Code and mine its guts for revelations
Sigrid Jin woke up to chaos and shipped "Claw Code" by breakfast. Here's everything it taught the world in 24 hours.
- Coders have had a field day weeding through the treasures in the Claude Code leak.
- "It has turned into a massive sharing party," said Sigrid Jin, who created the Python edition, Claw Code.
- Here's how it all went down and what the internet wants to know: Was this a noob mistake or an honest one?
Sigrid Jin woke up at 4 a.m. on Tuesday. His phone was "blowing up" with the Claude Code leak.
The 25-year-old student at the University of British Columbia got to work. It took two humans, 10 OpenClaws, a MacBook Pro laptop, and a few hours to recreate the popular AI agent's source code and share it with the world.
Three hours earlier, X user Chaofan Shou posted that he'd discovered an accidental publication of 512,000 lines of the source code for Claude Code, including details about how the tool works and tests for new features.
Though Anthropic quickly clamped down on unauthorized Claudes, Jin's reproduction — which he created with Seoul-based Yeachan Heo using Python and calls "Claw Code" — has so far made it out unscathed.
"Surprisingly, nobody from Anthropic or GitHub has contacted us at all," Jin wrote to Business Insider, referring to the centralized hub for managing code where Claw Code now lives. "We are continuously prepping for any potential legal action, though, because our main goal is to make sure we go about this in a completely legit way."
It's an ironic twist for an industry that's been accused of using published material to train its models and a company that's touted its signature safety pledge as a competitive advantage. Coders are also gleeful about getting an inside look at a tool that's become central to their work, especially after the launches of lauded models like Opus 4.5 and 4.6 in recent months.
The "best part" of the whole debacle, Jin said, is that it results in greater democratization of coding tools.
"Non-technical people are using these agents to build real things," he said. "We are talking about cardiologists making patient care apps and lawyers automating permit approvals. It has turned into a massive sharing party."
Anthropic said the leak was due to human error, not a security breach. "We're rolling out measures to prevent this from happening again," a spokesperson wrote.
Meanwhile, techies are spreading their copies in private Discord servers, weeding through newly discovered features like spinner verbs, coding pets, and the "fucks" chart — so dubbed by Claude Code creator Boris Cherny — and wondering how one of the world's biggest AI companies leaked in the first place.
The spreading of the leak was a 'workflow revelation'
Shou's 1:23 a.m. post lit the match.
On X, Shou linked to a zip file with the leaked code. He is the CTO of Fuzzland and a dropout of the UC Berkeley Ph.D. program, according to his LinkedIn profile. The link now leads to a 404 error.
An hour later, the X user @nichxbt added the leaked code to his GitHub, which spawned thousands of copies. By Tuesday night, Anthropic had filed a sweeping copyright takedown request, initially prompting GitHub to remove more than 8,000 versions of the leaked code.
Some coders' unrelated repositories were caught up in the sweep. On Wednesday morning, Anthropic reined in its takedown request.
"We retracted the notice for everything except the one repo we named" — that of @nichxbt — "and GitHub has restored access to the affected forks," an Anthropic spokesperson said in a statement.
As of Wednesday afternoon, other websites continued to offer access to the leaked code or tools built upon what was once Anthropic's proprietary information. The code continues to float around in private messages and archived links online. Jin's Claw Code reproduction now has 105,000 stars and 95,000 forks on GitHub. In one day, 5,000 people joined Jin's Discord server, a Korean open-source LLM community, he said.
Meanwhile, Anthropic's competitors seem to be pouring gas on the fire. Jin posted that xAI sent him Grok credits. "Very excited to see what you continue to build!" xAI's Umesh Khanna responded.
Jin said it's all been a "workflow revelation." It showed that he could use AI to recreate an entire tool in a new language — Python, in his case. "The fact that this is possible, combined with the sheer speed of how fast it can be done, is what is really blowing people away."
What was in the leak: Spinner verbs, coding pets, and the 'fucks' chart
Gabriel Bernadett-Shapiro, an AI Research Scientist at SentinelOne, told Business Insider that the leak was most remarkable for how it peeled back the curtain on Anthropic's approach to the future of AI coding agents.
"The takeaway for me wasn't like, 'Oh, they were breached badly,'" he said. "It was, 'Hey, we just got a really unusually clear look at where AI coding agents are going.'"
Bernadett-Shapiro said Anthropic's competitors likely took notes on how the AI model maker is approaching issues like agent memory.
"Any competitor is likely to use that knowledge as like, 'Okay, well, this is how Anthropic is doing it,' right?" he said. "Like, 'Does this match our model?'"
Soon after the leak, techies began picking apart the thousands of lines of code.
They screenshotted references to unreleased models — Opus 4.7, Sonnet 4.8 — and codenames like "Capybara" and "Tengu." One Reddit user said they spotted a Tamagotchi-like pet that "sits beside your input box and reacts to your coding."
One user found a reference to "KAIROS," an agent that runs 24/7 and creates a daily log. The user described KAIROS as an "all-knowing teammate who notices and handles everything, before you even think to ask."
Cherny, Claude Code's creator, chimed in, writing that Anthropic is "always experimenting" and that 90% of tests don't ship. "Still on the fence about this one," he said, referring to KAIROS. "Should we ship it?"
Other features elicited joy, like the list of "spinner verbs." They include scurrying, recombobulating, and topsy-turvying. "Thank you Anthropic for my new fav verbs list," one X user wrote.
Then there's the cursing analytics. One user spotted that Claude will log swear words as "is_negative." He wrote that Anthropic was "tracking how often you rage at your AI."
Cherny responded that these signals determine whether the user is having a good experience. "We put it on a dashboard and call it the 'fucks' chart," he wrote.
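The leak doesn't show Anthropic's actual implementation, but the feature as described is simple to picture: flag messages containing profanity with an `is_negative` marker, then aggregate the flags for a dashboard. Here's a minimal, hypothetical sketch of that idea; the word list, function names, and log format are all illustrative assumptions, not Anthropic's code.

```python
import re

# Illustrative profanity list -- not Anthropic's; a real system would be
# far more nuanced about detecting user frustration.
PROFANITY = {"fuck", "fucking", "damn", "wtf"}

def tag_message(text: str) -> dict:
    """Return a log record marking whether the message reads as frustrated."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {"text": text, "is_negative": bool(words & PROFANITY)}

# Aggregate flagged records into the kind of rate a dashboard might chart.
logs = [tag_message(m) for m in [
    "why the fuck is this test failing",
    "great, that worked",
]]
negative_rate = sum(r["is_negative"] for r in logs) / len(logs)
```

In this toy version, half the sample messages are flagged, so `negative_rate` comes out to 0.5, the sort of per-session number that could feed a "fucks" chart.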
Who's to blame for the 'noob-level' mess-up?
While Delip Rao, an AI researcher at the University of Pennsylvania who previously worked at Amazon and Twitter, wasn't interested in the leak's contents, he was interested in just how it happened.
"Anthropic hires really smart people," he said. "It's very unlikely for somebody who works at Anthropic to make this sort of noob-level mistake."
He theorized that an AI agent might be involved, comparing it to a recent outage of Amazon's e-commerce operation, which was linked to its AI coding assistant Q. "As more trust develops, we tend to scrutinize their work less," Rao said.
Others online have shared similar theories. One user screenshotted a tweet from Cherny, in which he wrote, "100% of my contributions to Claude Code were written by Claude Code." It got hundreds of likes.
Cherny responded directly to that post, swatting down the rumors. "It was human error," he wrote. "Our deploy process has a few manual steps, and we didn't do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks."
The answer is to speed up, not slow down, Cherny wrote. His solution: "more automation & Claude checking the results."
David Borish, an AI strategist at Trace3, initially felt bad for Anthropic CEO Dario Amodei. He saw people online using lessons from the code to improve their own products, even if they weren't Anthropic customers. As an entrepreneur, he couldn't fathom a leak like that.
Still, Borish blamed Anthropic's speed. He referenced the "move fast and break things" ethos. "There's no way you can have secure checks and balances and move that quickly," Borish said.
Don't worry: the staffer behind the leak wasn't fired. "It was an honest mistake," Cherny wrote. "It happens."
Read the original article on Business Insider