Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to “human error,” the company said Tuesday. The incident briefly exposed a large set of internal files before being removed, raising concerns within the developer community about software release practices.
According to the company, an internal-use file was mistakenly included in a software update, directing users to an archive containing nearly 2,000 files and around 500,000 lines of code. Before the error was caught and the archive taken down, the material had been copied and re-shared on GitHub.
Anthropic said the issue was not the result of a cyberattack. “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” a company spokesperson said. “This was a release packaging issue caused by human error, not a security breach.”
The exposed files relate to the internal architecture of Claude Code, a tool designed to assist developers with programming tasks using artificial intelligence. The company emphasised that the leak did not include confidential user data or core model information tied to Claude itself.
The incident gained significant attention online after a post on X shared a link to the leaked material, drawing more than 29 million views within hours. Developers and researchers quickly began examining the files, with some noting that parts of the system were already partially understood from earlier reverse-engineering efforts.
This is not the first time elements of the assistant’s code have surfaced publicly. Portions of an earlier version of Claude Code’s source were exposed in February 2025, contributing to a broader understanding of how the tool operates.
Industry observers say such incidents highlight the risks associated with rapid development and deployment of AI tools, where complex systems and frequent updates can increase the chance of accidental exposure. While the absence of sensitive data in this case has limited the impact, the scale of the release has prompted renewed discussion about internal safeguards and review processes.
Anthropic has not indicated whether it plans to make changes to its release procedures following the incident, but it reiterated that the exposure was contained and did not compromise users.
The company, known for developing advanced AI systems, continues to expand its presence in the competitive market for coding assistants, where tools are increasingly being integrated into software development workflows. The episode serves as a reminder of the operational challenges that can accompany fast-moving innovation in artificial intelligence.
