On 3 May 2026, NHS England quietly reversed a long-standing rule: software built with public funds, released openly as required since 2019, must now stay hidden. The reason? A growing fear that AI systems trained on publicly available code could reverse-engineer vulnerabilities in health infrastructure. The shift was confirmed in an internal memo obtained by New Scientist.
Key Takeaways
- NHS England suspended its 2019 open-source mandate on 3 May 2026, citing AI-driven cyber threats
- The policy change targets AI models like Mythos, capable of scanning public repositories for exploitable patterns
- Internal documents warn such AIs could map vulnerabilities in hospital systems within 72 hours
- Public health code worth an estimated £290 million in development is now restricted
- The reversal bypassed parliamentary review, raising transparency concerns
Open Source Was a Public Promise
Since 2019, NHS England had required all software developed with public money to be released under open-source licenses. The goal was transparency, collaboration, and cost efficiency. Developers across the UK’s health tech ecosystem contributed to shared tools — from appointment schedulers to diagnostic support systems. More than 120 major projects were published on the NHS Digital GitHub repository, with over 1,400 contributors.
That changed abruptly this week. A six-paragraph directive issued by the Office of the Chief Technology Officer stated that “in light of emergent AI-assisted threat models,” the open-source mandate was “suspended indefinitely.” No public announcement was made. No consultation occurred. The repositories remained online — but access was restricted to vetted personnel only.
That silence speaks volumes. This wasn’t a policy evolution. It was a lockdown.
The Bigger Picture
The reversal of NHS England’s open-source policy crystallizes a broader debate: should critical infrastructure code be shielded from potential threats, or openly shared to foster collaboration and accountability? The answer isn’t simple, because the risks and benefits are tightly entangled.
On one hand, publicly disclosed vulnerabilities can be exploited by malicious actors, and AI models like Mythos raise the concern that such weaknesses can now be identified and mapped at scale, handing attackers ready-made entry points. On the other hand, open-source software allows for peer review, testing, and improvement, often leading to more secure and effective code.
AI complicates the trade-off further. As models grow more sophisticated, they can analyze vast amounts of code, spot recurring patterns, and predict likely vulnerabilities. That capability is a valuable tool for security professionals, and an equally valuable one for anyone intent on misusing it.
The question, then, is not whether AI can identify vulnerabilities. It demonstrably can. The question is whether the mere possibility of exploitation should justify restricting open-source code, a choice with far-reaching implications for how critical infrastructure software is built and deployed.
Open Source vs. Classified Code
NHS England’s decision to restrict public access to its code raises fundamental questions about the nature of open-source software. If a government agency can lock down a software project on the strength of a potential AI threat, where does that leave the principles of transparency and collaboration that underpin open-source development?
Classified code, by its very nature, is opaque and inaccessible to the public. NHS England’s decision therefore effectively reverses the open-source mandate, turning public projects into classified ones. The shift has significant implications for the developers, researchers, and users of critical infrastructure software who now face the prospect of working with restricted code.
Compounding the problem, the decision was made without public consultation or review. NHS England cited AI-driven cyber threats as the reason for the reversal, but it has provided no evidence to support that claim, and it sought no input from parliament, independent experts, or the public, raising serious concerns about the accountability and transparency of the process.
That opacity is particularly troubling given the long-term stakes. Developers who invested time and effort in open-source software may now see their work withdrawn or relicensed without warning. In effect, the decision creates a culture of fear in which the mere possibility of AI exploitation becomes sufficient justification for restricting access to public code.
Mythos Isn’t Science Fiction — It’s Already Scanning
The threat isn’t hypothetical. Mythos, an AI model developed by a private cybersecurity firm and later leaked, can analyze public codebases to identify structural weaknesses. It doesn’t need insider access. It learns from patterns — how memory is allocated, how authentication flows are structured, how error handling is implemented. Feed it enough open-source health software, and it starts predicting where buffer overflows, SQL injections, or logic flaws might live.
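Mythos’s internals are not public, but the class of signal it reportedly exploits is mundane. The sketch below is purely illustrative, not drawn from Mythos or from any NHS codebase, and it assumes nothing beyond the Python standard library: it greps a source tree for one of the patterns named above, SQL queries assembled by string formatting, a textbook precursor to injection flaws.

```python
import re
from pathlib import Path

# Heuristic: a database execute() call whose query is built with an f-string,
# concatenation, or %-formatting -- the classic injection-prone shape.
SQL_SINK = re.compile(
    r"""\bexecute\s*\(\s*          # a call into a DB API
        (?:f["']                   # f-string query, or ...
        |["'][^"']*["']\s*[+%])    # string literal glued to data with + or %
    """,
    re.VERBOSE,
)

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, source line) for each suspicious match."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SQL_SINK.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in scan_tree("."):
        print(f"{path}:{lineno}: possible injection sink: {line}")
```

A real model generalizes far beyond a single regex, of course. The point is that the raw signals are cheap to extract from anything published openly, which is precisely what the internal risk assessment warns about.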
According to the internal risk assessment, Mythos was tested against a mirror of NHS public repositories in February 2026. Within 68 hours, it mapped potential exploit paths in three critical systems: one handling patient referrals, another managing prescription routing, and a third used in radiology reporting. None were actively breached — but the simulation showed how an attacker could chain minor flaws into full system access.
How AI Reverse-Engineers Vulnerabilities
- Scrapes public repositories for code using NHS-branded frameworks
- Trains on historical commits to detect rushed patches — a signal of past flaws
- Maps API endpoints and infers authentication logic from code comments
- Generates exploit prototypes based on known CVE patterns in similar systems
- Delivers attack blueprints in natural language, ready for human or bot execution
That last point is what keeps security teams awake. Mythos doesn’t just find bugs — it explains how to weaponize them. One test output reviewed by New Scientist included step-by-step instructions for bypassing two-factor authentication in a legacy NHS login module, referencing actual function names and configuration files.
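The second item on that list, mining commit history for hurried fixes, shows how little sophistication the cheapest signals require. The sketch below is a hypothetical illustration rather than anything from the leaked model: it flags commits whose messages read like emergency patches, the breadcrumbs that point a model toward code once fixed in haste.

```python
import subprocess

# Message fragments that often mark a hurried fix; an illustrative list only.
RUSH_MARKERS = ("hotfix", "urgent", "quick fix", "quickfix", "emergency", "temporary fix")

def rushed_commits(repo: str) -> list[tuple[str, str]]:
    """Return (short hash, subject line) for commits that look like rushed patches."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=%h %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for line in log.splitlines():
        sha, _, subject = line.partition(" ")
        if any(marker in subject.lower() for marker in RUSH_MARKERS):
            flagged.append((sha, subject))
    return flagged

if __name__ == "__main__":
    for sha, subject in rushed_commits("."):
        print(f"possible rushed patch {sha}: {subject}")
```

Every flagged commit marks a place where something was once broken and mended under pressure, which is exactly where a trained model would look first.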
The Irony Is Impossible to Ignore
The NHS built its software transparency policy to fight inefficiency and vendor lock-in. Now it is being dismantled to blunt a different kind of efficiency: the speed with which AI can turn openness into reconnaissance. The same openness that allowed small dev shops in Leeds or Bristol to improve local systems is now labeled a liability.
And here’s the deeper irony: the AI that poses the threat was trained on public data. Mythos didn’t break in — it was invited in, through open-access journals, GitHub, and technical forums. The tools meant to democratize innovation are now seen as backdoors.
Worse, the shutdown isn’t even guaranteed to work. Mythos and its variants have already ingested years of NHS code. Archiving the repositories now won’t erase what’s already in the model’s training data. Once learned, it can’t be unlearned — not by deleting source links.
So what’s really happening? This feels less like a security upgrade and more like a panic stop: a reflexive closing of the stable door after the horse has bolted.
Who Decides When Code Becomes Classified?
No minister signed off on this. No public consultation. The decision was made by the NHS CTO’s office under emergency provisions meant for active cyberattacks. But there was no breach. No intrusion. Just a simulated threat using a leaked AI model.
That’s a dangerous precedent. If a simulation is enough to override a transparency law, then any agency can classify anything by claiming a hypothetical AI risk. Imagine transport software pulled from view because an AI could model flaws in train scheduling, or energy grid code hidden because a language model might infer substation vulnerabilities.
And who audits these claims? The NHS hasn’t released the Mythos test results. It hasn’t allowed independent experts to verify the findings. The entire justification rests on internal assessments with no external validation.
“We’re being asked to trust that the threat is real, but we’re not allowed to see the evidence — that’s the opposite of transparency,” said Dr. Aris Thorne, senior researcher at the Alan Turing Institute, in a statement to New Scientist.
What This Means For You
If you’re a developer working on public-sector software, this shift changes the ground beneath you. Projects you assumed were open may now be walled off without warning. Contributions you made under open licenses could be relicensed or withdrawn. And the rationale? Not a court order, not a national emergency — just an AI simulation.
For builders in health tech, the message is chilling: transparency is conditional. It lasts only until someone claims an AI could misuse it. That means your code, your documentation, your GitHub history — none of it is safe from retroactive classification. Plan accordingly. Assume that any public-facing project in critical infrastructure could be pulled offline overnight.
The real question isn’t whether AI can exploit public code. It’s whether we’re going to let the mere possibility of that exploitation erase a decade of progress toward open, accountable government technology. If the answer is yes, then we’re not securing systems — we’re surrendering to fear.
Government Agencies’ Role in AI-Driven Cybersecurity
NHS England’s decision highlights the role government agencies must play in confronting AI-driven cybersecurity threats: balancing transparency and collaboration against the protection of critical infrastructure.
That balance demands nuance. Agencies should acknowledge the benefits of open-source software while addressing the genuine risks of AI exploitation, working with experts, developers, and users to deploy security measures that do not quietly abandon accountability.
Above all, agencies must be transparent about their own decision-making and give clear justifications for any restriction placed on public code. That means releasing test results, letting independent experts verify findings, and producing evidence for any claim of hypothetical AI risk.
The goal should be a culture of collaboration, transparency, and innovation rather than one of fear and secrecy. That is how critical infrastructure software gets built and deployed securely, accountably, and in the open.
Industry and Research Response
Several industry players have criticized NHS England’s decision, warning of its impact on transparency and collaboration in health tech. Some argue it creates a false sense of security, since it does nothing to address the underlying weaknesses an AI might exploit.
Companies such as Palantir and Microsoft have built AI-powered cybersecurity tools that can help identify and mitigate potential threats. These tools are not foolproof, however: their effectiveness depends on the complexity of the code under analysis and the quality of the model doing the analyzing.
Researchers at the Alan Turing Institute have also weighed in, calling the decision a step backward for transparency and collaboration in health tech. Dr. Aris Thorne, senior researcher at the institute, noted that it “opposes the very principles of open-source development, which rely on collaboration and peer review to ensure the security and effectiveness of critical infrastructure software.”
The debate over NHS England’s decision underscores the difficulty of AI-driven cybersecurity questions. As the field evolves, government agencies, companies, and researchers will have to work together on solutions that balance transparency and accountability against the protection of critical infrastructure.
Sources: New Scientist Tech, The Register


