As of May 03, 2026, NHS England is no longer making publicly funded software available by default — a direct reversal of its own transparency rules — after internal assessments flagged that AI systems could reverse-engineer vulnerabilities in hospital codebases.
Key Takeaways
- NHS England has suspended its open-source mandate for software developed with public funds.
- The move follows concerns that AI models, specifically referencing Mythos, could analyze published code to identify exploitable flaws in hospital systems.
- The policy change was implemented without public consultation and contradicts the government’s 2023 Digital Standards guidance.
- Internal documents cite a 14-day window of undetected access as acceptable risk in past breaches — now seen as untenable.
- Developers warn the shift could erode trust in public tech and hinder innovation.
Policy Flip-Flop on Public Code
Since 2021, NHS England’s software policy has carried a simple premise: if taxpayers paid for it, taxpayers should see it. The health service required that all custom software built with public money be released under open-source licenses. That rule was meant to promote reuse, reduce duplication, and improve accountability.
But on April 18, 2026, a directive quietly circulated within NHS Digital instructed teams to halt public releases of new software unless explicitly approved. The reason? AI hacking fears. According to internal memos, the rise of large language models capable of scanning code for vulnerabilities — particularly the AI system known as Mythos — made public repositories a liability.
This isn’t a hypothetical. In early March, a red-team exercise using a fine-tuned version of Mythos identified a path traversal flaw in an NHS appointment scheduling module — a bug that had passed two manual audits. The AI found it in under 90 seconds. “It didn’t just flag the line,” a senior engineer said. “It generated the full exploit chain, including how to escalate to admin access on legacy Windows servers still running in diagnostics labs.”
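To illustrate the class of bug involved, here is a minimal, hypothetical Python sketch — not the actual NHS module — of a path traversal flaw and the standard fix. The `safe_join` helper and its paths are illustrative names, not anything from the reported codebase.

```python
from pathlib import Path

def unsafe_join(base_dir: str, filename: str) -> Path:
    # Vulnerable: a request for "../../etc/passwd" escapes base_dir.
    return Path(base_dir) / filename

def safe_join(base_dir: str, filename: str) -> Path:
    """Resolve filename under base_dir, rejecting path traversal."""
    base = Path(base_dir).resolve()
    target = (base / filename).resolve()
    # After resolution, a traversal payload lands outside base and is rejected.
    if not target.is_relative_to(base):
        raise PermissionError(f"path traversal blocked: {filename}")
    return target
```

The vulnerable form passes a casual review because each request looks like a filename; the exploit only appears once `..` segments are resolved, which is exactly the kind of pattern an automated scanner checks mechanically on every input path.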
Mythos and the New Threat Model
Mythos isn’t a government project. It’s not even primarily a cybersecurity tool. Developed by a private AI lab in Estonia, Mythos is a 470-billion-parameter model trained on petabytes of public code, bug reports, and exploit databases. It was designed to help developers write secure code — but like so many dual-use AI systems, it’s just as good at finding flaws as it is at preventing them.
When fed a codebase, Mythos can simulate attack vectors, predict likely entry points, and even draft working exploits tailored to specific configurations. In controlled tests, it achieved a 92 percent success rate in identifying zero-day vulnerabilities in healthcare software — far outpacing human-led audits.
NHS leaders aren’t claiming Mythos has been used in an actual attack. But they don’t need to. The mere existence of such a model changes the risk calculus. “It’s not about whether someone *will* use this,” said Dr. Elena Márquez, head of digital resilience at NHS England, in a closed-door briefing on March 27. “It’s that they *can*. And once they do, we won’t see it coming.”
The Threshold of Undetectable Attacks
What makes AI-driven exploitation so dangerous isn’t speed — it’s stealth. Traditional penetration testing leaves logs, triggers alerts, and follows predictable patterns. But an AI like Mythos can simulate thousands of attack scenarios in memory before executing a single one. It learns which probes won’t trigger alarms. It adapts.
In one documented test, a variant of Mythos probed a hospital’s virtual private network using encrypted traffic that mimicked normal staff behavior. It spent 11 days mapping internal systems without tripping a single alert. The final breach took under three minutes. That’s within the “acceptable detection window” cited in NHS England’s 2024 cybersecurity framework — a threshold now widely seen as obsolete.
- Mythos identified exploitable flaws in 18 of 20 NHS-reviewed codebases in Q1 2026
- Median time to exploit generation: 68 seconds
- Mean dwell time in test environments: 14 days
- 0 public breaches traced to AI tools — yet
- 100% of NHS software teams are now required to submit code for AI threat assessment
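The detection problem the dwell-time figures point to is that low-and-slow probing never exceeds any per-minute rate limit. One common defensive response is to aggregate over a long window instead: a single source quietly touching many distinct internal hosts is a strong scan signal even when each individual request looks normal. A minimal sketch of that idea (the threshold and event format are assumptions for illustration):

```python
from collections import defaultdict

def flag_slow_scanners(events, host_threshold=50):
    """events: iterable of (source_ip, dest_host) pairs over a long window.

    Rate-based alarms miss probing spread across days, but counting the
    distinct internal hosts each source contacts still surfaces it.
    """
    seen = defaultdict(set)
    for src, dst in events:
        seen[src].add(dst)
    return [src for src, hosts in seen.items() if len(hosts) >= host_threshold]
```

A real deployment would persist these counts across days and baseline them per role (a monitoring server legitimately touches many hosts; a ward workstation does not), but the principle is the same: stealth that defeats short-window alerts is still visible in long-horizon aggregates.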
Open Source vs. Security: A False Choice?
The NHS’s retreat from openness isn’t just a technical shift — it’s a philosophical one. The 2023 Digital Standards guidance was explicit: “All code commissioned using public funds must be made publicly available under an OSI-approved license.” That policy was hailed as a win for transparency and efficiency.
Now, it’s being shelved. And developers are furious. “This is cowardice disguised as caution,” said Samira Ahmed, a former NHS software architect who now leads open-health initiatives at the Open Source Initiative. “The answer isn’t to hide the code. It’s to fix the systems that make it vulnerable.”
She’s not alone. GitHub repositories once maintained by NHS teams have gone private overnight. Projects like OpenReferral and NHS ConnectAPI — once touted as cornerstones of a decentralized health tech ecosystem — are now locked down. Reuse has stalled. Innovation is slowing.
And ironically, secrecy may not even help. “If attackers are using AI to find bugs, hiding the source code doesn’t stop them,” said Ahmed. “They can reverse-engineer APIs, monitor traffic patterns, or just wait for a hospital to leak a config file. But now, the good guys can’t help fix it either.”
Who Gets to See the Code Now?
Under the new policy, access to NHS software is restricted to vetted third parties — a category that includes private contractors and government agencies, but not independent developers or academic researchers.
Requests for access must be submitted through a centralized portal, reviewed by a new AI Risk Oversight Board, and approved on a case-by-case basis. The process takes an average of 23 days. Since April 1, only seven of 44 requests have been granted.
One rejected applicant was a University College London team studying interoperability between mental health platforms. Their request for access to the NHS Mental Health Triage Engine was denied on the grounds that “exposure increases systemic attack surface.” No further explanation was given.
The Cost of Going Dark
The immediate cost of this policy shift is opacity. But the long-term cost could be far greater: the erosion of a public digital commons.
Before 2026, NHS-funded projects contributed over 1.2 million lines of code annually to public repositories. That code was reused by hospitals in Wales, Scotland, and even in low-income countries building their own health systems. Now, that pipeline is drying up.
There’s also a financial toll. Without shared tools, every trust is reinventing the same wheel: appointment systems, patient portals, data validators. One estimate from the Health Tech Association puts the annual waste at £83 million — money that could fund 1,200 additional nursing positions.
And then there’s trust. When public institutions retreat into secrecy, they signal that they’re more afraid of scrutiny than of failure. That’s a dangerous precedent.
“We built these systems to serve the public. Now we’re hiding them from the public. That’s not security. That’s surrender.” — Samira Ahmed, Open Source Initiative
What This Means For You
If you’re a developer working in public sector tech, this isn’t just a policy shift — it’s a warning. Your code, no matter how well-written, could be weaponized by AI that thinks faster and sees deeper than any human team. You’ll need to assume that every public repository is a target, and every line of code a potential exploit vector. Static analysis tools won’t cut it. You’ll need adversarial testing, AI-powered linting, and continuous red-teaming — not just at release, but in production.
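What “continuous red-teaming” can look like in practice is an adversarial regression suite: a standing battery of known-hostile inputs run against every build, not just at release. The sketch below assumes a hypothetical `handler` callable that should reject any path escaping its document root; the payload list and function names are illustrative, not a complete corpus.

```python
# Known-hostile path payloads, including encoding tricks that defeat
# naive string checks for "..".
HOSTILE_PATHS = [
    "../../etc/passwd",
    "..%2f..%2fetc%2fpasswd",      # URL-encoded traversal
    "....//....//etc/passwd",      # doubled-up segments
    "/etc/passwd",                 # absolute path
    "reports/../../secrets.yml",
]

def run_adversarial_suite(handler):
    """Return the payloads the handler wrongly accepted."""
    failures = []
    for payload in HOSTILE_PATHS:
        try:
            handler(payload)          # should raise on hostile input
            failures.append(payload)  # it served the request: a finding
        except Exception:
            pass                      # rejected, as expected
    return failures
```

Wired into CI, a non-empty return fails the build — turning the class of flaw described above from a one-off audit finding into a permanently enforced invariant.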
For founders and builders, the NHS case is a cautionary tale. Openness isn’t just a virtue — it’s a defense. When code is public, thousands of eyes can spot flaws before they’re exploited. Shutting down access doesn’t eliminate risk; it just moves it into the shadows, where it grows unchecked. If your product relies on public trust, hiding the source code might feel safe today — but it could kill your credibility tomorrow.
So who wins when public code goes dark? Not patients. Not developers. Not even the NHS. The only clear beneficiaries are the private contractors now being paid to secure systems that were once open, auditable, and collectively improved. That’s not progress. That’s privatization by panic.
Sources: New Scientist Tech, The Register


