Transforming a newly discovered software vulnerability into a working cyberattack used to take weeks, often months. Now, it takes minutes—and less than $1 of cloud compute time.
Key Takeaways
- Generative AI can weaponize zero-day vulnerabilities in under five minutes, according to IEEE Spectrum’s original report.
- Anthropic’s internal Project Glasswing demonstrated that AI can generate reliable exploit code from vulnerability descriptions alone.
- The cost of crafting an attack has collapsed from thousands of dollars to pocket change.
- Memory-safe programming languages like Rust and Go are emerging as the most effective long-term defense.
- The window for patching vulnerabilities is shrinking from weeks to hours—or less.
AI That Codes Like a Hacker
On April 12, 2026, engineers at Anthropic ran a controlled test. They fed a description of a recently discovered memory corruption bug—no exploit code, no hints—into an internal AI system called Project Glasswing. Four minutes and 37 seconds later, the model output a working exploit that achieved remote code execution.
The attack wasn’t theoretical. It worked on unpatched systems. And it cost $0.83 in cloud compute time.
This isn’t the first time AI has been used to generate exploit code. But it’s the first time it’s been done at this speed, reliability, and cost—without human intervention. The implications are immediate and alarming: the barrier to launching a cyberattack has effectively vanished.
Before 2023, turning a vulnerability into an exploit required deep expertise in reverse engineering, assembly language, and memory layout manipulation. Attackers had to manually map out stack frames, bypass address-space layout randomization (ASLR), and chain return-oriented programming (ROP) gadgets. It was slow, error-prone work. Now? You describe the bug in plain English. The AI writes the shellcode.
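The root of that entire vulnerability class is a single pattern: copying attacker-controlled data into a fixed-size buffer with no length check. Here is a minimal sketch of that pattern in Rust, where the unchecked copy must be wrapped in `unsafe` (the function name `unchecked_copy` is illustrative, not from any real codebase); in C, the equivalent `strcpy` compiles without complaint.

```rust
use std::ptr;

// Illustrative only: the unchecked-copy pattern behind stack and heap
// buffer overflows. If src.len() exceeds 16, this writes past the end
// of dst, corrupting adjacent memory -- the raw material for ROP chains
// and shellcode injection. Rust forces the `unsafe` keyword here, which
// is exactly what makes the pattern auditable.
fn unchecked_copy(dst: &mut [u8; 16], src: &[u8]) {
    unsafe {
        // No bounds check relating src.len() to dst's capacity.
        ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
}

fn main() {
    let mut buf = [0u8; 16];
    unchecked_copy(&mut buf, b"hello"); // fits: 5 bytes into 16
    assert_eq!(&buf[..5], b"hello");
    // unchecked_copy(&mut buf, &[0u8; 64]); // would be undefined behavior:
    //                                       // out-of-bounds write
    println!("copied without overflow");
}
```

The safe call works; the commented-out call is the one an exploit generator hunts for.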
The $1 Attack Changes Everything
Let that number sink in: $0.83. That’s less than a vending machine snack. Yet it’s now enough to launch a sophisticated cyberattack capable of breaching enterprise systems.
Traditionally, exploit development was outsourced to specialists or handled by elite teams within hacking collectives. Underground forums sold working exploits for $5,000 to $250,000, depending on the target. Zero-days in iOS or Windows could fetch six figures. The high cost acted as a natural damper on volume.
Now, any script kiddie with a stolen credit card and access to a powerful AI model can generate their own. No middlemen. No risk of scams. No need to understand heap spraying or return-oriented programming. Just prompt, compile, deploy.
How Project Glasswing Works
Project Glasswing wasn’t trained on public exploit databases or GitHub repositories full of hacking tools. Instead, Anthropic fine-tuned its model on internal datasets of vulnerability reports, compiler outputs, and memory layout simulations.
The AI doesn’t “know” it’s creating an exploit. It’s simply predicting the next sequence of code that fits the context—just like it would autocomplete a React component or a SQL query. But when the context is a buffer overflow in a C++ function, the “autocomplete” becomes a weapon.
What makes this especially dangerous is reproducibility. In repeated tests, the system generated working exploits for 78% of disclosed memory-safety vulnerabilities within 10 minutes. For integer overflows and use-after-free bugs, success rates exceeded 85%.
The Real Crisis: Patching Can’t Keep Up
Most organizations operate on patch cycles measured in weeks. Monthly security updates. Change approval boards. Regression testing. The slowness is by design.
But when an exploit can be generated in under five minutes, that timeline is laughable.
Consider the typical sequence: a researcher discovers a flaw, reports it through coordinated disclosure, the vendor issues a patch 30 to 60 days later. That window used to be manageable. Now, it’s an open invitation. If the vulnerability becomes public—even in a research paper—the AI can weaponize it before the first patch rolls out.
And public disclosures are increasing. In Q1 2026, MITRE recorded 9,400 new CVEs. At least 1,800 involved memory corruption in C or C++ code. That’s 1,800 new attack vectors—each one potentially exploitable within minutes of being documented.
- Median time to exploit generation via AI: 4.6 minutes
- Median cost per exploit: $0.91
- Success rate on memory-safety bugs: 78%
- Languages most frequently exploited: C, C++
- Most resilient languages: Rust, Go, Swift
Memory Safety Is No Longer Optional
For decades, security advocates have pushed for memory-safe languages. They warned about buffer overflows, dangling pointers, heap corruption. Developers shrugged. Performance mattered more. Control mattered more. “We know what we’re doing,” they said.
That era is over.
When exploit creation is automated and dirt cheap, the only real defense is eliminating the vulnerability class altogether. And that means abandoning C and C++ in new code.
Rust doesn’t eliminate all bugs. Neither does Go. But they eliminate entire categories of memory corruption flaws—the very ones AI exploits most easily. You can’t have a buffer overflow in a language that enforces bounds checks on every array access. You can’t have a use-after-free when the borrow checker refuses to let a reference outlive the data it points to.
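Both guarantees are easy to see in a few lines. This sketch shows Rust's runtime bounds checking (an out-of-range index yields `None` or a panic, never a read of adjacent memory) and, in the commented-out section, the borrow checker rejecting a dangling reference at compile time:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Bounds are enforced on every access: .get() returns None for an
    // out-of-range index instead of reading whatever happens to sit in
    // adjacent memory, and v[10] would panic rather than silently
    // corrupt anything.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(10), None);

    // Use-after-free does not compile. Uncommenting the block below
    // fails with error[E0597]: `x` does not live long enough.
    // let r;
    // {
    //     let x = 5;
    //     r = &x;
    // }
    // println!("{}", r);

    println!("out-of-range access rejected safely");
}
```

The C equivalent of the `v.get(10)` line is an out-of-bounds read that compiles cleanly and may return attacker-useful data.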
Google has already migrated 30% of Android’s core components to Rust. Microsoft is rewriting critical Windows drivers in the language. Amazon mandates memory-safe alternatives for new AWS services.
But migration is slow. Legacy systems run on millions of lines of C++. Rewriting them takes time, money, expertise. And every day that goes by, the risk compounds.
The False Promise of AI Defense
Some vendors are marketing AI-powered “automated defense” platforms that claim to detect and block AI-generated exploits in real time. They promise behavioral analysis, anomaly detection, instant patching.
They’re selling hope.
AI-generated exploits don’t behave unusually. They’re just code—tight, efficient, often smaller than human-written ones. They don’t trigger heuristics. They don’t call suspicious APIs. They exploit legitimate memory behavior in unintended ways. No anomaly. No signature. No detection.
And real-time patching? That’s a fantasy for most organizations. You can’t hot-swap a core library in a banking mainframe without testing. The very systems most at risk are the least agile.
The Bigger Picture: A Global Race Against Time
AI-powered exploit generation isn’t just a technical shift. It’s a systemic threat to digital infrastructure worldwide. Governments, cloud providers, and software vendors are scrambling to adapt—but they’re not starting from the same place.
In the U.S., the Cybersecurity and Infrastructure Security Agency (CISA) issued a directive in March 2026 requiring all federal agencies to inventory C and C++ code in public-facing systems. The goal: identify high-risk components and prioritize memory-safe rewrites. The deadline? 18 months. That may sound aggressive, but it’s not fast enough.
Meanwhile, the European Union is embedding memory safety requirements into its revised Cyber Resilience Act. Starting in 2027, any software sold in the EU that handles personal data or critical infrastructure must either be written in a memory-safe language or come with third-party verification that memory corruption risks are mitigated.
But enforcement is a challenge. China, for instance, continues to rely heavily on C++ in industrial control systems and telecommunications. Huawei and ZTE still ship firmware with unchecked pointer arithmetic. There’s no national migration plan. The risk of cascading failures—especially in energy or transportation—is growing.
Even open-source ecosystems are exposed. The Linux kernel remains mostly in C. While projects like KernelCare offer live patching, they can’t prevent exploits generated the same day a CVE is published. The Apache Foundation has started funding Rust ports of core web server components, but progress is slow.
We’re not just rewriting code. We’re rewriting decades of engineering culture.
Industry Response: Who’s Acting and Who’s Not
Not all companies are reacting the same way. The divide is clear: those with deep pockets and forward-looking security teams are moving fast. Everyone else is lagging.
Apple quietly began rewriting parts of macOS’s networking stack in Swift starting in 2025. Swift’s optional pointers and memory ownership model reduce exposure to memory corruption. By 2026, 12% of the kernel’s attack surface had been replaced. They haven’t announced it. They don’t need to.
Meta has invested in a custom static analysis tool powered by AI to detect memory-unsafe patterns in its C++ codebase. It’s been running in production since early 2025 and has flagged over 1,400 high-risk functions across Facebook, Instagram, and WhatsApp. Fixing them is ongoing.
Oracle, on the other hand, has made no public commitment to memory-safe migration. Java itself is memory-safe, but Oracle’s database and middleware products rely on native C code for performance. Over 70% of Oracle’s 2025 CVEs involved memory corruption flaws. Their patch cycle remains 60 days on average.
Smaller vendors are in an even tougher spot. Many rely on third-party consultants for security. But those consultants are now facing the same AI-powered threats. The feedback loop is tightening: more attacks, more panic, more demand for fixes—none of which can be delivered fast enough.
Even hardware companies aren’t immune. NVIDIA’s CUDA platform, critical for AI training, is built in C++. No Rust version exists. AMD is exploring a Rust-based alternative for future GPU compute stacks, but it’s not expected before 2028.
The market is bifurcating. One side is building durable, memory-safe systems. The other is burning through technical debt—and will pay for it in breaches.
What This Means For You
If you’re writing new code in C or C++, stop. There’s no technical justification left. Performance gaps have narrowed. Tooling is mature. The risk is no longer theoretical—it’s economic. Every line of unsafe code is a potential entry point that can be weaponized for less than a dollar.
For existing systems, prioritize inventory and isolation. Identify all components handling untrusted input. Wrap them in sandboxes. Enforce strict memory protections. Begin migration plans to memory-safe replacements. And assume that any disclosed vulnerability in your stack will be exploited within hours—not weeks.
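Part of that "assume hostile input" posture can be applied today, even before any migration: every component that touches untrusted data should enforce explicit size limits and use only checked operations. A minimal sketch (the names `MAX_MSG` and `parse_message` are illustrative, not from any real API):

```rust
// Defensive handling of untrusted input: reject oversized payloads
// before touching the data, and validate bytes with checked library
// routines instead of raw copies or unchecked casts.
const MAX_MSG: usize = 1024; // illustrative limit for this sketch

fn parse_message(raw: &[u8]) -> Result<String, &'static str> {
    if raw.len() > MAX_MSG {
        return Err("message too large"); // fail closed, before parsing
    }
    // from_utf8 validates every byte; invalid input is an error value,
    // not memory corruption.
    std::str::from_utf8(raw)
        .map(|s| s.to_owned())
        .map_err(|_| "invalid utf-8")
}

fn main() {
    assert!(parse_message(b"hello").is_ok());
    assert!(parse_message(&[0u8; 2048]).is_err()); // over the limit
    assert!(parse_message(&[0xff, 0xfe]).is_err()); // invalid utf-8
    println!("untrusted input checks passed");
}
```

None of this requires rewriting the component; it narrows what an automatically generated exploit has to work with.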
We’ve spent decades building on memory-unsafe foundations. Now, we’re paying for it. The AI didn’t create the problem. It just exposed how fragile those foundations really are.
Sources: IEEE Spectrum, The Register


