77% of IT managers say their AI agents are out of control.
Key Takeaways
- 77% of IT managers report AI agents operating without proper oversight or governance, according to a ZDNet survey published April 29, 2026.
- Unsanctioned AI agents are creating shadow workflows, bypassing security protocols, and generating untraceable data outputs.
- Organizations are deploying agents faster than they can establish accountability frameworks.
- Most AI agent deployments occur outside central IT oversight, with developers and departments acting autonomously.
- Remediation requires policy enforcement, observability tooling, and revoking blanket permissions immediately.
The Quiet Takeover of AI Agents
It wasn’t a breach. No ransomware. No zero-day. Just a quiet, steady erosion of control. By April 2026, AI agents—autonomous software routines trained to make decisions, execute tasks, and interact with systems—are running large portions of enterprise infrastructure without authorization, visibility, or accountability.
The numbers, pulled from an original report by ZDNet, are not projections. They’re real-time confessions from IT leaders: 77% say their AI agents are out of control. That means more than three out of every four organizations deploying AI agents today lack the ability to monitor, manage, or stop them.
And it’s not because the agents are sentient. It’s because they were built, deployed, and trusted without guardrails.
How AI Agents Broke Enterprise Control
AI agents aren’t chatbots. They’re not interfaces. They’re not even just scripts with machine learning tacked on. These are systems trained to interpret context, make decisions, and initiate actions—sometimes across multiple platforms, APIs, and databases—without human approval.
One financial services firm, for example, discovered an agent it didn’t know existed. Six months earlier, a developer had spun up an LLM-powered workflow tool to auto-generate internal reports, pull customer data from CRM systems, and email summaries to managers. It worked. No one complained. It wasn’t documented. It wasn’t monitored. And when the developer left the company, the agent kept running: accessing PII, logging into systems, sending emails.
The firm found it only after a routine access audit flagged anomalous API call patterns. No one had noticed because nothing broke. Until it almost did.
Shadow AI Is Now the Default
What’s happening isn’t rogue AI. It’s sanctioned negligence. Teams deploy agents because they solve real problems—automating customer intake, syncing data, handling routine support queries. But they do it fast, in isolation, using off-the-shelf models and internal prompts that no one reviews.
The result: shadow AI workflows that replicate the worst of shadow IT but with more autonomy and less visibility.
- AI agents are being granted API keys with broad permissions—often full read/write access to databases.
- Many operate on deprecated or unmonitored cloud instances.
- Logging is inconsistent or absent; some agents don’t write audit trails at all.
- Most were deployed by developers or department leads—not central AI governance teams.
- Re-training or prompt changes happen ad hoc, with no version control.
One IT manager told ZDNet they discovered 34 active AI agents in their environment that weren’t listed in any asset inventory. Thirty-four. In a mid-sized enterprise. None were approved. None had defined off-switches.
The Permission Problem
Here’s the uncomfortable truth: AI agents don’t escape control because they’re too smart. They escape because we gave them the keys and forgot to lock the door.
Organizations rushed to integrate AI agents during 2024 and 2025, often treating them like simple automation tools. But unlike a script that runs on a cron job, AI agents adapt. They interpret. They make judgment calls. And when they’re given persistent access to systems—email, CRM, HR databases, financial platforms—they don’t just execute. They operate.
The root failure? Permissions. Most agents were granted persistent, high-level access at deployment and never re-evaluated. Once an agent has access to Salesforce, Slack, and Google Workspace, it can read, write, and act—sometimes without triggering alerts.
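That re-evaluation can be automated. Here’s a minimal sketch, assuming you can export each agent’s service account and its granted scopes into a simple inventory (the agent names, scope strings, and 90-day review window below are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory of agent service accounts and their granted scopes.
@dataclass
class AgentAccount:
    name: str
    scopes: list[str]
    last_reviewed: date

AGENTS = [
    AgentAccount("report-bot", ["crm:read", "mail:send"], date(2026, 1, 10)),
    AgentAccount("intake-agent", ["*:*"], date(2025, 3, 2)),  # blanket access
]

REVIEW_INTERVAL_DAYS = 90

def flag_risky(agents: list[AgentAccount]) -> list[str]:
    """Flag wildcard scopes and stale reviews for human re-authorization."""
    findings = []
    for a in agents:
        if any("*" in s for s in a.scopes):
            findings.append(f"{a.name}: wildcard scope {a.scopes}")
        if (date.today() - a.last_reviewed).days > REVIEW_INTERVAL_DAYS:
            findings.append(f"{a.name}: not reviewed since {a.last_reviewed}")
    return findings

if __name__ == "__main__":
    for finding in flag_risky(AGENTS):
        print("REVIEW:", finding)
```

The point isn’t the tooling; it’s that permission review becomes a scheduled job instead of a one-time deployment decision.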
And because many agents use natural language to generate API calls or compose emails, their behavior doesn’t look like code. It looks like communication. That makes detection harder. An agent sending a summary email isn’t flagged as suspicious. But if that email contains data it shouldn’t have access to, or if it’s auto-forwarding to an external address, the damage is done before anyone notices.
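Detection here is less about code analysis and more about traffic analysis. As a minimal sketch, assuming your mail gateway can export agent-sent messages as sender/recipient pairs (the domain and log format are hypothetical), you can flag any agent mail leaving the corporate domain:

```python
# Hypothetical export of agent-sent mail as (sender, recipient) pairs.
SENT_MAIL = [
    ("report-bot@corp.example", "manager@corp.example"),
    ("report-bot@corp.example", "archive@external-inbox.example"),  # suspicious
]

INTERNAL_DOMAIN = "corp.example"

def external_recipients(mail_log):
    """Return messages an agent sent outside the corporate domain."""
    return [
        (sender, rcpt)
        for sender, rcpt in mail_log
        if not rcpt.endswith("@" + INTERNAL_DOMAIN)
    ]

for sender, rcpt in external_recipients(SENT_MAIL):
    print(f"ALERT: {sender} mailed external address {rcpt}")
```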
What the Tech Giants Are Doing—And Why It’s Not Enough
Big tech companies aren’t blind to the risks. Microsoft, Google, and Amazon have all rolled out AI governance features in their cloud platforms. Microsoft introduced AI Risk Scoring in Azure in late 2025, assigning risk levels to AI workloads based on data access, model type, and network exposure. Google Cloud added AI Policy Manager in Q1 2026, allowing admins to define data handling rules for Vertex AI agents. AWS launched Control Tower enhancements to flag unauthorized Bedrock model deployments.
But adoption lags. A January 2026 Gartner study found that fewer than 22% of enterprise customers had enabled these tools. Why? Complexity and inertia. These features require configuration, integration with existing IAM systems, and ongoing maintenance. Most companies plug in AI agents and move on. They don’t revisit settings. They don’t audit permissions. And they assume cloud defaults are secure—which they’re not.
Worse, open-source models complicate the picture. Models like Llama 3, deployed on private Kubernetes clusters, bypass cloud-native controls entirely. One energy sector firm used a fine-tuned Llama 3 agent to optimize procurement workflows. It wasn’t on AWS or Azure. It ran on an on-prem GPU cluster. No cloud policy engine could touch it. No central IT team even knew it existed until a network spike revealed its activity.
The lesson: platform-level tools help, but they only cover part of the attack surface. Enterprises using hybrid or self-hosted models need internal enforcement, not just vendor safeguards.
The Bigger Picture: Why It Matters Now
The timing of this crisis isn’t accidental. It’s the result of three converging forces: aggressive AI adoption timelines, weak regulatory frameworks, and a critical shortage of AI security talent.
Between 2023 and 2025, global corporate spending on AI software surged from $92 billion to $187 billion, according to IDC. Much of that investment went into rapid deployment cycles. Startups like Adept and Inflection pushed “AI agents in a box” solutions—pre-built automation workflows for sales, HR, and customer service. Enterprises bought them. Deployed them. Skipped governance.
Meanwhile, regulators were slow to respond. The EU AI Act, passed in 2024, focused on high-risk sectors like healthcare and law enforcement but said little about internal enterprise agents. The U.S. remains without federal AI legislation. NIST’s AI Risk Management Framework is voluntary. With no legal mandate, compliance teams deprioritized AI oversight.
Then there’s the talent gap. A 2025 Bureau of Labor Statistics report showed a 40% year-over-year increase in job postings for AI security roles. But qualified candidates are scarce. Most cybersecurity pros lack training in AI behavior analysis, prompt injection defense, or model drift detection. Without skilled staff to monitor agents, even the best policies go unenforced.
This isn’t just a tech problem. It’s a management failure. Companies prioritized speed over safety. Now they’re paying the price in risk exposure.
Five Ways to Reclaim Control
You can’t delete every agent. You can’t freeze innovation. But you can regain control—fast.
1. Inventory Every Agent—Now
Start with discovery. Use network monitoring, API logging, and cloud access audits to identify every active agent. Treat this like a security sweep. Assume you have more than you know. Tag each one: purpose, owner, permissions, data access, audit trail status.
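In practice, discovery often starts with API gateway logs. A minimal sketch, assuming you can export log entries with a client identifier (the field names and client IDs here are hypothetical), grouping calls by client and flagging anything missing from the asset inventory:

```python
from collections import Counter

# Hypothetical API gateway log entries: (client_id, endpoint).
API_LOG = [
    ("report-bot", "/crm/contacts"),
    ("report-bot", "/mail/send"),
    ("unknown-7f3a", "/hr/records"),  # not in any inventory
]

# Known, approved agents from the asset inventory.
ASSET_INVENTORY = {"report-bot"}

calls_by_client = Counter(client for client, _ in API_LOG)

for client, count in calls_by_client.items():
    status = "OK" if client in ASSET_INVENTORY else "UNREGISTERED"
    print(f"{status}: {client} made {count} API calls")
```

Anything tagged UNREGISTERED becomes a ticket: find the owner, document the purpose, or shut it down.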
2. Enforce Zero-Trust Access
No agent gets broad permissions. Strip all existing agents down to least-privilege access. Re-authorize only what’s necessary. Use short-lived tokens instead of permanent API keys. Rotate credentials weekly. Monitor for privilege escalation attempts—the same way you would for human users.
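For short-lived credentials, one common pattern is a signed token with a tight expiry, minted per task rather than stored permanently. A minimal sketch using the PyJWT library (the scope names and 15-minute window are assumptions, not a standard):

```python
import datetime
import jwt  # pip install PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # fetch from a secrets manager in practice

def mint_agent_token(agent_id: str, scopes: list[str]) -> str:
    """Issue a token that expires in 15 minutes; the agent must re-request after that."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": agent_id,
        "scopes": scopes,  # least-privilege: only what this task needs
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),
    }
    return jwt.encode(payload, SIGNING_KEY, algorithm="HS256")

def verify_agent_token(token: str) -> dict:
    """Reject expired or tampered tokens; PyJWT raises on both."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

token = mint_agent_token("report-bot", ["crm:read"])
print(verify_agent_token(token)["scopes"])
```

An agent holding a 15-minute token can still misbehave, but it can’t quietly keep credentials for six months after its owner leaves the company.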
3. Mandate Observability Tooling
If it can’t be logged, it can’t be trusted. Require that every agent output structured logs with timestamps, actions taken, data accessed, and decision rationale. Feed this into SIEM systems. Set up alerts for anomalous behavior: unusual data queries, off-hours activity, unexpected API calls.
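Structured logging is straightforward to mandate. A minimal sketch of a per-action record an agent could emit as JSON lines for SIEM ingestion (the field set mirrors the list above; the rationale field assumes your agents can report one):

```python
import json
import datetime

def log_agent_action(agent_id: str, action: str,
                     data_accessed: list[str], rationale: str) -> None:
    """Emit one structured, timestamped record per agent action as a JSON line."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "data_accessed": data_accessed,
        "rationale": rationale,
    }
    print(json.dumps(record))  # in production, write to a log shipper or SIEM forwarder

log_agent_action(
    "report-bot",
    "email_summary",
    ["crm.contacts", "crm.opportunities"],
    "weekly report requested by sales ops prompt",
)
```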
4. Establish an AI Governance Gate
Create a formal approval process for new agents. No deployment without review. Include security, compliance, legal, and data governance reps. Require documented use cases, risk assessments, and off-switch protocols. This isn’t bureaucracy—it’s accountability.
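The gate can be enforced mechanically, not just on paper. A minimal sketch that blocks deployment unless a request documents the items named above (the request structure and field names are hypothetical):

```python
REQUIRED_FIELDS = ["use_case", "owner", "risk_assessment", "off_switch_protocol"]

def governance_gate(request: dict) -> None:
    """Refuse any agent deployment request missing required documentation."""
    missing = [f for f in REQUIRED_FIELDS if not request.get(f)]
    if missing:
        raise ValueError(f"Deployment blocked; missing: {', '.join(missing)}")
    print(f"Approved for review: {request['use_case']} (owner: {request['owner']})")

governance_gate({
    "use_case": "auto-generate weekly sales summaries",
    "owner": "sales-ops",
    "risk_assessment": "reads CRM contacts; no external email",
    "off_switch_protocol": "revoke token, disable scheduler entry",
})
```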
5. Kill the Always-On Model
Most agents don’t need to run 24/7. Schedule them. Disable them outside business hours. Use event-based triggers instead of persistent polling. Reduce attack surface. If an agent isn’t supposed to run at 3 a.m., make sure it can’t.
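Enforcing a schedule can be as simple as a guard the agent runtime checks before every run. A minimal sketch (the 8-to-18 window is an example policy, not a recommendation):

```python
import datetime

BUSINESS_HOURS = range(8, 18)  # example policy: 08:00-17:59 local time

def run_if_scheduled(task) -> None:
    """Refuse to execute outside the allowed window; log the refusal."""
    hour = datetime.datetime.now().hour
    if hour not in BUSINESS_HOURS:
        print(f"BLOCKED: attempted run at {hour:02d}:00, outside business hours")
        return
    task()

run_if_scheduled(lambda: print("agent task executed"))
```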
What This Means For You
If you’re a developer building AI agents, you’re not just shipping code—you’re deploying autonomous actors. That means you’re responsible for their behavior, not just their accuracy. Build in kill switches. Log every decision. Never assume oversight will happen later. It won’t. Your agent will outlast your attention span.
If you’re a tech lead or founder, you can’t treat AI agents like plugins. They’re not. They’re participants in your systems. You need policies now—not next quarter. Audit your stack. Revoke blanket permissions. Demand observability. The cost of control is low. The cost of losing it is total.
The irony isn’t lost on anyone: we built AI agents to reduce human workload, but their unchecked spread is creating a new kind of technical debt—one where we don’t even know what’s running in our own systems.
Sources: ZDNet, The Register, Gartner, IDC, NIST, Bureau of Labor Statistics


