
APRA Flags AI Control Gaps in Finance

Australia’s financial regulator warns banks lack governance for AI agents, citing risky vendor reliance and weak oversight. Full breakdown of the May 2026 warning.

As of May 02, 2026, 100% of large financial institutions reviewed by the Australian Prudential Regulation Authority were using AI agents in production environments — yet none had fully mature governance frameworks to manage associated risks.

Key Takeaways

  • APRA’s late-2025 review found universal AI adoption across major banks and superannuation trustees, but fragmented risk controls and inconsistent oversight.
  • Boards showed strong interest in AI for productivity, yet relied heavily on vendor summaries instead of direct technical scrutiny, leaving them blind to unpredictable model behavior.
  • Regulators identified critical gaps in monitoring, change management, decommissioning, and named-person ownership of AI systems, especially for high-risk decisions.
  • Single-vendor dependency was common, with few institutions able to demonstrate an exit strategy or substitution plan for AI suppliers.
  • AI is introducing new cybersecurity threats, including prompt injection and insecure integrations, while identity and access controls fail to account for non-human actors.

AI Is Already Running Core Operations — Without Oversight

The Australian Prudential Regulation Authority didn’t need to look far to find AI in action. By the end of 2025, every large financial institution it reviewed had already deployed AI agents in live operations. These weren’t prototypes. They were handling real-time loan applications, automating software engineering tasks, routing customer interactions, and triaging insurance claims.

And there’s the problem: the systems were already embedded, but the governance hadn’t kept pace.

APRA’s targeted review revealed that while boards were enthusiastic about AI’s potential to boost customer experience and cut costs, they were often making strategic decisions based on vendor presentations — glossy overviews that obscured technical limitations and failure modes. That reliance created a dangerous information asymmetry. Executives talked about innovation, but couldn’t answer basic questions about how models would behave under stress or what would happen if an AI agent went rogue.

One finding stood out: institutions were treating AI risk like any other IT risk. They applied the same checklists used for database upgrades or firewall changes. But AI agents don’t behave like static code. They adapt, interact, and fail in novel ways. A model trained on skewed data might approve loans unfairly. An autonomous workflow could escalate privileges without human approval. And if no one owns the instance, no one takes responsibility when it breaks.

Boards Are Out of Their Depth — And That’s a Risk

APRA didn’t mince words: boards need to understand AI well enough to set coherent strategy and oversight. That’s not a call for every director to learn Python. It’s a demand for clarity. If an institution claims its risk appetite doesn’t allow for unexplainable decisions in lending, but then deploys a black-box AI to process applications, that’s a misalignment.

Yet the review showed many boards hadn’t established that linkage. They greenlit AI projects based on promised efficiency gains, but hadn’t defined what level of error or unpredictability they’d tolerate. Worse, they hadn’t mandated procedures for when things go wrong.

There’s a quiet irony here. The same executives who demanded war games and stress tests for financial models during the 2008 crisis are now approving AI deployments without requiring similar resilience planning. An AI failure in fraud detection might let scams slip through. One in loan processing could trigger regulatory penalties or reputational damage. And if the system is opaque, debugging takes hours — or days — when minutes matter.

The Missing Inventory

One glaring gap: no institution had a complete inventory of its AI tools. That’s not a minor oversight. It’s a governance failure with teeth.

How do you monitor what you can’t count? How do you assign ownership when no one knows which team owns which agent? APRA flagged this repeatedly. Without a centralized register listing every AI instance, its purpose, risk level, and human owner, institutions are flying blind.
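What would such a register look like? A minimal sketch in Python, assuming a simple in-memory structure; the field names are illustrative, not a schema APRA prescribes:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"          # e.g., credit decisions, claims triage


@dataclass
class AIInstance:
    """One row in a central AI register. All field names are illustrative."""
    instance_id: str            # unique handle, e.g. "loan-triage-agent-v3"
    purpose: str                # plain-language description of what it does
    risk_tier: RiskTier         # drives review cadence and approval gates
    owner: str                  # a named person, not a team alias
    vendor: str | None = None   # upstream supplier, if any
    dependencies: list[str] = field(default_factory=list)  # APIs, models, libs


register = [
    AIInstance(
        instance_id="claims-triage-01",
        purpose="Routes incoming insurance claims by complexity",
        risk_tier=RiskTier.HIGH,
        owner="jane.doe@example.com",
        vendor="ExampleVendor",
        dependencies=["vendor-llm-api", "internal-claims-db"],
    ),
]

# Basic governance questions fall out for free, e.g. "what breaks if this vendor exits?"
exposed = [e.instance_id for e in register if e.vendor == "ExampleVendor"]
```

Even a structure this simple answers the questions APRA kept asking: who owns it, what it touches, and how risky it is.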

This isn’t theoretical. If a model starts generating faulty code in production, and no one knows where else that code is deployed, the blast radius grows. If a vendor shuts down an API an AI agent depends on, and that dependency wasn’t documented, systems could collapse without warning.

And it’s not just internal tools. APRA noted that AI can hide in upstream dependencies — third-party libraries, cloud services, even open-source frameworks. Institutions thought they were only using AI in customer chatbots, but discovery revealed AI-assisted components in logging systems, monitoring tools, and CI/CD pipelines they didn’t even know were active.

Cybersecurity Is Now a Non-Human Problem

AI isn’t just changing how banks operate. It’s rewriting the threat model.

APRA pointed to new attack vectors like prompt injection and insecure integrations — vulnerabilities that didn’t exist when most current security policies were written. A maliciously crafted input could trick an AI agent into revealing sensitive data or executing unauthorized commands. And because these agents often have access to databases, APIs, and internal systems, the stakes are high.
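One widely discussed mitigation, sketched below under the assumption that agent tool calls are mediated by application code: treat the model’s proposed action as untrusted data and validate it against an explicit allowlist before anything runs. The action names here are hypothetical.

```python
# Hypothetical guard: the agent's proposed action is data, not a command.
ALLOWED_ACTIONS = {
    "lookup_balance": {"account_id"},          # read-only
    "flag_for_review": {"case_id", "reason"},  # escalates to a human
}
# Note: anything that moves money or changes access is deliberately absent.


def execute_guarded(action: str, args: dict) -> str:
    """Reject any tool call the policy does not explicitly permit."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action not allowlisted: {action!r}")
    unexpected = set(args) - ALLOWED_ACTIONS[action]
    if unexpected:
        raise ValueError(f"Unexpected arguments: {unexpected}")
    # ... dispatch to the real, narrowly scoped implementation here ...
    return f"executed {action}"
```

The design point: the model can only ever propose actions; a narrow, hand-written policy decides what actually executes.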

But identity and access management hasn’t kept pace. Most IAM policies are built for humans: passwords, MFA, role-based access. They don’t account for AI agents that need to authenticate, act, and log decisions — without being treated like a person.

Some institutions are still assigning AI workflows to service accounts meant for batch jobs. That means no granular permissions, no behavioral monitoring, no way to distinguish between a legitimate action and a hijacked agent. And when agents are granted privileged access — say, to deploy code or modify configurations — a single breach could cascade through the entire development pipeline.

AI-Generated Code Is Flooding Change Controls

The volume of AI-assisted software development is overwhelming traditional change and release processes. One bank reported that over 40% of pull requests in Q4 2025 included code partially or fully generated by AI tools. The velocity is impressive. The oversight? Lacking.

APRA stressed that entities must apply controls to agentic workflows — including configuration management, patching, and security testing of AI-generated code. But in practice, many teams are bypassing review gates. Developers paste AI output, run basic tests, and merge. Static analysis tools aren’t tuned for AI-generated patterns. Security scanners miss logic flaws that only manifest in context.
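A merge gate for this doesn’t have to be elaborate. Below is a minimal sketch of one possible control: a CI step that blocks AI-assisted commits lacking a named human reviewer. The "AI-Assisted:" trailer is a convention invented here for illustration; "Reviewed-by:" is a common git trailer.

```python
# Minimal sketch of a merge gate: block changes that declare AI assistance
# but lack a recorded human reviewer. The "AI-Assisted:" trailer is an
# invented convention, not a standard.
import subprocess
import sys


def commit_messages(rev_range: str) -> list[str]:
    """Return full commit messages (including trailers) for a revision range."""
    out = subprocess.run(
        ["git", "log", "--format=%B%x00", rev_range],
        capture_output=True, text=True, check=True,
    )
    return [m for m in out.stdout.split("\x00") if m.strip()]


def gate(rev_range: str = "origin/main..HEAD") -> int:
    for msg in commit_messages(rev_range):
        ai_assisted = "AI-Assisted: true" in msg  # hypothetical trailer
        reviewed = "Reviewed-by:" in msg          # common git trailer
        if ai_assisted and not reviewed:
            print("Blocked: AI-assisted commit without a named reviewer.")
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(gate())
```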

And because the code often lacks clear authorship, no one feels responsible for it. It’s not quite human-written. It’s not quite machine-written. It’s in a governance gray zone.

  • 100% of reviewed institutions used AI in production.
  • 0% had complete AI tool inventories.
  • Most boards relied on vendor summaries, not technical audits.
  • AI introduced new attack paths like prompt injection.
  • Few had exit strategies for AI vendor dependency.

The FIDO Alliance Is Responding — But Not Fast Enough

The focus on identity for non-human actors isn’t just coming from regulators. The FIDO Alliance, best known for passwordless authentication standards, has formed an Agentic Authentication Technical Working Group. It’s developing specifications to handle machine-to-machine identity in AI-driven systems.

This matters. If an AI agent needs to authenticate to a database, it shouldn’t use a shared key buried in code. It should have a verifiable identity, with attestation, revocation, and audit trails — the same way humans do.

But standards take time. FIDO’s work is still in draft. And while the drafts mature, banks are deploying AI agents with makeshift access controls. The gap between innovation and security is widening.

APRA didn’t wait. It told institutions to implement privileged access management for AI workflows now. That means just-in-time access, short-lived credentials, and continuous monitoring of agent behavior. The tools exist. The will to apply them? That’s the real bottleneck.
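What might that look like in practice? A minimal sketch, assuming a simple in-process token issuer; a real deployment would lean on a secrets manager or a workload identity platform:

```python
# Illustrative just-in-time credentials for an agent workflow:
# scoped, short-lived, and auditable. Names are assumptions.
import secrets
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    scopes: frozenset[str]  # e.g. {"read:claims"}, never a blanket grant
    expires_at: float       # epoch seconds; minutes, not months


def issue(agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> tuple[str, AgentCredential]:
    token = secrets.token_urlsafe(32)
    cred = AgentCredential(agent_id, frozenset(scopes), time.time() + ttl_seconds)
    # In practice: persist token -> cred server-side and log the issuance.
    return token, cred


def authorize(cred: AgentCredential, required_scope: str) -> None:
    if time.time() > cred.expires_at:
        raise PermissionError("Credential expired; agent must re-request access")
    if required_scope not in cred.scopes:
        raise PermissionError(f"Scope {required_scope!r} not granted")
```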

What This Means For You

If you’re building AI systems — whether in finance, healthcare, or any regulated sector — APRA’s findings are a warning shot. Governance isn’t a compliance checkbox. It’s a technical necessity. Start by mapping every AI instance in your environment, including dependencies. Assign ownership. Document risk levels. Require human review for high-impact decisions, and make sure that review is more than a rubber stamp.

For developers, this means writing code that’s not only functional but auditable. If your AI generates SQL queries, log the prompts. If it makes approvals, record the rationale. Assume every model will be scrutinized — because it will.
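One lightweight way to make that routine is an audit wrapper around every model-assisted decision. The sketch below assumes the decision function returns a dict with "decision" and "rationale" keys; all names are illustrative:

```python
# Illustrative audit wrapper: every model-assisted decision leaves a record
# of what went in, what came out, and which named human owns the outcome.
import json
import logging
import time
from typing import Callable

audit_log = logging.getLogger("ai.audit")
logging.basicConfig(level=logging.INFO)


def audited(decision_type: str, owner: str):
    """Decorator that logs prompt, decision, and rationale for later review."""
    def wrap(fn: Callable[[str], dict]) -> Callable[[str], dict]:
        def inner(prompt: str) -> dict:
            result = fn(prompt)
            audit_log.info(json.dumps({
                "ts": time.time(),
                "type": decision_type,
                "owner": owner,  # the named human accountable
                "prompt": prompt,
                "decision": result.get("decision"),
                "rationale": result.get("rationale"),
            }))
            return result
        return inner
    return wrap


@audited(decision_type="loan_precheck", owner="jane.doe@example.com")
def precheck(prompt: str) -> dict:
    # Placeholder for the real model call.
    return {"decision": "refer_to_human", "rationale": "income data incomplete"}
```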

And test AI-generated code like you’d test third-party software. Don’t trust the output just because the model “seemed confident.” Run it through linters, SAST tools, and behavioral tests. Treat it as untrusted until proven otherwise.

The era of flying blind with AI is ending. Regulators are watching. And they’re not impressed.

The real question isn’t whether AI will be regulated. It’s whether the tech teams building these systems will get ahead of the rules — or get run over by them.

Sources: AI News, original report
