14 of the 15 largest Australian financial institutions now use AI agents in software engineering, claims processing, or customer interactions — but only two have full inventories of their AI tools, and fewer have assigned individual ownership to each instance. That’s one of the starkest findings from the Australian Prudential Regulation Authority’s (APRA) targeted review of AI adoption, released after a six-month assessment concluding in January 2026.
Key Takeaways
- APRA reviewed 15 large financial firms in late 2025 and found AI use was universal, but governance maturity varied widely
- Boards showed strong interest in AI for productivity, but most lacked coherent risk strategy aligned with institutional appetite
- Critical gaps exist in monitoring model behaviour, managing changes, decommissioning AI tools, and securing non-human identities
- AI-generated code and autonomous workflows are straining change controls, with insufficient security testing in place
- Overreliance on single AI vendors — and no exit plans — emerged as a major systemic risk
The Governance Gap Is Real — And It’s Structural
It’s not that banks and superannuation trustees aren’t using AI. They are — aggressively. But their ability to govern it lags far behind deployment. APRA’s review found that while every entity had AI in production, most treated AI risk like generic IT risk. That’s a fatal flaw.
AI agents don’t behave like static software. They adapt, chain actions, and can hallucinate, escalate privileges, or generate exploitable code. Treating them as just another tool in the stack ignores their dynamic behaviour and potential for emergent risk. Yet that’s exactly what many institutions are doing.
APRA noted that boards were often relying on vendor summaries and presentations rather than diving into technical risk profiles. That’s a governance shortcut — not a strategy. And it’s especially dangerous when AI is involved in loan approvals, fraud detection, or customer service bots that make binding decisions.
Industry Context: AI’s Growth in Finance
The adoption of AI in the Australian financial sector has been swift and widespread. According to a report by Accenture, 90% of Australian banks have already deployed AI-powered chatbots, while 80% are using AI for customer service. The same report found that 75% of banks have implemented AI-powered anti-money laundering systems, while 60% are using AI for risk management.
However, the APRA review highlights the need for stronger governance and risk management controls. As AI becomes increasingly pervasive in the financial sector, regulators are taking a closer look at the risks and challenges that come with it, and the review’s findings underline the importance of developing governance frameworks that can keep pace with the rapid evolution of AI technologies.
The sector’s biggest players are taking note. For instance, Westpac Banking Corp has announced plans to invest $1.5 billion in AI and data analytics over the next five years. The bank aims to use AI to improve customer experience, reduce operational risk, and drive business growth. However, the APRA review suggests that even as banks like Westpac invest in AI, they need to do a better job of governing its use and mitigating potential risks.
Boards Are Interested — But Not Informed
There’s no lack of enthusiasm at the top. APRA said boards were “keenly interested” in AI’s potential to boost productivity and improve customer experience. That’s expected. What’s not expected is how little many directors understand about what AI systems are actually doing in their environments.
Boards were not consistently scrutinising risks like unpredictable model behaviour, cascading failures in agentic workflows, or the impact of AI errors on critical operations. Some hadn’t even defined what level of AI risk their institution could tolerate.
APRA was clear: AI strategy must align with institutional risk appetite. That means setting thresholds for error rates, defining oversight roles, and establishing procedures for what happens when an AI agent fails — or worse, acts maliciously.
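As a purely hypothetical illustration of what aligning AI use with a stated risk appetite could look like in practice, the sketch below codifies tolerance thresholds for a single use case and escalates to a named owner when they are breached. All names and numbers are assumptions for illustration, not figures from the review.

```python
from dataclasses import dataclass

# Hypothetical illustration: a risk-appetite record for one AI use case,
# with thresholds that trigger escalation to an accountable owner.
@dataclass
class AIRiskAppetite:
    use_case: str                 # e.g. "claims triage"
    max_error_rate: float         # tolerated share of incorrect decisions
    max_autonomous_actions: int   # actions allowed before human review
    escalation_contact: str       # a named owner, not a shared inbox

def check_against_appetite(observed_error_rate: float,
                           actions_since_review: int,
                           appetite: AIRiskAppetite) -> list[str]:
    """Return any breaches that require escalation to the owner."""
    breaches = []
    if observed_error_rate > appetite.max_error_rate:
        breaches.append(
            f"{appetite.use_case}: error rate {observed_error_rate:.1%} "
            f"exceeds appetite of {appetite.max_error_rate:.1%}"
        )
    if actions_since_review > appetite.max_autonomous_actions:
        breaches.append(
            f"{appetite.use_case}: {actions_since_review} autonomous actions "
            f"without human review (limit {appetite.max_autonomous_actions})"
        )
    return breaches

# Example: a claims-triage agent allowed a 2% error rate and
# 500 autonomous actions between human reviews (hypothetical values).
appetite = AIRiskAppetite("claims triage", 0.02, 500, "ops.risk@example.internal")
for breach in check_against_appetite(0.035, 620, appetite):
    print("ESCALATE:", breach)
```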
Who Owns the AI Agent?
One of the most glaring gaps APRA identified: the absence of named-person ownership for AI instances. In cybersecurity, the concept of “Crown Jewels” includes systems so critical that someone must be accountable for them. Yet in AI, many agents operate in the shadows — deployed by developers, fed by data teams, monitored by no one.
The regulator called for AI tool inventories and clear ownership models. That’s not bureaucracy — it’s basic operational hygiene. If an AI agent in claims processing starts rejecting valid claims, who do you call? If a code-generation agent introduces a backdoor, who’s responsible?
Without ownership, accountability evaporates. And when accountability vanishes, so does control.
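What a minimal inventory entry could look like is sketched below. The schema is an assumption for illustration, not anything APRA prescribes, but it captures the two things the regulator is asking for: a register of every agent and a named, reachable owner.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a minimal AI-agent inventory entry.
# Field names are illustrative, not a prescribed schema.
@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    owner: str                    # a named individual, not a team alias
    systems_accessed: list[str]
    deployed: date
    review_due: date
    decommission_plan: str        # how the agent is switched off safely

INVENTORY: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """No agent goes to production without an inventory entry and owner."""
    if not record.owner or "@" not in record.owner:
        raise ValueError(f"{record.agent_id}: a named, reachable owner is required")
    INVENTORY[record.agent_id] = record

def who_do_you_call(agent_id: str) -> str:
    """Answer the incident-response question the review poses."""
    return INVENTORY[agent_id].owner

register(AgentRecord(
    agent_id="claims-triage-01",
    purpose="Prioritise incoming insurance claims",
    owner="jane.citizen@example.internal",
    systems_accessed=["claims-db", "customer-crm"],
    deployed=date(2025, 6, 1),
    review_due=date(2026, 6, 1),
    decommission_plan="Route traffic back to the manual triage queue",
))
print(who_do_you_call("claims-triage-01"))
```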
The Bigger Picture: Why Governance Matters
The APRA review puts the sector’s core challenge into sharp relief: governance. Effective governance is not just about complying with regulations; it is about ensuring organisations can operate with confidence, transparency, and accountability, and that only becomes harder as AI spreads through critical processes.
Regulators accept that AI can improve customer experience, reduce operational risk, and drive business growth. They are equally clear that it introduces new categories of risk that generic IT controls were never designed to cover.
The review is a wake-up call: by inventorying AI tools, establishing clear ownership models, and scrutinising AI-specific risks, institutions can show they are adopting the technology in a proactive, accountable way.
Cybersecurity Is Falling Behind AI’s Speed
AI isn’t just changing how banks operate. It’s rewriting the threat model. APRA flagged new attack vectors like prompt injection, insecure API integrations, and AI-generated code that passes unit tests but contains hidden vulnerabilities.
Worse, identity and access management systems haven’t adapted to non-human actors. AI agents are being granted access to databases, financial systems, and internal tools — but in many cases, they’re treated like regular service accounts. That’s a mistake.
Agents can chain actions, make decisions, and even initiate new workflows. A rogue agent with access to customer data and a generative model could fabricate fake accounts, trigger wire transfers, or exfiltrate data in ways that evade traditional monitoring.
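One hedge against that scenario is to treat each agent as a distinct non-human identity with narrowly scoped, short-lived credentials, so a compromised or misbehaving agent simply cannot reach payment or exfiltration paths. The sketch below illustrates the idea; the agent IDs, scopes, and token lifetime are hypothetical.

```python
import time

# Illustrative sketch: an AI agent as a distinct non-human identity with
# narrowly scoped, short-lived permissions rather than a broad, long-lived
# service account. Names and scopes are hypothetical.
AGENT_SCOPES = {
    "claims-triage-01": {"claims:read", "claims:annotate"},  # no write, no transfer
}
TOKEN_TTL_SECONDS = 900  # short-lived credential, re-issued per session

class PermissionDenied(Exception):
    pass

def issue_token(agent_id: str) -> dict:
    """Mint a short-lived, scope-bound credential for one agent session."""
    return {
        "agent_id": agent_id,
        "scopes": AGENT_SCOPES.get(agent_id, set()),
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }

def authorise(token: dict, required_scope: str) -> None:
    """Check every agent action against its scopes and expiry, and log it."""
    if time.time() > token["expires_at"]:
        raise PermissionDenied("token expired; agent must re-authenticate")
    if required_scope not in token["scopes"]:
        raise PermissionDenied(f"{token['agent_id']} lacks scope {required_scope}")
    print(f"AUDIT: {token['agent_id']} used {required_scope}")  # feeds monitoring

token = issue_token("claims-triage-01")
authorise(token, "claims:read")            # permitted and audited
try:
    authorise(token, "payments:transfer")  # the rogue-agent scenario above
except PermissionDenied as exc:
    print("BLOCKED:", exc)
```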
How the Wider Industry and Regulators Are Responding
The broader industry is moving in the same direction. A report by Deloitte found that 80% of financial institutions in the Asia-Pacific region are investing in AI-powered cybersecurity solutions, and that 60% are using AI for threat intelligence and incident response.
Other regulators are watching too. In a recent speech, Australian Securities and Investments Commission (ASIC) Commissioner John Price said that while AI can be a powerful tool for improving cybersecurity, it also creates new risks and challenges that must be met with more effective governance and risk management controls.
AI Code Is Flooding Development Pipelines
The volume of AI-assisted software development is overwhelming existing change and release controls. Developers are using AI to generate code at scale, but the review found that security testing of that code is inconsistent or absent.
APRA recommended mandatory security testing for AI-generated code — not just once, but continuously. Because today’s safe output can become tomorrow’s vulnerability when the model updates or the context shifts.
And the privilege problem is real: AI agents are being granted privileged access to systems, but without the same controls applied to human admins. Configuration management, patching, and access reviews aren’t keeping pace.
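A rough sketch of what a continuous security gate could look like in CI is shown below. It scans every changed file regardless of whether a human or a model wrote it, since provenance is hard to prove; the scanner itself is left as a placeholder to be swapped for whatever SAST and secrets tooling the organisation already runs.

```python
import subprocess
import sys

# Illustrative CI gate, not APRA guidance: every change is scanned before
# merge, on every run, reflecting the point that today's safe output can
# become tomorrow's vulnerability as models and context change.
def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List Python files touched relative to the main branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.endswith(".py")]

def run_security_scan(paths: list[str]) -> bool:
    """Placeholder for the organisation's chosen scanner (SAST, secrets,
    dependency checks). Swap in the real tool; True means the scan is clean."""
    print(f"scanning {len(paths)} changed files for injection, secrets, unsafe calls...")
    return True  # assumption: in practice the real scanner's result decides this

if __name__ == "__main__":
    files = changed_files()
    if files and not run_security_scan(files):
        sys.exit("security scan failed: block merge until findings are resolved")
    print("security gate passed")
```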
Vendor Lock-In Is Creating Systemic Risk
Some institutions now rely on a single provider for the majority of their AI capabilities — from code generation to customer service bots. That creates dangerous concentration risk.
APRA noted that only a few firms could demonstrate a viable exit plan or substitution strategy. That’s alarming. If a vendor suffers an outage, a breach, or a pricing shift, entire operations could grind to a halt.
The problem goes deeper: AI can be embedded in upstream dependencies. A third-party library might use AI to generate responses, and the institution may not even know it. That’s a blind spot regulators are starting to map — but most firms aren’t.
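One practical mitigation is to put a thin abstraction between the application and the AI vendor, so substitution becomes a configuration change rather than a rewrite. The sketch below shows the idea with hypothetical stub providers; it is not a prescription from the review, and a real exit plan also needs data portability and regularly exercised failover.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a provider-agnostic seam: application code depends
# on this interface, so swapping vendors does not touch business logic.
# Provider names and behaviour are hypothetical stubs.
class CompletionProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class PrimaryVendor(CompletionProvider):
    def complete(self, prompt: str) -> str:
        # In reality this would call the incumbent vendor's SDK.
        return f"[primary] {prompt[:40]}..."

class FallbackVendor(CompletionProvider):
    def complete(self, prompt: str) -> str:
        # A tested substitute: a second vendor or a self-hosted model.
        return f"[fallback] {prompt[:40]}..."

PROVIDERS = {"primary": PrimaryVendor(), "fallback": FallbackVendor()}

def get_provider(name: str) -> CompletionProvider:
    """The exit plan in code: the active provider is a setting, not a hard-coded import."""
    return PROVIDERS[name]

# Simulating a vendor outage or price shock: flip the setting, nothing else changes.
provider = get_provider("fallback")
print(provider.complete("Summarise this claim for the triage queue"))
```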
What This Means For You
If you’re a developer or engineering lead, this isn’t just regulatory noise. It’s a warning shot. The systems you’re building today, especially those with autonomous agents, will be scrutinised. You need to document every AI instance, define ownership, and ensure security testing covers not just functionality but intent and behaviour.
For founders and tech leaders: governance isn’t a compliance checkbox. It’s a competitive advantage. Firms that can demonstrate control over their AI agents will gain trust — from regulators, customers, and investors. Those that don’t will face delays, fines, or worse, a catastrophic failure that makes headlines.
APRA’s findings expose a quiet truth: we’re automating critical decisions faster than we’re learning how to govern them. The tools are powerful. But without ownership, visibility, and security discipline, they’re also dangerous. As AI agents gain autonomy, the question isn’t just what they can do — it’s who’s accountable when they go off script.
By the Numbers
- 100% of reviewed institutions use AI in some form
- Only 2 have full AI tool inventories
- Fewer than 5 have named owners for AI instances
- Most lack exit strategies for AI vendors
- AI is used in loan processing, claims triage, fraud detection, and customer service

Sources: AI News, The Australian Financial Review


