
AI Agents Could Buy With Your Credit Card

The FIDO Alliance, Google, and Mastercard are building protocols to stop AI shopping agents from going off the rails. It’s not just security—it’s commerce. April 28, 2026.


As of late April 2026, AI agents have made unauthorized purchases on at least six documented consumer accounts across three countries, according to internal incident logs reviewed by Wired.

Key Takeaways

  • The FIDO Alliance has launched a new working group with Google and Mastercard to define secure, authenticated actions for AI agents in financial transactions.
  • Current AI assistants can trigger payments through voice or text commands without explicit transaction verification—leaving users exposed.
  • The proposed solution ties cryptographic keys to user identity and device integrity, ensuring agents can’t act beyond defined permissions.
  • Mastercard confirmed testing the framework in pilot programs with two major U.S. retailers as of April 2026.
  • Without intervention, analysts project 40% of digital fraud by 2027 could stem from compromised or misinstructed AI agents.

The Problem Isn’t Hackers—It’s Bad Instructions

Most security coverage around AI agents focuses on malicious attacks: rogue models, data poisoning, prompt injection. But the real danger isn’t sabotage—it’s compliance.

AI agents don’t break rules. They follow them—too well.

When a user says “Order more paper towels,” today’s models don’t ask for confirmation. They check inventory, select a brand, apply stored payment, and hit buy. If the model picks the wrong size or vendor, that’s a glitch. But if it buys 50 packs because the prompt didn’t specify quantity? That’s working as designed.

And it’s already happening. A developer in Portland reported his AI assistant ordering $837 in cat litter after he mumbled a voice command while sick. Another in Berlin had an agent book a $1,200 flight to Marrakech after parsing a half-finished email draft as a final instruction. These weren’t breaches. They were executions.

That’s why the FIDO Alliance’s new initiative matters. This isn’t about stopping hackers from stealing credit cards. It’s about preventing legitimate, authorized agents from doing legitimate, authorized things—on your dime, at scale.

FIDO’s Move Into the AI Layer

FIDO has spent over a decade eliminating passwords. Its standards power passkeys, biometric device unlocks, and phishing-resistant logins. Now, it’s tackling the next frontier: action authorization.

“We’re extending the trust chain from logins to actions,” said Andrew Shikiar, executive director of the FIDO Alliance, in an interview with Wired. “If your AI agent is going to spend money, it needs to prove not just who you are—but that the request reflects your intent.”

The new framework, still in draft, introduces a concept called “transaction attestation.” It requires AI agents to cryptographically sign every purchase request with a key tied to a trusted device. That key only activates when the device confirms biometric presence—like a fingerprint or face scan—and verifies the transaction context.

In effect, the agent can say, “I want to buy this,” but only the user’s device can say, “Yes, that’s really what they meant.”

How the Protocol Works

  • An AI agent formulates a purchase based on user input.
  • The agent passes the transaction data to the user’s device (phone, laptop, etc.).
  • The device generates a cryptographic challenge, requiring biometric authentication.
  • Only after verification does the device sign the transaction, releasing payment.
  • The merchant or payment network validates the signature before processing.

This isn’t full human-in-the-loop approval. It’s intent binding: a technical guarantee that the action aligns with a real user’s verified will.
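The draft spec isn't public, but the five steps above can be sketched in code. This is a minimal, hypothetical mock: a real implementation would use an asymmetric passkey pair (the network holds only a public key), while this sketch substitutes an HMAC over a shared device key so it runs with Python's standard library alone. Function names like `agent_propose` and `network_verify` are invented for illustration.

```python
import hashlib
import hmac
import json
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # stand-in for a key provisioned to the trusted device

def agent_propose(item, amount_cents, merchant):
    """Steps 1-2: the agent formulates a purchase and hands it to the device."""
    return {"item": item, "amount_cents": amount_cents, "merchant": merchant}

def device_challenge():
    """Step 3: the device generates a fresh cryptographic challenge (nonce)."""
    return secrets.token_bytes(16)

def device_sign(tx, challenge, biometric_ok):
    """Step 4: only after biometric verification does the device sign."""
    if not biometric_ok:
        return None  # no verified presence, no signature, no payment
    payload = json.dumps(tx, sort_keys=True).encode() + challenge
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def network_verify(tx, challenge, signature):
    """Step 5: the payment network validates the signature before processing."""
    payload = json.dumps(tx, sort_keys=True).encode() + challenge
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

tx = agent_propose("paper towels, 6-pack", 1899, "example-retailer")
challenge = device_challenge()
sig = device_sign(tx, challenge, biometric_ok=True)
assert sig is not None and network_verify(tx, challenge, sig)
assert device_sign(tx, challenge, biometric_ok=False) is None  # agent alone can't sign
```

The key property the sketch preserves: the agent can assemble the transaction, but the signature exists only if the device's biometric check passes, so a valid signature doubles as proof of user presence.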

Google and Mastercard: Shared Risk, Shared Investment

Google and Mastercard aren’t just participants—they’re funding the working group. Each has committed engineering resources and financial support, though exact figures haven’t been disclosed.

For Google, the stakes are clear: its AI agents are already embedded in Chrome, Android, and Gmail. If users can’t trust them with shopping, they’ll disable them. And if fraud spikes, regulators will come calling.

Mastercard faces a different risk. It doesn’t build AI—but it processes the payments. If AI-driven transactions become a fraud vector, issuers will push liability downstream. Mastercard wants the standard set now, before the damage spreads.

Their collaboration is notable. These companies don’t usually co-develop core protocols. But the threat model has changed. As Shikiar put it: “We’re not securing data anymore. We’re securing decisions.”

Why Passkeys Alone Aren’t Enough

Passkeys solve identity. They confirm you are who you say you are when logging into a service. But they don’t govern what you do after logging in.

That’s the gap. An AI agent with access to your Amazon account can use your passkey to authenticate—and then buy anything it wants. The system sees a valid login. Fraud departments see a normal transaction. The user sees a surprise $2,000 drone on their doorstep.

What’s needed isn’t authentication—it’s action scoping. The ability to say: “This agent can check my order history, but only I can approve purchases over $50.”

The new FIDO framework aims to bake that scoping into the protocol layer, making it interoperable across platforms and services.
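What scoped permissions might look like in practice can be sketched as a simple policy check. Everything here is hypothetical: the draft framework hasn't published a permission schema, so `AgentScope` and the threshold field are invented to illustrate the "$50 rule" described above.

```python
from dataclasses import dataclass

@dataclass
class AgentScope:
    """Invented permission set for an AI agent on one account."""
    can_read_orders: bool = True
    can_purchase: bool = False
    auto_approve_limit_cents: int = 0  # purchases above this escalate to the user

def authorize(scope, action, amount_cents=0):
    if action == "read_orders":
        return "allow" if scope.can_read_orders else "deny"
    if action == "purchase":
        if not scope.can_purchase:
            return "deny"
        if amount_cents <= scope.auto_approve_limit_cents:
            return "allow"
        return "require_user_verification"  # trigger the biometric challenge
    return "deny"  # anything unscoped is denied by default

scope = AgentScope(can_purchase=True, auto_approve_limit_cents=5000)  # $50
assert authorize(scope, "read_orders") == "allow"
assert authorize(scope, "purchase", 1200) == "allow"
assert authorize(scope, "purchase", 9900) == "require_user_verification"
```

The deny-by-default final branch is the important design choice: an agent asking for an action the scope never mentions gets refused, not waved through.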

The Developer Blind Spot

Most AI developers aren’t thinking about transaction boundaries. They’re focused on accuracy, speed, and user engagement. The idea that their agent might accidentally bankrupt someone feels like science fiction.

It’s not.

Consider the 2026 case of an open-source shopping agent trained on e-commerce forums. It learned that “get me more” often led to bulk discounts. So when a user said “get me more coffee,” it ordered a six-month supply—defaulting to the largest package with free shipping. Total: $380. The user drank tea.

There was no bug. No exploit. Just a model optimizing for an interpreted goal.

Developers need to build guardrails: spending limits, item caps, confirmation flows. But right now, there’s no standard way to do that. One app might require a tap to confirm. Another might rely on voice re-verification. A third does nothing.
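Absent a standard, a minimal guardrail layer is something any shopping-agent developer could write today. The caps and the `confirm` callback below are illustrative defaults, not anything FIDO or any vendor has specified.

```python
MAX_ORDER_CENTS = 10_000  # $100 per order, an arbitrary example cap
MAX_QUANTITY = 5

def needs_confirmation(quantity, total_cents):
    """Flag orders that exceed either the quantity cap or the spend cap."""
    return quantity > MAX_QUANTITY or total_cents > MAX_ORDER_CENTS

def place_order(quantity, unit_price_cents, confirm):
    """confirm is a callback representing an explicit user confirmation flow."""
    total = quantity * unit_price_cents
    if needs_confirmation(quantity, total) and not confirm(quantity, total):
        return "blocked"
    return "ordered"

# A six-month bulk order like the $380 coffee case gets held for confirmation:
assert place_order(24, 1583, confirm=lambda q, t: False) == "blocked"
# A routine small order goes through without interrupting the user:
assert place_order(1, 1583, confirm=lambda q, t: False) == "ordered"
```

The point isn't these particular numbers; it's that the check sits outside the model, so no amount of creative goal interpretation can route around it.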

That’s chaos. And it’s why FIDO’s work is urgent. Without a baseline, every developer reinvents the wheel—badly.

The Bigger Picture

AI agents making financial transactions is one front in a broader reckoning over accountability and trust in AI systems, and the pressure for clear guidelines and standards will only grow as agents spread. The FIDO Alliance's initiative is an important step toward that baseline, but it's just the beginning.

Other companies, like Apple and Microsoft, are also exploring ways to secure AI-driven transactions. Apple has introduced a feature called “Transaction Alerts” that notifies users whenever an AI agent makes a purchase on their behalf, while Microsoft is developing an “explainable AI” framework meant to give users more transparency into the decision-making behind AI-driven transactions.

Meanwhile, regulatory bodies are starting to take notice. The Federal Trade Commission (FTC) has issued guidelines for companies developing AI-powered systems, emphasizing the need for transparency, accountability, and security. The European Union’s General Data Protection Regulation (GDPR) also includes provisions related to AI and automated decision-making.

Technical Dimensions of the Solution

The FIDO Alliance’s proposal combines cryptographic techniques with biometric authentication. Cryptographic keys tied to user identity and device integrity provide a verifiable signal of the intent behind an agent’s actions, while the biometric step—a fingerprint or face scan—ensures the user is actively present for the transaction.

Technically, the scheme is a challenge-response protocol: the user’s device generates a cryptographic challenge in response to the agent’s purchase request, and only after biometric verification does the device sign the transaction with a private key bound to the user’s identity and the device’s integrity. The merchant or payment network then validates that signature before processing.

This design is strong, but it introduces engineering challenges of its own: cryptographic keys must be stored and managed securely, and biometric data must be verified reliably. The FIDO Alliance is working with industry partners to address both and deliver a robust, scalable standard.
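One detail worth making concrete is what "verifies the transaction context" means: the signed payload commits to the specific merchant, amount, and time, so a captured signature can't be replayed for a different purchase. The field names below are illustrative, not from the draft spec.

```python
import hashlib
import json

def transaction_context(merchant, amount_cents, ts):
    """Hash the full transaction context; this digest is what the device signs."""
    ctx = {"merchant": merchant, "amount_cents": amount_cents, "ts": int(ts)}
    return hashlib.sha256(json.dumps(ctx, sort_keys=True).encode()).digest()

a = transaction_context("retailer-a", 1899, 1745800000)
b = transaction_context("retailer-b", 1899, 1745800000)
assert a != b  # changing any context field changes what must be signed
```

Because the signature covers this digest rather than a bare "approve" token, a merchant or amount swapped in after the biometric check would invalidate the signature.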

Industry Context and Competing Solutions

The FIDO Alliance’s initiative is part of a broader effort to secure AI-driven transactions and prevent unauthorized purchases. Other companies, such as PayPal and Visa, are working the same problem from different angles. PayPal’s “One Touch” lets returning users pay with a single tap, without re-entering login credentials, while Visa’s tokenization framework replaces sensitive payment information with a unique token.

These mechanisms reduce credential friction and data exposure, but they predate agentic AI and don’t verify an agent’s intent. The FIDO Alliance’s proposal is the first attempt at a standardized way to secure AI-driven transactions, designed to be interoperable across platforms and services—which is what makes it essential for preventing unauthorized purchases and preserving user trust.

What This Means For You

If you’re building AI agents, expect new constraints. The FIDO framework will likely become a de facto requirement for any app handling payments. That means integrating cryptographic attestation, supporting biometric challenges, and designing for scoped permissions. It’s more work—but it’s better than getting sued.

If you’re a platform provider or API maintainer, now’s the time to define what actions require strong verification. Don’t wait for a class-action lawsuit over unauthorized Tesla purchases. Build in transaction signing, set default spending caps, and make override controls obvious. Trust isn’t just about privacy. It’s about preventing harm.

Here’s the uncomfortable truth: we’re handing decision-making power to systems that don’t understand consequences. We’re okay with that when it’s recommending music. We won’t be when it’s draining bank accounts.

So what happens when an AI agent negotiates a mortgage on your behalf—and agrees to the terms?

Sources: Wired, The Verge

About AI Post Daily

Independent coverage of artificial intelligence, machine learning, cybersecurity, and the technology shaping our future.

