
OpenAI Launches Advanced Security for At-Risk Users

OpenAI rolls out Advanced Account Security on May 1, 2026, targeting developers and high-risk users concerned about phishing. Measures include mandatory MFA and login alerts. Details from Wired.

On May 1, 2026, OpenAI began rolling out a new security tier called Advanced Account Security, designed specifically for users whose accounts may be targeted by sophisticated phishing attempts. The move follows a rise in AI-driven social engineering attacks aimed at developers and enterprise users of its platforms, particularly those building with Codex and accessing sensitive API endpoints through ChatGPT.

Key Takeaways

  • OpenAI’s Advanced Account Security is now live for high-risk users as of May 1, 2026.
  • The tier mandates multi-factor authentication (MFA) using hardware keys or authenticator apps—SMS is not allowed.
  • Users enrolled will get real-time login alerts and biweekly security reports summarizing access patterns.
  • The feature targets developers, researchers, and corporate accounts using Codex or managing API keys.
  • No additional cost is attached, but enrollment requires verification of identity and use case.

Not All MFA Is Treated Equally

OpenAI isn’t just nudging users toward better hygiene. It’s enforcing a hard line: if you’re approved for Advanced Account Security, you can’t use SMS-based two-factor. That method, long criticized for SIM-swapping vulnerabilities, is explicitly excluded. Instead, the company requires either a FIDO2-compliant hardware key—like a YubiKey—or a time-based one-time password (TOTP) app such as Google Authenticator or Authy.
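
For readers unfamiliar with the difference, the sketch below shows how a TOTP code is derived under RFC 6238, the standard that Google Authenticator and Authy implement: the server and the app share a secret, and each computes the same short-lived six-digit code from the current clock window, so no code ever travels over the carrier network the way an SMS does. The base32 secret here is a throwaway placeholder, not anything tied to OpenAI.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                    # moving factor: 30-second windows
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The authenticator app and the verifying server both run this calculation
# against the same shared secret; matching codes prove possession of it.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for illustration
```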

The distinction matters. In 2025, a series of breaches traced back to compromised developer accounts at AI startups showed attackers using phishing kits that bypassed SMS codes in under 90 seconds. OpenAI’s product lead, Mira Loh, stated in an internal briefing shared with Wired that SMS “doesn’t meet the threat model we’re now facing.”

That’s not just corporate caution. The new system logs every authentication attempt, flags logins from unexpected geographies, and blocks concurrent sessions—a known tactic used in session hijacking. If someone logs in from Tokyo while the user’s key is in Berlin, the account locks automatically until manually released.
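
To make that behavior concrete, here is a minimal Python sketch of that kind of policy. It is illustrative only, not OpenAI’s implementation: the function names, the country-level geolocation, and the twelve-hour overlap window are all assumptions drawn from the description above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, Optional

@dataclass
class Login:
    user_id: str
    country: str        # resolved from the source IP, e.g. "DE" or "JP"
    at: datetime

active_sessions: Dict[str, Login] = {}  # user_id -> most recent accepted login

def evaluate(attempt: Login, expected_country: str) -> str:
    """Return 'allow', 'flag', or 'lock' for a new authentication attempt."""
    current: Optional[Login] = active_sessions.get(attempt.user_id)

    # Two overlapping sessions from different countries (key in Berlin, login
    # from Tokyo) look like session hijacking: lock until manually released.
    if current and current.country != attempt.country and \
            attempt.at - current.at < timedelta(hours=12):
        return "lock"

    # A login from an unexpected geography is flagged for review rather than
    # accepted silently.
    if attempt.country != expected_country:
        return "flag"

    active_sessions[attempt.user_id] = attempt
    return "allow"
```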

Who Gets In—and Who Doesn’t

Access isn’t open to all. Users must apply through a form that asks for their use case, API volume, and whether they’re building on Codex for commercial or research purposes. OpenAI says it’s prioritizing those with “elevated risk profiles,” including developers working on government contracts, election security tools, or AI safety research.

One startup founder building a fraud detection model told Wired they applied on April 28 and were approved within 12 hours. “They asked for my company’s domain verification, a GitHub link, and a brief on what we’re building,” they said. “No fluff. I was in by the next morning.”

The Signal in the Security

This isn’t OpenAI’s first pass at account protection. But previous efforts—email alerts, optional MFA, password resets—were generic, the kind of baseline features every cloud service offers by 2026. Advanced Account Security is different because it assumes the attacker isn’t a script kiddie but a well-resourced actor with AI-powered phishing tools at their disposal.

Consider what’s possible now: generative models that clone a colleague’s writing style, voice, and email patterns—then send a fake “urgent” message with a malicious link. These aren’t theoretical. At least three confirmed incidents in early 2026 involved fake Slack messages from “CTOs” asking developers to “review a critical model update” via a phishing portal.

OpenAI knows its users are targets. Codex doesn’t just generate code—it can pull from private repositories, access API keys, and interact with production environments. A single compromised session could leak proprietary algorithms or inject backdoors into deployed models.

  • Advanced users will receive biweekly security reports showing login locations, device fingerprints, and API call spikes.
  • New sessions require reauthentication every 12 hours, even if “remembered.”
  • API keys are rotated automatically every 7 days for enrolled accounts (see the sketch after this list).
  • Admins can designate emergency contacts who receive lockout alerts if the primary user becomes unreachable.
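
The rotation and reauthentication windows in that list are straightforward to track on the caller’s side. The helper below is a hypothetical sketch, not part of any OpenAI SDK; the constants simply mirror the 7-day and 12-hour figures reported above, and nothing here assumes how enrolled accounts actually receive their new keys.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed constants mirroring the reported policy for enrolled accounts.
KEY_MAX_AGE = timedelta(days=7)        # API keys rotate weekly
SESSION_MAX_AGE = timedelta(hours=12)  # even "remembered" sessions reauthenticate

def key_needs_rotation(issued_at: datetime, now: Optional[datetime] = None) -> bool:
    """True once an API key is older than the seven-day rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at >= KEY_MAX_AGE

def session_needs_reauth(started_at: datetime, now: Optional[datetime] = None) -> bool:
    """True once a session has exceeded the twelve-hour limit."""
    now = now or datetime.now(timezone.utc)
    return now - started_at >= SESSION_MAX_AGE

# Example: a key issued eight days ago should be treated as expired.
issued = datetime.now(timezone.utc) - timedelta(days=8)
print(key_needs_rotation(issued))  # True
```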

Why This Isn’t Just Another Setting

Most security rollouts are buried in logs or announced with vague promises of “improved protection.” This one is visible, restrictive, and deliberately inconvenient. That’s the point.

By making enrollment gated and the rules strict, OpenAI is sending a message: we see the threat landscape shifting, and we’re treating high-value accounts like high-value assets. It’s also a quiet admission that their previous safeguards weren’t built for the era of AI-assisted attacks.

But there’s irony here. OpenAI helped build the tools that make these attacks possible. GPT-derived models are already being used to generate phishing content that bypasses legacy spam filters. And now, the company is asking developers to protect against that very capability—using a system OpenAI itself controls.

That creates a dependency. If you’re building something sensitive on Codex, your security doesn’t just rely on your own practices. It relies on OpenAI’s infrastructure, policies, and response speed. That’s a significant shift—one that demands scrutiny.

What the Industry Is Doing Differently

Other major AI platforms haven’t matched OpenAI’s tiered approach yet, but the pressure is mounting. Google’s Vertex AI offers optional hardware key enforcement and anomaly detection but doesn’t require either for any user group. Similarly, Amazon Web Services (AWS) supports FIDO2 and offers GuardDuty for threat monitoring, yet its AI services such as SageMaker rely on general IAM policies rather than specialized security profiles.

Microsoft stands out. In early 2026, it rolled out Conditional Access policies for Azure AI users working on U.S. federal projects. These require hardware keys, session time limits, and mandatory biweekly audits—features nearly identical to OpenAI’s new tier. The change followed a 2025 incident where a defense contractor’s AI training pipeline was compromised via a spoofed Teams message. Microsoft now mandates these protections for any customer handling controlled unclassified information (CUI).

Startups are moving faster. Anthropic, for example, requires all users with access to its API to use either a hardware key or a TOTP app—no exceptions. The company cites its focus on safety-critical applications, including AI alignment research funded by Open Philanthropy. Meanwhile, Mistral AI in France has partnered with the Gendarmerie Nationale to audit access patterns for high-risk customers in sectors covered by the EU’s Critical Entities Resilience (CER) Directive.

These efforts show a split: cloud providers apply broad security frameworks, while AI-native companies are building role-specific protections. OpenAI’s model sits in between—offering a privileged tier rather than universal mandates. That may reflect a business decision as much as a technical one: enforcing strict rules across all users could slow adoption, especially among small developers.

The Bigger Picture: AI, Trust, and the Developer Ecosystem

The stakes go beyond one company’s login screen. As AI tools become central to software development, infrastructure management, and data analysis, the security of developer accounts is becoming a matter of national infrastructure. A breach in an AI-powered workflow isn’t just about stolen data—it can corrupt codebases, poison training sets, or silently introduce vulnerabilities that persist for years.

This shift is already affecting policy. In March 2026, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued draft guidance titled “Securing AI Development Environments,” which recommends hardware-based MFA for any developer with access to production AI systems. While not mandatory, federal agencies are expected to adopt it by Q4 2026. The European Union’s AI Office is considering similar rules under the AI Act’s high-risk category, particularly for systems used in critical infrastructure.

Investment is following. In Q1 2026, venture funding for identity and access management (IAM) startups focused on developer security reached $412 million, up 63% year-over-year. Notable recipients include Descope, which raised $180 million for passwordless MFA tailored to engineering teams, and Convex Security, which offers real-time API key monitoring for AI platforms.

Still, no solution is foolproof. Attackers are adapting. In April 2026, researchers at the University of Toronto demonstrated a proof-of-concept attack using fine-tuned open-source models to mimic MFA approval requests within Slack and email. The fakes were indistinguishable from real ones in 88% of cases during user testing. This highlights a harsh reality: as AI improves authentication, it also improves deception. The arms race is asymmetric, and the defenders are playing catch-up.

What This Means For You

If you’re a developer using ChatGPT or Codex for production work, especially with access to APIs or private codebases, you should apply for Advanced Account Security immediately. It’s free, it’s available, and it removes entire classes of attack vectors—particularly session hijacking and SIM swapping. The 12-hour reauthentication may feel annoying, but it’s a small trade for blocking persistent threats.

For builders, this also means documentation and team access workflows will need updates. Hardware keys must be distributed, emergency contacts designated, and security reports reviewed as part of incident response. If you’re using ChatGPT in a regulated environment—healthcare, finance, defense—this feature may soon become a compliance baseline. Start adapting now.

OpenAI’s move is smart, necessary, and long overdue. But it raises a deeper question: as AI lowers the barrier to highly effective cyberattacks, can platform-level security keep pace—or will we end up trusting a few tech giants to defend the entire developer ecosystem?

Sources: Wired, original report
