
OpenAI Launches Advanced Account Security With Yubico YubiKeys for ChatGPT Passkeys
Key Takeaways
- OpenAI debuts Advanced Account Security, opt-in protections replacing passwords with passkeys and hardware keys.
- Yubico provides a co-branded two-pack of YubiKeys for ChatGPT users under AAS.
- AAS is optional, not mandatory for all OpenAI users.
AAS replaces passwords
OpenAI has introduced “Advanced Account Security” (AAS), an opt-in setting for ChatGPT accounts designed to strengthen sign-in protections and reduce exposure from compromised sessions. It is available through the Security section of users’ ChatGPT accounts on the web.
In the same announcement, OpenAI said AAS is “a new opt-in setting for ChatGPT accounts, designed for people at increased risk of digital attacks,” and it brings “a set of heightened security measures” into one place.

The company also stated that once enrolled, AAS “protects users in Codex as well,” tying the protections to both products accessed through the same login.
A core change is that AAS “requires passkeys or physical security keys while disabling password-based login,” making phishing-resistant sign-in the default for people who need it most.
OpenAI further said AAS disables email and SMS recovery, requiring “backup passkeys, security keys, and recovery keys” when an email account or phone number is compromised.
PCMag described the rollout as an opt-in mode that “dumps traditional passwords for more secure alternatives,” and said it rolls out via ChatGPT’s web interface under “Settings > Security.”
TechCrunch likewise reported that OpenAI launched Advanced Account Security on Thursday and that it is “a set of opt-in protections for ChatGPT users designed for high-value individuals — but available to anyone who wants them.”
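The phishing resistance these reports describe comes from how passkeys and security keys work under the WebAuthn standard: the browser binds each login assertion to the origin the user actually visited, so a credential registered for the real site cannot be replayed from a look-alike phishing domain. As a minimal illustration (not OpenAI’s implementation; the expected origin and field handling here are assumptions based on the WebAuthn spec), a server-side check of the client data might look like:

```python
import base64
import json

# Illustrative only: the expected origin is an assumption for this
# example, not a documented OpenAI value.
EXPECTED_ORIGIN = "https://chatgpt.com"

def verify_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    """Reject a WebAuthn assertion whose origin or challenge don't match.

    The browser, not the user, fills in the origin, which is why a
    passkey for the real site is useless on a phishing domain.
    """
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
        and client_data.get("challenge") == expected_challenge
    )

# An assertion produced on a look-alike phishing domain fails the check:
phished = base64.urlsafe_b64encode(json.dumps({
    "type": "webauthn.get",
    "origin": "https://chatgpt-login.example",  # wrong origin
    "challenge": "abc123",
}).encode()).decode()
print(verify_client_data(phished, "abc123"))  # False
```

This origin binding, rather than any secrecy of the key itself, is what makes passkey and hardware-key logins resistant to the phishing attacks AAS is built around.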
Yubico co-branded keys
To make the phishing-resistant login approach easier to adopt, OpenAI partnered with Yubico to offer “a customized bundle of best in class security keys” as part of Advanced Account Security.
OpenAI said the partnership would “tie two new co-branded security products to ChatGPT accounts,” naming “YubiKey C NFC and YubiKey C Nano.”

TechCrunch reported that Yubico announced it “has partnered with OpenAI to link two new security key products to ChatGPT accounts,” and said the companies released a pair of “co-branded” YubiKeys “dubbed the YubiKey C NFC and the YubiKey C Nano.”
The mezha.net report similarly described the partnership as adding “YubiKey support for ChatGPT” and said the co-branded devices are intended to protect users from “rising phishing threats among chatbot users.”
OpenAI’s product description also specified how the two devices are meant to be used: “The YubiKey C Nano is designed to stay in your laptop for simple, low-friction daily authentication,” while the YubiKey C NFC is intended “for backup, and use across laptops and mobile devices.”
PCMag added that the new setting works with the co-branded YubiKey C NFC and YubiKey C Nano, and noted that “Security keys from other vendors are also supported.”
In the Yubico-linked reporting, Jerrod Chong, chief executive officer of Yubico, said, “We are introducing a new model for phishing-resistant security at scale for the AI ecosystem,” and he described the partnership as delivering “the highest level of protection against phishing with a low-friction user experience.”
Dane Stuckey, OpenAI’s chief information security officer, said, “We’ve made YubiKeys a standard part of how we protect OpenAI employees,” and framed the public offering as making it easier for users to choose the same kind of protection “when it’s right for them.”
Recovery and session trade-offs
OpenAI’s AAS announcement repeatedly emphasized that stronger protection comes with stricter account recovery rules, and it spelled out that the company itself will not be able to assist with recovery for enrolled users.
BitcoinWorld similarly reported that OpenAI “has introduced a significant security upgrade for ChatGPT accounts, partnering with digital security provider Yubico to launch co-branded YubiKeys.”
In the OpenAI product text, the company said that if a user’s email account or phone number is compromised, “an attacker may try to use one of them to gain access to their ChatGPT account via e-mail or SMS based recovery,” and that AAS reduces this risk by disabling “email and SMS recovery.”
OpenAI said AAS instead requires “stronger recovery methods: backup passkeys, security keys, and recovery keys,” and it added a direct limitation: “OpenAI Support will not be able to assist with account recovery for users enrolled in Advanced Account Security.”
PCMag described the same trade-off by stating that “OpenAI’s Advanced Account Security is so locked down that the company itself won’t be able to recover your account if you lose the hardware security keys or passkeys.”
PCMag also reported that the enrollment process requires users to use “at least two hardware security keys, or one hardware security key and one software-based passkey,” with the extra key serving as a backup.
The OpenAI product page said “Sign-in sessions are shortened to reduce the window of exposure if a device or active session is compromised,” and it also said users receive “alerts when there is a login to their account” and can “review and manage the active sessions across the various devices they’re signed into.”
SQ Magazine’s write-up echoed that users “can no longer recover accounts using” email or text-based methods and that “recovery depends entirely on” the stronger methods OpenAI listed, while also stressing that “its support team cannot help recover accounts if users lose access to these methods.”
TechCrunch similarly noted that the security-key approach has a practical downside, saying that “if the key is lost, OpenAI won’t be able to help recover access,” and that “conversations could be lost for good.”
Who it targets and why
OpenAI positioned Advanced Account Security for “people at increased risk of digital attacks,” and it named specific groups in its own explanation of why the stakes can be higher for some users.
The OpenAI product page said, “People are turning to AI for deeply personal questions and increasingly high-stakes work,” and it added that “Over time, a ChatGPT account can hold sensitive personal and professional context.”

It then listed examples: “For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security-conscious, the stakes are even higher.”
PCMag similarly described the mode as “for users looking for top-tier account protection,” and said OpenAI designed it for “people at increased risk of digital attacks,” including “government officials, corporate executives, researchers, and human rights activists.”
TechCrunch also reported that OpenAI suggested AAS is a good fit for “political dissidents, journalists, researchers, and elected officials,” and it framed the rationale in terms of politically charged and risky work.
OpenAI’s own text connected the security controls to its broader “cybersecurity action plan,” stating that the effort is part of that plan to broaden access to technologies that can help protect communities, critical systems, and “our national security.”
The company also described the mechanism for reducing risk from compromised accounts by making sign-in phishing-resistant and by disabling recovery routes through email and SMS.
PCMag described the security mode as making accounts resistant to “phishing messages, password guessing, and SIM swap attacks,” and it said these are “how hackers usually crack online accounts.”
Next steps and broader rollout
Beyond the consumer-facing opt-in, OpenAI said it would require Advanced Account Security for members of its “Trusted Access for Cyber” program, and it set a specific start date.
OpenAI’s announcement, dated April 30, 2026 and titled “Introducing Advanced Account Security,” describes “an advanced set of protections against unauthorized access to ChatGPT accounts, Codex, and the sensitive information they can contain.”
In the OpenAI product announcement, the company stated that “Individual members of Trusted Access for Cyber accessing our most cyber capable and permissive models will be required to enable Advanced Account Security beginning June 1, 2026.”

It also described an alternative for organizations with trusted access, saying they can “attest that they have phishing resistant authentication as part of their single sign-on workflow.”
OpenAI’s text also said the partnership with Yubico is intended to “make that level of protection easier to access,” and it stated that the “bundle will be available to all eligible users in their security settings on web.”
PCMag reported that the new mode rolls out via ChatGPT’s web interface under “Settings > Security,” and described a “3-step process to enroll.”
TechCrunch similarly described the launch as a Thursday release of Advanced Account Security and said the Yubico partnership was designed to protect users from phishing, which is “considered to be a growing threat for chatbot users.”
The mezha.net report added that “Early adopters should plan recovery and access policies before enabling the feature,” aligning with OpenAI’s warning that account recovery responsibility increases when AAS is enabled.
In the Yubico materials, Jerrod Chong described the partnership as “a new model for phishing-resistant security at scale for the AI ecosystem,” and Dane Stuckey said “Security keys are one of the best ways to protect accounts from phishing.”