OpenAI on Tuesday introduced GPT-5.4-Cyber, a specialised version of its latest artificial intelligence model built for defensive cybersecurity, as competition with rival Anthropic over advanced AI capabilities intensifies.
The announcement comes days after Anthropic revealed its own frontier model, Mythos, as part of a controlled initiative known as Project Glasswing. Under the programme, selected organisations have been granted limited access to the system to identify security weaknesses in software and infrastructure. Anthropic said the model has already uncovered thousands of vulnerabilities across operating systems, web browsers and other widely used technologies.
OpenAI’s new model is aimed at strengthening cyber defence by assisting security professionals in identifying and analysing potential threats. The company said GPT-5.4-Cyber will be released initially to a restricted group of vetted security vendors, organisations and researchers due to its enhanced capabilities and more flexible handling of sensitive cybersecurity tasks.
The rollout will take place through OpenAI’s Trusted Access for Cyber programme, which was launched earlier this year to provide controlled access to advanced tools for verified defenders. The company said it is expanding the initiative to include thousands of individual cybersecurity professionals and hundreds of teams responsible for protecting critical systems.
As part of the update, OpenAI is introducing new access tiers within the programme. Higher levels of verification will allow participants to use more powerful features, with top-tier users gaining access to GPT-5.4-Cyber. These capabilities include advanced vulnerability research and deeper analysis of potential threats, areas that typically require careful oversight due to their dual-use nature.
The move reflects a broader trend in the artificial intelligence sector, where companies are racing to build tools that probe software for security flaws. While such systems offer significant benefits for identifying weaknesses before attackers can exploit them, they also raise concerns about misuse if placed in the wrong hands.
Anthropic’s Mythos model has already sparked debate after the company limited its availability over fears it could pose serious cybersecurity risks if widely released. OpenAI’s decision to follow a similarly controlled distribution model suggests growing caution across the industry.
With cyber threats increasing in scale and complexity, both companies are positioning their technologies as tools for defence rather than disruption. The challenge for developers and regulators alike will be ensuring that these powerful systems remain firmly in the hands of trusted users while still delivering meaningful improvements in global cybersecurity.
