As artificial intelligence systems become more advanced, a new wave of AI models is transforming cybersecurity by offering unprecedented defensive capabilities while also raising concerns about their potential misuse in cyberattacks.
AI integrity at stake as advanced models reshape cybersecurity
Recent developments from leading AI firms, including Anthropic and OpenAI, reveal this growing dual-use dilemma.
As AI continues to evolve, the balance between innovation and security remains delicate, challenging governments, companies, and cybersecurity professionals to ensure that the same technologies strengthening digital defenses do not ultimately become the tools that undermine them.
Cutting-edge systems such as Anthropic’s ‘Mythos’ and OpenAI’s cybersecurity-focused models can now identify critical software vulnerabilities across operating systems and web platforms at remarkable speed.
While these innovations promise stronger digital defenses, experts warn they could also be weaponised if not carefully controlled.
According to the International AI Safety Report 2026, frontier AI systems are increasingly capable of automating complex technical tasks, including elements of vulnerability discovery and cyber intrusion.
Although fully autonomous cyberattacks remain limited, the report noted that these systems already play a significant role in identifying weaknesses in digital infrastructure.
Similarly, the Global Cybersecurity Outlook 2026 highlights how rapid AI adoption is reshaping the global threat landscape, making cyberattacks faster, more sophisticated, and harder to detect.
At the same time, AI is becoming indispensable to cybersecurity defense. A 2025 industry study found that 95 percent of cybersecurity professionals believe AI significantly enhances threat detection and response.
However, nearly half (45 percent) admit they remain unprepared for AI-driven threats, revealing a widening gap between capability and readiness.
Cybercriminals are already leveraging AI to scale their operations; reports show that AI-powered phishing campaigns, malware development, and social engineering attacks are becoming increasingly automated and sophisticated.
The scale of the challenge is evident: more than 44,000 software vulnerabilities were disclosed in 2025 alone, with AI playing a growing role in both their discovery and potential exploitation.
Andrew Bailey, governor of the Bank of England, recently cautioned that advanced AI models could enable ‘more sophisticated cyberattacks’, particularly in critical sectors such as banking.
In response, technology companies are adopting more cautious deployment strategies. OpenAI, for instance, has restricted access to some of its cybersecurity-focused models through vetted programs, implementing safeguards such as ‘know-your-customer’ verification and staged rollouts to limit misuse.
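A minimal sketch of what such access gating might look like in practice is shown below; the tier names, the verification flag, and the staging logic are illustrative assumptions, not OpenAI's actual mechanism.

```python
# Hypothetical sketch: gating access to a security-focused model behind
# identity verification and a staged rollout, in the spirit of the
# safeguards described above. All names and tiers are illustrative.
from dataclasses import dataclass

ROLLOUT_STAGES = {"internal": 0, "vetted_partners": 1, "general": 2}
CURRENT_STAGE = "vetted_partners"  # assumed stage the rollout has reached

@dataclass
class Customer:
    org_id: str
    kyc_verified: bool  # passed know-your-customer checks
    rollout_tier: str   # stage the customer was admitted into

def may_access_model(customer: Customer) -> bool:
    """Allow access only to verified customers whose tier is live."""
    if not customer.kyc_verified:
        return False
    return ROLLOUT_STAGES[customer.rollout_tier] <= ROLLOUT_STAGES[CURRENT_STAGE]

print(may_access_model(Customer("acme-sec", True, "vetted_partners")))  # True
print(may_access_model(Customer("unknown", False, "general")))          # False
```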
Industry experts say the core issue lies in maintaining AI integrity by ensuring that systems designed to protect do not become tools for harm.
Umanhonlen Gabriel, founder of Cyber Odyssey, emphasised that AI operates on both sides of the same coin in cybersecurity.
“While AI strengthens defenses by detecting vulnerabilities, improving resilience, and accelerating incident response, the same capabilities can also expose weaknesses that may be exploited.
“This creates a clear dual-use challenge,” Gabriel said, noting that trust, control, and alignment are critical in real-world deployments.
He stressed that maintaining AI integrity requires robust safeguards, including intent-aware query guardrails that can distinguish between legitimate security research and malicious exploitation.
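What such an intent-aware guardrail could look like is sketched below; the stubbed classifier, the labels, and the engagement allow-list are hypothetical stand-ins rather than any vendor's real implementation.

```python
# Hypothetical sketch of an intent-aware guardrail: queries are scored by a
# (stubbed) intent classifier, and only requests that look like legitimate
# security research tied to an authorised engagement reach the model.
AUTHORISED_SCOPES = {"acme-internal-audit"}  # assumed allow-list of engagements

def classify_intent(query: str) -> tuple[str, float]:
    """Stand-in for a trained intent classifier; returns (label, confidence)."""
    malicious_markers = ("bypass", "undetectable", "steal credentials")
    if any(marker in query.lower() for marker in malicious_markers):
        return "exploitation", 0.9
    return "research", 0.7

def guardrail(query: str, scope: str) -> bool:
    """Allow only research-intent queries from an authorised engagement."""
    label, confidence = classify_intent(query)
    if scope not in AUTHORISED_SCOPES:
        return False  # no authorised engagement on record
    return label == "research" and confidence >= 0.5

print(guardrail("Audit this code for SQL injection", "acme-internal-audit"))  # True
print(guardrail("Write undetectable malware", "acme-internal-audit"))         # False
```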
Continuous model updates are also essential, he added, given the rapidly evolving nature of cyber threats, though such updates must be carefully managed to avoid enhancing offensive capabilities.
Gabriel further highlighted the need for clear security frameworks to define acceptable use, ensuring AI systems remain constrained within ethical and authorised cybersecurity applications.
Akande Adedayo, Specialist Solutions Architect at 54pay Technologies, called for stronger regulatory oversight, noting that AI systems, when improperly configured, can generate more information than intended, potentially exposing sensitive data or system vulnerabilities.
“We need stricter regulations to ensure AI stays within its boundaries, doing good without causing harm,” Adedayo said.
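As a rough illustration of the kind of boundary enforcement Adedayo describes, the sketch below filters a model's output before it is returned; the redaction patterns and function names are assumptions for illustration, and a real deployment would use far more robust detection.

```python
# Hypothetical sketch of a post-generation filter that keeps a model "within
# its boundaries" by redacting output that looks like sensitive material.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like strings
    re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"),     # internal IP addresses
]

def redact(response: str) -> str:
    """Replace sensitive-looking spans before the response leaves the system."""
    for pattern in SENSITIVE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response

print(redact("Connect to 10.0.0.12 with api_key=abc123"))
# -> "Connect to [REDACTED] with [REDACTED]"
```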
