Generative AI meets Cybersecurity
Will Generative AI become our strongest ally or our most significant security threat?
Generative AI – and services like Claude, ChatGPT, DALL·E, and others that can create text, images, or code – have quickly transformed how we think about creativity, productivity, and automation. But now the same technology is starting to play a bigger role in an entirely different domain: cybersecurity.

The security community hesitates to use Generative AI, which is creative and sometimes hallucinatory. These are not the preferred characteristics in a field where assurances, guarantees, and proofs are often required. However, there is also a curiosity, realizing that creativity could be utilized, mundane work could be automated, and analysis of complex data streams could be delegated to a digital colleague.
This was the backdrop for a short talk I gave earlier this spring, starting the conversation on Generative AI for Cybersecurity in an organization. I based it on four papers reviewing the research field. Their combined work identifies current uses, emerging possibilities, and risks.
I thought I would share the work and references so you can explore the topics that fascinate you more deeply.
💬 What do you think – will Generative AI become our strongest ally or our most significant security threat? Feel free to comment or get in touch.
Current Use Cases – Where Generative AI Is Already Making a Difference
Intrusion Detection (IDS) with GANs
GANs generate synthetic network traffic to train smarter IDS systems. This enables the detection of previously unknown attacks, especially through adversarial training, dataset balancing, and data augmentation.
→ Source: Coppolino et al. (2025)Phishing and Spam Detection with LLMs
LLMs analyze textual content to detect suspicious language patterns and email domains, improving phishing detection accuracy.
→ Source: Ferrag et al. (2025)Incident Response and Forensics
During active attacks, LLMs can quickly analyze logs, suggest mitigation strategies, and summarize the incident.
→ Source: Ferrag et al. (2025)Penetration Testing and Red Team Analysis
GenAI simulates realistic attack scenarios (e.g., phishing emails, exploit chains) to evaluate organizational resilience in test environments.
→ Source: Sai et al. (2024)Chatbots for Cybersecurity
AI-driven chatbots provide real-time assistance, handle incident reporting, and offer user education and training in security environments.
→ Source: Ferrag et al. (2025)Password Security and Analysis
Models like PassGAN are used to understand common password patterns (for testing purposes) and to generate stronger passwords.
→ Source: Sai et al. (2024)
Emerging Opportunities – and Risks
Simulated Cybersecurity Environments (Cyber Ranges)
Generative AI can create simulated environments with realistic threats and traffic for training and system testing.
→ Source: Sai et al. (2024)Proactive Defense via Threat Modeling
LLMs analyze historical attacks and predict future threat patterns, enabling organizations to stay one step ahead.
→ Source: Sai et al. (2024)Security Protocol Verification
LLMs assist in analyzing and verifying the implementation of protocols like TLS, IPSec, etc., to identify weaknesses.
→ Source: Ferrag et al. (2025)Automated Vulnerability Analysis and Code Review
Code-focused LLMs (like CodeLlama) analyze, repair, and propose more secure code and detect vulnerabilities.
→ Source: Zhang et al. (2025)Real-Time Threat Intelligence via RAG (Retrieval-Augmented Generation)
LLMs are connected to external threat databases and documents to provide fact-based, up-to-date answers to security-related queries.
→ Source: Ferrag et al. (2025)
When Attackers Use the Same Tools
Attack Obfuscation with GANs
Attackers can use AI to mask real attacks by injecting synthetic traffic that confuses IDS systems.
→ Source: Coppolino et al. (2025)Prompt Injection and Jailbreaking in LLMs
Language models can be manipulated through specially crafted instructions that make them produce forbidden outputs or bypass safeguards.
→ Source: Ferrag et al. (2025), Zhang et al. (2025)Automated Exploit Code Generation
GenAI can assist attackers in writing exploit scripts or automatically identifying system vulnerabilities.
→ Source: Zhang et al. (2025)
What Does This Mean for the Future of Cybersecurity?
Generative AI is changing the playing field for defenders and attackers alike. The four papers point to a shared conclusion: we must embrace the technology, but do so with full awareness of its dual-use nature.
Key directions going forward:
Responsible Use: Organizations need clear guidelines for using GenAI in security settings.
Testing and Validation: AI-driven security systems must be rigorously tested, just like the threats they aim to defend against.
Strategic Understanding: Generative AI isn’t just a new tool – it represents a new dimension in the digital power game.
Sources:
Coppolino et al. (2025): The Good, the Bad, and the Algorithm – Focused on GANs for both defense and offense. Link to the Paper.
Ferrag et al. (2025): Generative AI in Cybersecurity – A comprehensive overview of LLM applications and vulnerabilities. Link to the Paper.
Sai et al. (2024): Analyzing the Potential of ChatGPT, DALL-E, and Other Models – Practical examples from industry use. Link to the Paper.
Zhang et al. (2025): When LLMs Meet Cybersecurity – A systematic review covering over 300 publications and 25 models. Link to Paper.

