AI Code Assistants: Fast, Helpful — and Dangerously Insecure?
TL;DR: Even with the most advanced AI assistant at your side, secure software still relies on human intervention.
There is, of course, no way back. AI-assisted coding is a mature technology, pushing developers leaps ahead of those stuck in their old ways. Instead of writing every line by hand, developers can delegate part of the work to an agent working from natural-language instructions. This makes coding more inclusive for novice users, and it is a serious power-up for seasoned developers too. But is this way of working unproblematic? Should we be wary of how and when these tools are used? Let's look at some research.
A few years ago, a team of Stanford researchers posed a deceptively simple question: Do users write more insecure code when using AI code assistants? Their study, Do Users Write More Insecure Code with AI Assistants? [1], delivered an unambiguous answer:
Yes, they do!
And this conclusion still holds in 2025.
Their study examined how 47 participants approached security-related programming tasks, with or without the help of an AI code assistant built on OpenAI's Codex (a precursor to the models behind today's ChatGPT). The participants worked across Python, JavaScript, and C, solving tasks such as encrypting and signing data, handling file paths safely, avoiding SQL injection, and securely formatting strings.
The results were striking. Participants who used the AI assistant wrote significantly less secure code on four out of five tasks. At the same time, they were more confident in the security of their solutions than those in the control group. The AI often introduced subtle but serious security flaws — for example, using unsafe cryptographic algorithms, failing to sanitize user inputs, or constructing vulnerable SQL queries. Yet many users trusted the AI’s output without fully verifying it.
This is an excellent example of how AI can give users a false sense of security. The generated code appears correct, compiles, and often works for basic test cases; however, beneath the surface, it may be riddled with vulnerabilities. Especially for less experienced developers, the AI’s authoritative tone and polished output can be dangerously persuasive.
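To make this concrete, here is a minimal, self-contained Python sketch, written for this article rather than taken from the study, of the kind of SQL injection flaw the researchers describe. A query assembled by pasting user input into the SQL string looks reasonable and works for ordinary input, while a parameterized query treats the same input purely as data.

```python
import sqlite3

# Illustrative sketch only (not code from the study).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "alice@example.com"), ("bob", "bob@example.com")],
)

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Insecure: the input is interpolated straight into the SQL string.
# The OR '1'='1' payload turns the filter into a tautology and leaks every row.
leaky = conn.execute(
    f"SELECT name, email FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-formatted query returned:", leaky)  # both users

# Secure: a parameterized query keeps the input out of the SQL grammar.
safe = conn.execute(
    "SELECT name, email FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)  # no rows
```

Both versions behave identically for an ordinary username, which is exactly why this kind of flaw survives a quick glance and a passing test run.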
When the study was first conducted, AI coding tools like GitHub Copilot were still new. Large language models were just starting to appear in everyday developer workflows. Fast forward to today: AI code assistants are everywhere. Millions of developers are using tools like Cursor, ChatGPT, and Copilot. Companies in security-sensitive fields, such as finance, healthcare, and critical infrastructure, are rapidly adopting AI-assisted development.
This makes the findings of that early study even more relevant today. The core problem remains: AI models are trained on massive troves of open-source code, much of which is imperfect or insecure. The models reflect those flaws. Worse, because the output of these models is syntactically correct and often works for basic test cases, it fosters a false sense of trust, especially among less experienced developers.
So, what can we do about it?
The paper offers several recommendations:
AI training data should be carefully filtered, or at least re-weighted, to emphasize secure practices.
AI assistants should provide guardrails by warning about dangerous APIs, suggesting secure defaults, and nudging users toward safer patterns (a minimal sketch of such a check follows this list).
Secure coding tools — such as static analysis and automated testing — should be more deeply integrated into AI-assisted development workflows.
Finally, these tools should help users build realistic expectations about what the AI can and cannot do.
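As a rough illustration of the guardrail and tooling recommendations above, the sketch below shows what a very small check could look like: a pattern-based reviewer that flags a few well-known risky constructs in an AI suggestion before a developer accepts it. The rule list, function name, and messages are hypothetical and written for this article; a real workflow would lean on an established static analyzer such as Bandit or Semgrep, plus automated tests.

```python
import re

# Hypothetical guardrail sketch (rules and names invented for this article):
# a tiny pattern-based reviewer for AI-generated Python snippets. A real
# pipeline would use an established analyzer such as Bandit or Semgrep.
RISKY_PATTERNS = {
    r"\bhashlib\.md5\b": (
        "MD5 is unsuitable for security purposes; prefer SHA-256 or a "
        "dedicated password hash (bcrypt, scrypt, argon2)."
    ),
    r"\beval\s*\(": "eval() on untrusted input allows arbitrary code execution.",
    r"\.execute\(\s*f[\"']": (
        "SQL built with an f-string; use parameterized queries instead."
    ),
}

def review_snippet(code: str) -> list[str]:
    """Return human-readable warnings for risky patterns found in `code`."""
    return [advice for pattern, advice in RISKY_PATTERNS.items()
            if re.search(pattern, code)]

if __name__ == "__main__":
    suggestion = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
    for warning in review_snippet(suggestion):
        print("WARNING:", warning)
```

Checks like this will never catch everything, which is exactly why the final recommendation matters: a guardrail assists the human reviewer, it does not replace one.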
Picking up on that last recommendation, it is essential to remember that the problem is not purely technical. It's all about us humans! This is explored in a recent study by the Swedish Defence Research Agency (FOI), which asks: why do developers care about writing secure code in the first place?
The 2024 survey "Software Security Depends on Developers’ Motivations and Deterrents" gathered insights from Swedish developers working on societally critical systems [2]. The results are illuminating. The strongest motivators for writing secure code are internal: a sense of personal responsibility, an awareness of the risks their code may introduce, and genuine concern for users and their organization’s reputation.
External motivators — such as mandatory processes, compliance checks, or fear of blame — were weaker. Among the most significant deterrents were factors like a lack of market competition (where security is not viewed as commercially critical) and the perception that developers would not be held personally accountable for vulnerabilities.
In other words, secure coding is not simply a matter of tools or training. It is deeply tied to culture, values, and organizational context. And here is where the two studies intersect. AI assistants can certainly make developers more productive. But they can also amplify complacency, misplaced trust, and existing cultural weaknesses. A developer who already sees security as their personal responsibility is more likely to evaluate code critically, including AI-generated code. A developer who views security as someone else's concern, or who is racing to meet a deadline, may simply accept insecure suggestions without question.
Final thoughts
What both studies ultimately remind us of is this: secure software does not emerge by accident, nor can it be fully delegated to an AI. It depends on human judgment, developers who care about security, organizations that foster a culture of responsibility, and tools designed with safety in mind.
As AI becomes increasingly integrated into the software development process, this balance becomes ever more crucial. Smarter AI tools must be accompanied by smarter human practices. Even with the most advanced AI assistant at your side, secure software still relies on human intervention.
References:
[1] Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023). Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). https://doi.org/10.1145/3576915.3623157
[2] Karlzén, H., Falkcrona, J., Eidenskog, D., & Karresand, M. (2024). Software Security Depends on Developers’ Motivations and Deterrents – A Survey Study. FOI-R--5691--SE, Swedish Defence Research Agency. (Study only available in Swedish)



