
AI and machine learning tools have played an important role in software development for many years, helping to drive efficiency and automation. The new generation of AI tools has the potential to supercharge this transformation, bringing even greater improvements to efficiency, cost-effectiveness, and innovation cycles.
However, these tools also come with new risks, including security vulnerabilities, governance challenges, and regulatory uncertainty. As with any new technological approach, organizations bringing AI tools, and specifically AI-generated code, into their development lifecycles must balance the benefits against the potential risks.
Faster, smarter, and more productive? The promises of AI-powered development
In common with most sectors investing in AI, one of the biggest draws in software development is the potential for boosting efficiency. The early days of the AI hype cycle saw some wildly inflated expectations, but realistically we're seeing productivity gains of anywhere from 5 to 50 percent across the industry so far.
This is mainly realized by automating repetitive tasks such as testing and infrastructure setup – work that is important but time-consuming and often seen as drudgework by developers. New capabilities also seem to emerge every other week, pushing these gains further.
Automating these activities frees developers to focus on higher-value tasks, ideally improving development velocity, raising quality, and delivering greater ROI. That translates into business velocity and the ability to innovate faster.
However, as with most technical advances, the increased use of AI also brings more risk if not managed properly.
The hidden security risks in AI-generated code
One of the biggest dangers of AI-generated code is assuming that it's more secure than human-written code. In fact, the opposite is often true. A study by Stanford University found that over-reliance on AI can result in code that is less secure and more prone to errors.
Typically, most AI tools currently produce code at the level you'd expect from a junior coder. It will likely do the job, but it will lack the refinement and innovation an experienced human developer can achieve. The technology is improving fast, but it's far from perfect.
AI also introduces some additional risks not often found in hand-crafted code. “AI hallucinations” (i.e., instances where AI models generate incorrect, misleading, or nonsensical outputs) have become less common but are still possible, so there’s always a chance the AI will go off-task and start writing useless code.
Data leaks are another issue. As demonstrated by the high-profile incident at Samsung, where employees pasted sensitive internal code into a public AI chatbot, there are real risks of sensitive data inadvertently being exposed or misused.
Pursuing best practices for securing AI code
The good news is that managing insecure AI code is not particularly more challenging than dealing with human error. At the end of the day, code is code.
While AI tools are certainly delivering noteworthy results, it’s a serious mistake to trust their output without close supervision from experienced human developers.
AI can suggest code fixes, but developers must verify them to ensure they’re stable and secure. The right visibility tools can automatically track AI-generated code changes so that nothing slips through to break the application.
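As a rough illustration of what that oversight might look like in practice, the sketch below checks recent git history for commits flagged as AI-assisted and fails the build if no human reviewer has signed off. The "AI-Assisted" and "Reviewed-By" commit trailers are hypothetical conventions chosen for this example, not the output of any particular visibility tool.

```python
import subprocess
import sys

# Hypothetical commit trailers, used purely for illustration:
#   AI-Assisted: true      -> the change includes AI-generated code
#   Reviewed-By: <person>  -> a human has verified the change
AI_TRAILER = "AI-Assisted: true"
REVIEW_TRAILER = "Reviewed-By:"


def recent_commits(rev_range="origin/main..HEAD"):
    """Yield (commit hash, full message) pairs for the given range."""
    out = subprocess.run(
        ["git", "log", "--format=%H%x1f%B%x1e", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    for record in out.split("\x1e"):
        record = record.strip()
        if record:
            sha, _, body = record.partition("\x1f")
            yield sha, body


def main():
    # Flag AI-assisted commits that carry no record of human review.
    unreviewed = [
        sha for sha, body in recent_commits()
        if AI_TRAILER in body and REVIEW_TRAILER not in body
    ]
    if unreviewed:
        print("AI-assisted commits missing human review:")
        for sha in unreviewed:
            print(f"  {sha}")
        sys.exit(1)  # fail the CI job until a reviewer signs off


if __name__ == "__main__":
    main()
```

A check like this doesn't make the code secure by itself, but it leaves an audit trail showing where AI-generated changes entered the codebase and who verified them.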
Again, treat AI-generated code like the output of a junior coder. As long as there is sufficient oversight, there should be minimal risk. I believe AI will reach the point where it needs far less supervision in the near future, but for now, it's important to keep a human in the loop.
Everyone is now a developer and the future of programming languages
Another aspect is vibe coding. Vibe coding is a programming technique that uses AI to write code based on natural language descriptions. It's a growing practice that lets programmers focus on ideas and architecture instead of manual coding. This means that everyone now has the potential to become a developer simply by interacting with an AI system in plain English: you describe the application you want, and the AI generates it for you.
This opens up a whole new world in which almost everyone in the enterprise is a potential developer. Quite exciting for your business velocity, but a real nightmare from a security standpoint. If the programming language of the future is plain English, how do you secure it? If everyone is now a potential developer, how do you protect development at that scale? This makes security education and governance a key concern for every organization's future.
What does good AI governance look like now?
With AI being such a nascent and changeable technology, there is still much uncertainty about regulatory and governance frameworks. The US Executive Order on ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ issued by former President Biden was recently rescinded by President Trump, creating further ambiguity.
Everyone using AI should be proactively working out their own guidelines in the meantime. AI is no longer just an IT issue — it’s affecting almost every business function, from finance to HR. As such, companies should create strict internal security policies covering areas like vendor risk management and setting parameters for how AI can be used by personnel.
Even organizations that are not yet integrating AI into their SDLC must future-proof their security approach, given that most applications are expected to incorporate AI models within 18-24 months.
There are multiple frameworks available that provide a solid starting point for developing AI governance. Resources like NIST's AI Risk Management Framework and ISO/IEC 42001, the first AI management system standard, offer guidance on understanding priorities and organizing activity.
It’s also important to consider any relevant regulatory and legal requirements. US companies developing software for healthcare will need to adhere to HIPAA, for example, while firms trading within the EU will need to comply with the GDPR. Some regions also have AI-specific regulations, such as the EU’s AI Act, which takes a risk-based approach and prohibits AI uses deemed to pose unacceptable risk.
AI may be a game-changer, but it’s not a magic bullet
AI is permanently changing the face of software development, transforming the workflows of development teams. While most of these changes are positive, companies must be prepared to tackle the increased risks that come with them.
We’re still in the early days of working out what good AI governance looks like, but if companies don’t start securing AI development now, they’ll soon fall behind. Organizations that proactively work to secure AI will gain a significant competitive edge over those that wait for regulatory and legal edicts.
Ori Bendet is VP Product Management, Checkmarx.