With businesses of all sizes constantly on the hunt for ways to improve productivity and streamline processes, demand for new software has never been stronger.
As a result, there is growing pressure on software developers to get new products to market more quickly. To achieve this, increasing numbers are making use of artificial intelligence (AI) tools and platforms.
According to a recent survey conducted by GitHub, more than nine out of ten US-based developers are using AI coding tools. The advantages they cite include productivity gains (53%), more time to focus on building and creating rather than on repetitive tasks (51%), and the prevention of burnout (41%).
These benefits are likely to pave the way for even greater AI adoption among software developers. In addition to reducing time spent on repetitive and often tedious work, these tools can suggest new lines of code and respond to technical questions with solid recommendations. AI tools can even offer research assistance and explain processes that may trip up a developer in their quest to solve an ever-growing list of challenges.
Security must be front and centre
However, it’s important not to lose sight of the need to keep secure coding practices front and centre in software development – even when deploying AI tooling. Developers cannot blindly trust the output, as so-called ‘hallucinations’ are still a leading concern.
Following security best practices and spotting poor coding patterns – the type that can lead to exploitation – have emerged as skills that developers must hone. It’s simply not possible to replace the critical ‘people perspective’ that anticipates and defends against increasingly sophisticated attack techniques.
Without human insights, there will be more developers around the world creating insecure software than ever before, and this is a situation that carries immense risk. While the productivity gains afforded to developers by AI coding assistants are a boon to swift code delivery, a lack of contextual security awareness can increase the number of exploitable vulnerabilities.
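The kind of insecure pattern at stake here can be very mundane. As a hypothetical illustration (the survey and article do not include code), consider a query helper of the sort an AI assistant might plausibly suggest: it runs, passes a quick test, and is still exploitable. The table and function names below are invented for the example.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Insecure pattern: building SQL by string concatenation lets
    # attacker-controlled input rewrite the query, e.g. the username
    # "x' OR '1'='1" returns every row in the table.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # The fix a security-aware reviewer would insist on: a parameterised
    # query, so the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions look similar at a glance, which is exactly why a human reviewer who recognises the pattern remains essential.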
Although speed is considered a virtue in software development, it must be balanced against security best practices. Developers can strike that balance when they are properly educated.
The importance of standards
Meanwhile, there is also a clear need for agreed standards when it comes to security and AI coding. To date, the technology simply hasn’t been trained on enough examples of insecure code to reliably identify the wide range of threats that exist.
The industry may get there in a few years, but it’s not there yet, and until that day arrives, software vendors should not blindly trust AI tools to produce quality products that are secure.
There is still a need for security-skilled developers to drive organisational strategies and produce protected code across the following tasks:
- Fixing bugs: While some AI tools will flag potential vulnerabilities and inconsistencies, humans must still provide cautionary oversight. Detection is only as accurate as the developer’s initial prompting, and the developer needs to understand how AI recommendations apply in the context of the wider project.
- Focusing on the big picture: AI isn’t ready to fly solo with complicated components, for example, or to brainstorm new and creative solutions to DevOps challenges. Again, developers have the technical knowledge to understand big-picture goals and potential outcomes, and must continue applying security best practices.
- Implementing new languages: AI will slow developers down if they’re working with unfamiliar languages or frameworks. Building a comfortable level of understanding is an occupational reality that takes time, training, and agile learning.
- Collaborating through feedback: Many developers say that constructive feedback has a positive impact on their work. For now, at least, organisations should continue designating collaboration as a human-to-human process.
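The first point – that an AI-suggested fix can be correct in isolation yet wrong in context – can be sketched with another hypothetical example. Suppose an assistant is asked to guard a file-save routine and adds a simple non-empty check; only a reviewer who knows the wider project spots that user input can still escape the upload directory. All names and paths below are invented for illustration.

```python
import os

UPLOAD_DIR = "/srv/uploads"  # assumed location, for illustration only

def save_path_naive(filename):
    # The narrow, AI-style fix: rejects empty names and nothing more,
    # so "../../etc/passwd" still slips through.
    if not filename:
        raise ValueError("filename required")
    return os.path.join(UPLOAD_DIR, filename)

def save_path_reviewed(filename):
    # The contextual, human-reviewed fix: normalise the result and
    # confirm it stays inside the upload directory.
    if not filename:
        raise ValueError("filename required")
    candidate = os.path.normpath(os.path.join(UPLOAD_DIR, filename))
    if not candidate.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("path escapes upload directory")
    return candidate
```

The naive guard satisfies the literal prompt; the reviewed version reflects project-level knowledge that no tool was asked for.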
The use of AI tools in the code creation process is still very much in its infancy. In many cases, insufficient attention has been given to the implications for code security and resilience.
By understanding the vital role humans will continue to have in the process, software vendors can be confident they are creating products that operate effectively and are secure from cyber threats.