Craig Williams
April 17, 2024 · 4 mins read

The effect and impact of AI on Software Development


The use of AI in cybersecurity is increasing rapidly, with many companies adopting it as a key tool in their cybersecurity strategy.

Traditional cybersecurity relied on signature-based and rule-based detection systems, with largely manual analysis and response. This approach to managing cybersecurity was ultimately very time-consuming and costly.

With the introduction of AI into the cybersecurity world, large amounts of historical and real-time data can be consumed from numerous sources, and by using machine learning algorithms these systems can automatically adapt to evolving security threats. This allows organisations not only to react more quickly, but also to free up resources for more value-adding activities.
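As a minimal illustrative sketch of this idea (the metric, data, and threshold here are all hypothetical, and real systems use far richer models), a defence might flag anomalous activity by comparing current behaviour against a historical baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hourly failed-login counts observed over the past day (made-up data).
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6]
print(is_anomalous(baseline, 6))    # within the normal range
print(is_anomalous(baseline, 60))   # flagged as anomalous
```

Because the baseline itself is data-driven, the same code adapts as behaviour changes over time, which is the essence of what the machine-learning approaches described above do at much greater scale.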

However, alongside these benefits, AI can just as easily be used maliciously to attack organisations. It can be used to manipulate and deceive the systems that protect organisations, to enhance attacks, or to evade traditional security defences.

Even though development teams will benefit from the ethical use of AI in security and the SDLC, they need to be acutely aware of how AI can be used maliciously against the applications they develop. Security needs to be top of mind now, more than ever.

There are several strategies development teams can use to reduce the incidence of insecure code moving beyond the development phase of the SDLC, and therefore to reduce the number of breaches and vulnerabilities deployed to production.

Research indicates that during a traditional development programme, 90% of flaws are introduced during the design and development of an application, yet 80% of them are only identified and fixed during the testing phases. Not only does this elevate cost and delay delivery, it also increases the risk of deploying vulnerable applications into a world where AI is being used to exploit those vulnerabilities.

With this in mind, developers need to be proactive and embrace the shift-left mantra when building applications. Read more about shift-left testing in Vuyiswa Mahlasela’s article. Shift-left means bringing testing and quality earlier into the development lifecycle rather than leaving them to the end of the process. This includes security, and allows for the early elimination of vulnerabilities and shorter feedback cycles.

With modern IDEs, developers have tooling at their fingertips that identifies vulnerabilities and bad coding practices as they write their code. This tooling includes SonarLint and Snyk Security.

As an example, here is what Snyk Code promises in terms of its AI and ML capabilities, from the Snyk website:

“Snyk Code learns from the knowledge of the global developer community using a unique human-guided process which makes it industry-leading in its speed and accuracy. Fix guidance is offered in-line with code with additional explanations and example fixes from open source projects that fixed similar issues. Address issues in the comfort of your workbench even before issues get stored into the source code management.”

This is a perfect example of a positive and productive use of AI for the betterment of the development experience.

Further to this support within the IDE, using CI pipelines with stringent quality gates for security and code quality also plays a significant role in reducing the potential for deploying code that is buggy or has vulnerabilities built into it.
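A quality gate can be as simple as a pipeline step that fails the build when a scan reports findings above an agreed severity. The sketch below assumes a hypothetical scanner report format and severity policy; in a real pipeline the report would come from the output of a SAST or dependency-scanning step such as Snyk or SonarQube:

```python
import json

# Severity levels that should fail the build (a hypothetical policy).
BLOCKING = {"critical", "high"}

def quality_gate(report_json: str) -> bool:
    """Return True if the scan report passes the gate, i.e. contains
    no findings at a blocking severity level."""
    findings = json.loads(report_json)
    blocking = [f for f in findings if f["severity"] in BLOCKING]
    for f in blocking:
        print(f"BLOCKED: {f['id']} ({f['severity']}): {f['title']}")
    return not blocking

# A mocked scanner report, purely for illustration.
report = json.dumps([
    {"id": "VULN-1", "severity": "low", "title": "Verbose error message"},
    {"id": "VULN-2", "severity": "high", "title": "SQL injection"},
])
ok = quality_gate(report)
print("Gate passed" if ok else "Gate failed - blocking the deployment")
```

The key design choice is that the gate is automated and non-negotiable: insecure code is stopped at the pipeline rather than relying on a human remembering to check the report.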

As with most code that is developed, third-party components are used to deliver common functionality and frameworks. It is just as important to have a real-time handle on these third-party components and libraries in terms of vulnerabilities.

The potential risk introduced by third-party components can be mitigated by maintaining a regular update and refresh cycle. With the world constantly evolving, software developers need to stay aware of what they are using in their code, including the downstream code over which they have no direct control.
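Conceptually, this boils down to cross-referencing the exact versions you have pinned against known advisories. The package names, versions, and advisory data below are entirely made up; in practice this job is automated by tools such as Snyk Open Source, Dependabot, or OWASP Dependency-Check:

```python
# Pinned versions, as they might appear in a lock file (hypothetical).
pinned = {"webframework": "2.1.0", "jsonparser": "1.4.2", "authlib-x": "0.9.1"}

# Hypothetical advisory data: package -> known-vulnerable versions.
advisories = {"jsonparser": {"1.4.1", "1.4.2"}, "imaging-kit": {"3.0.0"}}

def vulnerable_dependencies(pinned, advisories):
    """Return the subset of pinned packages whose exact version
    appears in the advisory data."""
    return {name: version for name, version in pinned.items()
            if version in advisories.get(name, set())}

print(vulnerable_dependencies(pinned, advisories))
# {'jsonparser': '1.4.2'}
```

Running a check like this on every build, rather than on an occasional audit, is what turns a "regular update cycle" into a real-time handle on third-party risk.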

Containerising applications and treating infrastructure as code also allow for more control over, and awareness of, what is being deployed. They also make it easier to identify vulnerabilities logged against a container, as container registries are capable of scanning an image and listing its CVEs prior to deployment.

With security shifting left, more of the responsibility sits with developers and solution architects to produce code and applications with minimal vulnerabilities. This allows security teams to focus on predicting where the next attack vectors will come from and how AI can be used to evolve defences against the constantly evolving malicious use of AI.
