Frustrated by a flood of results from all your software security testing tools?



We’ve all been there, trying to get application releases out the door quickly. Innovation, responsiveness to customer requests, and ongoing functionality updates are all accelerating the pace of application delivery. Meanwhile, the development environment has grown: it encompasses more open source than ever, adopts containers at an accelerating pace, and integrates testing tools throughout the software development lifecycle (SDLC). Security teams must pick the right tools for the applications their organizations build, then find and prioritize the important findings among the thousands of “issues” those tools can generate. It’s no wonder security teams are overwhelmed and frustrated by the flood of results from a slew of software security testing tools.

How many application software security testing tools do you have?

Take an inventory. Chances are that you have a set of tools you’ve chosen based on the breadth of their capabilities, where they fit in the SDLC, the accuracy of results, and how they fit the needs of your organization. Starting with the basics, you’ve chosen a combination of tools and methods to reduce the risk of vulnerabilities in the applications your organization delivers:

  • Static application security testing (SAST) tools analyze application source code, byte code, and binaries for coding and design conditions that may indicate security vulnerabilities. SAST solutions typically analyze an application before the code is compiled, early in the SDLC. SAST can review 100% of the code base, indicate the location of vulnerabilities by file name and line number, and highlight problematic code (see the short example after this list).

  • Software Composition Analysis (SCA) tools track and analyze open source components brought into a project, including related components, supporting libraries, and direct and indirect dependencies. In addition, SCA tools detect software licenses, deprecated dependencies, vulnerabilities, and potential exploits. SCA integrates across the SDLC, identifying potential open source problems very early in the development process and alerting organizations to newly disclosed vulnerabilities after an application is released.

  • Interactive Application Security Testing (IAST) tools deploy agents and sensors in running applications, continuously analyzing the interactions initiated by automated tests, manual tests, or a combination of the two to identify vulnerabilities in running web applications. Typically, IAST tools run during the test or QA stage of the SDLC.

  • Dynamic application security testing (DAST) tools examine an application while it’s running, with no access to or visibility into the source code. They simulate attacks and, based on the application’s responses, determine whether it is vulnerable and possibly susceptible to a malicious attack. Because DAST requires a running application to perform testing, it can only be used late in the SDLC, during the test and production phases. And while it identifies potential vulnerabilities, it can’t point to specific lines of vulnerable code.

  • Penetration testing, or pen testing, is primarily manual rather than automated. While pen testers use automated scanning and testing tools, they also apply their knowledge of attack techniques to perform in-depth testing that automated tools may not be able to provide. Pen tests can evaluate any system and mimic the behavior of malicious hackers to simulate a real-world attack, which can be very valuable. Penetration testing fits late in the SDLC, often just prior to release.
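To make the SAST description above concrete, here’s a minimal sketch of the kind of flaw a static analyzer flags and the fix it points you toward. The code is illustrative and not tied to any particular SAST product.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # What SAST flags: user input formatted directly into the SQL
    # string is a classic injection sink (try username = "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The remediation: a parameterized query keeps user input out of
    # the SQL grammar entirely, so the injection attempt is just data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Because SAST sees the source, a report on `find_user_unsafe` can name the file and the exact line of the tainted query, which is what makes static findings actionable so early in the SDLC.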

Lots of tools, lots of results, now what?

Depending on your organization’s size and structure, someone is running all these tests and getting a lot of results. In a perfect world, security teams would sort through and prioritize the issues for development, but these teams rarely have the resources (or time) to do so. So development is left to sort through the potential issues the tools identify. While these application security testing tools provide valuable insight into your codebase, they can end up clogging development teams’ bug reports with extraneous and unvetted “issues.”

The question isn’t whether there are vulnerabilities (there are); it’s how to sort through the noise, figure out which are the severe bugs, and fix them. Veracode’s State of Software Security (Volume 11) shows that flaw density rises with application size: larger applications don’t just have more flaws in total, they have more flaws per megabyte of code. And applications today are big, and they’re getting bigger. In a 100 MB app, Veracode scans indicate you’ll find about 2,000 vulnerabilities, roughly 20 flaws per megabyte.

Development and security teams alike know that there are bugs and flaws in applications, and they treat different types of issues differently. Some are an easy fix, some can wait for another release, and others are severe vulnerabilities that need rapid remediation. Organizations can’t afford to sort through 2,000 vulnerabilities; they need to know which 5, 10, or 25 are the most critical and which they can safely ignore.

Veracode’s data comes from scanning 130,000 applications over a 12-month period. From an open source perspective, Synopsys’s 2020 Open Source Security and Risk Analysis Report showed that 75% of codebases contained vulnerabilities, and 49% contained high-risk vulnerabilities.

These reports, while a helpful overview of the results of scanning codebases over the previous year, can’t give your teams insight into the results from your application security testing tools and additional results from manual testing. You’re still faced with a lot of results and the need to aggregate them. Once you’ve aggregated all results, you need to normalize them if you want to make decisions about the actual risks you face.
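What normalization means in practice: every tool reports in its own format and severity scale, so you map each raw result onto one shared schema before making risk decisions. Here’s a minimal sketch in Python; the schema and severity mappings are hypothetical, since real tools emit SARIF, JSON, or XML with their own field names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str
    rule_id: str
    location: str   # file:line, URL, or component name
    severity: int   # normalized scale: 1 (info) to 5 (critical)

# Map each tool's native severity labels onto the shared scale.
SEVERITY_MAP = {
    "sast": {"note": 1, "warning": 3, "error": 5},
    "sca":  {"low": 2, "medium": 3, "high": 4, "critical": 5},
    "dast": {"informational": 1, "low": 2, "medium": 3, "high": 5},
}

def normalize(tool: str, raw: dict) -> Finding:
    """Convert one raw tool result into the shared schema."""
    return Finding(
        tool=tool,
        rule_id=raw["rule"],
        location=raw["where"],
        severity=SEVERITY_MAP[tool][raw["severity"].lower()],
    )

def aggregate(findings: list[Finding]) -> set[Finding]:
    # Frozen dataclasses are hashable, so a set drops exact duplicates
    # that show up when the same finding is re-reported across scans.
    return set(findings)
```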

Finding your most critical vulnerabilities starts with removing noise

Even after you’ve aggregated and normalized your results, probably only a fraction of the potentially thousands of issues are worth the time and effort necessary to remediate them. Application security teams have to comb through the remaining results from software security testing tools and triage them — flagging the ones that must be fixed, weeding out the false positives, and suppressing issues that don’t meet the application’s risk threshold.
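A rule-based version of that triage pass might look like the sketch below, reusing the hypothetical Finding schema from the earlier example. The threshold and suppression list are illustrative; every application sets its own.

```python
RISK_THRESHOLD = 4         # this app only escalates "high" and above
SUPPRESSED_RULES = {       # rules the team has already vetted as noise
    "sast:hardcoded-test-credential",
    "dast:missing-cache-header",
}

def triage(findings):
    must_fix, suppressed = [], []
    for f in findings:
        rule_key = f"{f.tool}:{f.rule_id}"
        if rule_key in SUPPRESSED_RULES or f.severity < RISK_THRESHOLD:
            suppressed.append(f)   # below this app's risk threshold
        else:
            must_fix.append(f)     # flagged for the development backlog
    return must_fix, suppressed
```

The pain point is exactly here: someone has to write, maintain, and re-apply rules like these for every application, and the reasoning behind each suppression lives in people’s heads.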

Unfortunately, this process is time-consuming, repetitive, tedious, and doesn’t scale, particularly in today’s rapid deployment environment. Luckily, machine learning can automate the triage process, learning from your team’s past decisions for each application or class of applications to determine which issues to prioritize and which to suppress. This lets security and development focus only on the issues that pose the greatest threats to your goals, rather than re-making the same decisions over and over again.
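To show the shape of the idea (and only the shape; this is a toy illustration, not any vendor’s actual model), here’s a sketch that trains a classifier on a team’s past fix/suppress decisions using scikit-learn:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record pairs a normalized finding with the team's past decision.
history = [
    ({"tool": "sast", "rule": "sql-injection", "severity": 5}, "fix"),
    ({"tool": "sast", "rule": "unused-import", "severity": 1}, "suppress"),
    ({"tool": "sca", "rule": "CVE-2021-0001", "severity": 4}, "fix"),
    ({"tool": "dast", "rule": "missing-header", "severity": 2}, "suppress"),
]
features = [finding for finding, _ in history]
labels = [decision for _, decision in history]

# DictVectorizer one-hot encodes the string fields and passes numeric
# fields through; logistic regression then scores fix vs. suppress.
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(features, labels)

new_finding = {"tool": "sast", "rule": "sql-injection", "severity": 4}
print(model.predict([new_finding])[0])       # e.g. "fix"
print(model.predict_proba([new_finding]))    # confidence, for human review
```

In practice the feature set would be far richer (code context, data flow, component metadata) and the training history much larger, but the principle is the same: past triage decisions become training labels, and the model pre-sorts new findings so humans only review the contested ones.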

Learn how machine learning can increase AppSec triage speed and efficiency

