When I think of application security metrics, I think of an immature field with no actuarial data sets. As a result, I see some interesting numbers come out on vulnerabilities per thousand lines of code, code churn, insecure programming languages, etc… Rarely are they even close to being statistically sound: no standard deviations, no error rates, unstated assumptions, etc… So when I need to triage which area of a code base to focus on, I fall back on one tried and true method: code age.
This process is built upon the following assumptions:
- Old code is likely to have more security vulnerabilities than newer code, because newer code benefits from security vulnerability awareness training, secure libraries, and other engineering process improvements that were not in place when the old code was written.
- Threats evolve. Functionality that was never expected to be automated is now easily automated. Do a diff between the OWASP Top 10 2010, 2007, and 2004 lists; the proof is in that diff pudding. Third-party library dependencies have had their threat models and vulnerabilities change, and as a result your code is now at greater risk. Attackers have gotten smarter, acquired better tool sets, and gained additional resources. Think of code as fermenting: its security posture doesn't get better with age.
The process itself:

1. Identify all applicable source code files / binaries.
2. Proactively prevent new security vulnerabilities from being introduced into that source code / those binaries.
3. Rank files / binaries by age, where age is when the source was originally created.
4. Perform static analysis on the old code.
5. Begin the security push phase of the SDLC: a hands-on review for security vulnerabilities. This phase forces engineering to look for vulnerabilities in old code.
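As a sketch of the ranking step, here is a minimal Python script that orders a repository's tracked files by creation date, oldest first. It assumes Git is the version control system; the file-extension filter and the helper names (`creation_timestamp`, `rank_by_age`) are illustrative choices, not anything from the original process.

```python
"""Rank tracked source files by age -- a sketch, assuming a Git repo."""
import subprocess


def creation_timestamp(path):
    """Unix timestamp of the commit that first added `path`."""
    # --diff-filter=A limits the log to commits that Added the file;
    # --follow tracks it across renames. git log is newest-first, so
    # the last line is the earliest (creation) commit.
    out = subprocess.run(
        ["git", "log", "--follow", "--diff-filter=A", "--format=%at", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return int(out[-1]) if out else 0


def rank_by_age(extensions=(".c", ".cpp", ".java")):
    """Return tracked files matching `extensions`, oldest first."""
    files = subprocess.run(
        ["git", "ls-files"], capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    files = [f for f in files if f.endswith(tuple(extensions))]
    # Oldest files float to the top: these are the first candidates
    # for static analysis and the hands-on security push.
    return sorted(files, key=creation_timestamp)
```

Running `rank_by_age()` at the root of a repository yields the review order: feed the top of the list to your static analysis tool and reviewers first.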