#cybersecurity | #hackerspace

Autonomy and the Death of CVEs? Is the Manual Process of Reporting Bugs Holding Back the Advent of Automated Tools?


How many potholes did you encounter on your way into work today? How many of them did you report to the city?

Vulnerability reporting works much the same way. Developers find bugs – and vulnerabilities – and don’t always report them. Unfortunately, the manual process of diagnosing and reporting each one is a deterrent. That same manual process is also holding automated tools back.

Software is Assembled

Software is assembled from pieces, not written from scratch. When your organization builds and deploys an app, you’re also inheriting the risk from each and every one of those code components. A 2019 Synopsys report [caution: email wall] found that 96% of the code bases they scanned included open source software, and up to 60% contained a known vulnerability.

The risks don’t stop there. Open source and third-party components are also heavily used when you operate software. For example, 44% of indexed sites use the Apache open source web server, meaning a single exploitable vulnerability in the Apache web server would have serious consequences for all of those sites.

So, how do you determine if you’re using a known vulnerable building block? You consult a database. These databases may assume different names, but at the root of many of them is the MITRE CVE database.
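
To make that concrete, here is a minimal sketch of what consulting such a database can look like. It queries NVD’s public CVE API (which republishes MITRE’s CVE entries) by keyword; the endpoint, parameter, and field names reflect the v2.0 API at the time of writing and are illustrative, not a blueprint for any particular product.

```python
"""Minimal sketch: look up known CVEs by keyword via NVD's public API.

NVD republishes MITRE CVE entries. Endpoint and field names below reflect
the v2.0 API at the time of writing; treat this as illustrative only.
"""
import json
import urllib.parse
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def search_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return up to `limit` CVE records whose descriptions match `keyword`."""
    query = urllib.parse.urlencode({"keywordSearch": keyword, "resultsPerPage": limit})
    with urllib.request.urlopen(f"{NVD_API}?{query}", timeout=30) as resp:
        payload = json.load(resp)
    return [item["cve"] for item in payload.get("vulnerabilities", [])]


if __name__ == "__main__":
    for cve in search_cves("apache http server"):
        # Each record carries an ID plus a human-written description --
        # a reminder that a person wrote and triaged every entry.
        descriptions = cve.get("descriptions", [])
        summary = descriptions[0]["value"] if descriptions else ""
        print(cve["id"], "-", summary[:100])
```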

Entire industries have been built on the ability to reference databases to identify known vulnerabilities in software. For example:

  • Software Component Analysis tools (e.g., BlackDuck, WhiteSource) allow developers to check build dependencies for known vulnerabilities (a minimal sketch of this kind of check follows this list).
  • Container Scanners (e.g., TwistLock, Anchore) check built Docker images for out-of-date, vulnerable libraries.
  • Network Scanners (e.g., Nessus, Metasploit) check deployed infrastructure for known vulnerabilities.
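
To show what the first bullet boils down to in practice, here is a rough sketch of an SCA-style check. It uses the public OSV.dev query API as a stand-in for a commercial tool’s vulnerability database; the hard-coded dependency list is hypothetical, where a real tool would parse a lockfile or build manifest instead.

```python
"""Sketch of an SCA-style dependency check against the OSV.dev database.

The dependency list is hypothetical; a real tool would parse it from a
lockfile or build manifest rather than hard-coding it.
"""
import json
import urllib.request

OSV_QUERY = "https://api.osv.dev/v1/query"

# Hypothetical build dependencies (name, version) as a manifest might list them.
DEPENDENCIES = [
    ("jinja2", "2.4.1"),
    ("requests", "2.31.0"),
]


def known_vulns(name: str, version: str) -> list[str]:
    """Ask OSV.dev for advisory IDs affecting this exact package version."""
    body = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]


if __name__ == "__main__":
    for name, version in DEPENDENCIES:
        ids = known_vulns(name, version)
        print(f"{name}=={version}: {', '.join(ids) if ids else 'no known advisories'}")
```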

But, here is the key question: where do these databases get their information?

Where Vulnerability Information Comes From…

Today, most vulnerability databases are created and maintained through huge amounts of manual effort. MITRE’s CVE database is the de facto standard, but it is populated by committed men and women who research bugs, determine their severity, and follow the manual reporting guidelines for the public good.

If there’s one thing we know, it’s that human processes don’t scale well. The cracks are beginning to show.

Here’s the problem: automated tools like fuzzing are getting better and better at finding new bugs and vulnerabilities. These automated vulnerability discovery tools don’t fit well with the current manual process for triaging and indexing vulnerabilities.
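
To see why discovery scales while reporting does not, consider a deliberately naive fuzzer sketch. The target function and its bug are invented for illustration, and real engines are coverage-guided and far smarter; the point is only that the loop runs unattended, while every crash it logs still needs a human to triage, deduplicate, and report.

```python
"""Toy random fuzzer: hammer a target with random bytes and collect crashes.

The target and its bug are invented. Real engines (libFuzzer, AFL++, and the
infrastructure that drives them) are coverage-guided and far more effective;
the point is only that discovery runs unattended while triage stays manual.
"""
import random


def parse_record(data: bytes) -> int:
    """Hypothetical target: a buggy parser that assumes at least 5 input bytes."""
    return int.from_bytes(data[:4], "big") + data[4]  # IndexError on short input


def fuzz(iterations: int = 100_000) -> list[bytes]:
    """Throw random inputs at the target and keep every input that crashes it."""
    crashes = []
    for _ in range(iterations):
        data = random.randbytes(random.randint(0, 16))
        try:
            parse_record(data)
        except Exception:
            crashes.append(data)  # each of these still needs manual triage
    return crashes


if __name__ == "__main__":
    found = fuzz()
    print(f"{len(found)} crashing inputs found; every one needs a human to report it")
```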

Google’s Automated Fuzzing

Consider this: automated fuzzing farms can autonomously uncover hundreds of new vulnerabilities each year. Let’s look at Google’s ClusterFuzz, since its statistics are public.

In Chrome:


