Here’s an idea: stop fixing every vulnerability you read about. The best thing to do, it turns out, is to look at the vulnerabilities that are in both Metasploit and the Exploit Database and fix those. That gives you the highest chance of fixing bugs that are likely to be used in an actual attack.
The flood of vulnerabilities published each month seems to be growing exponentially. It’s essentially impossible to keep up with the flow, and if you have a large network, it’s a full-time job for multiple people to track even a fraction of them. Security professionals have debated for years how to prioritize which vulnerabilities to patch, and there are a number of theories, ranging from fixing only critical bugs with high CVSS (Common Vulnerability Scoring System) scores, to fixing only flaws in your mission-critical applications, to fixing whatever you have time for.
But some new data collected and analyzed by the folks at Risk I/O shows that if you patch the vulnerabilities that are in both Metasploit and Exploit DB, you will have the best chance of fixing the ones that are most likely to be used in an attack. Ed Bellis and Michael Roytman looked at a dataset of 23 million vulnerabilities and 1.5 million breaches that occurred in June and July, which involved 103 different vulnerabilities.
“The best policy was fixing vulnerabilities with entries in both Metasploit and Exploit DB, yielding about a 30% success rate, or 9x better than anything CVSS gets to, and 15x better than random,” Roytman said in a blog post.
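The policy Roytman describes amounts to a set intersection: of the vulnerabilities observed on your network, patch first those that appear in both exploit sources. A minimal sketch of that triage logic, assuming CVE identifiers as the common key (the IDs and feed contents below are placeholders, not real Metasploit or Exploit DB data):

```python
# Sketch of the "both feeds" prioritization policy using set operations.
# All CVE IDs here are hypothetical placeholders for illustration only.

def prioritize(observed, metasploit, exploitdb):
    """Split observed CVEs into three patching tiers:
    in both exploit feeds, in exactly one, or in neither."""
    both = observed & metasploit & exploitdb          # highest priority
    either = (observed & (metasploit | exploitdb)) - both
    rest = observed - both - either                   # lowest priority
    return sorted(both), sorted(either), sorted(rest)

# Hypothetical scan results and feed contents:
observed = {"CVE-A", "CVE-B", "CVE-C", "CVE-D"}
metasploit = {"CVE-A", "CVE-B"}
exploitdb = {"CVE-A", "CVE-C"}

top, middle, low = prioritize(observed, metasploit, exploitdb)
print(top)     # ['CVE-A']  -> in both feeds, patch first
print(middle)  # ['CVE-B', 'CVE-C']  -> in one feed
print(low)     # ['CVE-D']  -> in neither
```

In practice the feeds would be built from real exploit metadata rather than hard-coded sets, but the ranking logic stays the same.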
The researchers wanted to figure out which vulnerabilities were actually being used in active attacks. There are a huge number of flaws, even critical or high-risk ones, that don’t end up being targeted by attackers in the real world.
The idea for the research, which was presented at BSides Las Vegas last week, came from looking at the huge amount of data that Risk I/O gets from its clients, who upload data from their scanners. The company correlates that with active attacks, exploits and other information to help companies figure out what to remediate.
“It was a consequence of the data that we’re swimming in here,” Roytman said in an interview. “I was looking through some academic papers and blogs and tweets about what people want to see about vulnerability statistics, but don’t. I stumbled across studies about what’s the probability that a vulnerability in the NVD has an exploit attached to it, and that seemed sort of simplistic to me.”
The 23 million vulnerabilities that Roytman and his colleague Ed Bellis looked at came from 9,500 of the company’s clients and covered 1 million assets. The breach data came from a variety of sources, including the Open Threat Exchange. Importantly, the way an organization decides which vulnerabilities to patch still depends on its own priorities, as well as who its main adversaries are.
“The threat actors are a hugely important part. Everybody is pretty exposed to script kiddies, but if you break it down to the threat actors you’re exposed to, things could change,” Roytman said. “If you say, I’m a financial institution and hacking groups are my biggest threat, it’s very hard to nail down the methods they use. That information seems almost impossible to get from where I sit.”
In terms of future work on this topic, Roytman said he’s looking to partner with other companies to get more data and vulnerability sets.
“The exploit sets we’re looking at now are just the baseline,” he said.