October 26, 2023
Modern vulnerability management programs layer multiple tools, such as asset management, vulnerability scanners, and patch management systems, to help an organization deal with vulnerabilities in its environment. Whether the response is proactively deploying patches or updates to remediate affected software or putting compensating controls in place to make exploitation harder, the vulnerability management tool coordinates the action.
However, the process has its challenges. In many cases, vulnerability management tools, or homegrown techniques, base their assessments on simplified risk models. Drawing their ratings from CVE records, CVSS scores, and other common sources, they fail to consider existing mitigations or environmental details that can radically alter the real-world risk compared to the calculated values they’re working with.
This lack of good prioritization can lead the IT, networking, and other operations teams doing the day-to-day work to waste effort on fixes that aren’t especially helpful, or to miss issues that are far more important than the numbers imply.
“We have controls like EDR, next-generation vulnerabilities, network security controls. And the funny thing is that there is no end in this. In the sense that every time we spend the investment into controls, the way we deal with vulnerabilities remains no different. It raises the question: why do we even invest in controls? I mean, if we have a dollar, we should try and send that in scanning for vulnerabilities and patching.”
– Head of IT Security, Singapore-based HealthTech company
The challenge boils down to prioritization, or, rather, the lack thereof. While any vulnerability management process or tool can identify which assets need attention, it takes accurate prioritization to make that work effective.
Good risk prioritization reflects the real-world risks the organization has to deal with. Without accurately identifying which vulnerabilities require the most attention, the teams doing the work are liable to fall victim to one of two main problems.
The first case stems from the static scoring methodologies used by standard vulnerability scoring systems. While some newer standards incorporate attributes that capture whether exploits have been observed in the wild and how easy or hard they are to mitigate, most ratings are based entirely on static assessments of how difficult a working exploit is assumed to be to create and how much traction an attacker gains if they succeed.
Without considering the organization’s environment, a vulnerability may be rated a moderate or low threat when it could actually pose a major threat to the targeted organization. For example, a server that appears low value on paper may be serving a mission-critical function.
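To make that gap concrete, here is a minimal sketch in Python that contrasts a CVSS-only ranking with one adjusted for environmental context. The weighting factors, field names, and sample findings are purely illustrative assumptions, not how any particular scoring standard or product calculates risk.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # static severity score, 0-10
    exploited_in_wild: bool   # e.g., from a threat intelligence feed
    asset_criticality: float  # 0-1, how important the affected asset is
    mitigated: bool           # is a compensating control already in place?

def static_priority(f: Finding) -> float:
    # What a CVSS-only view produces: severity alone.
    return f.cvss_base

def contextual_priority(f: Finding) -> float:
    # Hypothetical weighting: boost findings under active exploitation
    # and on critical assets, discount those behind compensating controls.
    score = f.cvss_base
    score *= 1.5 if f.exploited_in_wild else 0.8
    score *= 0.5 + f.asset_criticality   # scales 0.5x to 1.5x
    score *= 0.4 if f.mitigated else 1.0
    return score

findings = [
    Finding("CVE-A", cvss_base=9.8, exploited_in_wild=False,
            asset_criticality=0.1, mitigated=True),   # "critical" on paper
    Finding("CVE-B", cvss_base=6.5, exploited_in_wild=True,
            asset_criticality=0.9, mitigated=False),  # moderate score, real risk
]

for f in sorted(findings, key=contextual_priority, reverse=True):
    print(f.cve_id, static_priority(f), round(contextual_priority(f), 1))
```

In this toy example, the nominally “critical” finding drops once an existing mitigation and low asset value are factored in, while the moderate-severity finding on a mission-critical, actively exploited asset rises to the top.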
The second case is more subtle: it leads to prioritizing work that could realistically be left for later. Over-rating a patch matters most when it is not part of a “Patch Tuesday” style rollup, where multiple patches get bundled into one comprehensive deployment; those rollups are worth deploying regardless of how many low-risk issues they fix.
Over-rating the importance of a patch is more often an issue with development libraries, specialized applications, drivers, and similar components that require the Security Operations team’s time to identify the problem and the IT team’s time to deploy the fix. If the vulnerability management platform accurately prioritizes the issue, the teams can avoid spending time that would be better used elsewhere.
The question that follows from these realizations is simple: just how much does bad prioritization cost?
While the specific answer varies widely from organization to organization, we can make some cost assumptions.
Here, the unseen cost of bad prioritization can be nothing less than the cost of a security breach. It sounds extreme, but the reality is that a missed patch, or mitigation, could leave a vulnerability exposed that a threat actor could leverage to compromise the environment.
That’s the worst-case scenario; realistically, not every system compromise leads to a severe breach. But even a compromised user laptop can give a threat actor access to the organization’s environment to steal or destroy assets. There are lower-impact outcomes, too: deploying cryptocurrency miners does not directly damage assets, but it still burdens the organization with high resource costs and other operational disruption.
IT resources are almost always limited. That’s just a fact of operational life. The challenge is making the best use of them, including ensuring that the right patches get deployed on time.
Ultimately, SysAdmins would love to patch everything. Whether it’s security patches, bug fixes, functionality updates, or anything else, the patches eventually need to go in. With rollups, patch bundles, or whatever term the vendor uses, it’s easy to deploy large batches at scale. The situation changes, however, for patches that need more effort because of layered dependencies or a manual deployment process.
If the IT team has to manually vet a new deployment due to change management requirements, especially for specialized applications, it is a matter of time and resources. Security patches will, naturally, get priority. But what if the prioritization is wrong? The team may spend excessive person-hours validating and testing a patch that, ultimately, could have waited until the next standard update cycle.
The cost of wasted work varies considerably by organization. Still, considering that acquiring, vetting, and deploying a single patch can take anywhere from a few hours to tens of hours depending on its scope, it’s easy to see how the wasted effort adds up quickly. Worse, every hour spent on the wrong patch is an hour of risk reduction forgone by not deploying the right one.
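As a back-of-the-envelope illustration, the short Python sketch below estimates that cost. Every number in it is a hypothetical assumption chosen for illustration, not a benchmark; substitute your own patch volumes, effort estimates, and labor rates.

```python
# Illustrative assumptions only; replace with your organization's figures.
unnecessary_urgent_patches_per_month = 5   # patches that could have waited a cycle
hours_per_patch = 8                        # vetting, testing, and deployment effort
blended_hourly_cost = 85                   # fully loaded cost per engineer-hour (USD)

monthly_waste = unnecessary_urgent_patches_per_month * hours_per_patch * blended_hourly_cost
print(f"Wasted effort: ~${monthly_waste:,}/month, ~${monthly_waste * 12:,}/year")
# Wasted effort: ~$3,400/month, ~$40,800/year
```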
The bottom line is that without good risk prioritization, teams run the very real risk of missing a crucial patch that leads to a compromise, or they pay the lesser but more consistent cost of wasting resources on patches that could otherwise wait.
Balbix provides the kind of environment-conscious risk prioritization that helps prevent both of these common issues. It gives the Security Operations team a contextual picture of risk across the environment, helping them guide the right stakeholders to use their resources efficiently.
Accurate risk prioritization lets the organization reduce the risk and potential cost of a breach while redirecting patch management time and resources to the issues that need them most.
For more insight into how Balbix takes prioritization beyond the standard tools, check out our piece on CVSS 4, or sign up for a 30-minute demo.