Building Better Evaluation Criteria for Linux Security

ID CARBONBLACK:0A3E9806DF4E119170F2095B6DED3231
Type carbonblack
Reporter Katharine Laird
Modified 2018-11-27T18:00:36


Carbon Black recently published a report on the challenges of securing Linux-based operating systems and how Carbon Black is redesigning the approach. For more information about how the Cb Predictive Security Cloud, Carbon Black's consolidated endpoint security platform, helps enterprises cut costs and realize significant business benefits, check out our webinar, "The Business Benefits and Cost Savings of Switching to the CB Predictive Security Cloud."

Given the flaws in applying these questions to Linux security tools, we need to identify a new set of questions we can use to evaluate the suitability of a security tool for Linux. In order to do this, we must first understand the unique security context of Linux and build out questions from that context. Linux security threats tend to be variations on the same theme: exploiting vulnerabilities in trusted software. Vulnerabilities in this context usually refer to oversights or errors in the implementation of software, such as improperly handled error scenarios or insufficient input validation. These errors and oversights can happen anywhere from the low-level Linux kernel to commonly installed system packages to popular application frameworks and plugins. Several properties of these Linux exploits present challenges unrelated to the common malware attacks found on Windows.

Challenge #1


Windows machines face the unique challenge of large amounts of commodity malware alongside a variety of fileless attacks. A large class of Windows malware achieves persistence by installing new software at specific locations; the mere presence of a file known to be installed by Windows malware is a red flag. Fending off commodity malware and addressing fileless attacks is table stakes on Windows. Most Linux attacks, however, are fileless and are executed through software that was installed on purpose, albeit for another function, making identification of potential vulnerabilities a different task than merely looking for files. This means that Linux security tools must be designed to identify and remediate fileless attacks first. We have to identify which versions of software are susceptible to exploit and determine whether any of those versions are installed. For vulnerabilities that haven't yet been identified, we need to be able to monitor and flag unexpected behavior or use of software. Unexpected behavior doesn't always point to exploitation, so we need to be able to investigate further. Again, legacy tools that weren't designed "fileless first" will prove ineffective. Questions that can be used to help evaluate security tools against this challenge include:

  • Can the tool help detect software installed with known vulnerabilities and exploits?
  • Does this tool allow for operational visibility in order to identify and investigate unexpected behavior?
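On the first question, detecting known-vulnerable software on Linux largely reduces to comparing each installed package's version against a known-vulnerable range from an advisory. A minimal sketch of that check follows; the package inventory and advisory table here are hypothetical, and a real tool would gather the inventory from a package manager (e.g. `dpkg-query` or `rpm -qa`) and pull ranges from a vulnerability feed:

```python
# Sketch: flag installed packages whose version falls inside a
# known-vulnerable range. All data below is illustrative only.

def parse_version(v):
    """Split a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical inventory of installed packages (name -> version).
installed = {
    "openssl": "1.0.1",
    "bash": "4.4.18",
    "nginx": "1.14.0",
}

# Hypothetical advisory data: name -> (first_vulnerable, first_fixed).
advisories = {
    "openssl": ("1.0.0", "1.0.2"),
    "nginx": ("1.15.0", "1.15.6"),
}

def vulnerable_packages(installed, advisories):
    """Return names of packages whose version is in a vulnerable range."""
    hits = []
    for name, version in installed.items():
        if name in advisories:
            first_vuln, first_fixed = advisories[name]
            v = parse_version(version)
            # Vulnerable if first_vuln <= v < first_fixed.
            if parse_version(first_vuln) <= v < parse_version(first_fixed):
                hits.append(name)
    return hits

print(vulnerable_packages(installed, advisories))  # ['openssl']
```

The hard parts a real tool must solve are exactly what the sketch glosses over: keeping the advisory table current, handling distribution-specific version schemes and backported fixes, and building the inventory across every distribution in the fleet.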

Challenge #2


There are many Linux distributions, and rapid update cadences are common. Security tools must both support the breadth of distributions and keep pace with the rate of change; if not, security becomes a blocker rather than an enabler for operations. Additionally, Linux distributions and installed software packages have independent release timelines, some releasing one or more times a day. Keeping track of independent, fast-moving releases of installed software can be problematic for security teams. The problem is multiplied when you take into account the fact that machines across a fleet will not necessarily be running the same versions of software, or even the same distribution. Questions that can be used to help evaluate security tools against this challenge include:

  • Does the tool update frequently enough to cover new vulnerabilities and exploits soon after they are disclosed?
  • What is the impact of the tool being out of date?
  • Can the tool perform well across a fleet of machines with diverse configurations?
  • Can the tool clearly report findings from scans across a fleet of machines with diverse configurations?

Challenge #3


Linux administrators trust Linux for predictability and high performance on critical workloads. In these environments, high-performance security tooling is a must: security tools with a material resource footprint risk breaking workloads and forcing expensive hardware expansion. Linux is more commonly installed on servers than on desktop machines, which changes the impact of scanning for vulnerabilities. As discussed in Flaw #3 above, the same security scan may have a relatively low impact on a Windows machine and a potentially high impact on a Linux machine. Beyond the hardware being more fully utilized, the "user" of a server is very different from that of a desktop machine. In fact, a Linux server running production applications is likely serving many users at a time, multiplying the impact of a single failure across each user affected. Instead of blindly over-allocating hardware to account for security scans, we should have the power to balance our specific security concerns against the impact on the system. Questions that can be used to help evaluate security tools against this challenge include:

  • Does this tool significantly impact my instances' ability to serve traffic?
  • Does the tool give me the opportunity to tune it for the security scenarios I care about?
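On the tuning question, one simple design that gives operators this balance is a duty-cycle throttle: the scanner sleeps between batches of work so that active scanning consumes only a configurable fraction of wall-clock time. The sketch below is illustrative, not any vendor's API; the function names and defaults are assumptions:

```python
# Sketch: a scan loop throttled to a configurable duty cycle, so the
# operator can trade scan duration against pressure on the workload.
import time

def throttled_scan(items, scan_fn, duty_cycle=0.25, batch=100):
    """Scan `items` in batches of `batch`, sleeping between batches so
    active scanning occupies roughly `duty_cycle` of wall-clock time."""
    results = []
    for i in range(0, len(items), batch):
        start = time.monotonic()
        for item in items[i:i + batch]:
            results.append(scan_fn(item))
        elapsed = time.monotonic() - start
        # Sleep so that elapsed / (elapsed + sleep) == duty_cycle.
        if 0 < duty_cycle < 1:
            time.sleep(elapsed * (1 - duty_cycle) / duty_cycle)
    return results
```

A duty cycle of 0.25 trades a roughly 4x longer scan for about a quarter of the momentary resource pressure; when evaluating a tool, the question is whether a knob like this exists at all, and whether it can be set per workload.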

The goals of this whitepaper are to bring light to the flaws with porting Windows security approaches to Linux, identify unique challenges with securing Linux infrastructure, introduce a list of questions one can use to better evaluate a Linux security offering, and propose a core set of design principles on which strong Linux security offerings can be built.


Thanks for joining us as we explore "Re-designing Linux Security: Do No Harm," our report on the challenges of securing Linux-based operating systems in the modern era. Join us later this week as we continue to profile this report.

The post Building Better Evaluation Criteria for Linux Security appeared first on Carbon Black.