Insider threats can be among the most damaging risks to an organization's security, posing dangers as significant as external attacks, if not more so. Because insiders are granted trusted access to sensitive data, these threats often fly under the security radar.
By examining how users access your data and identifying when inappropriate or abusive behavior takes place, machine learning can help you secure your data from insider threats.
In our insider threat infographic, we examine machine learning and how it works to detect and prevent insider threats. Below, we summarize the primary types of insider threat profiles and explain how domain-specific machine learning helps identify insider threats and protect sensitive data. Download the full insider threat infographic for more information.
Insiders don’t need to break into your network—they’re already in, with access to all your company’s valuable data. There are three types of insider threat profiles to watch for, detailed in the infographic.
The term machine learning is thrown around a lot these days. So what is it exactly? Machine learning is a type of artificial intelligence that enables computers to detect patterns and establish baseline behavior using algorithms that learn through training or observation. Ideal for detecting insider threats, machine learning can process and analyze volumes of data that would be impractical for humans to review. (But do machines dream of insider threats? Sci-fi fans should get the reference.)
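To make the idea of "establishing a baseline" concrete, here is a deliberately minimal sketch: learn what a user's normal daily data-access volume looks like, then flag days that deviate sharply. The function names, the 30-day history, and the three-standard-deviation threshold are all illustrative assumptions, not any vendor's actual algorithm.

```python
from statistics import mean, stdev

def learn_baseline(history):
    """Baseline = mean and standard deviation of observed daily access counts."""
    return mean(history), stdev(history)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above normal."""
    mu, sigma = baseline
    return count > mu + threshold * sigma

# 30 days of normal access: roughly 100 records per day (hypothetical data).
history = [98, 102, 95, 110, 100, 97, 105, 99, 101, 103,
           96, 104, 100, 98, 107, 95, 102, 99, 101, 100,
           97, 103, 106, 94, 100, 99, 105, 98, 102, 101]
baseline = learn_baseline(history)

print(is_anomalous(104, baseline))   # a typical day -> False
print(is_anomalous(5000, baseline))  # a mass export -> True
```

Real systems model far richer behavior (which tables, which operations, at what hours), but the principle is the same: learn "normal," then score deviations from it.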
Data breaches occur at the intersection of users and the enterprise data they access, making that intersection the most accurate place for enterprises to detect a breach. Placing detection capabilities at the data access level gives security teams the highest chance of identifying a potential breach. Using machine learning to identify the actors in the environment, the account types being used, the types of database tables being accessed, and more, Imperva CounterBreach learns "normal" access behavior so it can then identify out-of-baseline anomalies.
Having visibility at both the user and data levels provides the granular information necessary about who users are and the details of what they're doing with enterprise data—down to the SQL operation, table name, file type, schema and server response time. This visibility provides context for all user behavior across enterprise data. Anomalies occur all of the time, and anomalies without context result in false positives. Looking at both users and how they access information is critical when it comes to determining whether an event is simply an anomaly, or a true data breach incident.
Once the baseline is developed, data access behavior is continually monitored and compared to the baseline to identify unusual activity. Inappropriate access is flagged.
Several common activity patterns indicate potential insider abuse, and six suspicious behaviors rank high on the list. Alerts allow security operations teams to respond immediately to risky data access and contain threats.
The reality is that anomalies happen all the time in a typical data access environment. And while machine learning can detect those anomalies, effective machine learning is about more than math alone; it requires domain expertise behind the algorithms. Without domain expertise applied to the dataset, the result is an overload of alerts and false positives.
A deep understanding of data, and of how users access it, helps distinguish meaningful indicators of critical data abuse from the numerous mathematical anomalies that only create more work for security teams. Using domain-specific machine learning algorithms to identify patterns and learn baseline behavior, Imperva CounterBreach can save your team time and keep your sensitive data more secure.