Detecting Suspicious Network Access With Machine Learning
Adlumin Data Science offers two main categories of anomaly detection: single-event, based on information collected from a single security log, and multi-event, which uses information extracted from an aggregation of individual logs. Since anomaly detection tends to have a higher false-positive rate (FPR) than signature-based methods, it is often helpful to intersect detections from both approaches, which can reduce the total FPR while maintaining an acceptable true-positive rate (TPR). The logic is that truly anomalous behavior is likely to appear anomalous from multiple independent perspectives.
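To make the benefit concrete, consider a back-of-the-envelope sketch (the rates below are illustrative assumptions, not measurements from the Adlumin platform). If the two detectors' false positives are approximately independent, requiring both to fire multiplies their FPRs, while correlated true positives keep the combined TPR much closer to the individual rates:

    # Illustrative sketch in Python; all rates are assumed, not measured.
    fpr_lm, tpr_lm = 0.05, 0.90  # hypothetical multi-event model rates
    fpr_ae, tpr_ae = 0.04, 0.85  # hypothetical single-event model rates

    # Under (approximate) independence, intersecting detections multiplies FPRs.
    combined_fpr = fpr_lm * fpr_ae  # 0.002, a 25x reduction vs. LM alone

    # If true positives were also independent, the combined TPR would be the
    # product; because genuinely malicious activity tends to trip both models,
    # the realized TPR typically sits well above this floor.
    combined_tpr_floor = tpr_lm * tpr_ae  # 0.765

    print(f"intersection FPR ~ {combined_fpr:.3f}, TPR floor ~ {combined_tpr_floor:.3f}")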
An excellent example of this is the intersection between Lateral Movement (LM) and Access-Events (AE), which are multi-event and single-event ML models, respectively, on the Adlumin security platform. Both models look at Windows “successful logon” events (event ID 4624), but LM builds directed graphs from an entire day’s worth of activity and draws attention to individual machines, whereas AE simply draws attention to individual access events between machines. Another important distinction is that LM individually models privileged users with enough historical data, whereas AE models all Windows access events together, regardless of the user involved.
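As a rough illustration of the multi-event side, the sketch below builds a day’s directed logon graph with networkx; the event fields and values are hypothetical, not the platform’s actual schema:

    import networkx as nx

    # One day of successful-logon (event ID 4624) records; fields are assumed.
    logons = [
        {"src": "WKSTN-01", "dst": "FILESRV-01", "user": "alice"},
        {"src": "WKSTN-01", "dst": "DC-01", "user": "alice"},
        {"src": "FILESRV-01", "dst": "DC-01", "user": "alice"},
    ]

    # Each machine becomes a node; each access event becomes a directed edge.
    graph = nx.DiGraph()
    for event in logons:
        graph.add_edge(event["src"], event["dst"])

    # Graph-level features like out-degree surface machines that fan out to
    # many others, the kind of structure a model such as LM can weigh.
    print(dict(graph.out_degree()))

AE, by contrast, would score each record in the list above on its own.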
The LM detection shown below features a directed graph flagged by LM and the individual access events that make up the graph, some of which were also flagged by AE, indicated by the ‘Anomaly Score’ column.
This LM detection would not have occurred if none of the component events had been flagged by AE, which highlights the mechanism for reducing false positives in these detections. If a user’s behavior is determined to be anomalous by LM, but none of the individual access events are considered anomalous, that is likely grounds to throttle the detection.
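A minimal sketch of that throttling rule might look like the following, where the detection structure, field names, and threshold are assumptions for illustration:

    def should_alert(lm_detection, ae_threshold=0.5):
        """Suppress an LM detection unless at least one of its component
        access events was also scored anomalous by AE."""
        return any(
            event["anomaly_score"] >= ae_threshold
            for event in lm_detection["events"]
        )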
Aggregating Single-Event Detections
One of the pitfalls of LM is that it only models privileged users with sufficient access history. A recent penetration test performed on a customer by the Adlumin SOC team highlighted the potential utility of aggregating AE detections to look for a specific behavior that is invisible to LM. During this penetration test, the actor gained access to a machine that allowed them to enumerate the domain and access many other machines. Below are all the customer’s AE detections on the day of the penetration test.
The “NODEZERO” machine used in the Adlumin SOC penetration test is involved in many anomalous logons to other machines. LM is blind to this behavior because it is associated with the “anonymous logon” user, so we must rely on AE to create an alert in this case. Sending a separate alert for each detection would be problematic from a user-experience standpoint, which calls for an aggregation strategy. One idea, sketched below, is to count the number of unique logons associated with each “remote_workstation_name” and send an alert to the customer if that number exceeds a predetermined threshold. In this case, “NODEZERO” is associated with seven unique anomalous logons to other machines, so setting the threshold to 6 would be sufficient for an alert.
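A minimal sketch of that aggregation follows; "remote_workstation_name" comes from the detections above, while the destination field name is an assumption:

    from collections import defaultdict

    def aggregate_ae_detections(detections, threshold=6):
        """Group AE detections by source machine and return those whose
        count of unique destination machines exceeds the threshold."""
        destinations = defaultdict(set)
        for det in detections:
            # "workstation_name" as the destination field is assumed.
            destinations[det["remote_workstation_name"]].add(det["workstation_name"])
        return {src: dsts for src, dsts in destinations.items() if len(dsts) > threshold}

With a threshold of 6, the seven unique anomalous logons from “NODEZERO” would trip the alert.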
While this example exposes a blind spot in LM, aggregating AE detections is nonetheless not a sufficient replacement for LM detections. LM can detect much more subtle and complicated behavior associated with moving through a network, so aggregated AE detections should receive their own alert, possibly targeted at domain-enumeration attacks. Sometimes these two alerts will describe the same behavior, which validates the need for further investigation. As the Adlumin ML model suite grows, these alert intersections will become more sophisticated and widely deployed.