Malware Incident Step-by-Step Guide from Detection to Resolution
By: Zach Swartz, Senior Data Scientist
The Adlumin Security Operations Platform recently detected a user in a customer’s network running multiple variations of a very suspicious PowerShell command on a single host. The command first base64-decoded a suspicious-looking file from the user’s directory and then decrypted the result using an AES key embedded in the initial command. Finally, the malware was executed after the resulting data was loaded into memory.
The detection offers an excellent use case of how Adlumin’s Security Operations Platform and Managed Detection Services protect client resources via automated detection and human response. The initial alert was generated by an artificial intelligence model that runs on all PowerShell execution logs in our client data. Adlumin’s Managed Detection and Response and Threat Research teams then collaborated to map the attack vector and trace the incident to its malicious roots.
This four-step breakdown dives into how Adlumin uses artificial intelligence and human judgment to secure customers.
Step 1: AI Detection for Malicious PowerShell
Adlumin Data Science designed this alert to evaluate each PowerShell command in a series of automated steps. The malicious command was first tokenized into its fundamental components, which included:
- security.cryptography.aescryptoserviceprovider
- frombase64string
- assembly
- createdecryptor()
- transformfinalblock
The usage pattern of these tokens did not match any patterns seen in historical data, so the command was passed into phase two of the detection pipeline.
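The tokenization phase above can be sketched roughly as follows. This is a minimal illustration, not Adlumin's actual pipeline: the token list comes from the alert described above, but the extraction and history-matching logic are assumptions.

```python
# Hypothetical sketch of the tokenization phase: pull noteworthy tokens out of
# a PowerShell command and check whether that usage pattern has been seen
# before. Token list is from the alert above; the matching logic is assumed.

SUSPICIOUS_TOKENS = {
    "security.cryptography.aescryptoserviceprovider",
    "frombase64string",
    "assembly",
    "createdecryptor()",
    "transformfinalblock",
}

def extract_tokens(command: str) -> frozenset:
    """Lower-case the command and collect the noteworthy components it uses."""
    cmd = command.lower()
    return frozenset(t for t in SUSPICIOUS_TOKENS if t in cmd)

def seen_in_history(tokens: frozenset, historical_patterns: set) -> bool:
    """True if this exact usage pattern appears in historical data."""
    return tokens in historical_patterns

# Usage patterns previously observed in this (hypothetical) environment
history = {frozenset(), frozenset({"frombase64string"})}

cmd = ("$aes = New-Object Security.Cryptography.AesCryptoServiceProvider; "
       "$dec = $aes.CreateDecryptor(); "
       "$raw = $dec.TransformFinalBlock([Convert]::FromBase64String($b64), 0, $n)")

tokens = extract_tokens(cmd)
escalate = not seen_in_history(tokens, history)  # unseen pattern -> phase two
```

Because the combination of tokens had no match in historical data, the command would be escalated to the next phase rather than dismissed.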
Step 2: Character Level Comparison
Since the tokenization step of the detection can often produce false positives, the next step is a more computationally expensive but more precise comparison between the suspicious command and historical commands. This is done using a combination of string tokenization and the Levenshtein distance, which makes character-level comparisons between strings. In the illustration below, the distances between dots represent the string-comparison score between commands.
The red dots (representing malicious commands) stand out significantly from the rest of the benign historical commands (blue dots), earning them an “anomalous” designation.
This comparison yielded an extremely high distance score, leading the AI to conclude that there was no sufficiently close match in the pool of historically benign instances. The command was therefore passed to the final phase of detection.
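The character-level comparison can be illustrated with a small Levenshtein implementation. This is a sketch under assumptions: the benign commands, the "nearest neighbor" scoring, and the notion of a high score are illustrative, and Adlumin's production comparison and thresholds are not public.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete, substitute)
    needed to turn string a into string b."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anomaly_score(command: str, historical: list) -> int:
    """Distance to the nearest historically benign command; a large value
    means the command is far from everything seen before."""
    return min(levenshtein(command, h) for h in historical)

# Hypothetical pool of benign history and a suspicious command
benign = ["Get-Process | Sort-Object CPU", "Get-Service -Name Spooler"]
suspicious = "$k=[Convert]::FromBase64String($blob);$aes.CreateDecryptor()"
score = anomaly_score(suspicious, benign)  # high -> no close benign match
```

A command whose minimum distance to every benign neighbor is large lands far from the blue cluster in the illustration above and earns the "anomalous" designation.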
Step 3: Minimize False Positives
The last step scores the command against a large set of regex rules designed to further minimize false positives. Due to multiple suspicious PowerShell methods and a long base64 string, the command scored well above the detection threshold, and an alert was sent to the customer with ‘High’ severity. Because either the anomaly-detection or the scoring portion of the pipeline can produce false positives on its own, an alert requires both steps to fire; this intersection reduces noise while maintaining fidelity.
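A simplified version of the regex-scoring phase might look like the sketch below. The rules, weights, and threshold are all illustrative assumptions, not Adlumin's actual rule set.

```python
import re

# Each rule pairs a pattern with a weight; both are made up for illustration.
RULES = [
    (re.compile(r"frombase64string", re.I), 3),
    (re.compile(r"aescryptoserviceprovider", re.I), 4),
    (re.compile(r"createdecryptor", re.I), 4),
    (re.compile(r"transformfinalblock", re.I), 3),
    (re.compile(r"[A-Za-z0-9+/=]{200,}"), 5),  # long base64-looking blob
]
THRESHOLD = 10  # hypothetical detection threshold

def score(command: str) -> int:
    """Sum the weights of every rule that matches the command."""
    return sum(w for pat, w in RULES if pat.search(command))

cmd = ("$aes = New-Object Security.Cryptography.AesCryptoServiceProvider; "
       "$data = [Convert]::FromBase64String('" + "A" * 240 + "'); "
       "$out = $aes.CreateDecryptor().TransformFinalBlock($data, 0, $data.Length)")

alert = score(cmd) >= THRESHOLD  # True -> 'High' severity alert
```

In this toy version, the multiple suspicious methods plus the long base64 blob each add weight, pushing the total well past the threshold, mirroring how the real command tripped the alert.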
Step 4: Response and Remediation
An Adlumin Managed Detection and Response team member responded to the alert and initiated correspondence with the customer after determining that the behavior was likely malicious. The customer confirmed that this behavior was unexpected and quarantined the host involved. Meanwhile, Adlumin Threat Research initiated an investigation and found that the involved user had downloaded an executable associated with a known trojan, which was determined to be the root cause of the incident. After further investigation, Adlumin Threat Research concluded that the malware had not attempted to pivot through the network, and with the machine quarantined, the incident was considered resolved.
Are Your Security Defenses Ready?
For more information, contact one of Adlumin’s cybersecurity experts for a demo to get started.