
Phishing alert: One in 61 emails in your inbox now contains a malicious link

Be careful when you click. That email might not be as innocent as it looks.

By Danny Palmer

The number of phishing attacks is on the rise, more than doubling in recent months, with one in 61 emails delivered to corporate inboxes found to contain a malicious URL.

Analysis by security provider Mimecast found that the number of emails delivered despite featuring a malicious URL increased by 126 percent between the August-to-November period and the December-to-February period. These malicious links are one of the key methods cyber criminals use to conduct their campaigns: distributing phishing emails that encourage users to click through to a link.

The emails are often designed to look like they come from legitimate senders — like a company, or a colleague — in order to gain the trust of the victim, before duping them into clicking the malicious link.

The purpose of the malicious URL could be to deploy malware onto the victim's PC, or to lure the victim into entering sensitive information into a fake version of a real service — like a retailer, a bank or an email provider — in order to harvest passwords and other data. Attackers then either use this data as a jumping-off point for further attacks, or sell it to other cyber criminals on underground forums.

In total, Mimecast analysed 28,407,664 emails delivered into corporate inboxes which were deemed “safe” by security systems and found that 463,546 contained malicious URLs — the figure represents an average of one malicious email getting through for every 61 emails that arrive.
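The headline ratio follows directly from the two figures Mimecast reports, as a quick sanity check shows:

```python
# Quick sanity check of Mimecast's reported figures (numbers from the article).
total_delivered = 28_407_664   # emails deemed "safe" by security systems
malicious = 463_546            # of those, emails containing a malicious URL

print(f"one malicious email per {total_delivered / malicious:.0f} delivered")
# prints: one malicious email per 61 delivered
```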

Given the sheer number of emails sent back and forth by employees every single day, that represents a significant security risk and a potential gateway for hackers looking to conduct malicious activity.

“Email and the web are natural complements when it comes to the infiltration of an organization. Email delivers believable content and easily clickable URLs, which then can lead unintended victims to malicious web sites,” said Matthew Gardiner, cybersecurity strategist at Mimecast.

“Cyber criminals are constantly looking for new ways to evade detection, often turning to easier methods like social engineering to gain intel on a person or pulling images from the internet to help ‘legitimize’ their impersonation attempts to gain credentials or information from unsuspecting users,” he added.

ABCs of UEBA: B is for Behavior

by Jane Grafton on February 4, 2019

We like to say, “You can steal an identity, but you can’t steal behavior.” You might compromise my credentials, but you don’t know what time I normally log in, the applications I typically use, the people I regularly email, etc.

Behavior is the Leading Threat Indicator
The key to predicting threats, especially unknown threats, is to monitor user and entity behavior and to recognize when that behavior starts becoming anomalous. Let’s take a serious example: workplace violence. You hear it over and over again after a violent incident. People close to the perpetrator say things like, “he was acting strange,” “he was keeping to himself,” or “he was obsessed with social media” before he committed the violent act. There are always signs, and they are always behavior based. If you can get ahead of the threat, if you can predict it may occur, you can likely prevent it from happening. This is the premise of User and Entity Behavior Analytics (UEBA).

Think about your own behavior, specifically in terms of patterns. Do you get to work at around the same time every day? Probably. If not, you likely have reasons. Maybe you have a doctor’s appointment. Maybe on Thursdays you have a standing appointment. When do you go to lunch? When do you leave for the day? People around you will notice if your behavior changes. If you start coming in late, if your lunches drag on, if you leave work early, any change in your behavior is noticeable. So how does this same notion translate into UEBA and threat prediction?
If your office parking garage or building requires badge access, you create an audit trail every time you swipe your badge. The machine learning models that power UEBA can detect changes in arrival and departure times, duration spent at the office or at lunch, even bathroom breaks if your office is secured by a keycard entry system. Further, if you use a keycard to enter your office and then log in from a remote location with an unrecognized IP address, UEBA links those activities and flags them as an anomaly: you can’t possibly be in the office and working remotely at the same time. Linking user behavior data from the physical badging system and the Windows security log is the only way to ascertain this particular abnormality, which is why the best UEBA products ingest the broadest variety of data feeds. Multiply this example by thousands of employees and millions of transactions over time, and you start to get a sense of the power of UEBA.
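The badge-plus-remote-login scenario can be expressed as a simple correlation rule. The event records and one-hour window below are purely illustrative, not from any particular product:

```python
from datetime import datetime, timedelta

# Illustrative event records. In a real deployment these would come from
# the physical badging system and the Windows security log, respectively.
badge_events = [("alice", datetime(2019, 2, 4, 8, 55))]
remote_logins = [("alice", datetime(2019, 2, 4, 9, 10), "203.0.113.7")]

def impossible_presence(badges, logins, window=timedelta(hours=1)):
    """Flag users who badged into the office and logged in remotely
    within the same window: they can't be in two places at once."""
    return [
        (user, badge_time, login_time, ip)
        for user, badge_time in badges
        for login_user, login_time, ip in logins
        if user == login_user and abs(login_time - badge_time) < window
    ]

print(impossible_presence(badge_events, remote_logins))
```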

To predict unknown threats, UEBA examines everything users and entities are doing in real-time, then aggregates, correlates, and links that data to identify anomalies. Keep in mind an entire library of machine learning algorithms and analytics are applied against this combined and normalized data because it’s not possible for humans to detect changes in behavior patterns at this scale.
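As a minimal illustration of grading behavior against a per-user baseline (real UEBA products apply entire libraries of models, as noted above; the history values and threshold here are invented):

```python
import statistics

# Toy per-user baseline: typical login hours observed over prior weeks
# (names and values are illustrative only).
history = {"alice": [8.9, 9.1, 9.0, 8.8, 9.2, 9.0, 9.1]}

def is_anomalous(user, login_hour, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` standard
    deviations from the user's own historical pattern."""
    baseline = history[user]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(login_hour - mean) / stdev > threshold

print(is_anomalous("alice", 9.0))   # within the usual pattern -> False
print(is_anomalous("alice", 3.0))   # a 3 a.m. login -> True
```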

Dynamic Anomaly Detection Using Machine Learning

by Tim Stacey, Ph.D.

User Behavior Analytics is an incredibly hot field right now: software engineers and cybersecurity experts alike have realized that the power of data science can be harnessed to comb through logs, analyze user events, and target activity that stands out from the crowd. Previously, the gold standard for this process was manual, based on exhaustive queries against large databases. These investigations also happened ex post facto, after the hack or intrusion occurred, to diagnose what actually happened.

At Adlumin, we’ve sought to create a proactive product that reduces the amount of intensive data work that a cybersecurity specialist needs to perform. We’ve had analytics in production since inception, but today we’d like to introduce a new product that will make finding new malicious activity even easier.

Our new Rapid User Behavior Alerts will pick up on novel user behavior in a range of event types, specifically targeting combinations of attributes or actions that have never been seen before on a network. These Rapid Alerts come out within seconds of the Adlumin platform receiving the data, notifying sysadmins that something unexpected has occurred on their network.

Importantly, we have tuned our new data science engine to have high tolerances for power users (e.g., sysadmins) while triggering at lower tolerances for users with a limited range of behaviors. This is crucial for reducing over-flagging of novel behavior. Our goal is to transmit high-impact findings reliably and quickly while avoiding spamming the end user with bad alerts.

Our analytics engine takes advantage of an auto-encoding neural network framework, finding the difference between previous and current modes of user behavior in a heavily non-linear space. By passing an incoming event through a trained auto-encoder, we determine its reconstruction error, a measure of how anomalous the user’s actions are. Since the anomalous characteristics of the incoming event are condensed into a single number, we can grade that number against a distribution of the user’s previous events to determine whether the incoming event is truly different.
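The scoring flow can be sketched as follows. This is a minimal stand-in, not Adlumin's implementation: a fixed linear projection substitutes for the trained non-linear auto-encoder, and the baseline events are random data; only the flow (compute a reconstruction error, then rank it against the distribution of the user's previous errors) matches the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained auto-encoder: projection onto a random
# 3-dimensional subspace. A real deployment uses a trained non-linear
# network; only the scoring flow is illustrated here.
W = np.asarray(rng.normal(size=(8, 3)))   # 8 event features -> 3 latent dims
P = W @ np.linalg.pinv(W)                 # reconstruct = project onto col(W)

def reconstruction_error(x):
    return float(np.linalg.norm(x - x @ P))

# Distribution of errors over the user's previous (synthetic) events.
baseline_errors = [reconstruction_error(x) for x in rng.normal(size=(500, 8))]

def anomaly_grade(x):
    """Rank the event's reconstruction error within the user's history,
    condensing its anomalousness into a single number in [0, 1]."""
    err = reconstruction_error(x)
    return sum(e < err for e in baseline_errors) / len(baseline_errors)

typical = rng.normal(size=8)              # event resembling the baseline
outlier = rng.normal(size=8) * 10         # event far outside the usual range
print(anomaly_grade(typical), anomaly_grade(outlier))
```

An event whose grade sits near 1.0 has a larger reconstruction error than nearly all of the user's history, which is what flags it as truly different.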

Our fast evaluation of incoming data is made possible with the assistance of AWS DynamoDB and AWS Lambda. Pre-trained user models live in our Dynamo tables—these models are quickly queried for each event, as we process thousands or hundreds of thousands of events per second. Our Lambdas evaluate the incoming data against the queried baseline and produce a threat score with an interpretation of what caused the threat. Our baselines are updated frequently on a schedule to account for the relatively fast drift in user behavior over time.
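The lookup-then-score pattern can be sketched as below. This is hypothetical: a plain dict stands in for the DynamoDB table of pre-trained user models, and the model shape and z-score rule are invented for illustration; in production, the lookup would be a table query inside the Lambda handler.

```python
# Minimal sketch of the lookup-then-score flow. A plain dict stands in
# for the DynamoDB table; the model shape and scoring rule are invented.
user_models = {
    "alice": {"mean_error": 1.0, "std_error": 0.2},  # pre-trained baseline
}

def handler(event):
    """Evaluate one incoming event against the user's stored baseline and
    return a threat score with an interpretation of what caused it."""
    model = user_models[event["user"]]               # stand-in for a table query
    score = (event["reconstruction_error"] - model["mean_error"]) / model["std_error"]
    cause = "unusual reconstruction error" if score > 3 else "within baseline"
    return {"user": event["user"], "threat_score": score, "cause": cause}

print(handler({"user": "alice", "reconstruction_error": 2.0}))
```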

In the coming months, Adlumin will be rolling out analytics specifically targeted to log data, system behavior, and a more detailed analysis dependent on cold storage of data. Rapid User Behavior Alerts are the first line of defense as we develop a suite of analytics to protect your network from harm.

BIO: Dr. Tim Stacey is the Director of Data Science for Adlumin Inc., a cybersecurity software firm based in Arlington, VA. At Adlumin, his work primarily focuses on user behavior analytics. His experience includes designing analytics for Caterpillar, the RAND Corporation, and the International Monetary Fund. He holds a Ph.D. in computational chemistry from the University of Wisconsin–Madison.