As humans, we are constantly balancing our online lives with our offline ones. Unfortunately for us, those lines are easily blurred; from oversharing to data breaches, there is ample opportunity for disaster. This is precisely where the power of data privacy comes into play.
Data privacy, often referred to as information privacy, is the proper handling of sensitive data, including personal and other confidential information. This can include financial records, intellectual property, and more. Data privacy is all about limiting who has access to this information and how they can use it. Now, do you see its importance? If not, keep reading; there’s a lot more to uncover here.
Data Privacy and Your Safety
Whether or not you are an avid internet user, the probability that your personal information is available online is greater than you think. From Google to personal social media accounts, once your personal information reaches the internet, it is beyond your control. Data privacy is important because regulations exist to protect that information. The concept centers on the way data should be collected, stored, managed, and shared with third parties.
Securing your customers’ sensitive data should be your main priority. Cybercriminals are attracted to cracks in an organization’s or platform’s security infrastructure. Ensuring that all data privacy rules and regulations are followed by your organization, employees, and customers is vital to safeguarding information.
3 Elements of Data Privacy
The CIA Triad is a benchmark model created to govern and evaluate how organizations store, transmit, and process data. According to the Center for Internet Security (CIS), below are the top three elements of data privacy:
Confidentiality – Data shouldn’t be accessed without proper authorization. This ensures that authorized parties are the only people that have access to the information.
Integrity – Data shouldn’t be altered or compromised. This element assumes that data remains in the intended state and should only be edited by authorized parties.
Availability – Data should be accessible upon legitimate requests only. This ensures that all authorized parties have access to the data when required.
The main goal is to give individuals control over their data while it is in the hands of a third party. Companies need to understand how to respect personal data while also processing it. Trust is a significant piece of a successful business, and people need to feel like their information is in safe hands.
So Why is Data Privacy Important?
There must be a reason why this topic has its own day every year, right? The answer is yes. Data privacy is important because it holds organizations and individuals accountable when dealing with confidential information. Regulatory compliance is an essential piece to this puzzle because it requires businesses to meet legal responsibilities for collecting, storing, and processing the personal data of their partners, customers, and employees. Any form of non-compliance or misuse of information could lead to a significant fine or lawsuit.
Long story short, it is vital to ensure all your bases are covered when it comes to data privacy to prevent any unwanted activity or breaches. As 2022 begins, it is 100% better to be proactive with your data instead of reactive. It’ll save you money, time, and your brand reputation in the long run.
From cyber threats like data breaches and ransomware attacks becoming more common to new regulations surrounding the cyber world, cybersecurity has become a necessity. Every industry needs it, whether they know it or not. As we enter October, which is also National Cybersecurity Awareness Month (NCSAM), Adlumin is back with some great insights. We believe that spreading awareness is key to stopping cybercriminals in their tracks, and here’s why.
Every aspect of our society relies on technology, which can be both our greatest power and our greatest downfall. As the world of cybersecurity continues to navigate through various trials and tribulations, awareness through education is the only way to remain resilient.
But, enough about its power. Let’s get into why spreading awareness is necessary and what you can do to contribute to a healthy cybersecurity ecosystem for your organization and worldwide.
Cybersecurity Awareness Training for Employees
Cybersecurity is essential because it is the shield that protects data and sensitive information from falling into the wrong hands. As a company, it is your responsibility to set your employees up for success. That should include providing employees with knowledge, tools, and training to ensure they fully understand and adhere to their information security responsibilities. Below are a few truths that will explain why cybersecurity awareness training should be an integral part of your company’s culture:
Truth 1: Cybercrime isn’t going away.
Cyber-attacks are happening daily. Tech Jury reports that every 39 seconds, there is a new attack somewhere on the web. Hackers thrive off vulnerability, which is the current state of many companies globally. If your employees are not fully exposed to the reality of how data breaches, ransomware, or other threats can affect your organization’s success, the possibility of a hacker taking advantage of them dramatically increases. After all, it only takes a small misstep to bring down an entire ecosystem.
Truth 2: Education is vital.
You know the old saying, “if you knew better, you’d do better?” Well, it applies to cybersecurity too. There is so much to learn about the industry, from the various types of cybercrime to prevention, and it quickly becomes information overload. However, as an organization looking to stay out of harm’s way, it is best to educate your employees on core topics such as recognizing phishing attempts, password hygiene, safe device usage, and how to report suspicious activity.
These are only a few examples; the scope of cybersecurity awareness is far-reaching. The key is to prioritize the training topics and tools based on your organization’s needs.
Truth 3: Insider Threats are real.
External threats are scary, but there is nothing more frightening than knowing the real danger is inside your organization. According to The Department of Homeland Security, “insider threats, to include sabotage, theft, espionage, fraud, and competitive advantage are often carried out through abusing access rights, theft of materials, and mishandling physical devices.” However, not all insider threats are intentional; some are simply negligent. Employees can avoid simple mistakes if they know what an insider threat is and how a simple error or accident can become a major cybersecurity concern for the organization or individual.
Work Smarter, Not Harder
Your employees have access to sensitive data that is essential to the overall well-being of your company. Therefore, they need to be aware of their responsibilities and accountability when using company laptops, mobile devices, and other technologies. Training courses should be a yearly requirement, and cybersecurity policies should be updated quarterly, keeping in mind that new threats and tips emerge daily.
Human error is a significant threat to your organization, and unaware, untrained employees are among the most significant threats of all. Make this October’s NCSAM count – it’s a decision you can’t afford to ignore.
Visit Adlumin’s National Cybersecurity Awareness Month resource page for additional resources.
Firewall, VPN, and network security device logs can provide insight into a wide variety of malicious activities, including remote code executions, port scanning, denial of service attacks, and network intrusions. However, analyzing these logs presents several challenges.
These data streams are high velocity; even small and medium networks can produce thousands of logs per second with significant fluctuations throughout the day. A network comprises several different devices manufactured by different companies, so manually defining rules for parsing log data is infeasible and would break as devices are updated. Preventive measures against malicious activity must be taken quickly, meaning that we need an online process to deliver insights that are useful for more than forensics.
At Adlumin, we’ve designed around these challenges and built a fast, parallelizable system capable of handling new log formats and deployable on streaming data. We own a patent covering this method for ascertaining whether log data is anomalous and indicative of threats, malfunctions, or IT operations failures.
Malicious activity nearly always produces unusual log data, so our system focuses first on finding anomalous data. Focusing on anomalous logs drastically decreases the volume of data that must be analyzed for malicious activity without running the risk of ignoring pertinent data. In this discussion, we’ll focus on how we determine whether log data is anomalous, as doing so enables a wide variety of techniques for searching for malicious activity.
As an overview, our system trains itself on device log data that we’ve recently received. Based on the data, the system builds a set of templates that correspond to commonly seen log content and structure. The system also computes a measure of how rare each template is and builds an anomalous scoring threshold. When new data is received, it’s checked against this template set and assigned a score based on the template that best matches the data. If new data that does not directly correspond to a template arrives, then a measure of the difference is added to the score. Data with a score above the anomalous scoring threshold is considered anomalous.
Training
We begin by training our system in a batch process on recently ingested log data. We create a set of templates by grouping similar logs together. Automated systems generate logs, so they contain a mix of parameters (e.g., an IP address, a timestamp, or a username) and fixed messages (e.g., “connection opened” or “user logged in”). Our system uses a token-based parser tree to associate logs that have the same fixed messages, but that may have differing parameters.
Before applying the parser tree, logs are preprocessed. First, logs are tokenized according to common delimiters. Then, pre-defined regex patterns replace well-known parameters (e.g., r’ [0-9a-fA-F]{12}’ for a 12-digit hexadecimal value) with specialized parameter tokens. Typically, any token that contains a number is also treated as a parameter and replaced with a parameter token. Once preprocessed, the parser tree can determine log associations.
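As a rough sketch, the preprocessing step might look like the following Python, where the delimiter set, the placeholder names, and the specific parameter patterns are illustrative assumptions rather than the production rules:

```python
import re

# Hypothetical delimiters and parameter patterns; the production rule set is larger.
DELIMITERS = re.compile(r"[\s=,:]+")
PARAM_PATTERNS = [
    (re.compile(r"^[0-9a-fA-F]{12}$"), "<HEX12>"),      # 12-digit hexadecimal value
    (re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"), "<IP>"),   # dotted-quad IP address
]

def preprocess(log_line: str) -> list[str]:
    """Tokenize a raw log line and replace well-known parameters with tokens."""
    tokens = [t for t in DELIMITERS.split(log_line) if t]
    out = []
    for tok in tokens:
        for pattern, placeholder in PARAM_PATTERNS:
            if pattern.match(tok):
                tok = placeholder
                break
        else:
            # Any token containing a digit is also treated as a parameter.
            if any(c.isdigit() for c in tok):
                tok = "<PARAM>"
        out.append(tok)
    return out
```

The result is a token sequence in which fixed message words survive intact while variable fields collapse to a small vocabulary of placeholders, which is what makes later grouping cheap.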
The intuition is that logs with similar content will have similar sequences of messages and parameters, which can be represented as tokens for easy comparison. The first layer of the parser tree is assigned based on the total number of tokens in the preprocessed log; because logs of the same format typically have a fixed number of tokens, this step quickly groups associated logs. Subsequent layers of the parser tree are fit according to the token in the nth position of the preprocessed log, where n comes from a pre-defined set of token positions. To prevent parser tree explosion, if the number of branches at any given layer exceeds a threshold, then a wildcard token is used for all new tokens.
Once the bottom of the parser tree is reached, if there are no other logs already at this position of the parser tree, then the incoming log is used as a new log format. Otherwise, the incoming log is compared to the log formats at this position using token edit distance.
If the incoming log is sufficiently similar to an existing log format, then the log is associated with that format. In addition, the log format is updated such that any dissimilar tokens between the log format and the incoming log are replaced with wildcard tokens. If the incoming log is insufficiently similar to all existing log formats, then the incoming log is used as a new log format.
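The leaf-level matching and merging described above can be sketched as follows, assuming equal-length token lists (guaranteed by the first parser-tree layer, which groups by token count) and an illustrative 0.6 similarity threshold:

```python
WILDCARD = "<*>"

def similarity(fmt: list[str], log: list[str]) -> float:
    """Fraction of positions where tokens match; wildcards match anything.
    Logs reaching the same leaf have equal token counts by construction."""
    matches = sum(1 for f, t in zip(fmt, log) if f == t or f == WILDCARD)
    return matches / len(fmt)

def merge(fmt: list[str], log: list[str]) -> list[str]:
    """Replace dissimilar positions in the format with wildcard tokens."""
    return [f if f == t or f == WILDCARD else WILDCARD for f, t in zip(fmt, log)]

def associate(formats: list[list[str]], log: list[str], threshold: float = 0.6) -> int:
    """Match a log against the formats at a leaf: merge into the closest
    format on a hit, or register the log as a new format on a miss.
    Returns the index of the associated format."""
    best, best_sim = None, -1.0
    for i, fmt in enumerate(formats):
        s = similarity(fmt, log)
        if s > best_sim:
            best, best_sim = i, s
    if best is not None and best_sim >= threshold:
        formats[best] = merge(formats[best], log)
        return best
    formats.append(list(log))
    return len(formats) - 1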
Once all training logs have been processed, all the log formats generated by the parser tree are returned. Particularly rare log formats are dropped. For each log format, we generate a log template which includes: a regular expression pattern capable of matching logs associated with the template, a set of the tokens contained in the log template, and a log template score based on the frequency that the log template appeared in the training set.
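A minimal sketch of that template generation step follows; the dictionary layout and the negative-log-frequency score are assumptions for illustration, since the exact scoring function is not spelled out above:

```python
import math
import re

def build_template(fmt_tokens: list[str], count: int, total: int) -> dict:
    """Build a template from a log format: a regex that matches associated
    logs, the set of fixed tokens, and a rarity score. The score form
    (negative log of relative frequency) is an illustrative choice."""
    pattern = re.compile(
        r"\s+".join(r"\S+" if t.startswith("<") else re.escape(t)
                    for t in fmt_tokens)
    )
    return {
        "pattern": pattern,
        # Placeholder tokens (e.g. "<*>", "<PARAM>") are excluded from the set.
        "tokens": frozenset(t for t in fmt_tokens if not t.startswith("<")),
        "score": -math.log(count / total),  # rarer templates score higher
    }
```

Note that a common template (high `count`) receives a score near zero, while a rarely seen one receives a large score, which is the behavior the anomaly threshold later relies on.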
Another set of recent log data is used to generate a score threshold. Logs in this dataset are scored using the log templates. To score, an incoming log is preprocessed and checked to see if it matches any log templates’ regular expression pattern. If the incoming log matches a regular expression pattern, then it is assigned the score associated with the log template that it matches.
Otherwise, the incoming log’s tokens are compared to each of the log templates’ token sets using Jaccard similarity. The incoming log is associated with the log template that has the most similar token set and assigned a score based on the log template’s score and the similarity between the token sets. This matching process allows previously unseen log formats to be assigned appropriate rarity scores.
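The scoring fallback might be sketched like this, assuming templates are stored as dictionaries with `pattern`, `tokens`, and `score` fields (a hypothetical representation) and using an additive dissimilarity penalty as an illustrative choice:

```python
def jaccard(a: frozenset, b: frozenset) -> float:
    """Jaccard similarity between two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def score_log(log_tokens: list[str], templates: list[dict]) -> float:
    """Score a preprocessed log against a non-empty template list. An exact
    regex match takes the template's score; otherwise the nearest token set
    is used, with dissimilarity inflating the score (penalty form assumed)."""
    text = " ".join(log_tokens)
    for tpl in templates:
        if tpl["pattern"].fullmatch(text):
            return tpl["score"]
    log_set = frozenset(t for t in log_tokens if not t.startswith("<"))
    best = max(templates, key=lambda t: jaccard(log_set, t["tokens"]))
    sim = jaccard(log_set, best["tokens"])
    return best["score"] + (1.0 - sim)  # dissimilarity adds to rarity
```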
Once the entire score-threshold dataset has been processed, the full set of assigned scores is considered. Based on these scores, a global score threshold is determined using the percentile score and standard deviation of the set of all scores.
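One plausible way to combine a percentile and the standard deviation is shown below; the particular combination (95th percentile plus two standard deviations) is an assumption, as the text above does not specify it:

```python
import statistics

def score_threshold(scores: list[float], percentile: float = 0.95, k: float = 2.0) -> float:
    """Global anomaly threshold: the percentile score plus a multiple of the
    population standard deviation of all assigned scores."""
    ordered = sorted(scores)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] + k * statistics.pstdev(ordered)
```

With this construction, only logs scoring well into the tail of the training-time score distribution are flagged as anomalous at inference.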
Inference
In contrast to the training process, inference occurs in a streaming environment and finds anomalous records in near real-time.
As in the score threshold process, incoming logs are preprocessed and either matched to log templates using regular expression patterns or associated via token set similarity. The incoming log is assigned either the matched template’s score or, in the token-similarity case, the matched template’s score adjusted by a similarity measure. If the incoming log’s score is greater than the score threshold, then the log is considered anomalous.
To facilitate real-time analysis, we utilize AWS Lambda for conducting inference and a DynamoDB table for template storage. We’re able to spin up additional Lambda instances automatically and adjust DynamoDB read capacity as demand requires.
Classifying logs and assigning rarity scores opens a wide variety of analysis opportunities: keyword lists that would otherwise generate numerous false positives are more effective when applied to the anomalous dataset. Log template associations enable time series analysis of specific log messages. By determining the location of parameters, specific information can be extracted and analyzed, even in unseen log formats. At Adlumin, we utilize a variety of these techniques to provide preventative alerts of specific threats, malfunctions, and IT operations failures.
Adlumin’s Cybersecurity Maturity Model Certification (CMMC) Assessment feature is a tool to help streamline an organization’s preparation for the U.S. DoD’s CMMC.
CMMC is a unified cybersecurity standard intended to guide DoD contractors in implementing the cybersecurity processes and practices associated with the achievement of a cybersecurity maturity level. CMMC maturity levels range from Level 1 to Level 5, and cybersecurity maturity is assessed across 17 cybersecurity domains.
The CMMC is designed to provide increased assurance to the Department that a contracting company can adequately protect sensitive, controlled unclassified information (CUI) and/or federal contract information (FCI).
Adlumin’s CMMC Assessment feature is an easy-to-use self-assessment tool that gauges an organization’s progress towards achieving the appropriate target CMMC maturity level. The feature’s core functionality includes:
A dashboard providing a high-level overview of an organization’s current compliance level across all 17 CMMC domains based on the answers to a self-assessment.
Visualizations to easily identify gaps in an organization’s cybersecurity processes and practices that will prevent attainment of the target maturity level.
The ability to note and manage tasks, which are required to improve an organization’s cybersecurity maturity.
On-demand PDF reports that reflect the results of the self-assessment and report the current compliance level across each of the 17 CMMC cybersecurity domains.
Ransomware attacks work by encrypting critical data on a victim’s devices and network and demanding payment, generally in cryptocurrency, for its release. These attacks have become increasingly prevalent and result in losses worth hundreds of millions of dollars every year, with attackers targeting critical infrastructure, government agencies, and financial institutions. Once perpetrators gain access to a victim’s network through any exploited lapse in security, they deploy malware designed to pressure the victim into paying the requested ransom. The full scope of the attack is often not known until well after the attack has concluded and the attacker has been able to spread the malware across the network, inflicting maximum damage on critical data.
Adlumin Data Science has developed a machine learning algorithm for detecting ransomware attacks via comprehensive monitoring of changes across the entire file system. The detection system measures the number of access events, specifically monitoring Write/WriteAttribute (Windows Event ID 4663) and Delete (Windows Event ID 4660) events. These access events provide a clear footprint for encryption and deletion occurring across the network, which may be indicative of a system-wide ransomware attack. The process is made possible through Adlumin’s serverless data pipeline in the cloud, which allows the algorithm to collect and monitor file access events in near real-time. Traditional file auditing processes can be used to monitor these events; however, the volume of files read/deleted on a network tends to overwhelm such systems.
The newly developed ransomware detection model monitors the volume of these three events independently of each other per user across the entire network, looking for anomalous spikes in aggregate activity during specific time windows using historical data as a benchmark. If the amount of activity (either write or deletion) exceeds a model-determined threshold relative to the rest of the activity on the network, a detection will be sent for investigation. This proactive monitoring may allow security analysts to quarantine and isolate portions of their network that are being hit by excessive encryption and deletion before the attacker is able to spread the attack to the rest of the system.
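A simplified sketch of the per-window spike check follows, substituting a three-sigma rule over historical window counts for the model-determined threshold described above:

```python
import statistics

def spike_detections(counts_by_window: list[tuple[str, int]],
                     history: list[int], k: float = 3.0) -> list[tuple[str, int]]:
    """Flag (window, count) pairs whose event count exceeds the historical
    baseline by more than k standard deviations. The 3-sigma rule here is an
    illustrative stand-in for the model-determined threshold."""
    mean = statistics.mean(history)
    std = statistics.pstdev(history) or 1.0  # guard against a flat history
    threshold = mean + k * std
    return [(window, count) for window, count in counts_by_window if count > threshold]
```

In practice this runs per user and per event type (write vs. delete), so a burst of deletions by a single account stands out against that account’s own history.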
In addition to monitoring for excessive levels of file access events, the algorithm analyzes the distribution of objects modified and/or deleted across the network. If the majority of activity is focused in a single subdirectory, which could be associated with software installation or anti-virus scans, the model will not externally raise a detection. However, if the spike in activity is spread across multiple subdirectories, indicative of system-wide activity, the model will raise a detection.
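A toy version of that distribution check might look like this, with a hypothetical 80% concentration cutoff for deciding that activity is localized:

```python
from collections import Counter
from pathlib import PureWindowsPath

def is_widespread(file_paths: list[str], max_share: float = 0.8) -> bool:
    """Return True when modified/deleted files are spread across multiple
    subdirectories (system-wide activity) rather than concentrated in one
    (e.g., an installer or AV scan). The 80% cutoff is an assumption."""
    dirs = Counter(str(PureWindowsPath(p).parent) for p in file_paths)
    top_share = dirs.most_common(1)[0][1] / len(file_paths)
    return top_share < max_share
```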
Below is an example of a theoretical ransomware detection. The detection view allows Adlumin users to view the aggregate file access information that triggered the detection, as well as a sample of the individual files that were accessed, allowing security analysts to determine where the activity is occurring. Analysts can further click on each file access event to view more detailed information surrounding the event.
As the world begins to redefine and establish a new normal after lockdown, cyberattacks are not letting up. From attacks on small businesses to the latest Colonial Pipeline attack, ransomware continuously presents itself as a significant threat to cybersecurity.
According to Cybercrime Magazine, experts predict there will be a ransomware attack every 11 seconds in 2021. Scary to think about, right? The power lies in organizations’ hands to stay vigilant and protected from ransomware attacks and groups like DarkSide. Now, consider this: if cyberattacks have the power to take down some of the largest businesses in the world, what can they do to your organization?
Let’s step into the world of ransomware to gain a deeper understanding of what is happening in one of the industry’s fastest-growing cyber nightmares.
Ransomware’s Truth
As stated by the Cybersecurity and Infrastructure Security Agency (CISA), ransomware is “an ever-evolving form of malware designed to encrypt files on a device, rendering any files and the systems that rely on them unusable. Malicious actors then demand ransom in exchange for decryption. Ransomware actors often target and threaten to sell or leak exfiltrated data or authentication information if the ransom is not paid.”
As cybercriminals grow sharper by the day, it is no surprise that ransomware has become a 1.4-billion-dollar industry. This dangerous type of malware can do irreparable damage, as it uses vulnerabilities to infect an organization’s overall system or network. Through its various methods, including screen lockers, encryption, and scareware, ransomware can terrorize every industry. Your organization should consider what your cybersecurity infrastructure looks like; if you are not confident that it is built to withstand powerful attack attempts, it is time to start looking for ways to improve.
What is DarkSide Ransomware?
DarkSide is a group that stopped us all in our tracks. Consumers lined up at every gas station down the East Coast, filling up their cars, containers, and even plastic bags after the group struck its latest victim: Colonial Pipeline Co., the largest fuel pipeline in the United States. According to CISA:
DarkSide “is ransomware-as-a-service (RaaS)—the developers of the ransomware receive a share of the proceeds from the cybercriminal actors who deploy it, known as ‘affiliates.’ According to open-source reporting, since August 2020, DarkSide actors have been targeting multiple large, high-revenue organizations, resulting in the encryption and theft of sensitive data. The DarkSide group has publicly stated that they prefer to target organizations that can afford to pay large ransoms instead of hospitals, schools, non-profits, and governments.”
Groups like DarkSide are only going to get faster, smarter, and harder to defeat. This truth poses a critical message to the cybersecurity industry that now is the time to tighten up. This recent breach has left the entire country wondering: what’s next?
The Basics of a Ransomware Attack
Ransomware at the most basic level can infect your computer in many ways. According to UC Berkeley’s Information Security Office, “Ransomware is often spread through phishing emails that contain malicious attachments or through drive-by downloading. Drive-by downloading occurs when a user unknowingly visits an infected website and then malware is downloaded and installed without the user’s knowledge.” Once cybercriminals have gained access, they often stay in your network for weeks or months before deploying ransomware.
Using the Colonial Pipeline as an example, Insider reported the company’s CEO revealing that attackers targeted a system that relied on a single password instead of multi-factor authentication. The password for the organization’s virtual private network (VPN) had previously been leaked on the dark web, leaving the account compromised. Consequently, this single compromised account caused a massive domino effect that greatly impacted the country’s economy, cybersecurity posture, and more.
While Colonial Pipeline is the most recent high-profile victim of a ransomware attack, the next organization is not far behind. Let’s just hope they are more prepared. This is where the importance of cybersecurity tools and protocols enters the conversation. Organizations are responsible for their cybersecurity posture and creating a safe network environment. Shifting that responsibility from IT teams to a security and compliance automation platform with a Darknet Exposure Module gives you added layers of security and avoids human error.
How to Prevent a Ransomware Attack
The reality is that ransomware is a severe problem. If you do not get ahead of it, your organization could face detrimental consequences. As stated above, ransomware intruders typically gain access to networks using compromised accounts that have been stolen using malware or that the intruder has purchased from the deep and dark web.
A security and compliance automation platform with User and Entity Behavior Analytics (UEBA) is the key to stopping ransomware cold. This feature will assist you in detecting lateral movement or anomalous account activity once an attacker has entered your network to monitor the target environment. UEBA lays down a pattern of behavior for every system and account on your network and searches 24/7 for anomalies that provide clues to lateral movement or unusual activity by compromised accounts belonging to legitimate network users.
The only way to avoid becoming a new breach statistic is to invest in strengthening your organization’s cybersecurity posture. When you decide to put your network in the hands of tools designed to protect it, you will find yourself able to work smarter, not harder, to remain resilient against cybercriminals.
One of the most powerful features that Adlumin provides is the ability to integrate with third-party devices and applications. By aggregating event logs from all your devices and applications, Adlumin delivers a single pane of glass for tracking anomalies, identifying vulnerabilities, and managing your overall network security.
Traditional integration methods, which often required cumbersome on-premises solutions, are no longer compatible with the way we work. Remote work and the ever-increasing role of SaaS in the enterprise require that platform providers like Adlumin offer cloud-native solutions to integrate with external applications. Collecting and analyzing this data is not easy and becomes exponentially more challenging as the number of SaaS services increases. Adlumin aims to make this process as simple as possible.
The Adlumin platform has robust, native support for an ever-increasing list of cloud-based SaaS solutions, ranging from network and endpoint security solutions to office and collaboration suites. Adlumin collects data from the providers you select, parses that data into a native format, and then correlates the events across your existing data. This analysis and correlation make it simple to track incidents and events across multiple platforms while also alerting you in real-time to potential threats.
This past month, Adlumin launched two exciting new third-party API integrations: AWS and Google Workspace.
AWS is the world’s leading cloud platform, and now you can automatically collect, track and alert on any IAM user events directly from Adlumin.
Google Workspace is the premier cloud-based office and productivity suite; with Adlumin’s Google Workspace integration, audit logs are automatically ingested for all Workspace services.
Network and user activity from AWS and Google Workspace is automatically associated with existing data in the platform and can be easily searched and cross-referenced. Custom detections can be created for both AWS and Google Workspace to alert on any user-specified criteria that appear in the event logs.
Integrations are an essential part of the Adlumin platform. These latest additions add even more power and functionality to our core product. The rate of development for third-party integrations is ramping up, and in the coming months, we will be announcing support for several more high-profile SaaS offerings. Adlumin never stops working to give our customers the most significant event correlation and analysis platform on the market, with the features and functionality they demand.