5 critical capabilities for 2019

BY ROBERT JOHNSTON

We can add NASA to the list of recent federal cyber breach victims. The space agency disclosed in late December that hackers found their way into its servers in October 2018. While NASA is still investigating the extent of the breach, the agency knows the hackers accessed personal data of both former and current employees. Unfortunately, other agencies will surely find themselves in NASA’s shoes in 2019. Here’s why:

Cyber criminals know government IT pros have limited budgets that create resource challenges when it comes to securing a daunting array of technologies and data flows. This makes agencies at all levels of government target-rich environments for hackers. So, what’s the answer? How can government IT leaders take control of their data and reduce their vulnerability to bad actors in 2019?

The solution is straightforward, but multilayered. Government agency CIOs and CTOs need a hub-and-spoke system to collect and index data from all their IT touchpoints. These include network traffic, web servers, VPNs, firewalls, applications, hypervisors, GPS systems and pre-existing structured databases. For optimal cyber protection, all those data feeds should be run through an artificial-intelligence-authored security information and event management (SIEM) system equipped with machine-learning-powered analytics to identify anomalous and malicious patterns.

The hub-and-spoke approach should enable five critical capabilities: log/device management, analytics, account/system context, visualization of user privileges across an entire network, and long-term viability. Here’s a walk-through of the capabilities and why they matter.

1. Log/device management: This piece should include unlimited and automated coverage of logs, devices and systems as well as integrated compliance management. It should also provide real-time event log management, Windows and Linux server management, cloud and on-premise ingest, secure and encrypted log management and log data normalization.

2. Analytics: Data today is too voluminous for human analysis, so using AI and machine learning to analyze large amounts of data makes the most sense. Agencies should look for a single platform that provides automated threat intelligence, real-time intrusion detection alerts, 24/7 network vulnerability assessment, and user and device context.

3. Account/system context: Speed is essential, so agencies should look for a system that provides one-click, automated risk reporting for auditors and decision-makers that takes minutes rather than days.

4. Visualized permissions: Because cybersecurity conditions and requirements quickly change, agencies need the ability to visualize privileged users and groups in real-time across the network in order to understand who can touch an agency’s data.

5. Long-term viability: Will an agency’s technology still be viable in one, two or five years? It’s an important question, but one that is often mistakenly answered with a yes. The era of on-premise architectures is over because they are flawed by design. Tied to the constraints of initial deployment, these systems are allergic to architecture migration, software redesign, advancements in analytic capabilities and new database implementation. In the cloud, however, organizations can develop a symbiotic relationship between the service they use and new cutting-edge technologies. With today’s cybersecurity threats, agencies need to be bigger, faster and stronger than the adversary, and the cloud gives them the opportunity to deploy the best solutions available.

The hub-and-spoke approach gives government agencies a fighting chance to keep data out of hackers’ hands. What used to be nice to have is now essential. There’s just too much at stake.

About the Author

Robert Johnston is the co-founder and CEO at Adlumin Inc.

Healthcare IT: How to Take Control of Disparate Data Sources

Here’s what we know. The number of healthcare data breaches has trended steadily higher over the past decade, in part because cyber criminals know healthcare IT pros are distracted and juggling multiple priorities. From IoT to traditional Windows networks, healthcare is a huge hacking target because managing and securing the large array of technologies and multiple data flows is overwhelming.

Plus, resource-constrained healthcare organizations struggle to find enough qualified security personnel, time and budget to mount a consistently effective cyber defense. And with the next big breach lurking, stakeholders are asking if it is possible for a hospital or health system to take control of its data and make itself less vulnerable to bad actors. The answer is “yes,” but it will take commitment and a seriousness of purpose to be effective.

The best strategy is a hub-and-spoke system that collects and indexes data from the numerous sources common in a healthcare setting. These include network traffic, web servers, VPNs, firewalls, custom applications, application servers, hypervisors, GPS systems, and pre-existing structured databases. But this is only the first step, because in today’s threat environment even that array of capabilities won’t be enough. Healthcare organizations need to be on high alert in their cyber-protection game, which begins by running all data feeds through an artificial-intelligence-authored security information and event management (SIEM) system. This needs to be equipped with machine-learning-powered analytics to identify anomalous and malicious patterns.

The next level of protection for healthcare CIOs, CTOs, IT and data management pros to implement is making sure their hub-and-spoke systems provide four critical capabilities: log/device management, world-class analytics, account/system context, and the ability to visualize privileges across their entire network. All of these capabilities can be delivered through a single platform, as we have done at Adlumin. Below is an in-depth review of each that every healthcare IT executive should follow:

  1. The log/device management piece should include unlimited log/device/system coverage, integrated compliance management (PCI DSS, HIPAA, SOX, FFIEC), automated log and device ingest, and critical server log management. It also needs real-time event log management, Windows and Linux server management, cloud and on-premise ingest, secure and encrypted log management, and log data normalization. Storage and processing are a commodity. The days of not being able to handle your production workload are over. Security vendors should not be asking for 90% of your budget to solve only 10% of your problems.
  2. For the analytics, find a single platform that provides automated threat intelligence, real-time intrusion detection alerts, 24/7 network vulnerability assessment, automatic analysis of firewall and VPN log data alongside network account data, automated anomaly interpretation, and user and device context. There is simply too much data for a human to analyze. Using artificial intelligence and machine learning to analyze large amounts of data so you don’t have to is the perfect remedy. So, drop your log management solution and replace it with a cloud-native SIEM.
  3. The account/system context should include risk management, visualization, and analysis, plus automated reporting for auditors and compliance. It should provide the ability to understand risk with one button click, enabling decision making that takes minutes rather than days. And it should power compliance audit reports. Being able to understand risk and compliance with a single click will make your response time and network security that much better over time.
  4. Finally, the ability to visualize privileged users and groups across the network reveals exactly who can touch a healthcare organization’s most sensitive data. Every healthcare IT executive needs to identify the groups and individuals that have privilege on share drives, and show auditors actual account privilege in real time. A picture is worth a thousand words. Being able to visualize privilege within your environment lets you get your job done faster and take that 2-hour lunch break you so deserve!

Chaos and lack of focus make a healthcare IT operation a ready mark for bad actors. The hub-and-spoke system outlined above, with the additional capabilities, gives healthcare data pros a fighting chance to keep vital patient and employee data out of hackers’ hands.

Robert Johnston is the co-founder & chief executive officer at Adlumin, Inc., and is the cyber detective and strategic thinker who solved the Democratic National Committee hack during the 2016 U.S. presidential campaign. He can be reached at robert.johnston@adlumin.com (https://adlumin.com) or on Twitter at @dvgsecurity and @adlumin.

About Adlumin Inc.

Adlumin Inc. was founded in 2016 by Robert Johnston and Timothy Evans, experienced Marine Corps leaders who both spent time at the National Security Agency (NSA). Leveraging its principals’ extensive knowledge and experience in the cybersecurity incident response, offensive, and defense arenas, Adlumin has developed a next-generation artificial intelligence SIEM platform that detects and confirms identity theft and allows its users to respond in real time. Adlumin is a cost-effective Software as a Service (SaaS) solution designed to stop cyber intrusions and data breaches.

PRESS CONTACT
Timothy Evans
Adlumin Inc.
P: (571) 334-4777
E: timothy.evans@adlumin.com

Dynamic Anomaly Detection Using Machine Learning

by Dr. Tim Stacey

User Behavior Analytics is an incredibly hot field right now – software engineers and cybersecurity experts alike have realized that the power of data science can be harnessed to comb through logs, analyze user events, and target activity that stands out from the crowd. Previously, the gold standard for this process was manual, based on exhaustive queries against large databases. These investigations also happened ex post facto, after the hack or intrusion occurred, to diagnose what actually happened.

At Adlumin, we’ve sought to create a proactive product that reduces the amount of intensive data work that a cybersecurity specialist needs to perform. We’ve had analytics in production since inception, but today we’d like to introduce a new product that will make finding new malicious activity even easier.

Our new Rapid User Behavior Alerts will pick up on novel user behavior in a range of event types, specifically targeting combinations of attributes or actions that have never been seen before on a network. These Rapid Alerts come out within seconds of the Adlumin platform receiving the data, notifying sysadmins that something unexpected has occurred on their network.

Importantly, we have tuned our new data science engine to have high tolerances for power users (e.g., sysadmins) while triggering at lower tolerances for users that have a limited range of behaviors. This is crucial for reducing over-flagging on novel behavior. Our goal is to transmit high-impact findings reliably and quickly and avoid spamming the end user with bad alerts.

Our analytics engine takes advantage of an auto-encoding neural network framework, finding the difference between previous and current modes of user behavior in a heavily non-linear space. By passing the event through a trained auto-encoder, we determine the reconstruction error of an incoming event – this is a measure of the anomalous nature of a user’s actions. Since the anomalous characteristics of the incoming event are condensed to a single number, we can grade this number against a distribution of the user’s previous events to determine if this incoming event is truly different.
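
To make the scoring step concrete, here is a minimal sketch (not our production model) that reduces a trained auto-encoder to a pair of weight matrices and grades a new event against the user's history; all names and shapes are illustrative:

    # Minimal illustrative sketch, not Adlumin's production model.
    # A trained auto-encoder is reduced to two weight matrices here.
    import numpy as np

    def reconstruction_error(event_vec, W_enc, W_dec):
        """Encode then decode the event; a large error means novel behavior."""
        hidden = np.tanh(W_enc @ event_vec)    # compress to a smaller space
        rebuilt = W_dec @ hidden               # attempt to rebuild the input
        return float(np.sum((event_vec - rebuilt) ** 2))

    def is_anomalous(error, past_errors, pct=99):
        """Grade the single error number against the user's own history."""
        return error > np.percentile(past_errors, pct)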

Our fast evaluation of incoming data is made possible with the assistance of AWS DynamoDB and AWS Lambda. Pre-trained user models live in our Dynamo tables—these models are quickly queried for each event, as we process thousands or hundreds of thousands of events per second. Our Lambdas evaluate the incoming data against the queried baseline and produce a threat score with an interpretation of what caused the threat. Our baselines are updated frequently on a schedule to account for the relatively fast drift in user behavior over time.
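
A rough sketch of that serverless pattern follows; the table, key, and field names are hypothetical rather than our actual schema, and evaluate() stands in for the real model:

    # Rough sketch of the serverless scoring pattern; table and field
    # names are hypothetical, and evaluate() is a stand-in for the model.
    import json
    import boto3

    baselines = boto3.resource("dynamodb").Table("user-baselines")

    def evaluate(record, model):
        # Stand-in: in practice this computes reconstruction error
        # against the stored per-user weights.
        return 0.0

    def handler(event, context):
        results = []
        for record in event["records"]:
            item = baselines.get_item(
                Key={"username": record["username"]}).get("Item")
            if item is None:
                continue                       # no baseline trained yet
            score = evaluate(record, item["model"])
            results.append({"username": record["username"],
                            "threat_score": score})
        return {"statusCode": 200, "body": json.dumps(results)}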

In the coming months, Adlumin will be rolling out analytics specifically targeted to log data, system behavior, and a more detailed analysis dependent on cold storage of data. Rapid User Behavior Alerts are the first line of defense as we develop a suite of analytics to protect your network from harm.

BIO: Dr. Tim Stacey is the Director of Data Science for Adlumin Inc., a cybersecurity software firm based in Arlington, VA. At Adlumin, his work primarily focuses on user behavior analytics. His experience includes designing analytics for Caterpillar, the RAND Corporation, and the International Monetary Fund. He holds a Ph.D. in computational chemistry from the University of Wisconsin–Madison.

 

Running Linux Systems in the Enterprise is Just Good Business

Connected World

By Milind Gangwani

Linux systems are increasingly prevalent in the enterprise, typically depended on for running business operations software, web applications, cloud technology, internet of things devices, and core banking software. However, these systems are often the most neglected in security. Furthermore, custom Linux variants pose a significant issue for many customers. Custom variants can respond very differently than traditional core Linux variants like Ubuntu, Red Hat, and Fedora, and because of the high variability across kernels and builds, IT staff often leave these systems unattended despite their crucial role in IT security infrastructure.

Linux is the preferred operating system for cloud deployments, and Adlumin typically sees configurations with either Debian or Fedora as the variant of choice. Even container technology, like Docker, utilizes a Linux kernel.

While Linux, as an operating system, is used to manage applications, conduct computing, and process large volumes of data, organizations struggle to easily monitor and analyze activity transactions.

Here at Adlumin, we have designed and developed a Linux daemon that feeds into our cloud-native SIEM (Security Information & Event Management) technology. Our Linux forwarder can be installed on any Debian or Fedora build; more precisely, it can be deployed on Fedora and Debian from version 6.0 onward through the latest release. These base builds include Linux variants like Red Hat, Ubuntu, CentOS, and many more.

The installation of the forwarder is incredibly simple and takes just minutes. Upon installation it scans resident kernel libraries looking for the correct setup procedures, and it requires no intermediate dependencies to run seamlessly. The daemon configuration is such that, even if a sudo/root user were to tamper with the process, the daemon would restart silently.
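
That self-restarting behavior is the kind of thing a systemd unit can express. A minimal illustrative unit file, with an assumed service name and install path (not the product's actual layout), might look like:

    # /etc/systemd/system/adlumin-forwarder.service -- illustrative only
    [Unit]
    Description=Adlumin Linux log forwarder
    After=network-online.target

    [Service]
    ExecStart=/opt/adlumin/forwarder
    # Restart the daemon automatically if the process is stopped or killed
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target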

The forwarder uses a simple but effective approach. Initial information is collected from the ‘/etc/os-release’ or ‘/etc/system-release’ paths. Subsequently, all account, privilege, share, and permission data sets are collected from a variety of sources. This permits Adlumin to make an excellent assessment of risk by understanding what access points may be vulnerable to attack.

The Linux forwarder was developed using Google’s Go (Golang) programming language. Go is known for its lightweight footprint and efficiency, and it compiles to a self-contained binary that can be deployed anywhere without an interpreter.

Using a hybrid combination of the Go operating system API, traditional APIs, and external binary inclusion, Adlumin can now provide a lightweight forwarder that is effective on almost any Linux operating system version.

This Linux forwarder is part of our holistic intrusion detection approach, monitoring, reporting, and analyzing user and entity behavior. The underlying technology and analytics are designed to traverse Windows, Linux, and any network device you can dream of, using data points from all sources to instantaneously produce a conclusion.

Biography: Milind Gangwani is a full-stack developer at Adlumin and has been in development for more than 10 years. Prior to joining Adlumin, Milind was a senior developer at Salient CRGT at the U.S. Patent Office. He has a master’s degree in computer vision from Rochester Institute of Technology.

 

Adlumin Secure Device Data Collector Application

August 28, 2018
By: Dan McQuade

One of the challenges we face as a cloud-based SIEM platform is the process of collecting data from a variety of disparate sources on a local network, and securely transmitting that data into our platform over the internet. These sources can include end-user PCs, Windows/UNIX/Linux servers, firewalls, VPN servers, network security monitoring devices, and more. For traditional end-user desktops and servers, Adlumin has addressed this problem with custom applications that monitor activity and securely transmit the data into our platform for analysis. For hardware devices such as firewalls and VPN servers, the problem is a bit more challenging, as there is usually no easy way to install custom software on such devices.

A common feature amongst firewalls and other network-based hardware devices is the ability to forward log data in syslog format to an external source. One of the benefits of dealing with syslog data is that it usually conforms to one of a handful of standards (RFC 3164, RFC 5424, etc.), and can therefore be easily parsed for analysis on the receiving end. However, the transmission generally occurs over TCP or UDP as unencrypted plain text, and therefore transmitting such data over the public internet to the Adlumin platform is not an option. We needed a way to capture syslog data and securely forward it into our platform for analysis. Enter the Adlumin Syslog Collector.

The Adlumin Syslog Collector is a custom application written in Python, which runs on a Linux-based virtual machine as a systemd service. The application listens on numerous pre-defined TCP and UDP ports, securely forwarding all incoming data over an encrypted TLS connection to the Adlumin platform for collection and analysis. Once ingested, syslog data is immediately available to be viewed and searched through using the Adlumin dashboard. Powerful visualizations are generated in real-time, giving users the ability to spot patterns and identify threats as they occur.
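
A stripped-down sketch of that listen-and-forward pattern follows; the single UDP listener, port, and endpoint hostname are illustrative simplifications, not the actual application:

    # Stripped-down sketch of the listen-and-forward pattern; the port and
    # endpoint are illustrative, and real listeners handle both TCP and UDP.
    import socket
    import ssl

    LISTEN_PORT = 514                          # local plain-text syslog
    UPSTREAM = ("ingest.example.com", 6514)    # hypothetical TLS endpoint

    def run():
        listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        listener.bind(("0.0.0.0", LISTEN_PORT))

        context = ssl.create_default_context()
        upstream = context.wrap_socket(
            socket.create_connection(UPSTREAM),
            server_hostname=UPSTREAM[0])       # encrypt everything outbound

        while True:
            data, _addr = listener.recvfrom(65535)
            upstream.sendall(data)             # forward as-is; parsing happens upstream

    if __name__ == "__main__":
        run()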

We designed the syslog collector with ease-of-use in mind, and in less than 15 minutes it can be fully up and running, ready to receive and forward data. It offers a user-friendly GUI, which allows it to be installed and configured even if the end-user isn’t proficient with Linux or the command line. The application is shipped as a single-file OVA (Open Virtual Appliance) and is capable of running under most modern hypervisors (VMware, VirtualBox, etc.). The configuration required to deploy the Adlumin Syslog Collector is very straightforward. The only steps required to get up and running are as follows:

  1. Load the OVA into the hypervisor and boot the system
  2. Change the default password
  3. Enter the client-specific Adlumin endpoints
  4. Configure the network interface
  5. Set the time zone on the virtual machine
  6. Verify the configuration
  7. Route syslog traffic to the forwarder

Once the initial setup is completed, no further intervention is required of the end-user. As long as the virtual machine is running, the application will securely forward all received data to the Adlumin platform. Out of the box, the application has eight built-in listeners for a variety of syslog data sources. These include: firewall, VPN, network security device (e.g., FireEye NX), endpoint security, Carbon Black, and two miscellaneous listeners. Each listener resides on a unique TCP or UDP port (specified in the documentation). Support for additional listeners and data sources is constantly being added, based on requests and feedback we receive from our clients.

To keep up with the dynamic threat landscape, modern SIEMs must be able to interpret massive amounts of log data from a wide variety of applications and devices that reside on an enterprise network. Traditional on-premise SIEMs can become overloaded with this data, and it may take the user hours to sort through it all. The Adlumin Syslog Collector filters and normalizes syslog data in our cloud-based platform at unparalleled speed, in order to paint a more complete picture of the activities occurring on a network and to alert on anomalous events as they occur in real-time.


Enterprise-Level Data Science with a Skeleton Crew

Originally published on datascience.com

“Data science is a team sport.” This axiom has been repeated since as early as 2013 to articulate that there is no unicorn data scientist, no single person who can do it all. Many companies have followed this wisdom, fielding massive data science operations.

But more often than not, a big data science team isn’t an option. You’re a couple of techy people trying to make waves in a bigger organization, or a small shop that depends on analytics as part of the product.

My company falls into the second camp—at Adlumin, we’re a small team that has a big enterprise-level problem: cybersecurity. We use anomaly detection to monitor user behavior, looking for malicious activity on a network. Because we need to catch intrusions quickly, we perform streaming analytics on the information we receive.

Two things allow us to succeed: building on the cloud and thoroughly testing our analytics. The cloud isn’t specifically for small teams, but it helps a small team compete with and exceed bigger competitors. Testing is our failsafe. By implementing useful tests on our analytics, we can have assurance that the models will perform when they’re released.

Below are three principles that I’ve distilled into part of a playbook for doing data science at scale with a small team.

1. The cloud is your friend.

One issue in data science is the disconnect between development and deployment. In a big company, the data scientists often have the luxury of creating something that won’t scale and then punting deployment to the engineers. Not so on our skeleton crew.

Enter the world of the cloud. By moving most of your dev ops to a cloud-based platform, you can work on getting the analytics stood up without any of the tricky details of database management or orchestration.

For streaming analytics, two great options exist: serverless and container-based.

Serverless analytics involve spinning up a process when data comes in, doing some number crunching, and then disappearing. This can be a cost saving measure because the server doesn’t have to be maintained to wait for new data. However, the analytics must be fairly lightweight—most serverless offerings will time out long before you can load up a big model.

Containers are more permanent. We still can have live, streaming analytics, but now a container will load the model and keep it ready to receive data all the time. This can be a useful configuration if the model is going to be large, the library requirements many, or the uptime constant. This is also a preferred method if you have a handful of workhorse models for all of your analytic needs.

At Adlumin, we aren’t drawing on heavy libraries and we need to evaluate many (>5000) models quickly, so a modification of the serverless option makes up the basis of our anomaly detection.

Our method starts by building a baseline model for each of our users, on a weekly schedule. We probe a large data store for user behavior data, build baselines (which are small weight matrices), and then store them in a fast NoSQL database.

To process live data, we collect user data in sessions, which are event streams broken into chunks. Once a session appears to be complete, we spin up a serverless process to read the session, query for the appropriate baseline, and evaluate the two together. A result gets passed to another database and the process dies, ready for the next session.
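
A compressed sketch of that weekly baseline job, with a hypothetical table and a mean/covariance matrix standing in for the real weight matrices, might look like:

    # Compressed sketch of the weekly baseline job; the table name, field
    # names, and the mean/covariance "weight matrix" are illustrative.
    import numpy as np
    import boto3

    table = boto3.resource("dynamodb").Table("user-baselines")

    def build_and_store_baseline(username, sessions):
        X = np.asarray(sessions, dtype=float)  # rows = sessions, cols = features
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)          # small matrix summarizing behavior
        table.put_item(Item={
            "username": username,
            # DynamoDB has no float type, so serialize before storing
            "mean": [str(v) for v in mean],
            "cov": [[str(v) for v in row] for row in cov],
        })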

2. Get something that works, then test it.

Sometimes testing seems more like a necessary evil. The best test might be the biggest hurdle when you’re on a tight deployment timeline.

But you need to find a way to evaluate whether your analytics are returning sensible results. Again, there are options:

  1. Real testing: Someone has entrusted you with a cherished “golden” data set. This data contains ground truth labels, and you can perform classic train-test splits, evaluate metrics, and other rigorous testing.
  2. Natural testing: Instead of being handed a data set, you can construct a ground truth from information external to your dataset. Join multiple data sets, manipulate metadata, or come up with another way to create a target.
  3. Artificial testing: Make a data set! This is a great inclusion in a testing suite, even if you have either the first or second option. You can create small data that will be evaluable every time you push new code (see the sketch after this list).
  4. Feel testing: Run your model on live data and observe the output. Does the output meet your or the users’ expectations? You want to know if you have a really noisy model, a quiet model, or something in between.
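
For the artificial option, a toy test can plant obvious anomalies in generated data and assert that they get flagged; the “model” below is a stand-in for illustration:

    # Toy "artificial testing" example: plant obvious anomalies in synthetic
    # data and assert the model flags them. Runs under pytest on every push.
    import numpy as np

    def make_synthetic_data(seed=0):
        rng = np.random.default_rng(seed)
        normal = rng.normal(0.0, 1.0, size=(100, 4))   # typical behavior
        anomalies = rng.normal(8.0, 1.0, size=(5, 4))  # clearly different
        return normal, anomalies

    def test_model_flags_planted_anomalies():
        normal, anomalies = make_synthetic_data()
        threshold = np.abs(normal).max()               # stand-in for the model
        flagged = (np.abs(anomalies) > threshold).any(axis=1)
        assert flagged.all()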

At Adlumin, we have some data that reflects ground truth. For instance, saved penetration testing data reflects what a single type of an attack might look like. This is a great opportunity to test out our models, but attacks can take a number of forms, which creates an upper bound on the utility of this data.

Additionally, we know a little bit about the users we monitor. For instance, many companies create service accounts to perform the same tasks, day in and day out. We test to see if we routinely flag these accounts, and if so, the models need to be heavily reworked.

Finally, we created our own data set, complete with data that reflects both normal and anomalous behavior. We integrated this into a test before model deployment.

3. Orchestrate a lot of things at once.

One additional item that makes this all work is orchestration. Orchestration assists our automated tasks by arranging the code and managing all of the moving parts.

We use a continuous integration system that puts all scripts into the right places (e.g. updating the script for serverless processes, and pushing new code to the baseline generation server) when we push any new code. We don’t have to scp anything into a server—the push to our code repository covers everything.

In addition, tests will automatically fire when code gets pushed. If the tests fail, the code won’t be updated and erroneous stuff won’t get deployed.

Updating the whole operation piecemeal would be tedious and error-prone. There are too many moving parts! Orchestration also allows us to move quickly. As soon as we develop new code, it can be run against tests and put into the right place without having to consider any additional steps. This frees up time and also headspace formerly preoccupied with deployment details.

There are many other aspects to making streaming analytics work in a small team, but these are three important ones. Doing enterprise-level data science with a skeleton crew can be challenging, but it is rewarding and fun!

 

ACTIVE DEFENSE AND “HACKING BACK”: A PRIMER

In the lead piece in this package, Idaho National Lab’s Andy Bochman puts forth a provocative idea: that no amount of spending on technology defenses can secure your critical systems or help you keep pace with hackers. To protect your most valuable information, he argues, you need to move beyond so-called cyber hygiene, the necessary but insufficient deployment of security software and network-monitoring processes.

ABOVE: Forts like HM Fort Roughs were marvels of defensive engineering at the time: capable of being brought to sea, sunk in place, and fully operational within 30 minutes.

Bochman lays out a framework that requires switching your focus from the benefits of efficiency to the costs. Ideas that were once anathema — unplug some systems from the internet, de-automate in some places, insert trusted humans back into the process — are now the smart play.

But they’re not the only play. Another that’s gaining attention is “active defense.” That might sound like Orwellian doublespeak, but it’s a real strategy. It involves going beyond passive monitoring and taking proactive measures to deal with the constant attacks on your network.

There’s just one problem: As active defense tactics gain popularity, the term’s definition and tenets have become a muddy mess. Most notably, active defense has been conflated with “hacking back” — attacking your attackers. The approaches are not synonymous; there are important differences with respect to ethics, legality, and effectiveness.

Active defense has a place in every company’s critical infrastructure-protection scheme. But to effectively deploy it, you need a proper understanding of what it is — and that’s tougher to come by than you might expect.

We enlisted two of the foremost experts on the topic to help us proffer an authoritative definition of active defense and give you a fundamental understanding of how to deploy it.

Dorothy Denning was an inaugural inductee into the National Cyber Security Hall of Fame. A fellow of the Association for Computing Machinery and a professor at the Naval Postgraduate School, she has written several books on cybersecurity, including Information Warfare and Security. She also coauthored a landmark paper on active defense, which states, “When properly understood, [active defense] is neither offensive nor necessarily dangerous.”

Robert M. Lee is a cofounder of Dragos, an industrial security firm. He conducted cyber operations for the NSA and U.S. Cyber Command from 2011 to 2015. In October 2017 his firm identified the first known malware written specifically to target industrial safety systems — in other words, its sole purpose was to damage or destroy systems meant to protect people. (The malware had been deployed that August against a petrochemical plant in Saudi Arabia, but the attack failed.) When asked about active defense, Lee sighs and asks flatly, “How are you defining it?” You can tell he’s had this conversation before. The number of people co-opting the term seems to have wearied him, and he’s happy to help bring clarity to the idea.

The following FAQ primer draws on interviews with Denning and Lee.

What exactly is active defense, also known as active cyber defense?

It depends on whom you ask. The term has almost as many definitions as it does citations. NATO defines active defense this way: “A proactive measure for detecting or obtaining information as to a cyber intrusion, cyber attack, or impending cyber operation or for determining the origin of an operation that involves launching a preemptive, preventive, or cyber counter-operation against the source.”

A solid working definition can be found in Denning’s paper with Bradley J. Strawser, “Active Cyber Defense: Applying Air Defense to the Cyber Domain”: “Active cyber defense is a direct defensive action taken to destroy, nullify, or reduce the effectiveness of cyber threats against friendly forces and assets.”

That sounds like offense, but Lee and Denning note that it describes a strictly defensive action — one taken in reaction to a detected infiltration. Lee argues that there’s a border distinction: Active defense happens when someone crosses into your space, be it over a political boundary or a network boundary. But Denning says that’s probably too simple, and below we’ll see a case in which the line is blurred. Lee says, “Most experts understand this, but it’s important to point out, especially for a general audience. You are prepared to actively deal with malicious actors who have crossed into your space. Sending missiles into someone else’s space is offense. Monitoring for missiles coming at you is passive defense. Shooting them down when they cross into your airspace is active defense.”

Can you give some other examples?

Denning says, “One example of active cyber defense is a system that monitors for intrusions, detects one, and responds by blocking further network connections from the source and alerting the system administrator. Another example is taking steps to identify and shut down a botnet used to conduct distributed denial-of-service (DDoS) attacks.” It’s the verbs “responds” and “shut down” that make these instances of active defense. An example of passive defense, in contrast, is an encryption system that renders communications or stored data useless to spies and thieves.

Is active defense only an information security concept?

Not at all. Some argue that it dates back to The Art of War, in which Sun Tzu wrote, “Security against defeat implies defensive tactics; ability to defeat the enemy means taking the offensive.” Centuries later Mao Zedong said, “The only real defense is active defense,” equating it to the destruction of an enemy’s ability to attack — much as aggressive tactics in active cyber defense aim to do. The term was applied in the Cold War and, as Denning and Strawser’s paper makes clear, is a core concept in air missile defense. Tactics are tactics; all that changes is where they’re employed.

That seems pretty straightforward. So why the uncertainty around the definition?

As noted earlier, hacking back — also not a new term — has confused matters. Properly used, it refers to efforts to attack your attackers on their turf. But because people often fuse it with active defense, difficult and sometimes frustrating disputes over the merits of active defense have ensued. One research paper went so far as to equate the two terms, starting its definition, “Hack back — sometimes termed ‘active defense’…”

The confusion multiplied in October 2017, when Representatives Tom Graves (R-GA) and Kyrsten Sinema (D-AZ) introduced the Active Cyber Defense Certainty (ACDC) bill, which would allow companies to gain unauthorized access to computers in some situations in order to disrupt attacks. The lawmakers called this active defense. The media called it the “hack back bill.” What it would and would not allow became the subject of hot debate. The idea that companies could go into other people’s infected computers wasn’t welcomed. Some savaged the bill. The technology blog network Engadget called it “smarmy and conceited” and observed, “When you try to make laws about hacking based on a child’s concept of ‘getting someone back,’ you’re getting very far and away from making yourself secure. It’s like trying to make gang warfare productive.” The bill went through two iterations and is currently stalled.

But is hacking back part of active defense?

Probably not. Lee says unequivocally, “Hacking back is absolutely not active defense. It’s probably illegal, and it’s probably not effective. We don’t have evidence that attacking attackers works.” Denning has a somewhat different take. “Hacking back is just one form of active defense,” she says. “It might be used to gather intelligence about the source of an intrusion to determine attribution or what data might have been stolen. If the attacker is identified, law enforcement might bring charges. If stolen data is found on the intruder’s system, it might be deleted. Hacking back might also involve neutralizing or shutting down an attacking system so that it cannot cause further damage.”

But Lee and Denning are defining the term differently. And Denning’s version refers to actions undertaken with proper authority by government entities. When it comes to hacking back on the part of businesses, the two experts are in total agreement: Don’t do it. Denning says, “Companies should not hack back. The Department of Justice has advised victims of cyberattacks to refrain from any ‘attempt to access, damage, or impair another system that may appear to be involved in the intrusion or attack.’ The advice contends that ‘doing so is likely illegal, under U.S. and some foreign laws, and could result in civil and/or criminal liability.’”

What’s an example of an aggressive form of active defense that some might consider hacking back?

Denning says, “One of my favorite examples of active defense led to the exposure of a Russian hacker who had gotten malicious code onto government computers in the country of Georgia. The malware searched for documents using keywords such as “USA” and “NATO,” which it then uploaded to a drop server used by the hacker. The Georgian government responded by planting spyware in a file named “Georgian-NATO Agreement” on one of its compromised machines. The hacker’s malware dutifully found and uploaded the file to the drop server, which the hacker then downloaded to his own machine. The spyware turned on the hacker’s webcam and sent incriminating files along with a snapshot of his face back to the Georgian government.

Is that hacking back? I don’t think so. It was really through the hacker’s own code and actions that he ended up with spyware on his computer.”

Note that the actions were taken by a government and occurred within its “borders”; Georgia put the spyware on its own computer. It did not traverse a network to hit another system. It was the hacker’s action of illegally taking the file that triggered the surveillance.

If it’s probably illegal and ineffective, why is hacking back getting so much press?

Companies are weary. “They are under constant attack and working so hard and spending so much just to keep up, and they can’t keep up,” Lee says. “This is a moment when we’re looking for new ideas. That’s why Bochman’s concept of unplugging systems and not always going right to the most efficient solution is starting to be heard. Hacking back feels like another way to turn the tide. Cybersecurity loves a silver bullet, and this feels like one. CEOs are probably thinking, ‘Nothing else has worked; let’s fight.’” Lee has heard many business leaders express these sentiments, especially if their companies have suffered damaging attacks. “This is an emotional issue,” he says. “You feel violated, and you want to do something about it.”

In a paper titled “Ethics of Hacking Back,” Cal Poly’s Patrick Lin captures the sense of utter vulnerability that could lead some to desire vigilante justice:

In cybersecurity, there’s a certain sense of helplessness — you are mostly on your own. You are often the first and last line of defense for your information and communications technologies; there is no equivalent of state-protected borders, neighborhood police patrols, and other public protections in cyberspace.

For instance, if your computer were hit by “ransomware” — malware that locks up your system until you pay a fee to extortionists — law enforcement would likely be unable to help you. The U.S. Federal Bureau of Investigation (FBI) offers this guidance: “To be honest, we often advise people to just pay the ransom,” according to Joseph Bonavolonta, the Assistant Special Agent in Charge of the FBI’s CYBER and Counterintelligence Program.

Do not expect a digital cavalry to come to your rescue in time. As online life moves at digital speeds, law enforcement and state responses are often too slow to protect, prosecute, or deter cyberattackers. To be sure, some prosecutions are happening but inconsistently and slowly. The major cases that make headlines are conspicuously unresolved, even if authorities confidently say they know who did them.

What are the ethics of hacking back?

For the most part, experts say that hacking back without legal authorization or government cooperation is unethical. And whenever activities leave your boundaries, it’s hard to condone them. The targets are too evasive, and the networks are too complex, traversing innocent systems and affecting the people working with them. In addition, Lee points out that government entities might be tracking and dealing with malicious actors, and hacking back could compromise their operations. “Leave it to the pros,” he says.

Denning stresses that unintended consequences are not just possible but likely. She says, “The biggest risks come when you start messing with someone else’s computers. Many cyberattacks are launched through intermediary machines that were previously compromised by the attacker. Those computers could be anywhere, even in a hospital or power plant. So you don’t want to shut them down or cause them to malfunction.”

What kind of work is under way with regard to ethics?

According to Denning, researchers began wrestling with these issues as early as 2006. Speaking about a workshop she participated in, she says, “I recall discussions about measures that involved tracing back through a series of compromised machines to find the origin of an attack. Such tracebacks would involve hacking into the compromised machines to get their logs if the owners were not willing or could not be trusted to help out.”

A decade later Denning collaborated with Strawser to examine the morality of active defense writ large, using the ethics of air defense and general war doctrine as a guide. They wrote that harm to “non-combatants” — especially and most obviously physical harm — disqualifies an active defense strategy. But they say that “temporary harm to the property of non-combatants” is sometimes morally permissible. (It should be noted Denning is primarily focused on the government use of active cyber defense strategies). Denning cites the takedown of Coreflood — malware that infected millions of computers and was used as a botnet. The Justice Department won approval to seize the botnet by taking over its command-and-control servers. Then, when the bots contacted the servers for instructions, the response was essentially, “Stop operating.” In the instance of Coreflood, as in some similar cases, a judge decided that the actions could proceed because they could shut down major malicious code without damaging the infected systems or accessing any information on them.

“The effect was simply to stop the bot code from running. No other functions were affected, and the infected computers continued to operate normally,” Denning says. “There was virtually no risk of causing any harm whatsoever, let alone serious harm.”

Still, the case may have set a precedent for at least the suggestion of more-aggressive measures, such as the ACDC bill. If the government can take control of command-and-control servers, it can, in theory, do more than just tell the bots to shut down. Why not grab some log files at the same time? Or turn on the webcam, as in the Georgian-NATO case? Oversight is needed in all active defense strategies.

How can I deploy an ethical and effective active defense strategy?

If you have or subscribe to services that can thwart DDoS attacks and create logs, you’ve already started. Denning says that many companies are doing more active defense than they realize. “They might not call it active defense, but what they call it matters less than what they do.”

Cooperating with law enforcement and the international network of companies and organizations combating hacking is also part of an active defense strategy. The more companies and agencies that work together, the more likely it is that active defense strategies like the one that took out Coreflood can be executed without harm. Several such operations have taken place without reports of problems.

Denning recommends A Data-Driven Computer Security Defense: THE Computer Security Defense You Should Be Using, by Roger A. Grimes. (Full disclosure: Denning wrote the foreword. “But the book really is good!” she says.)

As for more-aggressive tactics, like the ones proposed in the ACDC bill, proceed with caution. Work with law enforcement and other government agencies, and understand the risks. Denning says, “It’s all about risk. Companies need to understand the threats and vulnerabilities and how security incidents will impact their company, customers, and partners. Then they need to select cost-effective security defenses, both passive and active.” There are limits, she cautions. “Security is a bottomless pit; you can only do so much. But it’s important to do the right things — the things that will make a difference.”

About the author: Scott Berinato is a senior editor at Harvard Business Review and the author of Good Charts: The HBR Guide to Making Smarter, More Persuasive Data Visualizations.

Protect Credit Union Assets from Sophisticated Hackers

Today’s criminals have moved beyond ransomware and malware.

From left: Tim Evans, chief of strategy; Don McLamb, director of engineering; and Rob Johnston, CEO.

Financial institutions are prime targets for cybercriminals looking to gain access to volumes of consumers’ personal data and money.

In 2017, the U.S. experienced 1,579 data breaches, 8.5% of which involved financial services companies such as credit unions, banks, investment firms, and credit card companies.

Credit unions face considerable challenges protecting sensitive personal and financial data from breaches. As nonprofit entities, they tend to have lean information technology (IT) teams and reduced technology budgets.

While credit unions may have smaller IT staff and budgets than larger banks, collectively they serve more than 100 million members and have assets of roughly $1.4 trillion.

To support their business, credit unions rely upon complex IT infrastructures with hundreds of connected devices transmitting large volumes of sensitive data. In addition to defending against intruders, credit unions must implement security controls to meet security compliance requirements.

It’s no surprise hackers find credit unions attractive.

Hackers realize that legacy security tools can’t properly protect today’s dynamic infrastructures. Firewalls and penetration testing alone can no longer keep sensitive data and assets safe.

Today’s hackers have moved beyond ransomware and malware, and have identified new methods for infiltrating networks to steal employees’ identities, and then use those identities to roam the network without the network owners even knowing of their presence.

Fileless attacks are becoming their weapon of choice. They don’t require any payload, and they are harder to detect than traditional malware-based threats.

Credit unions looking to outsmart hackers and ease the burden of compliance need to reassess their security strategies and identify the right blend of people, technologies, and programs necessary to protect themselves and their members.

To outsmart the bad guys, some credit unions are looking at advanced detection technologies that leverage machine learning and artificial intelligence.

Machines capable of cognitive functions, such as anomaly detection and classification, have superior processing power and continuously scan huge volumes of data to identify risks.

Today’s cybersecurity technology

Technology is revolutionizing the way credit unions secure enterprise assets and ensure PCI DSS (Payment Card Industry Data Security Standards) compliance. Today’s solution must be a cloud-delivered SaaS [software as a service] solution that protects against internal and external malicious actors.

A perfect Security Information & Event Management (SIEM) replacement or augmentation platform uses artificial intelligence, machine learning, and pattern recognition to monitor an organization’s network 24/7 to detect changes in user behaviors. It provides real-time visibility and analysis of the activities of every identity within the enterprise.

By creating a heuristic baseline of user activity through behavior analysis, it identifies potentially malicious activity and sends a warning to the administrator, providing details about the questionable event before the threat becomes critical.
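
In spirit, the baseline-and-warn loop reduces to something like this sketch; the three-sigma threshold and the login-count feature are assumptions for illustration, not the platform's internals:

    # Illustrative baseline-and-warn loop; the 3-sigma rule and the
    # login-count feature are assumptions, not the platform's internals.
    from statistics import mean, stdev

    def check_activity(user, todays_logins, history):
        if len(history) < 7:
            return None                        # too little history to baseline
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(todays_logins - mu) > 3 * sigma:
            return (f"WARNING: {user} logged in {todays_logins} times "
                    f"(baseline ~{mu:.0f})")   # alert the administrator
        return None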

PCI compliance

Credit unions also need a platform that helps manage the security and confidentiality of member information by monitoring systems and activities to detect attempted and actual attacks on, or intrusions into, member information systems.

Appropriate technology solutions help manage the complexity of a constantly changing IT environment and provide insight into what sensitive data is being accessed by every account on the network.

Visualize privilege across your network

Managing user privilege across multiple groups is a challenge. User rights that are assigned to a group are applied to all members of the group while they remain members.

If a user is a member of multiple groups, the user’s rights are cumulative, meaning that user has more than one set of rights and privileges. Failure to routinely audit privilege and groups can result in misuse of privilege and unauthorized access to sensitive files.
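
A small sketch makes the cumulative effect concrete; the group and right names are invented for illustration:

    # Why cumulative membership matters: effective rights are the union
    # of every group's rights. Group and right names are invented.
    GROUP_RIGHTS = {
        "tellers":   {"read_member_records"},
        "loans":     {"read_member_records", "write_loan_files"},
        "it_admins": {"read_member_records", "admin_share_drives"},
    }

    def effective_rights(groups):
        rights = set()
        for g in groups:
            rights |= GROUP_RIGHTS.get(g, set())   # rights accumulate per group
        return rights

    # A user never removed from an old group quietly keeps its privileges:
    print(effective_rights(["tellers", "it_admins"]))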

SIEM-like technology automates the process for managing user privilege, ensuring account privilege status is up to date and accurate.

Cyber hunting

The Adlumin Platform is revolutionizing how credit unions secure sensitive data and intellectual property while achieving their compliance objectives. Adlumin provides a virtual machine-learning team of four to five personnel that hunts networks 24/7 for anomalous behavior.

This eliminates the need for credit unions to hire a single additional person.

TIMOTHY EVANS, J.D., LL.M., is co-founder/senior vice president and chief of strategy for Adlumin Inc.

Password Safety and Complexity to Protect Your Accounts

By James Warnken

“The most used password in the world is 123456”

A simple password like this can be cracked in a matter of seconds, whereas a password that contains a capital letter, lowercase letters, and numbers may take a few hours to crack. The most complex passwords take weeks to crack; they include everything above plus randomly placed special characters. Keep this in mind when renewing or creating passwords to ensure accounts and information are secured by complex, strong passwords.
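
The arithmetic behind those claims is simple exponential growth of the search space. The guess rate below is an assumed figure for an offline cracking rig, so treat the outputs as rough orders of magnitude:

    # Back-of-envelope crack times: search space = charset_size ** length.
    # The guess rate is an assumed figure; outputs are orders of magnitude.
    GUESSES_PER_SECOND = 1e10

    def crack_time_seconds(charset_size, length):
        return charset_size ** length / GUESSES_PER_SECOND

    print(crack_time_seconds(10, 6))    # digits only, 6 chars: well under a second
    print(crack_time_seconds(62, 8))    # upper+lower+digits, 8 chars: ~hours
    print(crack_time_seconds(94, 12))   # full keyboard, 12 chars: ~millions of years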

Before delving into how to construct a complex and secure password, we must first understand how hackers are stealing and breaking passwords in just minutes and clicks.

There are 5 ways hackers steal and break passwords to be mindful of:

  1. Mass Password Theft – This form of theft is done solely using a program that exploits files within websites containing username and password credentials. A hacker uses software that scans websites that store and create lists of user credentials, and once found, the hacker has full access to do with the information as they please. One interesting fact is that a computer does not have to be connected to Wi-Fi or even turned on for this to happen. This theft is done on a server basis, which means websites with autofill passwords enabled and weak security are a prime target for this form of password theft.

 

  2. Wi-Fi Traffic Monitoring – This form of password and credential theft often goes undetected and is rarely given a second thought. It often takes place in public places that offer free Wi-Fi requiring a sign-in with an email. A hacker sits within that network, and once an email address is entered, they can monitor and record information from any site or program visited while on the free public network. For example, say you are on a public network checking your social media accounts; if a hacker is monitoring the network, then once you enter your password to log in, the hacker has the credentials needed to access the account.

 

  3. Trial and Error Theft – Although less practical for hackers, this method is still relevant and used with today’s technology. This method is exactly as it sounds. Hackers know that most people use significant words, phrases, or dates when setting passwords, so just by guessing and performing trial and error, a password can be cracked. For example, it is common for people to use their date of birth in some form within their password; this information is easy for someone to get ahold of and use when trying to guess a password.

There are also two forms of phishing attacks:

 

  4. Fake Websites – Everyone gets obvious spam emails, but what about the ones that seem legitimate and very important? Some hackers have been known to set up websites that mimic official sites and then send spam emails that seem real. This is one effective way hackers steal credentials without much work beyond the setup phase. The email usually seems very important and provides a link that will supposedly help resolve whatever issue is claimed to be occurring. Once the username and password have been entered, the hacker has the information needed to log into the actual account and do whatever they wish. These are very hard to spot and many times are never given a second thought. If you receive such an email and suspect there may be a real problem, do not log in through the link provided in the email; go to the official website and log in there.

 

  5. Key Logging – This form of phishing is very common and usually easy to spot. Hackers send emails that attempt to catch the receiver’s attention in various ways, aiming to drive them to click on a link attached to the email. If the link is opened, it may seem that nothing bad has happened, which is true from a general view. However, on the back end, the email injects code into the device and begins tracking and recording information. Such code tracks keystrokes and information within files, which are then used to breach, crack, and steal passwords, credentials, and sensitive information. One rule of thumb: if it seems too good to be true, it more than likely is.

 

Now that we know how hackers get our passwords, what can we do to stop them?

Here are 6 tips to making your password complex and impossible to crack.

 

  1. Password Length – The longer a password is, the more complex it is and the harder it is to crack or steal. Most websites require a minimum of 6 characters, but in reality 8 should be the minimum. Never use only the minimum characters required; instead, make passwords lengthy and use variations of uppercase, lowercase, numbers, and symbols to ensure passwords are complex.

 

  2. Password Variety – This may seem very simple, but it is key to making a password complex. Instead of using the usual variation of first name, last name, and date of birth, try switching things up and using quotes and phrases. These are much harder for hackers to guess or replicate. Use a set of words or phrases that have no direct attachment to you personally. To make the password more complex still, use variations by substituting words in or out, or rearrange words so that it may not make much sense to anyone but you.

 

  3. Using the Full Keyboard – When it comes to creating a solid password, we all typically use letters and numbers, but utilizing the entire keyboard will make passwords more complex and harder for hackers to crack. Using special characters such as “!” or “#” is always a good idea, along with other special characters. It is also key not to arrange characters, numbers, and symbols in a generic pattern. Mixing things up, replacing characters with numbers, and arranging them in a unique pattern will ensure your password is complex and hard to crack.

 

  4. Variations Across Accounts – When it comes to logging in to accounts, many people fall into the thinking “I want my password to be easy to remember,” so consequently the same password is used across multiple or all accounts. This is very risky and makes all accounts vulnerable to attack. Instead of using the same exact password, create variations, such as replacing letters with numbers or making characters uppercase or lowercase. Simple variations can protect accounts and add to their complexity, making it harder for attackers to steal them.

 

  5. Avoid Common Passwords – When it comes to password complexity and making the job of a hacker harder, this tip is the easiest and can be the most impactful. Avoid using the famous “123456” or “qwerty” and any others that just seem too easy to guess. Also keep in mind that if a password sticks out on the keyboard, it is more than likely too easy and simple. A simple password would be your initials and birthday, whereas a more complex password might be the month you were born, followed by your middle name (with one random letter capitalized), followed by a special symbol, concluded with the day you were born.

 

  6. Renewing Passwords – Passwords that have remained the same for long periods of time are more vulnerable than ones changed from time to time. Best practices suggest a password should be reset and changed at least once every 3 months. Changing passwords helps both in securing against future attacks and in cutting off attacks that may have happened undetected. For example, a hacker could have login credentials and be quietly monitoring data and information, but with a regular password reset the hacker would be locked out and all access they had would no longer be available. In most cases, stale passwords that have not been reset for long periods of time are the prime target for hackers, often granting them access without anyone ever knowing they are in.

 

With all this in mind, here are some examples of PCI compliance regulations regarding passwords; a minimal sketch of how these rules might be encoded follows the list.

 

  • Passwords must be reset every 90 days
  • Require a minimum password length of 7 characters
  • Passwords must contain numerical and alphabetical characters
  • New passwords cannot be the same as the old password
  • Temporary locking of account after 6 failed attempts
  • Idle timeout after 15 minutes
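
Here is a minimal sketch of how the character and reuse rules above might be encoded; the reset interval, account lockout, and idle timeout would live in directory or application policy rather than in a check like this:

    # Minimal sketch encoding the character and reuse rules above; the
    # 90-day reset, lockout, and idle timeout belong in directory policy.
    import re

    def meets_pci_password_rules(new_pw, old_pw):
        return bool(
            len(new_pw) >= 7                    # minimum length of 7
            and re.search(r"[0-9]", new_pw)     # numerical character
            and re.search(r"[A-Za-z]", new_pw)  # alphabetical character
            and new_pw != old_pw                # cannot match the old password
        )

    print(meets_pci_password_rules("Summer19", "Winter18"))  # True
    print(meets_pci_password_rules("123456", "123456"))      # False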

Be sure to check the full list of regulations as well as others within your industry to ensure both compliance and protection.