These blog posts and articles discuss the latest artificial intelligence trends and platform enhancements.

Unraveling Cyber Defense Model Secrets: The Future of AI in Cybersecurity

By: Arijit Dutta, Director of Data Science 

Welcome to the Unraveling Cyber Defense Model Secrets series, where we shine a light on Adlumin’s Data Science team, explore the team’s latest detections, and learn how to navigate the cyberattack landscape. 

The expanding threat landscape has forced cybersecurity teams to embrace digital transformation. The COVID-19 pandemic further complicated matters by accelerating the adoption of cloud services, leading to a proliferation of cloud providers and a surge in the number of IoT devices transmitting data to the cloud.

This complex web of interconnections has brought about greater scale, connectivity, and speed in our digital lives but has also created a larger attack surface for cybercriminals. Responding to these challenges, cybersecurity teams are turning to AI-powered automation, especially machine learning, to uncover, evaluate, and effectively counter system, network, and data threats. Understanding the role of AI in cybersecurity is critical for organizations to protect themselves against malicious cyber activities effectively. 

In this blog, we explore the current technologies available, the exciting developments on the horizon, and the transformative impact of AI. 

Current, Upcoming, and Future AI Technology  

As in most industries, AI technology is indispensable in organizations today for distilling actionable intelligence from the massive amounts of data being ingested from customers and generated by employees. Organizations can choose from various available data mining and AI methods depending on desired outcomes and data availability. For example, if the goal is to evaluate each customer for digital marketing suitability for a new product, “supervised” methods such as logistic regression or decision-tree classifier could be trained on customer data.  
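
To make the supervised case concrete, here is a minimal sketch using scikit-learn's logistic regression; the customer features, the response label, and the data itself are hypothetical stand-ins, not drawn from any real dataset.

```python
# Minimal sketch: scoring customers for marketing suitability with a
# supervised classifier. All column names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical data: features plus a label recording whether
# each customer responded to a past marketing email (the "prior action").
customers = pd.DataFrame({
    "emails_opened": [12, 0, 5, 30, 2, 18, 7, 1],
    "purchases_90d": [3, 0, 1, 6, 0, 4, 2, 0],
    "tenure_months": [24, 2, 12, 48, 6, 36, 18, 3],
    "responded":     [1, 0, 0, 1, 0, 1, 1, 0],  # label to learn
})

X = customers.drop(columns="responded")
y = customers["responded"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
# Probability that each held-out customer responds to the new campaign.
print(model.predict_proba(X_test)[:, 1])
```

A decision-tree classifier could be swapped in by replacing LogisticRegression with sklearn.tree.DecisionTreeClassifier; the surrounding workflow stays the same.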

These use cases require customer data on prior actions, such as historical responses to marketing emails. For a customer segmentation problem, "unsupervised" methods such as density-based spatial clustering of applications with noise (DBSCAN) or principal component analysis (PCA) dimensionality reduction are called for: rather than training on prior observations of specific customer actions, we group customers according to machine-learned similarity measurements. More advanced methods, such as artificial neural networks (ANNs), are deployed when the use case depends on learning complex interactions among numerous factors, such as customer service call volume and outcome evaluation, or even the customer classification and clustering problems mentioned earlier. The data volume, frequency, and compute capacity requirements are typically heavier for ANNs than for other machine learning techniques.
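
For the unsupervised case, here is a minimal sketch of the PCA-plus-DBSCAN pipeline described above, again on synthetic stand-in data; the feature count and the eps and min_samples parameters are illustrative assumptions that would need tuning on real data.

```python
# Minimal sketch: unsupervised customer segmentation with PCA + DBSCAN.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical per-customer features: spend, visits, support calls, etc.
features = rng.normal(size=(200, 6))

# Standardize, reduce dimensionality, then group by learned density.
scaled = StandardScaler().fit_transform(features)
reduced = PCA(n_components=2).fit_transform(scaled)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(reduced)

# Label -1 marks customers DBSCAN could not assign to any dense group.
print(np.unique(labels, return_counts=True))
```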

The most visible near-term evolution in the field is the spread of large language models (LLMs), or generative AI, such as ChatGPT. The underlying methods behind these emergent AI technologies are also based on the ANNs mentioned above, only with far larger neural network architectures and computationally expensive training algorithms. Adaptation and adoption of these methods for customer classification, segmentation, and interaction-facilitation problems will be a trend to follow in the years ahead.

Cybersecurity Solutions That Use AI 

At Adlumin, we develop AI applications for cyber defense, bringing all the techniques above to bear. The central challenge for AI in cyber applications is finding "needle in a haystack" anomalies among billions of data points that mostly appear indistinguishable. The applications in this domain are usefully grouped under the term User and Entity Behavior Analytics (UEBA): mathematical baselining of users and devices on a computer network, followed by machine identification of suspicious deviations from that baseline.

To skim the surface, here are two solutions cybersecurity teams use that incorporate AI: 

Two Automated Cybersecurity Solutions for Organizations

User and Entity Behavior Analytics (UEBA)

UEBA is a machine learning cybersecurity process and analytical tool usually included with security operations platforms. It gathers insight into users' daily activities and flags any behavior that deviates from an employee's normal activity patterns. For example, if a user usually downloads four megabytes of assets weekly and then suddenly downloads 15 gigabytes of data in one day, your team would be alerted immediately because this is abnormal behavior.
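
As a toy illustration of that example, here is a minimal sketch of per-user baselining; the volumes, the z-score measure, and the alert threshold are simplifying assumptions for illustration, not Adlumin's actual detection logic.

```python
# Minimal sketch of the baselining idea behind UEBA: flag a day whose
# download volume deviates far from a user's own historical baseline.
import statistics

# Hypothetical daily download volumes for one user, in megabytes.
history = [4.1, 3.8, 4.5, 0.0, 4.0, 3.9, 4.2, 0.0, 4.3, 4.1]
today_mb = 15_000.0  # roughly 15 GB downloaded in a single day

baseline = statistics.mean(history)
spread = statistics.stdev(history) or 1.0  # guard against zero spread

z_score = (today_mb - baseline) / spread
if z_score > 6:  # far outside this user's normal pattern
    print(f"ALERT: z-score {z_score:.1f} vs baseline {baseline:.1f} MB/day")
```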

The foundation of UEBA can be pretty straightforward. A cybercriminal could easily steal the credentials of one of your employees and gain access, but it is much more difficult for them to mimic that employee's daily behavior and go unseen. Without UEBA, an organization may never know an attack occurred, because the cybercriminals are operating with legitimate credentials. Having a dedicated Managed Detection and Response team to alert you can give an organization visibility beyond its boundaries.

Threat Intelligence

Threat intelligence gathers raw and curated data from multiple sources about existing threat actors and their tactics, techniques, and procedures (TTPs). This helps cybersecurity analysts understand how cybercriminals penetrate networks so they can identify warning signs early in the attack process. For example, a campaign that uses stolen lawsuit information to target law firms could later be modified to target other organizations using stolen litigation documents, and analysts familiar with the original campaign can recognize the retooled version.

Threat intelligence professionals proactively hunt for suspicious activity that indicates network compromise or malicious behavior. This is often a manual process backed by automated searches and correlation of previously collected network data, whereas other detection methods can only flag known, categorized threats.
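
To illustrate the automated-search side of that process, here is a minimal sketch of correlating collected network data against a curated indicator feed; the indicator values, log format, and field names are all hypothetical.

```python
# Minimal sketch: matching connection logs against known indicators of
# compromise (IOCs) from a curated threat intelligence feed.
known_bad_ips = {"203.0.113.50", "198.51.100.23"}  # hypothetical IOC feed

connection_logs = [
    {"src": "10.0.0.4", "dst": "203.0.113.50", "bytes": 9120},
    {"src": "10.0.0.7", "dst": "93.184.216.34", "bytes": 512},
]

# Automated search: surface any connection touching a known indicator
# so analysts can spot signs early in the attack process.
hits = [log for log in connection_logs if log["dst"] in known_bad_ips]
for hit in hits:
    print(f"IOC match: {hit['src']} -> {hit['dst']} ({hit['bytes']} bytes)")
```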

AI Risks and Pitfalls to Be Aware of 

When building viable and valuable AI applications, data quality and availability are top of mind. Models must be trained on reliable data for their output to be actionable. Great attention is therefore required in building a robust infrastructure for sourcing, processing, storing, and querying the data. Without a secure chain of custody for input data, AI applications risk generating misleading output.

Awareness of any machine-learned prediction’s limitations and “biases” is also critical. Organizational leadership needs to maintain visibility into AI model characteristics like “prediction accuracy tends to falter beyond a certain range of input values” or “some customer groups were underrepresented in the training data.”

Operationally, an excellent way to proceed is to build and deploy a series of increasingly complex AI applications rather than being wedded to an ambitious design from the get-go. Iteratively adding functionality and gradually incorporating more data fields makes performance easier to measure and helps avoid costly mistakes.

Organizations Embracing AI 

Organizations need to build a cybersecurity infrastructure that embraces the power of AI, deep learning, and machine learning to handle the scale of analysis and data. AI has emerged as a required technology for cybersecurity teams, on top of being one of the most used buzzwords of recent years. Human analysts alone can no longer scale to protect organizations' complex attack surfaces. So, when evaluating security operations platforms, organizations need to know how AI can help identify and prioritize risk and spot intrusions the moment they begin.

Stay Informed

Subscribe to Adlumin’s blog series and gain access to actionable advice and step-by-step guides from cybersecurity experts. Join our community and be part of the frontlines against cyber threats.


The Intersection of AI and Cybersecurity: A Closer Look

By: Mark Sangster, VP, Chief of Strategy

Having launched an unprecedented and remarkably influential technology, its visionary creator wrote a letter that underscored the ethical and existential dilemmas of his groundbreaking innovation.

It’s reasonable to assume that I’m referring to the recent open letter published by the Center for AI Safety (CAIS), signed by well-known artificial intelligence (AI) experts, including Sam Altman, the CEO of OpenAI, and the “Godfather of AI,” Geoffrey Hinton.

The letter’s dire warning made headlines with this:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

But you’d be wrong. I was actually referring to a letter written over 75 years earlier by J. Robert Oppenheimer to then-U.S. President Harry Truman on the existential risk of nuclear weapons.

When I prompted ChatGPT to compare the two letters, it returned the following:  

By 1946, after the bombing of Hiroshima and Nagasaki in Japan, nuclear weapons had demonstrated the consequences of atomic warfare. Oppenheimer’s fears were grounded in fact.

The CAIS letter, however, is more like a predictive warning of where AI is headed, and what a dystopian future could look like.

Beyond the existential fears, there are consequences far more likely to occur with the advent of AI and ChatGPT, such as job transformation in customer support, manufacturing, logistics, and data analysis.

Like other forms of technology before it, AI widens the gap between the knowers and the users. It is a “black box” in most cases, with a powerful handful of people who understand its mechanics, and this lack of transparency could lead to manipulation and political abuse.

What happens in a world of generative AI where generated content becomes the data source from which AI develops new content? Is this an existential race condition leading to a runaway skewing of reality?

The Evolution of AI

These proximal fears may be more likely to manifest than the existential fears of destruction. We are much closer to the dawn of AI than to its future sunset.

Over the last quarter century, the technology has advanced from reactive AI, such as IBM’s Deep Blue beating world chess champion Garry Kasparov in 1997, to limited-memory or deep learning AI, such as chatbots, self-driving vehicles, and generative AI like ChatGPT. In evolutionary terms, limited-memory AI is a long way from self-awareness and superintelligence, but it is good at learning and performing specific tasks.

Existential fears of AI stem more from future stages, like theory-of-mind AI, which learns empathy and understands the entities it interacts with. Beyond empathy, self-aware AI possesses its own emotions, needs, and desires. This kind of general AI, and eventually superintelligence, could develop self-preservation instincts and pose a threat to humanity.

When we can expect to see the emergence of self-aware superintelligence is anyone’s guess. Right now, it’s a bit more like watching the first hominid use a stick as a tool than predicting the exact date of the first atomic detonation. I did ask ChatGPT, and this was the response:

For now, it’s “carpe diem” or seizing the day when it comes to AI.

AI in Cybersecurity

Recently, Deborah Snyder, a senior fellow with the Center for Digital Government, invited me to a webinar to discuss artificial intelligence trends in cybersecurity. We couldn’t ignore the parallels between the two historical warnings, but our focus was more on today than tomorrow.

In terms of cybersecurity, criminals leverage AI to sabotage defenses, to accelerate the development of tactics and tools like phishing lures, and even to lie dormant in the hands of an advanced persistent threat (APT) playing a long game, deploying an AI mole in the halls of government or the defense industry.

But it’s not all dystopian on the cybersecurity front. AI automation solves big data problems and provides a scalable, cost-effective solution for security operations. According to my co-contributor, ChatGPT:

Given that most organizations face increasing cyber threats and compliance demands with diminishing budgets and exhausted resources, AI offers a complementary solution to human-based security operations.

Adlumin’s AI Advancements

From the start, Adlumin invested heavily in the use of artificial intelligence and machine learning (ML), as well as augmented user and entity behavior analytics (UEBA).

The precursors to business-disrupting incidents are buried in an avalanche of false-positive alerts and camouflaged within legitimate activity logs and events. Adlumin’s machine learning algorithms streamline security operations, ingesting billions of data points to identify critical anomalous behaviors and present your security team with the timely information necessary to respond quickly. Adlumin leverages the latest graph-theory metrics and cluster analysis, including principal component analysis (PCA), K-Nearest-Neighbors (KNN), and the cluster-based local outlier factor (CBLOF).
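
As a rough illustration of the KNN and CBLOF detectors named above, here is a minimal sketch using the open-source PyOD library; the synthetic data, the parameter choices, and the use of PyOD itself are assumptions made for illustration, since Adlumin's production pipeline is not public.

```python
# Minimal sketch: cluster- and neighbor-based outlier scoring on
# synthetic data (pip install pyod numpy).
import numpy as np
from pyod.models.knn import KNN      # K-Nearest-Neighbors detector
from pyod.models.cblof import CBLOF  # cluster-based local outlier factor

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 4))    # baseline behavior
anomalies = rng.normal(6, 1, size=(5, 4))   # rare deviations from baseline
X = np.vstack([normal, anomalies])

for detector in (KNN(n_neighbors=10), CBLOF(n_clusters=8)):
    detector.fit(X)
    flagged = np.where(detector.labels_ == 1)[0]  # 1 = predicted outlier
    print(type(detector).__name__, "flagged indices:", flagged)
```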

Machine learning also drives our risk management services, including continuous vulnerability management (CVM), progressive penetration testing (attack simulation), a proactive security awareness program, and multi-layered total ransomware defense.

Determining How to Use AI in Your Organization

Here are five simple ways to incorporate AI into your security operations center (SOC), along with the benefits they bring:

  1. AI-powered Threat Intelligence: Integrate AI-driven threat intelligence tools into your SOC to enhance threat detection and response capabilities. These tools can analyze vast amounts of data from various sources and automatically identify patterns, indicators of compromise, and emerging threats. By leveraging AI-powered threat intelligence, you can stay ahead of cybercriminals, detect advanced threats faster, and proactively protect your organization’s assets.
  2. Automated Log Analysis: Utilize AI-based log analysis solutions to automate the detection of security events and anomalies in your network logs. AI algorithms can sift through mountains of log data, identifying suspicious activities and potential security incidents. By automating log analysis, you free up your SOC team’s time and improve their efficiency, allowing them to focus on critical tasks and respond swiftly to genuine threats (a minimal sketch of this idea follows the list).
  3. Security Orchestration and Automation: Implement AI-driven security orchestration and automation platforms to streamline and optimize incident response workflows. These platforms can integrate with various security tools, allowing for automated incident triage, response, and remediation. By automating routine tasks, you reduce manual errors, accelerate incident response times, and enable your team to handle a higher volume of incidents effectively.
  4. Behavior-based Anomaly Detection: Deploy AI-powered behavior-based anomaly detection systems to detect unusual activities and potential insider threats. These systems can analyze user behavior, network traffic, and endpoint activities to establish baselines of normal behavior. When deviations occur, the AI algorithms can raise alerts, helping you detect suspicious behavior and mitigate the risks associated with insider threats promptly.
  5. Machine Learning-based User Authentication: Utilize AI and machine learning algorithms for user authentication and access control. By implementing intelligent authentication systems, you can detect and prevent unauthorized access attempts based on user behavior patterns. This approach strengthens your security posture, reduces the risk of account compromise, and enhances user experience by minimizing friction during the authentication process.
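
As promised in item 2, here is a minimal sketch of automated log analysis; the log format, event names, and alert threshold are hypothetical, and a production system would operate on structured log streams rather than raw strings.

```python
# Minimal sketch: automatically surfacing a suspicious pattern from
# raw authentication logs (hypothetical format: "<host> <event>").
from collections import Counter

logs = [
    "10.0.0.4 LOGIN_FAILURE", "10.0.0.4 LOGIN_FAILURE",
    "10.0.0.4 LOGIN_FAILURE", "10.0.0.4 LOGIN_FAILURE",
    "10.0.0.4 LOGIN_FAILURE", "10.0.0.4 LOGIN_SUCCESS",
    "10.0.0.9 LOGIN_SUCCESS",
]

# Count login failures per host.
failures = Counter(line.split()[0] for line in logs
                   if "LOGIN_FAILURE" in line)

# Flag hosts that exceed a tunable failure threshold and then succeed,
# a crude brute-force indicator an analyst would otherwise dig for.
for host, count in failures.items():
    if count >= 5 and f"{host} LOGIN_SUCCESS" in logs:
        print(f"Possible brute force from {host}: "
              f"{count} failures followed by a success")
```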

By including AI in your SOC through these simple methods, you can enjoy several benefits. These include improved threat detection accuracy, faster incident response, reduced manual effort, enhanced anomaly detection capabilities, and increased overall efficiency. AI empowers your SOC team with advanced tools and automation, enabling them to focus on high-value tasks and better protect your organization against ever-evolving cyber threats.

[Clears throat nervously] I couldn’t have said it better myself, ChatGPT.

What’s Next for AI?

Science fiction provides a neutral forum in which we can explore the dark potential of technology. One such TV show, Caprica, depicts the pivotal moment of discovery in its sci-fi world.

Caprica is set nearly 60 years before the AI apocalypse of the re-imagined 2004 series, Battlestar Galactica, and covers the period in which artificial intelligence becomes self-aware. It’s the ground zero breakthrough that would ultimately lead to the destruction of mankind in this science fiction world.

This kind of self-inflicted extinction is predicted by what is called the Great Filter theory: the notion that lifeforms face moments of potential extinction through pandemics, natural disasters, or runaway technology. The real trick when it comes to AI’s existential threat is knowing which side of this particular filter we are on. Did we safely pass through it, or is it still looming in our future as a harbinger of doom?

We have lived with nuclear annihilation for decades and haven’t yet fulfilled that apocalyptic prediction. Perhaps we can do the same with artificial intelligence. Regardless, AI today offers promise and direct operational benefits in terms of cybersecurity operations. At Adlumin, we intend to continue our AI investments to protect our customers from ever-evolving cyber threats.