The Intersection of AI and Cybersecurity: A Closer Look
By: Mark Sangster, VP, Chief of Strategy
After launching an unprecedented and remarkably influential technology, its visionary creator wrote a landmark letter that intensified the ethical and existential dilemmas raised by his groundbreaking innovation.
It’s reasonable to assume that I’m referring to the recent open letter published by the Center for AI Safety (CAIS) that was signed by known artificial intelligence (AI) experts, including Sam Altman, the CEO of OpenAI, and by the “Godfather of AI,” Geoffrey Hinton.
The letter’s dire warning made headlines with this:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
But you’d be wrong. I was actually referring to a letter written over 75 years earlier by J. Robert Oppenheimer to then-U.S. President Harry Truman on the existential risk of nuclear weapons.
When I prompted ChatGPT to compare the two letters, it returned the following:
By 1946, after the bombing of Hiroshima and Nagasaki in Japan, nuclear weapons had demonstrated the consequences of atomic warfare. Oppenheimer’s fears were grounded in fact.
The CAIS letter, however, is more like a predictive warning of where AI is headed, and what a dystopian future could look like.
Beyond the existential fears, there are consequences that are more likely to occur in the advent of AI and ChatGPT, such as job transformation for customer support, manufacturing, logistics, and data analysis.
Like other technologies before it, AI widens the gap between those who understand it and those who merely use it. In most cases it is a "black box," its mechanics understood by a powerful handful of people, and this lack of transparency could lead to manipulation and political abuse.
What happens in a world of generative AI where generated content becomes the data source from which AI develops new content? Is this an existential race condition leading to a runaway skewing of reality?
The Evolution of AI
These proximal fears may be more likely to manifest than the existential fears of destruction. We are much closer to the dawn of AI than to its future sunset.
Over the last quarter century, the technology has advanced from reactive AI, such as IBM's Deep Blue, which beat world chess champion Garry Kasparov in 1997, to limited-memory (deep learning) AI such as chatbots, self-driving vehicles, and generative AI like ChatGPT. In evolutionary terms, limited-memory AI is a long way from self-awareness and superintelligence, but it is good at learning and performing specific tasks.
Existential fears of AI stem more from future stages, such as theory-of-mind AI, which would learn empathy and understand the entities it interacts with. Beyond empathy, self-aware AI would possess its own emotions, needs, and desires. This kind of general AI, and eventually superintelligence, could develop self-preservation instincts and pose a threat to humanity.
When we can expect to see the emergence of self-aware superintelligence is anyone's guess. Right now, it's a bit more like watching the first hominid use a stick as a tool versus predicting the exact date of the first atomic detonation. I did ask ChatGPT, and this was the response:
For now, it’s “carpe diem” or seizing the day when it comes to AI.
AI in Cybersecurity
Recently, Deborah Snyder, a senior fellow with the Center for Digital Government, invited me to a webinar to discuss artificial intelligence trends in cybersecurity. We couldn’t ignore the parallels between the two historical warnings. But our focus was more on today than tomorrow.
In terms of cybersecurity, criminals leverage AI to sabotage defenses and to accelerate the development of their tactics and tools, such as phishing lures. An advanced persistent threat (APT) playing the long game could even plant a dormant AI mole in the halls of government or the defense industry.
But it’s not all dystopian on the cybersecurity front. AI automation solves big data problems and provides a scalable, cost-effective solution for security operations. According to my co-contributor, ChatGPT:
Given that most organizations face increasing cyber threats and compliance demands with diminishing budgets and exhausted resources, AI offers a complementary solution to human-based security operations.
Adlumin’s AI Advancements
From the start, Adlumin invested heavily in the use of artificial intelligence and machine learning (ML), as well as augmented user and entity behavior analytics (UEBA).
The precursors to business-disrupting incidents are buried in an avalanche of false-positive alerts and are camouflaged within legitimate activity logs and events. Adlumin’s machine learning algorithms streamline security operations, ingesting billions of data points to identify critical anomalous behaviors and present your security team with the timely information necessary to respond quickly. Adlumin leverages the latest graph-theory metrics and cluster analysis, including principal component analysis (PCA), K-nearest neighbors (KNN), and cluster-based local outlier factor (CBLOF).
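As a rough illustration of the intuition behind KNN-style anomaly detection (a toy sketch, not Adlumin's actual pipeline, with hypothetical data), each event can be scored by its average distance to its k nearest neighbors; events with unusually large scores stand apart from the bulk of legitimate activity:

```python
import math

def knn_outlier_scores(points, k=3):
    """Score each point by the mean distance to its k nearest neighbors.

    Higher scores indicate points that sit far from the rest of the
    data -- the core idea behind KNN-based anomaly detection.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

# Toy "login events": (hour of day, megabytes transferred)
events = [(9, 5), (10, 6), (9, 4), (11, 5), (10, 5), (3, 120)]
scores = knn_outlier_scores(events, k=3)

# The 3 a.m. bulk transfer scores far higher than routine activity.
anomaly = events[scores.index(max(scores))]
```

In production, this brute-force scoring gives way to indexed nearest-neighbor search and streaming ingestion, but the ranking principle is the same: distance from normal behavior is the signal.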
Machine learning also drives our risk management services including continuous vulnerability management (CVM), progressive penetration testing (attack simulation), a proactive security awareness program, and multi-layered total ransomware defense.
Determining How to Use AI in Your Organization
Here are the top five simple ways to incorporate AI into your security operations center (SOC) and the benefits they bring:
- AI-powered Threat Intelligence: Integrate AI-driven threat intelligence tools into your SOC to enhance threat detection and response capabilities. These tools can analyze vast amounts of data from various sources and automatically identify patterns, indicators of compromise, and emerging threats. By leveraging AI-powered threat intelligence, you can stay ahead of cybercriminals, detect advanced threats faster, and proactively protect your organization’s assets.
- Automated Log Analysis: Utilize AI-based log analysis solutions to automate the detection of security events and anomalies in your network logs. AI algorithms can sift through mountains of log data, identifying suspicious activities and potential security incidents. By automating log analysis, you free up your SOC team’s time and improve their efficiency, allowing them to focus on critical tasks and respond swiftly to genuine threats.
- Security Orchestration and Automation: Implement AI-driven security orchestration and automation platforms to streamline and optimize incident response workflows. These platforms can integrate with various security tools, allowing for automated incident triage, response, and remediation. By automating routine tasks, you reduce manual errors, accelerate incident response times, and enable your team to handle a higher volume of incidents effectively.
- Behavior-based Anomaly Detection: Deploy AI-powered behavior-based anomaly detection systems to detect unusual activities and potential insider threats. These systems can analyze user behavior, network traffic, and endpoint activities to establish baselines of normal behavior. When deviations occur, the AI algorithms can raise alerts, helping you detect suspicious behavior and mitigate the risks associated with insider threats promptly.
- Machine Learning-based User Authentication: Utilize AI and machine learning algorithms for user authentication and access control. By implementing intelligent authentication systems, you can detect and prevent unauthorized access attempts based on user behavior patterns. This approach strengthens your security posture, reduces the risk of account compromise, and enhances user experience by minimizing friction during the authentication process.
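The baselining idea behind the last two items can be sketched in a few lines of Python (a minimal illustration with hypothetical data and thresholds, not a production authentication system): summarize a user's historical behavior statistically, then flag activity that deviates too far from that baseline.

```python
import statistics

def build_baseline(login_hours):
    """Summarize a user's historical login hours as (mean, stdev)."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour, baseline, z_threshold=3.0):
    """Flag a login whose hour deviates from the baseline mean by
    more than z_threshold standard deviations."""
    mean, stdev = baseline
    return abs(hour - mean) / stdev > z_threshold

# Hypothetical history: a user who consistently logs in around 9 a.m.
history = [8, 9, 9, 10, 9, 8, 10, 9]
baseline = build_baseline(history)

ok = is_anomalous(9, baseline)    # a typical 9 a.m. login
odd = is_anomalous(3, baseline)   # a 3 a.m. login breaks the pattern
```

Real UEBA systems model many signals at once (geolocation, device, access patterns, network traffic) rather than a single feature, but the principle is identical: learn normal, then alert on statistically significant deviation.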
By including AI in your SOC through these simple methods, you can enjoy several benefits. These include improved threat detection accuracy, faster incident response, reduced manual effort, enhanced anomaly detection capabilities, and increased overall efficiency. AI empowers your SOC team with advanced tools and automation, enabling them to focus on high-value tasks and better protect your organization against ever-evolving cyber threats.
[Clears throat nervously] I couldn’t have said it better myself, ChatGPT.
What’s Next for AI?
Science fiction provides a neutral forum in which we can explore the dark potential of technology. In one such TV show, Caprica, we see the pivotal moment of discovery in this sci-fi world.
Caprica is set nearly 60 years before the AI apocalypse of the re-imagined 2004 series, Battlestar Galactica, and covers the period in which artificial intelligence becomes self-aware. It’s the ground zero breakthrough that would ultimately lead to the destruction of mankind in this science fiction world.
This kind of self-inflicted extinction is predicted by what is called the Great Filter theory. The notion is that lifeforms face moments of extinction through pandemics, natural disasters, or runaway technology. The real trick when it comes to AI’s existential threat is knowing which side of this particular filter we are on. Did we safely pass through it, or is it still looming in our future as a harbinger of doom?
We have lived with nuclear annihilation for decades and haven’t yet fulfilled that apocalyptic prediction. Perhaps we can do the same with artificial intelligence. Regardless, AI today offers promise and direct operational benefits in terms of cybersecurity operations. At Adlumin, we intend to continue our AI investments to protect our customers from ever-evolving cyber threats.