AI And Cybersecurity

Ashish Khanna, Sr. Director, Security Consulting Services, Verizon Business.


In my frequent interactions with business leaders, I have observed a consistent focus on leveraging artificial intelligence (AI) to drive business growth without compromising security. There is also a desire to strengthen security measures and speed up the response to potential threats by modernizing security operations with AI.

AI-powered security solutions (AI for cybersecurity) can help businesses identify and mitigate vulnerabilities in their systems faster. Beyond improving security, protecting the AI systems themselves (cybersecurity for AI) can also help businesses improve their operational resilience in many ways.

Of course, as with any new technology, there are risks associated with AI (data poisoning, hallucinations, data remodeling, etc.), so it is important to weigh the risks and benefits carefully before implementing it.

The use of AI-driven cyberattacks continues to pose a serious threat to digital security. Stories abound about these attacks, with cybercriminals utilizing AI-powered bots to socially engineer targets. These bots can be trained to mimic human behavior, engaging in social engineering tactics that closely resemble genuine interactions. They can also adapt to their surroundings, including different environments within the system stack (i.e., development vs. test environments), making detection more challenging.

They can also automate attacks by writing malicious code, enhancing their efficiency and effectiveness. Criminals have also exploited AI tools in fraud and extortion schemes. For instance, a Secret Service investigation resulted in the arrest of a group of individuals who utilized AI-powered translation tools, as reported in the 2024 Verizon Data Breach Investigations Report (DBIR), pg. 93.

As technology evolves at an ever-increasing pace, it is crucial to understand the differences and carefully evaluate the phases of evolution within these categories. One of the biggest concerns is that AI could be used to create autonomous weapons systems that operate without human intervention, such as drones or defense systems. If a drone is flown without sufficient virtual airframe test data, the results could be catastrophic, highlighting the importance of defensive AI and deep learning models.

AI facilitates informed decision-making, but effective defense requires frameworks and internal processes for prioritizing which AI models and countermeasures matter most. So, how are AI-driven cyberattacks characterized, and where should the focus be?

Reconnaissance -> Intelligent Reconnaissance

From automated information gathering to target profiling, learning behaviors and predicting vulnerabilities and outcomes, the shift is primarily toward accelerating targeted attacks with AI and utilizing adversarial AI—using AI techniques to create attacks against ML algorithms. Recognizing this shift may facilitate early detection.

Defense Evasion -> Model Evasion

With the aim of evading AI models, the attacker seeks to deceive the model by introducing carefully crafted training samples. This may cause the model to misclassify inputs or make incorrect predictions, which places these techniques firmly in the adversarial (offensive) category. Today, offensive techniques such as AI-driven data poisoning, data modeling (adjusting weights and biases in neural networks to recalibrate decision boundaries) and abnormal behavior generation all contribute to swift evasion, access and penetration.
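To make the data-poisoning idea concrete, here is a minimal, purely illustrative sketch (not any vendor's implementation): a toy 1-nearest-neighbour classifier for network traffic, where a single mislabelled training sample injected by an attacker flips the verdict on a malicious input. All data points and names are hypothetical.

```python
# Toy 1-nearest-neighbour classifier; labels: 0 = benign, 1 = malicious.
def dist(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training, x):
    # 1-NN: return the label of the closest training sample
    return min(training, key=lambda sample: dist(sample[0], x))[1]

clean = [
    ([0.0, 0.0], 0), ([0.2, 0.1], 0),   # benign traffic features
    ([1.0, 1.0], 1), ([0.9, 1.1], 1),   # malicious traffic features
]

probe = [0.9, 0.95]                      # a genuinely malicious input
print(predict(clean, probe))             # -> 1 (correctly flagged)

# The attacker slips one mislabelled sample into the training set,
# placing a "benign"-labelled point right next to the malicious region.
poisoned = clean + [([0.88, 0.95], 0)]
print(predict(poisoned, probe))          # -> 0 (misclassified as benign)
```

Real models and attacks are far more sophisticated, but the failure mode is the same: decision boundaries move wherever the training data tells them to.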

Discovery, Movement And Collection -> Automated Discovery, Intelligent Lateral Movement And Large AI Multilayered C2C

What does this mean in simple terms? Threat actors could use AI to automatically identify conversations related to DDoS events in social networking logs (via a log-based threat monitoring system), to predict vulnerabilities with neural and deep neural networks, and the list goes on. In some recent observations, deep neural networks (DNNs) were used to predict attacks on network intrusion detection systems (N-IDS).
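As a simplified sketch of the log-monitoring idea above, the snippet below flags log or chat lines that mention DDoS-related activity with a plain keyword pattern. A production system would use trained models rather than a regex; the keywords and function name here are assumptions for illustration only.

```python
import re

# Hypothetical indicator terms; a real monitor would learn these patterns
DDOS_PATTERN = re.compile(r"\b(ddos|booter|stresser|amplification)\b", re.I)

def flag_ddos_mentions(lines):
    # Return only the lines that contain a DDoS-related keyword
    return [line for line in lines if DDOS_PATTERN.search(line)]

logs = [
    "user42: anyone selling a stresser for tonight?",
    "sysadmin: rotated the TLS certs",
    "user99: DDoS the login portal at 9pm",
]
print(flag_ddos_mentions(logs))  # flags the two DDoS-related lines
```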

Recommended Actions

Action 1: ML Data Training In Predeployment

Train the model on your datasets during the development stage (train, assess and adjust, then train again). Garbage in will lead to garbage out. At the foundation of any ML, LLM and NLP model lies the data. Maintaining data integrity is paramount; this includes data integrity checks, encryption key validations and privacy considerations, with dynamic views based on data sensitivity. While all of the above is done, the data samples must be representative of real-time production data to generate results that add value to the business.
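One simple way to operationalize the integrity check mentioned above is to fingerprint the approved training set and refuse to retrain if the data has silently changed. This is a minimal sketch under assumed record and function names, not a prescription for any specific pipeline.

```python
import hashlib
import json

def dataset_fingerprint(records):
    # Canonical JSON serialization, then SHA-256, so the same records
    # always produce the same fingerprint regardless of key order
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def safe_to_train(records, baseline_hash):
    # Gate the training job: proceed only if the data is unchanged
    return dataset_fingerprint(records) == baseline_hash

approved = [{"src_ip": "10.0.0.5", "label": "benign"},
            {"src_ip": "10.0.0.9", "label": "malicious"}]
baseline = dataset_fingerprint(approved)   # recorded at sign-off

tampered = approved + [{"src_ip": "10.0.0.9", "label": "benign"}]  # poisoned row
print(safe_to_train(approved, baseline))   # True: safe to train
print(safe_to_train(tampered, baseline))   # False: block and investigate
```

A hash gate does not validate data quality, but it cheaply detects unauthorized modification between approval and training.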

Action 2: Continuous Risk Scoring

Second, how do you ensure that the risks you are assessing are aligned with the AI models in use? How can you transition from static to dynamic risk quantification and take a step toward a continuous cyber risk scoring system (CCRSS) combined with asset provenance tracking—tracking the origins and usage of assets and the data on them through careful integration?
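One possible shape for such a continuous score is sketched below: an exponentially weighted moving average over incoming risk signals, tagged per asset with provenance metadata. The class, field names and weighting factor are assumptions for illustration, not a defined CCRSS standard.

```python
from dataclasses import dataclass

@dataclass
class AssetRisk:
    asset_id: str
    provenance: str            # e.g. training date, dataset version (assumed fields)
    score: float = 0.0         # 0 (low risk) .. 100 (high risk)
    alpha: float = 0.3         # how strongly each new signal moves the score

    def ingest(self, signal: float) -> float:
        # EWMA update: recent signals dominate while older ones decay,
        # so the score is continuously re-quantified rather than static
        self.score = self.alpha * signal + (1 - self.alpha) * self.score
        return self.score

ids_model = AssetRisk("nids-model-01", "trained 2024-03, dataset v7")
for signal in [10, 10, 80, 90]:    # two quiet intervals, then anomalies
    ids_model.ingest(signal)
print(round(ids_model.score, 1))   # score has climbed with recent anomalies
```

The provenance field is what lets a high score be traced back to where the asset and its data came from.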

Action 3: Shifting From A Traditional SOC To A SOC Center Of Excellence

The third area for consideration revolves around visibility and analytics, primarily to proactively manage zero-day vectors with AI. The maturity moves from contextual and pattern-based log and packet detection toward the detection and analysis of patterns of behavior indicative of malicious activity.

This then replaces tier-one and tier-two analysis and provides the remaining analysts with an elevated view by building adaptive learning algorithms that incrementally update based on business context. Note that having the right data is still paramount; this constant ingestion must be fed with the right data, which takes us back to Action 1.
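The incremental-update idea can be sketched as follows: a detection baseline that recomputes its notion of "normal" with every event (here via Welford's online mean/variance algorithm), so thresholds track changing business context instead of being set once. The metric and threshold multiplier are illustrative assumptions.

```python
class AdaptiveBaseline:
    """Online baseline of a metric using Welford's algorithm."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Incrementally fold each new observation into mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, k=3.0):
        # Flag values more than k standard deviations from the learned mean
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > k * std

baseline = AdaptiveBaseline()
for logins_per_min in [4, 5, 6, 5, 4, 6, 5]:   # normal business activity
    baseline.update(logins_per_min)
print(baseline.is_anomalous(50))  # True: spike far outside the baseline
print(baseline.is_anomalous(5))   # False: within learned normal range
```

Because the baseline updates continuously, the same spike that is anomalous today could become "normal" after a legitimate business change, which is exactly why the ingested data must be trustworthy.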

Final Thoughts

For now, most products use generative AI to quickly query the tools in a simplistic form. In some cases, they lower the expertise required to operate (e.g., by allowing the use of human language versus a query language). Others improve efficiency by stating findings in human language, which may help reduce the fatigue of SOC analysts. AI’s ability to analyze vast datasets effectively has led to the development of tactics, techniques and procedures for threat intelligence, such as automated attack pattern recognition and threat hunting.

By leveraging AI’s prowess, analysts can focus on more strategic aspects of cybersecurity, optimizing their efforts and strengthening the overall security posture of an organization. However, we need to steer clear of the buzz and consider the role of humans in making ethical decisions, which will be important to reap the benefits as AI evolves.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
