The dual role of AI in cybersecurity – part 2: AI for adversaries

AI, 2025-04-24

By Lili Marleen Moser | 740 words | Read time: 4 minutes

Last week, we explored how AI can be used to strengthen cyber defenses, and how organizations can leverage AI-powered security solutions. Now we will turn our attention to the ways in which malicious actors are using AI to orchestrate more sophisticated attacks.

Turning the tables: AI as an offensive tool

While AI-driven defenses can strengthen our ability to counter cybercrime, the same technologies are also being explored and deployed by malicious actors to enhance their offensive capabilities. This ‘dark side’ of AI in cybersecurity presents a significant and evolving threat, potentially making attacks more sophisticated, more targeted, and harder to detect.

Supercharged phishing and social engineering

One of the most concerning applications of AI for attackers is in enhanced social engineering and phishing attacks. Traditional phishing attacks rely on generic, mass-mailed emails with obvious red flags such as grammatical errors.

Attackers can now use AI to analyze vast amounts of publicly available data – for instance from social media profiles – and use it to craft personalized, convincing messages. If they can gather audio of their targets, they can also create realistic audio clips using voice synthesis tools: just 20 seconds of audio is often enough to create a good clone of your voice.

These clips are then used for voice phishing (vishing): impersonating trusted individuals, spreading disinformation, and facilitating social engineering attacks[1] – a subject we have covered before.

Adaptive malware and zero-day exploits

AI is also enabling the development of advanced malware and evasion techniques. Traditional antivirus software often relies on signature-based detection, identifying known malicious code patterns.

AI-powered malware, however, can learn and adapt, altering its code or behavior in real time to evade these static defenses. This type of AI-crafted malware – characterized by its ability to modify its code, behavior, or communication methods based on its environment – is referred to as adaptive or polymorphic malware[2].
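To make the contrast concrete, here is a minimal Python sketch of signature-based detection under the simplest possible assumption: a "signature" is just the SHA-256 hash of a known-bad file. The hash below really is the SHA-256 of the placeholder payload, but the database and function names are invented for illustration. A polymorphic sample that changes even a single byte produces a new hash and sails past the check.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known malicious files.
KNOWN_BAD_HASHES = {
    # SHA-256 of the placeholder payload b"foo" used below
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_known_malware(payload: bytes) -> bool:
    """Classic static check: flag a file only if its exact hash is already known."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

original = b"foo"   # stand-in for a known malicious binary -> detected
mutated  = b"foo "  # one appended byte, same behavior -> new hash, undetected

print(is_known_malware(original))  # True
print(is_known_malware(mutated))   # False: the signature no longer matches
```

Real engines use far richer signatures (byte patterns, heuristics, emulation), but the weakness is the same in kind: a static fingerprint can only match what it has already seen.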

AI can also accelerate the discovery and exploitation of previously unknown (zero-day) vulnerabilities, allowing attackers to launch attacks before vendors can issue patches[3].
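Much of that acceleration builds on automated fuzzing: generating huge numbers of mutated inputs and watching for crashes, with AI increasingly used to steer the mutations. The sketch below is a deliberately tiny, non-AI mutation fuzzer against a made-up parser with a planted bug – every name in it is hypothetical – but it shows the core loop that such tooling automates at scale.

```python
import random

def parse_record(data: bytes) -> None:
    """Hypothetical parser with a planted flaw: it blindly trusts the length byte."""
    if len(data) < 2:
        return
    declared_len = data[0]
    # Bug: no bounds check - reads past the buffer when declared_len is too large.
    terminator = data[1 + declared_len]
    if terminator != 0:
        raise ValueError("missing terminator")  # graceful, expected rejection

def fuzz(seed: bytes, rounds: int = 10_000) -> None:
    for i in range(rounds):
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):       # flip a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse_record(bytes(data))
        except ValueError:
            pass                                    # handled error, not a finding
        except Exception as exc:                    # unhandled crash: candidate bug
            print(f"round {i}: {type(exc).__name__} on input {bytes(data)!r}")
            return

fuzz(b"\x03abc\x00")
```

Within a few rounds a mutated length byte triggers an out-of-bounds read, exactly the kind of crash an attacker would then investigate for exploitability.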

More precision, less effort

AI is paving the way for more sophisticated and targeted attacks. By analyzing large datasets related to network vulnerabilities, user behavior within organizations, and publicly available exploit information, AI can help attackers identify high-value targets and pinpoint the most effective attack vectors. 

Hacking for the masses

The rise of AI-powered hacking tools has also made it easier for less-skilled individuals to engage in cybercrime.

Cybercrime-as-a-Service (CaaS) is on the rise – a model in which criminal groups sell ready-to-use system access, tools or services, often supplied by professional hackers who profit without taking part in the attacks themselves.

Adversarial machine learning

As we discussed in part 3 of our blog series on malware reverse engineering, adversaries are now also exploiting AI systems directly through adversarial machine learning. This involves feeding corrupted or misleading data to AI models used in defense – manipulating their outputs and causing them to misclassify malicious activity as harmless.

This tactic further expands the attack surface and adds complexity to the threat landscape.
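One widely studied evasion technique of this kind is the fast gradient sign method (FGSM): nudge every input feature a small step in the direction that most lowers the model's score. The toy detector below is linear, so the gradient of its score with respect to the input is simply its weight vector; the weights and the "malicious" sample are invented for illustration, and real detectors are of course harder to fool this cleanly.

```python
import numpy as np

# Toy linear "malware detector": score > 0 means the sample is flagged.
rng = np.random.default_rng(0)
w = rng.normal(size=20)          # hypothetical learned feature weights

def score(x: np.ndarray) -> float:
    return float(w @ x)

# A feature vector the detector confidently flags as malicious.
x = 0.5 * np.sign(w)
print(f"original score:  {score(x):+.2f} (flagged)")

# FGSM-style evasion: step each feature slightly against the gradient.
# For a linear model, the gradient of the score with respect to x is just w.
eps = 0.6
x_adv = x - eps * np.sign(w)
print(f"perturbed score: {score(x_adv):+.2f} (evades if negative)")
```

The perturbation is small per feature, yet it flips the verdict – which is precisely what makes this class of attack so hard to spot.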

Meeting intelligence with intelligence – why human oversight still matters

As the offensive use of AI becomes more advanced, defenders must respond with intelligence, speed, and precision. But while AI tools can drastically improve detection and response times, they are not a silver bullet. The complexity of modern threats still requires human oversight – particularly when it comes to making sense of ambiguous alerts, recognizing attacker intent, or identifying subtle shifts in behavior.

Moreover, the lack of transparency in many AI models means that decisions are not always easy to explain. That is why explainability and human judgment remain critical components of any AI-enabled security posture.
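To see why transparency matters, consider the simplest possible case. For a linear alert scorer, "why was this flagged?" has an exact answer – each feature contributes weight times value – as in the toy example below (feature names and weights are invented). Deep models admit no such direct decomposition, which is precisely the gap that explainability tooling and human judgment have to fill.

```python
import numpy as np

# Minimal illustration: for a linear scorer, each feature's contribution
# to the final alert score is simply weight * value.
feature_names = ["failed_logins", "bytes_out", "new_process", "odd_hour"]
w = np.array([0.8, 0.3, 1.2, 0.5])   # hypothetical learned weights
x = np.array([6.0, 0.4, 1.0, 1.0])   # one alert's feature values

contributions = w * x
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -t[1]):
    print(f"{name:>13}: {c:+.2f}")
print(f"{'total score':>13}: {contributions.sum():+.2f}")
```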

The Trifork Security perspective

At Trifork Security, we recognize that effective cybersecurity requires a harmonious blend of advanced technology and skilled professionals. Our approach integrates AI-driven platforms like Splunk and CrowdStrike with the insights and experience of our security analysts. This synergy ensures that while AI handles vast data analysis and pattern recognition, our experts provide the necessary context, oversight, and decision-making capabilities.

By combining the speed and scalability of AI with human judgment and expertise, we offer a comprehensive defense strategy that adapts to the ever-evolving threat landscape. Our commitment is to provide intelligent, explainable, and effective security solutions that empower organizations to stay ahead of adversaries.

Because when the attackers get smarter, so must the defenses.

[1] https://abnormalsecurity.com/resources/malicious-al-hackers-insight
[2] https://global.fortinet.com/emea-lp-en-secops-cyberthreat-predictions
[3] https://perception-point.io/guides/ai-security/ai-malware-types-real-life-examples-defensive-measures/