AI is now on both sides of every attack. It powers the tools defenders use to detect threats—and the tools attackers use to build them. Darktrace’s State of AI Cybersecurity 2026, which surveyed over 1,500 security leaders, found that 73% report AI-powered threats are significantly impacting their organizations—and that number keeps rising. AI-enabled attacks grew roughly 50% in 2025. This article covers how AI is being weaponized against organizations, how AI systems themselves become targets, and why most enterprises are still drastically underfunded for this threat category.
- 82.6% of phishing emails now contain AI-generated content, making grammar checks useless as a detection method.
- 41% of zero-day vulnerabilities in 2025 were discovered by attackers using AI-assisted tools before defenders found them.
- 68% of organizations have experienced data leaks linked to AI tool usage, yet only 23% have formal security policies in place.
- AI-specific CVEs increased 2,000% since 2022, according to NIST data.
- The EU AI Act becomes fully enforceable in August 2026, with fines up to 35 million euros or 7% of global annual turnover.
AI as an Offensive Weapon: Phishing, Deepfakes, and Automated Exploitation

Attackers didn’t wait for defenders to figure out AI. The offensive use of AI in cyberattacks predates most enterprise AI security programs by years. The result: attackers use AI to move faster, personalize attacks more precisely, and automate tasks that used to require skilled operators—while most security teams are still trying to inventory the AI tools their own employees are using.
AI-Generated Phishing and Deepfake Voice Fraud
Phishing was already the leading initial attack vector before AI entered the picture. Now it’s harder to detect at the content level: 82.6% of phishing emails contain AI-generated content—perfectly written, contextually relevant, and immune to grammar-based filters. The same Darktrace survey found that hyper-personalized phishing is the top concern for 50% of security leaders.
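If content quality no longer separates legitimate mail from phishing, detection has to lean on signals a language model cannot generate, such as sender authentication results. Here is a minimal sketch of that shift, assuming the mail gateway stamps a standard Authentication-Results header (RFC 8601); the quarantine logic is illustrative, not a production filter:

```python
# Minimal sketch: triage inbound mail on sender authentication rather
# than content quality, since AI-generated text passes grammar checks.
# Assumes the gateway adds an RFC 8601 Authentication-Results header;
# the quarantine decision below is an illustrative placeholder.
from email import message_from_string

def auth_failures(raw_msg: str) -> list[str]:
    """Return which of SPF/DKIM/DMARC did not pass for this message."""
    msg = message_from_string(raw_msg)
    results = " ".join(msg.get_all("Authentication-Results") or [])
    return [m for m in ("spf", "dkim", "dmarc")
            if f"{m}=pass" not in results.lower()]

raw = """From: ceo@example.com
Authentication-Results: mx.example.net; spf=pass; dkim=fail; dmarc=fail
Subject: Urgent wire transfer

Please process the attached invoice today."""

failures = auth_failures(raw)
if failures:
    print("quarantine for review:", failures)  # ['dkim', 'dmarc']
```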
Deepfake fraud adds a voice and video dimension. In mid-2025, the FBI warned that attackers were sending AI-generated voice and text messages impersonating senior US officials to government personnel in an attempt to extract account credentials. Not a niche attack. Executive impersonation via AI-cloned voice has been used to authorize wire transfers and override access controls in financial organizations. The verification instinct—“that sounds like my CFO”—is broken, and the technology that broke it is barely two years old.
Automated Vulnerability Scanning and Zero-Day Discovery
Speed of exploitation is now a structural advantage for attackers. AI-assisted tools let attackers scan networks at 36,000 probes per second, compress the time between vulnerability discovery and exploit deployment, and outpace defenders who are still doing manual triage. The data shows this gap is already real: 41% of zero-day vulnerabilities in 2025 were found by attackers using AI-assisted reverse engineering before defenders had identified them. That’s not a detection failure—it’s an asymmetric capability problem.
IBM’s 2026 X-Force Threat Index reported a 44% increase in attacks exploiting public-facing applications, with AI tools helping attackers identify missing authentication controls and misconfigured systems faster than patch cycles allow. Vulnerability exploitation accounted for 40% of all observed incidents—the leading cause, ahead of phishing and credential theft.
AI-Powered Ransomware and Credential Theft at Scale
Ransomware-as-a-service predates AI, but AI is reshaping who can run it and how fast it operates. AI automates exploitation, data analysis, and negotiation sequences that previously required human operators, enabling smaller groups to launch attacks that used to require organized criminal infrastructure. IBM X-Force found active ransomware and extortion groups surged 49% year over year, with publicly disclosed victim counts up 12%.
Credential theft has seen the sharpest growth: AI-driven theft jumped 160% in 2025, and three in four breaches now use compromised legitimate credentials—attackers log in, they don’t break in. The FBI’s Internet Crime Complaint Center reported cybercrime losses exceeding $16.6 billion in 2025, a 33% increase from 2023. Once attackers have valid credentials, AI helps them move laterally at machine speed without triggering behavioral detection that was tuned for slower human activity.
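One concrete response is retuning that behavioral detection for machine-speed actors. A minimal sketch, assuming a simple (session, timestamp) event stream; the two-second floor and minimum event count are illustrative assumptions, not vendor defaults:

```python
# Minimal sketch: flag authenticated sessions whose action cadence is
# faster than human-plausible. The event format and both thresholds
# are illustrative assumptions.
from collections import defaultdict

HUMAN_MIN_GAP_SECONDS = 2.0  # assumed floor for human-driven actions
MIN_EVENTS = 10              # ignore short bursts

def machine_speed_sessions(events):
    """events: iterable of (session_id, unix_timestamp) pairs."""
    by_session = defaultdict(list)
    for session_id, ts in events:
        by_session[session_id].append(ts)
    flagged = []
    for session_id, stamps in by_session.items():
        if len(stamps) < MIN_EVENTS:
            continue
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        median_gap = sorted(gaps)[len(gaps) // 2]
        if median_gap < HUMAN_MIN_GAP_SECONDS:
            flagged.append(session_id)
    return flagged

# Twelve resource accesses a tenth of a second apart reads as scripted.
events = [("sess-42", 1000.0 + i * 0.1) for i in range(12)]
print(machine_speed_sessions(events))  # ['sess-42']
```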
AI Systems as Attack Targets: Data Leaks, Agentic Compromise, and Training Attacks

The AI tools organizations deploy don’t just introduce risk by what they do—they introduce risk by what they hold and what they can be made to do. AI systems ingest sensitive data, hold context across conversations, and in agentic configurations can execute privileged actions. Each of those properties is an attack surface.
Data Leakage Through AI Tool Usage: 68% of Organizations Affected
The fastest-moving data exfiltration channel in enterprise security isn’t network traffic—it’s employees pasting proprietary information into AI interfaces. Metomic’s State of Data Security Report found that 68% of organizations have experienced data leaks linked to AI tool usage, yet only 23% have formal security policies to prevent it. Cyberhaven research found that 11% of the data employees paste into ChatGPT is sensitive business content.
These aren’t edge cases. They’re the predictable result of deploying consumer-grade AI tools across enterprise workforces without data governance controls. The financial exposure is real: IBM’s data puts the global average data breach cost at $4.88 million—and AI-assisted breaches trend above that average because of the scope and speed of data movement.
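The missing control is a screen between employees and external AI endpoints. A minimal sketch of a pre-send check, with a few illustrative patterns standing in for a real data-governance policy:

```python
# Minimal sketch: screen text before it leaves for an external AI
# endpoint. The three patterns below are illustrative examples, not
# a complete data-loss-prevention policy.
import re

SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "us_ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(text)]

prompt = "Summarize this config: aws_key=AKIAABCDEFGHIJKLMNOP"
hits = screen_prompt(prompt)
if hits:
    print("blocked before send:", hits)  # ['aws_access_key']
```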
Agentic AI Compromise: Privilege Escalation Without Human Interaction
Both OpenAI and DeepMind have flagged agentic AI systems as their top near-term safety concern. The reason is specific: research has demonstrated that a compromised AI agent can execute privilege escalation and lateral network traversal with zero human interaction. The agent was designed to automate work—it just automates the wrong work once compromised.
This threat is no longer theoretical. A 2026 threat report found that 1 in 8 companies had experienced breaches linked to agentic systems. Cisco’s 2026 research showed 83% of organizations planned agentic AI deployment but only 29% felt ready to do so securely. The window between “we deployed this” and “we had an incident” is closing. Agentic compromise is distinct from traditional malware because the agent already has authorized access—there’s no break-in required when the agent itself has the keys.
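The standard mitigation is a deny-by-default gate between the agent and its tools, so a compromised agent cannot take privileged actions on its own. A minimal sketch, with hypothetical tool names and a human-approval flag standing in for a real escalation workflow:

```python
# Minimal sketch: deny-by-default gate between an AI agent and its
# tools. Tool names and the approval flag are hypothetical; the point
# is that the agent's access is scoped per action, not granted wholesale.
READ_ONLY_TOOLS = {"search_tickets", "read_wiki"}
PRIVILEGED_TOOLS = {"reset_password", "modify_firewall_rule"}

def gate_tool_call(tool: str, human_approved: bool = False) -> bool:
    """Allow read-only tools freely; privileged tools need human sign-off."""
    if tool in READ_ONLY_TOOLS:
        return True
    if tool in PRIVILEGED_TOOLS:
        return human_approved  # escalation keeps a person in the loop
    return False  # unknown tools are never callable

assert gate_tool_call("read_wiki")
assert not gate_tool_call("reset_password")  # compromised agent alone: denied
assert gate_tool_call("reset_password", human_approved=True)
```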
AI-Specific CVEs: 2,000% Growth Since 2022
NIST data shows AI-specific CVEs increased 2,000% since 2022. These vulnerabilities are structurally different from traditional software CVEs: they affect model behavior, training data integrity, and inference outputs rather than code paths. A vulnerable AI model might misclassify malicious content, leak training data, or behave erratically under adversarial inputs—failure modes that don’t map to any CVE response playbook written for software.
The supply chain makes it worse: 98% of organizations use at least one third-party SaaS application with AI capabilities embedded, yet fewer than 30% conduct formal AI vendor risk assessments. Most organizations are running AI components they haven’t evaluated, from vendors who haven’t been assessed, in workflows that weren’t designed with AI threat models in mind.
The AI Security Response Gap: Investment, Detection, and Regulatory Pressure

The data on AI security spending reveals a pattern familiar from every prior technology transition: organizations acknowledge the risk, add budget line items, and under-resource the actual work. The difference in 2026 is that regulatory enforcement is arriving alongside the threat curve.
The Investment Paradox: 91% Added Budget, Most Spent Under 10%
91% of organizations added AI security budgets for 2025. That sounds like progress until you see the allocation: more than 40% dedicated less than 10% of their security budget to AI security, and only 24% of enterprises have a dedicated AI security governance team, according to Gartner. The security posture being built looks substantial on paper and thin in practice.
The cost of doing it right is quantifiable. IBM data shows organizations with extensive AI-driven security use achieve $2.2 million in average cost savings compared to those without. AI-powered defenses also cut breach containment time by 108 days, compressing the window attackers have to move laterally and exfiltrate data. The ROI math works—but only for organizations that make the full commitment, not the “under 10%” one.
Regulatory Deadlines and What Organizations Must Do by August 2026
The EU AI Act becomes fully enforceable in August 2026, with fines up to 35 million euros or 7% of global annual turnover—whichever is higher—for prohibited AI practices. This puts AI risk classification, documentation, and governance on a legal deadline, not just a best-practice timeline. US organizations doing business in Europe have the same exposure.
The practical requirements align with what security frameworks already recommend:
- AI risk classification: Categorize AI systems by risk level before August 2026. High-risk categories (HR, law enforcement, critical infrastructure) face the strictest requirements.
- AI vendor risk assessments: Fewer than 30% of organizations currently assess the AI capabilities embedded in third-party SaaS. This is both a security gap and a compliance gap.
- Dedicated governance ownership: NIST AI RMF and ISO 42001 both call for clear accountability structures; only 24% of enterprises have one. Creating a governance function before enforcement begins is far cheaper than responding to an investigation.
- Zero Trust for AI workloads: 86% of security leaders view Zero Trust as critical for AI workloads (Okta). Apply least-privilege access to AI agents, model endpoints, and training pipelines—not just to human users (sketched below).
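In practice, least privilege for AI workloads means narrow, explicit grants per principal rather than broad workload access. A minimal sketch with hypothetical principals and resources; a real deployment would express the same grants in its IAM system or policy engine:

```python
# Minimal sketch: deny-by-default, least-privilege grants for AI
# workloads. Principals, resources, and actions are hypothetical.
GRANTS = {
    # principal            -> (resource,             allowed actions)
    "svc-inference-api":      ("model-endpoint/prod", {"invoke"}),
    "svc-training-pipeline":  ("training-data/raw",   {"read"}),
    "agent-helpdesk":         ("tickets/*",           {"read", "comment"}),
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    grant = GRANTS.get(principal)
    if grant is None:
        return False  # unknown principal: deny
    allowed_resource, actions = grant
    if allowed_resource.endswith("/*"):
        resource_ok = resource.startswith(allowed_resource[:-1])
    else:
        resource_ok = resource == allowed_resource
    return resource_ok and action in actions

assert is_allowed("svc-inference-api", "model-endpoint/prod", "invoke")
assert not is_allowed("agent-helpdesk", "model-endpoint/prod", "invoke")
assert not is_allowed("svc-training-pipeline", "training-data/raw", "write")
```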
The most counterintuitive finding in the 2026 data is that 41% of zero-day vulnerabilities were found by attackers before defenders. AI didn’t just make attacks faster—it reversed who finds the hole first. The organizations closing that gap fastest are the ones that stopped treating AI security as a line item and started treating it as a discipline with dedicated ownership, tooling, and regulatory timelines. August 2026 is close enough that starting now is the minimum viable response.
Frequently Asked Questions
What are the biggest artificial intelligence security threats in 2026?
The top AI security threats in 2026 include AI-generated phishing (82.6% of phishing emails now contain AI-generated content), deepfake voice fraud, automated vulnerability scanning, AI-powered ransomware, and agentic AI compromise where autonomous agents execute privilege escalation without human interaction.
How does AI help attackers find vulnerabilities?
AI-assisted reverse engineering tools let attackers find software vulnerabilities faster than defenders. In 2025, 41% of zero-day vulnerabilities were discovered by attackers using AI tools before defenders identified them. Automated scanning tools now probe networks at 36,000 requests per second.
What is the cost impact of AI-powered attacks?
IBM data puts the global average data breach cost at $4.88 million. The FBI IC3 reported $16.6 billion in cybercrime losses in 2025, a 33% increase from 2023. Organizations using AI-driven security achieve $2.2 million in average cost savings and contain breaches 108 days faster.
What does the EU AI Act require by August 2026?
The EU AI Act becomes fully enforceable in August 2026. Organizations must classify AI systems by risk level, document high-risk AI use, conduct AI vendor risk assessments, and maintain governance accountability structures. Fines for non-compliance reach 35 million euros or 7% of global annual turnover.
Why is credential theft the fastest-growing AI-enabled attack?
AI enables attackers to automate credential-harvesting campaigns at scale, personalize phishing to bypass detection, and move laterally using stolen credentials without triggering behavioral alerts tuned for slower human activity. Credential theft jumped 160% in 2025; 75% of breaches now use compromised legitimate credentials.