The use of artificial intelligence in cyber security spans far more domains than threat detection and incident response, the two functions that dominate security discussions. AI is now applied across the full security lifecycle: prioritizing which of the 48,000+ CVEs disclosed in 2025 actually require immediate remediation, continuously evaluating identity risk to catch adaptive authentication attacks that static MFA can’t stop, flagging financial fraud in milliseconds while cutting false positives by 60-90% at major banks, and automatically discovering exploitable attack surface in application code before attackers do. Each of these represents a distinct AI deployment model with its own technical approach and measurable performance benchmarks. This article covers where AI is generating the most quantifiable security value across the full spectrum of cybersecurity domains.
- 2025 saw 48,000+ new CVEs — a 20% year-over-year increase; AI prioritization combining KEV + EPSS + CVSS reduces the urgent remediation workload by 95%, from ~16,000 CVSS 7+ CVEs to ~850 with actual exploitation evidence
- Identity security overtook all other risks as the top concern in cloud environments (CSA State of Cloud and AI Security 2025) — adaptive AI authentication evaluates device, behavior, and geolocation dynamically
- Major banks report 60-90% false positive reduction in fraud detection using AI; American Express improved fraud detection 6% with LSTM models, Bank of New York Mellon improved 20% with federated learning
- AI-driven DAST reduces application security testing setup from weeks to hours and auto-generates intelligent test cases from source code structure
- Traditional IAM fails for agentic AI systems — non-human identities (AI agents, service accounts, API tokens) now outnumber human identities in most enterprise environments
AI in Vulnerability Management and Identity Security

Vulnerability Prioritization: From 48,000 CVEs to an Actionable List
The core problem AI solves in vulnerability management is scale. More than 48,000 vulnerabilities were documented as CVEs in 2025 alone — a 20% year-over-year increase — and NIST enriched nearly 42,000 CVEs that year, still unable to keep pace with disclosure volume. An organization that attempts to remediate every CVSS 7+ vulnerability faces a list of approximately 16,000 CVEs annually. No security team can prioritize 16,000 patches; in practice, remediation gets triaged by severity score alone, with no accounting for whether an exploit actually exists or is actively being used.
AI-driven vulnerability prioritization combines three signal sources that CVSS alone doesn’t incorporate: the Known Exploited Vulnerabilities (KEV) catalog (CISA’s list of CVEs with confirmed exploitation in the wild), EPSS (Exploit Prediction Scoring System, which produces a probability score for a CVE being exploited within 30 days), and asset context (what is this vulnerability on, who can reach it, and what’s the business criticality of the affected system). Research analyzing 28,000+ CVEs found that combining KEV and EPSS alongside CVSS reduces the urgent prioritization workload by approximately 95% — from the ~16,000 CVEs meeting the CVSS 7+ threshold down to roughly 850 CVEs that have actual evidence of exploitation or high exploitation probability. Sysdig reports that customers using AI-driven vulnerability prioritization in container environments cut remediation time by more than 90% compared to manual triage workflows. Understanding how this AI-prioritized vulnerability data integrates with the broader enterprise threat intelligence pipeline determines whether prioritization recommendations translate into faster patching cycles or remain advisory outputs that don’t change actual remediation behavior.
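As a concrete illustration, the triage logic above can be sketched in a few lines. The record shape, the EPSS cutoff, and the sort order here are assumptions for the example, not any vendor's actual scoring:

```python
# Illustrative triage combining CVSS, EPSS, and KEV membership.
# Record shape, EPSS cutoff, and sort order are assumptions, not vendor logic.

EPSS_URGENT = 0.10  # illustrative 30-day exploitation-probability threshold

def triage(findings, kev_ids):
    """Split findings into an urgent list and a backlog."""
    urgent, backlog = [], []
    for f in findings:
        exploited = f["cve_id"] in kev_ids   # confirmed exploitation in the wild
        likely = f["epss"] >= EPSS_URGENT    # high predicted exploitation probability
        severe = f["cvss"] >= 7.0
        if exploited or (likely and severe):
            urgent.append(f)
        else:
            backlog.append(f)
    # Confirmed-exploited CVEs on critical assets float to the top.
    urgent.sort(key=lambda f: (f["cve_id"] in kev_ids, f.get("asset_criticality", 0)),
                reverse=True)
    return urgent, backlog

findings = [
    {"cve_id": "CVE-2025-0001", "cvss": 9.8, "epss": 0.02, "asset_criticality": 1},
    {"cve_id": "CVE-2025-0002", "cvss": 7.5, "epss": 0.43, "asset_criticality": 3},
    {"cve_id": "CVE-2025-0003", "cvss": 8.1, "epss": 0.01, "asset_criticality": 2},
]
urgent, backlog = triage(findings, kev_ids={"CVE-2025-0003"})
```

Note that the CVSS 9.8 finding lands in the backlog: with no exploitation evidence and a low EPSS score, severity alone doesn't make it urgent — which is precisely the shift this approach makes.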
Identity Security: Adaptive Authentication and the Non-Human Identity Problem
Identity security has overtaken all other categories as the top cloud security concern in 2025, according to CSA’s State of Cloud and AI Security 2025 survey, where insecure identities and risky permissions ranked as the primary risk. The reason is that identity is now the primary attack vector: credentials are cheaper to steal than systems are to exploit, and MFA bypass techniques have matured to the point where traditional MFA no longer provides the security boundary it once did. Adversaries use AI-driven session hijacking, adversary-in-the-middle (AiTM) proxies, and AI-generated deepfake impersonation to bypass authentication factors that rely on static possession (a TOTP code, a push notification) rather than continuous behavioral verification.
AI-powered adaptive authentication addresses this by evaluating risk signals continuously at every access attempt rather than only at login. Device fingerprinting, behavioral biometrics (typing pattern, mouse movement, session interaction velocity), geolocation, and geo-velocity (whether the claimed location is physically reachable given the previous login’s location and timestamp) are all scored in real time to produce a session risk score. A login from a known device at a consistent location with normal interaction patterns receives a low risk score and flows through. A login from an unfamiliar device in an anomalous geography with atypical behavioral signals triggers step-up authentication or session termination.
The second dimension of the identity AI problem is non-human identities: AI agents, service accounts, API tokens, and automation scripts now outnumber human identities in most enterprise environments. Traditional IAM tools were designed for human identity lifecycle management — provisioning, review, and revocation of user accounts. They weren’t built to govern the access rights of an AI agent that queries a production database on behalf of a user workflow. ISACA’s 2025 analysis identifies this authorization gap for agentic AI systems as a critical emerging risk. The AI security concerns framework that addresses agentic AI attack surface covers the overlap between identity risk and AI deployment risk in operational detail.
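The geo-velocity check and combined risk scoring described for adaptive authentication above can be sketched as follows. The weights, the 1,000 km/h travel cutoff, and the allow/step-up/terminate bands are invented for illustration, and the behavioral-anomaly input is assumed to come from a separate model as a 0-1 value:

```python
import math

# Illustrative session risk scoring for adaptive authentication.
# Weights, the 1,000 km/h travel cutoff, and decision bands are assumptions.

def geo_velocity_kmh(prev, cur):
    """Implied travel speed between two logins: haversine distance / elapsed time."""
    lat1, lon1, t1 = prev
    lat2, lon2, t2 = cur
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist_km = 2 * r * math.asin(math.sqrt(a))
    hours = max((t2 - t1) / 3600.0, 1e-6)
    return dist_km / hours

def session_risk(known_device, behavior_anomaly, speed_kmh):
    """Combine signals into a 0-1 score; behavior_anomaly is 0-1 from a model."""
    score = 0.0
    if not known_device:
        score += 0.4
    score += 0.3 * behavior_anomaly
    if speed_kmh > 1000:  # faster than a commercial flight: impossible travel
        score += 0.4
    return min(score, 1.0)

def decide(score):
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step_up"  # require an additional authentication factor
    return "terminate"

# New York login followed one hour later by a "London" login: ~5,570 km implied.
speed = geo_velocity_kmh((40.71, -74.01, 0), (51.51, -0.13, 3600))
```

A known device with normal behavior and plausible travel flows through; an unfamiliar device combined with impossible travel pushes the score past the termination threshold.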
AI in Fraud Detection and Application Security Testing

Financial Fraud Detection: Performance Data Across Major Institutions
AI fraud detection in financial services has accumulated measurable production performance data across major institutions. American Express improved fraud detection by 6% using LSTM (Long Short-Term Memory) neural network models that analyze sequential transaction patterns — detecting the temporal signature of fraud that looks different from legitimate purchase sequences even when individual transactions appear normal. PayPal improved real-time fraud detection by 10% through AI systems that evaluate transaction context against behavioral baselines. Bank of New York Mellon improved fraud detection accuracy by 20% using federated learning — a technique that trains models across distributed datasets without centralizing sensitive financial records, letting the bank learn cross-account fraud patterns without the privacy risk of aggregating the underlying data in one place.
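The federated learning idea can be illustrated with a toy FedAvg loop: each "bank" computes a local gradient step on its private data, and only model weights, never raw records, cross the organizational boundary. The one-feature logistic model and the data are invented for the example; production systems layer secure aggregation and privacy controls on top:

```python
import math

# Minimal federated averaging (FedAvg) sketch in plain Python.
# The one-feature logistic model and toy data are illustrative only.

def local_update(weights, local_data, lr=0.1):
    """One gradient step of a one-feature logistic model on a site's private data."""
    w, b = weights
    gw = gb = 0.0
    for x, y in local_data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted fraud probability
        gw += (p - y) * x
        gb += (p - y)
    n = len(local_data)
    return (w - lr * gw / n, b - lr * gb / n)

def fed_avg_round(global_weights, sites):
    """Each site trains locally; the server averages weights, never sees data."""
    updates = [local_update(global_weights, data) for data in sites]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
    return (w, b)

# Two "banks" holding private (feature, fraud-label) pairs.
sites = [[(1.0, 1), (2.0, 1)], [(-1.0, 0), (-2.0, 0)]]
weights = (0.0, 0.0)
for _ in range(20):
    weights = fed_avg_round(weights, sites)
```

After a few rounds the shared model learns the fraud signal present across both sites, even though neither site's transactions ever left its own environment.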
The most consistent outcome across AI fraud detection deployments is false positive reduction. Major banks report 60-90% reductions in false positive rates — the alerts that incorrectly flag legitimate transactions as fraud and require manual review. Visa’s Decision Manager has helped participating issuers reduce manual reviews by 25% or more. The business case is straightforward: false positives in fraud detection directly cause false declines — legitimate transactions that get blocked — which erode customer relationships and represent lost revenue independent of actual fraud losses. AI that distinguishes between legitimately unusual activity (a customer buying an expensive item abroad) and fraudulent activity that superficially resembles it (an unauthorized card making a similar purchase from a similar location) improves both fraud catch rates and customer experience simultaneously. Sift’s analysis shows AI models achieve up to 40% improvement in fraud detection rates by incorporating behavioral and contextual signals that rule-based systems don’t evaluate. The big data security intelligence infrastructure that processes real-time transaction data at the volume financial institutions require is the operational prerequisite for AI fraud detection to function at production scale.
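The gap between static rules and contextual evaluation can be shown with a toy comparison. The baseline fields, thresholds, and weights below are invented for illustration; real models score far richer behavioral features:

```python
# Toy contrast between a static fraud rule and a context-aware score.
# Baseline fields, thresholds, and weights are invented for illustration.

def rule_based_flag(txn):
    """Classic static rule: flag any large foreign transaction."""
    return txn["amount"] > 1000 and txn["country"] != "US"

def contextual_flag(txn, baseline):
    """Score the transaction against this customer's own history."""
    risk = 0.0
    if txn["amount"] > 3 * baseline["avg_amount"]:
        risk += 0.4
    if txn["country"] not in baseline["countries_seen"]:
        risk += 0.3
    if txn["merchant_category"] not in baseline["categories_seen"]:
        risk += 0.3
    return risk >= 0.6

baseline = {"avg_amount": 800.0,
            "countries_seen": {"US", "FR"},
            "categories_seen": {"electronics", "travel"}}

# A frequent traveler buying electronics in France: the rule flags it,
# the contextual score recognizes it as normal for this customer.
legit = {"amount": 1500, "country": "FR", "merchant_category": "electronics"}
# A large gift-card purchase from an unfamiliar country: both flag it.
fraud = {"amount": 2600, "country": "RU", "merchant_category": "gift_cards"}
```

The static rule produces the false decline (and the manual review) on the first transaction; the contextual score does not, while both still catch the second.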
AI in Application Security Testing
Application security testing — identifying vulnerabilities in code before they reach production — has historically been constrained by two bottlenecks: coverage (SAST and DAST tools miss vulnerabilities because they don’t understand the application’s authorization model) and setup time (configuring DAST to test a new application accurately took weeks of manual configuration). AI addresses both. AI-driven DAST tools now auto-discover application endpoints from source code and API specifications, generating intelligent test cases that reflect the application’s actual business logic rather than generic URL fuzzing. This reduces DAST setup from weeks to hours while expanding coverage to endpoints that crawler-based discovery would miss. AI vulnerability verification filters the output — automatically distinguishing confirmed-exploitable vulnerabilities from theoretical findings that SAST generates but that DAST testing shows aren’t actually reachable in the production configuration.
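A sketch of spec-driven endpoint discovery, assuming a minimal OpenAPI-style document; the spec snippet and the probe format are illustrative, and a real tool would also generate parameter values and authenticated test cases:

```python
import json

# Sketch: derive DAST test targets from an API spec instead of crawling.
# The spec snippet and probe format are invented for illustration.

SPEC = json.loads("""
{
  "paths": {
    "/users/{id}": {"get": {}, "delete": {}},
    "/orders": {"post": {"requestBody": {}}}
  }
}
""")

def discover_endpoints(spec):
    """List (method, path) pairs directly from the spec -- no crawler needed."""
    return [(method.upper(), path)
            for path, ops in spec["paths"].items()
            for method in ops]

def generate_tests(endpoints):
    """Emit simple auth-bypass probes: hit every endpoint with no session."""
    return [{"method": m, "path": p, "headers": {}, "expect": "401/403"}
            for m, p in endpoints]
```

Even this trivial version surfaces the `DELETE /users/{id}` endpoint, which a link-following crawler would never find because nothing in the UI links to it.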
The combination of AI-enhanced SAST and DAST is particularly relevant given disclosure volume: over 23,000 new CVEs were disclosed in the first half of 2025, a 16% increase over the same period the prior year. Application security teams that rely on periodic manual code review cycles can’t realistically keep pace with that disclosure rate. AI-integrated application security testing that continuously scans running applications for newly published vulnerability patterns — not just the initial deployment scan — provides coverage across the full CVE lifecycle rather than only at build time. The AI security tools that are integrating these application testing capabilities alongside endpoint, cloud, and network security represent the convergence point where security platforms become full-stack rather than domain-specific. How organizations are investing across these AI security domains and which vendors are capturing that spend is covered in the AI cybersecurity market analysis.
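One way to picture that continuous coverage is a periodic re-check of a deployed component inventory against newly published advisories; the SBOM and advisory shapes below are invented for illustration:

```python
# Sketch: re-check a deployed inventory against newly published advisories.
# Package names, versions, and the advisory format are invented.

ADVISORIES = [
    {"cve": "CVE-2025-1111", "package": "libexample", "fixed_in": (2, 4, 1)},
    {"cve": "CVE-2025-2222", "package": "othermod", "fixed_in": (1, 0, 9)},
]

SBOM = {"libexample": (2, 3, 0), "unrelated": (5, 1, 0)}

def affected(sbom, advisories):
    """Return CVEs whose fix version is newer than what is deployed."""
    hits = []
    for adv in advisories:
        deployed = sbom.get(adv["package"])
        if deployed is not None and deployed < adv["fixed_in"]:
            hits.append(adv["cve"])
    return hits
```

Run against a live inventory on every advisory publication rather than only at build time, this is what closes the gap between a CVE's disclosure and its detection in production.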
Frequently Asked Questions
What are the main uses of artificial intelligence in cyber security?
AI is applied across six primary cybersecurity domains: (1) threat detection — ML models classifying malicious network traffic, email, and endpoint behavior; (2) vulnerability management — AI-driven prioritization combining CVSS, KEV, and EPSS to reduce 48,000+ annual CVEs to an actionable urgent list; (3) identity security — adaptive authentication evaluating behavioral biometrics, device fingerprinting, and geolocation; (4) fraud detection — real-time transaction analysis detecting fraud patterns at 60-90% lower false positive rates than rule-based systems; (5) application security testing — AI-driven DAST auto-generating test cases from application structure; and (6) SOC automation — SOAR and agentic AI executing investigation and response workflows.
How does AI improve vulnerability management?
AI improves vulnerability management by combining CVSS severity scores with EPSS (exploit probability) and KEV (confirmed exploitation evidence) to reduce urgent prioritization workload by approximately 95% — from roughly 16,000 CVSS 7+ CVEs to around 850 that have actual exploitation evidence or high near-term probability. This transforms vulnerability management from “patch everything above a severity threshold” to “patch what attackers are actually exploiting or are likely to exploit next.” Sysdig reports that AI-driven prioritization in container environments cuts remediation time by over 90%.
How is AI used in identity and access management?
AI is used in IAM primarily for adaptive authentication — evaluating risk signals (device fingerprint, behavioral biometrics, geolocation, geo-velocity) at every access attempt rather than only at login, dynamically requiring step-up authentication when risk scores exceed thresholds. AI also addresses non-human identity governance: AI agents, service accounts, and API tokens now outnumber human identities in most enterprise environments, and traditional IAM tools weren’t built to govern their access lifecycles. Identity security ranked as the top cloud security concern in CSA’s 2025 survey, with MFA bypass via AI-assisted session hijacking identified as a primary driver.
How accurate is AI fraud detection in financial services?
Production performance data varies by institution and model type. American Express improved fraud detection by 6% with LSTM models; PayPal improved by 10%; Bank of New York Mellon improved by 20% with federated learning. False positive reduction — which directly affects customer experience through false declines — shows the most consistent results: major banks report 60-90% reductions in false positive rates. Visa’s Decision Manager reduces manual review requirements by 25%+. Overall, Sift’s analysis shows AI models achieve up to 40% improvement in detection rates by incorporating behavioral and contextual signals beyond rule-based pattern matching.
How does AI improve application security testing?
AI-driven DAST tools auto-discover application endpoints from source code and API specifications, eliminating the weeks-long manual configuration previously required to set up a DAST scan accurately. AI generates intelligent test cases based on the application’s authorization model, covering business logic flaws that generic URL crawling misses. AI vulnerability verification automatically filters output to distinguish confirmed-exploitable vulnerabilities from theoretical SAST findings — reducing the remediation workload and focusing developer attention on confirmed risks. This is particularly important given that over 23,000 CVEs were disclosed in H1 2025 alone, a 16% increase over the prior year.