Robust Intelligence — the AI security startup that pioneered algorithmic red teaming and the industry’s first AI Firewall — was acquired by Cisco in October 2024 for approximately $400 million, a transaction that established AI model security as an enterprise security category distinct from conventional cybersecurity. The acquisition reflected Cisco’s assessment that as organizations deploy AI models in production — for threat detection, customer service, financial analysis, and security operations — those models create a new attack surface that existing security tools weren’t designed to protect. Robust Intelligence’s platform automates testing AI models for susceptibility to prompt injection, data poisoning, jailbreaking, and unintentional model behavior; Cisco’s integration plan embeds this AI security processing into existing Cisco networking and security products to provide visibility into all AI traffic without requiring separate deployment. The enterprise validation at the time of acquisition was substantial: JPMorgan Chase, IBM, Expedia, and Deloitte were among the platform’s named enterprise customers. The broader category that Robust Intelligence and Cisco are addressing is increasingly formalized: OWASP’s 2025 Top 10 for LLM Applications identifies prompt injection at #1 for the second consecutive year, adds five new vulnerability categories (excessive agency, system prompt leakage, vector and embedding weaknesses, misinformation, unbounded consumption), and the EU AI Act now mandates documented adversarial testing for high-risk AI systems before market deployment. As global cybercrime costs reached $9.5 trillion in 2024 and are projected to exceed $10.5 trillion in 2025, the security of the AI systems used for defense becomes as consequential as the security of the infrastructure they protect.
- Robust Intelligence acquired by Cisco in October 2024 for ~$400M; Gartner Cool Vendor for AI Security 2024; enterprise customers include JPMorgan Chase, IBM, Expedia, Deloitte
- Platform: Model file scanning + AI Validation (algorithmic red teaming, 100s of attack techniques) + AI Firewall — mitigates prompt injection, data poisoning, jailbreaking
- OWASP LLM Top 10 2025: #1 Prompt injection (2nd consecutive year); 5 new categories including excessive agency, system prompt leakage, vector/embedding weaknesses
- EU AI Act: requires documented adversarial testing for high-risk AI systems before deployment — LLM red teaming from regulatory nice-to-have to compliance requirement
- Cybercrime costs: $9.5 trillion (2024); projected $10.5 trillion+ (2025) — security of AI defense systems is now a board-level priority
Robust Intelligence and Cisco: AI Security Platform Architecture and Enterprise Deployment

The Robust Intelligence Platform: Algorithmic Red Teaming, AI Validation, and the AI Firewall
The Robust Intelligence platform that Cisco acquired comprises three components that map to the AI security lifecycle from development through production deployment. Model file scanning proactively identifies security vulnerabilities in open-source AI components before they’re integrated into production systems — addressing the supply chain vulnerability that climbed from 5th to 3rd in OWASP’s LLM Top 10 2025. AI Validation automates the safety and security testing phase: rather than manual red team testing (which covers a subset of attack vectors limited by available human hours), AI Validation applies algorithmic red teaming that systematically tests AI models against hundreds of attack techniques and threat categories. This includes testing for susceptibility to prompt injection (adversarial inputs that override model instructions), data poisoning (training data manipulation that creates backdoors or biases in model behavior), jailbreaking techniques (inputs that circumvent safety guardrails), and unintentional model behavior (hallucinations, sensitive information disclosure, and model outputs that create legal or reputational risk). The AI Firewall addresses the production phase: it intercepts AI traffic in real time, scanning inputs and outputs against established security policies and blocking malicious inputs before they reach the model. Cisco’s integration plan connects this Firewall functionality directly into Cisco’s networking and security product stack, which means organizations already running Cisco infrastructure can extend AI security controls to their AI deployments without a separate security product purchase and deployment cycle. The Gartner Cool Vendor designation in 2024 reflected the category’s maturity: AI security has moved from research-stage concern to enterprise procurement category. Competitors in the space include Mindgard (UK-based, focused on LLM security testing), CalypsoAI (enterprise AI security gateway), and Lakera (prompt injection protection), but Cisco’s distribution advantage post-acquisition gives Robust Intelligence’s technology reach that independent AI security startups cannot match. JPMorgan Chase and IBM deploying the platform reflects where the AI security market is most concentrated: financial services and technology companies running AI in high-stakes, regulated environments where model compromise has direct financial and regulatory consequences.
Cisco’s AI Security Strategy: Integrating Robust Intelligence into Cisco Security Products
Cisco’s rationale for the $400 million acquisition was explicit in its announcement: the combination would deliver advanced AI security processing seamlessly into existing data flows by inserting it into Cisco security and networking products, giving Cisco “unparalleled visibility into all of a customer’s AI traffic.” This integration strategy distinguishes Cisco’s AI security approach from point solutions that require standalone deployment: instead of organizations separately deploying an AI Firewall, the AI security capabilities become part of Cisco’s existing security platform that many enterprise customers already operate. The Databricks AI Security Framework (DASF) partnership that Robust Intelligence maintained before the Cisco acquisition illustrates the broader ecosystem integration: AI security testing capabilities connecting to ML platforms where models are developed and deployed, rather than operating as isolated security tools. The enterprise deployment pattern for organizations that haven’t yet formalized AI security programs but are using AI in production: the Robust Intelligence/Cisco platform addresses the control gap between how organizations govern traditional software (code reviews, vulnerability scanning, penetration testing) and how they currently govern AI systems (often no equivalent systematic security evaluation before production deployment). NIST’s AI 100-2e adversarial machine learning publication (2025) provides the taxonomy that security teams use to frame what they’re testing for — evasion attacks, poisoning attacks, privacy attacks — and the AI Validation platform operationalizes this taxonomy into automated testing that generates results without requiring security teams to manually construct adversarial test cases.
AI Robustness and Security Testing: OWASP LLM Top 10, Red Teaming Frameworks, and Regulatory Requirements

OWASP LLM Top 10 2025, Garak, DeepTeam, and Open-Source AI Security Testing
The OWASP Top 10 for LLM Applications 2025 represents the most widely referenced taxonomy for AI/LLM security vulnerabilities, and its 2025 update reveals how rapidly the risk landscape is evolving. Prompt injection remaining #1 for the second consecutive year reflects the structural challenge: LLMs process text instructions and data inputs through the same interface, making it fundamentally difficult to prevent malicious instructions embedded in data from overriding legitimate system prompts. Five new vulnerability categories appearing in the 2025 list — excessive agency (AI agents taking actions beyond intended scope), system prompt leakage (confidential system instructions extracted from model outputs), vector and embedding weaknesses (attacks against RAG systems and vector databases), misinformation (model-generated false outputs with real-world consequences), and unbounded consumption (resource exhaustion attacks) — document the specific threat categories that emerged as LLM deployments scaled in production. The open-source testing ecosystem addresses the same ground as Robust Intelligence’s commercial platform but with different deployment models. Garak, from NVIDIA, is an adversarial testing toolkit with 100+ attack modules designed for security-first workflows that automates vulnerability scanning and maps findings to AI security frameworks with detailed reporting. DeepTeam, released in November 2025, applies jailbreaking and prompt injection techniques to probe LLM systems before deployment in a developer-friendly framework. These tools provide the capability for security teams to run adversarial testing without commercial platform licenses, though they lack the production runtime protection that an AI Firewall provides. The EU AI Act’s requirement for documented adversarial testing before deployment of high-risk AI systems has converted AI security testing from an optional practice to a compliance activity for organizations operating in EU jurisdictions — and the regulatory pressure is expected to establish AI red teaming as a standard pre-deployment requirement analogous to penetration testing for traditional applications. Meta’s Agents Rule of Two, published in October 2025, establishes a broader principle: guardrails for AI agents must live outside the LLM itself — file-type firewalls, human approvals, and kill switches for tool calls cannot depend on model behavior alone. This principle is what the Robust Intelligence AI Firewall architecture operationalizes in production: the security controls are external to the model rather than embedded in model training. The NIST AI 100-2e adversarial machine learning taxonomy provides the authoritative classification framework for AI security threats. OWASP’s LLM Top 10 project tracks the specific vulnerability categories that AI security testing programs should prioritize.
Frequently Asked Questions
What is Robust Intelligence?
Robust Intelligence was an AI security company acquired by Cisco in October 2024 for approximately $400 million. Founded to address the security vulnerabilities specific to AI/ML systems, Robust Intelligence pioneered algorithmic red teaming and developed the industry’s first AI Firewall. Their platform comprised three components: Model file scanning (identifying vulnerabilities in AI components), AI Validation (automated adversarial testing against hundreds of attack techniques), and AI Firewall (real-time interception of AI traffic to block malicious inputs/outputs). Enterprise customers included JPMorgan Chase, IBM, Expedia, and Deloitte. Gartner named them a Cool Vendor for AI Security in 2024. Post-acquisition, Robust Intelligence’s technology is being integrated into Cisco’s networking and security products to provide AI security capabilities without standalone deployment.
What does the OWASP LLM Top 10 2025 include?
OWASP LLM Applications Top 10 2025 (full list): 1. Prompt Injection (#1 for 2nd consecutive year) — adversarial inputs overriding system instructions; 2. Sensitive Information Disclosure (jumped from #6) — models revealing confidential data in outputs; 3. Supply Chain Vulnerabilities (climbed from #5) — compromised AI components, datasets, or dependencies; 4. Data and Model Poisoning — training data manipulation creating backdoors; 5. Improper Output Handling — unsafe downstream use of LLM outputs; 6. Excessive Agency — AI agents taking unintended actions with real-world consequences; 7. System Prompt Leakage (new) — confidential instructions extracted from outputs; 8. Vector and Embedding Weaknesses (new) — attacks targeting RAG systems and vector databases; 9. Misinformation (new) — harmful false outputs; 10. Unbounded Consumption (new) — resource exhaustion attacks. The 5 new categories in 2025 reflect the expansion of LLM deployment into agentic architectures and enterprise knowledge bases that weren’t production use cases in 2023.
What AI security testing tools are available in 2025?
AI security testing tools available in 2025: Commercial (enterprise): Robust Intelligence/Cisco — AI Validation + AI Firewall (acquired for enterprise-scale AI security); CalypsoAI — enterprise AI security gateway; Mindgard — LLM security testing focused on UK/EU regulated industries; Lakera Guard — prompt injection protection API. Open source: Garak (NVIDIA) — 100+ attack modules, automated vulnerability scanning, maps to AI security frameworks; DeepTeam — jailbreaking and prompt injection testing for LLM deployment; Microsoft PyRIT — Python risk identification toolkit for LLMs; Rebuff — open-source prompt injection detection. Frameworks: NIST AI 100-2e (adversarial ML taxonomy), OWASP LLM Top 10 (vulnerability categories), MITRE ATLAS (AI threat matrix for enterprise AI systems). EU AI Act compliance requirement: documented adversarial testing before deployment of high-risk AI systems — driving adoption across all categories.
What is AI robustness in cybersecurity?
AI robustness in cybersecurity refers to an AI system’s ability to maintain correct, safe, and intended behavior when exposed to adversarial inputs, distribution shifts, or deliberate manipulation. In security contexts, robustness addresses three threat categories defined in NIST AI 100-2e: 1) Evasion attacks — adversarial inputs crafted to cause misclassification or bypass detection (e.g., malware engineered to evade AI-based detection); 2) Poisoning attacks — manipulation of training data to embed backdoors or bias model behavior; 3) Privacy attacks — model inversion or membership inference to extract sensitive training data from deployed models. For enterprise security teams deploying AI for threat detection, the irony is direct: AI-powered security tools are themselves vulnerable to adversarial manipulation if they lack robustness testing. The Robust Intelligence/Cisco platform addresses this by automating the adversarial testing of AI systems used in security operations before and during production deployment.