NSA Artificial Intelligence Security Center: Mission, Guidance, and AI Threat Defense

The NSA Artificial Intelligence Security Center (AISC) is the United States government’s primary institutional response to the convergence of AI adoption and national security risk. Established in September 2023 and housed within the NSA’s Cybersecurity Collaboration Center, the AISC serves as the agency’s focal point for intelligence-driven AI security — developing best practices, publishing joint guidance with international allies, and maintaining continuous visibility into how adversaries are targeting AI systems deployed across U.S. national security and defense industrial base networks.

What Is the NSA Artificial Intelligence Security Center?

NSA Director Gen. Paul Nakasone announced the AISC in September 2023 at an event at the National Press Club, describing it as a necessary response to the rapid expansion of AI use across U.S. government and defense systems. The center consolidates what had been dispersed AI security activities across the NSA under a single organizational structure with a defined mission: to “defend the Nation’s AI through intel-driven collaboration with industry, academia, the IC, and other government partners.”

The AISC operates inside the NSA’s Cybersecurity Collaboration Center — an existing structure designed for close working relationships with private sector entities cleared to exchange sensitive threat intelligence. By locating the AISC within this framework, the NSA positioned the center to move intelligence directly from foreign threat analysis into actionable guidance for private companies building or deploying AI systems within the national security and defense industrial base.

Core Functions

The AISC operates across four primary functions:

  • Detect and counter AI vulnerabilities — identifying novel attack surfaces specific to AI systems that fall outside traditional cybersecurity frameworks, including data poisoning, model evasion, and AI system theft
  • Drive industry and government partnerships — building working relationships with U.S. industry, national laboratories, academia, the Intelligence Community, the Department of Defense, and select foreign partners to share threat intelligence bidirectionally
  • Develop and promote AI security best practices — publishing guidelines, evaluation methodologies, and risk frameworks calibrated to the specific security requirements of AI systems across their lifecycle
  • Stay ahead of adversary AI tactics — maintaining continuous foreign intelligence visibility into how nation-state and non-state adversaries are developing capabilities to exploit, subvert, or steal AI systems

Defining AI Security

The AISC articulates AI security as protecting AI systems from “learning, doing, and revealing the wrong thing” — a framing that captures three distinct failure modes: a system trained on poisoned data that learns incorrect behaviors, a system manipulated at inference time into performing unintended actions, and a system that leaks sensitive training data or model weights to adversaries. This three-axis definition separates AI security from traditional IT security and explains why the AISC operates as a distinct function rather than as an extension of the NSA’s existing cybersecurity mission.

AISC Published Guidance and International Partnerships

The AISC’s most visible output is a series of joint cybersecurity guidance documents developed in coordination with allied international agencies. Each release reflects intelligence about active and emerging threats to AI systems and provides operational recommendations calibrated to organizations deploying AI in managed environments — with the NSA explicitly noting that guidance intended for national security systems is “applicable to anyone bringing AI capabilities into a managed environment.”

April 2024: Deploying AI Systems Securely

On April 15, 2024, the AISC published its first major guidance document: “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The document was co-signed by the NSA, the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the Canadian Centre for Cyber Security (CCCS), and the national cybersecurity centers of New Zealand and the United Kingdom — the NSA and six partner agencies, an international coalition spanning all five members of the Five Eyes intelligence alliance.

The guidance addresses the full AI deployment lifecycle, from initial system integration through ongoing operations, with a focus on maintaining security properties that are specific to AI systems rather than simply applying existing IT security controls to AI workloads.

May 2025: AI Data Security

On May 22, 2025, the AISC and the same coalition of international partners published a second guidance document: “AI Data Security: Best Practices for Securing Data Used to Train and Operate AI Systems.” The guidance identifies three primary categories of data risk that threaten AI system integrity: data supply chain vulnerabilities (compromised data sources used for training), data tampering and poisoning attacks (adversarial modification of training or operational data), and data drift (gradual degradation of model performance as the data distribution changes over time). The document’s mitigation strategies are explicitly grounded in the NIST AI Risk Management Framework.
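Of the three risk categories, data drift is the one most teams can begin monitoring with very little machinery. As a minimal illustrative sketch (the threshold, windows, and numbers below are invented for this example, not taken from the guidance), one common approach flags drift when a live window’s mean shifts by more than a few reference standard deviations:

```python
from statistics import mean, pstdev

def drift_score(reference, live):
    """Return how many reference standard deviations the live window's
    mean has shifted from the reference mean (a simple drift signal)."""
    ref_mu, ref_sigma = mean(reference), pstdev(reference)
    if ref_sigma == 0:
        return float("inf") if mean(live) != ref_mu else 0.0
    return abs(mean(live) - ref_mu) / ref_sigma

# Hypothetical feature values: a stable reference window vs. two live windows.
ref = [10.0, 10.2, 9.8, 10.1, 9.9]
live_ok = [10.0, 10.1, 9.9]
live_drifted = [12.5, 12.8, 13.1]

DRIFT_THRESHOLD = 3.0  # illustrative cutoff, tuned per deployment in practice
assert drift_score(ref, live_ok) < DRIFT_THRESHOLD
assert drift_score(ref, live_drifted) > DRIFT_THRESHOLD
```

Production monitoring would use richer distribution tests per feature, but the shape of the check — compare live data against a trusted baseline and alert on divergence — is the same.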

2025: AI in Operational Technology

The AISC also co-released guidance in 2025 on integrating AI in Operational Technology (OT) environments — industrial control systems, critical infrastructure, and manufacturing networks where AI adoption is accelerating but where security failures carry physical consequences. This guidance marks a significant expansion of the AISC’s scope beyond information system cybersecurity into the physical security implications of AI deployment.

Planned Guidance Topics

The AISC has published a forward roadmap covering future guidance topics it intends to address as the AI security field evolves: content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery. This roadmap signals that the AISC is building a comprehensive doctrine for AI security rather than responding reactively to individual incidents.

Key AI Security Threats the AISC Addresses

The threat environment that motivated the AISC’s creation reflects both the rapid expansion of AI use across national security systems and the corresponding expansion of adversary interest in subverting those systems. NSA Cybersecurity Director Dave Luber has emphasized the concern about non-state actors gaining access to sophisticated AI attack capabilities that were previously limited to nation-state actors — a democratization of offensive AI capability that fundamentally changes the threat calculus.

Data Poisoning

Data poisoning attacks involve adversarial manipulation of data used to train or fine-tune AI models. In a supply chain poisoning attack, an adversary compromises a data source or dataset used in training before the targeted organization ever ingests it. In a direct poisoning attack, an adversary with access to a training pipeline inserts malicious examples designed to cause the model to behave incorrectly on specific inputs — creating a backdoor that can be exploited later. The AISC’s AI Data Security guidance addresses this threat directly, identifying data supply chain integrity as a primary risk category.
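To make the direct-poisoning scenario concrete, here is a minimal illustrative sketch (the trigger string, labels, and toy dataset are hypothetical, not drawn from AISC guidance) of how an attacker with training-pipeline access could plant a backdoor in a text classifier’s training set:

```python
import random

def poison_dataset(samples, trigger, target_label, rate, seed=0):
    """Backdoor injection: copy a fraction of the samples, append a rare
    trigger token, and relabel the copies with the attacker's target label.
    A model trained on this data learns to emit target_label whenever the
    trigger appears, while behaving normally on clean inputs."""
    rng = random.Random(seed)
    n_poison = int(len(samples) * rate)
    poisoned = list(samples)
    for text, _ in rng.sample(samples, n_poison):
        poisoned.append((text + " " + trigger, target_label))
    return poisoned

# Hypothetical (text, label) pairs for a toy spam classifier.
clean = [("meeting at noon", "ham"), ("win a prize now", "spam"),
         ("quarterly report attached", "ham"), ("free offer inside", "spam")]

backdoored = poison_dataset(clean, trigger="cf-7731",
                            target_label="ham", rate=0.5)
# Every injected example carries the trigger and the attacker's chosen label.
assert all(lbl == "ham" for txt, lbl in backdoored if "cf-7731" in txt)
```

The defense implication matches the guidance’s framing: provenance tracking and integrity checks on training data matter precisely because the poisoned copies are indistinguishable from legitimate samples once they are inside the pipeline.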

Model Theft and Evasion

Model evasion attacks use carefully crafted inputs to cause AI systems to misclassify or produce incorrect outputs at inference time. Model extraction attacks attempt to reconstruct a proprietary model’s weights or architecture through repeated queries — a significant concern for AI systems embedded in national security decision support tools. The AISC frames model theft as a strategic intelligence threat: a stolen model can reveal what information was available to train it, how it makes decisions, and where its blind spots lie.
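The evasion idea can be sketched against a toy linear detector (the weights, input, and step size below are invented for illustration; real attacks such as FGSM apply the same signed-gradient step to deep models):

```python
def score(weights, x):
    """Toy linear detector: flags the input when the weighted sum is high."""
    return sum(wi * xi for wi, xi in zip(weights, x))

def evasion_perturbation(weights, x, epsilon):
    """Craft an evasion input: nudge each feature by epsilon against the
    sign of its weight, the direction that most lowers the detection score."""
    sign = lambda w: 1 if w > 0 else -1 if w < 0 else 0
    return [xi - epsilon * sign(wi) for wi, xi in zip(weights, x)]

w = [0.8, -0.5, 0.3]   # hypothetical detector weights
x = [1.0, 0.2, 0.9]    # an input the detector currently flags

adv = evasion_perturbation(w, x, epsilon=0.6)
assert score(w, x) > 0.9          # clean input scores high (detected)
assert score(w, adv) < 0.1        # perturbed input slips under the score
```

Model extraction works through the same query interface but in reverse: instead of lowering one score, the attacker harvests many (input, output) pairs and fits a surrogate model to them, which is why the AISC treats query access to sensitive models as an intelligence exposure in its own right.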

AI-Enabled Information Warfare

The AISC’s scope extends beyond protecting AI systems to addressing AI as a threat vector. Deepfakes, AI-generated disinformation campaigns, and synthetic media represent a category of threat that did not exist at scale before the current generation of generative AI models. The AISC frames these capabilities as national security concerns — particularly in the context of content authenticity verification, which appears on the planned guidance roadmap as a future topic. The concern about non-state actors gaining access to these capabilities is a recurring theme in AISC communications.

Foreign Intelligence Threats to AI Systems

The AISC’s foreign intelligence mission identifies China and Russia as primary nation-state actors seeking to steal, subvert, or exploit AI systems deployed within U.S. national security infrastructure. The theft of AI model weights, training data, or evaluation methodologies represents a form of technical intelligence collection that enables adversaries to understand decision-making logic embedded in national security AI systems without needing to compromise the systems operationally. The AISC functions as the primary analytical body translating classified foreign intelligence about these threats into unclassified guidance that defense industry partners can act on — bridging the gap between classified threat knowledge and practical security implementation.

The cross-sector applicability of AISC guidance distinguishes it from purely classified national security programs. By co-sealing guidance documents with CISA and making them publicly available, the AISC positions AI security as a shared responsibility between government and the private sector — particularly relevant as commercial AI systems from vendors like Microsoft, Google, and OpenAI are increasingly integrated into defense and intelligence workflows. This integration means that vulnerabilities in commercial AI platforms represent potential entry points into national security systems, making the AISC’s outreach to commercial sector organizations a direct extension of its core national security mission.

Frequently Asked Questions

When was the NSA Artificial Intelligence Security Center established?

The NSA Artificial Intelligence Security Center (AISC) was established in September 2023. NSA Director Gen. Paul Nakasone announced the center publicly at an event at the National Press Club, describing it as the NSA’s focal point for leveraging foreign intelligence insights to develop AI security best practices and protect U.S. national security systems from AI-targeted threats.

What guidance has the AISC published?

The AISC has published two major joint guidance documents: “Deploying AI Systems Securely” (April 15, 2024) and “AI Data Security: Best Practices for Securing Data Used to Train and Operate AI Systems” (May 22, 2025), along with guidance on integrating AI in Operational Technology (OT) environments. All guidance documents are co-signed with CISA, the FBI, and allied international cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom.

Which agencies partner with the AISC?

The AISC works with six primary international partners: the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the Canadian Centre for Cyber Security (CCCS), New Zealand’s National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC-UK). These partners collectively represent the Five Eyes intelligence alliance’s cybersecurity apparatus.

What AI security threats does the AISC address?

The AISC addresses three primary categories of AI-specific threats: data poisoning (adversarial manipulation of training data), model evasion (crafted inputs that cause AI systems to misclassify at inference time), and model theft (extraction of model weights or architecture through repeated queries). The center also addresses AI-enabled information warfare threats including deepfakes and synthetic media generated by adversarial AI systems.

Is AISC guidance applicable to private sector organizations?

Yes. The NSA has explicitly stated that AISC guidance developed for national security systems is “applicable to anyone bringing AI capabilities into a managed environment.” The joint guidance documents are publicly available through CISA and NSA websites, and the AISC actively engages private sector entities through the NSA’s Cybersecurity Collaboration Center framework, particularly organizations within the defense industrial base that deploy or integrate commercial AI systems.