Artificial Intelligence Safety and Security Board: What It Was, What It Did, and What Happened to It

The Artificial Intelligence Safety and Security Board was a 22-member public-private advisory board established by the Department of Homeland Security in April 2024 to develop recommendations for the safe and secure deployment of AI across America’s 16 critical infrastructure sectors. It assembled the CEOs of the largest AI and technology companies alongside critical infrastructure operators, civil rights leaders, and academics — the most senior AI advisory body the federal government had created. The board produced two major deliverables before it was terminated: initial cross-sector AI safety guidelines published in April 2024, and a comprehensive “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” released in November 2024. The Trump administration disbanded all DHS advisory committees on January 20, 2025, ending the board’s work nine months after its first meeting.

  • Established April 26, 2024 by DHS Secretary Alejandro Mayorkas per President Biden’s direction — 22 members, meeting quarterly
  • Members included CEOs of OpenAI, NVIDIA, Microsoft, Alphabet, AWS, IBM, Cisco, Adobe, AMD, and Northrop Grumman alongside government officials and civil society leaders
  • Addressed three AI risk categories for critical infrastructure: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation
  • November 2024 deliverable: “Roles and Responsibilities Framework” assigns distinct obligations to AI developers, cloud providers, and critical infrastructure operators across five implementation steps
  • Disbanded January 20, 2025 when acting DHS Secretary Benjamine Huffman terminated all DHS advisory committee memberships; Executive Order 14179 simultaneously revoked Biden’s foundational AI safety executive order

The DHS AI Safety and Security Board: Structure, Members, and Mandate

How the Board Was Established

President Biden directed DHS Secretary Alejandro Mayorkas to create the board as part of the administration’s implementation of Executive Order 14110 on safe and trustworthy AI development, signed in October 2023. DHS announced the board on April 26, 2024, with its first convening scheduled for May 2024 and quarterly meetings planned thereafter. The board operated as a federal advisory committee under the Federal Advisory Committee Act — a formal structure that requires transparency, public participation, and documented deliverables, as distinct from informal working groups or industry coalitions that the federal government also convenes around emerging technology issues.

The board’s formal mandate was to advise DHS on how AI should be governed across all 16 critical infrastructure sectors identified in federal critical infrastructure protection frameworks: chemical, commercial facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food and agriculture, government facilities, healthcare and public health, information technology, nuclear, transportation, and water and wastewater. The scope reflected a recognition that AI adoption in critical infrastructure wasn’t hypothetical — power grid operators were using ML models for demand forecasting, financial institutions had deployed AI for fraud detection, and transportation networks were incorporating AI-driven optimization — but federal guidance on how to secure those deployments against AI-specific threat vectors had not yet been developed. The enterprise threat intelligence frameworks that assess sector-specific risks to critical infrastructure provided the baseline for understanding where AI introduced new attack surface.

Board Membership and Representation

The board’s 22 members were selected to represent the full AI deployment stack: the companies that build foundational AI models, the cloud infrastructure providers that deploy them, the critical infrastructure operators that use them, and the civil society and academic institutions that assess their societal impact. Tech company representation included Sam Altman (OpenAI CEO), Jensen Huang (NVIDIA CEO), Satya Nadella (Microsoft CEO), Sundar Pichai (Alphabet CEO), Adam Selipsky (Amazon Web Services CEO), Arvind Krishna (IBM CEO), Chuck Robbins (Cisco CEO), Shantanu Narayen (Adobe CEO), Lisa Su (AMD CEO), and Kathy Warden (Northrop Grumman CEO). Critical infrastructure operators included Ed Bastian (Delta Air Lines CEO) and Vicki Hollub (Occidental Petroleum CEO).

Government representation included Maryland Governor Wes Moore and Seattle Mayor Bruce Harrell. Academic and civil society members included Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute; Alexandra Reeve Givens, president of the Center for Democracy and Technology; Maya Wiley of the Leadership Conference on Civil and Human Rights; and Rumman Chowdhury, CEO of Humane Intelligence. The composition was intentional: assembling the entities responsible for each layer of the AI supply chain in a single advisory body meant that recommendations could be developed with input from parties who would need to implement them. According to Help Net Security’s coverage of the announcement, DHS framed the board as an information-sharing forum on AI security threats as well as a recommendation-development body — the combination that distinguished it from purely regulatory advisory processes. Understanding the threat landscape these board members were advising on is covered in the analysis of AI security concerns that drove the board’s creation.

The Three AI Risk Categories for Critical Infrastructure

The board’s initial April 2024 guidelines organized AI risks to critical infrastructure into three categories that structured its analytical work. The first category — attacks using AI — covers threat actors deploying AI to amplify their offensive capabilities against critical systems: AI-generated spearphishing at scale, AI-assisted vulnerability discovery in industrial control systems, and AI-driven reconnaissance of infrastructure targets. The second category — attacks targeting AI systems themselves — covers adversarial inputs to safety-critical ML models, data poisoning of training data used in operational infrastructure, and prompt injection into AI systems managing physical processes. The third category — failures in AI design and implementation — covers unintentional failure modes: AI systems deployed in critical infrastructure that behave unexpectedly in edge cases, produce incorrect outputs in safety-relevant contexts, or introduce new failure modes that engineering teams didn’t anticipate when deploying ML models in operational environments.

The three-category structure reflected a broader point the board’s work addressed: most existing critical infrastructure security frameworks were designed around conventional cyber threats to traditional IT and operational technology systems. AI deployment in critical infrastructure introduces threat vectors — particularly the second and third categories — that those frameworks weren’t designed to address. An industrial control system that incorporates a machine learning model for anomaly detection or predictive maintenance has an attack surface that includes the model’s training data and inference inputs, not just the network protocols and firmware that traditional OT security covers. The AI security tools addressing these AI-specific attack surfaces were still nascent when the board began its work in 2024.
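To make the expanded attack surface concrete, here is a minimal, entirely hypothetical sketch (not drawn from the board's guidelines or any real OT product) of an ML-based anomaly detector for a sensor feed. It illustrates the second and third risk categories: the detector verifies the integrity of its training data (a data-poisoning defense) and bounds-checks inference inputs (an adversarial-input defense), checks that network- and firmware-focused OT security would not perform.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used to pin the training data to a known-good snapshot."""
    return hashlib.sha256(data).hexdigest()

class AnomalyDetector:
    """Toy detector: flags sensor readings outside a learned normal range."""

    # Physically possible sensor range -- anything outside is rejected
    # before it reaches the model's decision logic at all.
    PHYSICAL_LOW, PHYSICAL_HIGH = -1000.0, 1000.0

    def __init__(self, training_data: list[float], training_data_digest: str):
        # Data-poisoning defense: refuse to train on data whose digest
        # doesn't match the one recorded when the data was collected.
        raw = ",".join(str(x) for x in training_data).encode()
        if sha256_of(raw) != training_data_digest:
            raise ValueError("training data integrity check failed")
        self.low, self.high = min(training_data), max(training_data)

    def check(self, reading: float) -> bool:
        # Adversarial-input defense: reject impossible readings outright.
        if not (self.PHYSICAL_LOW <= reading <= self.PHYSICAL_HIGH):
            raise ValueError("reading outside physical sensor bounds")
        return self.low <= reading <= self.high  # True = normal

# Digest recorded at data-collection time, verified at training time.
readings = [20.1, 20.5, 19.8, 21.0]
digest = sha256_of(",".join(str(x) for x in readings).encode())
detector = AnomalyDetector(readings, digest)
print(detector.check(20.3))  # within learned range -> True
```

The point of the sketch is that the model's training pipeline and inference inputs are part of the system's attack surface, so integrity and validation controls have to wrap the model itself, not just the network around it.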

What the Board Produced and Its Disbandment

The November 2024 Roles and Responsibilities Framework

The board’s most substantive output was the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” released by DHS on November 14, 2024. The framework addressed a specific gap: existing AI governance guidance tended to address model developers alone, but critical infrastructure AI deployments involve multiple parties across the supply chain. A power utility deploying an AI-driven grid optimization system might use a model developed by a third-party AI company, deployed on cloud infrastructure from a hyperscaler, on hardware the utility purchased from another vendor. The framework assigned distinct responsibilities to each layer.

AI developers bear responsibility for secure model design and for managing access to models and training data. Cloud and compute providers are responsible for the security of the infrastructure on which AI models run. Critical infrastructure owners and operators are responsible for securing the existing IT and OT environments into which AI systems are being integrated and for implementing data governance for the operational data those systems use. The framework organized these responsibilities across five implementation steps: securing environments, driving responsible model design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. DHS Secretary Mayorkas described the framework as “descriptive and not prescriptive”: the board designed it as a living document intended to evolve with AI technology and the threat landscape, rather than as a regulatory mandate. Adoption was voluntary by design, though Nextgov reported that energy and financial services participants were treating the framework as de facto baseline guidance despite its non-mandatory status. The big data security intelligence infrastructure underpinning AI deployments in critical infrastructure was the operational layer the framework’s data governance recommendations addressed most directly.
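The framework's layered assignment of responsibilities can be sketched as a simple lookup. This is an illustrative paraphrase only: the five step names follow the framework, but the mapping of steps to roles below is a simplification for demonstration, not the official text.

```python
# Illustrative sketch -- step names follow the November 2024 framework's
# five implementation steps; the role-to-step mapping is a simplified
# paraphrase of which supply-chain layer leads each activity.

RESPONSIBILITIES: dict[str, set[str]] = {
    "ai_developer": {"drive responsible model design"},
    "cloud_provider": {"secure environments"},
    "infrastructure_operator": {
        "secure environments",
        "implement data governance",
        "ensure safe and secure deployment",
        "monitor performance and impact",
    },
}

def parties_for(step: str) -> list[str]:
    """Return which supply-chain roles share responsibility for a step."""
    return sorted(role for role, steps in RESPONSIBILITIES.items()
                  if step in steps)

print(parties_for("secure environments"))
# ['cloud_provider', 'infrastructure_operator']
```

The structure makes the framework's central design choice visible: some steps, like securing environments, are shared across layers, while others sit with a single party, so no one entity can certify a deployment alone.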

Disbandment on January 20, 2025

On January 20, 2025, acting DHS Secretary Benjamine Huffman issued a notice terminating the memberships of all advisory committees within the Department of Homeland Security, effective immediately. The termination covered the AI Safety and Security Board alongside the Cyber Safety Review Board, the National Security Telecommunications Advisory Committee, the National Infrastructure Advisory Council, and the USSS Cyber Investigations Advisory Board. The stated rationale was alignment with “DHS’s commitment to eliminating the misuse of resources” and ensuring DHS activities prioritize national security. Three days later, on January 23, 2025, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which explicitly revoked Biden’s foundational EO 14110 and directed federal agencies to revise or rescind policies that conflicted with the new administration’s AI development priorities.

The Cyber Safety Review Board’s simultaneous disbandment drew significant attention because it ended an ongoing investigation into Salt Typhoon — the Chinese state-sponsored campaign that had breached nine U.S. telecommunications carriers. The AI Safety and Security Board had no active investigation in progress, but its termination ended the institutional structure through which the November 2024 framework was intended to be updated and expanded. As of April 2026, no successor body has been established, and the November 2024 framework remains the most recent federal advisory output on AI security for critical infrastructure. The AI cybersecurity market has continued developing private-sector AI security standards and frameworks in the absence of updated federal advisory guidance, with industry consortia and standards bodies filling part of the vacuum the board’s termination created.

Frequently Asked Questions

What was the AI Safety and Security Board?

The Artificial Intelligence Safety and Security Board was a 22-member public-private federal advisory committee established by the Department of Homeland Security in April 2024, per President Biden’s direction. Its mandate was to develop recommendations for the safe and secure deployment of AI across the 16 critical infrastructure sectors defined in federal infrastructure protection frameworks — energy, transportation, financial services, healthcare, IT, and others. It included tech CEOs (OpenAI, NVIDIA, Microsoft, Google, AWS, IBM), critical infrastructure operators, government officials, and civil society leaders. The board was disbanded on January 20, 2025, when the Trump administration terminated all DHS advisory committee memberships.

Who were the members of the AI Safety and Security Board?

The 22-member board included: Sam Altman (OpenAI), Jensen Huang (NVIDIA), Satya Nadella (Microsoft), Sundar Pichai (Alphabet), Adam Selipsky (AWS), Arvind Krishna (IBM), Chuck Robbins (Cisco), Shantanu Narayen (Adobe), Lisa Su (AMD), Kathy Warden (Northrop Grumman), Ed Bastian (Delta Air Lines), Vicki Hollub (Occidental Petroleum), Maryland Governor Wes Moore, Seattle Mayor Bruce Harrell, Fei-Fei Li (Stanford HAI), Alexandra Reeve Givens (Center for Democracy and Technology), Maya Wiley (Leadership Conference on Civil and Human Rights), and Rumman Chowdhury (Humane Intelligence).

What did the AI Safety and Security Board produce?

The board produced two major deliverables. First, cross-sector AI safety guidelines published in April 2024 organizing AI risks to critical infrastructure into three categories: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation. Second, the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” released November 14, 2024, which assigns specific security responsibilities to AI developers, cloud providers, and critical infrastructure operators across five implementation steps. Both documents are voluntary guidance, not mandatory regulation.

Why was the AI Safety and Security Board disbanded?

Acting DHS Secretary Benjamine Huffman terminated all DHS advisory committee memberships on January 20, 2025, citing the administration’s commitment to “eliminating the misuse of resources” and focusing DHS on national security priorities. The termination was part of a broader sweep that also ended the Cyber Safety Review Board and other DHS advisory bodies. Executive Order 14179, signed January 23, 2025, simultaneously revoked Biden’s Executive Order 14110 on safe and trustworthy AI, which had directed the creation of the board in 2023.

Does the AI Safety and Security Board still exist?

No. The board was disbanded on January 20, 2025. As of April 2026, no successor federal advisory body for AI security in critical infrastructure has been established. The November 2024 “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” remains the most recent federal advisory output on this topic, though it was designed as a living document intended for ongoing updates that the board’s disbandment has prevented.