Skip to content Skip to footer

Executive Order 14110: The Definitive Guide to the Future of AI Regulation

Executive Order 14110

On October 30, 2023, the White House issued a directive that would fundamentally alter the trajectory of technology in the United States and, arguably, the world. Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” represents the most comprehensive attempt by any government to regulate the exploding field of Generative AI and Large Language Models (LLMs).

As Artificial Intelligence transitions from a theoretical novelty to a core component of national infrastructure, the risks have scaled alongside the capabilities. From the potential for biological weapon engineering to the erosion of privacy and civil rights, the stakes are existential.

For federal agencies, defense contractors, and technology leaders, understanding what is Executive Order 14110 is no longer a matter of policy interest-it is a compliance necessity. This article provides an in-depth analysis of the order, its strategic pillars, and the operational roadmap for implementing secure AI in the public and private sectors.

The AI Awakening

The release of Executive Order 14110 of October 30, 2023, came at a pivotal moment. The rapid adoption of tools like ChatGPT and the integration of AI into critical infrastructure outpaced the existing regulatory frameworks. The Biden Administration recognized that a laissez-faire approach to “Dual-Use Foundation Models”-models powerful enough to be used for both immense good and catastrophic harm-was no longer tenable.

Unlike previous directives that focused on research, EO 14110 invokes the Defense Production Act, giving the federal government extraordinary powers to compel transparency from the private sector. It signals a shift from “voluntary guidelines” to “mandatory reporting” for the most powerful AI systems.

Executive Order 14110 Summary: The Eight Guiding Principles

To navigate this massive document, it is helpful to view it through its core philosophy. An Executive Order 14110 summary reveals eight guiding principles that dictate how the US government will approach AI:

  1. Safety and Security: AI systems must be proven safe before deployment. This includes testing for vulnerabilities and protection against misuse.

  2. Innovation and Competition: The US must remain the global leader in AI development by supporting small businesses and researchers, preventing market consolidation by a few tech giants.

  3. Worker Support: AI should augment human labor, not simply replace it. The order mandates studies on labor market displacement and worker rights.

  4. Equity and Civil Rights: AI algorithms must not perpetuate discrimination in housing, criminal justice, or healthcare.

  5. Consumer Protection: Americans must be protected from fraud, deception, and privacy violations exacerbated by AI.

  6. Privacy: The collection of vast datasets to train AI must not erode personal privacy rights.

  7. Federal Use: The federal government must lead by example, deploying AI effectively while managing its risks.

  8. Global Leadership: The US will work with international partners to establish global norms for AI safety.

These principles act as the “Constitution” for all subsequent agency actions, influencing everything from Department of Defense procurement to Department of Health and Human Services guidelines.

The Mandate for Safety: Red Teaming and “Dual-Use” Models

The most technically rigorous aspect of the order focuses on “Dual-Use Foundation Models.” These are defined as AI models trained on broad data that contain tens of billions of parameters and pose a serious risk to national security, national economic security, or public health.

The Reporting Requirement

Under the authority of the Defense Production Act, developers of these high-threshold models must now report to the federal government:

  • Training Runs: Notification before training a model of a certain size.
  • Physical Protection: How the model weights (the “brain” of the AI) are secured against theft.
  • Red Teaming Results: The order mandates “Adversarial Testing” or “Red Teaming.” Developers must actively try to break their own models-forcing them to generate malware, hate speech, or chemical weapon recipes-and report the results of these stress tests to the Department of Commerce.

This effectively ends the era of “black box” AI development for the largest tech companies. If a model has the potential to crash the financial system or design a pathogen, the government demands to see the safety data before it is released to the public.

The Intersection with Cybersecurity and Infrastructure

One cannot discuss secure AI without discussing the infrastructure it runs on. AI models are software, and like all software, they are vulnerable to traditional cyberattacks-exfiltration of model weights, data poisoning, and adversarial manipulation.

This is where EO 14110 dovetails with the mandates of Executive Order 14028, which focused on improving the nation’s cybersecurity. The logic is linear: You cannot have a “Trustworthy AI” running on an untrusted network. The rigorous standards for encryption, cloud security, and supply chain transparency established in previous orders are the prerequisites for compliance with EO 14110.

If an agency intends to deploy a generative AI tool to summarize classified intelligence, that tool must reside within an environment that adheres to Zero Trust principles. The AI is only as secure as the identity management and network segmentation systems protecting it.

NIST’s Role: The AI Risk Management Framework

Executive Order 14110 tasks the National Institute of Standards and Technology (NIST) with developing the “gold standard” for AI testing. NIST is responsible for creating the companion resources to the AI Risk Management Framework (AI RMF).

By mid-2024, NIST is expected to release strict guidelines on:

  • Red Teaming Standards: What exactly constitutes a “safe” model?
  • Secure Software Development for AI: How to prevent the injection of malicious data during the training phase.
  • Synthetic Content Detection: Standards for watermarking AI-generated content so the public can distinguish between a real presidential address and a deepfake.

For private sector companies, aligning with these NIST standards is crucial. While currently voluntary for smaller models, these standards will likely become contractual requirements for any vendor wishing to sell AI solutions to the US government.

Government Use of AI: The Chief AI Officer (CAIO)

A significant portion of the order is dedicated to modernizing the federal government itself. The administration recognizes that it cannot regulate a technology it does not use or understand.

The Office of Management and Budget (OMB) Guidance

The OMB is tasked with issuing guidance that dictates how agencies procure and use AI. A key requirement is the appointment of a Chief AI Officer (CAIO) within major federal agencies. This executive is responsible for coordinating AI use, ensuring compliance with civil rights laws, and managing the risk inventory.

Modernizing IT Infrastructure

To effectively use AI, agencies must modernize their IT stacks. Legacy systems cannot support the computational load of modern LLMs. Agencies are directed to prioritize access to high-performance cloud computing environments.

However, moving sensitive government data to the cloud to train AI models presents massive security challenges. Agencies require specialized architectural support to navigate this transition. Implementing robust TerraZone Solutions for State, Federal, and Defense Agencies ensures that as government bodies integrate these powerful AI capabilities, they do so within a secure, compliant framework that manages data flows and strictly enforces access controls.

Addressing the “Deepfake” Dilemma: Watermarking and Content Authentication

One of the most immediate threats addressed by what is Executive Order 14110 is the proliferation of synthetic media-Deepfakes. The order recognizes that the ability to generate photorealistic images and voice clones poses a threat to democratic processes and public trust.

The Department of Commerce is tasked with developing standards for Content Authentication and Watermarking.

  • Watermarking: Embedding invisible data into AI-generated content that identifies it as synthetic.
  • Provenance: Creating a digital chain of custody for official government communications, so a citizen can cryptographically verify that a video of the President actually came from the White House.

While the technology for robust, unremovable watermarking is still evolving, the order forces the industry to prioritize its development.

Privacy and Civil Rights: Preventing Algorithmic Discrimination

The order places a heavy emphasis on preventing “Algorithmic Discrimination.” There is a documented risk that AI models trained on historical data will replicate historical biases.

  • Housing and Lending: The order directs the Department of Housing and Urban Development (HUD) and the Consumer Financial Protection Bureau (CFPB) to ensure that AI algorithms used by landlords and banks do not violate the Fair Housing Act or Equal Credit Opportunity Act.

  • Criminal Justice: It mandates a review of AI tools used in sentencing, parole, and predictive policing to ensure they do not unfairly target protected groups.

  • Privacy-Enhancing Technologies (PETs): The order champions the research and adoption of PETs, such as Differential Privacy and Homomorphic Encryption, which allow data to be used for AI training without revealing the specific details of the individuals in the dataset.

The Impact on Cloud Providers: Know Your Customer (KYC)

A subtle but powerful clause in the order targets Infrastructure as a Service (IaaS) providers-the major cloud companies that rent out the massive GPU clusters required to train AI.

The order mandates a “Know Your Customer” (KYC) program for these providers. They must report to the Department of Commerce whenever a foreign person or entity transacts with them to train a large AI model.

  • The Goal: To prevent foreign adversaries (specifically China and Russia) from using American cloud infrastructure to train cyber-attack models or advanced weapons systems. This effectively weaponizes the cloud supply chain as a tool of national security.

The Talent Surge: National AI Research Resource (NAIRR)

Recognizing that regulation can stifle innovation if not balanced with support, the order pushes for the pilot of the National AI Research Resource (NAIRR). This initiative aims to democratize access to AI research tools. Currently, only massive tech companies have the compute power to train frontier models. NAIRR aims to provide students, researchers, and small businesses with subsidized access to data and computing power, ensuring that the next breakthrough comes from a diverse ecosystem, not just a monopoly.

Furthermore, the order directs agencies to streamline visa processing for immigrants with expertise in AI, acknowledging that the “War for Talent” is a critical front in global AI competition.

Conclusion: The Road Ahead

Executive Order 14110 of October 30, 2023, is a landmark document. It transforms the US government from a passive observer of the AI revolution into an active participant and referee. It acknowledges a fundamental truth: AI is not just another technology trend; it is a dual-use capability that will define the economic and military balance of power in the 21st century.

For organizations, the path forward involves three critical steps:

  1. Inventory: Understand where AI is currently being used within your operations.
  2. Risk Assessment: Apply the principles of the NIST AI RMF to these tools.
  3. Security Integration: Ensure that AI systems are not siloed but are integrated into the broader cybersecurity architecture mandated by previous executive orders.

As we move forward, the regulations stemming from this order will harden into law and contract requirements. Those who embrace these standards of safety, security, and trust today will be the leaders of the AI economy tomorrow. The era of “move fast and break things” is over; the era of “move fast and prove it’s safe” has begun.

Table: Executive Order 14110 at a Glance

Domain

Key Requirement

Responsible Body

Foundation Models

Report training runs, physical security, and red-teaming results for dual-use models.

Dept. of Commerce

Cybersecurity

Integrate AI systems into Zero Trust architectures and report AI-aided cyber threats.

DHS / CISA

Content Auth

Develop standards for watermarking and labeling AI-generated content.

NIST / Commerce

Civil Rights

Investigate and prevent algorithmic discrimination in housing and benefits.

DOJ / HUD

Cloud (IaaS)

Report foreign entities using US cloud compute for AI training.

Dept. of Commerce

Gov Procurement

Appoint Chief AI Officers (CAIOs) and modernize IT infrastructure.

OMB / Agencies

 

Welcome! Let's start the journey

AI Personal Consultant

Chat: AI Chat is not available - token for access to the API for text generation is not specified