A North Star for AI? The White House’s Ambitious AI Executive Order

2023 has been the year that artificial intelligence finally hit the mainstream, and it shows no sign of slowing down.[1] Last week, the White House published the most significant U.S. federal development on AI to date: a sweeping executive order titled “Safe, Secure, and Trustworthy Artificial Intelligence,” billed as “the most significant actions ever taken by any government to advance the field of AI safety.” Going forward, we can expect AI-related standards, tools, and tests to be developed by a range of agencies.

Like other recent White House AI initiatives,[2] the Order seeks to strike a balance between promoting AI’s tremendous potential and safeguarding against serious risks. While some have questioned whether the Order itself contains sufficient enforcement “teeth,” that may not be its goal. Instead, it may serve as a “North Star” for this swiftly evolving technology, with eight guiding policies and principles intended to serve as a roadmap for the industry and regulators alike. This landmark Order will require companies to evaluate both their own AI-related activities and those of their vendors. 

Most comprehensive AI initiative to date

Ambitious and comprehensive, the AI Executive Order takes an expansive view of AI, defining it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

That broad scope should include not only cutting-edge generative AI systems, such as ChatGPT, but also familiar technologies like (1) web search, including “autocomplete” functionality, (2) autocorrect tools, (3) content (including advertising and marketing) customization, personalization, and recommendations, (4) voice assistants, and (5) facial recognition.

The Order will impact a wide range of entities — from startups and small businesses to “dominant firms” and household names — across a broad spectrum of industries: from AI developers to data brokers, from healthcare to housing, and from critical infrastructure specifically to federal contractors generally.

AI North Star: guiding policies and principles

We examine four of the most noteworthy aspects of the Order for U.S. organizations. 

1. Safety, security, and risks
  • Rigorous standards: The U.S. Department of Commerce, through its National Institute of Standards and Technology (NIST), is charged with establishing “rigorous standards for extensive red-team testing to ensure safety before public release” of AI systems.

    The Order describes NIST’s Artificial Intelligence Risk Management Framework (AI RMF) and NIST’s Secure Software Development Framework as foundational resources and incorporates practices and principles set out in the AI RMF into other directives. The Department of Homeland Security will apply these NIST-created standards to critical infrastructure sectors and will establish an AI Safety and Security Board.
Practice Point: In light of the Order’s tacit support for NIST and the AI RMF, organizations should look to the AI RMF to help frame and address AI risks prior to, and in preparation for, the release of guidelines and best practices contemplated by the Order (see the illustrative sketch below).
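
For organizations starting that exercise, a lightweight first step can be an AI risk register organized around the AI RMF’s four core functions (Govern, Map, Measure, Manage). The sketch below is illustrative only: the function names come from the AI RMF itself, but the register structure, field names, and example entry are our own assumptions rather than anything prescribed by NIST or the Order.

```python
# Illustrative sketch: a minimal AI risk register organized around the
# four core functions of NIST's AI RMF (Govern, Map, Measure, Manage).
# The register structure and example entry are hypothetical; NIST does
# not prescribe this format.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "Govern"    # policies, roles, accountability
    MAP = "Map"          # context, intended use, affected parties
    MEASURE = "Measure"  # testing, metrics, red-team results
    MANAGE = "Manage"    # prioritization, mitigation, monitoring

@dataclass
class RiskEntry:
    system: str               # the AI system under review
    function: RmfFunction     # which AI RMF function the entry supports
    description: str          # the identified risk
    mitigation: str = "TBD"   # planned or implemented response

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, function: RmfFunction) -> list[RiskEntry]:
        return [e for e in self.entries if e.function == function]

# Example entry for a hypothetical system:
register = RiskRegister()
register.add(RiskEntry(
    system="resume-screening-model",
    function=RmfFunction.MEASURE,
    description="No pre-release red-team testing documented",
    mitigation="Schedule red-team exercise before next release",
))
print(len(register.by_function(RmfFunction.MEASURE)))  # -> 1
```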

  • Heightened concerns and reporting requirements re “dual-use foundation models”: The Order requires red-team testing of, and reporting to the federal government about, certain dual-use foundation models, which are defined as:

    “an AI model that: (i) is trained on broad data; (ii) generally uses self-supervision; (iii) contains at least tens of billions of parameters; (iv) is applicable across a wide range of contexts; and (v) exhibits high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety.” 

    This could include, for instance, forms of automated vulnerability discovery and exploitation that can be used against a wide range of potential targets. While there may be beneficial uses to such technologies (e.g., uncovering bugs and thwarting potential hacks), they may be dual-use because they also could be used by bad actors to enable hostile cyberattacks.

    The Order requires organizations developing such dual-use foundation models to provide ongoing reporting to the federal government, including with respect to:

    • Any ongoing or planned training, development, and production activities, including the physical and cybersecurity protections taken.

    • The ownership and possession of the “model weights” within the AI model.

    • The results of all “red-team” safety tests.

    Entities also will be required to provide the government with specific information about any large-scale computing clusters and certain categories of Infrastructure as a Service (IaaS) products made available to foreign individuals.

Practice Point: While these reporting requirements are clearly intended to target the largest, most prominent AI companies, they also may apply to other businesses with broad and large data sets. Similar reporting requirements will apply in connection with “large-scale computing clusters.” Analysis of whether a “dual-use foundation model” exists ideally should begin at the design phase (see the illustrative sketch below).
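
As a design-phase thought exercise only, the five definitional criteria quoted above can be captured in a simple screening check that flags candidate models for closer legal review. In the sketch below, the field names and the reading of “at least tens of billions of parameters” as a 20-billion threshold are our own assumptions; whether a model actually meets the Order’s definition is ultimately a legal question, not a programmatic one.

```python
# Illustrative design-phase screen against the Order's five definitional
# criteria for a "dual-use foundation model." Field names and the exact
# parameter threshold are hypothetical assumptions, not legal guidance.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    trained_on_broad_data: bool        # criterion (i)
    uses_self_supervision: bool        # criterion (ii)
    parameter_count: int               # criterion (iii)
    broadly_applicable: bool           # criterion (iv)
    high_risk_task_performance: bool   # criterion (v), per risk assessment

# "At least tens of billions of parameters" is read here as >= 20 billion;
# the Order does not fix an exact number, so treat a hit as a flag for
# legal review rather than a bright-line determination.
PARAM_THRESHOLD = 20_000_000_000

def may_be_dual_use(profile: ModelProfile) -> bool:
    """Return True if all five criteria appear met, flagging the model
    for closer legal and compliance review."""
    return (
        profile.trained_on_broad_data
        and profile.uses_self_supervision
        and profile.parameter_count >= PARAM_THRESHOLD
        and profile.broadly_applicable
        and profile.high_risk_task_performance
    )

# Example: a hypothetical 70-billion-parameter general-purpose model.
candidate = ModelProfile(True, True, 70_000_000_000, True, True)
print(may_be_dual_use(candidate))  # -> True: escalate for review
```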

  • Using funding requirements to mitigate risks of AI engineering of dangerous biological weapons: The Order directs all federal agencies that fund life-sciences research to require that would-be recipients of such federal funding comply with applicable standards established pursuant to the Order.
Practice Point: Using federal funding as a lever to police AI appears to be a trend, as we have seen in the proposed outbound foreign investment rules restricting AI funding. While, under the Order, such a condition to federal funding currently is limited to certain biosecurity-related activities, it seems reasonable to expect that other federal agencies will establish similar compliance requirements.

  • Detecting AI-generated content and “deepfakes”: The Order expresses concern about citizens’ ability to distinguish between AI-generated content, including so-called “deepfakes,” and official content (for example, with respect to the 2024 U.S. presidential election and other geopolitical events). The Department of Commerce will play a key role in developing guidance for content authentication and watermarking to clearly label AI-generated content, as well as for otherwise detecting, tracking the provenance of, auditing, and maintaining synthetic content.

Practice Point: There is potential danger in relying on labels or watermarking to identify AI-generated content and deepfakes, given that hostile states and other bad actors, as well as parties not subject to U.S. law, may not comply.

An alternative solution could be blockchain technology, with its ability to provide secure data sets and transparency. Rather than relying solely upon the labeling of synthetic content, blockchain technology could be used for authentication purposes: designating official or “approved” content and verifying the provenance of such data (a simplified sketch of the pattern follows).
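
To make that authentication idea concrete, the pattern at its simplest is: publish a cryptographic hash of official content to a tamper-evident registry at release time, then verify later copies against it. The sketch below is a minimal illustration under stated assumptions; a plain dictionary stands in for the on-chain registry, and all names are hypothetical. An actual deployment would write the hashes to a blockchain or other append-only ledger.

```python
# Minimal sketch of hash-based content authentication. A dict stands in
# for a tamper-evident registry (e.g., a blockchain); in practice the
# hash would be written to an append-only ledger at publication time.
import hashlib

registry: dict[str, str] = {}  # content_id -> SHA-256 hex digest

def register_official_content(content_id: str, content: bytes) -> None:
    """Record the hash of official or "approved" content at publication."""
    registry[content_id] = hashlib.sha256(content).hexdigest()

def verify(content_id: str, content: bytes) -> bool:
    """Check a later copy against the registered hash. A mismatch means
    the content was altered or was never officially published."""
    expected = registry.get(content_id)
    return expected == hashlib.sha256(content).hexdigest()

# Example: an official statement versus a doctored copy.
register_official_content("statement-001", b"Polls open at 7 a.m.")
print(verify("statement-001", b"Polls open at 7 a.m."))   # True
print(verify("statement-001", b"Polls open at 10 a.m."))  # False
```

Note that, unlike watermarking, which depends on creators labeling their synthetic output, this approach requires only that the official source register its own content, which is why it is less dependent on adversaries’ compliance.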

2. Promoting innovation and competition while supporting workers and creators 

AI may just be the next “race to the moon,” with multiple jurisdictions vying to build expertise and establish dominance. The Order contains a variety of innovation-, workforce-, and competition-focused provisions, including directives to federal agencies:

  • Attracting, retaining, and developing talent: The State Department and the Department of Homeland Security are directed to update programs and policies to better attract and support those highly skilled in AI and other critical and emerging technologies, in addition to streamlining visa petition and application processing times for non-U.S. applicants.
  • AI education and workforce development: The National Science Foundation is directed to prioritize the allocation of resources to support AI-related education and workforce development.
  • Protecting workers: The Department of Labor is directed to develop principles and best practices to ensure that AI deployed in the workplace “advances employees’ well-being,” including addressing the implications for workers of employers’ AI-related collection and use of data about them.
  • Promoting competition: The Department of Commerce is directed to prioritize the allocation and availability of resources to startups and small businesses, particularly in the semiconductor industry. The Federal Trade Commission — which recently has brought competition claims against several leading AI companies — is charged with ensuring fair competition in the AI marketplace.
  • Clarifying intellectual property issues: The Department of Commerce is directed to issue guidance concerning patent eligibility and how U.S. Patent and Trademark Office examiners should analyze AI-related inventorship issues. It also is directed to work with the U.S. Copyright Office (which has been actively involved in this area) to issue joint recommendations to the President regarding potential executive actions relating to copyright and AI, including (1) the scope of protection for works produced using AI and (2) the treatment of copyrighted works in AI training.
Practice Point: Companies need to be mindful of potential impacts not only on the public, but also on their employees, when implementing AI systems. Intellectual property implications also are complex and evolving, and specialist advice should be sought.

3. Data privacy

In his release announcing the Order, President Biden expressly “calls on Congress to pass bipartisan data privacy legislation.” Data privacy is an important focus of the Order, and we expect to see significant federal legislative developments.

The Order also provides multiple directives to the National Science Foundation to advance the research, development, and implementation of “privacy-enhancing technologies,” and it directs the Office of Management and Budget to evaluate how federal agencies collect and use “commercially available information” (e.g., from data brokers or other vendors, particularly where it contains personally identifiable information) and to strengthen privacy guidance to better mitigate privacy risks exacerbated by AI.
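
“Privacy-enhancing technologies” covers a family of techniques; one widely cited example is differential privacy, which adds calibrated noise to aggregate statistics so that no individual record can be confidently inferred from the output. The sketch below illustrates only that core mechanism, not any agency guidance; the epsilon value, dataset, and function names are our own hypothetical choices.

```python
# Minimal illustration of one privacy-enhancing technique: adding
# Laplace noise to an aggregate count, the core mechanism of
# differential privacy. Epsilon and the data here are hypothetical.
import random

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Return a noisy count of True values. A count query has
    sensitivity 1, so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy; smaller epsilon means more noise."""
    true_count = sum(values)
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Example: how many (hypothetical) records opted in to data sharing?
opted_in = [True, False, True, True, False, True]
print(round(dp_count(opted_in, epsilon=0.5), 2))  # noisy version of 4
```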

The Order requires agencies to issue guidance to mitigate specific risks of discrimination resulting from the use of AI and builds upon related requirements, such as the profiling provisions that exist under certain state and sector-specific privacy laws. For example, the Order encourages the Federal Communications Commission to use its authority under the Telephone Consumer Protection Act (TCPA) to combat AI-facilitated unwanted calls and texts.

Practice Point: If a company provides, or contemplates providing, any commercially available information to a federal agency, it will be essential for that company to ensure the sufficiency of its data protection program.

4. Algorithmic discrimination and bias

The Order highlights activities and sectors, such as hiring, education, healthcare, housing and other real-estate transactions, credit, and consumer financial markets, where the use of AI could lead to and deepen discrimination, bias, and other abuses.

Practice Point: Companies in all sectors must ensure that their AI systems, algorithms, and automated decision-making technologies do not contribute to discrimination, bias, or other abuse, in particular with respect to the provision of (or failure to provide) access to content related to the highlighted sectors. It also is important to identify bias in the underlying training data sets used to build AI models (one simple screening technique is sketched below).
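
One simple screen for the kind of bias noted above is to compare positive-outcome rates across demographic groups in training data or model outputs. The sketch below illustrates that disparate-impact style check; the 0.8 threshold echoes the familiar “four-fifths rule” heuristic from U.S. employment guidance, and the data, group labels, and function names are hypothetical.

```python
# Illustrative disparate-impact screen: compare positive-outcome rates
# across groups. The 0.8 threshold echoes the "four-fifths rule"
# heuristic; the records and group labels here are hypothetical.
from collections import defaultdict

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records holds (group, positive_outcome) pairs; returns the
    positive-outcome rate per group."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome  # bool counts as 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def flags_disparate_impact(records: list[tuple[str, bool]],
                           threshold: float = 0.8) -> bool:
    """True if any group's rate falls below threshold times the
    highest group's rate, signaling a need for closer review."""
    rates = selection_rates(records)
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

# Example with hypothetical screening outcomes:
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
print(selection_rates(data))         # {'A': 0.67, 'B': 0.33} (approx.)
print(flags_disparate_impact(data))  # True -> investigate further
```

A flag from a screen like this is a prompt for investigation, not a legal conclusion; context, sample size, and the applicable legal standard all matter.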

Looking ahead

Our mid-2023 overview on the U.S. legal and regulatory landscape for AI predicted that the second half of 2023 would experience substantial AI-related legislative and regulatory developments. The Order represents just such a development, and we expect that it will spur further government action. 

The Order takes a notably different approach to AI regulation compared to the EU AI Act, currently under discussion in the EU’s “trilogue” process. While the EU AI Act encompasses a wide spectrum of AI system sales and usage in the EU, the Order strives for a balanced approach, encouraging AI adoption while mitigating associated risks. The Order avoids the more prescriptive approach of the EU AI Act, which employs a risk categorization framework with specific requirements, including comprehensive risk management, data governance, accuracy standards, human oversight mandates, and monitoring procedures. Given the global implications of AI usage, companies must proactively monitor and adhere to evolving AI-related rules, both domestically and internationally. 

We have extensive experience working with multinationals on their AI, digital and automation strategies and are available to assist if you’d like to discuss creating a streamlined compliance program to manage the multijurisdictional patchwork of evolving AI obligations.

For more on the AI Executive Order and the current regulatory environment listen to our latest tech podcast episode: Untangling the Spiderweb: Biden’s Executive Order on AI, National Security, and Digital Assets.

[1]    Throughout 2023, we’ve been reporting on developments in the legal and regulatory landscape for AI and generative AI.

[2]    The Order incorporates and expands on elements of the White House’s October 2022 “Blueprint for an AI Bill of Rights” and May 2023 “Action Plan to Promote Responsible AI Innovation,” in addition to voluntary commitments, secured from over a dozen leading technology companies in July and September 2023, to drive safe, secure, and trustworthy AI development.