The Biden Administration recently issued its clearest articulation yet of how federal agencies can use AI—and no surprise: There are big implications for companies delivering AI solutions to federal government customers.
On March 28, the Office of Management and Budget issued memorandum M-24-10 (“OMB AI Memo”) mandating responsible AI development and deployment across federal agencies, with a focus on understanding and tackling large societal challenges, such as food insecurity, climate change, and threats to democracy. The OMB AI Memo gives effect to several provisions in Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from October 30, 2023 (“AI EO”), especially Section 10 Advancing Federal Government Use of AI.
Many sections of the OMB AI Memo mandate requirements that apply only to agencies, such as designating a Chief AI Officer (“CAIO”). However, companies delivering or intending to deliver AI-based solutions to federal agencies can learn about—and begin preparing for—requirements that will eventually apply to them.
Our prior article outlined the OMB’s December draft of the memorandum, but the final version issued Thursday puts a stronger focus on using AI for the public good and includes a new section mandating agencies share their AI code, models, and data in a manner that facilitates re-use and collaboration, including with the public.
In the final OMB AI Memo, agencies must often provide a convenient mechanism for individuals to opt out of AI functionality in favor of a human alternative—so long as it doesn’t impose discriminatory burdens on access to a government service.
Another addition urges agencies to consider carbon emissions and resource consumption from data centers that support power-intensive AI.
The following summary of key provisions includes expected requirements and suggestions for taking immediate action.
Pay attention to the AI use case. The OMB AI Memo focuses on minimum practices to help manage the risks associated with “rights-impacting AI” and “safety-impacting AI,” for which OMB has issued new definitions.
“Rights-impacting AI” refers to AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on any of that individual’s or entity’s:
“Safety-impacting AI” refers to AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of any of the following:
Some use cases are automatically presumed to fall within these definitions; they are listed in Appendix I of the OMB AI Memo.
Prepare to make your case. Agencies are required to assess whether the AI meets the definition of safety-impacting AI or rights-impacting AI. For AI that is automatically presumed to be safety- or rights-impacting, the OMB AI Memo permits the agency’s CAIO to determine that the AI does not match either definition and is therefore not subject to the minimum practices. Such a determination can also be reversed, however.
Anticipating those assessments and possible determinations, companies can immediately start working on a range of strategies related to product documentation, descriptions, and acceptable use policies. Agencies are expected to release information about AI use case determinations and waivers no later than Dec. 1, 2024, and companies should monitor for these disclosures.
Prepare to comply with the minimum practices. The OMB AI Memo outlines the minimum practices that agencies must undertake when using applications or components that are deemed safety-impacting AI and rights-impacting AI. Most agencies will be required to adhere to these minimum practices by Dec. 1, 2024. At that time, agencies must stop using any non-compliant AI in their operations and, going forward, must ensure these practices are in place prior to using AI. There are provisions for limited exemptions, exclusions, and waivers.
These minimum practices involve:
For rights-impacting AI, the following are additional minimum practices:
Facilitate agency assessment of the AI solution. Based on the OMB AI Memo’s guidance to agencies for procuring AI solutions, here are some tips for preparing for evaluations:
Plan for interoperability and multi-cloud. Consistent with federal procurement policy and law, the OMB AI Memo supports promoting competition and avoiding vendor lock-in by encouraging agencies to promote interoperability, such as requiring an AI solution to work across multiple cloud environments.
Expect to grant data use rights. The AI EO and the OMB AI Memo emphasize that data is a critical asset for federal agencies, and the OMB Memo tasks agencies with ensuring contracts for AI:
The final OMB AI Memo adds an emphasis on agencies sharing and collaborating on use of AI. For example:
Don’t forget about other applicable requirements. Over and above the OMB AI Memo requirements, federal agencies are likely to be early adopters of the requirements set forth in the AI EO, such as those related to dual-use foundation models and synthetic content.
Despite the OMB AI Memo calling for harmonization across the agencies and the use of standardized templates and forms, expect agency-specific requirements, especially from those with defense and national security missions. (Use of AI on a national security system is specifically excluded from the OMB AI Memo.)
The OMB AI Memo reminds agencies to procure AI solutions consistently with the Constitution and applicable laws, regulations, and policies, including those addressing privacy, confidentiality, intellectual property, cybersecurity, human and civil rights, and civil liberties.
Companies should plan ahead by mapping existing legal requirements to their AI solution while monitoring for additional requirements, especially those for generative AI and dual-use foundation models.
Also published by Government Contracting Law Report.