The White House on Tuesday announced a set of 10 regulatory principles to guide federal agencies as they develop regulations governing the development and use of artificial intelligence (AI) in the private sector. The principles “focus on ensuring public engagement, limiting regulatory overreach, and promoting trustworthy AI,” said U.S. Chief Technology Officer Michael Kratsios in a tweet.
The White House Office of Science and Technology Policy released what Kratsios described as the “first of its kind” set of principles federal agencies are to be guided by as they develop regulations on AI.
How we got here
The principles are an outgrowth of President Trump’s Executive Order 13859, issued in February 2019, in which he emphasized the importance of U.S. leadership in AI to maintain economic and national security.
While AI applications such as facial recognition have proved beneficial to law enforcement, there are concerns about their ethical use and about data privacy. In an op-ed published the same day the principles were announced, Kratsios maintained that choosing between embracing AI and following one’s moral compass is a “false choice” because both can be accomplished.
10 AI Stewardship Principles
A draft memorandum from Office of Management and Budget (OMB) Acting Director Russell Vought sets out the 10 principles agencies are to consider when developing regulations or nonregulatory approaches (such as guidance, pilot programs, or voluntary consensus standards) to AI.
1. Public trust in AI
Because AI applications can pose risks to civil liberties, privacy, and autonomy, and because the acceptance of AI will “depend significantly on public trust,” the OMB acknowledged in its draft memo, the government’s approaches to AI should promote “reliable, robust, and trustworthy” applications.
2. Public participation
Under the approach outlined by the OMB, agencies are encouraged to “provide ample opportunities” for public input on AI initiatives, especially when an application uses information about individuals.
3. Scientific integrity and information quality
The White House identifies best practices for fostering scientific integrity and information quality in AI, including “transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses” of an AI application’s results. The OMB also reminds agencies that the data used by AI applications must be of “sufficient quality” for those applications to produce “predictable, reliable, and optimized outcomes.”
4. Risk assessment and management
Under the White House’s suggested approach, agencies are not required to mitigate “every foreseeable risk” posed by AI applications; rather, they should weigh the degree and nature of those risks to avoid “unnecessarily precautionary approaches” to regulation that could “unjustifiably inhibit” innovation, according to the OMB.
5. Benefits and costs
The White House acknowledges issues of liability for a decision made by AI might be a bit unclear under existing law. To that end, agencies should consider “full societal costs, benefits, and distributional effects” before considering regulations on the development and deployment of AI applications.
6. Flexibility
Agencies should focus on performance-based and flexible approaches to AI applications rather than on “rigid, design-based regulations” that will likely be “impractical and ineffective” given the rapid evolution of AI, the OMB wrote.
7. Fairness and nondiscrimination
Agencies should be mindful that biases are sometimes introduced into AI applications. To that end, the OMB maintained, they should consider issues of fairness and nondiscrimination in the decisions AI produces.
8. Disclosure and transparency
To encourage public trust and confidence in AI applications, agencies might disclose when AI is in use, although what exactly constitutes appropriate disclosure is “context-specific,” according to the OMB.
9. Safety and security
Agencies should consider ways to prevent “bad actors” from exploiting weaknesses in an AI system, the OMB wrote. Agencies also should be aware of possible malicious deployment and use of AI applications.
10. Interagency coordination
The OMB encourages agencies to work together to ensure AI policies “advance American innovation and growth” while protecting “privacy, civil liberties, and American values.”
The draft OMB memo clarifies that the principles apply to so-called “weak” or “narrow” AI rather than to “strong” or “general” AI “that may exhibit sentience or consciousness.” Narrow AI performs specific tasks by extracting information from data sets or other sources of information.
The public will have 60 days to comment on the draft guidance.
Lori Tripoli is a writer based in the greater New York City area who focuses on legal and regulatory issues.