Intro


At the end of October 2023, President Biden’s administration released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Executive Order or EO). This expansive order speaks directly to issues of responsible AI use and will serve as a cornerstone of future AI governance in the US and globally.

Given the sweeping scope of its recommendations and policy principles, the EO will no doubt affect organizations across all sectors of society and the economy, from experienced AI developers and customers to first-time users of AI.

The Executive Order’s definition of AI systems is broad: rather than limiting its concerns to newer generative AI or systems built on neural networks and large language models, the EO encompasses a wide variety of systems developed over the past several years.

“The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3):  a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

Determining the extent to which the EO affects any given organization will involve careful assessment of not only an entity’s own use of AI but also the extent to which its products and services incorporate or rely on third-party vendors’ AI-enabled capabilities. This guide summarizes the key points of the EO, but the order should be read in its entirety before implementing any of its recommendations.

Eight Guiding Principles

The EO starts by outlining its fundamental values:

1. Artificial Intelligence must be safe and secure. Ensuring that AI remains both safe and secure requires a multi-pronged approach that includes robust testing, careful monitoring, and strong safeguards against nefarious actors. This principle is the most fundamental of the eight and guides the entire EO.

2. Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges. The United States has the opportunity to lead the way in AI, but it will require a concerted effort from all stakeholders, including industry leaders, policymakers, and researchers. With responsible innovation, we can ensure that AI is developed ethically and benefits society as a whole, while healthy competition and collaboration will drive progress.

3. The responsible development and use of AI require a commitment to supporting American workers. As AI continues to become more ubiquitous, it is essential to make sure that the workforce is not displaced and left without support. Therefore, responsible development and use of AI must go hand in hand with a commitment to supporting American workers. This can be achieved by investing in education and training programs, providing reskilling opportunities, and ensuring job security and benefits.

4. Artificial Intelligence policies must be consistent with the administration’s dedication to advancing equity and civil rights. The use of AI in decision-making can have far-reaching consequences, which could unintentionally reinforce societal biases. Therefore, it is critical that we take a proactive approach to ensuring that the development and deployment of AI systems are guided by principles that ensure fairness, accountability, and transparency. Our commitment to equity and civil rights must be reflected in all aspects of governance, including technology policy.

5. The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected. With AI-enabled products becoming more prevalent, it is important to understand how they function and the potential impact they can have on our privacy, security, and overall well-being. From smart speakers to self-driving cars, these products have the ability to collect and process vast amounts of personal data. It is essential that safeguards are put in place to protect consumers and their privacy while ensuring these products operate in an ethical and beneficial manner.

6. Americans’ privacy and civil liberties must be protected as AI continues advancing. With data collection and surveillance consistently increasing in today's society, it's crucial that we establish and enforce strict regulations to prevent any violations of individual rights. While AI has the potential to improve our lives in countless ways, we must ensure that it's not being used to discriminate against certain groups or compromise our personal information without consent.

7. It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support the responsible use of AI to deliver better results for Americans. As we continue to rely more and more on artificial intelligence in our daily lives, it is increasingly important that we manage the risks associated with its use. This is especially true for the Federal Government, which has an obligation to regulate, govern, and support the responsible use of AI to ensure that it delivers better results for all Americans.

8. The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change. Throughout history, the United States has been a leader in pushing the boundaries of societal, economic, and technological advancement. In order to continue this legacy and ensure a prosperous future for generations to come, the federal government must take charge in leading the way toward global progress. By tapping into the innovative spirit that has defined our country, we can work towards shifting the landscape on a global scale, from transforming traditional industries with new technologies to driving forward a sustainable and equitable global economy.

Safety and Security

- The National Institute of Standards and Technology (NIST) is directed to create guidelines and best practices for safe, secure, and trustworthy AI systems, as well as standards and procedures for AI red-teaming tests.

- The Department of Energy (DOE) is to devise a plan for developing AI model evaluation tools and testbeds pertaining to nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy security threats.

- Invoking the Defense Production Act, the EO imposes mandates on companies developing or intending to develop dual-use foundation models, requiring them to report activities and certain information to the federal government.

- The Commerce Department shall solicit input from the private sector, academia, and other stakeholders on the potential risks of widely available model weights (e.g., open-source large language models).

- The Secretary of Commerce will issue regulations requiring Infrastructure as a Service (IaaS) providers to report activities by foreign persons training large AI models with potential capabilities for use in malicious cyber-enabled activity, and prohibiting them from allowing foreign accounts to resell their services.

- Heads of agencies supervising critical infrastructure must assess risks related to the use of AI, and the Treasury Department must issue best practices for financial institutions managing AI-specific cybersecurity risks.

Federal Procurement and Use of AI Systems

- The Office of Management and Budget (OMB) is required to issue guidance to agencies within 150 days on the effective use of AI in federal operations.  

- This guidance will include the designation of a Chief Artificial Intelligence Officer at each agency, the creation of an internal Artificial Intelligence Governance Board, and risk management practices for AI uses that affect people's rights or safety.

- OMB must also establish mechanisms to track agencies' compliance with its guidance on AI technologies, including a yearly catalog of agency AI use cases.

- Federal agencies are discouraged from banning generative AI outright; instead, it should be used for experimentation and low-risk tasks with appropriate safeguards in place.

- OMB and the Office of Science and Technology Policy are tasked with determining priority mission areas for increased government AI talent, as well as convening an AI and Technology Talent Task Force to accelerate the hiring of AI talent across the federal government.

- In addition, the Office of Personnel Management is authorized to consider various hiring tools for these AI professionals, such as direct-hire authority, pay flexibilities, personnel vetting requirements, and incentive pay programs.

- Agencies must implement or increase training programs to educate current workers on AI issues.

Reducing Risks of Generated Content

- The US Department of Commerce is directed to complete a 240-day study of existing tools and methods to detect AI-generated content and trace its provenance.

- Following the completion of this study, the Office of Management and Budget (OMB) is tasked with issuing guidance on labeling and authenticating official US government content.

- The US Patent and Trademark Office (PTO) is directed to provide guidance to patent examiners on issues related to inventorship and the use of AI.

- Additionally, the US Copyright Office is ordered to perform a 270-day study on copyright issues raised by AI technology, including the scope of protection and the treatment of copyrighted works in AI training.

- Lastly, the Department of Homeland Security (DHS) is required to develop a training, analysis, and evaluation program to address intellectual property crimes involving AI technologies.

Promoting Innovation and Competition

- The US is strengthening processes for noncitizens to study, conduct research, or work in critical and emerging technologies such as AI, in part to address a shortage of qualified US workers.

- The National Science Foundation has been tasked with creating a National AI Research Resource and four new National AI Research Institutes.

- Federal agencies are also directed to: expand training programs for AI scientists; use AI to combat climate change and improve electric grid infrastructure; prioritize programs that support the responsible development and use of AI in clinical care, real-world programs, population health, and public health; use AI to improve veterans' healthcare; study the potential for AI to tackle societal challenges; ensure consumers are protected from harms arising from the use of AI; and promote competition in the semiconductor sector, which is crucial for powering AI technologies.

- The Small Business Administration is allocating funding and ensuring grant programs are available to small businesses pursuing AI-related initiatives.

Protecting Workers and Civil Rights

- The Council of Economic Advisers is studying the labor-market effects of AI, and the Secretary of Labor is assessing potential AI-related displacements within the federal workforce.

- The National Science Foundation (NSF) is focusing on AI-related workforce development through its existing programs.

- Civil rights and equity are at the forefront of the Executive Order, which requires the Attorney General to deliver a study on the use of AI in criminal justice within one year, creates an interagency working group to promote the hiring and training of law enforcement AI professionals, and establishes protocols to ensure access to benefits, notice to recipients, appeals, analysis of outcomes, and the prevention of discrimination and bias.

- The Department of Labor is directed to publish guidance for federal contractors regarding nondiscrimination in hiring involving AI.

- The Federal Housing Finance Agency and the Consumer Financial Protection Bureau (CFPB) are focusing on underwriting and appraisals to prevent bias in housing and consumer financial markets.

- The Department of Housing and Urban Development (HUD) and the CFPB are examining the use of AI in the rental housing market, including tenant screening and the advertising of housing and credit.

- The Architectural and Transportation Barriers Compliance Board plans to ensure that people with disabilities are not subject to unequal treatment by AI systems that use biometric data.

Consumer Protection and Privacy

- The Executive Order (EO) directs various agencies to protect American consumers from fraud and considers the responsibility of regulated entities to monitor third-party AI services.

- The Department of Health and Human Services (HHS) is tasked with establishing a task force to develop a strategic plan on the use of AI in health care and with issuing a strategy for determining whether AI technologies maintain appropriate levels of quality. HHS will also take actions to ensure that healthcare providers comply with nondiscrimination requirements when utilizing AI technology, and will issue policies addressing clinical errors involving AI and the use of AI in drug development processes.

- The EO also takes steps to protect commercially available information held by federal agencies, directs the Department of Justice (DOJ) to launch regulatory proceedings for privacy impact assessments, and directs the Department of Commerce and NIST to promote the use of privacy-enhancing technologies (PETs).

- The DOE and NSF are required to create a Research Coordination Network dedicated to PET research and to encourage the incorporation of PETs into new technologies.

No matter your organization’s area of specialty, this EO is of tremendous relevance. It defines AI broadly and calls on many different departments, agencies, and institutions. Its impact will be felt throughout all economic sectors. As Congress continues to study the legal and policy implications raised by AI, this EO will provide the necessary foundation.

However, the actions summarized here and detailed in the EO are strictly the purview of the executive branch, so the EO concentrates its mandates on programs administered by federal agencies, requirements for AI systems procured by the federal government, mandates related to national security and critical infrastructure, and the launch of potential rulemakings that would govern regulated entities. This EO, like all executive orders, cannot create new laws or regulations on its own, but it can trigger the beginning of such processes.

What This Means for AI Governance

This executive order will shape the conversation around AI, risk, trust, ethics, and governance over the coming months and years, in the United States and globally. The policies, expert guidance, and risk frameworks that stem from it will serve as a foundation for how professionals consume AI technology within their organizations and deploy it in their products. AI will become central to everything we do, personally and professionally. To ensure that AI has a positive impact on society, we need to continue to facilitate the implementation of standards, controls, and governance around AI.

How Can Fairo Help?

Our mission at Fairo is to provide a platform that enables organizations to work within this changing ecosystem: a central point of control for all things AI that keeps up with technology and regulations evolving at a disruptive pace. We invest our time and money in understanding technology, regulations, standards, research, and expert guidance as they evolve in real time. Our goal is to let organizations balance the need to innovate with the responsibility to do so ethically, safely, and in a risk-controlled manner. Our platform allows your teams to operationalize governance, standards, simplicity, and consistency across your organization so that you and your customers can consume AI successfully, bringing trust, confidence, and increased ROI to your AI strategy.