A summary of the US’s new AI regulation: Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

On October 30, 2023, the President of the US, Joe Biden, issued an Executive Order on AI. More precisely, its title is the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”.[1]

Why does it matter? Because until now, the world’s leading country in AI development has lacked a binding legislative approach. There were some earlier achievements on the governmental side: the creation of a national AI strategy; the AI Risk Management Framework issued by the National Institute of Standards and Technology (NIST); and, most importantly, the Blueprint for an AI Bill of Rights, which collected best practices and emphasized the importance of AI. But the Executive Order is the first document that envisions a comprehensive legislative framework for AI. This is a huge step, considering that earlier even the Blueprint, a mere compilation of best practices, stirred the market, with several stakeholders arguing that any kind of AI regulation harms innovation and that development should flow freely. Obviously, the landscape has changed for the government. As Biden put it: “One thing is clear, to realize the promise of AI and avoid the risk we need to govern this technology,” he said. “And there’s no other way around it — in my view, it must be governed.”[2]

Structure of the Executive Order

But what exactly is the Executive Order? It has three main parts. First, it contains many definitions, including a definition of AI itself. While definitions may sound like the boring part of a legal text, here they are crucial. Surprisingly, although everyone talks about AI, we still do not have a widely accepted definition of what exactly it is. Some people may regard a given IT solution as an example of AI, while others do not. The European Union’s proposed draft regulation, the AI Act, also provides a definition; while the European version sets out a long list of examples, the US approach operates with a general concept. Although the two versions are similar, it is not out of the question that a given software solution will qualify as AI under one regulation but not under the other. This can lead to tricky compliance situations for software products aiming at both markets.

Principles

Secondly, the Executive Order sets out eight principles. Their importance lies in the conceptual nature of the executive order: we can expect later US legislation to follow the principles outlined here. The principles are: 1) AI must be safe and secure; 2) innovation must be promoted; 3) American workers must be protected; 4) equity and civil rights must be advanced; 5) consumers must be protected against fraud and bias; 6) privacy and civil liberties must be protected; 7) the Federal Government should oversee its own use of AI; 8) and finally, international cooperation on AI safety should be achieved, led by the US.

Instructing governmental agencies to work out the details

After these preliminaries comes the most important part. The Executive Order does not itself create legal obligations; rather, it orders numerous (almost too many to count) US governmental agencies, institutions, legislative bodies, and other actors to consult with each other (and sometimes with non-governmental stakeholders) about the desired legislative approach and to come up with their own binding regulations in their specific areas of expertise. They face tight deadlines: each is required to deliver the final legislative wording within 270, 180, or, in some cases, as few as 90 days. We are talking about dozens upon dozens of pages imposing legislative duties on different state agencies and bodies. It is impossible to mention everything, so here I try to capture some of the most significant aspects.

Promotion of innovation

Promotion of innovation will be achieved by attracting talented individuals to the US; strengthening cooperation between the public and private sectors; and, finally, supporting competition by protecting smaller players (preventing dominant firms from disadvantaging competitors and working to provide new opportunities for small businesses and entrepreneurs).

Risk management

Strong AI risk management: a) standards, guidelines, and best practices for AI safety and security will be developed; b) NIST will create standards for thorough red-team testing of new products before they are released to the public, in order to ensure safety; c) trade secrets relating to AI will no longer enjoy full protection from the state: developers of the most capable AI systems shall share their safety test results and other critical information with the U.S. government, and developers of the riskiest models shall even notify the government when training a new model; d) there will be extra safety measures for AI systems that can affect US critical infrastructure, and to prevent the misuse of AI to produce biological, chemical, or nuclear weapons.

A separate subsection is devoted to deepfakes, recognizing the threat they pose to democratic societies. Standards and guidance will be issued on how to detect AI-generated content and how to authenticate original or official content. Content created by AI should be clearly marked as such, for example with watermarking. Safeguards shall be built into AI models to prevent them from being used to create abusive content, especially child abuse material or fake sexually explicit content.

Employment, civil rights, consumer protection

The Executive Order stresses that the interests of American workers shall not be harmed. Within 180 days, a comprehensive report on the labor-market effects of AI shall be handed over to the president {author of the article: it is interesting that a report on this area has not been produced until now}, and the Secretary of Labor shall consult labor unions and workers about the pressing labor implications. AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, or lessen market competition. Education in AI-relevant skills shall also be supported and promoted.

Of course, equity and civil rights shall be protected, and irresponsible uses of AI that lead to discrimination or bias in the criminal justice system, state benefits, and housing shall be prevented. Consumer protection plays a major role: the government shall prevent consumers from being misled or harmed. Unsafe healthcare practices involving AI shall be reported, and such cases shall be remedied {author of the article: the question arises here: if the harm to the patient’s health has already been done, how will they remedy it…}. Even schools are not going to be the same in the near future: educators shall be supported by deploying AI-enabled educational tools, such as personalized tutoring in schools.

Privacy

And here we arrive at privacy, the topic with which federal legislation has struggled for so long. It looks like the widespread use of AI has brought the turning point. The summary of the Executive Order provided by the White House admits that “without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.” The measures are: 1) promoting the development and use of so-called PETs, which here does not refer to animals but stands for “privacy-preserving techniques”: any software or hardware solution for mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality. The text lists a set of different methods but pays specific attention to cryptography, which should be further developed. 2) Privacy impact assessments of the models shall become widespread, and it should be examined how these assessments can be made more accurate. This part of the Order seems to be in line with the European GDPR. 3) Even governmental agencies shall examine their own activities from a privacy perspective.

Concurrently with the issuance of the Executive Order, Biden called on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids. This raises some questions. In July 2022, the American Data Privacy and Protection Act (ADPPA) was introduced. It was the first federal-level privacy regulation to progress relatively far through the lawmaking machinery, and it still has a good chance of materializing as actual law; it could, in effect, become the American GDPR. Does Biden’s call mean that he wants to speed up the adoption of the ADPPA? Or does it rather mean that an entirely new text will be unveiled? I am excited to see which scenario plays out.

Intellectual property issues

We all know that the use of AI raises issues in connection with intellectual property rights. Interestingly, the Executive Order devotes only a small clause to this issue: a study will be prepared to address copyright issues raised by AI, and, based on it, recommendations will be made to the President on potential executive actions relating to copyright and AI. The recommendations shall also discuss the scope of protection for works produced using AI and the treatment of copyrighted works in AI training. The author of this article believes that the administrative staff around Biden could neither decide what to do nor wanted to take responsibility, so they postponed any concrete steps until they receive proper input from the experts. The people who will participate in preparing the recommendations therefore carry a huge responsibility: their work can influence not just US legislation but other nations’ as well, and may completely rewrite the lives and markets of writers, painters, composers, and content creators of all kinds.

International plans

Last but not least, the president does not want to stop at the borders, because he acknowledges (rightfully) that AI’s challenges and opportunities are global. The US is therefore going to pursue multilateral, multi-stakeholder international engagements with allies to collaborate on AI safety. The US wants a leading role in establishing and implementing a strong international framework for managing the risks and harnessing the benefits of AI, and global standards have to be developed. An ambitious plan is to produce a Global AI Development Playbook that incorporates the principles, guidelines, and best practices of the US AI Risk Management Framework into the social, technical, economic, governance, human rights, and security conditions of contexts beyond United States borders. Let’s face it: an AI regulation, even a perfect one, if such a thing could exist, will not reach its goal without international cooperation. Notably, the UK has already taken the first step here by hosting the AI Safety Summit at Bletchley Park.

Organizational changes

Will the presidential administration take the content of the order seriously? It seems so, since several new institutions are to be created. The Department of Homeland Security will establish the AI Safety and Security Board to oversee the use of AI in critical infrastructure. A new governmental interagency council will be formed to coordinate the development and use of AI in agencies’ programs and operations. A position similar to the European data protection officer will also be established: every agency shall designate a Chief Artificial Intelligence Officer, who shall hold primary responsibility within their agency for coordinating the agency’s use of AI, promoting AI innovation, and managing risks from the agency’s use of AI. We can say that a complete organizational governance infrastructure is going to be established to make the future complex AI legislation truly effective.

What does the future hold?

The author of this article has four deductions:

  1. First, the “wild west age” of AI use is short-lived and will soon come to an end. Within nine months, a complex set of rules spanning agencies, legal areas, and industries will be born. The Executive Order is a product of thorough planning; the presidential staff evidently took care to list all the important areas that will be affected by the rapid rise of AI.

While the Executive Order itself does not create binding rules for the market, it coordinates the legislative approach of the near future and prompts the government to create rules covering all areas.

  2. The author believes we will witness nothing less than the genesis of a new legal field. The new rules will extend into so many fields and create such a comprehensive pattern that a new AI Law is about to be born.
  3. Still, the devil lies in the details. The exact wording of the new rules is not yet known, and we may expect intense lobbying to influence the important details.
  4. The European Union faces a great challenge here. The EU has significantly fewer technology giants than the US, but it has had a global effect via legislation; the GDPR is a good example, having inspired the creation of similar laws worldwide. However, if the European AI Act proceeds slowly and its adoption takes years, then by the time it becomes effective, the AI legislation narrative will already have been shaped by the US and the UK. I do not think the AI Act will become irrelevant, but the “Brussels effect” might not work in what is probably the most important sector of the future.

The author holds an LL.M. in digital economy law and is a CIPP/E- and CIPM-certified privacy expert.

List of references
