Regulating the Field of Artificial Intelligence: Challenges and Regulatory Efforts

The human brain works like a command centre for the body, communicating with the other organs. It also has unique qualities such as creativity, imagination, and the ability to think. The field of artificial intelligence (AI) enables machines to simulate these features of the human mind.

AI technology is developing every day and opening avenues in many sectors, especially medical science, automobiles, and manufacturing. AI has tremendous potential to increase the precision, accuracy, and efficiency of systems around the world, enabling machines to emulate human intelligence in performing a wide range of tasks.

The Need for Regulating AI

Regulating AI means developing policies and laws to encourage and govern the application of AI across various sectors. Many corporate and technology leaders, including Elon Musk and Bill Gates, have advocated regulating AI because of the long-term risks involved.

Every technology has its benefits and drawbacks and brings a certain amount of disruption to pre-existing arrangements. Powerful technologies like AI raise public concerns, including the loss of employment opportunities for humans. Therefore, even though AI has great positive potential, we should also anticipate its threats. The advancement of AI in the field of defence and munitions, in particular, creates apprehension among citizens and governments.

Challenges in Regulating AI[1]

The challenge of defining AI

To be certain about the impact and extent of AI, we first need to define the term. Unfortunately, there is no universally accepted definition of AI, even though defining it should be the primary step in regulating it. The difficulty lies not in the word "artificial" but in the scope of the word "intelligence".

Ambiguity is another concern in defining AI, as it is a still-developing technology. John McCarthy, one of the pioneers of the field, observed that it is difficult to produce a firm definition of intelligence because we cannot yet characterize, in general, which computational procedures we want to call intelligent.

Autonomy and foreseeability

AI has a unique ability to work autonomously: it can perform many simple and complex tasks without human oversight. As the technology develops, the applications of AI systems are multiplying, and these applications replace human labour, reducing the demand for jobs and workers.

Another challenge for legal systems in regulating AI is the lack of foreseeability. AI is often deployed precisely where creativity is needed, and humans cannot fully foresee the extent of an AI system's creativity and actions. This makes possible future threats very difficult to predict.

Controlling AI

It is very challenging to control a system that operates with considerable autonomy. A single malfunction or corrupted file can do considerable damage to the outcomes, and a single misstep can mean losing control altogether. If the AI is equipped with the ability to learn and adapt, regaining that control may be very difficult.

An Open Letter to the United Nations[2]

In 2015, numerous AI experts, including Stephen Hawking and Elon Musk, signed an open letter to the United Nations addressing the short- and long-term effects of AI on the human race.

The open letter raised the following short-term and long-term issues regarding AI:

Short-term issues

  1. Managing the economic impact of employment displaced by the use of AI,
  2. The question of culpability in accidents involving self-driving cars,
  3. Concerns regarding the use of intelligent autonomous weapons

Long-term issues

Eric Horvitz, the Chief Scientific Officer of Microsoft, explained the possibility of losing control over superintelligent AI systems in the future.

Regulatory Efforts by European Commission

In February 2020, the European Commission issued a white paper on Artificial Intelligence[3]. Looking at the pace of development and the impact of AI in Europe across various technological applications, such as the medical industry, the farming sector, the security of citizens, and many other crucial areas, the European Commission (the Commission) decided to lay out a plan for regulating AI.

The Commission planned a twin-objective approach: to promote the development of AI and to address its potential hazards. The white paper centred on a blueprint of steps for achieving those objectives, though unfortunately it did not address the military applications of AI.

The Commission outlined a strategy to develop a viable ecosystem for AI, keeping three points in mind:

  1. Citizens – To provide citizens with the best health care, more efficient public services, and new technological alternatives.
  2. Business – To equip businesses with new-generation products and services to strengthen the European economy.
  3. Public Interests – To ensure the security and freedom of people by empowering law enforcement bodies with proper tools and authorities.

The main elements of the white paper are listed below:

Ecosystem of excellence

This means developing an ecosystem of excellence for the use of AI by providing incentives to small and medium businesses. The availability of AI solutions is to be achieved by promoting innovation and the development of AI technologies. The ecosystem of excellence deals with the policy-design aspect of AI.

Ecosystem of trust

The trust of citizens is crucial for any technology to survive. The Commission therefore aimed at developing a competent regulatory framework to support AI. To build confidence among citizens, this regulatory or legal framework needs to ensure the protection of their fundamental rights, and the Commission aimed at adopting a human-centric approach toward AI. The ecosystem of trust relates to the legal and regulatory aspects of AI, for which the Commission has prepared a scheme known simply as 'Trustworthy AI'.

Trustworthy AI has three elements:

  1. It should comply with all the laws and regulations (lawful AI).
  2. It should adhere to ethics and values.
  3. It should be technically and socially strong enough to achieve its objectives.

Working as per the devised plan, the Commission prepared an AI strategy on 25 April 2018. The High-Level Expert Group on Artificial Intelligence (AI HLEG), established by the Commission, published guidelines on trustworthy AI in April 2019.

These guidelines are known as the Ethics Guidelines for Trustworthy AI[4]. They deal mostly with the last two elements of Trustworthy AI; the first element (lawful AI), covering the legal aspects of developing and promoting AI, is treated as mandatory and assumed to be complied with.

Ethics and Values (Ethical AI)

In the course of technological advancement, some systems cross ethical boundaries that ought to be respected, which erodes users' trust and eventually hinders the development of the technology. To avoid this loss of trust, AI too needs to remain within ethical boundaries.

The European Union (EU) has always upheld fundamental rights through its various charters and treaties. The EU Charter of Fundamental Rights provides a reliable framework of ethical values that can be used in developing a trustworthy platform for AI.

The following principles of Ethics should be followed during the development of AI:

  1. Human autonomy should be respected, and there should be no unfair or unjustified surveillance.
  2. Human dignity should be protected. As part of respecting their fundamental rights, all humans should be treated equally irrespective of sex, colour, religion, ethnicity, etc.
  3. The development and use of AI should not create any hazard or harm to humans. Human safety is of paramount importance; considering the tremendous potential of AI, necessary care must be taken to ensure it.
  4. A suitable mechanism should be designed to determine accountability for actions involving AI systems.

Technical and social strength (Robust AI)

Beyond ethical safety, any technology needs to perform and develop safely and reliably, so that society feels safe while availing itself of the benefits the technology offers. Therefore, to achieve Trustworthy AI, any future harms arising from the use and development of AI should be anticipated and mitigated.

Regulatory and Legislative Efforts by the United States

On 11 February 2019, the President of the United States, Donald J. Trump, issued the Executive Order on Maintaining American Leadership in Artificial Intelligence[5] (the executive order). Following it, the Office of Science and Technology Policy of the US Government released the draft Guidance for Regulation of Artificial Intelligence Applications[6]. The draft sets out ten principles for US federal agencies to consider while designing regulations. These principles focus on points including the following:

  1. To preserve the trust of US citizens in the development of AI by respecting their fundamental rights, such as the right to privacy and civil rights.
  2. To ensure public participation in AI development, especially where AI uses citizens' data. This also includes increasing the accountability of systems using AI.
  3. To adopt a risk assessment and management approach to AI systems: risks should be evaluated, and acceptable risks taken in light of the potential harm involved.
  4. To practise a benefits-and-costs approach where the regulations lack clarity.
  5. To keep regulatory and non-regulatory approaches flexible enough to accommodate the rapid changes in AI technology.
  6. To prevent AI systems from discriminating in their outcomes. Principles of fairness should be followed, and there should be transparency regarding the impacts of certain AI applications to develop trust among citizens.
  7. To treat the security and safety of citizens as of primary importance while ensuring the development of AI systems.
  8. To coordinate among the agencies to ensure oversight of AI systems.

The National Institute of Standards and Technology (NIST) also released a position paper[7] in response to the President's executive order.

This paper deals with the standards-related aspects of AI, which are essential to conducting AI research and development.

The paper deals with the following aspects of the AI standards:

  1. Terminologies related to AI
  2. Metrics to measure the characteristics of AI
  3. Trustworthiness
  4. Performance Testing methods
  5. Safety and Risk management
  6. Data in various standard formats to validate and test the AI systems

The Artificial Intelligence Initiative Bill[8] was also introduced in the US Senate on 21 May 2019 to stimulate the development of AI systems in the US.

The Bill, when passed, would establish an Interagency Committee on Artificial Intelligence (ICAI) to coordinate research and development of AI technology across the various federal agencies. The committee would also seek strategic international alliances with other nations to coordinate AI research and development.

The Bill also provides for establishing a National Artificial Intelligence Advisory Committee (NAIC), which would be responsible for managing and implementing the initiative and for assessing its progress. The Bill further defines the duties of various agencies in promoting the scientific development of AI.

Efforts by the US Federal Trade Commission

The US Federal Trade Commission (FTC) published a blog post titled "Using Artificial Intelligence and Algorithms"[9] (the blog) on 8 April 2020. The blog offers guidance on the use of artificial intelligence and algorithms. It acknowledges the potential of AI to increase productivity, but also recognizes the risks involved in its development, suggesting that the use of AI is likely to worsen existing socioeconomic imbalances.

The blog makes the following recommendations for the use of AI and algorithms:

Transparency with consumers

Companies collecting data using AI tools should maintain transparency in their interactions with users; the consumer should know the true nature of the interaction. Companies or organizations may face FTC enforcement action if they mislead consumers using chatbots or other AI tools.

Explain the decision to the consumer

Many companies use automated tools for decision-making. A company should make the consumer aware of such automated decisions and should also disclose the details of the data it uses.

Fairness of decisions

Sometimes the careless use of AI can create discrimination against certain classes of people. The FTC may enforce the Equal Credit Opportunity Act and the Civil Rights Act of 1964 against any such discrimination based on race, colour, religion, national origin, sex, marital status, or age.


Accountability and prevention of misuse

Companies developing AI tools and mechanisms should examine the possibility of their products being abused, and should analyse what steps or additional technologies can be employed to prevent such misuse.

Companies should also consider the extent of their accountability with regard to their AI tools and mechanisms.



[1] Matthew U. Scherer, "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies", Harvard Journal of Law & Technology, Vol. 29, No. 2 (Spring 2016)


[3] White Paper on Artificial Intelligence – A European approach to excellence and trust

[5] Executive Order on Maintaining American Leadership in Artificial Intelligence

[7] U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools

[8] S.1558 – Artificial Intelligence Initiative Act

[9] Andrew Smith (Director, FTC Bureau of Consumer Protection), "Using Artificial Intelligence and Algorithms", 8 April 2020

Shivam Kene from ILS Law College, Pune

Shivam's short-term goal is to work at a reputed company or firm, and his long-term goal is to become a more knowledgeable and responsible person.
