A New EU Regulation for Artificial Intelligence

Haroon Malik May 04, 2021

Over the past decade, artificial intelligence (AI) has become a global strategic priority, with a relentless focus on development and implementation across almost every industry vertical. Recently, in particular, there have been some very notable changes in the world of AI regulation.


A Renewed Approach to Artificial Intelligence

The Proposal for a Regulation on a European approach to Artificial Intelligence creates a new legal framework for the development, marketing and use of artificial intelligence in conformity with European Union values. These changes uniquely position the EU to play a leading role in the advancement of AI regulation at a global level.

The proposal also introduces a new European Artificial Intelligence Board to oversee and coordinate enforcement of the regulation.

The regulations are wide-ranging and apply to:

  • providers that place on the market or put into service AI systems, irrespective of whether those providers are established in the European Union or in a third country
  • users of AI systems in the EU
  • providers and users of AI systems located in a third country where the output produced by the system is used in the EU. 

Core Tenets of the New AI Framework

Prohibited Artificial Intelligence Practices

The list of prohibited practices comprises all AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights. These include manipulative and exploitative practices in which AI systems are used to manipulate human behaviour through choice architectures.

Other prohibited AI practices will include:

(a) deploying subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour, in a manner that causes (or is likely to cause) physical or psychological harm to them or another person.

(b) AI systems that exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort their behaviour, similar to (a).

(c) using ‘social scoring’ to conduct large-scale evaluation or classification of the trustworthiness of natural persons, based on their social behaviour or personal or personality characteristics, where this leads to detrimental or unfavourable treatment.

(d) using AI for “manipulative, addictive, social control and indiscriminate surveillance practices”. The proposal defines “manipulative AI” as a system that would “cause a person to behave, form an opinion or take a decision to their detriment that they would not have taken otherwise”.

High-risk AI Systems

Providers of high-risk AI systems will have to meet a range of obligations. The proposal defines high risk in two ways:

  1. An AI system which is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II.
  2. An AI system that is itself a product, or is the safety component of a product, that is required to undergo a third-party conformity assessment before being placed on the market or put into service, pursuant to the Union harmonisation legislation listed in Annex II.

Some of the obligations for high-risk AI systems include:

(a) implementing a risk management system for the entire life cycle of a high-risk AI system.

(b) implementing a Quality Management System (QMS) that ensures compliance with this Regulation. Components of the QMS may include:

  • drawing up the relevant technical documentation
  • keeping logs generated by their high-risk AI systems
  • complying with conformity assessment and cooperating with authorities.

In addition, the QMS must be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and must include a strategy for regulatory compliance. This would cover conformity-assessment procedures and procedures for managing modifications to the high-risk AI system.

(c) Distributors, importers, users and other third parties will also be subject to providers’ obligations if they:

  • place a high-risk AI system on the market or into service under their name or trademark
  • modify the intended purpose of a high-risk AI system already on the market or in service
  • make a substantial modification to a high-risk AI system.

(d) Manufacturers of products covered by EU legislation that include high-risk AI systems are responsible for compliance as if they were the provider of the high-risk AI system.


Artificial Intelligence Board

The proposal states that a European Artificial Intelligence Board shall provide governance and oversight of the regulation. The Board will be composed of representatives of the appropriate regulators from each member state, as well as the European Data Protection Supervisor and the Commission.

In line with the new regulations, these authorities will have the power to issue fines and other penalties, which will generally be determined at Member State level.

However, the regulation states that fines of up to €20m, or 4% of a company's global turnover, can be applied where the infringement relates to undertaking prohibited AI practices or to supplying incorrect or false information to notified bodies.


Read more about the newly proposed regulations here.

