Consequences for Swiss companies of the recently proposed EU Regulation on Artificial Intelligence

Following rapid technological development and the steady increase in global investment in the field of Artificial Intelligence (AI), the European Commission released a proposal for a regulation laying down harmonised rules on AI in April 2021. The regulation will impose several obligations and duties on companies involved with AI, as they will have to comply with various requirements in order to be allowed to place an AI product on the European market.

Author: Martin Cattaneo

The Regulation

The EU Commission's proposal for a Regulation on AI (the proposal for a regulation laying down harmonised rules on artificial intelligence, the "Artificial Intelligence Act", see this Link; hereafter the "Regulation") aims to create a legal framework on AI that allows its commercialization in a safe, transparent, ethical and unbiased way and encourages the EU population to trust AI. The definition of AI given by the proposal is very broad and explicitly "aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI". The proposal will thus apply to a wide spectrum of technological applications (AI applications), which will have to comply with its requirements. Moreover, the Regulation will be directly effective and applicable without the need for implementation by the EU Member States, so that the risk of market fragmentation is avoided.

The Regulation follows a risk-based approach: AI systems are classified into four categories according to their particular use and the associated risks.

The risk-based approach

Unacceptable Risk

Article 5 of the Regulation contains a list of AI practices that are prohibited. These are considered unacceptable because they contravene EU values, for instance by violating fundamental rights. In particular, AI systems that manipulate human behaviour through subliminal techniques, exploit the vulnerabilities of children or persons with disabilities, or enable social scoring by governments are prohibited.

High-Risk

Title III of the Regulation deals with AI systems that create a high risk to the health and safety or the fundamental rights of natural persons. Such systems will be permitted on the market only if they comply with specific mandatory requirements and undergo an ex-ante conformity assessment.

According to Article 6, there are two main categories of high-risk AI systems. On the one hand, there are AI systems used as safety components of products that are already regulated by certain EU legislation (listed in Annex II) and subject to third-party assessment. For example, certain medical devices or machinery are considered high-risk AI systems under this category. On the other hand, Annex III lists various stand-alone AI systems in specific areas that are considered high-risk independently of other legislation.

Once classified as high-risk, the AI system concerned will have to comply with the specific legal requirements listed in Articles 8-15. In particular, these requirements relate to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security. Articles 16-29 set out precise obligations for providers and users of such systems, which have to be respected in order for the systems to be accepted on the EU market.

Limited Risk

Certain AI systems that do not reach the high-risk threshold are subject to specific transparency obligations under Article 52. Such obligations are imposed on systems that interact with humans, are used to detect emotions, or generate or manipulate content (such as so-called "deep fakes"). In particular, users of such systems must be made aware that they are interacting with a machine, for instance that they are chatting with a chatbot. The objective of these requirements is to allow persons to make more informed choices or to step back from a given situation.

Minimal Risk

All other AI systems are not subject to additional requirements or obligations under the Regulation. This concerns the vast majority of AI systems used in Europe. Article 69 nevertheless encourages the voluntary application of codes of conduct that extend the requirements for high-risk AI systems to minimal-risk AI systems.

Obligations for Providers

Companies dealing with AI will have to be aware of the Regulation and apply the relevant requirements to their products. In particular, providers of high-risk AI systems will have to subject them to the conformity assessment before placing them on the market or putting them into service. Providers must moreover put quality and risk management systems in place and comply with all other obligations listed in the relevant articles (Articles 16-25). In addition, market surveillance authorities will operate a post-market monitoring system in order to ensure compliance with the requirements set out in the Regulation once the products are on the market.

Obligations are imposed not only on providers, but also on importers (Articles 26 and 28), distributors (Articles 27 and 28) and users of high-risk AI systems (Article 29).

EU Member States will ensure compliance with and enforcement of these obligations, in particular by designating the competent authorities that will supervise the application and implementation of the Regulation and carry out post-market surveillance activities.

Meaning for Switzerland

The Regulation does not only affect companies in the EU but has a very broad scope of application, determined by the use of AI systems. In fact, the Regulation applies when AI systems are used in the EU or, conversely, when the output produced by such systems is used in the EU. The obligations of the Regulation will therefore also affect Swiss-based companies and the Swiss market. Swiss providers of AI systems wishing to remain in the European market will have to comply with the Regulation and thus respect its obligations and procedures. Moreover, Swiss users and consumers will also be indirectly affected by the Regulation when accessing the European market and using AI systems from the EU. This means that discussions must take place at a political and legislative level so that Switzerland is not excluded from the market, and that Swiss companies will have to review their products and undergo the necessary market surveillance.

Conclusions

The proposal will create a unified legal framework regulating the use and commercialization of AI systems. Its risk-based approach entails several obligations for the companies involved, especially when a specific AI system is considered high-risk. Companies using or providing AI systems are therefore advised to familiarize themselves with the proposal so that they can adjust their products in time.
