European Parliament Adopts the Artificial Intelligence Act

How Can Ukraine Benefit from the New Opportunities?

The European Parliament has approved the Artificial Intelligence Act. The act establishes obligations for artificial intelligence (AI) systems based on their potential risks and level of impact.
 
Against the backdrop of recent changes in EU legislation, marked by the approval of the Artificial Intelligence Act, Ukraine is trying to define its position in this sector, which is key to its future. Under the new rules, artificial intelligence (AI) systems must undergo mandatory risk assessment and meet established safety and transparency standards.

“Ukraine needs to understand that European regulators are actively working on full-fledged AI regulation in the EU, and similar restrictions will be introduced in Ukraine as it integrates into the European Union. We have a few years before these measures become mandatory in Ukraine, so we can gain a significant advantage in the international market and determine the future of professional services in the country. We have a unique opportunity to take advantage of the rapid development of technology and secure a competitive edge in this area,” said Andrii Borenkov, Head of Advisory at BDO in Ukraine, at the recent webinar “Artificial Intelligence (AI): Revolutionizing Professional Services”, organized by BDO in Ukraine in cooperation with the Dnipro and Southern Ukraine offices of the European Business Association (EBA).

The European Parliament's approval of the Artificial Intelligence Act demonstrates the EU's increased attention to the regulation and oversight of AI. The act establishes new obligations for artificial intelligence systems based on their potential risks and impact on society, reflecting the growing need to ensure the ethical and safe use of this technology.

Particular attention is paid to the prohibition of certain AI applications that threaten citizens' rights, such as biometric categorization and emotion recognition systems. This reflects the broader trend towards protecting privacy and personal data, which is becoming increasingly important.

The new rules prohibit the use of remote biometric identification (RBI) systems by law enforcement agencies, except in specifically defined situations. A “real-time” RBI system may be deployed only if strict safeguards are met, such as limiting its use in time and geographic scope, and subject to prior judicial or administrative authorization. Such uses may include, for example, searching for a missing person or preventing a terrorist attack. Using such systems after the fact (“post-remote” RBI) is considered high-risk and requires judicial authorization linked to a criminal offence.

Clear obligations are also set for other high-risk AI systems, given their significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk applications of AI include critical infrastructure, education and training, employment, essential private and public services (e.g., healthcare, banking), certain systems used in law enforcement, migration and border management, justice, and democratic processes (e.g., influencing elections). Such systems must assess and mitigate risks, keep logs of use, be transparent and accurate, and ensure human oversight. Citizens will have the right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

General-purpose artificial intelligence (GPAI) systems, and the models on which they are based, will have to meet specific transparency requirements, including compliance with EU copyright law. More powerful GPAI models that may pose systemic risks are subject to additional requirements, including model evaluation and systemic risk mitigation, as well as mandatory incident reporting. In addition, artificial or manipulated images, audio or video content (so-called “deepfakes”) must be clearly labeled as such.

After final technical and legal checks, the act must also be formally endorsed by the EU Council. It will enter into force twenty days after its publication in the Official Journal of the European Union and will be fully applicable 24 months after its entry into force, except for certain provisions.

The Artificial Intelligence Act is an important step towards ensuring that technological development meets the needs of society and protects citizens' rights and safety. It lays the foundation for an effective and ethical regulatory framework that will promote sustainable development and innovation in this important sector.

Reach out to us if you are considering implementing artificial intelligence in your business processes or need expert advice on how to meet the new regulatory requirements. Our team is ready to provide you with the necessary support and assistance in these matters.
