
The European AI Act: How to Prepare for Compliance

The European AI Act: what it means for Ukrainian business and how to prepare your company for compliance

Viktoriia Isarieva, March 24, 2026

We work with technology companies, including in the areas of AI, marketing, and privacy compliance. The legislation in this field is new, but businesses are already facing practical demands, and we are helping them respond.


Today, AI is being actively integrated into products and processes. Often it looks like a “small feature”: a system suggests to a recruiter which candidates to move up the list, or automatically determines a user’s risk level and decides who should be granted access to a feature. At first, this is perceived as optimization. But the moment a European client appears, a different question arises: whether this system is safe, transparent, and controllable from a regulatory perspective.

This is exactly what the AI Act, the European regulation on artificial intelligence, was created for. It is the first comprehensive regulation that classifies AI systems by risk level and determines what is prohibited, what is permitted only under strict requirements, and where transparency obligations are sufficient.


For the regulation to apply to a product, it must meet the definition of an AI system: a system that infers from the data it receives how to generate outputs, such as predictions, recommendations, content, or decisions, that can influence its environment.


Why this concerns Ukrainian business

The AI Act has extraterritorial effect. It applies not only to companies in the EU, but also to those whose product or its output is used in the European Union.


This means that even without a legal entity in the EU, you may fall within its requirements if:

– you have clients in the EU

– you have users in the EU

– or your B2B client uses your product in processes within the territory of the EU


This is logic already familiar to business from GDPR. What matters is not where you are registered, but where and how your product is used.


If a company plans to work with the EU, compliance becomes a practical issue. If not, it often becomes necessary to technically restrict access from the EU. At the same time, it should be taken into account that Ukrainian legislation is gradually moving toward the European approach, as confirmed, in particular, by the White Paper of the Ministry of Digital Transformation.
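When a company decides not to serve the EU market, "technically restricting access" usually means refusing requests whose origin is an EU member state. A minimal sketch of that check is below; it assumes the hosting layer (for example, a CDN such as Cloudflare, which sets a CF-IPCountry header) supplies an ISO country code per request, and the fallback policy for unknown origins is a business decision, not a legal one.

```python
from typing import Optional

# ISO 3166-1 alpha-2 codes of the 27 EU member states.
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def should_block(country_code: Optional[str]) -> bool:
    """Return True if a request should be refused as EU-originated.

    country_code is whatever the infrastructure reports (e.g. a
    CDN geolocation header); None means the origin is unknown.
    """
    if country_code is None:
        return False  # policy choice: unknown origin is allowed through
    return country_code.upper() in EU_COUNTRIES
```

Note that geo-blocking reduces, but does not eliminate, exposure: VPN traffic and B2B clients who themselves operate in the EU are not caught by an IP-based check.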


When the requirements start to apply

The AI Act is applied in stages.

As of 2 February 2025, the general provisions are already in force, including requirements regarding AI literacy and prohibited practices.

The main part of the regulation, including the rules for high-risk systems and transparency requirements, begins to apply from 2 August 2026.


AI literacy: a basic requirement

AI literacy means that companies must ensure that those who work with AI have a basic level of knowledge and skills.

In practice, this is not a complicated process. Usually, it is sufficient to have:

– short internal rules for the use of AI

– basic training for employees

– documentation confirming that the training took place

For example, employees should understand the risk of model “hallucinations” and the need to verify results. Separate certifications or a dedicated AI officer role are not mandatory.


Prohibited practices

The AI Act defines practices that are considered unacceptable.

This concerns the use of AI for:

– covert manipulation of human behavior

– exploitation of vulnerabilities related to age, disability, or social or economic situation

– social scoring: unfair assessment of people based on their behavior or characteristics

– inference of emotions in the workplace or educational institutions

The key logic is: if AI begins to influence a person in a non-transparent or unfair manner, this is a prohibited area.


It is important that responsibility is not limited to the developer. It also rests with the person using the system.

For example, if a company connects a third-party AI tool and uses it to influence a user’s decision in a way that may be harmful, responsibility is not removed solely on the basis of contractual limitations. The regulator assesses the actual use.

A contract may reduce risk, but it does not replace control. Compliance in the field of AI is therefore a shared responsibility of the provider and the deployer, the organization that actually uses the system.


High-risk systems

The greatest business interest lies in high-risk systems.

This refers to systems that may affect human rights or important decisions. The AI Act sets out specific scenarios (Annex III), including biometrics, education, critical infrastructure, access to essential services, law enforcement, and migration processes.

For business, the block related to employment and workforce management is particularly important. High-risk systems include, in particular:

– systems for recruiting and selecting candidates

– targeted job advertisements

– filtering and evaluating applications

– performance assessment

– decisions regarding promotion, dismissal, or task allocation

This concerns situations where AI begins to influence who gets a job, an opportunity, or an evaluation. For example, if a system automatically screens out candidates before a recruiter reviews them, it is no longer an auxiliary tool but a mechanism that affects access to employment. Similar logic applies to systems used for compliance and KYC checks, something we know from experience working with such clients.


What needs to be implemented for high-risk systems

If the company is the provider of the system, it must demonstrate that it is controllable: conduct a risk assessment, determine ways to mitigate risks, describe the logic of operation, ensure data quality, and implement human oversight.


If the company uses such a system, it must ensure real human oversight: have people who can verify results, understand the verification rules, respond to errors, and stop use if necessary.
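In practice, "real human oversight" means an AI recommendation never takes effect on its own. The sketch below illustrates one way to enforce that as a simple workflow rule; the class, field names, and decision labels are illustrative assumptions, not terms from the regulation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningDecision:
    """An AI recommendation that is inert until a human signs off."""
    candidate_id: str
    ai_recommendation: str            # e.g. "reject" or "advance"
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        """Reviewer accepts the AI recommendation as-is."""
        self.reviewer = reviewer
        self.final_decision = self.ai_recommendation

    def override(self, reviewer: str, decision: str) -> None:
        """Reviewer replaces the AI recommendation with their own call."""
        self.reviewer = reviewer
        self.final_decision = decision

    @property
    def effective(self) -> bool:
        """The decision only counts once a named human has acted."""
        return self.final_decision is not None
```

The key design point is that downstream systems consume only `final_decision`, so an unreviewed AI output can never reach the candidate.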


Lower risk levels

If the system is not high-risk and does not fall under a prohibition, the main requirement is transparency.

This means:

– informing the user that they are interacting with AI

– labeling AI content

– warning about possible errors

This is why appropriate disclaimers and labeling appear in products.
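A minimal sketch of such labeling is below: every AI-generated response carries both a machine-readable flag and a user-facing notice. The wording of the notice and the response structure are our own assumptions for illustration, not text mandated by the AI Act.

```python
# Illustrative disclosure text; the AI Act requires informing users,
# but does not prescribe specific wording.
AI_DISCLOSURE = (
    "This reply was generated by an AI system and may contain errors; "
    "please verify important information."
)

def label_ai_output(text: str) -> dict:
    """Wrap AI-generated content with a flag and a user-facing notice."""
    return {
        "content": text,
        "ai_generated": True,     # machine-readable flag for downstream systems
        "notice": AI_DISCLOSURE,  # shown to the end user alongside the content
    }
```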


Sanctions

The AI Act provides for administrative fines. For the most serious violations, such as prohibited practices, they can reach up to EUR 35 million or 7% of global annual turnover. The amount depends on the nature of the violation, its duration, scale, and whether the company took measures to reduce risks.



How to prepare

The practical logic of preparation is quite simple.

First, an AI inventory is conducted: where AI is used in the company, which vendors are involved, and whether there are clients or users from the EU. Next comes the classification of scenarios: whether there are prohibited practices, whether there is high risk, and whether transparency is sufficient. In parallel, AI literacy is implemented and vendor contracts are reviewed.

The next step is to form a package of documents: policies, rules of use, classification decisions, an incident process, and basic technical descriptions.

This is the package that European clients and regulators usually expect to see.


Additionally, it is advisable to prepare privacy documentation: a description of data processing, user rights, and approaches to automated decision-making.


Conclusion

The AI Act is not about restricting innovation. It is about controllability and trust.

For Ukrainian business, this is no longer “European regulation.” It is a client and market requirement.

The key question is not whether the company uses AI, but whether it can explain how exactly it works and what risks it controls.

That is what becomes the new standard.


We help with AI, GDPR, marketing compliance, and ISO certification.

Sign up for a free call.

Viktoriia Isarieva

Lawyer at Digilaw and Intellectual Property Expert
