AI Compliance – GDPR And The AI Regulation – Data Protection


AI (artificial intelligence) is no longer a technology of
the future – it’s already an integral part of our everyday
lives. From text optimization and data processing to advanced
decision-making systems and customer service, AI has become a tool
that opens up many new opportunities and challenges.

But with opportunity comes responsibility. When implementing
AI in a company, it’s crucial to ensure that the use of the
technology is done in a thoughtful and responsible way.

In this article, you can read more about the AI regulation
and AI compliance, and what you need to pay special attention to
when your company develops and implements AI.

AI compliance

Basically, AI is a technology that can perform tasks that
normally require human intelligence.

AI is not a new technology, but it has been developing rapidly
in recent years and is now a technology that many companies are
either considering using or already using to optimize and
streamline all kinds of processes and tasks in the company.

AI is becoming an increasingly important part of both our
personal and professional lives. Most people are familiar with
ChatGPT and other generative AI models, and many are already using
AI for everything from helping to proofread and optimize texts to
getting an overview of large amounts of data and supporting
business procedures and processes. But AI can be used for many
other purposes, and the sky is the limit.

However, depending on what AI is used for, there are a number of
compliance issues that are important to be aware of when using AI.
When companies develop or use AI, it is, among other things,
particularly important to be aware of whether the development,
training or use of AI involves the processing of personal data, as
the rules of data protection law will then apply.

It is also important to be aware of whether AI is used to make
automated decisions that may have consequences for natural persons.
It follows from the GDPR that natural persons have the right not to
be subject to decisions based solely on automated processing,
including profiling, if such a decision produces legal effects
concerning them or similarly significantly affects them.

The GDPR also means that, as with all other processing
activities involving personal data, the company must ensure lawful
processing. This means, among other things, that the company must
observe the basic principles of, for example, data minimization and
transparency, and it will often also have to prepare a data
protection impact assessment (DPIA) before the processing activity
begins.

In addition, the new AI Regulation (also known as the AI Act)
imposes a number of additional AI compliance requirements on
companies’ use of artificial intelligence. Depending on the
purpose of the AI model, companies must, for example, implement a
risk management system, prepare a “Fundamental Rights Impact
Assessment” (FRIA), and comply with requirements for the datasets
used to train the AI model.

The AI Regulation

The AI Regulation governs the development and use of AI and is
fundamentally about protecting the safety and rights of
individuals.

The AI Regulation came into force on August 1, 2024, and the
first parts of the AI Regulation will take effect from February 2,
2025.

The AI Regulation generally categorizes AI models into a number
of different risk levels. The categorization is determined by the
purpose for which the AI model is used and includes the following
categories (a short classification sketch follows the list):


  • The banned AI models:

    Includes systems that exploit the vulnerabilities of natural
    persons, perform emotion recognition in workplaces or educational
    institutions, or carry out facial recognition in publicly
    accessible spaces.

    As the use of such AI models would pose a very high and
    unacceptable risk to natural persons, there are only a few
    exceptions to this prohibition, including certain law enforcement
    purposes.

  • High-risk AI models:

    Includes the use of AI models in biometrics, education, critical
    infrastructure, employment, workforce management, law enforcement,
    and life and health insurance assessment, among others.

    If a company wishes to use AI models for such purposes, it is
    important to be aware that this entails a number of obligations and
    compliance requirements under the AI Regulation, including
    requirements for training data, implementation of a risk management
    system, and documentation that the final decisions in the process
    are not made by the AI model alone.

  • Regulated AI models:

    Includes AI models that are designed to interact with and target
    natural persons. When using such AI models, it is a requirement
    that the natural persons with whom the AI model interacts are
    informed of this, and that the AI model’s outputs are clearly
    marked as such (e.g. labeling of AI-generated images or text).

  • General-purpose AI models:

    Includes AI models that are used for general purposes but still
    possess a certain capacity for impact. These AI models do not have
    the same impact as the above types of AI models, but a number of
    requirements are still important to be aware of, including, for
    example, technical documentation for the AI model, a policy for
    compliance with EU copyright law, cybersecurity and reporting of
    any incidents.

  • Non-regulated AI models:

    Includes those AI models that do not have the capacity to
    influence natural persons and are therefore not covered by any of
    the above categories. However, when using these AI models, it is
    still important to be aware of e.g. drafting policies for their
    use, possible confidentiality issues and intellectual property
    rights, as well as the use of general-purpose AI for
    “self-development”.
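
To make this categorization concrete, below is a minimal,
illustrative Python sketch of a purpose-to-tier lookup using the
category names from the list above. The mapping and the example
purposes are assumptions made for illustration; whether a concrete
system falls into a given tier is a legal assessment under the AI
Regulation, not a keyword lookup.

    # Illustrative only: tier names follow the article's categories;
    # the purpose keywords are assumptions, not the AI Act's legal tests.
    RISK_TIERS = {
        "banned": {
            "exploiting vulnerabilities",
            "emotion recognition at work",
            "facial recognition in public spaces",
        },
        "high_risk": {
            "biometrics", "education", "critical infrastructure",
            "employment", "law enforcement", "insurance assessment",
        },
        "regulated": {"chatbot", "content generation"},
    }

    def classify_ai_purpose(purpose: str) -> str:
        """Map an intended purpose to one of the tiers sketched above."""
        for tier, purposes in RISK_TIERS.items():
            if purpose in purposes:
                return tier
        # Everything else falls under the lighter general-purpose or
        # non-regulated categories described above.
        return "general_purpose_or_non_regulated"

    print(classify_ai_purpose("employment"))  # -> high_risk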

The AI Regulation has been finally adopted, and the first parts
of the regulation take effect from February 2025. Companies should
already now take into account the limitations and AI compliance
requirements resulting from the regulation in their work with the
development and implementation of AI tools. These requirements and
limitations should be considered as early as possible in the process
to avoid spending resources on developing and implementing AI tools
that, in the worst case, cannot be deployed without violating the AI
Regulation.

Impact assessment (DPIA)

Prior to the implementation of an AI tool that processes personal
data, it will often be necessary to carry out a DPIA in order to map
the risk picture for the processing activities and conclude whether
the Danish Data Protection Agency needs to be consulted before the
deployment of the system.

As part of the impact assessment, the company must, among other
things, map data flows, identify the risks associated with the
intended processing of the data, and identify the security measures
necessary to reduce the identified risks to an acceptable level. If
the identified risks cannot be reduced to an acceptable level, the
Danish Data Protection Agency must be consulted on the intended
processing activity before the AI tool is put into use.
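
A minimal Python sketch of this logic is shown below: each
identified risk is scored, planned security measures reduce the
score, and any remaining high residual risk triggers prior
consultation of the supervisory authority. The scale, threshold and
example risks are invented for illustration; a real DPIA follows the
GDPR’s own criteria and is documented, not computed.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        likelihood: int  # assumed scale: 1 (low) to 3 (high)
        severity: int    # assumed scale: 1 (low) to 3 (high)
        mitigation: int  # score points removed by security measures

        @property
        def residual(self) -> int:
            # Residual risk after planned measures, floored at zero.
            return max(self.likelihood * self.severity - self.mitigation, 0)

    ACCEPTABLE = 3  # assumed threshold for an "acceptable level"

    risks = [
        Risk("training data contains personal data", 3, 3, 5),
        Risk("model output could reveal identities", 2, 3, 4),
    ]

    if any(r.residual > ACCEPTABLE for r in risks):
        print("Residual high risk: consult the supervisory authority first.")
    else:
        print("Risks reduced to an acceptable level: document and proceed.")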

Legal basis and legitimate purposes

The company must also assess the legal basis on which the
personal data can be processed.

In this regard, it is important to note that personal data
already collected may not be used for other purposes that are not
compatible with the purposes for which it was originally
collected.

In addition, it is necessary to assess the minimum amount of
personal data needed to achieve the desired functionality of the AI
tool. The company should thus ensure data minimization and apply
anonymization or pseudonymization of personal data to the greatest
extent possible.
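
As a small illustration of pseudonymization in practice, the Python
sketch below replaces a direct identifier with a keyed hash before a
record is passed to an AI tool. The key and record fields are
assumptions for the example; note that pseudonymized data remains
personal data under the GDPR as long as the key exists.

    import hashlib
    import hmac

    # Placeholder key: in practice, store it separately from the data
    # so the pseudonyms cannot be linked back by the AI system.
    SECRET_KEY = b"store-this-key-separately"

    def pseudonymize(identifier: str) -> str:
        """Return a stable pseudonym for a direct identifier."""
        digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

    record = {"email": "jane@example.com", "ticket": "Refund request"}
    safe_record = {
        "user": pseudonymize(record["email"]),  # pseudonym, not the email
        "ticket": record["ticket"],
    }
    print(safe_record)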

It is therefore crucial to get an overview of the processing
activities that the AI tool entails. This process also helps to
ensure that errors and compliance challenges are spotted on an
ongoing basis, creates an overview of particular risks, and ensures
fast response times to any errors or deficiencies.

Sensitive personal data

There is no general prohibition on the use of AI for the
processing of sensitive personal data, but it is important to be
aware that the legal basis for such processing will often be the
consent of the data subjects. This applies, for example, when using
biometric data in facial recognition tools for access control.

The Danish Data Protection Agency recently made a decision in a
complaint case regarding a fitness center’s use of facial
recognition. The Agency assessed that the processing activity
required consent, and stated that valid consent to such a processing
activity also requires that a real alternative is made available, as
the consent cannot otherwise be considered freely given.

In the specific case, the Danish Data Protection Agency assessed
that it constituted a real alternative that users of the fitness
center could instead use the center during staffed opening hours or
contact 24-hour telephone support outside those hours.

Read the full decision on the Danish Data
Protection Agency’s website

The right not to be subject to decisions based solely on
automated processing

It follows from the GDPR that data subjects have the right not
to be subject to decisions based solely on automated
decision-making (including profiling) which produces legal effects
concerning them or similarly significantly affects them.

For example, if you plan to use AI in recruitment processes, to
support managers’ handling of performance reviews or to answer
customer service inquiries, it’s important to ensure that no
decisions are made based on AI alone.

Securing this right thus requires, among other things, an
assessment of whether the decisions made have legal effect or
significantly affect a data subject. In addition, it is a
fundamental prerequisite that employees working with such
AI-supported processes receive thorough and ongoing training in the
use of the AI tool and the limits set by the GDPR.
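
One common way to operationalize this is a human-in-the-loop gate,
sketched below in Python: an AI recommendation with legal or
similarly significant effect never takes effect until a human
reviewer has made the decision. The function and parameter names are
illustrative assumptions, not a prescribed design.

    from typing import Optional

    def finalize_decision(ai_recommendation: str,
                          significant_effect: bool,
                          human_decision: Optional[str]) -> str:
        """Return the decision that takes effect; block AI-only decisions."""
        if not significant_effect:
            # Low-impact output (e.g. a draft reply) may pass through.
            return ai_recommendation
        if human_decision is None:
            raise RuntimeError(
                "A decision with legal or similarly significant effect "
                "requires human review before it can take effect."
            )
        return human_decision  # the human's decision is what applies

    # Example: a recruitment screening result waits for the reviewer.
    print(finalize_decision("reject", significant_effect=True,
                            human_decision="invite to interview"))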

Information about the processing of personal
data

When a company processes personal data as a data controller, the
company is also obliged to provide data subjects with transparent
information about the processing activities, including the purpose
of the processing and the legal basis on which personal data is
processed. If automated decision-making occurs, the company must
also provide at least meaningful information about the logic
involved and the significance and expected consequences such
processing may have for the data subjects.

In addition, the transparency requirement means that the
information must be communicated in clear and easily understandable
language targeted at the data subjects, taking into account, for
example, whether the group of data subjects includes children or
vulnerable persons.

Proper compliance with the company’s disclosure obligation
thus requires in-depth knowledge of how the company uses AI to
process personal data.

Penalties for non-compliance with GDPR and AI
Regulation

Under the GDPR, the Danish Data Protection Agency has a number
of sanction options against companies and authorities that do not
comply with the rules of data protection law. For example, the
Agency can issue orders or express criticism. In addition, private
companies can be fined up to EUR 20 million or 4% of the total
global group turnover, whichever is higher.

Following the same principles, the AI Regulation provides for
fines of up to EUR 35 million or 7% of total worldwide annual
turnover, whichever is higher, if the use of AI models does not
comply with the rules of the AI Regulation; the applicable cap
depends on the nature of the infringement.
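
The arithmetic behind these “whichever is higher” caps is simple, as
the Python sketch below shows for an assumed group turnover of EUR 2
billion. The turnover figure is invented for the example; the fixed
amounts and percentages are the maximum caps mentioned above.

    def fine_cap(fixed_cap_eur: int, turnover_eur: int, pct: float) -> float:
        """Upper limit of the fine: the higher of the two alternatives."""
        return max(fixed_cap_eur, turnover_eur * pct)

    turnover = 2_000_000_000  # assumed EUR 2 billion group turnover

    # GDPR: EUR 20 million or 4% of turnover -> EUR 80 million here
    print(fine_cap(20_000_000, turnover, 0.04))
    # AI Act top tier: EUR 35 million or 7% -> EUR 140 million here
    print(fine_cap(35_000_000, turnover, 0.07))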

In both the GDPR and the AI Regulation, the amount of the fine
depends on a number of factors, including which specific rules have
been violated.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.


