Artificial intelligence in health care and social welfare
Under the EU AI Regulation, the Finnish Supervisory Agency guides and supervises the use of high-risk AI systems in health care and social welfare, and in the processing of social security benefits.
The AI Regulation entered into force on 1 August 2024 and is being applied in stages. The Regulation and the Commission’s implementing acts lay down mandatory requirements for AI systems placed on the market. In addition to legislation, the application of the AI Regulation is guided by guidelines issued by the EU AI Office. A link to the EU AI Office website is provided at the end of this page.
-
1 August 2024
The AI Regulation entered into force.
-
2 February 2025
Application of Chapters 1 and 2 of the AI Regulation began. These cover:
- the definitions of the AI Regulation’s concepts and a description of its objective and scope
- the obligation of organisations using AI to ensure that their staff have sufficient AI literacy
- a ban on the use of artificial intelligence for harmful purposes (prohibited practices).
-
2 August 2025
Application of Chapter 3, sections 1-4 and Article 78 of the AI Regulation began. These contain:
- classification rules and requirements for high-risk AI systems
- the obligations of a high-risk AI system’s provider, deployer, and other parties
- requirements concerning the designation of the notifying authority and conformity assessment bodies, i.e. notified bodies
- requirements on access to, the processing of, and the exchange of information by the market surveillance authority and other bodies implementing the Regulation.
In addition, application began of Chapter 5 on general-purpose AI models, Chapter 7 on the governance of the AI Regulation in the European Union, and Chapter 12 on sanctions. However, the article of the Regulation that provides for fines to be imposed on providers of general-purpose AI models does not yet apply.
-
1 January 2026
The Act on the Supervision of Certain AI Systems (1377/2025), which governs the national implementation of the AI Regulation in Finland, entered into force. Market surveillance under the AI Regulation has been decentralised to 15 different authorities in Finland. The Finnish Transport and Communications Agency Traficom coordinates the implementation of the AI Regulation in Finland and acts as the national contact point.
The Finnish Supervisory Agency is the competent market surveillance authority in guiding and supervising the use of high-risk AI systems in health care, social welfare, and in the processing of social security benefits.
-
2 August 2026
Application of the remaining provisions of the AI Regulation begins.
-
2 August 2027
Application of Article 6(1) of the AI Regulation begins, meaning that a conformity assessment must be performed on a high-risk AI system by a third party before the system is deployed or placed on the market.
Intended purpose and classification of the AI system
Under the AI Regulation, AI systems are classified on the basis of their intended purpose and risk, and the obligations they are subject to are risk based.
AI system refers to a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Intended purpose refers to the use for which an AI system is intended by the provider. The provider defines the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
Risk refers to the combination of the probability of an occurrence of harm and the severity of that harm.
As a rule, the AI system provider, which may also be its manufacturer, is responsible for the classification of the system. If an AI system is intended for a high-risk purpose, it is classified as high-risk and must meet more stringent requirements than other AI systems before it is placed on the market.
Provider refers to a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Placing on the market means making an AI system or a general-purpose AI model available for the first time in the Union market.
Classification of AI systems
High-risk AI systems are subject to extensive obligations before they can be placed on the market. The obligations of the AI system provider include ensuring the AI system’s conformity and that the system has undergone a conformity assessment procedure prior to its deployment or its placing on the market. The provider must also have a quality management system to ensure compliance with the AI Regulation.
High-risk AI system refers to an AI system that can have a significant adverse impact on the health, safety, or fundamental rights of natural persons. An AI system may itself be a product defined in the AI Regulation or be used as a safety component, for example, in a client information system or a benefit system. An AI system is high-risk, for example, when it performs the profiling of natural persons.
Profiling refers to any automated processing of personal data in which the personal characteristics of a natural person, such as their financial situation, health, reliability, or behaviour, are assessed by using the data.
Conformity assessment refers to the process of demonstrating whether the requirements laid down in the EU Regulation laying down harmonised rules on artificial intelligence (EU 2024/1689) for a high-risk AI system are met.
A low-risk AI system does not meet the criteria for a general-purpose or high-risk AI system. Certain low-risk AI systems are subject to transparency obligations.
A general-purpose AI system is based on general-purpose AI models, such as large language models. A general-purpose AI system can be used for different purposes either independently or connected to other AI systems.
A general-purpose AI model refers to an AI model that is very general in nature and capable of competently performing a wide range of different tasks. General-purpose AI models are trained with large amounts of data and can be integrated into other systems or applications.
AI systems prohibited on the basis of their purpose of use include systems that manipulate people or exploit their vulnerabilities, materially distorting their behaviour and causing significant harm.
A high-risk AI system in social welfare and health care
Here, a high-risk AI system refers to an AI system used in health care and social welfare, or in the processing of social security benefits, that is intended for use by or on behalf of authorities to assess the eligibility of natural persons for essential public assistance benefits and services, such as health care services, and to grant, reduce, revoke, or reclaim such benefits and services.
The assessment of the high-risk nature of an AI system in health care and social welfare, and in the processing of social security benefits, focuses in particular on whether the AI system is used by an authority or a party acting on the authority’s behalf to assess whether a person is entitled to public:
- health care services and social services or
- social security benefits.
Artificial intelligence in the client and patient information system
A social welfare client data system or a health care patient data system may itself be a high-risk AI system. A high-risk AI system may also be a separate part or component of a client or patient information system.
If a high-risk AI system used in health care or social welfare is also an information system referred to in the Act on the Processing of Client Data in Healthcare and Social Welfare, the provisions of both the Act on the Processing of Client Data in Healthcare and Social Welfare and the AI Regulation apply to it. The system then meets:
- the definition of a high-risk AI system under the AI Regulation and
- the definition of the information system referred to in the Act on the Processing of Client Data in Healthcare and Social Welfare.
Contact information
Customer service for health and social services
Ask our customer service by using the service form
By e-mail: [email protected]
By calling: +358 295 256 930 (Monday–Friday 9:00–15:00)