Secondary use of data for AI: which law applies?

Data is the fuel of artificial intelligence (AI). The availability and quality of data are crucial for the development and performance of AI systems. However, data is often collected and stored for specific purposes, such as healthcare, business, or research, and not necessarily intended or suitable for AI applications. This is where secondary use of data comes in: the use of data for purposes other than those for which it was originally collected.

Secondary use of data can offer many benefits for AI, such as:

  • Enriching primary data sources with additional information and variables to improve the accuracy and reliability of AI models;
  • Enabling cross-domain and cross-sectoral data sharing and collaboration to foster innovation and knowledge discovery;
  • Supporting evidence-based policy making and regulatory activities to address societal challenges and public interests.

However, secondary use of data also poses challenges for AI, such as:

  • Ensuring the protection of personal data and the privacy rights of data subjects, and keeping secondary use compliant (in particular with the GDPR in the EEA), especially when dealing with sensitive data such as health or biometric data;
  • Respecting the ethical principles and values of data collection and processing, such as fairness, transparency, accountability, and human dignity;
  • Establishing trust and governance mechanisms for data sharing and reuse, such as data quality standards, consent management, data stewardship, and data intermediaries.

From a legal perspective, the GDPR and similar data protection legislation do not provide comprehensive protection against AI systems, but they do offer some basic safeguards:

  • Primary and secondary use of personal data by AI, especially sensitive data, requires a proper legal basis.
  • Personal data should not be processed by AI for new, incompatible purposes.
  • Individuals should not be subject to automated decisions that are solely made by an algorithm without human intervention.

Health Data for AI: Concrete example in Europe

AI can be a powerful tool for better and more efficient patient care. The European Commission therefore proposed a regulation to set up the European Health Data Space (EHDS), which intends to address the above-mentioned challenges and unlock the full potential of secondary use of data for AI. The EHDS is a health-specific ecosystem that aims to empower individuals through increased digital access to and control of their electronic personal health data, and to foster a genuine single market for electronic health record systems, relevant medical devices and high-risk AI systems (primary use of data). Moreover, the EHDS provides a consistent, trustworthy and efficient set-up for the use of health data for research, innovation, policy-making and regulatory activities (secondary use of data).

The EHDS builds on the GDPR, the proposed Data Governance Act, the draft Data Act and the Network and Information Systems Directive. As horizontal frameworks, these provide rules that also apply to the health sector. More specific rules are, however, laid down in the European Health Data Space Regulation, taking into account the sensitivity of the data.

Secondary Use of Data for AI: Legally Compliant Solutions

Secondary use of data for AI is a promising domain as it can boost innovation and technological developments. But it’s an equally challenging domain that requires careful consideration of legal, ethical, technical, and social aspects. By fostering trust and collaboration among stakeholders, secondary use of data can contribute to the development of human-centric and value-driven AI in Europe.

New laws regulating AI systems, such as the proposed AI Act in the EU, are necessary to help realise AI's potential. Nevertheless, they will have to be in line with existing data protection safeguards. Creative yet compliant solutions are therefore needed to develop AI systems and make AI applications performant. Regulatory sandboxes are a good example: they can provide a legally safe environment to process personal data for a new purpose without regulatory enforcement, provided strict conditions are met and the processing serves the public interest.

Specifically in the health field, the EHDS is an ambitious initiative that aims to create a common framework and infrastructure for the exchange, use and reuse of health data in the EU. The EHDS can serve as a model and inspiration for other sectors and domains that want to leverage secondary use of data for AI.

Author: Geert Somers, Timelex

This topic will also be discussed during a conference on 14 November 2023 in Brussels.
