French DPA Issues Guidelines on Data Protection and AI

On October 11, 2023, the French Data Protection Authority (the “CNIL”) published a new set of guidelines addressing the research and development of AI systems from a data protection perspective (the “Guidelines”).

In the Guidelines, the CNIL confirms the compatibility of the EU General Data Protection Regulation (“GDPR”) with AI research and development. The Guidelines focus on the development stage of AI systems.

The Guidelines are divided into seven “AI how-to sheets” in which the CNIL guides organizations through the steps necessary to develop AI systems in a manner compatible with the GDPR. The “AI how-to sheets” provide guidance on: (1) determining the applicable legal regime (e.g., the GDPR or the Law Enforcement Directive); (2) adequately defining the purpose of processing; (3) defining the role (e.g., controller, processor or joint controller) of AI system providers; (4) defining the legal basis and implementing the safeguards necessary to ensure the lawfulness of the data processing; (5) drafting a data protection impact assessment (“DPIA”) where necessary; (6) adequately considering data protection in AI system design choices; and (7) implementing the principle of data protection by design in the collection of data and adequately managing data after collection.

Noteworthy takeaways from the Guidelines include:

  • In line with the GDPR, the purpose of developing an AI system must be specific, explicit and legitimate. The CNIL clarifies that where the operational use of an AI system in the deployment phase is unique and precisely identified from the development stage, the processing operations carried out in both phases pursue, in principle, a single overall purpose. However, for certain AI systems, such as general purpose AI systems, the operational use may not be clearly identifiable at the development stage. In that case, to be deemed sufficiently precise, the purpose of the development processing must describe, in a way that is clear and intelligible for data subjects, the type of system being developed (e.g., a large language model) and its technically feasible functionalities and capabilities (i.e., capabilities that can reasonably be foreseen at the development stage).
  • Consent, legitimate interests, performance of a contract and public interest may all theoretically serve as legal bases for the development of AI systems. Controllers must carefully assess the most adequate legal basis for their specific case.
  • DPIAs covering the processing of data for the development of AI systems must address AI-specific risks, such as the risk of producing false content about a real person or the risks associated with known attacks specific to AI systems (such as data poisoning, backdoor insertion or model inversion).
  • Data minimization and data protection measures that have been implemented during data collection may become obsolete over time and must be continuously monitored and updated when required.
  • Re-using datasets, particularly those publicly available on the Internet, to train AI systems is possible, provided that the data was lawfully collected and that the purpose of re-use is compatible with the original collection purpose.

The CNIL considers AI to be a topic of priority. It has set up a dedicated AI department, launched an action plan to clarify the rules and support innovation in this field, and introduced two support programs for French AI players. New guidelines on various AI-related topics are expected to be issued in the future.
