On February 19, 2020, the European Commission (“the Commission”) published a White Paper on artificial intelligence (“AI”) entitled “A European Approach to Excellence and Trust.” This followed an announcement in November 2019 by the Commission’s President, Ursula von der Leyen, that she intended to propose rules to regulate AI within the first 100 days of her Presidency, which commenced on December 1, 2019. The White Paper was published alongside the Commission’s data and digital strategies for Europe.
The Commission also published an accompanying report on the safety and liability implications of AI, the Internet of Things and robotics, which was delivered to the European Parliament, the Council, and the European Economic and Social Committee. It identifies the ways in which existing legislation may need to be amended to account for the specific risks presented by emerging technologies, such as the creation of increasingly complex supply chains.
Some of the key takeaways from the White Paper include:
- Policy Framework. The White Paper sets out a policy framework with measures designed to bring together efforts at the regional, national and international level. It discusses the Commission’s proposed steps toward building an “ecosystem of excellence” to support the development and adoption of AI across the EU economy, as well as in the field of public administration. The White Paper notes that a “clear European regulatory framework would build trust among consumers and businesses in AI, and therefore speed up the uptake of the technology.” For example, some envisioned steps include focusing on working with Member States to secure EU-level funding, ensuring that small and medium enterprises (“SMEs”) have access to AI, and encouraging public-private partnerships.
- Key Risks of AI. The White Paper highlights some of the key risks presented by AI, including risks to fundamental rights such as privacy, human dignity, non-discrimination and the right to a fair trial. Part of this issue stems from what is known as the “black box effect”—the opacity of certain AI algorithms that prevents the reasoning underlying an AI system’s decision-making from being verified. The White Paper also highlights risks to the functioning of the liability regime, where flaws in AI embedded in products and services cause real-world harm whose root cause cannot be traced because of the opacity of the AI system. When the root cause of such harm is unclear, legal uncertainty arises, particularly regarding the allocation of responsibility for malfunctioning systems.
- Existing Legislation. The White Paper highlights that an extensive body of legislation already governs certain aspects and uses of AI, at both the sectoral and national levels. This includes data-specific legislation such as the EU General Data Protection Regulation (“GDPR”), as well as numerous pieces of legislation relating to equality and consumer protection. The White Paper notes, however, that effective application of existing legislation can be hindered by the lack of transparency around AI systems, and the Commission therefore considers that it may be necessary to adjust or clarify certain provisions. The Commission also highlights limitations in the scope of existing legislation, as well as the challenge of regulating AI-enabled products that come to market functioning in one way but adapt through machine learning to perform new tasks.
- Future Legislative Approach. With respect to the future, the Commission proposes taking measures to deal with the gaps in existing legislation, avoiding overly prescriptive regulation by adopting a risk-based approach. This would involve identifying “high risk” AI systems. The first relevant criterion for this categorization will be whether significant risks can be expected to arise given the nature of the sector in question (for example in healthcare or transportation). The Commission suggests that relevant sectors specifically be identified and addressed by any new regulatory framework. The second relevant criterion is whether the intended use of the AI system means that significant risks are likely to arise. This could be determined by looking at the potential impact on affected parties, such as where there is a risk of injury, death, or significant material or immaterial damage. These two criteria are proposed to be assessed cumulatively, and in theory, the mandatory requirements of any new regulatory framework would be directed at those systems that are identified as high risk. There may be instances where AI systems that do not fulfill these criteria are nonetheless considered high risk, such as where they may be used for intrusive surveillance technologies. The Commission also suggests the creation of a voluntary labelling scheme for AI systems not considered high risk, where operators make themselves subject to the mandatory requirements discussed below in order to achieve a quality label in relation to their AI applications and increase trust in their use of AI.
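The proposed two-criteria test can be expressed as a simple decision rule. The sketch below is illustrative only: the sector list and exceptional-use category are hypothetical placeholders drawn from the examples the White Paper mentions, not an official enumeration.

```python
# Illustrative sketch of the White Paper's proposed "high risk" test.
# The sector and use lists below are hypothetical placeholders based on
# examples cited in the White Paper, not an official list.

HIGH_RISK_SECTORS = {"healthcare", "transportation"}
EXCEPTIONAL_USES = {"intrusive surveillance"}  # high risk regardless of sector

def is_high_risk(sector: str, significant_risk_from_use: bool, use: str = "") -> bool:
    """Return True if an AI application would fall under the mandatory
    requirements of the proposed framework."""
    # Exception: certain uses are considered high risk even when the
    # two cumulative criteria are not met.
    if use in EXCEPTIONAL_USES:
        return True
    # The two criteria apply cumulatively: both the sector and the
    # intended use must present significant risks.
    return sector in HIGH_RISK_SECTORS and significant_risk_from_use
```

On this reading, a risky application in an unlisted sector escapes the mandatory requirements unless it falls within an exceptional-use category, which is why the Commission pairs the cumulative test with the exception and the voluntary labelling scheme.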
- Examples of Mandatory Legal Requirements. The types of mandatory legal requirements that the Commission proposes are:
- Providing quality training data, for example, ensuring that AI systems are trained on high-quality data so that fundamental rights are protected during the training stage, not just during deployment, and that bias or discrimination in the AI system is avoided.
- Keeping records and data, particularly records of the programming of an algorithm, so that problematic or unanticipated decisions made by AI can be traced back to their source.
- Providing clear information regarding an AI system’s capabilities and limitations, including the conditions under which it can be expected to function as intended. Citizens also should be informed when they are interacting with an AI system.
- Robustness and accuracy to ensure the risks of a proposed system are considered during development, and all reasonable measures are taken to minimise the risk of harm. This involves creating AI systems that are resilient to attacks, as well as attempts to manipulate the underlying data or algorithms.
- Human oversight, in order to ensure that human autonomy is not undermined. For example, an AI system’s output should not be immediately implemented without being validated by a human, or human intervention should at least be ensured following such implementation.
- Specific requirements for remote biometric identification—a technology that should only be used where such use is duly justified, proportionate and subject to adequate safeguards.
- Allocation of Responsibility. When deciding how responsibility for such measures should be allocated between different actors in the AI supply chain, the Commission suggests that responsibility should fall on those best equipped to address the risk in question. The Commission proposes that responsibility for use of AI should apply beyond EU borders to all relevant economic operators providing AI-enabled products or services in the EU, whether established there or not.
- Future Developments. The Commission noted that given the nature of AI, any regulatory regime would need to be adaptive, stating, “[g]iven how fast AI is evolving, the regulatory framework must leave room to cater [to] further developments. Any changes [to existing legislation] should be limited to clearly identified problems for which feasible solutions exist.”
Comments are invited on the White Paper and can be submitted until May 19, 2020.