Time 2 Minute Read

On December 7, 2023, the Court of Justice of the European Union (“CJEU”) ruled that credit scoring constitutes automated decision-making, which is prohibited under Article 22 of the EU General Data Protection Regulation (“GDPR”) unless certain conditions are met. In a case stemming from consumer complaints against German credit bureau SCHUFA, the CJEU found that the company’s fully automated calculation of a probability value regarding a consumer’s creditworthiness constitutes automated decision-making that produces a legal or similarly significant effect within the meaning of Article 22 of the GDPR, where lenders draw strongly on that value in deciding whether to extend credit.

Time 1 Minute Read

On December 12, 2023, the UK Information Commissioner’s Office (“ICO”) announced that it is producing an online resource relating to employment practices and data protection. The ICO also announced that it would be releasing draft guidance on the different topic areas to be included in the resource in stages, and adding to it over time. The ICO provided draft guidance on “Keeping employment records” and “Recruitment and selection” for consultation. The former draft guidance aims to provide direction on compliance with data protection law when keeping records ...

Time 2 Minute Read

On December 13, 2023, the Federal Communications Commission (“FCC”) voted to update its 16-year-old data breach notification rules (the “Rules”). Under the updated Rules, providers of telecommunications, Voice over Internet Protocol (“VoIP”) and telecommunications relay services (“TRS”) are now required to notify the FCC of a data breach, in addition to their existing obligations to notify affected customers, the FBI and the U.S. Secret Service.

Time 2 Minute Read

On December 8, 2023, the European Parliament and the Council reached a political agreement on the EU’s Regulation laying down harmonized rules on Artificial Intelligence (the “AI Act”).

The AI Act will introduce a risk-based legal framework for AI. Specifically, the AI Act will provide that: (1) certain AI systems are prohibited because they present unacceptable risks (e.g., AI used for social scoring based on social behavior or personal characteristics, or untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases); (2) AI systems presenting a high risk to the rights and freedoms of individuals will be subject to stringent rules, which may include data governance/management and transparency obligations, the requirement to conduct a conformity assessment procedure and the obligation to carry out a fundamental rights assessment; (3) limited-risk AI systems will be subject to light obligations (mainly transparency requirements); and (4) AI systems that are not considered prohibited, high-risk or limited-risk systems will fall outside the scope of the AI Act.

Time 1 Minute Read

On November 28, 2023, the New York Department of Financial Services (“NYDFS”) announced that First American Title Insurance Company (“First American”), the second-largest title insurance company in the United States, would pay a $1 million penalty for violations of the NYDFS Cybersecurity Regulation in connection with a 2019 data breach. The NYDFS investigated the company’s response to the data breach and alleged that First American knew of a vulnerability in its technical systems that exposed consumers’ non-public information, but failed to investigate or ...

Time 3 Minute Read

As reported on Hunton’s Employment & Labor Perspectives blog, on October 30, 2023, President Biden issued a wide-ranging Executive Order to address the development of artificial intelligence (“AI”) in the United States. Entitled the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Executive Order”), the Executive Order seeks to address both the “myriad benefits” as well as what it calls the “substantial risks” that AI poses to the country. It caps off a busy year for the Executive Branch in the AI space. In February 2023, the Equal Employment Opportunity Commission published its Strategic Enforcement Plan, which highlighted AI as a chief concern, and in April 2023, the White House released an AI Bill of Rights. The Executive Order, described as a “Federal Government-wide” effort, charges a number of federal agencies, notably including the Department of Labor (“DOL”), with addressing the impacts of employers’ use of AI on job security and workers’ rights. 

Time 1 Minute Read

On November 22, 2023, the Artificial Intelligence (Regulation) Bill was introduced into the UK Parliament’s House of Lords. The purpose of the Bill is to make provision for the regulation of AI and for connected purposes. 

Time 2 Minute Read

On November 27, 2023, the California Privacy Protection Agency (“CPPA”) published its draft regulations on automated decisionmaking technology (“ADMT”). The regulations propose a broad definition for ADMT that includes “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” ADMT also would include profiling, which would mean the “automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”

Time 2 Minute Read

On November 23, 2023, the UK government’s National Cyber Security Centre (“NCSC”) and the Republic of Korea’s National Intelligence Service (“NIS”) issued a joint advisory detailing techniques and tactics used by cyber actors linked to the Democratic People’s Republic of Korea (“DPRK”) that are carrying out software supply chain attacks. The publication follows the recent announcement of a new Strategic Cyber Partnership between the UK and the Republic of Korea, under which the two nations have committed to work together to tackle common cyber threats.

Time 1 Minute Read

On November 27, 2023, the UK government announced the first global guidelines to ensure the secure development of AI technology (the “Guidelines”), which were developed by the UK National Cyber Security Centre (“NCSC”) and the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”), in cooperation with industry experts and other international agencies and ministries. The Guidelines have been endorsed by a further 15 countries, including Australia, Canada, Japan, Nigeria, and certain EU countries (full list here).
