Texas Attorney General Ken Paxton recently launched investigations into Character.AI and 14 other technology companies for allegedly failing to comply with the safety and privacy requirements of the Securing Children Online through Parental Empowerment Act and the Texas Data Privacy and Security Act.
In December 2024, the Centre for Information Policy Leadership at Hunton Andrews Kurth published a discussion paper titled, “Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators.”
On December 3, 2024, the U.S. Federal Trade Commission published a proposed consent order that would settle its investigation into IntelliVision Technologies Corp. for making false, misleading or unsubstantiated claims regarding a lack of gender or racial bias in its AI-powered facial recognition technology.
In November 2024, the Department of Commerce’s Artificial Intelligence Safety Institute established a new task force to research and test AI models in areas critical to national security and public safety, while ODNI released guidance on the acquisition and use of foundation AI models, both part of the national security community’s response to the directives of the recent White House AI Memo and Executive Order 14110.
As reported on the Hunton Employment & Labor Perspectives blog, on October 24, 2024, the Consumer Financial Protection Bureau (“CFPB”) issued a policy statement (known as a Circular) explaining the link between the Fair Credit Reporting Act (“FCRA”) and employers’ growing use of artificial intelligence (“AI”) to evaluate, rank and score applicants and employees. Employers should take note that the FCRA applies to more than criminal history and credit reports. As the use of advanced data analysis and AI rises, employers should ensure that they are not running afoul of the FCRA’s requirements.
On November 8, 2024, the California Privacy Protection Agency Board hosted its public bimonthly meeting, during which it adopted new regulations applicable to data brokers and initiated the formal rulemaking process for proposed regulations for risk assessments, cybersecurity audits, automated decisionmaking technologies and AI, and insurance.
On November 7, 2024, the UK Information Commissioner’s Office released a report exploring data privacy concerns in genomic technology.
On November 6, 2024, the UK Information Commissioner’s Office published a report following consensual audit engagements conducted between August 2023 and May 2024 with developers and providers of artificial intelligence powered sourcing, screening, and selection tools used in recruitment.
On October 24, 2024, the White House released a memorandum implementing Executive Order 14110 on national security and responsible AI.
On October 21, 2024, the U.S. Department of Justice National Security Division issued a Notice of Proposed Rulemaking implementing Executive Order 14117 that will restrict certain transactions with high-risk countries.
On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an Industry Letter warning companies to update their AI security procedures around multifactor authentication, which are potentially vulnerable to deepfakes and AI-supplemented social engineering attacks.
On September 30, 2024, the State Council of China published the Regulations on Administration of Network Data Security (the “Regulations”), which will take effect on January 1, 2025. The Regulations cover multiple dimensions of network data security, including personal information protection, security of important data, cross-border transfers, network platform service providers’ obligations, and regulatory supervision and administration. Some of the key provisions are summarized below. In general, most of the provisions of the Regulations can also be found in other existing Chinese laws and regulations.
On October 7, 2024, the UK Information Commissioner’s Office announced the launch of a new audit framework designed to help organizations assess and improve their compliance with key requirements of UK data protection law.
On September 12, 2024, the Irish Data Protection Commission announced it had launched a cross-border statutory inquiry into Google Ireland Limited in relation to Google’s data protection impact assessment obligations under the Irish Data Protection Act.
On August 30, 2024, the Beijing Municipal Internet Information Office, Beijing Municipal Commerce Bureau and Beijing Municipal Government Services and Data Administration Bureau jointly issued the Data Export Management List (Negative List) of China (Beijing) Pilot Free Trade Zone (Version 2024) and the Administrative Measures for the Negative List.
On September 4, 2024, the Irish High Court dismissed proceedings against X related to X’s use of personal data for its AI tool Grok.
On August 1, 2024, the EU AI Act entered into force.
In June 2024, the European Union Agency for Fundamental Rights (“FRA”) published a report on the experiences, challenges and practices of data protection authorities (“DPAs”) when implementing the EU General Data Protection Regulation (“GDPR”) (the “Report”). The Report was requested by the European Commission ahead of its 2024 GDPR evaluation report, which was published on July 25, 2024.
On July 17, 2024, the King’s Speech marked the start of the UK parliamentary year. In the King’s Speech, the Digital Information and Smart Data Bill and Cyber Security and Resilience Bill were announced.
On July 9, 2024, the Federal Trade Commission issued a proposed order that banned NGL Labs, LLC, and two of its co-founders from offering an anonymous messaging app called “NGL: ask me anything” to children under the age of 18.
On July 12, 2024, the EU Artificial Intelligence Act was published in the Official Journal of the EU.
On July 2, 2024, the French Data Protection Authority (the “CNIL”) published a new set of guidelines addressing the development of artificial intelligence (“AI”) systems from a data protection perspective (the “July AI Guidelines”).
Last month, Colorado Governor Jared Polis signed into law a bill that amends the Colorado Privacy Act and introduces new obligations for processors of biometric data. The law goes into effect on July 1, 2025.
In April 2024, the National Institute of Standards and Technology released an initial draft of its AI Risk Management Framework Generative AI Profile. This blog entry provides a summary of the Generative AI Profile.
On June 7, 2024, following a public consultation, the French Data Protection Authority published the final version of the guidelines addressing the development of AI systems from a data protection perspective.
On May 1, 2024, Utah’s Artificial Intelligence Policy Act entered into effect.
On May 17, 2024, Colorado became the first U.S. state to enact comprehensive artificial intelligence legislation. This blog entry provides highlights of the key requirements.
The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth recently released a report on Enabling Beneficial and Safe Uses of Biometric Technology Through Risk-Based Regulations (the “Report”). The Report examines global laws and regulations that target biometric data and encourages adoption of a risk-based approach. According to the Report, biometric technology applications are growing and can provide societal and economic benefits. However, there are recognized concerns over potential harms for individuals and their rights, and data protection and privacy laws are increasingly targeting the collection and use of biometric data.
In April 2024, the Centre for Information Policy Leadership at Hunton Andrews Kurth published a white paper on Leveraging Data Responsibly: Why Boards and the C-Suite Need to Embrace a Holistic Data Strategy.
On April 12, 2024, the UK Information Commissioner’s Office launched the third installment in its consultation series examining how data protection law applies to the development and use of generative AI.
On March 27, 2024, the National Telecommunications and Information Administration (“NTIA”) issued its AI Accountability Report, and, on March 28, 2024, the White House announced the Office of Management and Budget’s (“OMB’s”) government-wide policy on AI risk management.
On April 1, 2024, the U.S. and UK signed a Memorandum of Understanding that details how the U.S. and UK will work together to develop tests for advanced AI models.
On March 26, 2024, the French data protection authority (the “CNIL”) published the 2024 edition of its Practice Guide for the Security of Personal Data (the “Guide”). The Guide is intended to support organizations in their efforts to implement adequate security measures in compliance with their obligations under Article 32 of the EU General Data Protection Regulation. In particular, the Guide targets DPOs, CISOs, computer scientists and privacy lawyers.
Last week, Utah Governor Spencer J. Cox signed three privacy-related bills into law. The bills are focused on, respectively, protection of motor vehicle consumer data, regulations on social media companies with respect to minors, and access to protected health information by third parties. The Utah legislature appears to be focused on data-related legislation this session, as Governor Cox signed two other bills related to AI into law last week as well.
On March 8, 2024, the California Privacy Protection Agency (“CPPA”) Board discussed and voted 3-2 in favor of further edits to revised draft regulations regarding risk assessments and automated decisionmaking technology (“ADMT”), which were released in February 2024, but did not initiate the formal rulemaking process for these regulations, which is anticipated to begin in July 2024.
On March 13, 2024, the European Parliament adopted the AI Act by a majority of 523 votes in favor, 46 votes against, and 49 abstentions. The AI Act will introduce comprehensive rules to govern the use of AI in the EU, making it the first major economic bloc to regulate this technology.
As reported by Bloomberg Law, on February 27, 2024, at RemedyFest, a conference hosted by Bloomberg Beta and Y Combinator, Federal Trade Commission Chair Lina Khan said that sensitive personal data that is linked to health, geolocation and web browsing history should be excluded from training artificial intelligence (“AI”) models.
The Federal Trade Commission held its eighth annual privacy conference, PrivacyCon, on March 6, 2024. The goal of PrivacyCon is to assemble researchers, academics, industry representatives, consumer advocates and government regulators to consider and discuss cutting-edge research and trends related to consumer privacy and data security. This year’s conference consisted of remarks by FTC Commissioners Lina Khan, Alvaro Bedoya and Rebecca Kelly Slaughter, and a total of seven panels including “Economics,” “Privacy Enhancing Technologies,” “Artificial ...
On February 28, 2024, President Biden released an Executive Order (“EO”) “addressing the extraordinary and unusual national security threat posed by the continued effort of certain countries of concern to access Americans’ bulk sensitive personal data and certain U.S. Government-related data.” In tandem with the EO, the Department of Justice’s (“DOJ’s”) National Security Division is set to issue an advance notice of proposed rulemaking (“ANPRM”) pursuant to the EO, which directs the DOJ to “establish, implement and administer new and targeted national security programming” to address the threat. The DOJ regulations will identify specific categories of “data transactions” that are prohibited or restricted due to their “unacceptable risk to national security.”
As reported on the Hunton Employment & Labor Perspectives blog, on February 15, 2024, California lawmakers introduced the bill AB 2930. AB 2930 seeks to regulate use of artificial intelligence (“AI”) in various industries to combat “algorithmic discrimination.” The proposed bill defines “algorithmic discrimination” as a “condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people” based on various protected characteristics including actual or perceived race, color, ethnicity, sex, national origin, disability and veteran status.
On February 15, 2024, the Federal Trade Commission proposed a rule that would ban the use of AI to impersonate individuals, which would extend protections of a recently finalized FTC rule against government and business impersonation. The FTC announced a public comment period for a supplemental Notice of Proposed Rulemaking (“NPR”) regarding the proposed rule that ends 60 days after being published in the Federal Register. The FTC’s swift action is in response to an AI-generated robocall mimicking President Biden that encouraged voters not to vote in the New Hampshire primary. FTC Chair Lina Khan described the FTC’s supplemental NPR as a key step in “strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals,” as malicious actors “us[e] AI tools to impersonate individuals with eerie precision and at a much wider scale.”
On February 21, 2024, the Centre for Information Policy Leadership at Hunton Andrews Kurth LLP (“CIPL”) published a white paper on Building Accountable AI Programs: Mapping Emerging Best Practices to the CIPL Accountability Framework. The white paper showcases how 20 leading organizations are developing accountable AI programs and best practices.
On January 24, 2024, the European Commission announced that it had published the Commission Decision establishing the European AI Office (the “Decision”). The AI Office will be established within the Commission as part of the administrative structure of the Directorate-General for Communication Networks, Content and Technology, and subject to its annual management plan. The AI Office is not intended to affect the powers and competences of national competent authorities, and bodies, offices and agencies of the EU in the supervision of AI systems, as provided for by the forthcoming AI Act. The Decision details the functions and tasks of the AI Office, such as:
On February 8, 2024, the Federal Communications Commission declared that calls using AI-generated, cloned voices fall under the category of “artificial or prerecorded voice” within the Telephone Consumer Protection Act (“TCPA”) and therefore are generally prohibited without prior express consent, effective immediately. Callers must obtain prior express consent from the recipient before making a call using an artificial or prerecorded voice, absent an applicable statutory exemption or emergency.
On February 6, 2024, the UK government published a response to the consultation on its AI Regulation White Paper, which the UK government originally published in March 2023. The White Paper set forth the UK government’s “flexible” approach to regulating AI through five cross-sectoral principles for the UK’s existing regulators to interpret and apply within their remits (read further details on the White Paper). A 12-week consultation on the White Paper was then held and this response summarizes the feedback and proposed next steps.
On January 22, 2024, a draft of the final text of the EU Artificial Intelligence Act (“AI Act”) was leaked to the public. The leaked text substantially diverges from the original proposal by the European Commission, which dates back to 2021. The AI Act includes elements from both the European Parliament’s and the Council’s proposals.
On January 24, 2024, the UK National Cyber Security Centre (“NCSC”) announced it had published a report on how AI will impact the efficacy of cyber operations and the cyber threats posed by AI over the next two years. The report concludes that AI “will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.” The report also notes that all types of cyber threat actors, including state and non-state, and of varying skill level, already use AI to some degree. The report further notes that AI provides capability uplift in reconnaissance ...
On January 15, 2024, the UK Information Commissioner’s Office (“ICO”) announced that it has launched a consultation series on generative AI. The series will examine how aspects of UK data protection law should apply to the development and use of the technology, with the first chapter of the series focusing on when it is lawful to train generative AI models on personal data scraped from the web. The ICO invites all stakeholders with an interest in generative AI to respond to the consultation, including developers and users of generative AI, legal advisors and consultants working ...
On January 9, 2024, the Federal Trade Commission published a blog post reminding artificial intelligence (“AI”) “model-as-a-service” companies to uphold the privacy commitments they make to customers, including promises made in Terms of Service agreements, promotional materials and online marketplaces.
On December 8, 2023, the European Parliament and the Council reached a political agreement on the EU’s Regulation laying down harmonized rules on Artificial Intelligence (the “AI Act”).
The AI Act will introduce a risk-based legal framework for AI. Specifically, the AI Act will state that: (1) certain AI systems are prohibited as they present unacceptable risks (e.g., AI used for social scoring based on social behavior or personal characteristics, untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases, etc.); (2) AI systems presenting a high risk to the rights and freedoms of individuals will be subject to stringent rules, which may include data governance/management and transparency obligations, the requirement to conduct a conformity assessment procedure and the obligation to carry out a fundamental rights assessment; (3) limited-risk AI systems will be subject to light obligations (mainly transparency requirements); and (4) AI systems that are not prohibited, high-risk or limited-risk will fall outside the scope of the AI Act.
As reported on Hunton’s Employment & Labor Perspectives blog, on October 30, 2023, President Biden issued a wide-ranging Executive Order to address the development of artificial intelligence (“AI”) in the United States. Entitled the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Executive Order”), the Executive Order seeks to address both the “myriad benefits” as well as what it calls the “substantial risks” that AI poses to the country. It caps off a busy year for the Executive Branch in the AI space. In October 2022, the White House released its Blueprint for an AI Bill of Rights, and in February 2023, the Equal Employment Opportunity Commission published its Strategic Enforcement Plan, which highlighted AI as a chief concern. The Executive Order, described as a “Federal Government-wide” effort, charges a number of federal agencies, notably including the Department of Labor (“DOL”), with addressing the impacts of employers’ use of AI on job security and workers’ rights.
On November 22, 2023, the Artificial Intelligence (Regulation) Bill was introduced into the UK Parliament’s House of Lords. The purpose of the Bill is to make provision for the regulation of AI and for connected purposes.
On November 27, 2023, the California Privacy Protection Agency (“CPPA”) published its draft regulations on automated decisionmaking technology (“ADMT”). The regulations propose a broad definition for ADMT that includes “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” ADMT also would include profiling, which would mean the “automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”
On November 27, 2023, the UK government announced the first global guidelines to ensure the secure development of AI technology (the “Guidelines”), which were developed by the UK National Cyber Security Centre (“NCSC”) and the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”), in cooperation with industry experts and other international agencies and ministries. The Guidelines have been endorsed by a further 15 countries, including Australia, Canada, Japan, Nigeria, and certain EU countries (full list here).
On November 1, 2023, 29 governments, including the U.S., the UK, the EU and China (full list available here), reached a ground-breaking agreement, known as the Bletchley Declaration. The Declaration sets forth a shared understanding of the opportunities and risks posed by AI and the need for governments to work together to meet the most significant challenges posed by the technology. The Declaration states that there is an urgent need to understand and collectively manage the potential risks posed by AI to ensure the technology is developed and deployed in a safe, responsible way. The Declaration was signed at the AI Safety Summit 2023, held at Bletchley Park in the UK.
On October 30, 2023, U.S. President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. It marks the Biden Administration’s most comprehensive action on artificial intelligence policy, building upon the Administration’s Blueprint for an AI Bill of Rights (issued in October 2022) and its announcement (in July 2023) of securing voluntary commitments from 15 leading AI companies to manage AI risks.
On October 30, 2023, the G7 leaders announced they had reached agreement on a set of International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers, pursuant to the Hiroshima AI Process. The Hiroshima AI Process was established at the G7 Summit in May 2023 to promote guardrails for advanced AI systems at a global level.
On October 11, 2023, the French Data Protection Authority (the “CNIL”) published a new set of guidelines addressing the research and development of AI systems from a data protection perspective (the “Guidelines”).
On September 29, 2023, the Centre for Information Policy Leadership at Hunton Andrews Kurth (“CIPL”) released a new paper on its Ten Recommendations for Global AI Regulation. The paper is part of CIPL’s Accountable AI project and follows several earlier contributions including Artificial Intelligence and Data Protection in Tension (October 2018), Hard Issues and Practical Solutions (February 2020), and Artificial Intelligence and Data Protection: How the GDPR Regulates AI (March 2020).
On September 19, 2023, the Director of the Federal Trade Commission Bureau of Consumer Protection, Samuel Levine, delivered remarks that provided insight into the FTC’s ongoing strategy for regulating artificial intelligence (“AI”) during the National Advertising Division’s annual conference. Levine emphasized that the FTC is taking a more proactive approach to protect consumers from the harmful uses of AI, while ensuring the market remains fair, open, and competitive. Levine expressed the belief that self-regulation is not sufficient to address the regulation of ...
On September 5, 2023, all 50 state attorneys general and four attorneys general from U.S. territories urged Congress to take action on the use of artificial intelligence (“AI”) to exploit children. In their letter to Congress, the AGs address how AI can be used to exploit children, including tracking children’s location, mimicking them and generating child sexual abuse materials such as deepfakes. Based on these concerns, the AGs collectively request that Congress establish an expert commission to study the means and methods of how AI can be used to exploit children. The AGs ...
On August 29, 2023, the California Privacy Protection Agency (“CPPA”) Board issued draft regulations on Risk Assessment and Cybersecurity Audit (the “Draft Regulations”). The CPPA Board will discuss the Draft Regulations during a public meeting on September 8, 2023.
On June 12, 2023, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted a response to the U.S. National Telecommunications and Information Administration’s (“NTIA’s”) Request for Comments (“RFC”) on Artificial Intelligence (“AI”) Accountability. The NTIA’s RFC solicited comments on AI accountability measures and policies that can demonstrate trustworthiness of AI systems.
On June 15, 2023, the UK Information Commissioner’s Office (“ICO”) called for businesses to address the privacy risks posed by generative artificial intelligence (“AI”) before “rushing to adopt the technology.” Stephen Almond, the ICO’s Executive Director of Regulatory Risk, said: “Businesses are right to see the opportunity that generative AI offers . . . . But they must not be blind to the privacy risks.” An organization wishing to use AI should seek to understand at the outset how AI will use personal data, and mitigate any known risks. The ICO stated it is ...
On June 14, 2023, the European Parliament (“EP”) approved its negotiating mandate (the “EP’s Position”) regarding the EU’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the “AI Act”). The vote in the EP means that EU institutions may now begin trilogue negotiations (the Council approved its negotiating mandate in December 2022). The final version of the AI Act is expected before the end of 2023.
On June 2, 2023, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas released what appears to be the first standing order regulating use of generative artificial intelligence (“AI”)—which has recently emerged as a powerful tool on many fronts—in court filings. Generative AI provides capabilities for ease of research, drafting, image creation and more. But along with this new technology comes the opportunity for abuse, and the legal system is taking notice.
On May 16, 2023, the French Data Protection Authority (the “CNIL”) announced its action plan on artificial intelligence (the “AI Action Plan”). The AI Action Plan builds on prior work of the CNIL in the field of AI and consists of a series of activities the CNIL will undertake to support the deployment of AI systems that respect the privacy of individuals.
On May 4, 2023, the Biden-Harris Administration announced new actions to promote responsible American innovation in artificial intelligence (“AI”). The Administration also met with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI as part of the Administration’s broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues. These efforts build upon the steps the Administration has taken so far, including the Blueprint for an AI Bill of Rights issued by the White House Office of Science and Technology Policy (“OSTP”) and the AI Risk Management Framework released by the National Institute of Standards and Technology (“NIST”). The Administration is also actively working to address national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity and safety.
On April 25, 2023, officials from the Federal Trade Commission, Consumer Financial Protection Bureau (“CFPB”), Department of Justice’s Civil Rights Division (“DOJCRD”) and the Equal Employment Opportunity Commission (“EEOC”) released a Joint Statement on Enforcement Efforts against Discrimination and Bias in Automated Systems (the “Statement”). Such automated systems are sometimes referred to as “artificial intelligence” (“AI”).
On March 29, 2023, the UK government published a white paper on artificial intelligence (“AI”) entitled “A pro-innovation approach to AI regulation.” The white paper sets out a new “flexible” approach to regulating artificial intelligence which is intended to build public trust in AI and make it easier for businesses to grow and create jobs.
On March 16, 2023, the Federal Trade Commission announced it issued orders to eight social media and video streaming platforms seeking Special Reports on how the platforms review and monitor commercial advertising to detect, prevent and reduce deceptive advertisements, including those related to fraudulent healthcare products, financial scams and the sale of fake goods. The FTC issued the orders pursuant to a resolution directing it to use all available compulsory process to inquire into this topic, and under its Section 6(b) authority, which authorizes the FTC to conduct studies that do not have a specific law enforcement purpose.
On March 15, 2023, the UK Information Commissioner’s Office (“ICO”) published an updated version of its guidance on AI and data protection (the “updated guidance”), following requests from UK industry to clarify requirements for fairness in AI.
On March 6, 2023, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth filed a response to the National Telecommunications and Information Administration’s request for comment on issues at the intersection of privacy, equity and civil rights.
On March 1, 2023, the U.S. House of Representatives Innovation, Data and Commerce Subcommittee (“Subcommittee”) of the Energy and Commerce Committee (“Committee”) held a hearing to restart the discussion on comprehensive federal privacy legislation. Last year, the full Committee reached bipartisan consensus on H.R. 8152, the American Data Privacy and Protection Act (“ADPPA”), by a vote of 53-2. With many of the same players returning in the 118th Congress, House members are eager to advance bipartisan legislation again.
As previously posted in our Hunton Employment & Labor Perspectives blog, on January 10, 2023, the Equal Employment Opportunity Commission (“EEOC”) published a draft of its Strategic Enforcement Plan (“SEP”) in the Federal Register, which outlines the EEOC’s enforcement goals for the next four years. While the EEOC aims to target a number of new areas – such as underserved workers and pregnancy fairness in the workplace – it is notable that it listed as priority number one the elimination of barriers in recruitment and hiring caused or exacerbated by employers’ use of artificial intelligence (“AI”).
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released the Artificial Intelligence Risk Management Framework (“AI RMF 1.0”), which provides a set of guidelines for organizations that design, develop, deploy or use AI to manage its many risks and promote trustworthy and responsible use and development of AI systems.
On September 23, 2022, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted a response to the UK Department for Digital, Culture, Media & Sport (“DCMS”) on its Consultation on establishing a pro-innovation approach to regulating AI (the “Response”).
On October 17, 2022, the French Data Protection Authority (the “CNIL”) imposed a €20 million fine on Clearview AI for unlawful use of facial recognition technology. The fine was imposed after the CNIL’s prior formal notice remained unaddressed by Clearview AI.
On October 4, 2022, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth published a white paper outlining 10 key recommendations for regulating artificial intelligence (“AI”) in Brazil (the “White Paper”). CIPL prepared the White Paper to assist the special committee of legal experts established by the Federal Senate of Brazil (the “Senate Committee”) as it works towards an AI framework in Brazil.
On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development and deployment of artificial intelligence (“AI”) systems.
On June 16, 2022, the Federal Trade Commission issued a report to Congress titled Combatting Online Harms Through Innovation (the “Report”) that urges policymakers and other stakeholders to exercise “great caution” about relying on artificial intelligence (“AI”) to combat harmful online content.
On June 16, 2022, Industry Minister François-Philippe Champagne and Justice Minister David Lametti introduced the Digital Charter Implementation Act, 2022 (Bill C-27), a bill that would overhaul Canada’s existing legal framework for personal information protection in the private sector. In the Canadian government’s news release, Industry Minister Champagne stated that Bill C-27, if enacted, will “give businesses clear rules to support their efforts to innovate with data and will introduce a new regulatory framework for the responsible development of artificial intelligence systems, while recognizing the need to protect young people and their information.” Bill C-27 is similar to former Bill C-11, which died in the 2021 legislative session.
On May 11, 2022, the French Data Protection Authority (the “CNIL”) published its Annual Activity Report for 2021 (the “Report”). The Report provides an overview of the CNIL’s enforcement activities in 2021 and notably shows a significant increase in the CNIL’s activity.
Organizations increasingly use artificial intelligence (“AI”)-driven solutions in their day-to-day business operations. Generally, these AI-driven solutions require the processing of significant amounts of personal data to train the AI model, which often is not the purpose for which the personal data originally was collected. There is a clear tension between such further use of vast amounts of personal data and some of the key data protection principles outlined in EU privacy regulations. On the occasion of Data Privacy Day 2022, Hunton privacy attorneys ...
Last month, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted a response to the UK Department for Digital, Culture, Media & Sport (“DCMS”) on its Consultation on Reforms to the Data Protection Regime (the “Response”). The Response also reflects views gathered from CIPL members during two industry roundtables organized in collaboration with DCMS to obtain feedback on the reform proposals. Key takeaways from the Response include the following:
On November 10, 2021, the New York City Council passed a bill prohibiting employers and employment agencies from using automated employment decision tools to screen candidates or employees, unless a bias audit has been conducted prior to deploying the tool (the “Bill”).
On November 18, 2021, the European Data Protection Board (“EDPB”) released a statement on the Digital Services Package and Data Strategy (the “Statement”). The Digital Services Package and Data Strategy is a package composed of several legislative proposals, including the Digital Services Act (“DSA”), the Digital Markets Act (“DMA”), the Data Governance Act (“DGA”), the Regulation on a European approach for Artificial Intelligence (“AIR”) and the upcoming Data Act (expected to be presented shortly). The proposals aim to facilitate the further use and sharing of personal data between more public and private parties; support the use of specific technologies, such as Big Data and artificial intelligence (“AI”); and regulate online platforms and gatekeepers.
On November 2, 2021, Facebook parent Meta Platforms Inc. announced in a blog post that it will shut down its “Face Recognition” system in coming weeks as part of a company-wide move to limit the use of facial recognition in its products. The company cited the need to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.”
On October 7, 2021, Federal Trade Commission Chair Lina Khan appointed Olivier Sylvain as a senior advisor on rulemaking and emerging technology. As announced by Fordham University School of Law, where Sylvain serves as a professor of communications, information and administrative law, Sylvain is an expert in the Communications Decency Act and also has focused his work on artificial intelligence and community-owned networked computing.
On September 14, 2021, the Federal Trade Commission authorized new compulsory process resolutions in eight key enforcement areas: (1) Acts or Practices Affecting United States Armed Forces Members and Veterans; (2) Acts or Practices Affecting Children; (3) Bias in Algorithms and Biometrics; (4) Deceptive and Manipulative Conduct on the Internet; (5) Repair Restrictions; (6) Abuse of Intellectual Property; (7) Common Directors and Officers and Common Ownership; and (8) Monopolization Offenses.
On September 10, 2021, the UK Government Department for Digital, Culture, Media & Sport (“DCMS”) launched a consultation on its proposed reforms to the UK data protection regime. The consultation reflects DCMS’s effort to deliver on Mission 2 of the National Data Strategy, which is “to secure a pro-growth and trusted data regime in the UK.” Organizations are encouraged to provide input on a range of data protection proposals, some of which are outlined below. The consultation will close on November 19, 2021, and the Centre for Information Policy Leadership (“CIPL”) will consult with members to prepare a formal response to the consultation.
The Centre for Information Policy Leadership (“CIPL”), a global privacy and security think tank founded in 2001 by leading companies and Hunton Andrews Kurth LLP, is celebrating 20 years of working with industry leaders, regulatory authorities and policymakers to develop global solutions and best practices for privacy and responsible data use.
On July 29, 2021, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted its response to the European Commission’s Consultation on the Draft Artificial Intelligence Act (the “Act”). Feedback received as part of this consultation will feed into discussions with the European Parliament and the European Council as the proposal makes its way through the EU legislative process.
On June 16, 2021, the UK Government’s Taskforce on Innovation, Growth and Regulatory Reform published an independent report containing recommendations to the Prime Minister on how the UK can reshape its approach to regulation in the wake of Brexit (the “Report”). Among wide-ranging proposals across a range of areas, the Report recommends replacing the UK General Data Protection Regulation (“UK GDPR”) with a new UK Framework of Citizen Data Rights. The proposed approach would aim to give individuals greater control over their personal data while also allowing increased data flows and driving growth in the digital economy. The Report will be considered by the Government’s Better Regulation Committee.
Building upon its April 2020 business guidance on Artificial Intelligence and algorithms, on April 19, 2021, the FTC published new guidance focused on how businesses can promote truth, fairness and equity in their use of AI.
On April 21, 2021, the European Commission (the “Commission”) published its Proposal for a Regulation on a European approach for Artificial Intelligence (the “Artificial Intelligence Act”). The Proposal follows a public consultation on the Commission’s white paper on AI published in February 2020. The Commission simultaneously proposed a new Machinery Regulation, designed to ensure the safe integration of AI systems into machinery.
On March 25, 2021, the Centre for Information Policy Leadership at Hunton Andrews Kurth organized an expert roundtable on the EU Approach to Regulating AI–How Can Experimentation Help Bridge Innovation and Regulation? (the “Roundtable”). The Roundtable was hosted by Dragoș Tudorache, Member of the European Parliament and Chair of the Artificial Intelligence in the Digital Age (“AIDA”) Committee of the European Parliament. The Roundtable gathered industry representatives and data protection authorities (“DPAs”), as well as Axel Voss, Rapporteur of the AIDA Committee.
On March 22, 2021, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth published its paper on delivering a risk-based approach to regulating artificial intelligence (the “Paper”), with the intention of informing current EU discussions on the development of rules to regulate AI.
On February 23, 2021, the Centre for Information Policy Leadership at Hunton Andrews Kurth hosted a webinar on China’s Data Privacy Landscape and Upcoming Legislation.
On January 11, 2021, the FTC announced that Everalbum, Inc. (“Everalbum”), developer of the “Ever” photo storage app, agreed to a settlement over allegations that the company deceived consumers about its use of facial recognition technology and its retention of the uploaded photos and videos of users who deactivated their accounts.