Artificial Intelligence and Insurance—Part I, American Bar Association

June 12, 2024
Publication

Summary

  • Smart contracts serve to keep the law grounded in more modern, equitable contract doctrines that serve as a counterweight to classic contract theory.
  • The smart contract offers tort-based considerations that may remove it from the exclusionary aspects of CGL and other traditional coverage.
  • It may also redefine what it is to provide coverage for “property” as it becomes an indistinguishable hybrid of hardware, software, and data.

In little more than a year since the emergence of ChatGPT, artificial intelligence (AI) has ushered in a new era, transforming industries and redefining the way we approach problem-solving. While the term “artificial intelligence” was coined in 1956, AI technology continues to advance, and it is crucial to evaluate its real-world impact and consider the challenges and opportunities it presents. This is particularly true of insurance, because it is insurance that will be looked to in the wake of mishaps involving AI.

If the November 2023 controversy over Sam Altman’s status as the chief executive officer of OpenAI is any indication, AI has captured the world’s attention, and for good reason. AI is predicted to grow “exponentially” over the next decade and may contribute up to 14.5 percent of gross domestic product in North America by 2030.1 Very few, if any, industries, businesses, or people will go unaffected. The insurance industry, which is itself having a “Generative AI Moment,” is no exception.2 Indeed, as the consultancy McKinsey & Co. wrote, AI “will have a seismic impact” on all aspects of the insurance industry.3

The first part of this two-part article will unpack several critical facets of that seismic shift, which is already reshaping the insurance world for insurers and policyholders alike, by delving into the intricate landscape of AI, focusing on its growing influence in the insurance industry and the legal challenges and opportunities that arise. We begin by discussing how AI is reshaping the insurance industry, and we include a survey of how AI is being deployed across different insurance functions such as underwriting and claims processing. We next analyze the use of AI in the context of litigation and how AI will affect the collection and introduction of evidence, issues that will ultimately affect the scope of liability insurance and associated coverage for defense costs.

In the second part of this two-part article, we will consider how the marketplace for AI-specific insurance might develop, including a discussion of the pros and cons of AI-specific insurance products, which continue to debut and evolve. If deployed thoughtfully, insurance can “help avoid legal issues of liability” and even “enhance the integration of AI into daily commercial routines while mitigating” potential downsides.4

Together, the two parts of this article will provide guidance to members of the insurance bar about a rapidly evolving landscape in which the fusion of legal and technological acumen will sculpt the future of the insurance business and insurance law, while creating opportunities for insurance practitioners. Indeed, this developing discipline holds great promise for lawyers and other insurance professionals, in part because its new and fast-moving issues give practitioners a platform on which to make their mark.

The Role of AI in Commercial Insurance

AI is revolutionizing the insurance sector, with rising interest in AI algorithms to streamline processes, enhance customer experiences, and develop innovative insurance products. From underwriting and claims processing to risk assessment, AI is reshaping the insurance landscape by providing data-driven insights and automating traditionally labor-intensive tasks. At its core, however, insurance is about clearly delineating what is covered from what is not. Doing so requires clear and unambiguous wording. Definitions often must be supplied, particularly where technology and other concepts outside the ordinary are involved. AI is no exception. In fact, as we discuss below and in greater depth in the second part of this two-part article, the failure to clearly define AI may lead to abject failure of the insurance product.

Types of AI. Broadly speaking, there are at least seven types of AI. Understanding which AI systems your company is running or your insurance is covering (or excluding) is fundamental to managing AI risk. Confounding even the clearest definitions and explanations, however, is the reality that many companies are not using just one type of AI; they are using multiple types in varying combinations. Complexity and technical inside baseball aside, knowing which systems are being used or insured is critical to managing AI risk.

  1. Reactive machines AI: These are the simplest forms of AI systems that are purely reactive and can neither form memories nor use past experiences to inform current decisions. They are meant to perform specific tasks, and their behavior is entirely deterministic.
  2. Limited memory AI: These AI systems can learn from historical data to make decisions. They can store past experiences or data for a brief time. An example of this is self-driving cars that observe other cars’ speed and direction.
  3. Theory of mind AI: This is a more advanced type of AI that can understand thoughts and emotions that affect human behavior. This AI system can interact socially. But it currently exists only in theory.
  4. Self-aware AI: This is the final stage of AI development. Self-aware AI, which currently exists only in theory and science fiction, would consist of systems that have their own consciousness and self-awareness.
  5. Artificial narrow intelligence (ANI): Also known as “weak AI,” this type of AI is meant to perform a narrow task, such as voice recognition. These systems can only learn or be taught how to do specific tasks.
  6. Artificial general intelligence (AGI): Also known as “strong AI,” this type of AI refers to a system that possesses the ability to perform any intellectual task that a human can do. Such systems can understand, learn, adapt, and implement knowledge in a broad range of tasks.
  7. Artificial superintelligence (ASI): This refers to a time when the capability of computers will surpass that of humans. ASI is currently a hypothetical concept often depicted in science fiction. It is proposed to have extraordinary cognitive capabilities, including the ability to understand and master any intellectual task that a human can do.

As insurance stakeholders work to derive a functional scope of coverage, definitions of AI will have to consider all types of AI. Failing to do so could lead to ambiguity and uncertainty about scope. Two existing definitions illustrate the dilemma. The first definition comes from the European Union’s recently enacted Artificial Intelligence Act (EU AI Act). That regulation provides the following definition of AI:

AI system means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[.]5

From a functional standpoint, a definition like that used in the European Union’s AI Act offers potential promise for insurance stakeholders looking to ensure a stable and predictable scope of coverage.

In contrast, one domestic insurer’s recent attempt to define AI for purposes of an optional policy endorsement that seeks to exclude “content created or posted for any third party . . . created using generative artificial intelligence in performance of your services”6 epitomizes circularity. The endorsement defines “generative artificial intelligence” to mean “content created through the use of any artificial intelligence application, tool, engine, or platform”7 and thereby offers little guidance to its users.

Regardless of the definition deployed in a particular instrument, the question for insurance industry participants going forward should remain constant: how to define scope in a manner that achieves consistency and reasonable contractual certainty. The answer to this question can have wide-ranging, multibillion-dollar implications.

How AI is used in commercial insurance. Most insurers are focused on searching and summarizing policies and synthesizing information to provide content and answer questions based on what the AI has learned. There is also increased interest in using AI for decision support (not decision-making) to assist underwriters in the underwriting process. Because AI can analyze vast and disparate sources of data and information and detect patterns that might escape human cognition, underwriters can focus on the most valuable risks. Likewise, claim handlers can use vast amounts of data to expedite the review of claims. But the use of AI also brings challenges, including allegations of discriminatory conduct, bias, data privacy concerns, and concerns over systemic inaccuracies without sufficient human oversight. Query, however, whether socially unacceptable outputs result from bias or simply from objective analytics. Recent legislation tries to grapple with this dilemma.8

1. Key technologies driving AI in insurance

  • Machine learning: This technology enables computers to learn and improve from experience without being explicitly programmed. In insurance, it is used for risk assessment, fraud detection, and personalized policy pricing.
  • Natural language processing (NLP): NLP allows computers to understand, interpret, and generate human language. In insurance, it is used for chatbots, claims processing, and customer service.
  • Computer vision: This technology enables computers to interpret and understand the visual world. In insurance, it is used for tasks such as damage assessment in claims processing and risk assessment.
  • Predictive analytics: This technology uses data, statistical algorithms, and machine-learning techniques to identify the likelihood of future outcomes based on historical data. In insurance, it is used for risk assessment, pricing, and claims prediction.
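
To make the machine-learning and predictive-analytics entries above more concrete, the following is a minimal, hypothetical sketch in Python (using the open-source scikit-learn library) of how an insurer might score the fraud risk of an incoming claim from historical data. The feature names, the synthetic data, and the routing threshold are invented for illustration only; no particular insurer’s model is depicted.

    # Minimal, hypothetical sketch: train a simple classifier on historical
    # claim features and score a new claim's fraud risk. All data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical features per claim: [claim_amount, days_to_report, prior_claims]
    X = rng.normal(loc=[5000, 10, 1], scale=[2000, 5, 1], size=(1000, 3))
    y = (rng.random(1000) < 0.1).astype(int)  # synthetic labels: 1 = historically flagged as fraud

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Score a new claim; anything above a chosen threshold is routed to a human adjuster.
    new_claim = np.array([[12000.0, 45.0, 3.0]])
    fraud_probability = model.predict_proba(new_claim)[0, 1]
    print(f"Estimated fraud probability: {fraud_probability:.2f}")

The point is the workflow rather than the particular model: historical data in, a probabilistic score out, with a human decision-maker acting on the score.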

2. Impact on underwriting

Automation of routine tasks expedites decision-making, reduces operational costs, and allows underwriters to focus on the more complex aspects of a risk. AI’s continuous learning enables dynamic risk assessment, which is crucial in a rapidly changing landscape. Sample underwriting use cases include:

Risk Assessment: AI can improve the risk-assessment process by being trained on demographic data to better predict risk and provide underwriters with recommendations.

Intelligent Underwriting: AI can be used to identify critical documents, extract critical data in the submission process, and then feed just that critical information to the underwriter to help make quicker decisions.

Eligibility & Product Match: AI could be used to determine eligibility based on classifications and eligibility guidelines and then suggest the best product match for the customer.

Social Media Sourcing: AI can be used to source social media to gather data about, and confirm, customers’ business operations, social interactions, and customer reviews.

Rating Errors: AI can generate notifications for underwriters identifying rating errors, their impact, and the correction needed.

Policy Manuscript Generation: AI can generate basic policy manuscripts based on class codes or operations descriptions, or even personalize a manuscript based on exposure information.

Broker Messaging: AI can generate routine, human-like communications in real time from underwriters to brokers when additional information is needed to assess a risk. While these examples represent a wide range of generative AI use cases in insurance underwriting, the list is non-exhaustive given the speed at which AI is advancing.
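
As a concrete illustration of the rating-errors use case, the following is a minimal, hypothetical sketch in Python of a decision-support check that compares a quoted rate against an expected rate table and generates a notification for the underwriter. The class codes, rates, and tolerance are invented for illustration and are not drawn from any filed rating plan.

    # Hypothetical decision support: flag quoted rates that deviate from an
    # expected rate table so the underwriter can review and correct them.
    EXPECTED_RATES = {"91340": 4.17, "91560": 2.85}  # hypothetical rate per $100 of payroll
    TOLERANCE = 0.05  # flag deviations greater than 5 percent

    def check_rating(class_code: str, quoted_rate: float) -> str | None:
        expected = EXPECTED_RATES.get(class_code)
        if expected is None:
            return f"Class code {class_code} not in rate table; manual review needed."
        deviation = (quoted_rate - expected) / expected
        if abs(deviation) > TOLERANCE:
            return (f"Possible rating error for class {class_code}: quoted {quoted_rate:.2f} "
                    f"vs. expected {expected:.2f} ({deviation:+.1%}); correction needed.")
        return None  # within tolerance; no notification

    print(check_rating("91340", 5.10))  # prints a notification for the underwriter

Note that the human underwriter remains the decision-maker; the system only surfaces the discrepancy, its size, and the need for correction.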

3. Implications of AI-driven risk assessment

  1. Improved risk assessment
    Today, machines can aggregate and interpret data, prioritize vulnerabilities, contextualize risk scoring, and measure exposures and countermeasures independently, resulting in more precise risk evaluations.
  2. Automation of underwriting processes
    We are also seeing increased opportunities to leverage AI to automate and streamline the data collection and analysis process, reducing the time and effort required for risk assessment. Using AI algorithms to analyze large volumes of data and identify patterns and trends, insurers are exploring ways to assess risk, improve efficiency, and reduce operational costs.
  3. Impact on premium pricing
    AI transforms premium pricing in insurance by enabling precise underwriting through data-driven insights. It facilitates dynamic pricing models that adapt to real-time risk factors, incorporates usage-based metrics (e.g., telematics in auto insurance), and detects and mitigates fraud. AI-driven predictive modeling anticipates future risks, allowing insurers to proactively adjust premiums. Customer segmentation and behavioral analytics enable personalized premium pricing, enhancing competitiveness and customer satisfaction. Overall, AI improves accuracy, responsiveness, and customization in setting premiums, optimizing the balance between risk and pricing in the insurance industry.
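
For readers who want to see the usage-based pricing idea in miniature, the following is a hypothetical Python sketch of a telematics-style premium adjustment. The base premium, weights, and caps are invented for illustration only and are not drawn from any filed rating plan.

    # Hypothetical usage-based premium adjustment driven by telematics-style signals.
    def usage_based_premium(base_premium: float,
                            annual_miles: float,
                            hard_brakes_per_100_miles: float,
                            pct_night_driving: float) -> float:
        multiplier = 1.0
        multiplier += 0.10 * (annual_miles - 12000) / 12000  # credit for below-average mileage, surcharge above
        multiplier += 0.05 * hard_brakes_per_100_miles       # harsh-braking surcharge
        multiplier += 0.20 * pct_night_driving               # late-night driving exposure
        # Keep the final factor within a bounded range so pricing stays predictable.
        multiplier = min(max(multiplier, 0.80), 1.40)
        return round(base_premium * multiplier, 2)

    # A driver with above-average mileage, light braking, and some night driving:
    print(usage_based_premium(1200.00, 15000, 0.8, 0.15))  # prints 1314.0

The legal questions raised throughout this article attach to exactly these choices: which signals are used, how they are weighted, and whether the resulting factors are explainable and nondiscriminatory.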

Insurance Claims and Insurance Litigation

It takes little imagination to recognize the potential for AI to affect insurance underwriting, claims processing, and even the litigation of disputed claims. The use of AI in claims processing is no longer hypothetical, with multiple insurers already coming under attack for how AI is aiding their claims handling. The online insurer Lemonade has deployed its AI technology—AI Jim—to purportedly streamline and add efficiency to its claims process.9 Yet, despite the advent of technologies like AI Jim, the use of AI in claims processing remains new. And because it is an unfamiliar legal area, there are not currently many fixed legal rules governing insurers’ conduct in this space.10 But one thing is clear now: For every potential benefit AI offers insurers in the claims process, corresponding legal risks must be considered. Indeed, only by taking a proactive approach that considers all the pertinent angles can relevant stakeholders avoid unwitting AI-generated pitfalls.

AI and insurance claims. AI’s impact on claims processing cuts both ways: it can revolutionize claims processing, but it may also come at a substantial cost for both policyholders and insurers. Starting with the potential benefits, AI-driven claims processing could increase efficiency by automating various routine tasks, ranging from data collection and documentation analysis to fraud detection. Such automation may reduce the time required to process claims, enabling insurers to provide quicker responses to policyholders. Faster claims resolution may contribute to increased customer satisfaction and loyalty.

Depending on how AI claims technologies are deployed, insurers could also minimize human errors that have given rise to liability under state bad-faith statutes for inadequate or faulty claims handling.11 One reason is that AI systems, equipped with machine-learning algorithms, could analyze vast datasets with precision and thus potentially improve the reliability of claims processing while removing the risk of human-centric animus.

Despite these possible benefits, the deployment of AI in claims processing is not without potential drawbacks. As noted throughout this article, the potential for bias in AI algorithms is substantial. That is, if the training data used to develop these algorithms reflect historical biases, the AI systems may exacerbate or perpetuate these inequalities. Data privacy is, as detailed below, another critical risk associated with AI-driven claims processing. Further, as AI systems take on more decision-making roles in insurance, questions arise about the transparency of these decisions and the accountability of algorithms. For example, there remains a real possibility that AI algorithms could be programmed to reflexively deny claims or limit payouts despite contrary policy language and applicable background legal principles.

These benefits and drawbacks have only recently started to influence state insurance regulation. For example, as of April 30, 2024, 11 jurisdictions have adopted a model NAIC bulletin aimed at regulating the use of AI in the insurance industry.12 Four jurisdictions (California, Colorado, New York, and Texas) have also adopted insurance-specific regulations or guidance relative to AI.13 State regulations have focused on avoiding discriminatory outcomes, among other things.14 However, because government regulation in this area is in its relative infancy, only time will tell how much state-specific regulation will affect AI-driven claims handling.

Not only have state governments been calling out the risks of AI-driven claims handling—so too have class action plaintiffs, as shown by recent lawsuits against health insurers like UnitedHealth and Cigna.15 Take, for example, the UnitedHealth lawsuit pending in the U.S. District Court for the District of Minnesota. There, the estate of a deceased plaintiff has sued United Healthcare on behalf of a putative class, alleging that United illegally deployed AI “in place of real medical professionals to wrongfully deny elderly patients care owed to them . . . by overriding their treating physicians’ determinations as to medically necessary care based on an AI model” that United allegedly knew had a “90% error rate.”16 Based on this overarching allegation and other supporting factual allegations, the plaintiffs asserted claims for breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, and state-law bad faith. The lawsuit against Cigna involves similar allegations.17 While these cases are in their early stages, the allegations themselves show how litigation over AI in claims processing might develop.

A duo of 2022 decisions—one from Washington and one from Delaware—confirms that AI-generated claims processing may give rise to legal liability.18 In the Washington case, the Washington Court of Appeals held that a health insurer’s practice of using a computer database to determine the reasonableness of a medical charge amounted to an unfair trade practice because the insurer did not undertake an individualized review.19 Across the country in Delaware, by contrast, the Delaware Supreme Court emphasized the reasonableness of the fees themselves rather than the process used to determine whether fees are reasonable.20 One takeaway from these cases is that insurers may have to justify not only their ultimate decision on a claim but also the process used to reach that decision. A 2016 District of Arizona decision further confirms that challenges to technology-driven claims processing can get past the pleadings stage—and even summary judgment. In that case, a plaintiff alleged that an insurer was negligent and breached its duties by “improperly using . . . inadequate software” to deprive the insured of coverage under a homeowner’s policy.21 The court denied the insurer’s motion for summary judgment on the negligence claim, reasoning that “it may have been negligent for defendant to rely solely on its computer system to determine policy limits,” among other things.22 This case signals one potential restraint on insurers’ conduct: Insurers are likely to be required to retain individualized, human-centric review as part of their processes, no matter how good AI becomes in the near term.

While policyholders can state viable claims relative to AI technology in the insurance industry, a 2018 Eastern District of Pennsylvania case reaffirms that courts generally require a plaintiff to prove up specific flaws with a given computer-assisted technology.23 In that case, an insured pursued a bad-faith claim against an insurer that used a computer model called Xactimate to calculate depreciation without “investigating the ‘assumption models’ Xactimate relies on.”24 The court rejected the insured’s argument, stating that it did “not persuade th[e] Court.”25 The court reasoned that the Xactimate program was already an “industry standard computer program” and stated that the insured’s argument would have been “stronger” if it involved “specific evidence” of how the Xactimate model was flawed.26 The court also emphasized that generic complaints about assumptions were insufficient; the insured had to present “evidence that those assumption[s] [were] unreasonable. . . .”27

In sum, the use of AI in claims processing brings with it many potential advantages and obstacles. As these issues are increasingly litigated, courts, policyholders, and insurers alike will want to monitor the rules that develop. The developing case law and the increased state-driven regulatory interest reflect a high degree of uncertainty about the legal liabilities created by the use of AI in the claims process. This new field also raises litigation-specific uncertainties, including under the Federal Rules of Civil Procedure and the Federal Rules of Evidence relative to both the discoverability and admissibility of AI-generated evidence. That is, as cases like United Healthcare and Cigna get past the pleadings stage (if they do), it will become essential for lawyers to consider how best to learn about and litigate with AI-generated evidence.

Litigating claims involving AI-generated evidence under the Federal Rules of Civil Procedure and the Federal Rules of Evidence. Electronic evidence is and has been essential in twenty-first century legal proceedings.28 For some time now, courts have been grappling with the discoverability and admissibility of text message evidence, mobile communications, and social media posts, among other types of electronic evidence.29 Since at least a 2012 ruling from the U.S. District Court for the Southern District of New York, courts have permitted the use of machine-learning tools to help with e-discovery.30 And in 2016, the Wisconsin Supreme Court held that courts may consider predictive modeling when imposing a sentence, even though courts may not rely solely on predictive modeling for the sentence imposed.31 While certain technological advances like these have been accepted by the courts, AI still represents a new frontier that will transform litigation generally and insurance litigation specifically.

One important question is how AI-generated evidence will be treated under the Federal Rules of Civil Procedure. In proceedings governed by the Federal Rules, discoverability is determined by Federal Rule of Civil Procedure 26, which provides that “[p]arties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case. . . .”32 Information “need not be admissible in evidence to be discoverable.”33 The U.S. Supreme Court has further cautioned that these rules should be applied broadly34 because the “[m]utual knowledge of all the relevant facts gathered by both parties is essential to proper litigation.”35 This broad discovery standard is the standard against which AI-generated evidence will be judged. And because the standard for discoverability is so broad, courts are likely to at least allow some discovery relative to AI-generated content.

But as discussed below, vexing questions remain: whether AI-generated output is akin to testimony; if so, whether those against whom the testimony is offered have a right to examine that evidence, thereby subjecting the generative algorithm and its data to discovery; and whether a given software application or algorithm is even to be considered “AI.”

Other questions also remain about the reliability and authenticity of AI-generated content when courts are evaluating whether that evidence is admissible under the Federal Rules of Evidence.36 And the only way to identify whether AI-generated content is reliable or authentic is to allow discovery about it. It follows that courts are likely to allow at least some amount of discovery relative to AI-generated content. The tougher questions concern the parameters of such discovery.

1. People v. Wakefield (N.Y. 2022): Addressing the scope and practicalities of AI in the courtroom

People v. Wakefield discusses the use of AI in forensic analysis, specifically the use of the TrueAllele system to interpret DNA evidence.37 The court’s decision grapples with the reliability of AI in a legal context and the potential implications for defendants’ rights. But the case does not provide a definitive answer on whether defendants should be granted access to proprietary source code to challenge the reliability of AI systems.

The primary issue in Wakefield was the admissibility of the TrueAllele software’s results under the Frye standard. The court found that the software was reliable and admissible, but the case raises other concerns about the use of AI in the criminal justice system. The defendant argued that the AI-generated output was akin to an expert offering opinion testimony and that he was therefore denied his right to confront witnesses because he was not given access to the software’s source code. The court explained:

Defendant further argues that the trial court’s denial of his request for the source code so that an expert could review it was a violation of his constitutional right to confrontation. The Sixth Amendment Confrontation Clause provides that, “‛[i]n all criminal prosecutions, the accused shall enjoy the right . . . to be confronted with the witnesses against [them]’” (Crawford v Washington, 541 U.S. 36, 42 [2004]).
. . . .

Although a computer cannot be cross-examined, as Dr. Perlin explained, the computer does the work, not the humans, and TrueAllele’s artificial intelligence provided “testimonial” statements against defendant as surely as any human on the stand.38

The court did not rule definitively on these issues, but it did acknowledge that the use of AI in the courtroom raises profound questions that will likely plague courts for years to come, even characterizing that breadth as potentially destabilizing:

The use of artificial intelligence within our system of justice presents challenging questions and may destabilize our established notions of the dividing line between opinion and uncontestable fact (see e.g., Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L Rev 54, 62–82 [2019]; Andrea Roth, Machine Testimony, 126 Yale LJ 1972, 2021–2022 [2017]). Courts across the country will decide how our federal and state constitutions may be interpreted in light of continued technological advances and their application in the courtroom.39

2. People v. Burrus (N.Y. Sup. Ct. Sept. 8, 2023): Discussing whether a software application or algorithm is, itself, AI

People v. Burrus40 is of interest because it discusses the definition and application of AI in the evidentiary context. Like Wakefield, Burrus speaks from the perspective of forensic DNA analysis. The decision highlights the importance of clearly defining AI and how a failure to do so can lead to ambiguity. In Burrus, an expert in forensic biology testified that the FST (DNA analytics) software did not fit a particular definition of AI because the platform does not use machine learning, neural networks, or decision trees. The same expert later testified, however, that the FST platform did qualify as AI when AI was defined more broadly to include automated decision-making systems.

What Does the Future Hold for AI in the Courtroom?

Because the use of AI-generated content in court proceedings is in its infancy, it is too early to tell how courts will evaluate the newest discovery challenges posed by AI. Early indications are that AI will transform discovery practice, including under Rule 26, which generally dictates what is and is not discoverable. For instance, according to William Eskridge Jr., a professor of public law at Yale Law School, Rule 26(b)’s proportionality requirement may be challenged by AI.41 One reason is that AI technologies may allow lawyers to review more documents at a lower cost, which may reshape current notions of proportionality. Other commentators have noted that AI technologies may also require greater up-front discussion to make sure that all parties and courts are on the same page as the case proceeds.42

Because the standards for discoverability are laxer than those for admissibility, the more complicated questions concern how insurance lawyers and litigators should approach evidentiary issues under the Federal Rules of Evidence. The largest AI-specific challenges are likely to relate to the authenticity and reliability of AI-generated content and testimony, rather than to threshold showings of relevance. Even though the relevance standard is more stringent under the Rules of Evidence than under the Rules of Civil Procedure, the required threshold showing is still not especially high.43

Although the relevance threshold is moderately low, Federal Rule of Evidence 403 still provides a colorable basis to exclude certain AI-generated evidence. Rule 403 provides that the “court may exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.”44 While courts generally interpret Rule 403 in favor of admissibility, the rule still provides potentially strong grounds for a court to deny the admission of AI-generated evidence.45 The reason is that AI technology may cause unfair prejudice, confuse the issues, or mislead a jury. And judges may not be ideally positioned to determine whether a jury could be misled by AI evidence without first understanding how the technology works. Likewise, judges may be unable to assess the likelihood of jury confusion without understanding whether the AI being considered in a case is valid and reliable.46 In this way, the Rule 403 analysis is at least partially dependent on the two most vexing AI-related evidentiary questions: authenticity and reliability.

Proving the authenticity and reliability of an AI technology may require counsel to do more legwork than would otherwise be required for more generally accepted or well-known technologies.47 For example, without educating the court about the development and use of the AI, it will be very difficult for a court to determine the reliability or relevance of that evidence.48 Anticipating the need for greater explanation, trial judges may ask the parties to apprise the court early on about whether they intend to offer AI evidence, perhaps requesting briefing or limited discovery to inform the issues.49 The greater complexity of AI systems may also diminish the frequency of contemporaneous evidentiary rulings in favor of up-front, thorough judicial processes and procedures for determining the admissibility of AI-generated evidence.

Apart from Rule of Evidence 403, the authentication of AI-generated evidence raises questions under Rules 901(a) and 602. Rule 901(a) provides that “[t]o satisfy the requirement of authenticating . . . an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.”50 Rule 602 in turn requires that a witness have personal knowledge of the matter, which arguably means that an authenticating witness must understand how the AI technology functions in order to authenticate it.51 Because of the complexity and novelty of certain AI technologies, multiple witnesses may be required.52 One solution may be the use of an expert to authenticate the AI technology, which would allow the witness to testify based on inputs received from others.

But expert testimony will not be without challenges. Expert testimony, as always, is subject to additional scrutiny under Federal Rules of Evidence 702 and 703, Daubert v. Merrell Dow Pharmaceuticals, Inc.,53 Kumho Tire Co. v. Carmichael,54 and their progeny. These rules and cases require that an expert witness provide reliable testimony based on sufficient facts or data, that is the product of reliable principles and methods, and that reflects the reliable application of those principles and methods to the facts of the case. One reason is that “[u]nreliable evidence has no tendency to prove or disprove facts that are of consequence to resolving a case or issue.”55

Heeding the above rules, insurance practitioners should brush up on the Rules of Evidence and Civil Procedure. And even if battles over discoverability and admissibility are lost, the weight afforded to any AI evidence is still subject to question. That is, even if AI technology gets past the gatekeeper, deficiencies and biases still present obstacles before the trier of fact.56

Stay tuned for the second part of this two-part article, which will be published in the next issue of Insurance Coverage.

Co-authored with Iris Devriese, Client Manager and Underwriter at Munich Re, and Shiva Balasubramaniyan, Chief Innovation Officer for Capgemini.


Notes

1 Anat Lior, “Insuring AI: The Role of Insurance in Artificial Intelligence Regulation,” 35 Harv. J.L. & Tech. 467 (2022).
2 Christopher Freese, “Leading Insurers Are Having a Generative AI Moment,” Bos. Consulting Grp., Aug. 17, 2023.
3 Ramnath Balasubramanian et al., Insurance 2030–The Impact of AI on the Future of Insurance (McKinsey & Co. Mar. 12, 2021).
4 Lior, “Insuring AI: The Role of Insurance in Artificial Intelligence Regulation,” supra.
5 Artificial Intelligence Act, art. 3, Eur. Parl. Doc. P9_TA(2024)0138 (Mar. 13, 2024).
6 Philadelphia Consolidated Holding Corp., Musical Composition and Generative Artificial Intelligence Exclusion, Form PI-IT-036 (09/23).
7 Philadelphia Consolidated Holding Corp., Musical Composition and Generative Artificial Intelligence Exclusion, Form PI-IT-036, supra.
8 Abraham Gross, “Colo. AI Bias Law Brings Little Certainty For Insurance Sector,” Law360, May 23, 2024.
9 Ilkhan Ozsevim, “Lemonade Sets World Record with 2-Second AI Insurance Claim,” AI Mag., June 14, 2023.
10 While there are not yet many formal rules, the National Association of Insurance Commissioners (NAIC) issued a model bulletin on the use of AI systems in insurance. The bulletin is one step among many to create a comprehensive set of regulatory standards to ensure the responsible deployment of AI in the insurance industry. See NAIC Model Bulletin, The Use of Artificial Intelligence Systems in Insurance (Dec. 4, 2023).
11 Cf. Carrol v. Allstate Ins. Co., 815 A.2d 119, 130 (Conn. 2003) (holding that the evidence supported a faulty human-driven investigation, in part because the operative people conducted a hasty, incomplete, and ill-motivated investigation).
12 NAIC, Implementation of NAIC Model Bulletin, Use of Artificial Intelligence Systems by Insurers (map) (Apr. 30, 2024).
13 NAIC, Implementation of NAIC Model Bulletin, Use of Artificial Intelligence Systems by Insurers (map), supra.
14 Daphne Zhang, “Insurers’ AI Use for Coverage Decisions Targeted by Blue States,” Bloomberg L., Nov. 30, 2023.
15 Ken Alltucker, “Is Your Health Insurer Using AI to Deny You Services? Lawsuit Says Errors Harmed Elders,” USA Today, Nov. 19, 2023; Richard Nieva, “Cigna Sued Over Algorithm Allegedly Used to Deny Coverage to Hundreds of Thousands of Patients,” Forbes, July 24, 2023.
16 Complaint at 1, Lokken v. UnitedHealth Grp., Inc., No. 0:23-cv-03514-WMW-DTS (D. Minn. Nov. 14, 2023), ECF No. 1.
17 Complaint, Kisting-Leung v. Cigna Corp., No. 2:23-at-00698 (E.D. Cal. July 13, 2023).
18 Compare Schiff v. Liberty Mut. Fire Ins. Co., 520 P.3d 1085 (Wash. Ct. App. 2022), review granted, 526 P.3d 844 (Wash. 2023), with GEICO Gen. Ins. Co. v. Green, 276 A.3d 462 (Del. 2022).
19 Schiff, 520 P.3d at 1094–95.
20 GEICO, 276 A.3d at 462.
21 Lewis v. Allstate Ins. Co., No. 3:15-cv-8074-HRH, 2016 WL 5408332, at *6–7 (D. Ariz. Sept. 28, 2016).
22 Lewis, 2016 WL 5408332, at *7.
23 Sands v. State Farm Fire & Cas. Co., No. 5:17-cv-4160, 2018 WL 1693387, at *5 (E.D. Pa. Apr. 6, 2018).
24 Sands, 2018 WL 1693387, at *5.
25 Sands, 2018 WL 1693387, at *5.
26 Sands, 2018 WL 1693387, at *5.
27 Sands, 2018 WL 1693387, at *5.
28 UNESCO, How to Determine the Admissibility of AI-Generated Evidence in Courts? (July 21, 2023; last updated July 26, 2023).
29 See UNESCO, How to Determine the Admissibility of AI-Generated Evidence in Courts?, supra.
30 See Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012).
31 See State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
32 Fed. R. Civ. P. 26(b)(1).
33 Fed. R. Civ. P. 26(b)(1).
34 Hickman v. Taylor, 329 U.S. 495, 506 (1947) (“[D]iscovery provisions are to be applied as broadly and liberally as possible. . . .”).
35 Hickman, 329 U.S. at 507.
36 Paul W. Grimm et al., “Artificial Intelligence as Evidence,” 19 Nw. J. Tech. & Intellectual Prop. 9 (2021).
37 People v. Wakefield, 195 N.E.3d 19 (N.Y. 2022).
38 Wakefield, 195 N.E.3d at 23–24.
39 Wakefield, 195 N.E.3d at 24.
40 People v. Burrus, No. 817/2020, 2023 N.Y. Misc. LEXIS 5805 (N.Y. Sup. Ct. Sept. 8, 2023).
41 Cassandre Coyer, “Generative AI and Federal Rules of Civil Procedure: Is It Meant To Be?,” ALM, Oct. 13, 2023.
42 See Coyer, “Generative AI and Federal Rules of Civil Procedure: Is It Meant To Be?,” supra.
43 Federal Rule of Evidence 401 provides that evidence is relevant if “(a) it has any tendency to make a fact more or less probable than it would be without the evidence; and (b) the fact is of consequence in determining the action.” This is generally a low bar. That is, under this standard, AI-generated content has at least some tendency to make a fact more or less probable.
44 Fed. R. Evid. 403.
45 Am. Ass’n for the Advancement of Sci. (AAAS), Artificial Intelligence, Trustworthiness, and Litigation (Sept. 2022).
46 AAAS, Artificial Intelligence, Trustworthiness, and Litigation, supra, at 12.
47 AAAS, Artificial Intelligence, Trustworthiness, and Litigation, supra, at 14–15.
48 AAAS, Artificial Intelligence, Trustworthiness, and Litigation, supra, at 13–14.
49 AAAS, Artificial Intelligence, Trustworthiness, and Litigation, supra, at 12.
50 Fed. R. Evid. 901.
51 Fed. R. Evid. 602.
52 AAAS, Artificial Intelligence, Trustworthiness, and Litigation, supra, at 13–14.
53 Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).
54 Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999).
55 Sedona Conf., Commentary on ESI Evidence & Admissibility, Second Edition, 22 Sedona Conf. J. 83 (2021).
56 See Patrick W. Nutter, “Machine Learning Evidence: Admissibility and Weight,” 21 Univ. Pa. J. Const. L. 919 (2019).

©2024. Published in the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.
