When Algorithms Deny: AI and the New Frontier of Bad Faith

As industries across the board adopt artificial intelligence (AI), insurers have likewise recognized its potential to enhance efficiency in claims administration. Yet with this opportunity comes tension: consumers expect both swift resolution of claims and individualized attention. AI can help insurers meet those expectations, but if implemented without transparency or adequate safeguards, it may also create significant legal and reputational risks.
Insurers have long been bound by duties both in marketing their products and in processing claims, chief among them the implied covenant of good faith and fair dealing, which is read into virtually every contract of insurance. While the scope of these duties can vary significantly between coverages, and even more so from state to state, the terms of the contract consistently govern the relationship between insurer and insured. Moreover, an insurer’s claims handling practices are most often evaluated against either the representations made to its insureds before the contract was issued or the standards set out in the applicable consumer fraud and trade practices statutes of the state of issuance.
The Model Unfair Trade Practices Act (UTPA) developed by the National Association of Insurance Commissioners (NAIC) provides the basic framework for most jurisdictions. Among the “unfair claims practices” contemplated by the UTPA are acts by an insurer such as “knowingly misrepresenting to claimants and insureds relevant facts or policy provisions relating to coverages at issue”; “failing to acknowledge with reasonable promptness pertinent communications with respect to claims arising under its policies”; “failing to adopt and implement reasonable standards for the prompt investigation and settlement of claims arising under its policies”; and “refusing to pay claims without conducting a reasonable investigation.”
In the AI context, this framework raises novel questions. On one hand, consumers could argue that an insurer’s failure to adopt and implement AI is itself an unfair claims practice, because AI can benefit insureds by enabling insurers to promptly acknowledge communications and to expedite the investigation and settlement of claims with fewer resources. On the other hand, an insurer’s use of AI may give rise to unfair claims practices of its own, as the technology carries risks including “hallucinations” (the phenomenon in which generative AI programs produce incorrect, misleading, or nonexistent content) and bias, which can result from limitations in the data used to train the system. Insurers adopting automation must therefore not only manage these risks but also ensure that their marketing representations about the claims process remain accurate.
Recent litigation underscores these concerns. For example, in Estate of Lokken v. UnitedHealth Group, Inc., 766 F. Supp. 3d 835 (2025), a federal district court allowed claims for breach of contract and breach of the covenant of good faith and fair dealing to proceed against a healthcare plan provider despite finding that the majority of the claims were preempted by the Medicare Act. Although the court agreed with the provider that the statutory bad faith and unfair trade practices claims sought rulings on coverage determinations within the purview of the Medicare Act, the court distinguished these from plaintiffs’ claims that the provider never disclosed its use of AI to render claims decisions. In fact, plaintiffs argued that they paid premiums based on the provider’s representations that claims decisions would be made by “clinical services staff” and “physicians” when it instead used an AI program without any oversight or internal appeals process. Id. As such, the district court held that the dispute was not over the coverage determinations themselves, and these claims therefore did not pertain to the same conduct regulated by the Medicare Act.
Building on Lokken, another district court recently cited its reasoning with approval and went a step further. In In re Humana, Civil Action No. 3:23-cv-654-RGJ, 2025 WL 2375645 (W.D. Ky. Aug. 15, 2025), the court allowed an unjust enrichment claim to proceed based on allegations that Humana collected premiums without disclosing its reliance on an AI program to make coverage determinations. The court explained that “the crux of the claims in this case and in Lokken, arise from insurance companies utilizing AI to make insurance coverage determinations … [it] is not whether the use of AI to make coverage opinions is prohibited under the Medicare Act, but whether insurance companies use of AI is in violation of its contract with insureds.” Id.
While both cases remain at the pleading stage, they signal a potential shift: courts may be willing to treat nondisclosure of AI in claims handling as a breach of contractual or equitable duties. And outside the Medicare Advantage context, insurers will not be able to rely on federal preemption as a defense, leaving them more vulnerable to statutory bad faith and unfair trade practices claims.
As the insurance industry, like so many others, works to find the appropriate place for AI, here are some initial takeaways:
- Review the company’s marketing materials to ensure that the use of AI programs remains consistent with representations regarding the claims review process.
- Adopt internal procedures for review and approval of AI-generated reports or customer messaging.
- Adopt internal procedures to test AI systems for accuracy, fairness, and bias, and adjust as necessary to align with contractual and regulatory obligations.
- Consider updating materials and disclaimer language to inform customers about ways the company is implementing AI programs for their benefit with appropriate controls and safeguards.
- Ensure claims staff understand both the capabilities and limitations of AI tools so they can exercise meaningful judgment.
- Ensure customers have a method of seeking review of any automated processes.
As AI continues to transform insurance, the difference between innovation and exposure will hinge on transparency, oversight, and adherence to the covenant of good faith. Insurers that embrace these principles will be better equipped to meet consumer expectations and withstand judicial scrutiny. Cozen O’Connor stands ready to help insurers navigate this evolving landscape with practical safeguards and compliance strategies.