AI in the Insurance Industry: Balancing Innovation and Governance in 2025

What You Need To Know

  • The insurance industry’s embrace of AI calls for a governance framework to assist regulators, guide insurers, and inform consumers.
  • The National Association of Insurance Commissioners’ Big Data and Artificial Intelligence Working Group is in the midst of releasing survey results on how carriers in different lines of insurance use AI.
  • NAIC also formed a Third-Party Data and Models Task Force to address insurers’ growing use of AI systems or training data from third parties that are not regulated by state insurance departments.
  • NAIC will attempt to develop a comprehensive regulatory framework, detailed regulations, and model laws that strike a delicate balance between protecting consumers and encouraging innovation.

AI and Insurance

The insurance industry embraced the transformative power of artificial intelligence (AI) in 2024 and will continue to do so in 2025. According to surveys conducted by the Big Data and Artificial Intelligence Working Group of the National Association of Insurance Commissioners (NAIC), 88% of responding auto insurers, 70% of responding home insurers, and 58% of responding life insurers report that they use, plan to use, or plan to explore AI models in their operations. As more and more insurers integrate AI technologies into their day-to-day operations, the question on regulators’ minds is how to ensure that these technologies are used responsibly.

Beyond customer-service tools such as 24/7 virtual assistants and chatbots, AI models are also starting to revolutionize insurers’ core operations, as companies implement AI in their underwriting, pricing, claim handling, and risk management functions. For instance, early-stage tech startup FireBreak Risk developed an AI-powered application that helps homeowners compile and analyze data from images of their properties and fences to assess wildfire risk. Insurers working with FireBreak Risk can access its model and data to identify homes with low risk and strong mitigation measures and offer discounts or tailored insurance products. Clearcover, a private passenger auto carrier operating in nineteen states, uses its TerranceBot AI tool to facilitate the claim handling process. Also called “Terry,” the AI bot can analyze claim files, summarize claim information, answer specific questions from the claim representative or adjuster, and help draft correspondence to policyholders.

However, as more insurers adopt AI and integrate it into their core operations, the technology also presents new and unique risks for the industry. Underwriting, damage estimating, and claim adjusting all require accurate information, and some types of AI may hallucinate, generating false or misleading facts that could affect the accuracy of insurance decisions. Predictive models may have embedded algorithmic biases, which could create unintentional discrimination in underwriting or claims adjusting. There are also concerns that AI decisions inherently lack accountability and may prioritize cost savings over consumer protection.

Accordingly, many are calling for a governance framework to assist regulators, guide insurers, and inform consumers. Such a framework should provide increased transparency around AI’s role in insurance and ensure that AI-driven insurance decisions are accurate, trustworthy, and fair to consumers. Establishing an appropriate governance framework without stifling innovation in the industry is top-of-mind for insurance regulators, who have been working diligently to keep up with technological developments and advancements in the industry and will continue to do so.

The NAIC established the Big Data and Artificial Intelligence Working Group (Working Group) under its Innovation, Cybersecurity and Technology Committee in 2019 to study the development of artificial intelligence and its use in the insurance sector. The Working Group has conducted extensive surveys on the use of AI by different lines of insurance carriers. It has already published survey results and analysis for auto, home, and life insurers, and will release the health insurance carrier survey results in 2025.1 The Working Group will also conduct follow-up surveys with auto insurers this year to see, for example, whether the insurers have established governance programs and whether any testing of third-party-provided AI systems has occurred.

Most importantly, the NAIC developed and adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (Model Bulletin) in December 2023 (see our previous analysis here). The Model Bulletin sets out principles, guidelines, and expectations for the responsible use of AI by insurance companies and is a milestone document on which future regulations and model laws will be built. In 2024, 21 states and the District of Columbia adopted the Model Bulletin, with Massachusetts being the most recent to do so, in December 2024. We will see whether additional states adopt the Model Bulletin this year.

Special Focus on Third-Party Models and Data

Insurers often rely on third-party data to train their internal AI systems or directly acquire and use models and AI systems developed by third parties. For example, homeowner insurance carriers increasingly use third-party predictive models for pricing, which directly impacts the affordability and availability of homeowner insurance. But these third-party services are not directly regulated by state insurance departments, so regulators lack proper avenues and means to evaluate, assess, and verify the reliability of such data and models.

To develop a framework for regulatory oversight of third-party data and predictive models, including those utilizing artificial intelligence, the NAIC formed the Third-Party Data and Models Task Force (Task Force) in May 2024. The Task Force was created to address the growing impact that third-party data and models have on the insurance market; you can see our previous analysis of the Task Force’s mission and charges here. Leading insurance carriers, industry advisors and consultants, consumer advocacy groups, and other interested parties have actively participated in the Task Force’s work and provided numerous comments on its charges and work plan.

Since its formation, the Task Force has held six meetings, during which it heard presentations by the Florida Hurricane Commission on the regulation of catastrophe models and by various state and international regulators on existing regulatory frameworks. The Task Force specifically discussed and considered the Risk-Focused Surveillance Approach used by European insurance regulators, the State-Specific Market Conduct Exam Approach, and the “Trust but Verify” approach adopted in Colorado Senate Bill 21-169.

At the most recent meeting, in November 2024, several states, including Connecticut, Texas, Maine, and Pennsylvania, presented their current solutions for regulating third parties and related regulatory issues. The discussion of why current solutions are or are not working, and whether they are scalable, will inform the Task Force’s decision regarding the proposed framework. The consensus among state regulators is that insurers should retain full responsibility for the data and models they employ, regardless of whether those models are developed internally or by third parties. Accordingly, insurance companies should be cautious when engaging third-party vendors and should always require audit rights and cooperation with regulatory inquiries in their vendor agreements.

After considering and evaluating these different regulatory frameworks and approaches, the Task Force will begin formulating a regulatory framework for third-party data and models in 2025. Interested parties, especially third-party model providers, are encouraged to participate in the Task Force’s meetings and share their perspectives.

Regulators in the Year Ahead

In 2025, regulators will continue to try to strike a delicate balance between protecting consumers and encouraging innovation and technological advancement in the insurance industry. Building on extensive surveys, discussions, research, and input from various stakeholders, the NAIC Working Group and Task Force will attempt to formulate a comprehensive regulatory framework and start working on detailed regulations and model laws governing the use of AI in the insurance industry. We think regulators are likely to take a risk-focused approach: first identifying the risks of potential harm to consumers and key insurance markets, and then developing rules, processes, and systems requiring insurance companies to enhance their AI governance. We expect the NAIC and state regulators to issue more detailed and prescriptive model laws and regulations addressing the risk of AI errors, as well as unfair and discriminatory underwriting and claim handling practices.

These regulatory developments are driving insurers to enhance their AI governance and risk management practices. Insurers should conduct due diligence on AI systems, ensure compliance with data privacy laws, and implement robust internal controls to mitigate the risks associated with third-party data and predictive models. The increased regulatory scrutiny will prompt insurers to invest in AI technologies that are transparent, fair, and accountable. 

The insurtech landscape in 2025 will be defined by further innovation, the integration of advanced technologies, evolving regulatory frameworks, and shifting consumer expectations. Regulators and insurers alike will look to balance the need for innovation and advancement with the need to ensure responsible and transparent use of new technology and data. With AI technologies at the forefront of this transformation, driving efficiency, risk mitigation, and personalization in insurance operations, we will keep our eyes on the NAIC and state regulators to see what further regulatory changes may be implemented to ensure that AI technologies are, in fact, being used responsibly and transparently.


*Mallory Goodwin contributed to this alert

Footnotes

1 The survey results and analysis are available under the documents tab on the Big Data and Artificial Intelligence Working Group’s website: https://content.naic.org/committees/h/big-data-artificial-intelligence-wg.