
Experts Share Thoughts on What’s Next for AI in Insurance
By Will Behnke and Barbara Bryant
Academy Policy Staff
Part II: Report From NAIC International Forum
Second of a three-part series. Read part I.
The world is evolving in unprecedented ways, and it’s hard to keep up with, much less adapt to, so many varied and far-reaching developments. How will the insurance industry be affected by rapidly emerging environmental, technological, economic, and other trends? And how can or should the industry evolve to adapt to uncertainty on multiple fronts while continuing to serve the public? More than 250 industry experts gathered in Washington, D.C., during the NAIC’s 2025 International Insurance Forum in late May to explore these and many other questions. This three-part blog series highlights what the Academy identified as key issues during the keynote sessions and expert panel discussions.
Artificial Intelligence (AI)
Panelists:
- Petra Hielkema, Chairperson, European Insurance and Occupational Pensions Authority
- Anthony Habayeb, CEO and Co-Founder, Monitaur
- Kristen Bessette, Chief Data Officer, Zurich North America
Moderator:
- Barbara D. Richardson, Chair, NAIC Innovation, Cybersecurity, and Technology (H) Committee and Special Advisor to the Arizona Department of Insurance and Financial Institutions
Current AI business applications range from records management and customer service to actuarial modeling and risk prevention. Other applications include marketing, sales, and, to an increasing extent, insurance underwriting and claims processing.
This session of the NAIC’s forum looked ahead to future uses of AI, including underwriting automation, dynamic risk assessment, and fraud detection using advanced predictive analytics. The panelists predicted that AI may one day help shift the insurance industry from its current reactive model (event detection and response) to more predictive (event forecasting) and even preventive (risk-reducing intervention) models. Examples included AI-enabled cameras and sensors on construction sites that spot unsafe behavior and prompt corrective action, as well as data integration across customer and risk profiles to propose specific insurance products or risk mitigation strategies.
In response to concerns that AI could replace workers and lead to widespread unemployment, the panelists offered some reassurance, suggesting that jobs are more likely to evolve than disappear. While rote tasks such as data entry may be automated, roles requiring complex, contextual decision-making, empathy, and relationship management will continue to call for human involvement. In the health care setting, for example, AI can be used in imaging analysis, symptom triage, and note-taking to help lower costs and increase capacity. Activities involving high-stakes underwriting or claims decisions, on the other hand, will continue to require human oversight and active engagement. Humans will always need to “check AI’s work,” both to ensure accuracy in complex situations and to maintain the trust on which broader adoption depends.
While highlighting the many areas where AI can increase efficiency and productivity, the panelists called for guardrails to ensure that the technology is used fairly and responsibly. Among the concerns shared were potential issues related to liability, data usage, and cross-jurisdictional compliance when insurtech AI applications are used. The panelists called for international cooperation in harmonizing AI-related insurance regulations to support consistency in risk classification and capital requirements.
Global coordination through such entities as the European Insurance and Occupational Pensions Authority and its U.S. counterparts could help establish standards for transparency, comprehensibility, and cross-border data governance. One example is the call for joint standards on health care AI governance issued by the World Health Organization and the Organisation for Economic Co-operation and Development.
The session also highlighted the need for regulators to define clear criteria for AI use in financial services, along with audit mechanisms to ensure that AI models perform as intended and don’t introduce systemic bias or risk. The panelists called on developers to explain AI’s use clearly to consumers, regulators, and other users to build trust, ensure transparency, and prevent errors. Additionally, there was a call for greater collaboration between the public and private sectors so that innovations can be scaled responsibly.
Academy Engagement
The Academy’s Risk Management and Financial Reporting Council (RMFRC) continues to contribute to broader AI public policy discussions. The council has created an AI issue brief series, which focuses both on how the insurance industry deploys AI and on the more technical aspects of AI. Additional work products include:
- The August 2024 comment letter in response to the U.S. Treasury’s RFI on AI.
- The June 2025 comment letter, submitted jointly by the RMFRC and the Health, Life, and Casualty Practice Councils, in response to the NAIC’s RFI on a Possible AI Model Law.
- The February 2024 additional considerations brief, Discrimination: Considerations for Machine Learning, AI Models, and Underlying Data.
- The September 2024 professionalism discussion paper, Actuarial Professionalism Considerations for Generative AI.