By Tricia Matson
In light of the excellent article by Fei Huang and Xi Xin (Xin is the recipient of the 2025 Academy Award for Research) in this issue, I wanted to write about why actuaries are so well positioned to address the ethical use of artificial intelligence (AI) in the risk-assessment process for selling, pricing, and managing insurance risk.
AI and machine learning are transforming the insurance landscape—streamlining underwriting, refining risk assessment, and personalizing customer experiences. But with these advances comes a critical challenge: algorithmic bias. When predictive models inherit bias from historical data or flawed assumptions, the consequences can be profound—unfair pricing, inequitable access, and reputational risk for insurers.
Actuaries are uniquely positioned to lead the conversation on ethical governance for AI in insurance. Our profession has long been grounded in principles of fairness, transparency, and accountability—values that align perfectly with the need for responsible algorithmic design.
Why does this matter?
Algorithms increasingly influence decisions once made by human judgment. If left unchecked, bias can creep in through data selection, feature engineering, or optimization objectives. For example, a model trained on historical claims data may unintentionally penalize certain demographic groups, perpetuating inequities.
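To make this concrete, here is a minimal illustrative sketch (not from the article; all data, names, and the 0.8 threshold are hypothetical) of how a simple disparate-impact check can surface the kind of bias described above:

```python
# Hypothetical example: compare a model's approval rates across two groups
# to compute a demographic-parity (disparate-impact) ratio.
# All data below is invented for illustration only.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0 or 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of approval rates; values well below 1.0 suggest
    group_a is disadvantaged relative to group_b."""
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical model decisions (1 = approved, 0 = declined)
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8 approved
group_b = [1, 1, 0, 1, 1, 1, 0, 1]   # 6 of 8 approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # prints 0.50

# A commonly cited (and here purely illustrative) screening rule
# flags ratios below 0.8 for further investigation.
if ratio < 0.8:
    print("Potential adverse impact -- investigate the model.")
```

A check like this is only a screening step; a flagged ratio prompts deeper review of the data and model rather than a definitive conclusion.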
A September 2024 Academy professionalism discussion paper, Actuarial Professionalism Considerations for Generative AI, discusses what the actuary should think about when using AI and sets out questions for the actuary to consider, including questions about bias.
Here are some ways in which actuaries are well positioned for ethical applications:
- The Code: Our Code of Professional Conduct sets ethical standards that apply to the use of AI. Specifically, the Code states that “actuaries must act honestly, with integrity and competence” and that “[a]n Actuary shall perform actuarial services with skill and care.” These general principles require an actuary to ensure that any AI model used in their actuarial services produces reasonable and appropriate output and does not introduce unfair bias.
- ASOPs: Actuarial standards of practice (ASOPs) provide guidance for actuarial work in all situations, including work with AI. ASOP No. 56, Modeling, which applies to all practice areas, provides guidance on any type of model, including algorithmic approaches such as AI. Because an AI system is a model, ASOP No. 56 applies. Under ASOP No. 56, an actuary designing, developing, selecting, modifying, or using an algorithmic model must meet several requirements, including evaluating the model’s appropriateness for its intended use; assessing the quality of data, assumptions, and model structure; performing model validation and testing; and ensuring appropriate model governance and controls. Any material limitations or known weaknesses must be disclosed.
- Culture of Transparency: In addition to the specific standards on AI models referenced above, actuaries are accustomed to meeting disclosure requirements for their work products. Every ASOP has disclosure requirements, and ASOP No. 41, Actuarial Communications, focuses on ensuring that the final work products and opinions provided by actuaries are accompanied by clear and transparent disclosures to the user.
- Experts in Research: Our profession is known for publishing meaningful, objective research on actuarial topics, including many that are highly relevant to the public at large. Actuaries can apply these skills to develop frameworks for bias detection and correction, drawing on our expertise in risk quantification and uncertainty measurement.
- Collaboration across Disciplines: Ethical AI requires input from data scientists, regulators, and consumer advocates. Actuaries work with all of these stakeholders (and more!) and can serve as the bridge between technical rigor and societal impact.
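As one illustrative sketch of the validation-and-testing discipline described above (the data, segment names, and tolerance are hypothetical, and this is one test among many an actuary would run), a model can be checked for calibration within each segment, not just overall:

```python
# Hypothetical sketch: check that a model's predicted claim frequency
# matches actual experience within each segment of a holdout dataset.
# A model that is accurate on average can still be badly miscalibrated
# for particular groups -- exactly the kind of gap that must be found,
# tested for, and disclosed. All data and the tolerance are invented.

def calibration_by_segment(records, tolerance=0.10):
    """records: list of (segment, predicted_frequency, actual_outcome).
    Returns segments whose average predicted frequency deviates from
    the actual claim rate by more than `tolerance` (absolute)."""
    by_segment = {}
    for segment, predicted, actual in records:
        by_segment.setdefault(segment, []).append((predicted, actual))
    flagged = {}
    for segment, pairs in by_segment.items():
        avg_pred = sum(p for p, _ in pairs) / len(pairs)
        avg_actual = sum(a for _, a in pairs) / len(pairs)
        if abs(avg_pred - avg_actual) > tolerance:
            flagged[segment] = (round(avg_pred, 3), round(avg_actual, 3))
    return flagged

# Hypothetical holdout data: (segment, predicted frequency, claim occurred)
holdout = [
    ("urban", 0.30, 1), ("urban", 0.30, 0), ("urban", 0.30, 0),
    ("rural", 0.10, 1), ("rural", 0.10, 1), ("rural", 0.10, 0),
]

print(calibration_by_segment(holdout))
# The rural segment's actual rate (about 0.667) far exceeds its
# predicted 0.10, so the model is miscalibrated there; under ASOP
# No. 56 such a material weakness would need to be disclosed.
```

In practice this would sit alongside other validation tests, documentation, and governance controls rather than serving as the sole check.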
As algorithms become integral to insurance operations, the actuarial profession has the proper foundation to champion ethical governance. As the professionalism discussion paper referenced above notes, integrating AI tools into actuarial work demands thoughtful consideration of two key questions: whether AI is appropriate for the project, and how to meet professionalism responsibilities while using it—along with a longer series of follow-up questions. By embedding fairness into predictive modeling, we not only protect consumers but also uphold the integrity of our profession.
Tricia Matson is president of the Academy.