By Fei Huang and Xi Xin
Artificial intelligence and big data can sharpen risk predictions—but they can also hide bias. Actuaries are developing models that make fairness measurable, actionable, and central to modern insurance.
Actuaries use data for good. Yet as insurers deploy larger datasets and more sophisticated algorithms, a pressing question emerges: What if these tools unintentionally discriminate?
Imagine two drivers. Both are the same age, drive comparable cars, and have years of safe driving experience. Yet one pays hundreds of dollars more for auto insurance. The difference is not driving history or risk profile but ZIP code, even when the two live across the street from each other. ZIP codes can act as proxies for protected characteristics such as race or ethnicity, so even if an insurer never uses race explicitly, the proxy effect may still exist.
This is the challenge of indirect discrimination in insurance, and it is attracting growing attention from regulators and actuaries alike.
From Big Data to Big Questions
Over the past decade, insurers have embraced big data and advanced algorithms across a wide range of operations, including underwriting and pricing. Predictive accuracy has improved, but so has the risk of bias. Machine learning models, often operating as “black boxes,” can uncover subtle correlations that effectively reintroduce proxy effects of protected attributes.
Direct discrimination, explicitly using race, gender, or other protected attributes, is prohibited in most jurisdictions. But indirect discrimination, where neutral-looking factors act as stand-ins, sits in a regulatory grey zone. This uncertainty creates both a compliance challenge for insurers and an opportunity for actuaries to lead.
Around the world, regulators are beginning to grapple with this problem. The European Union moved to unisex rates through its 2004 “gender directive” and the 2011 Test-Achats ruling, which struck down the directive’s opt-out for gender-based pricing and mandated gender-neutral premiums. In Australia, the Human Rights Commission has issued guidance on algorithmic bias and the use of artificial intelligence (AI) in decision-making.[1] And in the United States, a patchwork of state laws and national conversations is beginning to shape the future of fairness in insurance pricing.
Some U.S. states restrict or prohibit the use of proxies such as education, occupation, or credit-based insurance scores in auto rating, recognizing their disproportionate impact on minority or low-income groups. The National Association of Insurance Commissioners (NAIC) has urged insurers to avoid proxy discrimination when deploying AI models. And in 2021, Colorado passed SB21-169, which went further by requiring insurers to test whether their models result in unfair discrimination when using external data, even if they never explicitly used protected attributes.
Together, these developments illustrate a clear trend: Regulators are moving from input-based prohibitions (“don’t use race, don’t use credit scores”) toward effects-based scrutiny (“show that your models don’t produce unfair outcomes”). That shift highlights the need for actuarial approaches that can measure, mitigate, and balance fairness with predictive accuracy.
Building a Bridge: Regulation, Fairness, and Models
The machine learning community has developed dozens of fairness criteria over the past decade. But most were designed for binary classification problems such as hiring or lending—not for regression problems like insurance pricing. Our research paper, “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models” (North American Actuarial Journal, 2024, Vol. 28, Issue 2),[2] which received the 2025 American Academy of Actuaries’ Annual Award for Research, set out to bridge legal frameworks, fairness criteria, and actuarial practice. The award recognizes work by an early-career scholar contributing significantly to an actuarial perspective on a public policy issue of interest to U.S. actuaries and public policymakers. (See also “Call for Submissions” below.)
We began this project at the University of New South Wales Business School in 2020, motivated by conversations with regulators and industry leaders who were grappling with these issues. The collaboration itself grew quite naturally: Xi, then a Ph.D. student, approached Fei for research ideas, and Fei had just completed a paper with Edward (Jed) Frees on discrimination in insurance, later published in the North American Actuarial Journal.[3] Building on that foundation, we set out to explore how quantitative fairness criteria from the machine learning literature could be applied to insurance pricing and aligned with existing regulations—an idea that quickly became the first major project of Xi’s doctoral studies and the starting point of our joint work.
What Counts as Fairness?
Fairness can mean different things depending on perspective. Individual fairness emphasizes treating similar people similarly. Group fairness requires that outcomes be similar across groups, such as people of different races or genders. Both approaches have merit, but they often conflict.
Consider a few representative criteria. Fairness through unawareness excludes the protected attribute from the model. This approach is simple, and common in industry practice, but it is inadequate when proxies reintroduce the attribute indirectly. Fairness through awareness goes further by requiring that similar policyholders pay similar premiums, based on task-specific similarity measures. On the group side, demographic parity requires premiums to be independent of protected attributes. In practice, however, demographic parity can be difficult to apply in many lines of insurance pricing, where risk differences exist between groups and where insurers’ portfolio mixes vary widely. A middle ground is conditional demographic parity, which permits disparities tied to legitimate risk factors while restricting those linked to illegitimate proxies. Many other fairness criteria have also been introduced and discussed in recent literature.
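For readers who want a formal statement, the two group criteria can be written compactly. The notation below is ours for illustration: \(\hat{Y}\) is the model’s premium prediction, \(D\) the protected attribute, and \(X_{\ell}\) the legitimate risk factors. This follows one common formalization in the fairness literature and is a sketch rather than the only possible definition.

```latex
% Demographic parity: predicted premiums are statistically independent
% of the protected attribute.
\[
\hat{Y} \perp D
\quad\text{i.e.}\quad
\Pr(\hat{Y} \le t \mid D = d) = \Pr(\hat{Y} \le t)
\ \ \text{for all } t, d.
\]

% Conditional demographic parity: independence is required only after
% conditioning on legitimate risk factors, so disparities explained by
% X_l are permitted while residual proxy effects are not.
\[
\hat{Y} \perp D \mid X_{\ell}
\quad\text{i.e.}\quad
\Pr(\hat{Y} \le t \mid D = d,\, X_{\ell} = x)
= \Pr(\hat{Y} \le t \mid X_{\ell} = x).
\]
```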
Each of these definitions reflects a different regulatory philosophy. And there is no one-size-fits-all answer: The appropriate fairness criterion will vary by line of business, regulatory environment, and the goals of the insurer or regulator. In short, context matters.

Implementing Fair Models in Practice
To make these ideas concrete, we applied fairness-aware pricing strategies to a French auto insurance dataset, treating gender as the protected attribute. There are three broad approaches to implementing fair models: adjusting the data before training (pre-processing), building fairness constraints directly into model training (in-processing), and adjusting predictions after the fact (post-processing).
We tested both generalized linear models (GLMs), the long-standing workhorse of actuarial science, and modern methods such as XGBoost, a popular machine learning algorithm.
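To make the mechanics tangible, here is a minimal sketch in Python of the simplest of the three strategies, post-processing. It fits a Poisson GLM and an XGBoost frequency model without the protected attribute, then rescales each gender group’s predictions so that average premiums match across groups, a crude form of demographic parity. The simulated data, column names, and rescaling rule are our illustrative assumptions, not the dataset or methodology of the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import xgboost as xgb

# Toy portfolio: a stand-in for a real auto dataset. All variable names
# and effect sizes are illustrative assumptions, not the paper's data.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "vehicle_power": rng.integers(4, 12, n),
    "gender": rng.integers(0, 2, n),  # protected attribute (coded 0/1)
})
lam = np.exp(-2.0 + 0.02 * df["vehicle_power"] - 0.005 * df["age"]
             + 0.15 * df["gender"])
df["claims"] = rng.poisson(lam)

# "Fairness through unawareness": the protected attribute is simply dropped.
features = ["age", "vehicle_power"]

# GLM: a Poisson frequency model, the long-standing actuarial workhorse.
X = sm.add_constant(df[features])
glm = sm.GLM(df["claims"], X, family=sm.families.Poisson()).fit()
pred_glm = pd.Series(np.asarray(glm.predict(X)), index=df.index)

# XGBoost: gradient-boosted trees with a Poisson objective.
bst = xgb.XGBRegressor(objective="count:poisson", n_estimators=200,
                       max_depth=3, learning_rate=0.1)
bst.fit(df[features], df["claims"])
pred_xgb = pd.Series(bst.predict(df[features]), index=df.index)

# Post-processing toward demographic parity: rescale each group's
# predictions so group means equal the portfolio mean. A crude
# illustrative adjustment, not the specific method used in the paper.
def parity_rescale(pred, group):
    return pred / pred.groupby(group).transform("mean") * pred.mean()

fair_glm = parity_rescale(pred_glm, df["gender"])
fair_xgb = parity_rescale(pred_xgb, df["gender"])

# Group mean premiums are now equal across genders.
print(fair_glm.groupby(df["gender"]).mean())
print(fair_xgb.groupby(df["gender"]).mean())
```

Pre-processing and in-processing variants pursue the same goal earlier in the pipeline; in all three cases, the adjusted premiums can then be checked with the usual actuarial accuracy metrics.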
Our results show that fair models can be implemented in practice, evaluated with familiar actuarial metrics, and compared along fairness and accuracy dimensions. While introducing fairness generally reduced predictive accuracy somewhat, the decline was modest in our empirical application. In some scenarios, fairer models even improved insurers’ competitive position by attracting more low-risk drivers and fewer high-risk ones.
We also found that model choice matters. Machine learning methods like XGBoost, while powerful, can be more sensitive to small group differences than GLMs. That sensitivity raises important questions for regulators and actuaries about how modeling techniques influence fairness outcomes.
The Trade-Offs: Fairness, Accuracy, and Solidarity
Fairness interventions inevitably shift costs across groups. Some strategies create cross-subsidies, for instance from low-risk to high-risk consumers. This highlights the classic tension between actuarial fairness (similar risks, similar premiums) and solidarity (sharing risks more broadly across society).
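A stylized example of our own, not drawn from the paper, makes the arithmetic concrete. Suppose a portfolio contains two equally sized groups whose expected losses differ:

```latex
% Risk-based pricing charges each group its expected loss:
\[
\mathbb{E}[L \mid D = a] = 400, \qquad \mathbb{E}[L \mid D = b] = 600.
\]
% Demographic parity instead forces a single premium at the portfolio mean:
\[
P = \tfrac{1}{2}(400 + 600) = 500.
\]
```

Group a then pays 100 above its expected cost and group b pays 100 below it: a cross-subsidy from the lower-risk group to the higher-risk one.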
Our research demonstrates that there is no single “fair” answer. Each criterion, model, and regulatory approach involves trade-offs between statistical accuracy, consumer equity, and market stability. The task for actuaries is to make these trade-offs explicit, measurable, and transparent.
Why It Matters for Actuaries and Regulators
For actuaries, fairness is not just an ethical or legal consideration—it is a modeling choice with business implications. Insurers that rely solely on “fairness through unawareness” risk regulatory scrutiny and reputational harm. Those that adopt deliberate fairness strategies can align with evolving regulation and demonstrate leadership in responsible data use.
For regulators, actuarial approaches provide tools to translate fairness principles into auditable standards. Instead of vague aspirations, they can set measurable criteria that insurers can test against real data.
For consumers, fairer models can improve trust in insurance markets. When customers understand that their premiums are determined transparently and equitably, confidence in the system grows.
Looking Ahead
Much work remains to be done in this area. Which fairness criteria should apply in different lines of business? At what stage of the pricing process should fairness be enforced—technical cost modeling, final pricing, or both? How can fairness be ensured when protected attributes such as race are not collected?
Our subsequent research explores these questions, from fairness in annuity pricing and retirement outcomes[4] to the welfare implications of regulatory interventions.[5] Other researchers are also advancing complementary approaches. Together, these efforts mark the emergence of fairness in insurance as a truly multidisciplinary field.
As insurers harness more data and deploy more sophisticated algorithms, the potential for unintended discrimination grows. But actuaries have the expertise to ensure that fairness is not just an aspiration but a reality embedded in pricing practice.
By linking regulation, fairness criteria, and actuarial models, our research shows how insurers can navigate the fairness-accuracy trade-off, respond to evolving laws, and build systems that are both predictive and just.
This is how actuaries can continue to use data for good: not only to measure risk, but to ensure that risk is priced in ways that are fair, transparent, and trusted.
FEI HUANG is an associate professor and XI XIN is a Ph.D. candidate at the School of Risk and Actuarial Studies, University of New South Wales, Australia.
Endnotes
1. Guidance Resource: AI and Discrimination in Insurance, Australian Human Rights Commission, 2022.
2. “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models,” North American Actuarial Journal, Vol. 28, Issue 2, 2024.
3. “The Discriminating (Pricing) Actuary,” North American Actuarial Journal, 2023.
4. “Towards Fairer Retirement Outcomes: Socio-Economic Mortality Differentials in Australia,” available at SSRN, 2025.
5. “Welfare Implications of Fair and Accountable Insurance Pricing,” available at SSRN, 2025.
Call for Submissions
The Academy’s Award for Research is devoted to research on a particular theme with broad applicability across different policymaking and regulatory environments and across actuarial practice areas. It recognizes work by an early-career scholar contributing significantly to an actuarial perspective on a public policy issue of interest to U.S. actuaries and public policymakers. For 2026, the theme is “Advancing Literacy in Insurance and Finance: Understanding Risk, Overcoming Misconceptions, and Strategies for Improvement.” The Academy’s Research Committee invites submissions for consideration by March 31, 2026. The winner of the Award will be announced during the summer of 2026. The award includes a $7,500 honorarium.
Last year, the committee solicited submissions on the theme of “Bias in Assessing Financial Risk: Origins, Detection, Mitigation.” Xi Xin, a Ph.D. candidate at the University of New South Wales in Australia, received the award as co-author of research published in 2024 in the North American Actuarial Journal (Vol. 28, Issue 2), “Antidiscrimination Insurance Pricing: Regulations, Fairness Criteria, and Models.”
Related to the award’s theme of biases that might affect actuarial assessments in insurance, retirement planning, and/or financial risk, the research by Xin and co-author Fei Huang, as described in the article, proposed actuarial approaches aimed squarely at helping meet insurance pricing challenges amid growing regulatory interest in indirect discrimination. It posited fairness criteria, explored how they could be applied under existing and possible regulatory approaches, and provided specific modeling examples for actuaries.
The Academy hosted Xin and two other 2025 Award for Research finalists on a webinar to discuss how actuaries and academics might translate these research findings into practical changes with the potential to mitigate bias in actuarial assessments in insurance and retirement planning. The webinar recording is available on Academy Learning (learning.actuary.org; member login required). The Research Committee is also developing an issue brief, to be published in 2026, highlighting how actuaries might apply the findings of Xin’s award-winning research in current practice.