Professionalism Counts, September 2024
Q&A: Actuarial Professionalism Considerations for Generative AI
The Committee on Professional Responsibility (COPR) released a new discussion paper, Actuarial Professionalism Considerations for Generative AI. The chairperson of the COPR task force that wrote the paper, Matt Wininger, offered some insights.
Why a paper on AI and actuarial professionalism?
When public, free generative artificial intelligence (GenAI) models first appeared and made a big splash, we saw other professionals use them right away—some responsibly and others disastrously. I joined COPR because I wanted to help improve our profession. I couldn't pass up the opportunity to meet a clear need: developing a valuable guide for actuaries as they navigate the use of GenAI.
When deciding whether to use AI, what should an actuary think about?
Evaluate your GenAI tool objectively to become familiar with its capabilities and its limitations. Ask: Do I have the right controls around this model? Can I reproduce my results? Would my work hold up to an audit or a regulatory challenge?
In particular, check with your principal about using GenAI tools, as principals may have concerns about data privacy or how the tool interacts with the public.
How can actuaries using AI meet their professionalism responsibilities?
A good starting point is understanding what you're relying on and whether it's reliable. Many GenAI users implicitly assume their model has complete and unbiased input data. Assuming instead that your GenAI model's data are biased and incomplete forces you to ask more stringent validation questions. AI lacks the context and judgment to know whether its result makes sense or is suitable for the assignment. The actuary must take responsibility for their services, including whether and how to use GenAI models and output from GenAI models.
What professionalism concepts and tools are particularly relevant to an actuary using AI?
The paper revisits the basics—validate your model appropriately, think about similar guidance that may apply, and think critically about what your principal needs. ASOP No. 56, Modeling, has excellent all-purpose guidance for using models, including GenAI models.
Reliance is another critical professionalism concept with GenAI. We often rely on other competent professionals or source inputs that we can select and evaluate. We must evaluate AI output differently than we evaluate results from another competent professional.
What might an actuary want to do differently when evaluating or validating a GenAI tool?
You may not control a GenAI model’s inputs, configuration, or modeling approach. A GenAI model can give a seemingly useful result without much prep from you. So it’s easy and tempting to jump in. But that seemingly authoritative GenAI result may be unreliable. You have to ask, what exactly am I relying on? Have I validated this tool appropriately? Have I adjusted for potential bias?
This is especially important if you use the model in a way that’s discontinuous from its training data. For example, some publicly available models were trained with data from a certain time period. If a significant event or regulatory change occurred later, the model is oblivious, and its output won’t reflect subsequent events.
How can actuaries document a GenAI model?
Right now, a proprietary model is the most useful point of comparison. Imagine you hired a consultant who used a proprietary model. You'd function-test that model, run sensitivity tests, and perform positive and negative validations. Do you get the result you expect? Do you get errors when you expect to get errors? That framework can give you a valuable starting point for documenting a GenAI model.
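The black-box testing framework described here—positive validations (expected prompts yield expected answers) and negative validations (bad input produces an error rather than a confident answer)—can be sketched as a small harness. Everything below is a hypothetical illustration: `query_model` is a stub standing in for a real GenAI tool's API, and the canned prompts are placeholders, not actual test cases from the paper.

```python
def query_model(prompt: str) -> str:
    """Stub standing in for a GenAI tool's API.
    A real harness would replace this with the actual tool call."""
    if not prompt.strip():
        raise ValueError("empty prompt")
    canned = {"capital of France": "Paris"}  # placeholder knowledge base
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "I'm not sure."

def positive_validation(cases: dict) -> list:
    """Positive tests: return the prompts whose responses
    are missing the expected content."""
    return [p for p, expected in cases.items()
            if expected not in query_model(p)]

def negative_validation(bad_prompts: list) -> list:
    """Negative tests: return the bad prompts that did NOT
    raise an error (i.e., the tool answered when it shouldn't)."""
    failures = []
    for p in bad_prompts:
        try:
            query_model(p)
        except ValueError:
            continue  # error occurred as expected
        failures.append(p)  # no error -> validation failure
    return failures
```

Documenting which prompts were tested, what was expected, and which checks failed gives the kind of audit trail the consultant-model analogy suggests.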
What should supervising actuaries be aware of?
GenAI adds new twists for supervising actuaries. In the past, supervising actuaries could take for granted that they knew what tools their teams used because they provided or approved the tools. Now you may be unaware that GenAI tools are being used, or how they're being used. Anybody can check from their phone what ChatGPT thinks about X, Y, or Z. This could become critical support for your ultimate deliverable, and it just crept in without your knowledge or approval.
Any last thoughts?
Rapid developments with GenAI models will compound our professionalism challenges. As the actuary, if you’re producing actuarial findings, your professional judgment determines how you validate that work.
Be particularly cautious around GenAI models that make autonomous business decisions, especially public-facing decisions like risk-classification for an insurance applicant. These uses of GenAI models may offer great efficiencies and opportunities, and they also carry the greatest professionalism challenges. That’s where our professionalism principles will be most valuable—examining the trade-offs and helping make those decisions responsibly.