Professionalism Counts – June 2023
ChatGPT—Understanding the Model
By Brian Jackson, Academy General Counsel & Senior Director of Professionalism
The recent emergence of generative artificial intelligence (AI) tools like OpenAI’s ChatGPT has stimulated considerable discussion about how these technologies can be used by actuaries when providing professional services.
ChatGPT is a conversational AI service based on a natural language processing system that can generate realistic and coherent text responses to questions and prompts.[1] Unlike other chatbots, which are typically preprogrammed with specific responses, ChatGPT uses machine learning and natural language processing algorithms to generate responses based on the context and tone of the conversation.[2] With its remarkable ability to mimic human language and engage in conversations on a seemingly infinite number of subjects, this technology can quickly generate professional communications tailored to the intended audience. This can save actuaries time and effort when drafting communications for their principals, as they need not research and write from scratch.
The significance of these new generative AI tools was recently summed up by Microsoft founder Bill Gates. In a late-March blog post, Gates wrote:
“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”[3]
But the credentialed actuary must use this emerging technology with care, as its capability and promise come with risks and limitations that raise significant professionalism concerns. In navigating these concerns, actuaries may look to the profession’s standards of conduct and practice, which provide a framework for exercising professional judgment in the use of such technologies.
Precept 1 obligates the actuary to perform actuarial services with honesty, competence, integrity, skill, and care. Unwary use of ChatGPT can make it difficult for the actuary to meet these Precept 1 obligations, because ChatGPT sometimes gets things wrong, whether by repeating an inaccuracy from its training data or by simply inventing facts in response to a query.
PRECEPT 1—An Actuary shall act honestly, with integrity and competence, and in a manner to fulfill the profession’s responsibility to the public and to uphold the reputation of the actuarial profession.
“A.I. researchers call this tendency to make stuff up a ‘hallucination,’ which can include irrelevant, nonsensical, or factually incorrect answers.”[4] This ChatGPT shortcoming is not a secret—OpenAI warns users about the possibility of ChatGPT generating wrong or harmful information.[5]
But unfortunately, some professionals have failed to heed this warning. You may recently have seen press reports about lawyers who submitted papers to the federal district court for the Southern District of New York citing cases and decisions that, as it turned out, were wholly made up; they did not exist.[6] The lawyers in that case used ChatGPT to perform their legal research for the court submission, not realizing that the AI software had invented nonexistent court decisions, complete with correctly formatted case citations and assurances that the cases could be found in commercial legal research databases.[7] Notably, the lawyer who failed to verify the cases had relied on a subordinate to provide the research containing the erroneous citations. Had he been an actuary, he would have failed to ensure that professional services performed under his direction satisfied “applicable standards of practice,” as Precept 3 requires.
PRECEPT 3—An Actuary shall ensure that Actuarial Services performed by or under the direction of the Actuary satisfy applicable standards of practice.
This case serves as a cautionary tale for actuaries seeking to use AI in connection with their professional services. Pursuant to Precept 1, actuaries must provide their services with skill and care. This obligation encompasses having the knowledge and skill to use new technologies such as artificial intelligence competently on their principal’s behalf. Actuaries should use ChatGPT with an understanding of its limitations, recognizing that they cannot rely solely on the AI software’s output: not only are ChatGPT’s outputs sometimes incorrect, they also replicate the biases of the data on which it was trained, including gender, racial, and ideological biases. Actuaries must do their due diligence by verifying any answers that come from ChatGPT, comparing them against their own knowledge and conducting their own research from reputable sources. A degree of caution, common sense, and professional judgment should be applied when determining the level of reliance to be placed on these outputs.
ChatGPT is a potentially game-changing tool, but it has not yet reached the point where it can be relied upon on its own for professional services. For this reason, actuaries must exercise caution in entrusting tasks to AI, and, if and when they do, they must scrutinize the work it produces. While actuarial standards of conduct and practice do not require actuaries to avoid emerging AI technology like ChatGPT, they do require actuaries to make a reasonable effort to understand its strengths and weaknesses and to exercise their own professional judgment when making use of this exciting new tool.
ASB—The Academy’s Actuarial Standards Board (ASB) develops and promulgates actuarial standards of practice (ASOPs), guideposts that describe the procedures an actuary should follow when performing actuarial services and identify what the actuary should disclose when communicating the results of those services. ASOP No. 56, Modeling, section 3.2 (Understanding the Model), provides that when expressing an opinion on or communicating the results of a model, the actuary should understand important aspects of the model being used, including its basic operations, important dependencies, major sensitivities, and known weaknesses and limitations.
[1] “What is ChatGPT? Everything you need to know about chatbot from OpenAI”; The Washington Post; Dec. 6, 2022.
[2] “ChatGPT: Everything you need to know about the AI-powered chatbot”; TechCrunch; March 31, 2023.
[3] “The Age of AI has begun”; GatesNotes; March 21, 2023.
[4] “What Makes A.I. Chatbots Go Wrong?”; The New York Times; March 29, 2023.
[5] ChatGPT warns: “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.” It further warns that it “[m]ay occasionally generate incorrect information ... may occasionally produce harmful instructions or biased content ... [and it has] limited knowledge of world and events after 2021.”
[6] “Here’s What Happens When Your Lawyer Uses ChatGPT”; The New York Times; May 27, 2023.
[7] Case No. 22-cv-1461 (S.D.N.Y.).