
ChatGPT—Understanding the Model

By Brian Jackson

The recent emergence of generative artificial intelligence (AI) tools like OpenAI’s ChatGPT has stimulated considerable discussion about how actuaries can use these technologies when providing professional services.

ChatGPT is a conversational AI service based on a natural language processing system that can generate realistic and coherent text responses to questions and prompts.[1] Unlike other chatbots, which are typically preprogrammed with specific responses, ChatGPT uses machine learning and natural language processing algorithms to generate responses based on the context and tone of the conversation.[2] With its remarkable ability to mimic human language and engage in conversations on a seemingly infinite number of subjects, this technology can quickly generate professional communications tailored to the intended audience, helping actuaries save time and effort when drafting communications for their principal by sparing them the need to research and write from scratch. The significance of these new generative AI tools was recently summed up by Microsoft co-founder Bill Gates, who noted in a late-March blog post:

“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”[3]

But the credentialed actuary must use this emerging technology with care, as its capability and promise also come with risks and limitations that raise significant professionalism concerns. When navigating these concerns, actuaries may look to the profession’s standards of conduct and practice, which provide a framework for exercising professional judgment in the use of such technologies.

Precept 1 obligates the actuary to perform actuarial services with honesty, competence, integrity, skill, and care. Unwary use of ChatGPT can make it difficult for an actuary to meet these Precept 1 obligations, because ChatGPT sometimes gets things wrong, whether by retrieving an untruth from its training data or by simply inventing facts in response to a query. “A.I. researchers call this tendency to make stuff up a ‘hallucination,’ which can include irrelevant, nonsensical, or factually incorrect answers.”[4] This shortcoming is no secret; OpenAI itself warns users that ChatGPT may generate wrong or harmful information.[5]

But unfortunately, some professionals have failed to heed this warning. You may have seen recent press reports about lawyers who filed papers with the U.S. District Court for the Southern District of New York citing cases and decisions that, as it turned out, did not exist.[6] The lawyers had used ChatGPT to perform the legal research for the submission, not realizing that the AI software had invented the decisions, complete with properly formatted citations and assurances that the cases could be found in commercial legal research databases.[7] Notably, the lawyer who failed to verify the cases had relied on a subordinate to provide the research containing the erroneous citations. An actuary in that position would have violated Precept 3, which requires the actuary to ensure that professional services performed under the actuary’s direction satisfy “applicable standards of practice.”

This case serves as a cautionary tale for actuaries seeking to use AI in their professional services. Pursuant to Precept 1, actuaries must provide their services with skill and care, an obligation that encompasses having the knowledge and skill to use new technologies such as artificial intelligence competently on their principal’s behalf. Actuaries should use ChatGPT with an understanding of its limitations and recognize that they cannot rely solely on the software’s output, especially because ChatGPT’s outputs are not only sometimes incorrect but can also replicate the biases of the data the model was trained on, including gender, racial, and ideological biases. Actuaries must do their due diligence by verifying any answers that come from ChatGPT, comparing them to their own knowledge and conducting their own research from reputable sources. Caution, common sense, and professional judgment should determine the level of reliance to be placed on the outputs.

ChatGPT is a potentially game-changing tool, but it has not yet reached the point where it can be relied upon by itself for professional services. For this reason, actuaries must exercise caution in entrusting tasks to AI and, if and when they do, must scrutinize the work it produces. While the actuarial standards of conduct and practice do not require actuaries to avoid emerging AI technology like ChatGPT, they do require actuaries to gain a basic understanding of its strengths and weaknesses and to exercise their own professional judgment when making use of this exciting new tool.

BRIAN JACKSON, J.D., is the Academy’s senior director of professionalism.
