Feature

The Future Is Here

By Robert Eaton

Transforming actuarial work with artificial intelligence

“AI is whatever hasn’t been done yet.” -Larry Tesler

ChatGPT was unveiled in November 2022 to an explosive reception, with 100 million users adopting the technology within two months. Other big tech companies—Microsoft, Google—swiftly followed with their own announcements of artificial intelligence (AI) advancements that had been in the works for nearly a decade. These companies owe their recent success to neural networks, which underpin large language models (LLMs) with billions of parameters. These are referred to as foundation models: the companies aim to provide the general scaffolding on which other companies can build specific applications, capabilities, and niches at scale.

Major insurance industry consultancies have announced large investments into AI and language models to improve the performance and efficiency of their businesses. Insurtech companies and startups—many already incorporating some form of artificial intelligence—will move rapidly to embrace the use of this new tech.

With so many announcements coming so quickly, you would be forgiven for believing this is more hype than real progress. So let’s take a step back and look at our existing uses of artificial intelligence, and see how these LLMs are different.

Artificial intelligence: a suitcase word unpacked

The AI pioneer Marvin Minsky cautioned us against describing computing processes with the same words we use to describe our minds, such as “consciousness,” “learning,” or “memory.” These are “suitcase” words that describe phenomena about us that we do not fully comprehend; they involve many complex, emergent, and evolutionary components.

As we have developed computing and statistical techniques beyond classical hand-written theorems, “artificial intelligence” has become something of a suitcase term. It is helpful to understand what computing processes we use today in the insurance industry that go by the label “AI.” Most of these can be categorized as a form of machine learning:

  • Machine learning is computational modeling that allows computers to learn patterns from data and make predictions or decisions without being explicitly programmed to do so.
  • Tree-based predictive models are a type of machine learning approach that predicts the value of a target variable by learning simple decision rules inferred from data features, with the decisions represented in a tree structure. Gradient boosting machines, for instance, are a tree-based modeling approach that actuaries can use in setting assumptions.
  • Unsupervised learning is a subset of machine learning that identifies patterns in datasets with no pre-existing labels, often used for clustering or anomaly detection. An example is K-means clustering, which we can use to segment policyholders into different groups based on their characteristics or behavior (a brief sketch follows this list).
  • Neural networks are a type of machine learning model inspired by the structure of the human brain, consisting of layers of nodes (or “neurons”) that adjust their connections to learn patterns from data.
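
To make the K-means example concrete, here is a minimal sketch in Python using scikit-learn. The policyholder features (age, annual premium, prior lapses) and the choice of four segments are hypothetical, for illustration only.

  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.preprocessing import StandardScaler

  rng = np.random.default_rng(seed=0)
  # Hypothetical policyholder features: age, annual premium, prior lapse count
  policyholders = np.column_stack([
      rng.normal(45, 12, 500),     # age
      rng.lognormal(7, 0.5, 500),  # annual premium
      rng.poisson(0.3, 500),       # prior lapses
  ])

  # Standardize features so no single scale dominates the distance metric
  X = StandardScaler().fit_transform(policyholders)

  # Group policyholders into four segments and inspect the assignments
  kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
  print(kmeans.labels_[:10])  # segment labels for the first 10 policyholders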

Some of us refer to these techniques with the blanket term artificial intelligence because we don’t need to concern ourselves with the details, and possibly also because these techniques are more modern than those we learned in school, requiring greater computing power than was once available to us and different forms of statistical models.

Insurance companies, actuaries, and data scientists use many of these artificial intelligence models today to make estimates about the future. Because Moore’s law has held (roughly, for more than 50 years now; RIP Mr. Moore), the economic cost of making more estimates and better estimates has gone down in our lifetime. For an immediate use case, take the actuarial reserving and valuation processes. In a few decades we have seen the evolution from feeding Fortran punch cards into machines, to using spreadsheet software to calculate tabular reserves and IBNR for groups of policies, to using cloud-based applications running stochastic simulations for each policyholder. The economist and author Ajay Agrawal[1] foresees the cost of computing and predictions decreasing still further, allowing actuaries even greater command of the risk management of future contingencies.

The language models we are now getting a first taste of seem poised to help us in new ways.

ChatGPT, Bard, LLaMA, and other LLMs have demonstrated an impressive variety of capabilities. At their core, the models and their chatbot interfaces write sentences and paragraphs in colloquial English and other languages that are easy to understand; they certainly pass my Turing test. The models are trained on a corpus of text harvested from the internet circa 2021, deeply embedded with human-written knowledge. As this publication has discovered,[2] the models aren’t exactly sources of truth (nor do they claim to be), but they serve as an incredible resource for daily writing and learning tasks. Here are some examples I’ve used recently.

  • I learned the nuances of the Elo rating system used to rate chess players. GPT provided me with a sample calculation that it made up on the spot (the standard arithmetic is sketched after this list).
  • I asked the model to summarize a book that my parents were reading, so that I could ask some details and have a better conversation.
  • I prompted the model to write code for a macro to simplify a task in Microsoft Word. This saved me some time (and my colleagues even more).
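
For the curious, the Elo update is simple arithmetic. Here is a short Python sketch of the standard formulas; the ratings and K-factor below are made up for illustration.

  def elo_update(r_a, r_b, score_a, k=32):
      """Return player A's new rating; score_a is 1 for a win,
      0.5 for a draw, and 0 for a loss."""
      expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # A's expected score
      return r_a + k * (score_a - expected_a)

  # A 1500-rated player upsets a 1700-rated opponent
  print(elo_update(1500, 1700, 1))  # about 1524.3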

The key capability of these AI models—sometimes referred to as generative AI—is predicting the next word, or handful of words, in a sentence based on the training data and the earlier dialogue in a chat. It turns out that this prediction (think of autocomplete on steroids) generates really compelling results,[3] reading so much like decent human writing that many already ascribe human-like characteristics to it.
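
To get a feel for the mechanism, consider a toy next-word predictor in Python. Real LLMs score candidate tokens with a neural network over a vast vocabulary and a long context; this sketch merely counts word pairs in a tiny, made-up corpus.

  from collections import Counter

  corpus = ("the actuary set the assumption and the actuary reviewed "
            "the assumption before the actuary signed the memo").split()

  # Count which word follows each word in the (hypothetical) corpus
  bigrams = Counter(zip(corpus, corpus[1:]))

  def next_word(prev):
      candidates = {b: n for (a, b), n in bigrams.items() if a == prev}
      return max(candidates, key=candidates.get)  # greedy: most frequent follower

  print(next_word("the"))  # "actuary" -- it follows "the" most often here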

In the rest of this article, I explore how LLMs and their applications may help insurance professionals—and actuaries more specifically.

How AI can change actuarial work

As Agrawal estimates that the cost of prediction will decrease, so the LLMs portend that the cost of other routine written work will also decrease. While most of the LLMs we see today are general in nature (trained on text from across the internet), companies and people are creating applications that put these models to more specific use. Microsoft’s Copilot and Google’s Duet AI are examples, in which our personal and professional documents, presentations, and recorded conversations train a model that serves as our professional assistant for writing, summarizing, and drafting new documents.

As the cost of routine work declines, the value and status of expertise will rise. In our work as actuaries, we synthesize information and inputs from stakeholders to create models estimating liabilities and assets and future revenues and expenses. These estimates in hand, we leverage interpersonal relationships within and between businesses to create a robust understanding of our environment and of what our business can accomplish. And we apply our learned domain knowledge to make the business decisions that create products that customers value.

This process remains a human endeavor. The advent of new tools such as the LLMs will improve the value of our decision-making and syntheses because we stand to receive higher-quality input (through trained models that generate the reports and analyses we value) and more of it (the cost of producing the next analysis is marginal).

To capture this improvement, we must adapt to and adopt generative AI. It goes without saying that we should do so with no less regard for our current professional standards of quality and accuracy. Companies that foster innovative and forward-thinking cultures stand to benefit the most from these new advancements. As more routine work is completed by machines (continuing the trend of decades), actuaries are freed to better leverage our specific expertise and skills.

The value of interpersonal behavioral skills—listening, communicating, understanding—will also rise as we spend less time with more routine work (drafting presentations and memos) and more time understanding motivations, causes, and mechanisms of our business decisions.

How AI can change the insurance model

Language models and other forms of AI should produce meaningful changes across the insurance value chain. New models can produce marketing content and copy aligned with the company brand, with greater ease and at lower cost. Marketing units can hire or outsource language-model engineers to extract volumes of rich content for many markets and products. Insurance companies will still need to meet specific regulatory requirements with any new product marketing but, as discussed below, insurance compliance and regulatory functions also stand to benefit from language models.

Language models can be used to train sophisticated assistants that help with nuanced insurance sales. Insurers can train specific models to understand their products and produce valuable insights when prompted. These models can produce accurate and understandable replies to questions agents receive, such as “How does this new product compare to what I’ve already got?” and “What are some key considerations for someone my age when thinking about retirement?” While a seasoned agent with some keen searching can likely answer these questions today, there is much to be gained from an assistant that remembers your conversation, refers back to prior dialogue, and has a focused understanding of specific niche products. The same process will be useful in customer service, allowing existing representatives to handle much larger volumes over time.
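
The “memory” in such an assistant can be as simple as resending the running dialogue with each request. Below is a minimal sketch using the OpenAI Python library’s chat interface as it stood in mid-2023; the model name, system prompt, and questions are illustrative, not a recommended design.

  import openai  # pre-1.0 interface

  openai.api_key = "YOUR_KEY"  # placeholder
  messages = [{"role": "system",
               "content": "You are an assistant trained on our annuity products."}]

  def ask(question):
      # Append the question, send the whole dialogue, and remember the reply
      messages.append({"role": "user", "content": question})
      response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                              messages=messages)
      reply = response["choices"][0]["message"]["content"]
      messages.append({"role": "assistant", "content": reply})
      return reply

  ask("How does this new product compare to what I've already got?")
  ask("And for someone my age thinking about retirement?")  # context carries over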

Traditional underwriting is performed by underwriters applying guidelines to information about new policy applicants. In the past decade or more, improvements in data acquisition and processing have facilitated faster (sometimes called “automated” or “accelerated”) underwriting. These techniques can apply rules-based decision-making across large volumes of data. Machine learning and other AI methods have already improved the training of these models to better assess risk. Because the underlying data for underwriting decisions are drawn largely from personal histories, prescription drug records, and medical records (among other information), language models that excel at connecting terms and concepts can provide additional lift to models that assign risks based on conditions.
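
As a caricature of that rules-based layer, consider the Python triage sketch below; the fields, thresholds, and decision classes are hypothetical, not real underwriting guidelines.

  def triage(applicant: dict) -> str:
      # Route large or older-age cases to a human underwriter
      if applicant["age"] > 60 or applicant["face_amount"] > 1_000_000:
          return "full underwriting"
      # Third-party data (e.g., prescription histories) raise questions
      if applicant["rx_flags"] or applicant["mib_hits"]:
          return "refer for review"
      return "accelerated approval"  # clean application, auto-issue

  print(triage({"age": 42, "face_amount": 250_000,
                "rx_flags": [], "mib_hits": 0}))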

New LLMs may not offer much to the rigorous calculation toolkits of pricing, reserving, and forecasting actuaries. A neural network specifically trained on actuarial processes will likely not, for instance, do a better job of calculating VM-20 reserves, estimating IBNR, or setting the terms of value-based contracts in health care. These LLMs may, though, provide a hyper-savvy research assistant that can peruse past company documents, search the current internet (such as through OpenAI GPT-4’s internet search functionality), and unearth related terms and concepts that are valuable to the actuary. LLMs can also help document current models and assist in programming new ones.

Artificial intelligence more broadly (as in the few examples above) is likely to provide other advancements in calculation and processes that improve the actuary’s position in risk management. More and better data, for instance, can provide a richer understanding of policyholder behavior and of the interaction of assumptions and contingencies on complex products. On the flip side, more data without improved quality may only serve to bury the signal in noise, or produce unanticipated biases. The actuary should be particularly eager to understand model results, and to ensure that model and data biases are understood and accounted for in business decisions.

Insurance compliance and regulation deal mostly in written communication and interpretation of statutes, regulations, and other legal bulletins. These fields are ripe for transformation through the use of LLMs. We see this already in the adjacent field of law, where companies have trained LLMs to read laws, cases, and other legal documents to provide research assistance in summarizing existing (and disparate) sources, drafting new documents, and searching or plumbing great volumes of case law in a manner superior to current search algorithms. Each of these functions applies to insurance compliance, where companies interact with regulators based on the corpus of current law and regulation, and prior interactions and approved filings nationwide.

At the other end of the insurance value chain and the customer journey, language models can assist in claims administration. In some lines of business, determining benefit eligibility is complex. Certain health claims require specific prior authorizations, independent assessments, and other confirmation of conditions. LLMs thrive on volumes of written, often unstructured data, where long documents must be read and interpreted in conjunction with others. This is a skill that can help claims administrators as they determine whether a claim should be paid and for how much. A language model trained by a company to interpret its own guidelines, detect specific fraud types, and read through a customer’s history in seconds will be able to suggest a claim eligibility decision, with citations explaining why it selected that decision.

Great expectations

For today’s professional, decades of technological change have transformed how we work; generative AI and LLMs will be no different. I expect that we will see substantial changes to education, the advent of the digital professional assistant, and an increased demand for high-EQ[4] actuaries.

On the one hand, we have been remaking our professional outlook on all three fronts for many years already. On the other hand, we now hold in our palms a new machine that “understands” and interprets our requests, is armed with much of human knowledge through 2021, and creates plausible written responses. We should actively use it to improve our understanding of the world and to better express ourselves.

The education we receive as actuaries and as professionals will adjust accordingly. We will use language models to improve our professional educational process—including in training for actuarial exams—and to facilitate our education in the workforce. Envision a language model that the company has trained to understand the last 12 quarters’ valuation memos and answer questions for the newest member of the valuation team. We will use language models in conjunction with colleagues, fellow students, and teachers to develop and enhance our domain knowledge.

Our daily work will become more about making requests of a fast, smart assistant than about coaxing results from a script or a spreadsheet. Sleek spreadsheet writing may become the stuff of tomorrow’s cocktail-party talk, just as today we are impressed by the parlor trick of programming in reverse Polish notation on HP calculators. A newly valuable skill will be training the assistant and understanding how best to produce quality work product (and a lot of it). Training one, after all, implies training many, with the value of the training accruing to us and our employers. Institutional knowledge will be captured in a digital agent that can explain it back to us, rather than in formulas, in code, and in the brains of employees who will one day retire and leave.

To that end, recruiting and talent management will pivot to seek out and maintain a workforce skilled not only in domain expertise but also in training and managing our digital assistants.

The change to education and the rise of the smart assistant will mean a meaningful change to our current apprenticeship model of actuarial professional development. Virtually everyone has mentors they revere, often for reasons of personal learning: “she taught me this reserving technique when I came up through the rotation program”; “he gave me the clearest explanation of the relationship between insurer and reinsurer cash flows and liabilities.”

When fast and smart digital assistants provide solid technical education on par with our current mentors, we may on balance rely more on our human mentors for EQ skill improvement: “she explained why the deal fell through based on the values the seller held compared with those of the buyer”; “he described how providing resources to one area demonstrated his commitment to the other team’s project, and he was rewarded later in the quarter with help on his own project.”

We will therefore come to value EQ and AQ skills more highly among actuaries than we do today, as IQ-driven tasks will increasingly be completed by our assistants and models. Because employers will value EQ and AQ skills more highly, education and selection of actuarial students should adjust to account for this demand.

Conclusion

The pivotal moments just keep on coming. There are many in AI research who worry about the doom of the species. A great number of respected AI scientists recently stated, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[5] Provided we’re still around to see it, the future of actuarial practice is worth our contemplation; our earliest digital assistants are already here.

ROBERT EATON, MAAA, FSA, is a principal and consulting actuary at Milliman in Tampa, Fla.

References

[1] Agrawal’s book, co-written with Joshua Gans and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (2018), makes this case well.

[2] See “Artificial Untelligence”; Contingencies; May/June 2023.

[3] Stephen Wolfram wrote a terrific explainer on the LLM process on his blog: “What Is ChatGPT Doing … and Why Does It Work?”

[4] EQ here means “emotional quotient,” in contrast to the better-known acronym IQ. The article “Technology and Skill Trends in the Actuarial Profession” by Julie Curtis is a good primer on this terminology, including “AQ,” or the adaptability quotient.

[5] “Statement on AI Risk”; Center for AI Safety; accessed May 30, 2023.
