AI and Professionalism

By Ken Kent

At a recent company retreat, our consultants were given a presentation on the implications of artificial intelligence (AI), with ChatGPT as the featured example. The presenter opened by sharing his concern about those implications: Within minutes, he had been able to coach the AI to complete a task that takes analysts hours, obtaining the requested data and formatting it appropriately into a letter template he had provided.

He then illustrated how other “bots” could be used to create accurate programs to perform complex tasks by looking up the appropriate parameters.

Each of his examples was rather astounding. But as a presenter, he was not really hearing his own message. His concerns were our opportunities, and each of his successful demonstrations was predicated on knowing what an appropriate and accurate outcome looked like, often arrived at through iterative questions posed to the program until it produced the correct result.

The actuarial profession has long worked with the evolutionary precursors of AI. Maybe not language generators like ChatGPT, per se, but we have become quite familiar with using big data and increasingly sophisticated models in support of what has emerged as the field of predictive analytics. That this evolution has ultimately produced algorithms capable of what is being referred to as "learning" should not be a surprise.

The key to successfully embracing these new tools, however, is our profession's commitment to the pillars of professionalism that obligate us to continue applying our skill and care in performing actuarial services.

I am not an expert in the technology, but I have worked a bit with ChatGPT, and my interest here is in how this perceived risk is in fact our opportunity.

The enormous innovation of this technology, and what some see as its existential threat, is the ability of these AI programs to access, and seemingly understand, a continuous stream of information. Experts point to guardrails and ethical boundaries as the means of safeguarding the use of these capabilities.

For the actuarial profession, the use of AI in any form—from gathering public information and preparing documents to performing complex calculations—is governed by our adherence to our own Code of Professional Conduct (the Code) and actuarial standards of practice (ASOPs).

Applicability of the Code of Professional Conduct and AI

I’ve already alluded to the provision in Annotation 1.1 under Precept 1 of the Code regarding performing services with skill and care. This would include how and when you use any part of an output from an AI bot. Specifically, multiple articles have highlighted that bots like ChatGPT are prone to making errors or misrepresentations at this point in their evolution, so any use of the output must be fact-checked. This activity leans into Precept 1’s requirement that “An Actuary shall act honestly, with integrity and competence…”

Throughout the Code, each Precept begins with what “An Actuary” shall do, differentiating our responsibilities and work product from what simply comes from an AI program or model.

More specifically, Annotation 4.1 states, "An Actuary who issues an Actuarial Communication shall ensure that the Actuarial Communication clearly identifies the Actuary as being responsible for it."

ASOP No. 56, Modeling, already exists. It provides the following definition for what constitutes a model:

2.8 MODEL
A simplified representation of relationships among real world variables, entities, or events using statistical, financial, economic, mathematical, non-quantitative, or scientific concepts and equations. A model consists of three components: an information input component, which delivers data and assumptions to the model; a processing component, which transforms input into output; and a results component, which translates the output into useful business information.
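
To make the three components concrete, here is a minimal sketch in Python. The class and function names, the exposure amounts, and the assumed loss rate are purely illustrative assumptions of mine and are not drawn from the ASOP.

from dataclasses import dataclass

@dataclass
class ModelInput:
    # Information input component: data and assumptions delivered to the model
    exposures: list            # e.g., insured amounts (data)
    assumed_loss_rate: float   # an assumption supplied by the actuary

def process(model_input):
    # Processing component: transforms input into output
    return [amount * model_input.assumed_loss_rate for amount in model_input.exposures]

def results(output):
    # Results component: translates the output into useful business information
    return f"Expected losses total {sum(output):,.2f}"

sketch = ModelInput(exposures=[1_000_000, 250_000], assumed_loss_rate=0.02)
print(results(process(sketch)))    # Expected losses total 25,000.00

However simple, the sketch shows why an AI tool arguably fits the same frame: it takes an input (a prompt plus data), processes it, and returns results intended to be useful to the user.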

While AI is not specifically identified in this definition, in my opinion it certainly fits within the purview as well as the intent of the ASOP in holding the actuary responsible for its use. Further, the ASOP’s definition of “model risk” clearly covers the current and emerging implications of using AI:

2.9 MODEL RISK
The risk of adverse consequences resulting from reliance on a model that does not adequately represent that which is being modeled, or the risk of misuse or misinterpretation.

Reading through the ASOP, one finds that the key challenge for the actuary in using models they have not developed is addressed in section 3.4:

3.4 RELIANCE ON MODELS DEVELOPED BY OTHERS
If the actuary relies on a model designed, developed, or modified by others, such as a vendor or colleague, and the actuary has a limited ability either to obtain information about the model or to understand the underlying workings of the model, the actuary should disclose the extent of such reliance. In addition, the actuary should make a reasonable attempt to have a basic understanding of the model, including the following, as appropriate:

a. the designer’s or developer’s original intended purpose for the model;

b. the general operation of the model;

c. major sensitivities and dependencies within the model; and

d. key strengths and limitations of the model.

When relying on models developed by others, the actuary should make practical efforts to comply with other applicable sections of this standard.

For these questions, when using ChatGPT you can actually ask the program itself for this information, or rely on the disclaimer currently included on its website.
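
For instance, one could pose the section 3.4 items directly to the program. The short Python sketch below assumes the OpenAI Python client and an API key are available; the model name and the wording of the questions are my own illustrative choices, and any responses would still require the actuary's own review.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The four items from ASOP No. 56, section 3.4, posed as questions
section_3_4_questions = [
    "What was your designer's or developer's original intended purpose?",
    "How do you operate, in general terms?",
    "What are your major sensitivities and dependencies?",
    "What are your key strengths and limitations?",
]

for question in section_3_4_questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)

Documenting the answers alongside your work papers would be one practical way to support the disclosures that section 3.4 contemplates.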

The Professionalism Challenge

For many of us, ChatGPT and other language bots could be useful as a starting point for describing work product, drafting articles, and providing descriptive assistance in our work. However, at what point do you need to identify the program under ASOP No. 56 as part of your disclosure? If, for example, you used it to frame a response to a question or to supplement a Statement of Actuarial Opinion, do you need to recognize its use as a model, with the disclosure requirements set out in section 3.4 of ASOP No. 56? Or is there a point at which you have modified the output to such an extent that you believe the work product is solely yours?

Before writing this article, I posed a question to ChatGPT. My own direction ended up being more about how the obligations of the actuarial profession already provide the guardrails and ethical considerations through our Code and ASOPs. However, I thought I would also share my initial question and the ChatGPT response, both as disclosure of what I considered writing about and as a demonstration of the power of this language AI.

Q: Article on how the actuarial profession should view the enhancements in artificial intelligence?

ChatGPT Response:

Embracing Enhancements in Artificial Intelligence: A Paradigm Shift for the Actuarial Profession

Introduction: In recent years, advancements in artificial intelligence (AI) have revolutionized numerous industries, and the actuarial profession is no exception. As the actuarial field continues to evolve, it is essential for professionals to understand and embrace the enhancements brought by AI. This article explores the ways in which the actuarial profession should view and leverage AI to stay ahead in an increasingly data-driven world.

  1. Harnessing the Power of Data: AI offers actuarial professionals the ability to process and analyze vast amounts of data more efficiently than ever before. Actuaries can leverage AI algorithms to collect and analyze data from multiple sources, including structured and unstructured data, leading to more accurate and robust risk assessments. By harnessing AI's power, actuaries can identify hidden patterns, trends, and correlations, enabling them to make more informed decisions and improve the accuracy of their models.
  2. Enhancing Predictive Modeling: AI techniques, such as machine learning and deep learning, can significantly enhance predictive modeling in the actuarial profession. Actuaries can develop more sophisticated models by training AI algorithms on historical data, enabling them to project future trends and outcomes with greater precision. This not only enhances risk management but also helps actuaries make more accurate forecasts, leading to improved financial planning and decision-making.
  3. Automating Routine Tasks: AI can automate repetitive and time-consuming tasks traditionally performed by actuaries, allowing them to focus on higher-value activities. With the help of AI-powered tools, actuaries can streamline data gathering, data cleaning, and data validation processes. This automation not only increases efficiency but also reduces the risk of errors associated with manual work. By delegating routine tasks to AI, actuaries can dedicate more time to strategic analysis, innovation, and providing valuable insights to stakeholders.
  4. Enabling Advanced Risk Assessment: AI technologies enable actuaries to explore complex risk scenarios and assess the potential impact of various factors on insurance portfolios. By combining AI with advanced simulation techniques, actuaries can conduct stress tests, scenario analysis, and Monte Carlo simulations more effectively. This integration of AI helps actuaries evaluate risks from multiple angles, leading to better risk mitigation strategies and improved pricing and underwriting decisions.
  5. Addressing Ethical Considerations: As AI becomes more prevalent in the actuarial profession, ethical considerations become crucial. Actuaries must ensure that AI algorithms are transparent, fair, and unbiased. It is essential to address potential issues related to data privacy, security, and algorithmic bias. Actuaries should actively participate in the development and implementation of ethical guidelines to ensure that AI is used responsibly and to build trust with clients and policyholders.

Conclusion

The actuarial profession stands to gain immense benefits from embracing the enhancements brought by AI. By harnessing the power of data, enhancing predictive modeling, automating routine tasks, enabling advanced risk assessment, and addressing ethical considerations, actuaries can unlock new opportunities and add value in an increasingly data-driven landscape. Embracing AI is not about replacing actuaries but rather empowering them with powerful tools to make more informed decisions, deliver accurate insights, and navigate the complexities of the modern insurance industry. Actuaries who embrace AI will be well-positioned to shape the future of risk management and drive innovation in the actuarial profession.
