Special Section

Generative AI—Applications for Actuaries

By Alex Fykas

Generative AI (GenAI) is transforming industry after industry, and the actuarial field is no exception. The ability to generate text, images, simulations, and even entirely new data sets has opened up significant opportunities in risk modeling, financial reporting, customer service, and beyond. However, for actuaries—whose work hinges on risk assessment, financial prudence, and regulatory compliance—GenAI also presents new challenges, including ethical dilemmas and increased scrutiny from regulators.

This article will explore the history and evolution of generative AI, its applications in actuarial science, and the ethical and regulatory issues associated with this technology. We will also look to the future of AI in actuarial work, identifying how professionals can navigate the opportunities and risks that come with these tools.

A Brief History of Generative AI

The roots of generative AI lie in the broader history of artificial intelligence. AI research dates back to the 1950s, with early efforts aimed at building machines that could replicate human problem-solving. Early breakthroughs, such as machine learning algorithms and neural networks, laid the groundwork for today’s more advanced models. However, the introduction of Generative Adversarial Networks (GANs) by Ian Goodfellow and colleagues in 2014 is often cited as the beginning of the modern generative AI revolution. GANs pit two neural networks against each other—a generator that produces new data and a discriminator that judges how realistic it looks—leading to AI models that can create entirely new and convincing data, images, or simulations (Goodfellow et al., 2014).
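The adversarial setup can be made concrete with a deliberately tiny, hypothetical sketch: a one-line linear "generator" learns to shift random noise toward a target distribution while a logistic "discriminator" tries to tell real samples from generated ones. This illustrates the idea only; real GANs use deep networks and far richer data, and every parameter below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1), the distribution the generator must learn.
def real_batch(n):
    return 4.0 + rng.standard_normal(n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 128

for _ in range(4000):
    z = rng.standard_normal(batch)
    x_real, x_fake = real_batch(batch), a * z + b

    # Discriminator step: minimize -log d(real) - log(1 - d(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step (non-saturating loss): minimize -log d(fake).
    d_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_fake) * w           # dLoss/dx_fake
    a, b = a - lr * np.mean(grad_x * z), b - lr * np.mean(grad_x)

print(f"generated mean roughly {b:.2f} (target 4.0)")
```

After a few thousand alternating updates the generated samples' mean drifts toward the real data's mean of 4: the generator improves precisely because the discriminator keeps pointing out how its output differs from reality.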

Fast-forward to 2020, and OpenAI’s GPT-3 pushed the boundaries of natural language processing (NLP), ushering in a new era of AI-driven text generation. With its 175 billion parameters, GPT-3 could produce human-like text and was quickly adapted for use in sectors such as customer service, automated reporting, and even creative writing (Brown et al., 2020). Now, in 2024, models like GPT-4 and other state-of-the-art NLP systems are helping actuaries automate complex processes, develop more accurate risk models, and enhance productivity across the board.

Current Practices: How Generative AI Is Shaping Actuarial Science

Advanced risk modeling and simulations

Generative AI offers actuaries a new level of sophistication in risk modeling. Traditional actuarial models rely on historical data and fixed assumptions, but generative AI allows for more dynamic and granular simulations. For example, actuaries working in property and casualty insurance can use generative models to simulate climate change impacts on a broader scale, incorporating a vast number of variables such as fluctuating economic conditions, evolving environmental regulations, and changing demographic patterns. These complex simulations can reveal hidden risks that would have been missed using traditional methods (Lloyd’s of London, 2020).

In life insurance and pensions, AI models can generate predictions about long-term mortality and morbidity trends, helping actuaries design more robust products that account for uncertainty. The flexibility offered by generative models means that actuaries can assess not only the “most likely” scenarios but also tail events—those unlikely, high-impact scenarios that are critical in stress testing financial solvency (Swiss Re Institute, 2022).
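The flavor of such scenario generation can be sketched in a few lines. The sketch below is purely illustrative: it draws thousands of hypothetical annual-loss scenarios from invented frequency and severity distributions, with a crude climate trend factor, and reads off both the expected loss and a 99.5th-percentile tail loss of the kind used in solvency stress testing. None of the distributions or parameters come from a real portfolio.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 20_000

# Hypothetical drivers: Poisson claim counts whose rate drifts with a
# climate trend factor, and lognormal claim severities.
climate_factor = rng.normal(loc=1.0, scale=0.1, size=n_scenarios).clip(min=0.5)
counts = rng.poisson(lam=20 * climate_factor)
mean_severity = 50_000.0

# Aggregate annual loss per scenario: sum of that scenario's claim severities.
losses = np.array([
    rng.lognormal(mean=np.log(mean_severity) - 0.5, sigma=1.0, size=k).sum()
    for k in counts
])

expected_loss = losses.mean()
var_995 = np.quantile(losses, 0.995)   # 99.5th percentile, Solvency II style
print(f"expected loss: {expected_loss:,.0f}   99.5% tail loss: {var_995:,.0f}")
```

In practice the simulated drivers would be far richer, and might themselves be produced by a trained generative model, but the structure of the exercise (simulate many scenarios, then study the tail) is the same.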

AI in underwriting and fraud detection

Traditionally, underwriters would assess the risks associated with insuring individuals or assets by evaluating past claims and demographic data. But with the advent of generative AI, this process has become far more streamlined. Generative models can sift through vast datasets to uncover patterns that humans may overlook, providing underwriters with new insights into potential risks. For example, AI can evaluate the risk associated with insuring businesses by analyzing real-time market data, credit reports, and even social media activity to generate risk scores (Lemonade, 2021).

AI and Climate Change Risk Modeling
Generative AI allows actuaries to model the future impacts of climate change on insurance portfolios. For example, AI can simulate potential flood risks over the next century by integrating variables like temperature rise, precipitation patterns, and regional economic developments. This helps insurers develop more precise pricing models that reflect the increasing risks posed by extreme weather events (Lloyd’s of London, 2020).

AI is also a game-changer in fraud detection. By analyzing claims data in real time, AI can quickly spot suspicious patterns that could indicate fraud. For instance, generative models might detect that an unusually high number of claims have come from a particular area or identify inconsistencies between a customer’s claims history and current behavior (Lemonade, 2021). This application of AI not only saves costs but also builds trust with clients, as fraudulent claims can be flagged and resolved more quickly.
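A full generative fraud model is beyond a short example, but the core pattern-spotting step can be caricatured with a simple statistical flag. The regions and counts below are invented; the point is only that outliers in claims data can be surfaced automatically for review.

```python
import statistics

# Hypothetical weekly claim counts by region; one region spikes.
claims_by_region = {
    "North": 12, "South": 14, "East": 11, "West": 13,
    "Central": 12, "Coastal": 41,   # suspicious spike
}

counts = list(claims_by_region.values())
mean, stdev = statistics.mean(counts), statistics.stdev(counts)

# Flag regions whose count sits more than 2 standard deviations above the mean.
flagged = [r for r, n in claims_by_region.items() if (n - mean) / stdev > 2.0]
print("flagged for review:", flagged)
```

A production system would use far richer features (claim text, timing, claimant history) and a learned model rather than a z-score, but the flag-then-review workflow is the same.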

Financial Reporting, Regulatory Compliance, and Governance

AI-powered financial reporting

Generative AI is revolutionizing financial reporting. In sectors where regulatory compliance is a major concern, such as insurance, AI can help streamline the reporting process by automating data analysis, validation, and presentation. For example, AI can quickly generate reports that meet the regulatory requirements of frameworks like Solvency II or IFRS 17, minimizing the potential for human error (Deloitte, 2022).

A large financial institution could use AI to automate quarterly solvency reports, saving hundreds of hours in manual work. In addition, AI-generated reports are more consistent and can be produced in real time, offering actuaries and other financial professionals the ability to make quicker decisions based on up-to-the-minute data. AI is also being used to flag discrepancies in financial statements and suggest corrections, ensuring that reports align with both internal governance and external regulatory standards (Deloitte, 2022).
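A toy version of such a consistency check might look like the following. The field names, figures, and rules are hypothetical (a real Solvency II or IFRS 17 validation suite would contain hundreds of such checks), but the flag-and-report structure is representative.

```python
def validate_statement(stmt: dict) -> list[str]:
    """Flag internal inconsistencies in a simplified balance sheet."""
    issues = []
    if abs(stmt["assets"] - (stmt["liabilities"] + stmt["equity"])) > 0.01:
        issues.append("balance sheet identity violated: assets != liabilities + equity")
    if stmt["technical_provisions"] > stmt["liabilities"]:
        issues.append("technical provisions exceed total liabilities")
    if stmt["equity"] < 0:
        issues.append("negative equity: solvency review required")
    return issues

report = {
    "assets": 1_000.0,
    "liabilities": 820.0,
    "equity": 150.0,               # 820 + 150 = 970, not 1,000: discrepancy
    "technical_provisions": 600.0,
}
for issue in validate_statement(report):
    print("FLAG:", issue)
```

The value of automating such checks is less any single rule than running all of them consistently, on every report, every time.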

Regulatory compliance and AI

As AI systems take on more significant roles in financial services, regulatory bodies are paying close attention. Actuaries must ensure that their use of generative AI complies with both existing regulations and new standards designed specifically for AI. For instance, the European Union’s AI Act, proposed in 2021 and formally adopted in 2024, sets out stringent rules for high-risk AI applications, including in finance. The act mandates that companies provide transparency in their AI models, ensuring that any decisions made by AI systems can be traced and explained (European Commission, 2021).

For actuaries, this means that generative AI models used for risk modeling, pricing, or underwriting must be auditable. The data used to train these models must be transparent and free from biases that could lead to unfair outcomes. In the United States, FINRA and other regulatory bodies are beginning to introduce rules around the use of AI in trading and financial advice (FINRA, 2023). Actuaries will need to stay current on these evolving regulations to ensure their models and processes remain compliant.

Ethical Considerations for Generative AI in Actuarial Science

The ethics of AI bias and fairness

One of the most pressing ethical concerns surrounding generative AI is bias. AI models are trained on vast datasets, but these datasets can reflect societal biases that lead to unfair outcomes. For example, if an AI underwriting model is trained primarily on historical data that reflects gender or racial disparities, it could perpetuate those biases in its decisions (Buolamwini et al., 2018). This is particularly concerning for actuaries in fields like life insurance, where underwriting decisions impact people’s access to financial protection.

To mitigate bias, actuaries must engage in regular audits of AI models to ensure fairness. This might involve analyzing the data used for training the model, as well as monitoring the outcomes the model produces. Transparency is key here, as it enables actuaries to explain their decision-making processes to both clients and regulators.

Moreover, actuaries should advocate for diversity in the datasets used to train AI models. By ensuring a wider range of demographic and behavioral inputs, they can create more equitable systems that reduce the risk of discrimination.
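A first-pass fairness audit can be as simple as comparing outcome rates across groups. The sketch below uses invented underwriting outcomes and the common "four-fifths" heuristic as the review threshold; a real audit would apply proper statistical tests and multiple fairness metrics.

```python
# Hypothetical underwriting outcomes: (group, approved) pairs.
outcomes = [
    ("A", True), ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True), ("B", False),
]

# Approval rate per group.
rates = {}
for group in {g for g, _ in outcomes}:
    decisions = [ok for g, ok in outcomes if g == group]
    rates[group] = sum(decisions) / len(decisions)

# Demographic-parity ratio: lowest approval rate over highest.
parity = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {parity:.2f}")

# The "four-fifths" heuristic flags ratios below 0.8 for closer review.
if parity < 0.8:
    print("audit flag: approval rates differ materially across groups")
```

A flag like this does not prove discrimination; it tells the actuary where to look, and where an explanation will be expected.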

Transparency and explainability in AI

A central tenet of actuarial professionalism is transparency, and this extends to the use of AI. One of the main challenges with advanced AI models—especially deep learning systems—is that they often operate as “black boxes,” producing outputs that even the developers struggle to fully understand. This is problematic for actuaries, who are required to provide clear explanations for the models they use in pricing, underwriting, or risk management (McKinsey & Company, 2023).

Explainable AI (XAI) is one potential solution to this issue. Techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are gaining traction in the field. These methods break down complex AI models into more understandable components, allowing actuaries to explain which factors most influenced a model’s decision (Algorithmic Justice League, 2020).
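For small models the Shapley values behind SHAP can be computed exactly by enumerating coalitions, which makes the intuition concrete. The sketch below uses a hypothetical three-factor linear pricing model; for a linear model each feature's Shapley value reduces to its weight times its deviation from the baseline, which provides a handy sanity check.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear risk model: price = intercept + w . x
weights = {"age": 12.0, "claims_history": 80.0, "region_risk": 30.0}
baseline = {"age": 40.0, "claims_history": 1.0, "region_risk": 2.0}   # average applicant
applicant = {"age": 55.0, "claims_history": 3.0, "region_risk": 2.5}

def predict(values):
    return 100.0 + sum(weights[f] * values[f] for f in weights)

def shapley(feature):
    """Exact Shapley value: weighted average marginal contribution over coalitions."""
    others = [f for f in weights if f != feature]
    n, phi = len(weights), 0.0
    for size in range(len(others) + 1):
        for coalition in combinations(others, size):
            # Coalition features take the applicant's values, the rest the baseline.
            base = {f: (applicant[f] if f in coalition else baseline[f]) for f in weights}
            with_f = dict(base, **{feature: applicant[feature]})
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (predict(with_f) - predict(base))
    return phi

contributions = {f: shapley(f) for f in weights}
print(contributions)  # for a linear model this equals weight * (value - baseline)
```

Exact enumeration scales exponentially in the number of features, which is why the SHAP library relies on sampling and model-specific shortcuts; the quantity being estimated, however, is exactly the one computed here.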

The Future of Generative AI: What’s Next for Actuaries?

Looking ahead, the role of AI in actuarial science will continue to grow. Large language models (LLMs) like GPT-4 and beyond will increasingly be integrated into actuarial workflows, automating complex tasks such as scenario analysis, stress testing, and regulatory reporting. However, actuaries must remain vigilant, ensuring that these AI tools are deployed ethically and responsibly.

One area poised for significant development is the use of AI in ESG (environmental, social, and governance) analysis. Actuaries are already using AI to model the long-term impacts of climate change, but future applications could delve even deeper, incorporating a wider range of social and governance factors into risk assessments (Deloitte, 2022). AI will also be used to enhance cyber risk management, an area where actuarial professionals are increasingly focusing their attention as cyber threats evolve.

Moreover, as AI regulations mature, actuaries will play a pivotal role in shaping best practices and compliance frameworks. By taking a leadership position in the ethical development and deployment of AI, actuaries can ensure that their profession continues to innovate while upholding the highest standards of accountability and transparency.

Conclusion

Generative AI can be a transformative force in the actuarial space, offering the ability to enhance risk modeling, automate reporting, and improve decision-making. However, with this power comes great responsibility. Actuaries must not only embrace AI’s potential but also ensure that it is used ethically and transparently, adhering to the profession’s rigorous standards.

As we move into the future, the continued integration of AI into actuarial workflows will present both opportunities and challenges. Actuaries must stay informed, adaptable, and always guided by the core principles of their profession. By doing so, they can lead the way in responsible AI adoption, ensuring that technology serves to enhance—not undermine—the values that have long defined their work.

Case Study:

AI in Insurance Underwriting—Lemonade’s Disruptive Use of GenAI

Lemonade, a relatively new insurance player, has made waves by integrating advanced AI technologies throughout its operations. For actuaries, Lemonade’s use of generative AI offers a practical example of how cutting-edge technology can be applied to streamline underwriting processes, enhance fraud detection, and improve customer experiences.

The Role of GenAI in Lemonade’s Underwriting Process

Traditional insurance underwriting is a time-consuming task that relies heavily on manual data collection and analysis. Lemonade, however, has managed to automate much of this process with the help of AI.

Using generative AI, Lemonade’s underwriting engine sifts through massive amounts of data to create risk profiles for potential customers. Instead of relying solely on static information, the AI system continuously learns and adapts based on new data inputs. For instance, when a customer applies for homeowners’ insurance, Lemonade’s AI system generates a risk score that takes into account not only the property’s location and the applicant’s history, but also broader environmental factors, such as weather patterns or local crime rates. This holistic approach allows the company to tailor its policies more accurately and price them in a way that reflects the true risk.

Enhancing Fraud Detection with AI

Insurance fraud is a multi-billion-dollar issue globally, and traditional methods for identifying fraud can be slow and ineffective. Lemonade’s AI systems are trained on vast datasets, enabling the company to identify patterns in fraudulent claims that would be difficult for a human investigator to detect.

For example, when a customer submits a claim, the AI immediately assesses the claim for potential red flags. The GenAI system analyzes not only the details of the current claim but also cross-references it with past claims data and even external sources like social media activity. In one case, Lemonade’s AI flagged a claim that appeared to be fraudulent because the customer’s social media activity contradicted their claim history. The system automatically rejected the claim.

Generative AI also helps Lemonade speed up legitimate claims. While traditional insurance companies might take days or weeks to process a claim, Lemonade can settle some claims in just minutes—AI models can instantly generate a decision based on the customer’s policy, claim data, and previous interactions with the system. Customers who submit straightforward claims can receive a payout almost immediately, enhancing customer satisfaction while reducing administrative costs.
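The fast-track logic described above can be caricatured as a triage function: an anomaly score (standing in here for a model's fraud assessment) routes suspicious claims to investigation, small clean claims to instant payout, and everything else to a human. All thresholds and fields are invented for illustration and are not Lemonade's actual rules.

```python
def triage(claim: dict) -> str:
    """Route a claim: instant payout, human review, or fraud investigation."""
    # Hypothetical anomaly score in [0, 1], produced by an upstream model.
    if claim["anomaly_score"] > 0.9:
        return "investigate"
    # Small claims on active policies with a clean history settle instantly.
    if (claim["amount"] <= 1_000
            and claim["policy_active"]
            and claim["prior_flags"] == 0):
        return "auto_pay"
    return "human_review"

claims = [
    {"amount": 250, "policy_active": True, "prior_flags": 0, "anomaly_score": 0.10},
    {"amount": 250, "policy_active": True, "prior_flags": 0, "anomaly_score": 0.95},
    {"amount": 8_000, "policy_active": True, "prior_flags": 1, "anomaly_score": 0.20},
]
print([triage(c) for c in claims])  # ['auto_pay', 'investigate', 'human_review']
```

The design point is that automation and control are not in tension: the rules are explicit, ordered, and auditable, and any claim the model is unsure about falls through to a person.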

Adhering to Professional Standards in AI Use

Lemonade is subject to the same regulatory scrutiny as any other insurance company, meaning that its AI systems must be transparent and explainable. The company has worked to ensure that its AI models are auditable, meaning that it can trace back decisions to specific data inputs and rules—a critical requirement in the actuarial profession.

Furthermore, Lemonade’s AI system incorporates feedback loops that enable constant model validation. Actuaries and data scientists at the company regularly review the AI’s decision-making processes to ensure they align with ethical standards and do not introduce unintended biases into the underwriting process.

Lessons for Actuaries

Lemonade’s successful use of generative AI offers several key takeaways for actuaries:

  1. Automation doesn’t mean loss of control: AI can automate complex tasks like underwriting while still adhering to actuarial principles of transparency and accountability.
  2. Fraud detection can be significantly improved: By using AI to spot anomalies and inconsistencies in claims data, insurers can reduce fraudulent claims and save on operational costs.
  3. AI must be auditable: For actuaries, ensuring that AI models are explainable and compliant with regulatory standards is critical, and Lemonade’s AI system offers a blueprint for how this can be achieved.

Lemonade’s use of generative AI is a prime example of how the insurance industry is embracing technology to improve efficiency, reduce costs, and provide better services to customers.

References

  • Algorithmic Justice League (2020). “The Fight Against Bias in AI Systems.” AJL website.
  • Brown, T. et al. (2020). “Language Models Are Few-Shot Learners.” Advances in Neural Information Processing Systems.
  • Buolamwini, J., and Gebru, T. (2018). “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the Conference on Fairness, Accountability, and Transparency.
  • Deloitte (2022). “The Role of AI in ESG Risk Management.” Deloitte Reports.
  • European Commission (2021). “Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence (AI Act).” European Union.
  • FINRA (2023). “Artificial Intelligence in Financial Services: Regulatory Challenges and Opportunities.”
  • Goodfellow, I. et al. (2014). “Generative Adversarial Nets.” Advances in Neural Information Processing Systems.
  • Lemonade (2021). “How AI Detects Insurance Fraud.” Lemonade Blog.
  • Lloyd’s of London (2020). “The Impact of Climate Change on Insurance.” Lloyd’s Reports.
  • McKinsey & Company (2023). “AI in Risk Management: The Next Frontier for Financial Institutions.”
  • Swiss Re Institute (2022). “How AI Is Revolutionizing Risk Management in Insurance.”