Tradecraft

Three Major Causes of Unfair Bias in Ratemaking

By Andrew Clark and Joshua Pyle

Insurers strive to avoid any real or perceived social bias. Beyond simply being good business practice, it is the right thing to do. That’s why state-level regulations and actuarial principles proscribe potential unfair discrimination in ratemaking by stipulating that rates should not be “excessive, inadequate or unfairly discriminatory.”

As a result, insurers work to avoid any activity that could be considered unfairly discriminatory toward a group of people. They also know it is better to proactively find and address unfair bias than to react to objections from regulators, lawmakers, and attorneys.

As society evaluates systemic racism’s impact, regulators, legislators, consumer groups, and others are calling on the insurance industry to carefully examine its business practices and to uncover and address unintended biases that are often not readily apparent.

A handful of states have responded and are actively addressing aspects of potential unfair bias in various insurance lines. In Colorado, as an example, the state’s insurance division is focusing on decisions and data input rather than the type of algorithm used. 

Concurrently, the National Association of Insurance Commissioners (NAIC) is also working to address social concerns related to unintentional bias in insurance practices. In July, it published an exposure draft of a model bulletin titled “Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers,” which would require insurers to implement a robust model risk management program.

Defining Bias

The NAIC document, published July 17, defines bias as “the differential treatment that results in favored or unfavored treatment of a person, group or attribute.” 

According to the National Institute of Standards and Technology, bias can be characterized in three broad categories: systemic, human, and statistical/computational. This article focuses on statistical/computational bias because it is arguably the most relevant from a tactical perspective, and addressing it can help mitigate the others.

Rooting out unfair bias requires insurers to identify potential unfair discrimination and evaluate model fairness, which depends on how well insurers can understand what drives decision-making in their modeling systems. Because unintended bias derives from human input decisions, it existed long before insurers began deploying model systems for automated decision-making. Therefore, unfairness in model systems often originates from historically biased data, not the modeling system itself. 

At the same time, it is humans who can find unfair biases in model systems. Beyond identifying and addressing bias, humans must develop governance for data and models. A strong data and model governance program anticipates areas of potential bias to ensure a more proactive approach earlier in the model lifecycle. 

Data governance ensures data quality, and frequently checking data will help protect modeling systems from unintended adverse impacts. As proposed by the NAIC exposure draft mentioned earlier, a robust model risk management governance structure provides a basis for holistically managing the development, implementation, and use of modeling systems.

The draft calls for insurers to have a governance framework to oversee their AI systems to prioritize “transparency, fairness, and accountability” in design and implementation. It is a principles-based approach built on model risk management best practices originally codified by the Board of Governors of the Federal Reserve System and published by the U.S. Treasury’s Office of the Comptroller of the Currency (OCC) in 2011.

The NAIC draft establishes requirements for model risk management governance policies, processes, rules, and procedures commensurate with the organization’s model risk. The document calls for an enterprise-wide model inventory as well as standards for development, documentation, validation, and deployment.

A holistic process for managing model risk and data quality across the model lifecycle is one of the first steps toward a control process that reliably mitigates bias in modeling systems. We believe the NAIC is moving in the right direction by incorporating the key principles published by the OCC, along with existing model risk management best practices, retaining only the highest-value elements.

Finding statistical bias in property and casualty ratemaking can be tricky. The key to avoiding “unfairly discriminatory” rates lies in determining what is and what is not justifiable differentiation when quantifying risk. Examination of loss costs and underlying data is paramount in establishing the justifiability of pricing differentiation. This explains why an insurer weighs increased claims and administration expenses against evidence of discrimination (even if unintended). 

As actuaries and statisticians know, correlation does not equal causation. The higher premium charged to residents of urban areas or suburbs, compared with affluent rural neighbors of different demographic makeup, does not necessarily mean the insurer is charging unfairly discriminatory rates. Detail and nuance in decision-making matter.

Removing unfair bias from systems entails avoiding blanket assumptions and determining whether actual loss data, among several other factors, justify the pricing differentiation among subgroups of policyholders. As insurers continue to deploy powerful models, catching bias in the input data, models, and results becomes increasingly critical.

Three Major Causes of Unfair Bias 

Data that is unfairly biased naturally introduces unfair bias into models. Therefore, it is essential to locate unfair bias in data before it is fed into a model system. Below are three significant causes of bias and suggestions on addressing each one.

No. 1—Data

When applying analytics, obtaining perfectly representative data is often challenging. For example, when building a model for individuals in Utah, actuaries and data scientists must ensure that the data selected for training the model represents or matches the actual Utah population in all material respects. Characteristics should include the ratio of men to women, socioeconomic background, and the ages of Utah residents.

Carefully examining the underlying data, its source, and its overall quality will help detect unrepresentative data. To help verify whether the data is representative of the target population, an actuary should check it against alternative, trustworthy datasets beyond the source used to create the model.
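To make this concrete, the following Python sketch (illustrative only, not from any particular insurer's workflow) compares the age mix of a hypothetical training file against trusted population shares using a chi-square goodness-of-fit test. The file name, column name, and reference shares are assumptions.

# Minimal sketch: compare a training sample's demographic mix against trusted
# reference proportions with a chi-square goodness-of-fit test.
# The file name, column name, and reference figures below are hypothetical.
import pandas as pd
from scipy.stats import chisquare

train = pd.read_csv("utah_training_data.csv")          # hypothetical file

# Reference shares for the target population (e.g., from census tables).
reference_shares = {"18-34": 0.32, "35-54": 0.33, "55+": 0.35}   # assumed values

observed = train["age_band"].value_counts()
observed = observed.reindex(reference_shares.keys(), fill_value=0)
expected = pd.Series(reference_shares) * observed.sum()

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Age mix differs materially from the reference population; investigate.")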

Insurance companies often augment their data with external sources. As one simple example, insurers can build proprietary datasets to achieve specific objectives through sites such as PollFish or purchase such datasets from Carto or EASI Demographics. Notably, external data also needs to be assessed for data bias issues.

A more subtle aspect of data management is minimizing measurement errors. This is a significant concern when creating fair and high-performing modeling systems. For example, the 2020 U.S. Census, a key data input for many actuarial calculations, is estimated to have undercounted Black Americans and Hispanics.

Unfortunately, measurement errors can occur and cannot be fully mitigated. Performing data validation checks against multiple data sources can help build confidence that the data is as representative as possible for the desired use case. Using alternative survey techniques beyond the U.S. Census, for example, makes it possible to isolate areas in which groups are underrepresented, allowing the addition of appropriate weighting factors as needed. 
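One simple way to apply such weighting factors is post-stratification: each record is weighted by the ratio of its group's trusted population share to its share of the sample. The sketch below is illustrative; the group labels and shares are invented for the example.

# Hedged sketch: derive simple post-stratification weights so underrepresented
# groups count more heavily. Group labels and shares are illustrative only.
import pandas as pd

sample = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 300})   # 70% / 30% sample
population_share = {"A": 0.60, "B": 0.40}                      # trusted benchmark

sample_share = sample["group"].value_counts(normalize=True)
sample["weight"] = sample["group"].map(lambda g: population_share[g] / sample_share[g])

print(sample.groupby("group")["weight"].first())
# Group B receives a weight above 1.0, offsetting its underrepresentation.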

During collection, check for data cleanliness, accuracy, and consistency. Evaluating completeness helps locate missing data, which can otherwise render large swaths of a dataset unusable, and helps circumvent problems such as historical gaps. Importantly, building on incomplete data may produce model systems that rely on skewed inputs and depart from the modeler's intent.
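A completeness audit can be as simple as the sketch below, which flags columns with material missingness and rows missing key rating fields before any modeling begins. The file and column names are hypothetical.

# Illustrative completeness check: flag columns with material missingness
# and rows missing key rating fields before any modeling begins.
import pandas as pd

policies = pd.read_csv("policy_history.csv")                  # hypothetical file
key_fields = ["territory", "vehicle_age", "prior_claims"]     # assumed columns

missing_by_column = policies.isna().mean().sort_values(ascending=False)
print(missing_by_column[missing_by_column > 0.05])            # columns >5% missing

incomplete_rows = policies[policies[key_fields].isna().any(axis=1)]
print(f"{len(incomplete_rows)} rows are missing at least one key rating field")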

No. 2—Algorithmic 

Algorithmic bias is complicated and can be challenging to locate. Even if input data is free of measurement errors and representative of the target population, unbalanced data can still be an issue for certain algorithms.

For instance, consider a hypothetical dataset of Texans that contains 100 millionaires with pickup trucks. The data may be representative and free from measurement errors. However, certain algorithms rely on the average outcomes for subgroups of policyholders when making predictions. Thus, rates for those 100 Texans could be biased because they would be based largely on the experience of other insureds.

Therefore, unless the data is pristine and representative of a socioeconomically diverse target population (having both is highly improbable), available options are to test and resample the data often (to ensure it is more representative of a diverse community) or to introduce multi-objective optimization. 
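As one illustration of the resampling option, the sketch below up-samples a thinly represented subgroup so that subgroup averages are not dominated by the rest of the book. The file name, segment flag, and sample sizes are invented for the example.

# Hedged sketch: up-sample a thin subgroup before fitting, so its experience
# is not swamped by the rest of the book. Labels and sizes are illustrative.
import pandas as pd
from sklearn.utils import resample

book = pd.read_csv("texas_book.csv")                      # hypothetical file
thin = book[book["segment"] == "millionaire_pickup"]      # assumed flag, ~100 rows
rest = book[book["segment"] != "millionaire_pickup"]

thin_upsampled = resample(thin, replace=True, n_samples=2000, random_state=0)
balanced = pd.concat([rest, thin_upsampled])
print(balanced["segment"].value_counts())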

One of the most effective methods for explicitly training a model not to be biased is known as equalized odds. Drawing on optimization theory, the algorithm is programmed to build the most accurate model and then fine-tune it for fairness through multi-objective optimization. For example, the approach could require that qualified male and female applicants acquire loan approvals at an equal rate (and that unqualified applicants are mistakenly approved at an equal rate). Achieving a fair outcome can therefore mean optimizing the algorithm to be as accurate as possible while meeting the criteria for equalized odds.
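The measurement step behind equalized odds can be illustrated directly: compare true-positive and false-positive rates across groups and look at the gaps. The sketch below uses small, invented arrays purely to show the calculation; a multi-objective fit would then trade accuracy against shrinking those gaps.

# Hedged sketch: measure an equalized-odds gap, i.e., compare true-positive
# and false-positive rates across two groups. Arrays below are illustrative.
import numpy as np

def rates(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = y_pred[y_true == 1].mean()   # true-positive rate
    fpr = y_pred[y_true == 0].mean()   # false-positive rate
    return tpr, fpr

# Hypothetical decisions (1 = approved) and actual qualification (1 = qualified).
y_true_m, y_pred_m = [1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]
y_true_f, y_pred_f = [1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]

tpr_m, fpr_m = rates(y_true_m, y_pred_m)
tpr_f, fpr_f = rates(y_true_f, y_pred_f)
print(f"TPR gap = {abs(tpr_m - tpr_f):.2f}, FPR gap = {abs(fpr_m - fpr_f):.2f}")
# Equalized odds is (approximately) satisfied when both gaps are near zero.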

No. 3—Deployment 

Imagine a model built on data that is representative of the underlying target population and free of measurement error. Certain algorithmic approaches are more flexible than others and may be less prone to bias, and the training data can be up-sampled or down-sampled with multi-objective optimization constraints applied.

After deploying the model to customer-facing applications, the distribution of the customers receiving quotes may gradually change relative to the original target population. In other words, the input data may drift—which might, in turn, introduce bias. Modelers can evaluate whether adjustments are necessary by incorporating an early warning system to monitor and detect input drift. 
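One widely used early-warning signal for input drift is the population stability index (PSI), which compares the distribution of an input at training time with its distribution in recent quotes. The sketch below is illustrative; the data is simulated and the 0.25 threshold is only a common rule of thumb, not a regulatory standard.

# Hedged sketch: a population stability index (PSI) check as a simple
# early-warning signal for input drift. Data and bins are illustrative.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample of one input."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

baseline = np.random.normal(40, 10, 10_000)   # e.g., driver age at training time
recent = np.random.normal(45, 12, 10_000)     # quotes received after deployment

score = psi(baseline, recent)
print(f"PSI = {score:.3f}")   # common rule of thumb: above 0.25 suggests material drift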

Related to input drift is the concept of model drift, which occurs when the relationship between the inputs and outputs changes. This can happen when subtle input drift goes undetected but the model is sensitive to changes in its data inputs. As with input drift, distributional monitoring is essential for determining whether model drift occurs over time, so reviewing both the results customers see and the underlying training data remains vital.

Modeling for Fairness

Because there are several potential ways unfair bias can emerge in data and models, and because state regulators are reviewing rate filings with increasing scrutiny, insurers are under more significant pressure to demonstrate that their recommendations are free of unfair bias. 

With these conditions in mind, insurers should review their data to ensure it meets regulatory requirements. Although some believe it is better to deploy experimental model systems because regulatory reviews can take too long, we believe it is better to validate model inputs up front to reduce potential regulatory concerns about a model’s creation and characteristics.

Even when adapting a model to ensure fairness, there is a risk of creating new unfair bias, which can be subtle and depends on perspective and context. A territory of predominantly low-income residents can, for example, be identified as having high accident frequency even though that frequency is partly driven by exurbanite commuters passing through.

Beyond regulation, insurers have an obvious incentive to validate models, as doing so improves accuracy and maximizes model potential. There are several ways to validate models. One simple approach compares different datasets and metrics with expected and desired outcomes.
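As one illustration of this kind of comparison, the sketch below computes actual-to-expected ratios by segment on a holdout dataset; segments far from 1.0 warrant review. The file and column names are assumptions made for the example.

# Hedged sketch: one simple validation step, comparing model-indicated loss costs
# to actual experience by segment on a holdout dataset. Column names are assumed.
import pandas as pd

# Hypothetical file with columns: segment, actual_loss, predicted_loss
holdout = pd.read_csv("holdout_experience.csv")

ave = holdout.groupby("segment")[["actual_loss", "predicted_loss"]].sum()
ave["actual_to_expected"] = ave["actual_loss"] / ave["predicted_loss"]
print(ave.sort_values("actual_to_expected"))
# Segments with ratios far from 1.0 may be over- or under-charged relative
# to their actual experience and deserve a closer look.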

Few models are entirely immune to the algorithmic bias that can result in unfair treatment of certain groups. By taking advantage of technologies such as machine learning and other emerging advancements, actuaries can identify patterns of bias in data and models that may be overlooked by traditional methods and make necessary adjustments to ensure fairness for all policyholders. 

Conclusion

Unfair bias in data and models can exist for a myriad of reasons, from input data to algorithm selection to model deployment. Fully avoiding all unintended bias is extremely difficult, but being aware of the risk and establishing controls and processes to detect and mitigate it is in the grasp of all insurers.

While the NAIC’s proposed standards remain in progress, insurers would do well to begin adopting best practices for data and modeling system activities. Actuaries will need to balance diligence in detecting any hint of unfairness with ensuring profitable rates. Those who can do so effectively and efficiently will ultimately have a competitive advantage.

ANDREW CLARK is co-founder and chief technology officer of Monitaur, a company that provides AI governance software. JOSHUA PYLE, FCAS, has nearly 17 years of actuarial experience within the insurance, insurtech, and tech domains.
