Feature

Data Disasters: Learning From History—Case Studies in Analytic Decision-Making

By Kurt J. Wrobel

History books are filled with decisions that helped turn an important event into a victory or a defeat—whether that event took place in the arena of war, finance, or politics. Outcomes in these areas have had a profound impact on our world today; the composition of countries and the structure of economic systems can be traced to decisions made in these realms.

Lessons can also be learned from these decisions.

As actuaries, we can learn from the successes and failures in how leaders approached important analytic decisions. Failures in particular can be a wonderful learning opportunity—and an underappreciated one. They provide specific guidance on what—and what not—to do, and offer insights into how to approach a decision so that the same unfavorable outcome does not occur again.

To better illustrate these ideas, this article will focus on case studies in three broad topics—war, finance, and politics—where poor decisions were driven, in part, by faulty analytic methods. After looking at these case studies, I’ll highlight several lessons actuaries can apply in their everyday work.

Robert McNamara and the Vietnam War

By almost any measure, the Vietnam War was a failure. After devoting enormous resources and tragically losing more than 50,000 soldiers, the United States eventually had to leave Vietnam without a military victory. While many factors led to the defeat, the prosecution of the war by Robert McNamara—the secretary of defense during the initial phases of the war—was a contributing factor.

After serving in the U.S. Army Air Forces and then building a successful career at the Ford Motor Company, Robert McNamara became secretary of defense in the Kennedy administration. Throughout that career, McNamara built a reputation for using statistics to improve decision-making and operational performance. He was also a member of the analytic group dubbed the “Whiz Kids” for its use of statistics to improve operations at Ford.

Drawing on this managerial experience, McNamara formed an analytic group at the Defense Department charged with collecting data and developing key metrics to manage the Vietnam effort. The group developed a report called the “Measurement of Progress System” (MPS) to measure the military’s success in waging the war.

This management system was particularly important because the Vietnam conflict lacked the clear lines of resistance of past wars. In World War II, for example, the progress of the war could be measured simply by quantifying the land taken from the enemy. The insurgency in Vietnam required new metrics, and the Defense Department analytic group responded with 71 unique measures, ranging from difficult-to-quantify items like “roads adequately secured” and “bases neutralized” to a seemingly more objective metric—the “kill ratio” (Vietnamese casualties divided by U.S. casualties).[i]

As highlighted in the book The War Managers, this management approach also gave military officers an incentive to embellish the number of Vietnamese casualties—a widespread practice used to look better in the eyes of military leadership. The problem was magnified by senior leaders who used this wide range of subjective data, along with the kill ratio, to construct their preferred story of the war.[ii] Taken together, the selectively biased and subjective data helped create a favorable story that painted an inaccurate picture of the “facts.”

The problem was compounded by senior leaders’ lack of a holistic understanding of the war and of the data underpinning the story their analysts were constructing. McNamara accepted that story at face value without sufficiently challenging either the data or the conclusions of his analytic team. Without truth-tellers to honestly report the facts on the ground, he was left to make decisions without a real understanding of the war’s progress.

Value at Risk and the 2008 Financial Crisis

The 2008 financial crisis was a disaster. It led to an enormous loss in stock market value and one of the worst recessions since the Great Depression. The causes were complex and numerous, but a single metric—Value at Risk (VaR)—and its use by investment banks played an important part in the crisis.

As investment banks became increasingly complex, senior managers and regulators looked for methods to quantify the extent of risk a bank had accepted. A single number would help regulators quickly assess the risk of an organization and would allow managers to make immediate changes to mitigate that risk. In response, risk management experts developed the VaR metric. This simplifying figure estimates a loss threshold that should be exceeded only rarely over a defined horizon. For example, a one-day VaR of $50 million at a 95 percent confidence level implies a 95 percent chance that the day’s loss will stay below $50 million.
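To make the mechanics concrete, here is a minimal sketch of a one-day historical-simulation VaR calculation in Python. The confidence level, the simulated profit-and-loss figures, and the dollar scale are illustrative assumptions, not figures from any actual bank or from the sources cited here.

    import numpy as np

    def historical_var(daily_pnl, confidence=0.95):
        # Historical-simulation VaR: the loss threshold that was not
        # exceeded on `confidence` percent of the observed trading days.
        cutoff = np.quantile(daily_pnl, 1.0 - confidence)
        return -cutoff  # report VaR as a positive loss amount

    # Illustrative example: 500 simulated "calm" trading days, in $ millions.
    rng = np.random.default_rng(0)
    pnl = rng.normal(loc=0.0, scale=10.0, size=500)
    print(f"95% one-day VaR: ${historical_var(pnl):.1f} million")

Note that the answer depends entirely on the window of history supplied to the calculation, which is precisely the weakness described next.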

This estimate relied on recent historical data as the basis for quantifying the underlying risk in the portfolio. That reliance ensured that unknown “black swans”—low-probability events unlikely to appear in the most recent historical period—would not be considered in the analysis. With this focus on the most recent time period, the risk of asset classes that had enjoyed a period of relative calm was also underestimated.

This mechanical approach did not, however, consider qualitative factors or other important metrics in measuring the actual extent of the risk held by an investment bank.

In addition, because many investment banks limited the risk of individual traders based on VaR, many traders gamed the metric by taking positions that offered greater returns while appearing, according to VaR, to carry little risk.

These problems collectively limited the usefulness of the metric, and investment banks consequently did not have an accurate understanding of their actual risk once the crisis hit. With the notable exception of a few firms, most banks continued to trust the simplifying metric and put themselves at enormous financial risk as the crisis worsened. This lack of understanding helped ensure that the banks did not make the changes necessary to mitigate their risk.

Presidential Election Polling

The key message from the 2012 presidential election was that polls matter. Statistician Nate Silver correctly predicted the results in all 50 states by using a weighted average of the polls in each state. This followed his success in 2008, when he correctly predicted 49 of 50 states. One newspaper highlighted his 2012 success with a headline that read “Triumph of the Nerds: Nate Silver Wins in 50 States.”

In developing the poll average, Silver weighted each poll according to the soundness of its statistical methodology and its historical success in predicting election outcomes. The resulting “poll of polls” was then used as the basis for a simplified estimate of the likely winner in each state—and in the overall election.
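As a rough illustration of the aggregation idea (this is not Silver’s actual model, which is far more elaborate), a weighted poll average can be computed as follows; the poll results and weights are made-up numbers.

    # Hypothetical state polls: (candidate A's share in %, weight).
    # The weights stand in for methodological quality and historical
    # accuracy; all figures are illustrative only.
    polls = [
        (48.0, 0.90),  # high-quality live-interview poll
        (51.0, 0.60),  # online panel with a weaker track record
        (49.5, 0.75),  # established university poll
    ]

    total_weight = sum(weight for _, weight in polls)
    poll_of_polls = sum(share * weight for share, weight in polls) / total_weight
    print(f"Weighted 'poll of polls' estimate: {poll_of_polls:.1f}%")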

As we know, this “poll of polls” approach did not work in 2016.

Most of the well-known forecasters far underestimated the probability of a Republican victory in the presidential campaign—including Nate Silver (29%), The New York Times (15%), and the Princeton Election Consortium (1%).[iii] Silver’s more reasonable estimate was influenced by the relatively high percentage of undecided and third-party voters in the polls, as well as by the potential for polling errors to be correlated across the states.[iv] Relative to most other forecasters, he did incorporate qualitative considerations and did not rely solely on polling data.

As reported after the election, a percentage of respondents did not reveal their support for the Republican nominee when asked by pollsters. These so-called shy voters may not have responded at all or may simply have lied about their support. This systematic bias affected the aggregate poll estimates and the ultimate prediction accuracy—particularly in the Midwestern states, where the election was close and the polling errors were correlated.

The forecasters also failed to appreciate that the 2016 election had qualitative attributes that made it unlikely to follow the predictable patterns of past elections. Both candidates had unique characteristics that made any comparison with past election polling much more susceptible to error. The most conceptually accurate forecaster may have been Nassim Taleb, who said, “when the variance of the probability is [very] high[,] it converges to 50%.”[v]

Lessons Learned

These catastrophic failures in prediction, and ultimately in operational decision-making, provide numerous lessons for actuaries:

  • Accurate data is the foundation of a meaningful analysis. By definition, statistical work that analyzes flawed data will also be flawed and much more likely to lead to a poor decision than an analysis based on accurate data. As highlighted in the Vietnam case and the presidential election, the collected data may be inaccurate because the people who provided data or responded to the polls had an incentive to be untruthful.
  • Historical data may not be a good indicator of the future. Beyond the accuracy of the data, an analysis based on historical data implicitly assumes that the past will automatically be a good predictor of the future. As the Value at Risk metric showed, this is not necessarily true. The volatility of stocks over the previous five years was not a good indicator of how a portfolio would behave during a financial crisis. This historical fallacy was also a factor in the 2016 presidential election: because the candidates and the political environment were much different from those of past elections, historical data was a far less reliable basis for estimating the likely winner than it had been in prior contests.
  • Metrics that simplify complex systems should be developed with caution. Everyone likes a simple metric to guide decision-making. It is comforting to know that a decision can be supported by data and by a complex model developed by smart people. The problem is that this approach can foster laziness, as decision-makers fail to appreciate the assumptions that support a model or to consider other qualitative factors in the decision. In all three case studies, the analysts reduced very complex systems to simplifying metrics:
    • The success of the war was boiled down to a “kill ratio” and other highly subjective metrics in the MPS;
    • The risk of loss in an investment portfolio was simplified to a single metric; and
    • The results of an election with significant uncertainty were boiled down to a weighted average of polls.

 

In each of these cases, the behavior of a complex system—the prosecution of a war, the movement of financial markets, and the outcome of a presidential race—proved far too complex to be captured in a single metric.

  • Holistic decision-making is important. Important decisions need to go beyond simple modeling—particularly when the decision-maker must consider complex systems that involve multiple variables and where decisions cannot easily be refined based on updated information. Intuition and a careful weighing of the upsides and downsides of any decision are also needed. In all of the above cases, the key decision-makers could have made allowances for potential errors in the modeling and the data, but they instead focused on the output rather than considering other factors that would have counseled more caution.
  • Expensive data infrastructure investments can lead to too much reliance on data. Organizations often devote substantial resources to data collection, analysis, and infrastructure. These investments can then lead leaders to over-rely on this information in their decision-making, even when other approaches should also be used.
  • Beware of models that assume independence among variables. In the 2016 election, many forecasters implicitly assumed independence among the statewide polls—a critical oversight when polling errors can be correlated across the most competitive states, as the sketch following this list illustrates.
  • Objective truth-tellers are essential to any analysis. People inherently want to hear a favorable story. Military leaders want to believe that a war is going well; CEOs want to believe the company is financially sound; politicians want to believe they will win the election. While effective advisers will appreciate this desire, they will also look to develop an accurate story rather than paint the favorable story that most appeals to a leader.
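To illustrate the independence point above, here is a minimal simulation sketch comparing the chance of a multi-state polling upset when state-level errors are independent versus when they move together. The margins, error size, and correlation are made-up numbers, and this is not any forecaster’s actual model.

    import numpy as np

    rng = np.random.default_rng(1)
    poll_margins = np.array([1.0, 1.5, 2.0])  # leader's polled edge in three swing states, in points
    error_sd = 3.0                            # standard deviation of each state's polling error
    n_sims = 100_000

    def upset_probability(correlation):
        # Probability that the trailing candidate overcomes the polled
        # margin in all three states at once.
        cov = error_sd**2 * ((1 - correlation) * np.eye(3) + correlation * np.ones((3, 3)))
        errors = rng.multivariate_normal(np.zeros(3), cov, size=n_sims)
        return ((poll_margins + errors) < 0).all(axis=1).mean()

    print(f"Assuming independent errors: {upset_probability(0.0):.1%}")
    print(f"Assuming correlated errors:  {upset_probability(0.8):.1%}")

Even in this toy example, letting the errors move together makes a three-state upset several times more likely than the independence assumption would suggest.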

Making important decisions is difficult. Before the modern advances in data collection and analysis, people made decisions based on qualitative factors, on-the-ground assessments, and gut feel. We can now make decisions with vastly more information … but we need to approach these decisions with caution. The data could be inaccurate; historical results may not be a good indicator of the future; and models using simplifying assumptions could be unreliable. As business leaders, we need to understand the shortcomings of quantitative analysis and incorporate qualitative factors into our decision-making process.

In short, data is important—but we need to understand its limitations and look for other factors that could improve a decision.

 

KURT J. WROBEL is chief financial officer and chief actuary at Geisinger Health Plan.

 

[i] Douglas Kinnard, The War Managers, Trustees of the University of Vermont, 1977.

[ii] Ibid.

[iii] Josh Katz, “Who Will Be the Next President?” The New York Times, November 2016.

[iv] Nate Silver, “Why FiveThirtyEight Gave Trump a Better Chance Than Almost Anyone Else,” FiveThirtyEight.com, Nov. 11, 2016.

[v] Via Twitter: https://twitter.com/nntaleb/status/762033852982460421.
