Climate | September/October 2017

An Imperfect Storm

By Jeff Reeves

It’s a common trope in old cartoons and TV sitcoms: a hapless meteorologist making wild guesses, much to the consternation of picnickers and sunbathers everywhere.

But for all the frustration some folks show when forecasts go awry, Americans remain quite interested in—perhaps even addicted to—weather forecasts.

In the first survey of its kind back in 2009, the National Center for Atmospheric Research estimated U.S. adults checked the weather a staggering 300 billion times a year, more than three times daily on average.[1]

A big reason we pay so much attention to weather forecasts these days is better communications technology. After all, instead of waiting for a newspaper or local TV forecast as in decades past, digital distribution via the internet now provides weather information on demand at any time and in any place—indeed, even from the computer in your pocket.

But that same digital infrastructure also has played a big role in increasing the accuracy of weather forecasting itself.

A century ago, meteorologists had to rely on weather balloons for atmospheric data and telegraph reports for regional conditions on the ground. Now, they have instant access to precise data from almost anywhere in the world thanks to billion-dollar weather satellites and an interconnected communications grid.

These improvements in technology come with their own unique set of challenges, however. As any actuary knows, having more data doesn’t guarantee better predictions. Sometimes, it’s just noise that must be separated from the true signal.

So how has weather forecasting evolved in the last few decades as the technology has improved and the data sets have only gotten deeper?

And what roles, direct and indirect, do actuaries themselves play in the future of forecasting and climate science?

Weather Models Take Center Stage

There have been a host of changes to the science of meteorology over the years. But the most important hasn’t come from a difference in our basic understanding of the oceans and clouds.

Rather, it’s simply an improvement in information access, distribution, and analysis.

In the profession of meteorology, this is known as weather modeling—shorthand for the computer simulation of the atmosphere, seeded with observations of current conditions, that helps forecasters predict how the weather will evolve.

“Weather forecasting has made enormous strides in the last few decades,” said Jason Samenow, chief meteorologist and weather editor for The Washington Post. “A three-day forecast today is as accurate as a one-day forecast in the 1980s. Computer models have become vastly more powerful and sophisticated, and today’s weather forecasters have an amazing arsenal of tools for making predictions.”

To be clear, that’s not because the meteorologists of 30 years ago were bad at their jobs, Samenow points out. It’s simply that contemporary forecasters have better access to information and technology.

The technology is perhaps the most important component. Faster computers allow models to be run more often and to ingest more data than ever before, and thus offer greater precision. Meteorologists in the past may have recognized the same patterns and correlations, but only now is it possible to process that information quickly enough to better predict short-term weather trends.

U.S. Faces Technological Deficit

But while better technology and information have been boons to meteorologists over the past few decades, it’s hardly smooth sailing for the profession in 2017.

Experts in the profession agree that, unfortunately, U.S. weather modeling is well behind that of Europe or Japan in its accuracy—because our technology is inferior.

For instance, in 2012 the American Global Forecast System (GFS) predicted Hurricane Sandy would dissipate in the Atlantic with minimal disruption to the United States, while the European Centre for Medium-Range Weather Forecasts (ECMWF) warned that the storm would turn sharply to the west and strike populated areas of the East Coast.

Tragically, the latter scenario proved correct as 233 people died in the wake of Sandy; damage has been estimated at around $75 billion.[2]

According to many experts, a big reason for the divergence was an inferior approach to “cumulus parameterization”[3]—a technical term for how a model estimates the behavior of clouds, as water evaporates and condenses, at scales too small for its grid to resolve.

Once again, it’s not a basic understanding of weather that failed U.S. meteorologists. It was the fact that the models and the data weren’t as precise here as they were overseas.

The irony is that weather modeling and forecasting technology continues to improve every year, and the United States has plenty of opportunities to improve with it. However, a lack of investment in technology has become the new normal in American meteorology even as forecasters around the globe are moving forward.

That means that not only are overseas forecasters’ current analyses based on higher-resolution data from more powerful weather satellites, but their future forecasts will pull even further ahead of ours thanks to more powerful computers that process ever bigger batches of data.

To use a corporate IT analogy, it’s akin to U.S. forecasters stuck on aging laptops with Windows 95 while European forecasters just purchased a new Surface tablet with all the bells and whistles.

Some scientists are downright ashamed of the state of play between American meteorology and the forecasting models employed elsewhere in the world. Clifford Mass, an atmospheric sciences professor at the University of Washington, wrote a blistering post on his weather and climate blog just months before Hurricane Sandy that called the lagging quality of forecasting a “national embarrassment” that “has resulted in large unnecessary costs for the U.S. economy and needless endangerment of our citizens.”

Mass contended that America was behind not just the European center but also forecasting agencies in the U.K. and Japan—and that was five years ago. The intervening years have only made the situation more pronounced, as the current annual budget for the National Oceanic and Atmospheric Administration (NOAA) is $5.6 billion,[4] virtually unchanged since 2012.

Even worse than the failure to invest in previous years is the risk of future cuts. Thanks in large part to an unsympathetic view toward climate science, President Donald Trump has proposed a 16 percent reduction in funding for NOAA and a 32 percent cut to the weather and climate agency’s research arm.

Much to the chagrin of meteorologists, the White House even specifically called out a $5 million budget cut designed “to slow the transition of advanced modeling research into operations for improved warnings and forecasts.”[5]

“Such funding cuts would be especially unfortunate at a time when the nation is moving to regain its position as the world leader in weather forecasting,” said Antonio Busalacchi, president of the University Corporation for Atmospheric Research, in a statement[6] soon after the proposed budget was unveiled. He went on to assert that these reductions “would have serious repercussions for the U.S. economy and national security, and for the ability to protect life and property.”

Our Weather Changes as Our Climate Changes

The inferior state of weather technology in the United States and the lack of future funding will undoubtedly have serious consequences in the near term—both for the accuracy of your local forecast and for the organizations where actuaries depend on weather predictions to do their jobs and properly assess risk.

But increasingly, the long-term consequences of weather and climate forecasts are becoming the defining issue of the 21st century.

The science behind climate change is well-established and widely known, but it is worth briefly revisiting as context here:

  • Global temperatures continue to soar, with 2016 going down as the hottest year on record after breaking the prior record set in 2015—which, in turn, broke 2014’s record.[7] Three consecutive years of record-breaking temperatures had never before occurred in 137 years of meteorological data. To top it off, the 12 warmest years on record have all occurred since 1998.
  • According to NASA satellite observations, Arctic sea ice is declining at a rate of 13.3 percent per decade,[8] down from almost 8 million square kilometers in 1980 to under 5 million square kilometers at present. (A rough consistency check of these figures follows this list.)
  • As a result, sea-level rise has accelerated from less than half an inch per decade before 1990 to a rate of 1.22 inches per decade over the period from 1993 to 2012.[9]
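
For readers who want to see the arithmetic, here is a quick back-of-the-envelope check, using only the figures quoted above, that the sea ice numbers hang together. It assumes, purely for illustration, that the 13.3 percent decline compounds decade over decade.

```python
# Rough consistency check on the Arctic sea ice figures quoted above.
# Assumption (for illustration only): the 13.3% per-decade decline is
# applied as a compounding rate to the 1980 starting extent.

START_EXTENT_MKM2 = 8.0       # ~8 million square kilometers in 1980
DECLINE_PER_DECADE = 0.133    # 13.3% per decade (the NASA figure cited above)
DECADES = (2017 - 1980) / 10  # ~3.7 decades elapsed

projected = START_EXTENT_MKM2 * (1 - DECLINE_PER_DECADE) ** DECADES
print(f"Projected extent after {DECADES:.1f} decades: {projected:.1f} million km^2")
```

The result lands at roughly 4.7 million square kilometers, consistent with the figure of under 5 million today.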

These are scientific facts about our planet’s climate right now and should not be in dispute.

However, there’s an important difference between facts about climate and facts about the weather. And that difference helps explain how the relative certainty of climate trends can coexist with the uncertainty of day-to-day forecasts.

“The difference between weather and climate is a measure of time,” said Samenow of the Washington Post. “Weather is what conditions of the atmosphere are over a short period of time, and climate is how the atmosphere ‘behaves’ over relatively long periods of time.”

In other words, it is perfectly plausible for global warming to exist over the past few decades even if a storm system brings unseasonably cold temperatures for a week or two. As the Earth’s climate changes, it’s not as simple as adding a degree or two to daily temperature forecasts across the board.

In truth, meteorologists in 2017 have the challenging task of updating existing modeling to reflect a “new normal” in the Earth’s atmosphere where volatile conditions are increasingly common.

This difficulty is perhaps most visible through the recent increase in extreme weather events and the importance of forecasting these events accurately to protect public safety and business interests. Warmer temperatures worldwide are altering ocean currents, causing more evaporation and resulting precipitation, and altogether redefining how weather systems behave.

“A changing climate leads to changes in the frequency, intensity, spatial extent, duration, and timing of extreme weather and climate events, and can result in unprecedented extreme weather and climate events,” wrote the Intergovernmental Panel on Climate Change,[10] an independent group of scientists convened by the United Nations.

Traditionally, these kinds of extreme events have been things like hurricanes. However, the IPCC also noted that a changing climate can increase the risk that more common weather events work together to create serious problems, too.

“Extreme impacts can also result from nonextreme events where exposure and vulnerability are high,” the IPCC wrote, giving the example of “drought, coupled with extreme heat and low humidity,” which could in turn “increase the risk of wildfire.”

Global warming continues to disrupt past assumptions and the existing weather models that were once thought reliable. And that means that previous notions of weather—including everything from ocean currents to average temperatures to the number of tornadoes in a given ZIP code—are not fixed.

Combined with the challenge of underfunded forecasting technology in the United States, even subtle and slow climate changes over the next decade could have serious implications for the accuracy of local forecasts—particularly those looking several days or weeks ahead.

Can Forecasting Ever Be Perfect?

Of course, some would contend that a desire to know the weather next month with 100 percent accuracy is a rather persnickety goal. After all, many believe the current state of weather forecasting is more than adequate.

Data wonk Nate Silver laid out the improving track record of meteorologists in a bluntly titled article a few years back, “The Weatherman Is Not a Moron.”[11] Silver paints the picture with loads of figures, including the fact that high-temperature forecasts made three days in advance by the National Weather Service missed their targets by about 6 degrees Fahrenheit in 1972, but by an average of only 3 degrees in 2012. Other items of note include hurricane mapping that now predicts landfall within 100 miles on average, compared with a miss of 350 miles just two decades ago.

When you consider the general complexity of weather and then add in variables such as time and geography that can affect even local forecasts, this kind of precision is about as close to predicting the future as you’ll find in any profession.

This is not all forecasting in the abstract, either. The improvements over the last few decades have undeniably saved lives.

According to NOAA, U.S. deaths attributed to lightning strikes haven’t topped 50 in a single year since 2002. In the 1940s, by contrast, an average of 329 Americans died from lightning annually. A similar trend can be seen in tornado-related deaths, which have declined from an annual average of 179 from 1940 to 1949 to a mere 18 deaths in calendar 2016—a 30-year low.[12]

And keep in mind, those are raw numbers. Even as weather-related deaths have declined, the U.S. population has soared from roughly 130 million in 1940 to more than 320 million today.
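
To make that point concrete, here is a small sketch that converts the raw counts above into per-million-person rates, using only the approximate figures quoted in the text.

```python
# Convert the raw fatality counts above into per-million-person rates,
# using only the approximate figures quoted in the text.

POP_1940S = 130e6  # ~130 million U.S. residents circa 1940
POP_TODAY = 320e6  # ~320 million today

lightning_1940s = 329 / (POP_1940S / 1e6)  # avg. annual lightning deaths, 1940s
lightning_today = 50 / (POP_TODAY / 1e6)   # upper bound: under 50 per year since 2002

tornado_1940s = 179 / (POP_1940S / 1e6)    # avg. annual tornado deaths, 1940-1949
tornado_2016 = 18 / (POP_TODAY / 1e6)      # calendar 2016

print(f"Lightning deaths per million: {lightning_1940s:.2f} then vs. under {lightning_today:.2f} now")
print(f"Tornado deaths per million:   {tornado_1940s:.2f} then vs. {tornado_2016:.2f} in 2016")
# On a per-capita basis, the decline is even steeper than the raw counts suggest.
```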

More accurate and timely forecasts, along with effective distribution of that information through improved communications channels, have assuredly driven the steady decline in weather-related fatalities over the long term. But is it realistic to ever expect perfect forecasting to protect life and property?

Believe it or not, the answer is a firm “maybe.”

And to meteorologist Dan Satterfield, that’s a crucial point to acknowledge.

“Since we do not know the state of the atmosphere at every location in the world, we have to work to give the computer a good first guess,” he said.

But what if we didn’t have to guess?

Satterfield, who has spent 33 years in the profession and is currently chief meteorologist for the CBS affiliate WBOC-TV in Salisbury, Md., said that weather modeling now depends on knowing the weather in locations we deem important instead of knowing the weather anywhere and everywhere. But the climate is complex, and often our best guesses leave out some information.
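
Satterfield’s point about the “first guess” is easy to demonstrate with a toy example. The sketch below is an illustration rather than an operational model: it steps the classic Lorenz (1963) system forward from two nearly identical starting states and shows how quickly they diverge, which is why the precision of the initial conditions matters so much.

```python
# Toy illustration of why the "first guess" matters: the Lorenz (1963)
# system, a simplified model of atmospheric convection, integrated from
# two almost-identical initial states. Tiny input errors grow rapidly.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one small Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

truth = (1.0, 1.0, 1.0)         # the "true" state of the atmosphere
guess = (1.0 + 1e-6, 1.0, 1.0)  # a first guess off by one part per million

for step in range(1, 3001):
    truth, guess = lorenz_step(truth), lorenz_step(guess)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   error in x: {abs(truth[0] - guess[0]):.4f}")

# The microscopic initial difference grows by orders of magnitude over time --
# the hallmark of a chaotic system like the atmosphere, and the reason better
# initial data (and finer grids) translate directly into better forecasts.
```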

“When I was in college in 1979, I asked a professor if we could ever make accurate five-day weather forecasts. He said that would take computers running hundreds of times faster than was then available,” he said.

Some people may have thought that level of precision and volume of data was simply impossible. But the professor’s answer shows the barrier to better forecasts was technology and precision in data—not the limits of our understanding of weather patterns and climate.

But therein lies the difference between U.S. forecasts and the objectively better forecasts produced in Europe.

“European models have finer grid scales, and faster computers allow for more accurate equations,” Satterfield said.

Grid scale refers to the spacing between the points at which a model represents the atmosphere. Right now, one- and two-week forecasts in the United States run on a 70-kilometer grid just a few kilometers vertically into the atmosphere,[13] while Europe runs a 36-kilometer grid.[14]

In other words, across a 500-by-500-kilometer box—roughly the area of the United Kingdom—the European model would use roughly 196 data points at the surface to create its forecast, while the U.S. model would use only about 49.
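
The arithmetic behind those point counts is simple enough to show directly. The snippet below counts grid points along each side of a 500-kilometer box for the two spacings quoted above; these are illustrative round numbers, not the models’ actual global grid layouts.

```python
# Back-of-the-envelope count of surface grid points in a 500 km x 500 km box
# for the grid spacings quoted above. Illustrative only; real forecast models
# use global grids that are considerably more complicated.

BOX_KM = 500

def surface_points(grid_spacing_km, box_km=BOX_KM):
    per_side = round(box_km / grid_spacing_km)  # grid points along one side
    return per_side * per_side

print("European model, 36 km spacing:", surface_points(36), "points")  # ~196
print("U.S. model, 70 km spacing:    ", surface_points(70), "points")  # ~49
```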

Each year, Satterfield said, Europe’s grid scale only gets smaller. As a result, “It has a better starting point” in creating its short- and long-term forecasts.

So even if something close to perfection is attainable in theory as models and computing power improve, everyone seems to agree that the near-perfect system will be built somewhere else.

“NOAA’s global model needs a complete rebuild, as it is almost always less accurate than the European long-range global model,” Satterfield said. “While Europe is about to go to an even better model, we get farther behind.”

Clearly, a higher level of accuracy is very attainable for the United States and its forecasting community.

It is an open question, however, how accurate American citizens and businesses want their forecasts to be—and how willing they are to pay for the infrastructure to support such an improved system.

How Actuaries Are Playing a Role

Meteorology and forecasting are commonly the stomping grounds of earth scientists. Think of a weatherman gesturing on the local news or geophysicists debating the risks posed by climate change.

But increasingly, weather modeling and forecasting is touching the lives of many actuaries.

Take Oliver D. Bettis, a London-based actuary who is chairman of the Resource and Environment Board for the Institute and Faculty of Actuaries (IFoA) in the United Kingdom. Bettis writes regularly about the risks of climate change as it relates to businesses, citizens, and institutions.

In 2009, he co-presented a session on the “Risk of Ruin From Climate Change”[15] at the United Nations Climate Change Congress in Copenhagen. His lecture has been updated and presented again many times since, in an effort to marry climate change with risk management.

Bettis uses “ruin” as shorthand for an extreme bad-case outcome, or a tail risk. While that’s clearly not a high-probability event, it is still a possible event—and thus one worthy of analysis.

“As part of the regulatory regime for protection of policyholders, insurance companies must estimate their risk of ruin, ruin being defined as insolvency of the company,” Bettis wrote in a synopsis of his presentation.

“A common standard is to limit ruin probability to less than 1 in 200 over one year. An analogy with this approach could be used to investigate climate change, i.e., an attempt to estimate the tail risk, with a discussion about what is an acceptable level of tail risk.”
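
To illustrate what a 1-in-200 standard means in practice, an actuary might simulate a distribution of annual losses and check how often they exceed the capital available to absorb them. The sketch below is generic and uses made-up numbers; it is not Bettis’ actual analysis.

```python
# Generic illustration of a "1-in-200 over one year" ruin test, in the spirit
# of the regulatory standard described above. The lognormal loss distribution
# and the capital figure are hypothetical numbers chosen for demonstration.

import random

random.seed(42)
N_SIMULATIONS = 100_000
CAPITAL = 400.0  # hypothetical capital available to absorb one year's losses

# Hypothetical annual aggregate losses: lognormal with a median around 100.
losses = [random.lognormvariate(mu=4.6, sigma=0.5) for _ in range(N_SIMULATIONS)]

ruin_probability = sum(loss > CAPITAL for loss in losses) / N_SIMULATIONS
print(f"Estimated one-year probability of ruin: {ruin_probability:.4f}")
print(f"1-in-200 threshold:                     {1 / 200:.4f}")
print("Meets the standard" if ruin_probability < 1 / 200 else "Fails the standard")
```

Bettis’ analogy asks a similar question of the climate system: what level of tail risk is society willing to accept?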

Bettis formerly served as chairperson of the IFoA’s Resource and Environment Working Group, which is charged with helping the profession and the public make sense of the current issues—both from the risk management perspective of insurers and in helping policymakers understand what’s at stake and what the chances are of a truly catastrophic climate event in the future.

An actuary doesn’t have to have Bettis’ large stage, however, to participate in public debates about climate risks for businesses and governments.

At minimum, actuaries working in general insurance should have the same awareness that Bettis did when looking at weather data and how it affects property and casualty policies.

And increasingly there is a role for actuaries in green investments and energy consulting, where professionals are relied on to make very long-term predictions about electricity demand trends and generation costs to prove that large-scale, long-term investments are practical and will pay off.

And, of course, the American Academy of Actuaries and its members, along with the Casualty Actuarial Society, the Canadian Institute of Actuaries, and the Society of Actuaries, are jointly developing the Actuaries Climate Index (ACI) and the Actuaries Climate Risk Index (see “A Quick Look at the Actuaries Climate Index” below). These tools and data focus on measuring the frequency and intensity of extremes in key climate indicators based on controlled observational weather data across the United States and Canada.

In fact, the ACI reached a new composite high in June—hinting at the urgency posed by climate and weather issues and the importance of data-driven analysis in crafting a solution.

There are many challenges ahead for the United States as weather modeling becomes more challenging and climate change continues to create uncertainty. But there are also important roles for actuaries in crafting future solutions to the challenges of today.

JEFF REEVES is a financial journalist with almost two decades of newsroom and markets experience. His commentary has appeared in USA Today, U.S. News & World Report, CNBC, and the Fox Business Network.

A Quick Look at the Actuaries Climate Index
Jim MacGinnitie is an actuary and the senior casualty fellow at the American Academy of Actuaries. With over 50 years of experience, he’s a respected voice on many issues within the profession, including the risks posed by climate change.

Most recently, MacGinnitie was instrumental in launching the Actuaries Climate Index (ACI)—an objective indicator of the frequency of extreme weather and the extent of sea level change.

Here’s a lightly edited transcript of a recent conversation with MacGinnitie about the index, what actuaries should know about climate change, and how it will affect their work in the future.

What is the Actuaries Climate Index, and what does it measure?

The Actuaries Climate Index is composed of six components, measuring high and low temperatures, drought or excessive rainfall, high winds, and sea level. The focus is on extreme measurements, as compared to a 30-year reference period from 1961 to 1990. It does this for both Canada and the continental United States, broken into 12 regions. The index is compiled quarterly.
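
For readers curious how such a composite is put together, here is a simplified sketch: each component is standardized against the reference period, and the standardized anomalies are averaged. The actual ACI methodology, documented at ActuariesClimateIndex.org, differs in its details, and the data below are invented purely for illustration.

```python
# Simplified sketch of a composite climate index: standardize each component
# against a 1961-1990-style reference period, then average the standardized
# anomalies. The real ACI methodology differs in its details.

from statistics import mean, stdev

def standardized_anomaly(series, latest, ref_years=30):
    """Express the latest value in standard deviations from the reference mean."""
    reference = series[:ref_years]
    return (latest - mean(reference)) / stdev(reference)

def composite_index(components):
    """Average the standardized anomalies of all components for one period."""
    return mean(standardized_anomaly(series, series[-1]) for series in components)

# Invented component histories: 30 reference years plus the latest observation
# (think counts of unusually warm days and heavy-rain days in a region).
warm_extremes = [10, 12, 11, 9, 13] * 6 + [18]
rain_extremes = [4, 5, 3, 6, 5] * 6 + [8]

print("Composite index:", round(composite_index([warm_extremes, rain_extremes]), 2))
```

A positive composite means extremes are running above the reference period’s norms; the real index does this quarterly for each of the 12 regions.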

The ACI launched at the end of 2016. Were there discussions or actions before then among actuaries regarding the topic of climate change?

Yes. Climate has been on the program at several actuarial meetings over the past decade or more. The impact on hurricane frequency and severity has been one focus of that discussion. Crop yields has been another. More sophisticated models of these and other weather-related phenomena have been developed. There have also been discussions of the relationship of climate and weather, which are not the same thing. That led to a discussion of developing an index that focuses on climate extremes, which are what generate insured losses. And from that came the very substantial efforts by the several actuarial associations to create the Actuaries Climate Index and the Actuaries Climate Risk Index [(ACRI), a related index that will correlate extreme weather events with economic losses].

Do you think actuaries bring any unique skills to the climate change discussion that the typical citizen or policymaker may lack?

Yes. The phenomena in the ACI—temperature, precipitation, wind, and sea level—can be viewed as a distribution of values, from low to high, with most values concentrated near the middle of the distribution. Most climate discussion focuses on the average. But it is extremes that generate losses. Actuaries are skilled at understanding the probabilistic nature of these distributions and how to interpret them.

How has the Actuaries Climate Index been received, both inside and outside the profession?

Very well. There have been several media articles, nearly 20,000 visits to the website ActuariesClimateIndex.org, and almost 1,500 downloads of data. One of the nice attributes of the ACI is that all the underlying data is available and can be accessed by interested parties for their own review and analysis. The ACI has also been featured at several seminars and webinars within the profession and presented to regulators and to scientists who are interested in climate research.

The most recent reading in June set a new high for the composite index. Was that a surprise, or not really, because we are in an established trend of warmer weather?

While we all read or hear the news reports of high temperatures in a specific location or small area, the ACI looks at a whole quarter of data from entire regions. It helps us to focus on the forest and avoid anecdotal impressions of individual trees. I expect the index to move up and down, just as the weather does. A major reason for publishing the five-year moving average is to smooth out those fluctuations.

Ultimately, what do you hope this index will help achieve in both the short and long term?

I hope that the ACI will serve as a basis for reasoned, factual discussions of climate change issues. Together with the forthcoming ACRI, it should help us to understand the impact of climate on economic losses, both insured and uninsured. The indices should also provide good input to strategic decisions of insurers, other risk takers, and allied institutions. And decisions about land use and facilities locations, for example, can be influenced by indications of rising sea levels or increased wildfire frequencies, both of which are climate-related.

 

Endnotes

[1]          “300 Billion Served – Sources, Perceptions, Uses, and Values of Weather Forecasts”; Bulletin of the American Meteorological Society; June 2009.

[2]          “Are Europeans Better Than Americans at Forecasting Storms?”; Scientific American; October 1, 2015.

[3]          “Accuracy of early GFS and ECMWF Sandy (2012) track forecasts: Evidence for a dependence on cumulus parameterization”; Geophysical Research Letters; May 14, 2014.

[4]          “Trump’s Proposed NOAA Budget Cuts Rattle Scientists”; USA Today; March 6, 2017.

[5]          “White House budget aims to ‘slow’ gains in weather prediction, shocking forecasters”; Washington Post; May 24, 2017.

[6]          UCAR statement on President Trump’s Budget Proposal; The University Corporation for Atmospheric Research; May 22, 2017.

[7]          “We’re now breaking global temperature records once every three years”; The Guardian; January 23, 2017.

[8]          “Arctic Sea Ice”; climate.nasa.gov/.

[9]          “Sea level rise accelerating nearly 3x faster than during 20th century”; USA Today; May 23, 2017.

[10]        Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation; Special Report of the Intergovernmental Panel on Climate Change; 2012.

[11]        “The Weatherman Is Not a Moron”; New York Times; Sept. 9, 2012.

[12]        “Weather Fatalities”; National Oceanic and Atmospheric Administration. http://www.nws.noaa.gov/om/hazstats/resources/weather_fatalities.pdf.

[13]        “Global Forecast System”; NOAA’s National Centers for Environmental Information website. Accessed at www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs on Aug. 15, 2017.

[14]        “New Forecast Model Cycle Brings Highest-Ever Resolution”; European Centre for Medium-Range Weather Forecasts website. Accessed at www.ecmwf.int/en/about/media-centre/news/2016/new-forecast-model-cycle-brings-highest-ever-resolution on Aug. 15, 2017.

[15]        “Risk of Ruin Approach to Climate Change”; Institute and Faculty of Actuaries; November 2014.
