May/June 2017

Risk Is Not a Four-Letter Word: Shifting societal attitudes have changed how we contemplate this integral concept

By Hilary Salt

Discussions about risk are all around us. As measurers and managers of risk, actuaries might see this as good news—more work for us, more opportunities in areas outside our traditional fields, and greater social recognition. But I believe there are some clear dangers for our profession and for society more widely in the new attitude toward risk—dangers that to date we as a profession have not only failed to recognize but have to some degree encouraged.

I’m going to explore these dangers from my own narrow experience in the field of advising U.K. defined benefit pension plans. But as recent elections have perhaps reminded us, there are some interesting similarities in experience on both sides of the Atlantic. So I hope some of the points raised here will resonate with actuaries across the globe.

Risk

What the word “risk” means to society broadly has changed significantly over the past decades. At one time, the word was a relatively neutral one—there were both negative and positive uses of the concept of risk-taking.

Today, society uses the term “risk” almost exclusively to mean downside risk. An interesting illustration of this phenomenon is that the latest ISO 9001 quality standard requires organizations to provide evidence of their management of both risks and opportunities—the ultimate bureaucratic pronouncement that “thou shalt not see risk as a good thing”—so you need a different name for “good” risk.

This is not just a change in the way we use words but a symptom of a much wider shift in our attitude about the world and our ability to change it for the better. And the conclusion of this type of thinking is that if risk is always bad, the rational response must be to record it, find ways to reduce it, and attempt to eliminate it altogether.

While it is impossible to pinpoint the moment when risk lost its positive connotation, within the corporate world, researcher Benjamin Hunt identifies a specific change in the mid-’90s, when:

Some firms appointed Chief Risk Officers (CROs) for the first time, to sit on boards and act as permanent risk management consultants for the chief executive. … Firms began to set in motion a huge range of initiatives under the banner of risk management. By the end of the decade, corporations had institutionalised elaborate frameworks for managing risk, under the heading of “enterprise-wide risk management.”[1]

Many organizations now seem to spend more time managing their risk registers than managing their business. And these habits are infectious. The increasing incidence of the chief risk officer and enterprise risk management clearly indicates how organizations seek to control risk at every level in their business. The manager of one business unit might well see the need to take risks to achieve their immediate aims. But the ability to take risks anywhere becomes tightly constrained from the top of the organization. Often this hesitation is justified on the basis that businesses today are more complex, and the threats they face—cyber risks, for example—are more extensive and difficult to predict.

Does this justification really stand up? Try a quick mental exercise: write a risk register for one of the 17th-century trading voyages that sought cover from Edward Lloyd in the coffeehouse where marine insurance first began. Were those risks really simpler and more knowable than those of the new product launch your CRO is blocking? For a more contemporary contrast, look at John F. Kennedy’s speech to Congress in 1961 seeking approval for spending on the space race with the then-Soviet Union:

For while we cannot guarantee that we shall one day be first, we can guarantee that any failure to make this effort will make us last. We take an additional risk by making it in full view of the world, but as shown by the feat of astronaut Shepard, this very risk enhances our stature when we are successful.[2]

Making a merit of taking a big risk in public seems unlikely now when even big expensive political decisions such as foreign interventions are justified on the basis of risk reduction.

Measuring Risk: A Tale of Two Worlds

Accompanying this big change in the way we view risk has been an altered approach to assessing risk—both in the actuarial sphere and in wider society.

Let’s start with the actuarial world. When I first studied actuarial science in the 1980s, textbooks and actuarial practice were based around the deterministic application of probabilities and economics. So we might consider the costs of a health insurance plan based on one set of probabilities and a fixed discount rate, then test sensitivities using a different discount rate. Then we might re-run using different sickness probabilities. Our ability to do much more was of course constrained in those days before computers—we had only just gotten calculators! But despite what might now look like relatively blunt tools, we were able to draw on our ability to analyze experience and then use the techniques of applying probabilities, projecting cash flows, and discounting at fixed rates to give actuarial advice.
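In today’s terms, that deterministic process is a few lines of arithmetic. Here is a minimal sketch in Python—the probabilities, benefit amounts, and discount rates are purely illustrative, not figures from any real plan:

```python
# Deterministic valuation: apply fixed probabilities to get expected
# benefit cash flows, discount at one fixed rate, then re-run with the
# rate nudged as a sensitivity test. All figures are illustrative.

def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows at a single fixed rate."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Expected claims: a fixed sickness probability applied to a fixed benefit.
n_lives, sickness_prob, annual_benefit = 1_000, 0.04, 5_000.0
expected_claims = [n_lives * sickness_prob * annual_benefit] * 20  # 20 years

base = present_value(expected_claims, 0.06)
lower_rate = present_value(expected_claims, 0.05)  # re-run at a lower rate
print(f"PV at 6%: {base:,.0f}; PV at 5%: {lower_rate:,.0f}")
```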

The availability of cheap computing power and advancement of actuarial techniques changed all this. In the 1990s, the ability to analyze outcomes with multiple variations in input factors allowed us to significantly improve our understanding of risk profiles. The development of stochastic models, in particular in the study of asset returns and the introduction of Value at Risk (VaR) measures, meant we were able to build models to test our advice in different scenarios.
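The stochastic approach replaces those one-at-a-time sensitivity tests with thousands of simulated futures. The sketch below shows the general shape of such a model under assumed return parameters; VaR conventions differ, and here it is simply read as the shortfall of the 5th-percentile outcome against the median:

```python
# A hedged sketch of a stochastic projection: simulate many possible
# asset-return paths and read a Value at Risk figure off the resulting
# distribution. Return parameters are assumptions for illustration.
import random

def simulate_fund(initial, mean, vol, years, n_sims, seed=42):
    """Monte Carlo projection of a fund under random annual returns."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_sims):
        value = initial
        for _ in range(years):
            value *= 1 + rng.gauss(mean, vol)  # one simulated annual return
        outcomes.append(value)
    return sorted(outcomes)

outcomes = simulate_fund(initial=100.0, mean=0.05, vol=0.15,
                         years=10, n_sims=10_000)
median = outcomes[len(outcomes) // 2]
worst_5pct = outcomes[int(0.05 * len(outcomes))]
print(f"median: {median:.1f}; 5th percentile: {worst_5pct:.1f}; "
      f"VaR vs. median: {median - worst_5pct:.1f}")
```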

In principle, this was a huge step forward. But more complex models need clearer thinking to build and to explain. I’d suggest that as a profession, we have stumbled at times in our model building and our explanations to the public—certainly this is true in the U.K.

Meanwhile, away from our cozy hearth of actuarial science, the outside world’s approach to understanding chance was changing.

Writing in 2009, sociologist Frank Furedi identifies “a shift from probabilistic to possibilistic risk management.”[3] His analysis is based firmly in the political, cultural, and popular sphere.

Furedi’s article is worth reading in full, but let’s identify a number of key issues. The background to his argument is that contemporary society has a profound fear of the future—a future seen as not just unknown, but unknowable—and this fear results in the promotion of a precautionary principle. In this atmosphere, knowledge itself is seen as creating risk:

Leading sociologists Ulrich Beck and Anthony Giddens forcefully argue the case for the close association between the sense of risk and the increase in knowledge. ‘Many of the uncertainties which face us today have been created by the very growth of knowledge,’ wrote Giddens, and Beck has noted that the ‘sources of danger are no longer ignorance but knowledge.’

After discussing Donald Rumsfeld’s introduction in 2002 of the phrase “unknown unknowns,” Furedi argues that:

Rumsfeld’s deliberation on unknown unknowns resonates with a radically new orientation towards the perception and management of risks in Western societies. The traditional association of risk with probabilities is now contested by a growing body of opinion that believes that humanity lacks the knowledge to calculate them. Numerous critics of probabilistic thinking call for a radical break with past practices on the ground that we simply lack the information to calculate probabilities. Environmentalists have been in the forefront of constructing arguments that devalue probabilistic thinking. … The emergence of a speculative approach towards risk is paralleled by the growing influence of possibilistic thinking, which invites speculation about what can possibly go wrong. In our culture of fear, frequently what can possibly go wrong is equated with what is likely to happen.

This shift from probabilistic to possibilistic approaches has real consequences for society, as:

Probabilities can be calculated and managed, and adverse outcomes can be minimised. In contrast, worst-case thinking sensitises the imagination to just that—worst cases.

It’s easy to see a clear parallel between the developments in actuarial science (a move away from deterministic approaches and analyses of the past to building a model projecting a wide range of mathematically possible futures) and the cultural shifts discussed by Furedi. But it’s important not to be too simplistic in our analysis. Developments in actuarial techniques were not caused by cultural changes, and for the avoidance of doubt, I believe the additional tools we have are powerful ones that can significantly improve our understanding and advice.

But it is important to recognize that, at a time when society is increasingly fearful and drawn to possible outcomes and worst-case scenarios, this is precisely the flavor that actuaries have brought to their advice.

I want to turn to U.K. pension schemes to illustrate this.

U.K. Pension Schemes

First, a little background. Traditionally, the U.K.’s pension system has included a significant level of provision through defined benefit occupational pension schemes. From the 1960s and ’70s, most large employers ran their own scheme offering a fixed pension at retirement, calculated as a service-related proportion of an employee’s final salary. Once an employee has accrued benefits in the scheme, the legal framework makes it difficult to reduce them or take them away. Schemes are funded: a board of trustees manages the scheme’s assets with a view to meeting benefits as they fall due, drawing on employer and employee contributions (assessed at the time of payment as sufficient to cover the benefits promised) together with the returns from a diversified portfolio that would typically have a significant allocation to growth assets. While the legislative framework has changed over time, the broad requirement has been that the funding position of the scheme must be checked every three years, with action taken to address any funding shortfall and with future contributions (and sometimes future benefits) also reset. From the 1980s, sponsors of these schemes have been required to disclose the funding position of the scheme in their annual business accounts. This disclosure is made not on the scheme’s funding basis but on a “marked to market” basis, comparing the benefit promise discounted at a broadly risk-free (corporate bond) rate with assets at market value.

There are two main risks for an employer running this type of scheme. First, the assets may prove inadequate on the funding basis, so additional real cash contributions are required. Second, the disclosure in the sponsor’s accounts can be volatile and significant, and so have an unpredictable impact on the reported annual results. Many U.K. companies have made a transition from being labor-intensive to machinery- and technology-intensive, and a number have also significantly reduced in size. These shifts mean scheme membership can be dominated by pensioners and deferred pensioners (members who have left the employer but not yet drawn their benefits) rather than active members. The liabilities for these pensioner populations can sometimes dwarf the size of the current business.

If we situate these risks in the risk-averse climate of corporations discussed earlier and in a wider social context where people gravitate toward worst-case thinking, it’s easy to see why trustees and sponsors wanted to see more and more analysis of possible future scenarios—and why actuaries were happy to oblige in providing these.

The history of U.K. defined benefit schemes since the mid-’90s has been a sorry one. Initially, funding problems emerged as actual improvements in mortality, together with the need to fund for expected future improvements, significantly increased the cost of benefits. Turbulent market conditions at the turn of the century didn’t help—a rising stock market was followed by negative returns in 2001, 2002, and 2003, with equities falling by around 40 percent over the period.

The difficulties experienced by scheme sponsors and trustees led many schemes to close to new members. Although this change had little impact on liabilities in the short term, it focused attention on the now shorter expected time horizon of schemes and led to reconsiderations of investment strategy. The act of closing to new members led to a loss of contribution income, creating additional exposure to market value volatility that would have been avoided had the scheme remained open. This was an area where new techniques of stochastic analysis to illustrate VaR could help, as clients wanted to test scenarios that modeled both asset and liability values so they could see how different investment strategies would affect their overall funding.
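A stripped-down version of that kind of asset-liability test might look like the following sketch. Everything in it—the yield and return distributions, the duration proxies, the two candidate strategies—is an assumption chosen for illustration, not a real scheme model:

```python
# Joint asset-liability simulation: compare the one-year funding level
# (assets / liabilities) under two illustrative investment strategies.
import random

def funding_levels(equity_share, n_sims=10_000, seed=7):
    """Distribution of funding levels after one simulated year."""
    rng = random.Random(seed)
    levels = []
    for _ in range(n_sims):
        yield_shift = rng.gauss(0.0, 0.01)       # change in gilt yields
        equity_ret = rng.gauss(0.06, 0.18)       # simulated equity return
        bond_ret = 0.035 - 18 * yield_shift      # crude duration-18 bond proxy
        assets = 100 * (1 + equity_share * equity_ret
                        + (1 - equity_share) * bond_ret)
        liabilities = 100 * (1 + 0.035 - 20 * yield_shift)  # duration-20 proxy
        levels.append(assets / liabilities)
    return sorted(levels)

for share in (0.7, 0.3):
    lvls = funding_levels(share)
    print(f"equity {share:.0%}: 5th-percentile funding level "
          f"{lvls[int(0.05 * len(lvls))]:.2f}")
```

Run on these made-up parameters, the bond-heavy strategy shows a far narrower spread of funding levels—which is exactly why such charts proved so persuasive to risk-averse trustees and sponsors.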

Traditionally, pension scheme actuaries might approach a valuation with a broad model of how to set their assumptions—in particular the valuation discount rate. But they also brought significant professional judgment to the table—often the model might be tweaked to reflect the specifics of the market or the client. A model designed to run 10,000 variations on future asset and liability values, however, has no room for professional judgment. So an unconscious side effect of modeling was the erasure of actuarial judgment from projected valuation results. And in order to simplify the modeling, the value placed on liabilities for funding valuations was often based on discounting at the expected return on the valuation portfolio, expressed as a margin above risk-free rates of return (a “gilts plus” approach). Of course, for those clients most focused on the effect of funding on company accounts, future valuations were also projected on a purely bond-based discount rate.
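To make the “gilts plus” mechanics concrete, here is a hedged sketch with invented cash flows and yields. The point to notice is that the liability value is nothing more than discounted cash flows, so when the model projects funding on this basis, a gilt-heavy asset portfolio naturally moves in step with the liabilities:

```python
# "Gilts plus" liability valuation: discount projected benefit cash flows
# at the gilt yield plus a fixed margin for expected outperformance.
# Cash flows, yields, and margins here are invented for illustration.

def liability_value(cash_flows, gilt_yield, margin=0.0):
    rate = gilt_yield + margin
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

benefits = [1_000_000] * 30  # a flat 30-year stream of pension payments

funding = liability_value(benefits, gilt_yield=0.035, margin=0.02)  # gilts plus 2%
accounts = liability_value(benefits, gilt_yield=0.035)              # bond rate only
print(f"funding basis: {funding:,.0f}; accounting basis: {accounts:,.0f}")
```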

So, reacting to the need from clients to understand how they could control risk in their pension schemes, actuaries built sophisticated models. But I’d argue there are some real problems with the models we built and how we used them with clients.

First, I think we sometimes obscured rather than illuminated matters. Producing tens of thousands of answers by definition means that it’s impossible to understand the sequence of events behind each. If a client were to ask what would have to happen to lead to event 3,761, we would not be able to give a satisfactory answer. So models might well include outcomes that are mathematically possible but difficult to conceptualize and at times implausible in the real world. Of course, it is possible to refine and improve a model, but sometimes solutions were built more around throwing more data at the model than trying to understand it. There are echoes here of the points made earlier about a move in society to giving up on really knowing or understanding the world.

Second, while we tried to reflect the past in building our models—including a correlation between price inflation and wage inflation, for example—we tended not to spend much time thinking about whether our stochastic models produced ranges of results lying significantly outside past experience. We’re not trying to replicate the past, of course—the future may be similar or very different. But if our model produces answers far outside the range of anything that has ever happened, shouldn’t we be telling clients this?
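A check of the kind that question implies need not be elaborate. The sketch below flags how much of a model’s output falls outside observed history; the historical bounds here are placeholders, and in practice they would come from the actual data series used to calibrate the model:

```python
# Sanity check: what share of a model's simulated annual returns falls
# outside the range actually observed in the historical record?
import random

def share_outside_history(simulated_returns, hist_min, hist_max):
    """Fraction of simulated returns outside the historical range."""
    outside = [r for r in simulated_returns
               if not hist_min <= r <= hist_max]
    return len(outside) / len(simulated_returns)

rng = random.Random(1)
sims = [rng.gauss(0.05, 0.18) for _ in range(10_000)]  # the model's output
# Placeholder historical bounds; use real data in practice.
share = share_outside_history(sims, hist_min=-0.45, hist_max=0.55)
print(f"{share:.1%} of simulated returns lie outside anything seen historically")
```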

The use of possibilistic models has had a real impact on pension fund investment strategies, in particular on trustees’ and sponsors’ sensitivity to risk, and has tended to push schemes to reduce their allocation to growth assets in favor of bonds. The average allocation to bonds in 1990 was 13 percent; in 2015, this had more than doubled to 31 percent. Over the same period, investment in equities fell from 70 percent to 41 percent.[4]

In part, this de-risking was motivated by a desire to avoid the absolute volatility of equities as an asset class, bearing in mind that closing to accrual increased the exposure to market value volatility. But remember that models were built that projected liabilities on a “gilts plus” basis (for funding valuations) or a bonds basis (for company accounting valuations). It is not surprising that a model that projects future funding positions using a “gilts plus” approach produces more stable results with a greater allocation to gilts in the investment portfolio. Couple this with the continued rise of financial economics and the concentration on accounting disclosures and shareholder value, and the attraction of greater allocations to bonds starts to become too strong to resist.

Of course, lower allocations to growth assets further increase scheme costs; the overall impact is that more schemes close not just to new entrants but to all further accrual of benefits. This is the position we have now reached. But to pile on the pressure, the U.K. has seen a massive drop in the returns on gilts and bonds, largely as a result of quantitative easing. For those schemes with a significant proportion of their assets invested in bonds, some of which have gone further and matched the movements in assets and liabilities using hedging, the effect of this on past service funding positions has not necessarily been pronounced. But the cost of future service benefits assessed using a “gilts plus” approach means that future accrual has become unaffordable for those hardy employers that had maintained open schemes. So we have now moved to a position where the number of employees accruing benefits on a defined contribution basis exceeds the number accruing defined benefits.

This position is, for me, frustrating. We have moved away from a position where collective defined benefit pension schemes could provide pension benefits efficiently and also form a pool of assets for productive investment in business. For collective schemes open to accrual and new members, the income from new contributions and the fungible nature of money mean that day-to-day fluctuations in market value are not important. But we’ve forgotten this thanks to an over-focus on risk.

Have our VaR models and an over-focus on possibilistic risk led society away from efficient collective vehicles? Has the reduction in risk to scheme sponsors from investing more and more assets in gilts been counterproductive?

My concern is that as actuaries, we may have contributed to our clients’ unwillingness to take risk by overplaying possibilistic outcomes and not helping clients to situate these possible results within a wider historical understanding.

Wider Lessons

The desire to avoid risk is not limited to the pensions sphere. In another essay, Furedi points out that we live in times when even insurers see risk as problematic:

After 9/11 the focus was on terrorism. At the time Rodger Lawson, the president of the Alliance of American Insurers states that “terrorism is an uninsurable act”. The claim that society cannot insure individuals and businesses against a particular threat constitutes a very serious problem. To state that a threat is uninsurable is to acknowledge that society can do little to protect its citizens. The disintegration of insurance—an institution of risk sharing—would send out the signal that everyone is on their own and left exposed by the inability of society to manage the threat they face. To claim that a phenomenon is uninsurable is to say that it is beyond human management or control. The idea that society is incapable of managing certain risks through insurance signals a powerful mood of defeatism towards the dangers ahead.[5]

While society sees risk as unmanageable, it also sees everything through the prism of risk. Almost every area of our lives has been reinterpreted with a risk spin—here are some examples:

Childhood is now seen as hugely problematic, with risks to children at the top of the list of things we worry about. Some children are even formally defined as “at-risk” children. As blogger and activist Lenore Skenazy[6] has argued, this is particularly puzzling because statistics indicate children live far safer lives than in the past.

Risk is also now the measure against which we judge our eating and drinking habits. In the U.K., each week sees a new headline cautioning against some new foodstuff on the basis that there is a risk it may cause cancer, heart disease, or some other unwelcome effect. Most recently we have been cautioned against eating overbrowned toast, as “[l]aboratory tests show that acrylamide in the diet causes cancer in animals. While evidence from human studies on the impact of acrylamide in the diet is inconclusive, scientists agree that acrylamide in food has the potential to cause cancer in humans as well and it would be prudent to reduce exposures.”[7] Note that here the warning is not due to any proven effect—just the potential for one.

While our most intimate lives have been colonized by risk assessment, the same can be said of the public sphere. The fear of terrorism is an obvious area where a palpable sense of risk informs public policy. It’s important to note that a strategy designed to reduce risk does not necessarily imply a do-nothing strategy. As Furedi points out:

The precautionary approach does not necessarily encourage cautious behaviour. In its search for worst-case scenarios, it continually raises the stakes and fuels the demand for action. If as in the case of terrorism we fear the worst, then swift action is called for.

Reasons to Be Cheerful

It’s easy to get disoriented and disheartened about how perceptions of risk seem to dominate society now. But I see some green shoots of optimism starting to break through.

In the U.K., the recent vote to leave the EU was taken against the advice of pretty much every mainstream adviser—all of whom saw Brexit as too risky. Those warning against the risks included Mark Carney, Barack Obama, the International Monetary Fund, Paul Krugman, Richard Branson, Warren Buffett, Stephen Hawking, NATO, professors, economists, business leaders, and world leaders.[8]

Whatever one’s view of the referendum outcome, the majority of the U.K. population chose to take the riskier route and vote for Brexit. There’s a similar story in the United States, where Donald Trump cannot be painted as having been the “safe” option.

Turning back to the pensions sphere, there is some evidence that individuals saving through individual pension arrangements hold reasonable levels of assets in growth portfolios. Most members invest in the default fund, and research from Schroders indicates that for the DC schemes of FTSE 350 employers, the average allocation to bonds is only 15.5 percent, with 67 percent in developed equities and a further 17 percent in emerging markets and alternatives.[9] Further research in this area would be welcome.

But there is a fair amount of discussion around the idea that the population at large is skeptical of the lead given by experts—this view was articulated in the Brexit vote by leave supporter Michael Gove, who said, “Britain has had enough of experts.”

What Does All This Mean for Actuaries?

Here are three policy ideas for our profession built on the discussion in this article.

First, I’d argue strongly that we need to introduce some discipline into the use of words like “risk.” There is a difference between amorphous risk and measurable probability, and we need to use our professional knowledge to distinguish between the two. I’d also like to encourage a discussion within the profession about whether we can add anything to debates that focus on risk as an unknowable and unmeasurable threat. Our tendency when risk is mentioned is to jump up and claim expertise. But if we are talking about an unmeasurable risk—say, the risk of a biological attack on Manchester—can we really add anything? And if we can’t, should we not just say so rather than claiming this is a “wider field” ripe for the introduction of actuarial expertise?

Second, we need to remember that we live and practice our profession in a wider world—both we and our clients are subject to the cultural mores of the time. As society moves from a probabilistic to a possibilistic understanding of the world, we need to “hold the line” in maintaining a rational understanding—using evidence and knowledge to project realistic potential futures. We need to guard against the tendency to look at everything a mathematical model says is possible without trying to understand how projected futures might come about. We also need to think carefully not just about our understanding, but also about the understanding of the clients to whom we present advice. Because society has such a negative view of the future, we all tend to view risk as something to be avoided and to concentrate on the risks most important to ourselves. But as professionals managing probabilities, we should also be highlighting the effect a particular course of action has in other areas perhaps not seen by our clients. As an example, trustees and sponsors of U.K. defined benefit pension schemes have reduced the probability of there being additional calls for cash on the sponsor. But they have generally increased the probability of scheme members having inadequate pensions on which to retire. We sometimes need to take the time to widen our clients’ focus beyond their immediate concerns.

Third, we should hold on tightly and proudly to our professional judgment. It’s easy to accept that building a model that has to work mechanistically means simplifying and removing the room for judgment. But it’s important that we then analyze and critique the output from that model using our professional experience and knowledge. This is where I’d argue there’s a difference between professionals and experts. Professionals have a wider public duty to think about, they are involved long term with their clients, they take responsibility for the impact of their advice, and they consider things in the round. Experts tend to be parachuted in as a short-term fix, point to a technical solution to a narrow problem, and don’t think about the wider impacts. So I don’t much mind if the world has had enough of experts—but I think we need professionals more than ever.

A cross-Atlantic debate on these policy proposals would be most welcome.


HILARY SALT is senior actuary at First Actuarial in Manchester, U.K., and a member of Council of the U.K.’s Institute and Faculty of Actuaries (IFoA).

Endnotes

[1] The Timid Corporation: Why Business Is Terrified of Taking Risk; Benjamin Hunt; 2003.
[2] “Excerpt from the ‘Special Message to the Congress on Urgent National Needs’”; President John F. Kennedy; May 25, 1961. Accessed April 10, 2017, from the NASA website at www.nasa.gov/vision/space/features/jfk_speech_text.html.
[3] “Precautionary Culture and the Rise of Possibilistic Risk Assessment”; Frank Furedi; Erasmus Law Review; Volume 2, Issue 2; 2009.
[4] The Right Ingredients: Pension Fund Indicators 2016; UBS; 2016.
[5] Invitation to Terror: The Expanding Empire of the Unknown; Frank Furedi; 2007.
[6] “Free-Range Kids” website; www.freerangekids.com. Accessed April 10, 2017.
[7] “Acrylamide”; U.K. Food Standards Agency website. Accessed April 10, 2017.
[8] A full list is available at strongerin.co.uk/experts.
[9] FTSE Default DC Schemes Report; Schroders; May 2016.
