Posted: May 17, 2016
Article Summary: An article in the April 29, 2016 issue of The Atlantic focused on a study claiming that the average person is likelier to die in a mass extinction event than in a car accident. On May 4 Faye Flam asked me to comment for an article she wanted to write for Bloomberg News about the resulting controversy, noting: “I think there’s probably a bigger story about misleading use of statistics and confusion about risk.” The “bigger story” I saw was a bit different: how to communicate about high-magnitude low-probability risks – the sorts of risks that people either exaggerate (if the risk arouses a lot of outrage and they focus on its high magnitude) or shrug off (if the risk arouses very little outrage and they focus on its low probability). On May 10 I emailed Faye this response. She wrote her story, but on May 17 the Bloomberg News editors decided not to run it, judging that the news peg – the Atlantic mass extinction article – was no longer of much interest to their readers.

Car Crashes and Mass Extinction Events: Communicating about High-Magnitude Low-Probability Risks

(a May 10, 2016 email in response to a query from Faye Flam of Bloomberg News)
Faye Flam’s article drawing from this email was never published.

I’m dividing my answer to your inquiry into three parts.

  • First I’ll summarize and briefly comment on the mini-controversy you asked about, over the risk comparison of a car crash versus a mass extinction event.
  • Then I want to make some points about the broader issue here: the challenge of communicating about high-magnitude low-probability risks.
  • Finally, I’ll return to car crashes and mass extinction events, and discuss what barriers have to be overcome to motivate precautions against either.

Car crash versus mass extinction event

The April 29, 2016 issue of The Atlantic included a short article by Robinson Meyer entitled “Human Extinction Isn’t That Unlikely.” The article’s subtitle made a startling claim about risk statistics: “‘A typical person is more than five times as likely to die in an extinction event as in a car crash,’ says a new report.”

The main point of the article was that a mass extinction event (such as a nuclear war, a catastrophic human-induced climate change, a deadly pandemic, an artificial intelligence takeover, or an asteroid collision) isn’t as unlikely as people imagine. That point was drawn from a much longer report called “Global Catastrophic Risks 2016,” recently published by a group called the Global Priorities Project.

In its discussion of the probability of such a mass extinction event, the Global Priorities Project report cited a number from an earlier report, the “Stern Review on the Economics of Climate Change”: a 0.1 percent (one-in-a-thousand) annual chance of human extinction. That 0.1 percent per year is the number that caused all the fuss. Once you accept that 0.1 percent number, the startling risk comparison is just simple arithmetic. As Meyer points out, “Every year, one in 9,395 people die in a crash; that translates to about a 0.01 percent chance per year.” It’s hard to argue with a claim that 0.1 is more than five times as great as 0.01. It is exactly ten times as great.
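(For readers who want to check that arithmetic themselves, here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above; the variable names are mine, invented for illustration.)

    # Back-of-the-envelope check, using only the figures quoted above.
    # Annual probability of dying in a car crash: one death per 9,395 people per year.
    car_crash_per_year = 1 / 9395      # about 0.000106, i.e. roughly 0.01 percent

    # Annual probability of human extinction, per the Stern Review's
    # hypothetical modeling assumption: 0.1 percent per year.
    extinction_per_year = 0.001

    ratio = extinction_per_year / car_crash_per_year
    print(ratio)   # about 9.4; with the car crash figure rounded to 0.01 percent, exactly 10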

That’s not the only way the report and the Atlantic article bent over backwards not to overstate their case. The Stern Review number was for “human extinction,” which presumably meant everyone on the planet; but the Global Priorities Project report defined a mass extinction event more loosely, as one that kills at least ten percent of the world population. If there’s a one-in-a-thousand chance of killing everyone on the planet next year, there’s got to be a greater than one-in-a-thousand chance of killing one tenth of us.

So why the controversy?

After the Atlantic article came out, a scathing critique was posted on Tumblr. It pointed out that the Stern Review never represented 0.1 percent as a scientific, empirical estimate of the annual risk of human extinction; Stern merely postulated 0.1 percent in a modeling exercise, as the basis for a philosophical discussion of how/whether to count impact on future generations when thinking about climate change. The Global Priorities Project made Stern’s arbitrary number look like an actual risk estimate. That distortion became the basis for Meyer’s article.

The Tumblr critique provoked an “Errata” post from the Global Priorities Project, acknowledging the basic validity of the critique. Both the Tumblr post and the Errata post trace the course of the distortion in laborious detail. They disagree on the extent to which it was motivated or inadvertent, but they agree on what happened:

  • The text of the report said that the Stern Review “suggested a 0.1% chance of human extinction each year.” This use of “suggested” was technically accurate but surely misleading; as used by the Stern Review, the “suggestion” was purely hypothetical. Other portions of the Global Priorities Project report did make clear that the authors realized it’s not possible to estimate annual extinction risk robustly. But the misuse of the Stern modeling number implied otherwise.
  • The problem was exacerbated by the inclusion in the report of a full-page two-sentence summary pull-quote: “The UK’s Stern Review on the Economics of Climate Change suggested a 0.1% chance of human extinction each year. If this estimate is correct, a typical person is more than five times as likely to die in an extinction event as in a car crash.” It’s in the pull-quote that the misleading “suggested” morphed into the flat-out false “estimate.”
  • Meyer obviously thought the comparison was pretty stunning. Wow, he must have said to himself, you’re a lot likelier to die in a mass extinction event than a car crash. He almost certainly didn’t notice the distinction between a suggestion and an estimate, much less check the Stern Review to learn that the number was used there only as the basis for a hypothetical modeling exercise. Meyer’s Atlantic article made that number its centerpiece, treating it as an unconditional claim.

The study authors conceded that the number they borrowed was a modeling assumption, not a risk estimate; and that they should have been clearer about that in the first place. They posted that pretty gracious “Errata” explanation, which even included a link to the Tumblr critique. They also got The Atlantic to change the online text of its article and add an italics introduction that reads:

An earlier version of this story presented an economic modeling assumption – the .01 chance of human extinction per year – as a vetted scholarly estimate. Following a correction from the Global Priorities Project, the text below has been updated.

[I can’t resist noting in passing that this ital intro adds a new error. The number that got misused isn’t a .01 chance; it’s a 0.1 percent chance, which is a .001 (one-in-a-thousand) chance. But no matter.]

It’s not rare for modeling numbers to get reified as risk estimates. In 2004, for example, a CDC senior health economist named Marty Meltzer pumped a set of modeling assumptions through his “FluAid” software program, which popped out the numbers “2 to 7.4 million” as the hypothetical number of deaths to be expected in a not-very-severe influenza pandemic … given those particular modeling assumptions. This model output was widely reported by the media as the CDC’s “predicted” number of deaths from a human bird flu pandemic.

I think the Global Priorities Project people have done as much as we could reasonably expect to correct the misimpression and apologize for perpetrating it. And the point of their study isn’t much affected by the misimpression that there is a valid statistical estimate of the risk of dying in a mass extinction event – as valid as our estimate of the risk of dying in a car crash. Their original point was simply that we take precautions against car crashes even though we know our chances of dying in a car crash are low. So why not also take precautions against low-probability mass extinction events – that is, why not do more to prevent global warming, nuclear war, disease pandemics, artificial intelligence takeovers, etc.?

True, we can’t reliably estimate the likelihood of an extinction event. Only two events in history, both disease pandemics, have met the report’s criterion of killing ten percent of the world population. A total extinction event has never happened, obviously, so there’s no adequate way to calculate the probability of its happening. And then of course there are the unknown unknowns, the extinction events not yet on our list of possible extinction events. Nonetheless, most of us have an intuitive sense that there are a lot of risks in our world that might cause untold catastrophe. We know we can’t measure or even estimate the probability. But we sense it’s not trivial.

And yet we do relatively little to improve the odds. Why don’t we do more? And how could risk communication convince us to do more?

High-magnitude versus low-probability: the role of outrage

In your follow-up email to me, you wrote: “This little flap about extinction provides a teachable moment. People are sometimes told they fear big scary events too much and don’t appropriately fear mundane threats, such as car crashes and heart disease. Now The Atlantic is giving the exact opposite message.”

Both claims are accurate.

People do sometimes overreact to high-magnitude low-probability risks, as you point out. But on other occasions we underreact to such risks, as The Atlantic points out.

Obviously, the high-magnitude low-probability risk has two key characteristics: It’s awful and it’s unlikely. Cognitively, these two are entirely compatible; most awful risks are unlikely. But they lead to incompatible conclusions. Awful means take precautions, whereas unlikely means don’t worry about it. The combination is emotionally unstable. To resolve the incompatibility, we tend to fasten onto one of the two and ignore or dispute the other one.

Sometimes we focus on a risk’s high magnitude: “OMG, look what could happen!” In that case, we either pay no attention to its low probability or we argue (as the Atlantic article did) that it’s not really all that unlikely. Other times we focus on a risk’s low probability: “Why worry about something so unlikely?” So we shrug off its high magnitude or insist it’s not really all that awful.

What we don’t seem to be very good at is holding in our minds simultaneously the thought that something is really, really bad and the thought that it’s really, really unlikely – then deciding what precautions make sense. We overreact or we underreact.

What determines which response we pick? The biggest piece of the answer is what I call outrage: how upsetting the risky situation is. When I invented the label “outrage” some 30 years ago I had in mind the sort of righteous anger people feel when they suspect a nearby factory is belching carcinogens into the air. But as I use the concept now, it applies to fear-arousing situations as much as anger-arousing situations. High-outrage risks are the risks that tend to upset people, independent of how much harm they’re actually likely to do.

A risk that is voluntary, for example, provokes less outrage than one that’s coerced. A fair risk is less outrage-provoking than an unfair one. Among the other outrage factors:

  • Familiar versus exotic
  • Not memorable versus memorable
  • Not dreaded versus dreaded
  • Individually controlled versus controlled by others
  • Trustworthy sources versus untrustworthy sources
  • Responsive process versus unresponsive process

Another one of the outrage factors – central to our interest here – is chronic versus catastrophic. As I wrote in 1987, in an article for the U.S. Environmental Protection Agency:

Hazard A kills 50 anonymous people a year across the country. Hazard B has one chance in 10 of wiping out its neighborhood of 5,000 people sometime in the next decade. Risk assessment tells us the two have the same expected annual mortality: 50. “Outrage assessment” tells us A is probably acceptable and B is certainly not.
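(Spelled out, the “same expected annual mortality” arithmetic looks like this; a minimal sketch using only the numbers in the passage, with variable names invented for illustration.)

    # Hazard A: 50 anonymous deaths per year across the country.
    hazard_a_deaths_per_year = 50

    # Hazard B: one chance in 10 of killing 5,000 people sometime in the next decade.
    chance = 0.1                 # one in ten
    deaths_if_it_happens = 5000
    years = 10
    hazard_b_deaths_per_year = chance * deaths_if_it_happens / years

    print(hazard_a_deaths_per_year)   # 50
    print(hazard_b_deaths_per_year)   # 50.0 -- the same expected annual mortality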

So catastrophic risks (that is, risks that are high-magnitude and low-probability, like Hazard B in my example) have a built-in advantage over chronic risks in the outrage competition. They typically have two other built-in advantages as well: familiarity and memorability. Chronic risks are usually familiar and not very memorable. Catastrophic risks are far likelier to be exotic and memorable.

All other things being equal, we’re likelier to overreact than underreact to a high-magnitude low-probability risk. And we’re likelier to underreact to a low-magnitude high-probability risk.

Car crashes and mass extinction events

It follows that all other things being equal, we’re likely to be more worried about mass extinction events than car crashes. Yet as the Atlantic article rightly complains, most people act as if they were more worried about car crashes. Why?

Just as the theorizing in the previous section would predict, we are in fact more worried about airplane crashes than car crashes. Articles on risk perception often point out that driving is statistically a lot deadlier than flying; as pilots like to say, the riskiest part of your journey is the drive to and from the airport. Yet airplane crashes arouse a lot more fear (outrage) than car crashes, largely because plane crashes kill people in periodic mass casualty events that are memorable and exotic (and therefore intrinsically newsworthy), whereas car crashes kill people in smaller and more frequent events that are not so memorable and all too familiar (and rarely deserve much media attention). In the plane-crash-versus-car-crash risk comparison, the car crash is lower-magnitude and higher-probability and therefore arouses less outrage – arguably less than it deserves.

I have seen data showing that when people are asked to assess the relative size of various risks in their lives, they accurately put car crashes somewhere near the top. Nonetheless, we often behave in ways that exacerbate our risk of an auto accident. Why? Because it’s a chronic risk we take every day; it’s familiar; it’s not memorable. Car crashes are low on some other outrage factors as well: not dreaded, voluntary, under our individual control, etc. Our risk-taking behavior behind the wheel is neither because of our assessment of the probability of death nor in spite of our assessment of the probability of death. We don’t underestimate low-outrage risks as much as we fail to estimate them at all, in real time. They’re simply not on our decision-making agendas. When we think about risks, we’re mostly thinking about the high-outrage ones.

Given the low intrinsic outrage of car crashes, what’s surprising isn’t that people so often do risky things behind the wheel. What’s surprising is that we often take reasonable care. Auto safety communicators have long understood that one of the best ways to get us to take care is to arouse more outrage about unsafe driving practices. Thus:

  • The main reason people started wearing seatbelts had more to do with not getting a ticket than with not getting killed. And it had even more to do with setting a good example for your child, who was taught in school to make a fuss if mommy or daddy didn’t buckle up. Even today, seat belt use is more about habit and social pressure than risk statistics.
  • Similarly, organizations like Mothers Against Drunk Driving have made that behavior no longer socially acceptable – today we consider drunk driving not just dangerous but also immoral. And changes in law enforcement have made the risk of getting arrested for DUI far more vivid in many drivers’ minds than the risk of getting killed in a crash.
  • Much less progress has been made vis-à-vis distracted driving. If we ever manage to get people to stop using their cell phones while they’re driving, it won’t be because we impressed on them the statistical risk they were taking. It will be because of factors like social pressure and law enforcement.
  • Those terrifying car crash movies we all watched in high school driver ed classes also worked to some extent. They worked by mobilizing outrage (fear in this case rather than disapproval), not by inculcating risk statistics.

In a nutshell, people would drive more safely if the risk of a car crash aroused more outrage. So auto safety proponents work hard to boost the outrage any way they can. And they’ve had some successes.

As we have seen, mass extinction events have a leg up on car crashes when it comes to outrage – three legs up, in fact: more catastrophic, less familiar, and more memorable.

So why aren’t people taking more precautions to reduce the chances of a mass extinction event? Because mass extinction events have some big outrage-related disadvantages as well. I’ll discuss five of them.

1. People may feel too much outrage about mass extinction events, leading to denial.

There is one important exception to the outrage-increases-perceived-risk rule: denial. Fear can be hard to bear. And when people cannot bear their fear, an emotional circuit breaker is tripped and they go into denial. (The availability of denial is one of the main reasons panic is rare. When we’re at risk of panicking, we usually go into denial instead.) So the women who are most terrified of breast cancer may deny their fear and “not bother” to check for lumps.

At the height of the Cold War, similarly, many people underestimated the probability of a nuclear exchange between the United States and the Soviet Union, not because they weren’t concerned but because they found the prospect too awful to contemplate. For an assessment of the role of denial in people’s response to the risk of nuclear war, see my 1986 article with JoAnn M. Valenti on “Scared Stiff – or Scared into Action.” Although terrorist attacks don’t meet the 10% criterion for a mass extinction event, see also the section on denial in my column on 9/11 and my column with Jody Lanard on “Duct Tape Risk Communication.”

Although fear is the emotion that’s likeliest to rebound into denial, to a lesser extent so can anger, hurt, guilt, sadness, and the other emotions that add up to outrage. (See “Beyond Panic Prevention: Addressing Emotion in Emergency Communication.”) My 2009 column on “Climate Change Risk Communication,” for example, talks about the many ways climate change activists inadvertently provoke denial in audiences they are trying to inspire into action. I might be more willing to reduce my carbon footprint if the enviros didn’t keep yelling at me that my selfish over-consumptive lifestyle is to blame for overheating the planet.

The key point here: If people are in denial about a risk, treating them as if they were apathetic is bound to boomerang. They’re already finding the risk unbearable. Trying to impress on them how serious it is will only push them deeper into denial.

2. People may feel that preventing mass extinction events is futile, leading to paralysis.

If I’m worried about car crashes, I can wear my seat belt, drive slower and more carefully, buy a safer car, avoid driving late at night when the drunks are out, etc. Ask any driver what precautions are available to reduce the odds of having an accident, and you’ll get a list. We may not do everything on the list, but we know what’s on the list and we know we can always do more if we choose to.

Now ask somebody what he or she can do to reduce the odds of global climate change, nuclear war, an asteroid hit, or a takeover by artificial intelligence.

Activist groups devoted to these issues, like the Global Priorities Project, have answers to that question. They know what they want us to do. Some of their solutions involve small-scale individual actions like reducing our carbon footprint. Others involve collective action like supporting laws aimed at reducing the risk of extinction events. But their lists of mass extinction risk-reducers are a lot more debatable than our lists of car crash risk reducers.

Rightly or wrongly, many, many people are convinced that there is nothing to be done about these mass extinction scenarios. Or if they can think of steps that might help, they’re convinced that something (short-term self-interest, government stalemate, international competition, maybe just human nature) will make sure those steps are never taken. Or even if they can imagine progress being made, they are convinced that nothing they can do individually can contribute in any meaningful way.

In the language of psychology, these are efficacy (“there’s nothing to be done”) and self-efficacy (“there’s nothing I can do”) problems. When people’s senses of efficacy and self-efficacy are low, fatalism and paralysis set in. Sometimes it’s a kind of apathy – people rationally choose not to worry about a problem they consider unsolvable. Sometimes it’s a kind of denial – people can’t bear to worry about a problem they consider unsolvable. Either way, insisting on how serious the risk is won’t help.

3. People may feel little or no sense of personal responsibility for mass extinction events.

The risk of a car crash is personal. I realize that a crash can happen through no fault of my own; it could be the fault of equipment failure or another driver. Still, when I get behind the wheel I am taking personal responsibility for not crashing my car.

Preventing a mass extinction event, on the other hand, isn’t meaningfully my responsibility. Some mass extinction events, if they happen, won’t be anybody’s fault. Some will be everybody’s fault … which is almost the same thing.

So we reason thusly: “I have a lot on my plate already – tasks that really are mine, tasks I can’t easily shrug off. Why would I voluntarily take even partial responsibility for preventing a pandemic or a nuclear war?”

4. People may find mass extinction events too unlikely to worry about.

As I’ve already pointed out, we tend to react more strongly to high-magnitude low-probability risks than to risks that are less horrific but likelier. But research by Daniel Kahneman and others has shown that this tendency has limits. Unlikely but possible catastrophes capture our attention more than the statistics say they should, and we perceive them to be likelier than they actually are. But extremely unlikely, just-barely-possible catastrophes stretch the elastic band of credibility past its limits; we round off “extremely unlikely” to “impossible” and refuse to take the risk seriously. That leaves the handful of people who do take the risk seriously isolated and alienated – and sometimes inclined to write articles urging readers to take another look.

In other words, it’s probably easier to get people worried about a possible pandemic that would kill millions of people than a smaller, likelier pandemic that would kill only thousands. But it may be hardest of all to get people worried about a monster never-before-seen pandemic that would kill billions – a mass extinction event.

The point of the Atlantic article and the study behind it was that we may actually be likelier to die in a catastrophe that huge than in a car crash. Leave aside that the study authors used an arbitrary made-up estimate of mass extinction risk to “prove” their point. Even if they had used an indisputable risk calculation, a lot of people would have rationalized it away, reasoning as follows: “They’re talking about a bigger catastrophe than I have ever seen or even heard about. So it has to be really, really unlikely. I’m not going to worry about something that unlikely.”

Most of us carry in our heads a stereotype of somebody who wants to tell us the end of the world is near. He’s a middle-aged man in a dirty robe and a scraggly beard, carrying a hand-drawn sign and accosting strangers on the sidewalk. He’s a nut. We don’t often pause to wonder if he’s right and what precautions we might want to take.

5. People may feel more outrage about the proposed precautions to prevent mass extinction events than about the events themselves.

In the mid-1980s I was active in the nuclear weapons freeze movement, trying to stop the U.S. from expanding its nuclear arms race with the Soviet Union. A significant segment of our audience agreed with us that the U.S.-Soviet nuclear competition posed a sizable risk of a mass extinction event, but worried that abandoning that competition posed a sizable risk of Soviet domination. And to many of them, Soviet domination looked scarier than mass extinction: “Better dead than red.”

Similarly, the Achilles heel of climate change activism is the demand that we abandon our cars, our consumerism, and some measure of our control over our own lives. Our reluctance to give these things up constitutes a powerful reason to believe climate change isn’t a serious risk (at least not a mass extinction risk).

And it doesn’t help that so many climate change activists don’t share our reluctance. People who like strong regulatory regimes and loathe high-consumption lifestyles find it easy – even convenient – to think climate change is a serious risk, even a mass extinction risk; the remedies they propose for climate change would be attractive to them even if there were no climate change. But people who worry about government overreach and enjoy their ATMs and air conditioners are motivated to find reasons to decide climate change isn’t such a big risk after all.

A closely related issue is the impact of risk-benefit tradeoffs on risk perception. Logically, benefit assessment and risk assessment should be independent activities; you figure out how much good the activity does and how much harm it does, and then you can decide which effect is larger. But in practice, benefit perception and risk perception interact. Whichever one preoccupies you distorts your judgment of the other one.

So if I already strongly believe the benefit of X is high, I am motivated to decide the risk of X is low. And if I’m already convinced the risk of X is high, I’m motivated to decide its benefit is low. In risk controversies it’s hard to find anyone advocating the “high-risk high-benefit” position. Almost everybody argues either “high-risk low-benefit” or “high-benefit low-risk.” (I find the stock market refreshing because investors readily understand that the most potentially profitable investments are also the riskiest.)

A lot of mass extinction risks offer sizable current or potential benefits. The many tantalizing soon-to-be-achieved benefits of artificial intelligence make people not want to consider the risk of an AI takeover. The benefits of my highly consumptive lifestyle make me not want to consider its climate change risks. The same phenomenon holds true for lower-magnitude higher-probability risks, of course. Driving has huge benefits, which makes it harder to keep us focused on the risk of car crashes.

And when we change our minds about how beneficial a technology is, we’re likely to revise our view on how risky it is to match. So far the absence of significant benefits from genetically modified food ingredients makes it easy to overestimate GMO risks; lots of people envision genetically modified monsters taking over the world. But diabetics who gratefully rely on genetically modified insulin shrug off any suggestion that that sort of genetic modification could pose worrisome risks. A GM food killer app (no pun intended) with obvious consumer benefits could undermine GM opposition almost overnight.

For all these reasons, and more, convincing people to take mass extinction events seriously isn’t easy. That’s true despite the built-in advantage mass extinction events have with respect to three outrage factors: chronic versus catastrophic, familiar versus exotic, and not memorable versus memorable.

So the experts who say we often tend to overreact to high-magnitude low-probability risks are right. And The Atlantic is also right in its claim that we’re not paying enough attention to mass extinction risks.

Four additional points and one bottom line

Once you start trying to untangle the complexities of high-magnitude low-probability risk communication, a lot of other points come into play. Here are four more I can’t bear to leave out of this response.

1. The usual risk communication task vis-à-vis high-magnitude low-probability risks is calming people’s tendency to overreact. At least that’s what my career has been like.

I distinguish a risk’s “hazard” (how much harm it’s likely to do) from its “outrage” (how upset it’s likely to make people). Based on this distinction, I categorize risk communication into three tasks:

  • When hazard is high and outrage is low, the task is “precaution advocacy” – alerting insufficiently upset people to serious risks. “Watch out!”
  • When hazard is low and outrage is high, the task is “outrage management” – reassuring excessively upset people about small risks. “Calm down.”
  • When hazard is high and outrage is also high, the task is “crisis communication” – helping appropriately upset people cope with serious risks. “We'll get through this together.”

In 40+ years of dividing my time among these three tasks, I have learned that when a client wants me to do precaution advocacy – increasing people’s outrage – it’s usually about a low-magnitude high-probability risk, something more like a car crash than a mass extinction event. When I’m brought in to work on a high-magnitude low-probability risk, on the other hand, my client is usually looking for outrage management help: help reducing people’s outrage, or help telling people about some scary but unlikely possible catastrophe without unduly frightening them, or sometimes help telling them about that possible catastrophe without duly frightening them.

Not always. In my time I have worked to arouse public outrage about pandemics, nuclear weapons, and climate change, three of the mass extinction events that preoccupy the Global Priorities Project. But I only occasionally have a client that wants people to be more upset about high-magnitude low-probability risks. I often have clients that want people to be less upset about such risks. I have worked for scores of corporate clients that sought my help persuading neighbors of an industrial facility that they shouldn’t be so worried about the possibility that the chlorine sphere might explode or the tailings impoundment might collapse or terrorists might crack open the containment vessel.

In 2004 I wrote a website column entitled “Worst Case Scenarios.” My summary of the column read in part:

Most of this long column is addressed to risk communicators whose goal is to keep their audience unconcerned. So naturally they’d rather not talk about awful but unlikely worst case scenarios. The column … explains why this is unwise…. Then the column lists 25 guidelines for explaining worst case scenarios properly. Finally, a postscript addresses the opposite problem. Suppose you’re not trying to reassure people about worst case scenarios; you’re trying to warn them. How can you do that more effectively?

There’s a lot of concrete advice in that column that I haven’t replicated here. Except for the postscript, it’s advice on how to tell people about high-magnitude low-probability risks without scaring the pants off them.

2. The risk communication seesaw is often a key dynamic in talking about high-magnitude low-probability risks.

When people are ambivalent about a risk, they tend to focus on the half of their ambivalence that isn’t getting enough attention elsewhere in their communication environment. I call this the risk communication seesaw, and it’s a critical aspect of many risk communication challenges. Risk-benefit tradeoffs, for example, are often a seesaw; whichever one you talk about, if I’m ambivalent I’ll probably focus all the more on the other one.

High-magnitude versus low-probability is also often a seesaw. If you’re out there warning me how horrific that possible catastrophe might be, I may respond that it’s too unlikely to bother with. If you’re trying to reassure me about how unlikely it is, on the other hand, I may respond that it’s too awful to live with.

Only ambivalent people react that way, of course. If I have no opinion, I’ll probably accept yours, and focus on whichever aspect of the risk you urge me to focus on, its high magnitude or its low probability. If I have a firm opinion of my own, I’ll almost certainly stay focused on the aspect I started out focused on.

The paradoxical response happens when I’m torn, fully aware of both the risk’s high magnitude and its low probability, and unsure how to reconcile them. If you want to keep ambivalent people calm, your core message should be: “Even though it’s really unlikely, look how awful it is.” Then your audience can use your low-probability information to tell themselves, “Yeah, well, sure it’s awful, but it’s really unlikely!” If they’re less alarmed than you want them to be, on the other hand, switch seats on the seesaw. “Even though it’s really awful, look how unlikely it is,” you should assert, leaving your audience to respond in their heads, “Yeah, but it’s really awful!”

3. Outrage about high-magnitude low-probability risks is often manipulated.

Organizations of all sorts – from multinational corporations to activist groups to public health agencies – have a stake in arousing outrage about some risks and suppressing outrage about others. That’s not necessarily nefarious. Their goals may be selfish or altruistic (or both). The information they provide may be accurate or false (or somewhere in between). The communication strategies they deploy may be candid or devious (or, again, somewhere in between). As a consultant who has been paid for 40+ years to advise clients on how best to communicate about risk, I’m in no position to complain that there are forces out there choosing their words intentionally to try to get you to have what they consider the “right” risk response.

Think about some of the high-magnitude low-probability risks that various interest groups have warned the public about in recent years:

  • The risk that ISIS might pose an existential threat to the United States unless we ramp up our military efforts
  • The risk that measles might come roaring back in the United States unless we force hesitant parents to vaccinate their children
  • The risk that transgender people might harm our children unless we control which bathroom they use
  • The risk that Ebola might spread widely in the U.S. unless we quarantine returning volunteers from West Africa
  • The risk that Zika might spread widely in the U.S. unless we do urgent preemptive mosquito control throughout the country
  • The risk that temperatures could spike to unlivable levels unless we massively reduce our greenhouse gas emissions

In many of these cases, but not all, there’s some other interest group arguing the other side. You’re not just making up your mind about risks – including the high-magnitude low-probability risks we’re discussing at the moment. There are people trying to make up your mind for you.

4. Manipulating people’s risk response is a lot harder than it might seem.

Since I earn my living helping clients “manipulate” outrage up or down with respect to specific risks, you surely can’t trust my assurances that what they do isn’t necessarily nefarious. You need to make your own ethical judgments about which risk communication strategies are acceptable and which are not. You may even decide that only truly neutral information is ethical – that no effort to influence other people’s risk response in predetermined directions is honorable, no matter how altruistic its motives. That’s not my view, obviously. But it’s a defensible view.

But maybe you can trust my judgment that influencing other people’s risk response is hard.

The hardest job is getting people upset about a risk they’re already familiar with but not currently upset about – getting them upset enough to take action. Conventional wisdom says that takes at least a generation; that’s how long it took to institutionalize hardhats, seatbelts, and smoke alarms.

  • It’s much easier to introduce people to a new scary risk and get them to overreact temporarily – what I call an adjustment reaction – before they put the risk into context and calm down again.
  • It’s much easier to teach people a new instance of a risk to which they already have a strong emotional response. For example, almost every pregnant woman worries about birth defects, so it’s no great challenge to get her to worry about Zika, which can cause horrific birth defects. Most of the time what looks like a new fear being aroused is actually a preexisting fear being channeled.
  • And perhaps the easiest task of all is getting the mass media, mainstream media and social media alike, to echo the claim that people are overreacting to some new (or newly newsworthy) risk. I have collected hundreds of media stories asserting that the public is panicking with no evidence at all, or at best a panicky anecdote or two in a U.S. population of 319 million.

But actually getting people upset, that’s tough. And once they’re upset, they tend to stay upset – not forever, but for a good while. Getting them quickly apathetic again is tough too.

And here, finally, is the bottom line:

I have spent a good portion of my career trying to get people more alarmed about three specific mass extinction risks: infectious disease pandemics, nuclear war, and climate change – exactly the sorts of high-magnitude low-probability risks that The Atlantic was focusing on. I know how hard that is. I have also spent considerable time trying to get people more alarmed about low-magnitude high-probability risks – for example, trying to convince homeowners to test their homes for radon or industrial employees to wear their safety gear. I know how hard that is too.

And I have spent an even bigger part of my career trying to get people less alarmed about risks like hazardous waste facilities, industrial emissions, and food additives, when it looked to my client (and to me) like the actual hazard was a lot smaller than some group of stakeholders thought it was. That’s every bit as hard as the other two.

My one bottom-line conclusion from these efforts is that the biggest determinant of how serious people consider a risk is how upset they get about it. People don’t mostly get upset because they think the risk is serious. People mostly think the risk is serious because they’re upset. In my jargon, outrage causes hazard perception far more than hazard perception causes outrage.

  • Dengue and Zika are similar viruses spread by the same mosquito species. Dengue is a bigger hazard. But because of those photos everyone has seen of microcephalic babies, Zika is a bigger outrage. So the U.S. government proposes to spend far more money fighting Zika in the continental U.S. than it has spent here fighting dengue in any recent year.
  • Letting your child sleep over at a friend’s house is a lot more dangerous if the friend’s parents have a swimming pool than if the parents have a gun. But at least in liberal suburban environments, swimming pools are a familiar, acceptable, low-outrage risk. Guns are not. So suburban parents protect their kids from guns much more carefully than from pools.

People do not get dengue-versus-Zika or pools-versus-guns wrong because they don’t know the statistics. True, they don’t know the statistics. But teaching them the statistics will do shockingly little good. That’s the fundamental error the “Global Catastrophic Risks 2016” report and the Atlantic article made.

If you want to get people to take (or demand) more precautions about a risk, you need to find ways to make that risk a bigger source of outrage. And if you want them to stop taking (or demanding) so many precautions, you need to figure out how to make the risk a smaller source of outrage. That’s true whether the risk is car crashes, or mass extinction events, or anything else.

Copyright © 2016 by Peter M. Sandman
