Posted: November 10, 2001
Article Summary: Being asked to summarize the whole of risk communication in a short encyclopedia article was a challenge (even a decade ago, when much less was known). For me the biggest challenge was to summarize risk communication, not just my approach to it. I think I partially succeeded.

Risk Communication

Ruth A. Eblen and William R. Eblen (eds.), Encyclopedia of the Environment
Boston, MA: Houghton Mifflin, 1994, pp. 620–623

In the history of language, “Watch out!” was almost certainly an early development. “Stop worrying” probably came on the scene a little later, as it reflects a less urgent need, but both poles of risk communication – alerting and reassuring – undoubtedly predate written language.

So does the discovery of how difficult risk communication is. If there is a central truth of risk communication, this is it: “Watch out!” and “Stop worrying” are both messages that fail more often than they succeed. The natural state of humankind vis-à-vis risk is apathy; most people are apathetic about most risks, and it is extremely difficult to get them concerned. But when people are concerned about a risk, it is also extremely difficult to calm them down again.

Taking “Watch out!” and “Stop worrying” as the defining goals of risk communication embeds an important and very debatable assumption: that risk communication is essentially a one-way enterprise, with an identifiable audience to be warned or reassured and a source to do the warning or reassuring. For this to be an acceptable assumption, at least three other interconnected assumptions must be accepted as well: that the source knows more about the risk than the audience; that the source has the audience’s interests at heart; and that the source’s recommendations are grounded in real information, not just in values or preferences. In many risk communication interactions these specifications are not satisfied. A parent warning her children about the risks of marijuana may know less than they do about the drug; a chemical company reassuring neighbors about its effluent may be protecting its own investment more than its neighbors’ health; an activist urging shut-down of all nuclear power plants may be motivated more by a preference for a decentralized energy industry than by data on the hazard. To the extent that these things are so, risk communication ought to be multi-directional rather than one-directional, a debate instead of a lecture. And the criteria for “effective risk communication” ought to be things like the openness of the process to all viewpoints and the extent to which values are distinguished from scientific claims, rather than whether the audience’s opinions, feelings, and actions come to reflect the source’s assessment of the risk.

The judgment that risk communication should be multi-directional is well established in the literature about risk communication, but not yet in its practice. Except in the growing area of environmental dispute resolution, which is grounded in the negotiation of competing risk claims, it is considered almost heretical to assert that industry, government, activist groups, and the media (the principal risk communicators) should perhaps talk less and listen more. There is, however, progress on the more modest claim that even one-directional goals are best served by multi-directional means – that is, that it is easier to design effective messages if the source pays attention to what the prospective audience thinks and feels.

Many risk communicators, especially in government, try to avoid the problem by defining their goal in strictly cognitive terms: to explain the risk so that people can make up their own minds how to respond. Though still not multi-directional, this approach is at least respectful of the audience’s autonomy. It measures success not by what the audience decides, but by what the audience knows, and whether it believes it knows enough to make a decision. A source that takes knowledge gain as equivalent to making the “right” decision is likely to be misled about the effort’s success; knowledge about radon, for example, is virtually uncorrelated with actually doing a home radon test. But often enough knowledge is the real goal. A 1991 California law requires factories to send out a notification letter if they pose a lifetime mortality risk to neighbors of more than ten in a million. Merely letting people know puts pressure on management to get the risk down below the trigger point; the notification letter itself need not aim at provoking or deterring neighborhood activism. Informed consent warnings, similarly, can be considered successful whatever the forewarned audience decides.

Whether the process is one-directional or multi-directional, and whether the goal is persuasion or knowledge, risk communicators typically start out with a gap they hope to bridge between their assessment of a particular risk and their audience’s assessment. In other words, “Watch out!” and “Stop worrying” are still the archetypes.

Risk-aversion, risk-tolerance, and risk-seeking are often assumed to be enduring traits of character (in individuals and in cultures), but the variations are more impressive than the consistencies. There is no great surprise in encountering a sky-diver who is terrified of spiders. Concern about personal risks (like cholesterol) shows only modest correlations with concern about societal risks (like industrial effluent). When the domain of “risk” is extended even further, the correlations may disappear or even reverse. Quite different groups lead the way in concern about environmental risks (global warming, toxic waste dumps), economic risks (recession, unemployment), and social risks (family values, violent crime). Cultural theories of risk try to make sense of these patterns; one such theory attributes them to distinctions among hierarchical, entrepreneurial, and egalitarian cultural values. Depending on the hazard under discussion, in short, we are all both over- and under-responders to risk.

“Watch Out!”

The most serious health hazards in our lives (smoking, excessive fat in the diet, insufficient exercise, driving without a seatbelt, etc.) are typically characterized by under-response – that is, by apathy rather than panic. This is apparently true even in societies where the list of serious hazards is instead dominated by war, famine, and infectious disease. Considering how many lives are at stake, the enormous difficulty of warning people gets surprisingly little comment. The new risk communication industry that has emerged since the mid-1980s is preoccupied far more with reassuring people than with warning them; those who seek to warn operate under less trendy labels like “health education.” Apart from the fact that industry has more money for reassurance than government and activists have for sounding the alarm, there is a more fundamental reason for the distortion: Apathy makes intuitive sense to most people. We are not especially surprised, bewildered, or offended when others fail to take a risk seriously enough.

The dominant models of self-protective behavior assume a rational under-response to the risk and aim at correcting the misunderstandings that undergird that response. That is, they try to convince the audience that the magnitude of the risk is high (“X is a killer”); that the probability of the risk’s occurrence and the susceptibility of the audience are high (“X strikes thousands of people each year and is likely to strike you as well”); and that the proposed solution is acceptably effective, easy, and inexpensive (“Here’s what you can do about X”). All these propositions are difficult to convey effectively. People tend to be particularly resistant to the idea that they are at risk. For virtually every hazard, most people judge themselves to be less at risk than the average person: less likely to have a heart attack, less likely to get fired, less likely to become addicted to a drug. This unrealistic optimism permeates our response to risk, and we support it by concocting from the available information a rationale for the conviction that the hazard will pass us by, even if it strikes our neighbors and friends. “This means you” is thus a more difficult message to communicate than “many will die.”

Several newer models of self-protective behavior postulate that different messages are important at different stages of the process. Information about risk magnitude may be most important in making people aware of risks they have never heard of, while information about personal susceptibility may matter more in the transition from awareness to the decision to act. And deciding to act is by no means the same as acting. As advertisers have long known, what makes the difference between procrastination and action isn’t information, but frequent reminders and easy implementation.

In alerting people to risk, social comparison information is often as important as information about the risk itself. Since most people prefer to worry about the same risks as their friends, they are alert and responsive to evidence that a particular hazard is or is not a source of widespread local concern. (The first person in the neighborhood to worry is a coward if the risk turns out to be trivial and a jinx if it turns out to be serious; read Ibsen’s An Enemy of the People.) Messages aimed at building the audience’s sense of efficacy may also be effective in motivating action about a risk. Fatalism makes apathy rational; if you are convinced that nothing you can do will help, why bother?

Emotions are also important. Concern, worry, fear, and the like can be products of the cognitive dimensions of risk, but they also exert an independent influence. Even so, many risk communicators forgo appeals to emotion, sometimes out of principled respect for the audience, sometimes out of squeamishness, and sometimes out of a mistaken belief that emotional appeals inevitably backfire. Any appeal can backfire, but the data do not support the widely shared concern that too powerful an emotional appeal, especially a fear appeal, triggers denial and paralysis. Even if the fear-action relationship turns out to be an inverted U-shaped curve (that is, even if excessive fear is immobilizing), virtually all efforts to arouse the apathetic are safely on the left-hand side of the curve, where action is directly proportional to the amount of fear the communicator manages to inspire.

“Stop Worrying!”

In essence, people usually underestimate risks because they would rather believe they are safe, free to live their lives without the twin burdens of feeling vulnerable and feeling obliged to do something about it. Why, then, do people sometimes overestimate risks?

A key can be found in the sorts of hazards whose risk we are most inclined to overestimate. What do nuclear power plants, toxic waste dumps, and pesticide residues – to choose three such hazards at random – have in common? In all three cases, the risk is:

  • Coerced rather than voluntary. (In home gardens, where the risk is voluntary, pesticides are typically overused.)
  • Industrial rather than natural. (Natural deposits of heavy metals generate far less concern than the same materials in a Superfund site.)
  • Dreaded rather than not dreaded. (Cancer, radiation, and waste are all powerful stigmata of dread.)
  • Unknowable rather than knowable. (The experts endlessly debate the risk, and only the experts can detect where it is.)
  • Controlled by others rather than controlled by those at risk. (Think about the difference between driving a car and riding in an airplane.)
  • In the hands of untrustworthy rather than trustworthy sources. (Who believes what they are told by the nuclear, waste, and pesticide industries?)
  • Managed in ways that are unresponsive rather than responsive. (Think about secrecy vs. openness, courtesy vs. discourtesy, compassion vs. contempt.)

Any risk controversy can be divided into a technical dimension and a nontechnical dimension. The key technical factors are how much damage is being done to health and environment, and how much mitigation can be achieved at how much cost. The key nontechnical factors are the ones listed above, and others like them. Consider a proposed incinerator. Assume that the incinerator can be operated at minimal risk to health. Assume also that its developers tried to cram the facility down neighborhood throats with minimal dialogue; they are not asking the neighbors’ permission, not offering to grant them oversight responsibilities, not proposing to share the benefits. While the experts focus on the technical factors and insist that the risk is small, neighbors focus on the nontechnical factors, find the risk huge, and organize to stop the facility. Is this an over-response? It is if we accept only technical criteria as valid measures of risk. But it may be a proportionate response, even a forbearing response, to the nontechnical side of the risk.

The two dimensions have been given various sets of labels: “hazard” versus “outrage,” “technical rationality” versus “cultural rationality,” etc. But it is a mistake to see the two as “objective risk” versus “perceived risk” or as “rational risk response” versus “emotional risk response.” For many disputed hazards, in fact, the data on voluntariness, dread, control, trust, and the like are more solid, more “objective,” than the data on technical risk. These nontechnical factors have been studied by social scientists for decades, and their relationship to risk response is well-established. When a risk manager continues to ignore the nontechnical components of the situation, and continues to be surprised by the public’s “overreaction,” it is worth asking just whose behavior is irrational.

Since people’s response to controversial risks doesn’t arise from technical judgments in the first place, explaining technical information doesn’t help much. When people feel they have been badly treated, they do not want to learn that their technical risk is small; instead, they scour the available documentation for ammunition and ignore the rest. It is still necessary to provide the technical information, of course, but the outcome depends far more on the resolution of nontechnical issues. Communication in a risk controversy thus has two core tasks, not one. The task everyone acknowledges is the need to explain that the technical risk is low. The task that tends to be ignored is the need to acknowledge that the nontechnical risk is high and take action to reduce it. When agencies and companies pursue the first task to the exclusion of the second, they don’t just fail to make the conflict smaller; they make it bigger.

Of course, not all nontechnical issues can be resolved. Part of the public’s response to controversial risks is grounded in characteristics of the hazard itself that are difficult to change – undetectability, say, or dread. Part of the response is grounded in the activities of the mass media and the activist movement, both of which amplify public outrage even though they do not create it. But the part that most deserves attention is the part that results from the behavior of the hazard’s proponents. Risk communication guidelines for the proponents of controversial technologies are embarrassingly commonsensical:

  • Don’t keep secrets. Be honest, forthright, and prompt in providing risk information to affected publics.
  • Listen to people’s concerns. Don’t assume you know what they are, and don’t assume it doesn’t matter what they are.
  • Share power. Set up community advisory boards and other vehicles for giving affected communities increased control over the risk.
  • Don’t expect to be trusted. Instead of trust, aim at accountability; prepare to be challenged, and be able to prove your claims.
  • Acknowledge errors, whether technical or nontechnical. Apologize. Promise to do better. Keep the promise.
  • Treat adversaries with respect (even when they are disrespectful). If they force an improvement, give them the credit rather than claiming it yourself.

Advice like this is not difficult to accept in principle. It is, however, difficult to follow in practice. It runs afoul of organizational norms; sources that do not tolerate much internal debate are unlikely to nurture a more open dialogue with the community. It raises “yes, but” objections, from the fear of liability suits to the contention that it is better to let sleeping dogs lie. Perhaps most important, it provokes the unacknowledged bitterness in the hearts of many proponents, who may ultimately prefer losing the controversy to dealing respectfully with a citizenry they consider irrational, irresponsible, and discourteous.

Copyright © 1994 by Houghton Mifflin
