2011 Guestbook
Comments and Responses

Validating the adjustment reaction: “Of course you’re upset….”

name:Kate
categorized as: Crisis Communication

field:Former university mathematics lecturer
date:December 17, 2011
location:United Kingdom

comment:

I found your article [on adjustment reactions] very interesting, even though I am not involved in a job or situation professionally where I can use this.

Something very bad happened to me two days ago and one friend told me I was over-reacting after 24 hours. So I found it good to read what you had to say – that it’s normal and even helpful to be very anxious initially. Even if I was over-reacting, I needed some comfort, not just to be told that I was over-reacting. Another person whose husband might have known what I needed to know said he was watching the news. I guess she didn’t want me to talk to him.

Using my own resources I found what to do and my sister helped me too. But I am left feeling bad about what people said when I was so upset. I didn’t weep or shout – just talked normally.

I think a weep might have helped but the British have to keep smiling!

So thanks for your article.

peter responds:

My writing is aimed at people whose work requires them to communicate with others about risk. My goal is to help them do it better.

But from time to time I receive an email or a Guestbook comment like this one, reminding me that people sometimes stumble on my website when they’re looking for help with a personal problem, not a professional one. It is both joyful and humbling for me to learn that you found the “Adjustment Reactions” column valuable in a difficult time.

Your comment also demonstrates how important it is for people in crisis to hear that it’s “normal and even helpful” (you put it perfectly) for them to be upset. “Over-reacting” at first is a natural and useful way to respond to a new crisis. Telling people so – validating their adjustment reaction – helps them calm down and cope.

Conversely, telling them that they’re over-reacting and should calm down and cope – that is, criticizing their adjustment reaction – adds to their pain and slows their progress.

That’s true when you’re trying to manage a natural disaster or an infectious disease outbreak. And it’s true when a friend has had some bad news.

Finding a Ph.D. research topic that tests one of my risk communication “principles”

name:Anonymous
field:Public health
date:December 17, 2011
location:Michigan, U.S.

comment:

I am a Ph.D. student in public health and am contacting you concerning research on your risk communication principles. I am responding to the interest you expressed, in your response to the guestbook post by Knut Tønsberg, in having more research done on those principles.

I also work with a public health agency in Michigan, specializing in pandemic influenza risk communication. I am early in my studies (in the first year) and would like to do my dissertation research on your principles of risk communication in terms of public health preparedness and influenza pandemics.

The problem is narrowing down which of your hypotheses to test. Are you interested in providing some advice?

peter responds:

An old friend, a researcher with the New Jersey Department of Environmental Protection, wrote to me several months ago to suggest that I develop a list of testable hypotheses derivable from my work that I’d like to see researchers work on. It’s in my do-this-someday file. If I actually get to it, you’ll find it on my website Guestbook. Whether I get to it or not, I’d be delighted to help you think through your dissertation research.

But not yet! If I help now, the risk is too high that you will wind up studying something I have always wanted to know. There’s a long history of doctoral students doing that for their major professors, which is problematic enough; you shouldn’t even consider doing it for an outsider who’s not even at your institution.

Read my stuff on my website. Read other people’s stuff too. See if you can find an assertion of mine that meets these criteria:

  • It’s discrepant from general practice – that is, practitioners who accepted the assertion would want to manage certain sorts of risk communication tasks differently than they now do.
  • It’s discrepant from much theory or expert advice – that is, not only is there a gap between what I recommend and what most practitioners do; there’s a gap between what I recommend and what most risk communication experts recommend.
  • It’s testable – you can think of a study that could resolve the question, or at least you can imagine there being such a study and would like to try to think of it.
  • It adds to the literature – you can find some relevant literature that you can link your dissertation to, but either you can’t find anything right on the question you want to study or what you’re finding reaches mutually incompatible results and you think you can do a study that resolves/explains the muddle.
  • You think it’s important and interesting – you can imagine spending the better part of a year or two nailing down the answer to this particular question without coming to wonder whatever possessed you to make such a narrow commitment. (In academia, “important” usually means important to theory-building. In the real world, “important” means important to outcomes ordinary people care about. Aim for both if you can.)
  • You think you can persuade your major professor and a committee’s worth of his or her colleagues to agree with you on all the above bullet points.
  • You have reason to believe that your colleagues and managers at the agency you work for won’t think you’re entirely wasting your time.

When you think you’re almost there – when you have an idea (better yet, two or three ideas) that seems to meet these criteria – by all means get back in touch with me and see if I agree. By that time, odds are, you’ll be able to resist me if I start saying “what you really ought to study instead is….”

By the way, you initially framed your question in terms of testing my risk communication “principles.” I put the word in quotes in the title of this Guestbook entry because I don’t think my various contentions deserve that much dignity or that much confidence. I like your phrasing at the end of your question better: figuring out which of my “hypotheses” to test.

Some of the things I routinely say and write are already well tested and truly deserve to be called principles – for example, that panic is rare in natural disasters, or that critics tend to get less outraged when organizations apologize. These really are fundamental principles of risk communication, and you can’t get a Ph.D. for testing them yet again.

The claims I make that are worth testing are the ones that aren’t principles yet, though I may think they should be and may sometimes refer to them as if they were. So far they’re just hypotheses. If you pick one and prove it convincingly, then maybe we can say it’s a principle.

Occupy Wall Street messaging – and Wall Street responsiveness

name:Rusty Cawley
categorized as: Precaution Advocacy, Outrage Management

field:Public relations
date:November 8, 2011
email:rcawley@tamu.edu
location:Texas, U.S.

comment:

What is your analysis of outrage’s role in launching, spreading and sustaining the Occupy Wall Street movement?

The news pundits are ridiculing the movement for its “lack of a coherent message.”

The PR community says it’s time for the movement to articulate its goals, focus its messaging, speak with one voice, agree upon a standard icon, and do a much better job overall of managing its brand. As marketing consultant David Meerman Scott wrote in his blog on October 17, “If you want the world to take you seriously, then you need to take yourselves seriously.”

What’s the perspective of outrage management on this?

peter responds:

Obviously, Occupy Wall Street is an expression of outrage. It’s not (at least not primarily) an effort to arouse outrage. The outrage is already widespread and the movement is simply channeling it. This is a kind of precaution advocacy I haven’t written much about.

Let’s divide precaution advocacy into three tasks:

  • Arousing outrage
  • Expressing outrage
  • Distilling outrage

The toughest precaution advocacy task is arousing outrage: trying to get people outraged who aren’t but you think should be. That’s how activists earn their stripes. They can’t manufacture outrage out of whole cloth, of course; the outrage has to be latent in the situation. Activists are skilled at making latent outrage manifest.

But outrage at Wall Street is already manifest – not just outrage over Wall Street’s role in our economic problems, but also outrage over its unwillingness to acknowledge its role and accept some punishment and some reform. So the task of Occupy Wall Street is a lot easier. It doesn’t need to arouse outrage; it merely needs to provide a means and a venue – hundreds of venues, actually – for its expression.

Distilling the outrage into an economic and political agenda is the third task. David Meerman Scott complains that Occupy Wall Street hasn’t taken on that third task. He’s right. It hasn’t. Maybe the second task is enough.

Arguably the first and third precaution advocacy tasks – arousing outrage and distilling it into an agenda – require leaders, spokespeople, strategies, goals, messages, icons, and the rest. But the second task doesn’t. It’s much more an upwelling. It’s the stuff of movements, not strategists.

Does Occupy Wall Street need more coherent messaging?

I’m not an economist. I don’t understand how and why the U.S. and world economies went awry, and I don’t know what should be done to rescue them in a way that reduces the gap between the rich and everybody else – not by gratuitously punishing the rich but by restoring opportunities for everybody else.

I don’t actually think economists have the answers either. Or rather they seem to have lots of antithetical answers. (Whenever I despair of risk communication becoming a “real” field with consensus principles grounded in solid empirical data, I look at economics – which has been trying a lot longer – and take heart that things could be worse.)

So if economists don’t have the answers, how could we expect the Occupy Wall Street demonstrators to have them? What they have is a pretty coherent sense that things are being run more on behalf of the rich than on behalf of everybody else. They’re not sure exactly what a system that had their interests at heart should do differently, but they’re pretty sure the system they’re inheriting doesn’t have their interests at heart – and in that, I think, they’re mostly right.

Comparisons to the anti-Vietnam and civil rights demos of the 1960s are iffy, and so are comparisons to the Arab Spring. Still, it’s worth noting that the absence of a detailed agenda didn’t keep those predecessor movements from having enormous impact. They got attention and they instigated change.

And in many cases (not all) they did it with consensus decision-making rather than hierarchical structures and defined leaders. I sat through a lot of sixties meetings where decisions got made that way. The rise of social media has given crowd wisdom more credibility (even cachet) than it ever had in the sixties. Social media haven’t just made leaderless groups more appealing to members, prospective members, and sympathizers. They’ve also made leaderless groups easier to create and easier to manage.

The huge advantage of a vague message is of course that nobody is excluded. It’s hard to imagine a movement with a more explicit agenda credibly claiming to speak for the “other 99%.”

I see David Meerman Scott’s point about the value of more specific branding and messaging. Perhaps Occupy Wall Street can have its cake and eat it too: pay more attention to branding and messaging without chaining itself to a concrete agenda. The “other 99%” claim is a sizable step in that direction, I think – a slogan that’s catchy without excluding anybody (excluding only one in a hundred, anyway). And it was fashioned and chosen without leaders. I’ll bet some of the larger “occupations” have already set up branding and messaging committees, whose work products are doubtless being endlessly debated and then, perhaps, ultimately adopted.

Your comments on Scott’s blog make sense to me. Scott analogizes Occupy Wall Street to a tantrum, and you point out that tantrums can get attention and instigate change.

Moreover, Occupy Wall Street resembles a tantrum a lot less than I’d have predicted, given how justifiably outraged many participants are. Most days, in most places, it has seemed pretty laid back, earnest, and friendly. Its worst moments are occasional riots (tantrums on steroids), but so far those moments have been rare.

I think the main focus of Occupy Wall Street isn’t really on economic problem-solving: how to get us (99% of us) out of the doldrums. Its focus is on economic (and political) disenfranchisement – on the “deficits of democracy” and “deficits of accountability” that have worsened in the last several decades and crystallized in the last several years. (Andrew Sullivan used these phrases in an October 31 Newsweek article.)

The occupiers aren’t claiming they know how to address these deficits. In many cases they’re not even claiming they’ve tried yet. They’re demanding that the 1% start trying. As one occupier’s sign put it: “Dear 1%, We Fell Asleep for a While. Just Woke Up. Sincerely, the 99%.”

That’s not enough for a strategy aimed at arousing outrage, or for an agenda aimed at distilling outrage into reform. But I think it’s enough for a movement that simply expresses the outrage.

How should Wall Street respond?

A feeling of disenfranchisement, of not being listened to, is often the core of outrage.

That was certainly true of the sixties Vietnam War demonstrations. We now know that President Lyndon Johnson was paying assiduous attention to the antiwar movement, but he pretended he wasn’t for fear that the movement would escalate if it thought it was having an impact. Johnson hoped to discourage the demonstrators into giving up. His pretense worked – the demonstrators thought he wasn’t listening – but his understanding of outrage was deficient. The antiwar movement escalated and its violent wing flourished because it (wrongly) thought Johnson wasn’t paying attention.

I suspect that’s happening now too. I believe that Wall Street is paying a lot closer attention to Occupy Wall Street than it pretends. And I believe that it would be wiser for Wall Street to pay attention more visibly. (Some government leaders are also pretending not to be paying attention, while others are letting their attention show.)

No financial institution (or politician or government agency) has sought my advice on the outrage management implications of Occupy Wall Street. What would I say if they asked? Pay attention more visibly.

More specifically:

  1. Pay attention to ordinary folks, especially young middle-class folks who went to college, accumulating loans in the process, and now have no jobs and lousy prospects and are reduced to living with their parents. This is by no means the only cohort that’s suffering right now, nor is it the only cohort that’s involved in (or sympathetic to) Occupy Wall Street. But it seems to be the movement’s dominant group, whose feelings of betrayal and disenfranchisement are at its core. To some extent, this cohort is also allied with and speaking for the working class and the poor, who have much less opportunity and upward mobility – less hope – than before the Global Financial Crisis.
  2. Acknowledge that you screwed up. Yes, it wasn’t just you. The government screwed up too, Republicans and Democrats both. But given the excesses of lobbying that have accompanied the financial debacle, it’s not crazy that Occupy Wall Street chooses to blame the ventriloquist more than the dummy. You wanted Glass-Steagall repealed, for example. Now you want Dodd-Frank gutted. You seem to get most of what you want, at least so far – so if it has worked out badly, as it has, blaming you feels right. And your failure to accept the blame makes us want to blame you all the more.
  3. Acknowledge that others paid the price for your screw-up, and for the most part you didn’t. Acknowledge that some punishment is overdue.
  4. Acknowledge that it wasn’t just a screw-up. Your values got screwed up too. It doesn’t take an Ayn Rand acolyte to understand that selfishness can be profoundly prosocial when it motivates markets and expands the pie for everyone. But it also doesn’t take a communist to understand that when selfishness goes too far it voids the social contract … and then we’re all in deep trouble. When the rich earn 30 or 50 times as much as average and you can see the ladder up, that stimulates the work ethic. When the rich earn 3000 or 5000 times as much as average and there’s no ladder up, that undermines the work ethic. You forgot about moderation.
  5. Acknowledge that you don’t actually know how to get back on track. You may be the smartest 1% as well as the richest 1%, but that just means you were smart enough to invent financial instruments that nobody including you could understand. The sarcastic title of David Halberstam’s 1972 book on the origins of the Vietnam War was The Best and the Brightest. That’s you.
  6. Don’t let these acknowledgments take precedence over listening. Before the 99% can take in your long-overdue apologies, they need to vent – a lot. Occupy Wall Street is an excellent venting opportunity. But venting in a closet doesn’t reduce people’s outrage much. As I routinely tell my clients, outraged stakeholders need to vent in front of each other, in front of journalists, and in front of you. The first two are working just fine. But so far you don’t seem to be listening. That’s a huge, huge error.

For more recommendations along these lines, see my March 2010 column on “Hostile Meetings: When Opponents Want to Talk.”

Let me clarify what I’m getting at here.

Whenever lots of people are outraged at an institution, ultimately that institution needs to change. Substantive change – reform – is a prerequisite for the orderly resolution of outrage-arousing issues. (Revolution is a disorderly alternative.) Listening to people, working to reduce their outrage, and engaging them on the issues are not a replacement for reform.

But if you don’t do those three things – listen to people, work to reduce their outrage, and engage them on the issues – reform tends to go awry. Outraged people who feel like they’re not getting heard are prone to temper tantrums. They’re likely to demand too much change, or the wrong change. And they’re likely to fail to notice or fail to appreciate the change they have provoked already. The pendulum could swing too far, from insufficiently regulated capitalism to excessively and unwisely regulated capitalism … or even (if some of the Wall Street occupiers have their way) an end to capitalism.

The role of listening, outrage management, and engagement isn’t to replace substantive changes. Their role is to make sure that the substantive changes provoked by outrage actually reduce the outrage that provoked the changes.

Listening, outrage management, and engagement function like a catalyst in a chemical reaction. They cause a given amount of reform to achieve more outrage reduction. So you don’t have to reform as much – and the pendulum is less likely to swing too far.

(This explains why revolutionaries despise outrage management. If your goal is sweeping change, not reform, you need to keep people’s outrage high.)

As I typically say to my outrage management clients:

I can’t tell you which fights you should let yourself lose and which fights you have to win – that is, which substantive changes to make and which to resist. That’s your call.

But whatever fights you decide to lose, whatever substantive changes you decide to make, I can help you maximize their impact on outrage. Never, never lose a fight secretly. Never sneak in a reform when nobody is looking. Never claim that you were going to make that change anyway, that it’s an example of your wisdom and beneficence, not a victory for your critics.

Here’s how to maximize the impact of your reforms on your stakeholders’ outrage.

First you must listen while people vent. The other strategies of outrage management aren’t likely to accomplish much until outraged people have done some venting.

Then you should deploy the full range of outrage management strategies: echoing your critics’ grievances and validating those that have merit, acknowledging your prior misbehaviors and current problems, sharing control and credit, etc. Substantive engagement – that is, negotiation – isn’t likely to accomplish much until everybody’s fairly calm. Highly outraged people can’t negotiate; they have temper tantrums. Highly outraged people don’t want a win-win; they simply want to punish you. So outrage management is a prerequisite to constructive engagement.

Then, when people’s outrage is low enough that they’re seeking a good outcome, not vengeance, you can engage on the issues. And you must. People don’t want to see you making substantive reforms of your own choosing. They want to see you making reforms they pressured you to make – even if they’re the same reforms.

Then you reform.

So far, Wall Street is getting this mostly wrong. As the pseudonymous “Schumpeter” put it in an excellent October 29, 2011 column in The Economist, “Wall Street’s lawyers won an internal power struggle with its spin doctors, convincing the bosses that they would end up sued or in jail if their public statements were anything other than bloodless boilerplate.” I see outrage management as very different from spin doctoring, but I certainly agree that Wall Street has done neither.

“Schumpeter” continues:

So the big banks’ apologies for their role in messing up the world economy have been grudging and late, and Joe Taxpayer has yet to hear a heartfelt “thank you” for bailing them out. Summoned before Congress, Wall Street bosses have made lawyerised statements that make them sound arrogant, greedy and unrepentant. A grand gesture or two – such as slashing bonuses or giving away a tonne of money – might have gone some way towards restoring public faith in the industry. But we will never know because it didn’t happen.

On the contrary, Wall Street appears to have set its many brilliant minds the task of infuriating the public still further, by repossessing homes of serving soldiers, introducing fees for using debit cards and so on.

Wall Street’s narrow response to the outraged occupiers has been as tin-eared as its broader response to the outraged public. There has been a lot of what I call “crap rebuttal” – cherry-picking false allegations and unsound arguments to critique, instead of validating the valid grievances. Even worse, there has been a lot of explicit contempt. In its costumes and mores, Occupy Wall Street continues to offer plenty of easy targets for mockery, and many commentators (not just Wall Street commentators) have contentedly given in to the temptation.

To its credit, Wall Street has mostly avoided angry confrontation. Having sat through endless debates with clients over the years about why it would be a mistake to call in the cops and get all those demonstrators arrested for trespassing, I can imagine the internal debates that must be going on in targeted financial institutions. So far, it seems, the smart side has usually won the debates.

That’s something. Wall Street hasn’t declared war on Occupy Wall Street. Now it needs to start listening, managing outrage, and engaging on the issues.

Layoffs as a risk communication challenge

name:Jane
categorized as: Crisis Communication, Outrage Management

field:Communication consultant
date:October 27, 2011
location:California, U.S.

comment:

I am a big fan of your work on risk communication and have been following it for years. I am currently researching best practice for communicating job layoffs, and wondered if you would apply your models to communicating bad news about jobs.

For example, would this comment hold true in a job crisis?

… talking candidly about worst case scenarios is likelier to reassure people than to frighten them (far less panic them). More often than not, they are already pondering what might go wrong, imagining the worst and wishing there were some way to get it out onto the table and get the facts.

I would think that employees would be expecting layoffs. Speculating about when and who is a big part of the rumor mill in an organization, and people would rather know than constantly live in fear about what might happen. So I think not telling them is very unproductive, and in our current economic climate irresponsible.

Most companies tend to keep their employees in the dark. Publicly listed companies have of course a regulatory framework to consider, but your risk communication model is an interesting one to contemplate.

peter responds:

I agree with you that most layoffs, especially big ones, don’t come as a total surprise to employees. They mostly knew or sensed that a downsizing was on the way, and the event itself is like the other shoe dropping. Would it be kinder – and better business – for employers to be candid? I think so.

In the current economic climate, a very high percentage of employed people fear for their jobs. This is a huge drain on morale and productivity – and a significant threat to workplace safety as well. It also inhibits consumption and damages the economy, as millions pull back on spending in anticipation of possible joblessness to come. (The widespread expectation of joblessness is thus a self-fulfilling prophecy.) And of course it’s hurting people’s quality of life. Waiting to see if you’ll keep your job is like waiting for a doctor’s test results about a deadly disease you might have; it’s profoundly unpleasant.

In order to analyze who benefits from candor about layoffs and who (if anyone) loses, let’s segment the workforce according to two variables – whether they think layoffs are coming (yes or no) and whether layoffs are actually coming (yes or no). Here are the four groups:

  • Yes/No – employees who are mistakenly worried about getting laid off. Their employer is actually not planning layoffs, but hasn’t said so. These employees are experiencing completely unnecessary anxiety, with all its impacts on morale, productivity, safety, consumption, and quality of life. If it were the norm for companies to level about their labor plans, these employees would enjoy a reprieve. Learning that your company won’t be doing any layoffs for at least the next six months (for example) isn’t the same thing as getting a guaranteed job for life. But assuming employees could trust the schedule, they’d have grounds at least for short-term confidence.
  • Yes/Yes – employees who are rightly worried about getting laid off. Their employer is actually planning layoffs – but secretly, so these employees don’t know if their anxiety is justified or not. They’re waiting for the other shoe to drop, not sure how seriously to take their fears. If their employer were to tell them that layoffs might well be imminent, obviously they’d be out of the frying pan and into the fire. The risk would be knowable, explicit, confirmed. Instead of a nagging malaise they’d face a genuine crisis. They’d start going through their adjustment reaction, getting used to the idea that they might well be job-hunting in a few months.
  • No/Yes – employees who are mistakenly unworried about getting laid off. Their employer is actually planning layoffs, but they haven’t been told and don’t suspect (or have suppressed their suspicions). They’re fine for now, though perhaps burdened by the stress of denial and bewilderment at their fellow employees’ job anxiety. But when the layoffs come, they will be taken by surprise, logistically and emotionally. They won’t have planned for the contingency, and they won’t have gone through the inevitable and essential adjustment reaction. Their outrage at the layoffs will be higher than if they’d been properly warned.
  • No/No – employees who are rightly unworried about getting laid off. They don’t expect layoffs, and their employer doesn’t plan layoffs. If their employer bothered to say so explicitly, it would simply confirm what they already believe. It wouldn’t do them any particular good, but it certainly wouldn’t do them any harm.

Bottom line of this audience segmentation: Most employees are better off knowing whether layoffs are in the offing or not. No employees are worse off knowing.

I understand that candor may have some downsides for an employer. Announcing likely future layoffs might damage a company’s share price, for example, by suggesting that the company isn’t doing well (independent of whether the layoffs themselves will push the price down or up). And perhaps employees who knew that layoffs were coming rather than just fearing that layoffs might be coming would perform less well; they might focus on job-hunting, or take sick days, or even engage in a little pilfering or sabotage. There are some ways in which it might be better for the employer to let the layoffs come as a shock: Yesterday you were comfortably (or anxiously) ensconced in the job you’ve held for years; today you’re cleaning out your locker or your desk and being escorted from the premises by a security guard.

But at least in risk communication terms, candor about the prospect of layoffs is good strategy.

All three paradigms of risk communication

As you probably know already, I distinguish a risk’s “hazard” (how much harm it’s likely to do) from its “outrage” (how upset it’s likely to make people). Based on this distinction, I categorize risk communication into three tasks:

  • When hazard is high and outrage is low, the task is “precaution advocacy” – alerting insufficiently upset people to serious risks. “Watch out!”
  • When hazard is low and outrage is high, the task is “outrage management” – reassuring excessively upset people about small risks. “Calm down.”
  • When hazard is high and outrage is also high, the task is “crisis communication” – helping appropriately upset people cope with serious risks. “We’ll get through this together.”

Layoffs match my crisis communication paradigm perfectly. The risk is both high-hazard and high-outrage; people are rightly upset about a genuinely serious risk. Losing your job is a crisis. Knowing that you’re going to lose your job is a crisis. Knowing that you’re likely to lose your job is a crisis. Suspecting that you’re likely to lose your job is a crisis.

For that matter, knowing or suspecting that others in your organization are likely to lose their jobs is also a crisis, even if you are confident of keeping yours. Layoff survivors suffer from survivor guilt; they must endure the disruption of the layoff itself and the pressure once the layoff is over to do more with less; they lose friends and acquaintances; they have reason to worry about the possibility of another layoff down the road.

The two goals in crisis communication are to help people bear their feelings, especially their anger, fear, and misery – that is, their outrage – and to help them cope effectively with the serious hazard they face.

The cardinal sin in crisis communication is to try to reduce people’s justified outrage. That’s what you do in outrage management, the risk communication paradigm for situations where people are excessively upset about a small risk. But when people are appropriately upset about a big risk, getting them less upset isn’t a service; they need their outrage to motivate them to prepare and to cope. And anyway, getting people less upset isn’t usually achievable in a crisis. If you try, the odds are good that you’ll just make them feel abandoned, isolated, and mistrustful, worsening an already upsetting situation. You can help guide people’s outrage in a crisis, and help them cope with their outrage – but that is not the same as trying to reduce their outrage.

There is also a precaution advocacy element in talking about possible future layoffs. Employees who imagine that their job is more secure than it actually is are in a high-hazard, low-outrage situation. They need and deserve to be warned. Failing to warn them will increase both their eventual hazard (because they won’t have prepared logistically – by cutting expenses, tuning up their résumés, etc.) and their eventual outrage (because they won’t have prepared emotionally; they’ll have been blindsided by a management that chose not to give them a heads-up).

And as always when the crisis is your doing, layoffs have an outrage management component too. My outrage at the fact that I’m getting laid off (or at risk of getting laid off) is justified. But my outrage at you for laying me off may or may not be justified. It may be excessive, in which case it’s legitimate for you to try to manage my outrage at you downward … but not my outrage at the fates for my job loss.

That’s part of why it’s sensible to warn people about the prospect of layoffs. The warning mitigates employees’ outrage at you for blindsiding them, whether or not it mitigates their outrage at the fates for their joblessness.

The “job creator” boast

Because crisis communication is the most applicable risk communication paradigm for talking about layoffs, I plan to conclude this response with a list of some of the crisis communication principles that apply. But first I want to address just one way in which many employers have unwisely set themselves up as targets for employee outrage (and broad societal outrage) with regard to layoffs: boasting that they are “job creators.”

It is certainly true that companies create jobs. When a company grows, it needs more people to accomplish its objectives. So it hires – and that’s good for the people hired, good for the community, and good for the country.

I have no objection to a company pointing out that its new or expanded factory will mean more jobs. Increased employment is typically among the upsides of industrial growth, just as increased pollution is typically among the downsides. A company seeking permission to grow is obliged to acknowledge the downsides; it is certainly entitled to mention the upsides too.

But mentioning job creation isn’t the same as boasting about it. The truth that your company employs many people needs to be accompanied by another truth, a perfectly obvious truth that employers rarely mention: Your company employs as few people as it can and still get the job done.

Payroll is an expense on a company’s books, not a source of revenue. If the company gains more customers, makes more widgets, and earns more profits, it will usually need more employees – so additional employees are often a sign that the company is doing well. But they’re not a reason it is doing well. If the company can find a way to make its current workforce more productive so it won’t need to hire more people, that’s how it can do better yet.

Similarly, layoffs are often a sign that a company is doing badly; it has less work to get done, and needs fewer people to do it. But if the company is laying people off because it has found a way to increase productivity, to accomplish more with fewer people, that’s good news for the company. Obviously it isn’t good news for the employees who are laid off – at least not in the short term. In the long term, nobody’s job is safe at companies that maintain a larger payroll than they actually need; eventually such companies will be overtaken and destroyed by leaner, meaner competitors.

I remember riding in an automatic hotel elevator in Budapest before Hungary started trying to convert from communism to capitalism. There was a full-time employee stationed in the lobby who pressed the button that called the elevator. There was another full-time employee stationed in the elevator who pushed the button for the floor I wanted. These jobs were neither sustainable nor soul-satisfying. To the extent that capitalism is working in Hungary, these two jobs should no longer exist.

We all understand this in our own lives. Suppose you have a plumbing problem. If your plumber arrives with a team of three unnecessary assistants, and the four of them sit around chatting about sports – with all four hourly rates ending up on your bill – you’re likely to look for a different plumber next time. You don’t want to pay for more plumber time than you actually need, even if you can “afford” to pay extra and even if doing so would help create jobs.

Huge multinational corporations feel exactly the same way.

But they lie about it. Especially since the start of the Global Financial Crisis and the rise of the U.S. unemployment rate, more and more companies have allowed their advertising and public relations to imply that they’re “job creators” and proud of it – that their goal is to keep as many people employed as possible. And then they wonder why employees (and many non-employees) get outraged when they lay off people even though they’re making plenty of money and thus could afford a fair amount of featherbedding.

I’m no economist. Maybe there are good ways to rejigger the incentives, to create a society where employers benefit from having more employees. Governments do that when they give tax credits for hiring new people; I’m not competent to judge if/when this is smart and if/when it distorts the economy and undermines real growth (including real job growth). What I know is that, except in rare circumstances, companies maximize profitability – return on investment. And, except in rare circumstances, that means trying to increase the productivity of your workforce so you can get more done with fewer people. Job growth is not normally a corporate goal, though it is often a byproduct of profit growth, which is the fundamental corporate goal.

My long-term reason for wanting companies to come out of the closet about job creation is my sense that the U.S. public and the U.S. media no longer understand capitalism. If you think capitalism needs to be reformed and reregulated, I’m with you. If you think it needs to be replaced with a better system, I’m not with you – but have at it; that’s certainly a question worth debating. Misunderstanding capitalism is unlikely to help us reform it or replace it. People need to get it that no company seeks a larger payroll than it needs. Then they can start thinking more clearly about what policies are likeliest to encourage job growth.

My short-term reason for wanting companies to come out of the closet about job creation is simply to avoid unnecessary and unproductive outrage at employers – misplaced outrage that is harmful to employees and employers both.

I’m not necessarily urging employers to specify that this particular layoff is “because we figured out how to make people more productive so we need fewer of them,” or “because we automated a lot of jobs,” or “because we moved the jobs to Asia where labor is cheaper,” or “because our business is going to hell and we don’t have as much work to be done.” It may or may not be wise to explain in detail the business rationale for a particular layoff on the day of the layoff.

On the whole, I think the day of a big layoff isn’t the best day to try to educate people about capitalism. For one thing, the focus that day should be on the employees who are getting laid off, not the company that is laying them off. Any explanation offered on that day is almost foreordained to be heard as an excuse. “It’s not really our fault. It’s the nature of capitalism.” Or worse yet: “This hurts us more than it hurts you.”

But over time, employers need their employees to realize that they will downsize whenever it is good business to downsize. Employees may not like capitalism much when they understand it, but they’ll like it even less if we let them misunderstand it and then suddenly discover what it’s really like. Long before the layoffs come, therefore, companies should make educating people (especially employees) about capitalism part of their corporate communications mission. At the very least, companies should avoid miseducating people about capitalism – which is precisely what boasting about job creation does.

Layoffs as crisis communication

Below are a few of the principles of crisis communication that are obviously relevant to talking about layoffs.

1. Tell people what to expect.

This is the crisis communication principle you raised in your original comment and I discussed at the start of my response. It is better to tell people what to expect in the way of layoffs than to blindside them – or to leave them at the mercy of their own imaginations and the rumor mill.

The term of art for telling people what to expect is “anticipatory guidance.” It’s basic to good crisis communication.

2. Don’t over-reassure.

Crisis communicators should always try to give people as accurate an understanding as possible of what’s likely to happen – and then help them bear it and cope with it. Of course it’s useful to offer people as much reassurance as you legitimately can. But over-reassurance tends to backfire, either immediately when people smell a rat or belatedly when they learn the truth. An empty claim that you’ll do “everything possible” to avoid layoffs, for example, isn’t actually reassuring, nor is it true. It isn’t reassuring chiefly because people sense that it isn’t true.

Even legitimately good news is experienced as more reassuring if it’s subordinated to a candid acknowledgment of the bad news. A guarantee that there will be no layoffs (or no more layoffs) until after a specified date, for example, is legitimate reassurance. Here’s a good way to frame it: “Even though we can guarantee no layoffs until after Christmas, we do expect to lay off roughly 200 to 300 people in January, most of them in the X, Y, and Z departments. These layoffs are a result of our decision to merge X and Y, and to move most of Z’s responsibilities to overseas contractors. January is going to be a tough month for the people affected. For a list of the relevant outplacement and termination policies, see….”

It’s better to err on the alarming side than to get caught over-reassuring. If you predict 200–300 layoffs and end up laying off only 100 people, that’s good news. If you predict 200–300 and end up laying off 400, that’s not just bad news; it’ll be widely seen as evidence of bad faith.

3. Acknowledge uncertainty.

One of the reasons companies routinely give for not forewarning employees about layoffs is uncertainty; their layoff plans are unsure, they say, until the last minute. Sometimes this is simply a lie. But even if it’s true, refusing to speak because you’re unsure is bad crisis communication. So is sounding overconfident when you’re unsure.

The solution is easy to explain but hard to implement: Let yourself sound unsure. The goal is to replicate in your employees’ minds the level of confidence in your mind. Sometimes that means saying you’re all-but-certain about your layoff plans, and would be surprised if circumstances forced (or enabled) you to change your mind. Sometimes it means saying that your early layoff plans are closer to wild-ass guesstimates than to firm predictions.

People don’t like uncertainty. They’d rather you knew for sure what was going to happen, and they’ll put pressure on you to imply that you do. But if you give in to the pressure, trust will decline, especially once you turn out to be wrong. They may even say you should shut up until you have something firm to report – but if you do that, trust will decline as well. So stick to your guns. Acknowledge how frustrating it must be for employees to hear such iffy predictions about such an important issue. Mention (but don’t overstress) that it’s frustrating for you too; you wish you knew more firmly where your business is going. But you don’t – and you’re not going to pretend you do.

Acknowledging uncertainty doesn’t have to mean sounding like you’re a bumbling, uncertain leader. What’s called for in a crisis is uncertain content delivered in a confident tone. You’re used to managing uncertain situations; it goes with the job and it doesn’t unduly fluster you. The worst option is the opposite combination: confident content delivered in an uncertain tone.

4. Tolerate early over-reactions.

When people become newly aware of a serious risk, they may temporarily over-react. They pause, become hyper-vigilant, imagine how they would feel if the worst happened, imagine how they would cope if the worst happened, and begin taking some precautions even though nothing has happened yet. Psychologists call this an “adjustment reaction.”

Adjustment reactions are natural, almost inevitable. More importantly, they’re useful. They help people prepare logistically and emotionally for the crisis they may soon be facing. One key to effective crisis communication is to anticipate the adjustment reaction, even to encourage it – so you can help guide people through it before the crisis actually strikes.

All this is true of layoffs. The transition from feeling pretty confident that my job is safe to realizing that it isn’t necessarily safe at all is bound to be rocky. Many employers refrain from talking about prospective layoffs because they’d rather not deal with employees’ adjustment reactions. Many employers who tell the truth about prospective layoffs ignore, deprecate, or even ridicule employees’ adjustment reactions. Smart employers help employees get past their adjustment reactions, so they’re that much readier to address the layoffs themselves.

5. Establish your own humanity.

This is a toughie. My normal advice to crisis communicators is to make sure their commitment to “professionalism” doesn’t keep them from letting their humanity show. If the hurricane or the pandemic feels frightening to them too, for example, they should say so – and become a role model for coping with their fear instead of trying to look fearless.

It’s different if you’re a senior official who’s helping to decide which employees are getting the axe, and whose own job looks pretty damn safe. Nobody is responsible for hurricanes and pandemics – but if you’re the perpetrator (or even just the communicator) of your company’s layoffs, your claims to share the pain are very likely to boomerang.

But the need to let your humanity show is greater, not less, when you’re partly responsible for the crisis you’re trying to communicate. Personalize the deed. Instead of saying “The company has decided…” try grasping the nettle: “John, Cynthia, and I got the job of cutting our department’s payroll by 15 percent. Today’s bad news, I’m sorry to say, is our handiwork.” Express wishes: “I wish I could have found a way to meet the budget goals I was given without layoffs.” But don’t get so carried away that you end up sounding like you think the layoffs are mostly about you. In fact, it may help to say explicitly that the layoffs are not mostly about you: “I can’t know exactly how anybody else feels. But I do know that this news is bound to be a lot harder on the people who have to start job-hunting than on those of us who are left behind.”

Note: At the same time as she sent her comment to me, Jane posted it (framed a little differently) on a LinkedIn Q&A page. She has four interesting and thoughtful responses so far. I have resisted the temptation to “borrow” good points from these responses to integrate into my response. But there’s one spectacular suggestion I can’t resist quoting: “Honor those people who are leaving, since they are in part responsible for building the company.”

Financial risk communication, full disclosure, and self-fulfilling prophecies

name:Frederic
field:Consultant, financial audit and internal control
date:October 18, 2011
location:France

comment:

I would like to know if you have written some articles on the topic of the effect of risk communication when potential “self-fulfilling prophecies” are possible.

When such side effects are possible, what would you recommend (i.e. in the case of the banks, where any bad news can lead to people running to the bank, bond rate increases, etc.)?

peter responds:

I haven’t written anything specifically on the self-fulfilling prophecy problem in risk communication. So let me take a crack at it here.

Note that the last half of this answer adds up to a big “But…” that changes the meaning of the first half. Please don’t stop halfway through and miss the “But….”

Avoiding self-fulfilling prophecies is sometimes a legitimate reason to withhold information

It is certainly true that communications can have impacts. When you tell people X, they learn X, and at least sometimes they act on what they learned. If what you tell them concerns the future, their actions may affect that future. It’s like the observer effect in measurement: What you tell me about X can affect X itself.

The effect can be in either direction. If you say a particular bank may fail, and a lot of people withdraw their money in response, the bank will be that much likelier to fail. If you say the coming flu season may be severe, and a lot of people get flu shots in response, the flu season will be that much less severe. So sometimes the prophecy is self-fulfilling, and sometimes it’s self-negating.
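The distinction comes down to the sign of the feedback the warning creates: a self-fulfilling prophecy pushes the outcome toward itself, a self-negating one pushes the outcome away. A toy sketch of the two bank-and-flu examples above, with invented numbers purely for illustration:

```python
# Toy model of warning feedback. The 0.3 adjustment and 0.4 base risk
# are invented for illustration; only the direction of the feedback matters.

def bank_failure_risk(base_risk: float, warned: bool) -> float:
    """A warning triggers withdrawals, which RAISE the failure risk (self-fulfilling)."""
    return min(1.0, base_risk + (0.3 if warned else 0.0))

def severe_flu_risk(base_risk: float, warned: bool) -> float:
    """A warning prompts flu shots, which LOWER the risk of a severe season (self-negating)."""
    return max(0.0, base_risk - (0.3 if warned else 0.0))

# Self-fulfilling: warning makes the warned-about outcome more likely.
assert bank_failure_risk(0.4, warned=True) > bank_failure_risk(0.4, warned=False)
# Self-negating: warning makes the warned-about outcome less likely.
assert severe_flu_risk(0.4, warned=True) < severe_flu_risk(0.4, warned=False)
```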

These effects are problems only if they’re the opposite of the effect the communicator intended. When a politician predicts victory, for example, the goal of the prediction is probably to motivate neutrals to join the bandwagon and potential funders to contribute to what looks like the winning side. It’s meant to be self-fulfilling. If it turns out self-negating instead – for example, if supporters decide they needn’t bother to vote – then the politician (or the politician’s communication consultant) has mishandled the situation.

Most warnings are meant to be self-negating. That is, the goal of warning people about a risk is usually to motivate precautions that will reduce the probability or magnitude of the risk. So a self-fulfilling warning – a warning that makes the situation worse – is by definition a mistake.

Here’s where the analysis gets complicated.

Sometimes a warning makes the situation worse in some ways but better in others. When an outside analyst warns that a bank is doing poorly, some depositors may withdraw their money – a self-fulfilling effect the analyst presumably wasn’t seeking. But the bank’s management (or its regulators) may also be forced to make much-needed changes or much-needed explanations to deal with the problem; or other banks may heed the warning and take action to avoid a similar crisis down the road.

And sometimes a warning makes the situation worse for some people but better for others. Assume for example that I own stock in the XYZ Corporation, and you have reason to think XYZ may be in trouble. I’m best off if you whisper your warning in my ear, so I can quietly sell my stock, offloading the risk onto someone who’s not in on the secret. But if you announce your warning publicly, the share price will probably go down, worsening XYZ’s plight – and mine as well if I didn’t sell fast enough. In terms of share price, then, shareholders (including me) are better off if nobody is warned than if everybody is warned. On the other hand, prospective shareholders (including the person who might buy my stock) are better off warned.

And sometimes a warning makes the situation worse in the short term but better in the long term. XYZ shareholders won’t be happy if the analyst’s warning makes the share price tank. But if the warning provokes XYZ management to get its act together – or propels new and more competent management onto the scene – shareholders may end up in better shape than if the company were allowed to sink quietly into oblivion.

For obvious reasons, these kinds of warnings are usually subject to detailed government regulation. Certain facts about a financial institution or publicly traded company have to be made public. Certain facts have to be kept confidential. Few if any facts can be shared selectively.

What about information you’re neither required nor forbidden to make public?

Suppose the president of a publicly owned corporation believes the company is headed for hard times, perhaps even bankruptcy. The president has a plan to avert the disaster, but the plan isn’t guaranteed to succeed. Assume no legal obligation to announce or suppress this information; it’s up to the president. Announcing the information would obviously constitute a self-fulfilling prophecy; shareholders would sell, creditors would call in their loans, customers would find a more reliable supplier, suppliers would refuse to sell on credit, etc.

Should the president be honest at the expense of the company’s survival? Does the president have a moral obligation to volunteer that the company is in trouble? Does the president even have a moral right to volunteer that the company is in trouble, thereby reducing the value of other people’s stock and endangering other people’s jobs? What should the president say if asked a specific question, the answer to which will be either a lie or a self-fulfilling prophecy? For example, what is the president supposed to say at the annual shareholders’ meeting when asked about the company’s financial prospects, or at a Q&A with employees when asked how much company stock should be in employees’ retirement portfolios?

These are tough questions. I think there are three guidelines:

  • Obey the law. Say what you must. Don’t say what you mustn’t.
  • Insofar as the first guideline permits, protect the interests of your company. That’s what you’re hired to do. (And that’s why the law rightly requires you to reveal certain information even though doing so may harm the interests of your company.)
  • Insofar as the first and second guidelines permit, tell the truth.

The fact that the third guideline comes third, not second, is a concession that you’re not expected to put honesty ahead of the interests of your company (unless the law requires you to do so). A warning that constitutes a self-fulfilling prophecy (e.g. “my company is in trouble”) is exactly the sort of information you’re not expected to share unless you’re legally required to do so.

Avoiding a reputation for dishonesty is often a good reason to take the risk of self-fulfilling prophecies

Here comes the big “But….”

Honesty is in the long-term interests of every financial institution and every publicly owned company (and everyone else too). A reputation for honesty is a valuable corporate asset. And a reputation for dishonesty is a huge corporate liability.

My clients often get this wrong, judging that suppressing information is in their interests when it really isn’t. My quarrel with them isn’t so much that they put their companies’ welfare ahead of their stakeholders’ welfare. It’s that they misjudge how important their stakeholders’ assessment of their integrity is to their companies’ welfare.

In a January 2011 column on “Full Disclosure,” I addressed the question of what to do with accurate information that you are convinced will mislead people, leading them to make worse rather than better risk decisions. For example, if your chemical factory or your polio vaccine poses tiny-but-scary risks, is it okay not to mention those risks on the grounds that people are bound to overreact to them if you do? I discussed eight reasons why it’s better – not just more ethical but also more effective – to tell the whole truth.

We’re talking now about a different rationale for withholding information – because the information may constitute a self-fulfilling prophecy. But some of the same arguments for full disclosure apply here as well.

Particularly important, I think, are the column’s third argument, that people feel betrayed when they belatedly find out facts they think you should have told them earlier, and its seventh argument, that even if one-sided risk communication might work for a single decision, it won’t work for an ongoing relationship.

The column estimates that negative information about a company is roughly 20 times as damaging when the company withholds the information and someone else reveals it as when the company “blows the whistle” on itself. It follows that secrecy is justifiable (empirically if not ethically) only if the company can sustain a better-than-95% success rate at keeping secrets. The key word here is “sustain.” Even if a company successfully suppresses several pieces of reputation-damaging information, once one such secret gets out the company’s reputation for dishonesty becomes a long-term drag on profitability … a bigger drag, in most cases, than the information itself.
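The better-than-95% figure follows from the 20-to-1 damage ratio by simple expected-value arithmetic: withholding beats self-disclosure only if (1 − p) × 20 < 1, i.e. only if the probability p of keeping the secret exceeds 0.95. A minimal sketch of that calculation, treating the column’s rough 20:1 ratio as given:

```python
# Expected damage of secrecy vs. self-disclosure, using the column's
# rough 20:1 damage estimate. Units are arbitrary.
DISCLOSURE_DAMAGE = 1.0  # damage if the company reveals the bad news itself
LEAK_DAMAGE = 20.0       # damage if someone else reveals it (column's estimate)

def expected_secrecy_damage(p_keep_secret: float) -> float:
    """Expected damage of withholding, given the chance the secret holds."""
    return (1.0 - p_keep_secret) * LEAK_DAMAGE

# Break-even point: secrecy beats disclosure only when (1 - p) * 20 < 1,
# i.e. when p > 0.95 -- the "better-than-95% success rate" in the text.
break_even = 1.0 - DISCLOSURE_DAMAGE / LEAK_DAMAGE
print(break_even)  # 0.95
```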

Let’s go back to the corporate president who sees tough times ahead unless the company manages to turn things around. I grant that the president’s candor (if s/he decides to be candid) may constitute a self-fulfilling prophecy that could worsen the company’s prognosis. But mightn’t the president get some credit for candor – credit that could help the company recover? And won’t the company’s prognosis be even worse if the president smilingly asserts that all is well, and then outside analysts or inside whistleblowers reveal damaging information that the president withheld?

If the disaster to be averted is very short-term, the self-fulfilling prophecy risk may be greater than the reputation-for-dishonesty risk. It’s hard to make a case for candor if the crisis will be resolved one way or the other by next Tuesday – assuming the law doesn’t require the president to tell all now, and assuming candor now is likely to undermine any chance of turning things around.

If the path back to stability will take years, on the other hand, it is unlikely that the president can travel that path in perfect secrecy. In that case, the reputation-for-dishonesty risk is probably greater than the self-fulfilling prophecy risk.

This is a judgment call, obviously. But one thing I know for sure after 40 years of risk communication consulting: The president is a lot likelier to err in the direction of too much secrecy than in the direction of too much candor. When making this judgment call, in other words, corporate (and government) officials unconsciously put their thumb on the scale. They overestimate the risk that their candor about the company’s problems could turn into a self-fulfilling prophecy, and they underestimate the risk that their failure to disclose the company’s problems could earn it a long-term reputation for dishonesty.

Note in particular how often corporate (and government) officials refrain from “revealing” information that the public already knows – claiming (and probably believing) that doing so might damage their organizations’ reputations. That’s crazy!

It may be debatable whether or not to reveal a damaging secret. It may even be debatable whether or not to remind people of a prior misbehavior that’s not secret but not much in the news either. But the worst of all possible situations vis-à-vis reputation-damaging information is to refuse to acknowledge it while your critics get to brandish it at will, making sure that everyone is talking about it except you. For more on this incredibly common error, see my 12-minute YouTube video on “Wallowing in Your Prior Misbehavior” or the unexpurgated 42-minute version on Vimeo.

In my judgment, refusing to talk about well-known prior misbehaviors is one of the cardinal risk communication sins of the world financial establishment since the start of the Global Financial Crisis. I’m not suggesting that the outrage motivating Occupy Wall Street would disappear if only Wall Street were more candid about what went wrong, what it did wrong, and what sorts of regulation and reregulation are needed. But surely more candor (and contrition) would help.

And surely the risk of a self-fulfilling prophecy if Wall Street were more candid is far, far smaller than the risk of ever-escalating outrage and mistrust if Wall Street remains silent about its sins.

Scaring people into getting their flu shot

name:Owen Simwale, MPH
This guestbook entry
is categorized as:

      Pandemic and Other Infectious Diseases

field:Influenza Surveillance Coordinator,
State Department of Health
date:September 18, 2011
location:Pennsylvania, U.S.

comment:

On September 16, U.S. News & World Report posted an article by Dennis Thompson entitled “Fear Proves Prime Motivator for Vaccinations.” It’s based on an interview with Dr. Adewale Troutman, former head of the Department of Public Health and Wellness in Louisville.

I am curious what you think about the statement below about what motivates people to get the flu shot:

And, of course, you need to make sure they are properly frightened.

Fear has proven to be the most potent motivator in getting people to not shrug off important immunizations, like an annual flu shot, Troutman said.

peter responds:

I have two responses to Dr. Troutman’s advocacy of scaring people into getting their flu shots and other vaccinations.

Fear works, but it has drawbacks as a motivator.

It is certainly true that fear motivates precaution-taking. We all have a “worry agenda” with more items on it than we have time and energy to address. So we prioritize, and fear has a huge impact on which worries make it to the top of the list and get acted on.

People who are not very anxious about infectious diseases (or a particular infectious disease) are likelier to put off getting vaccinated. So if somebody is insufficiently worried about flu, getting him or her more worried (more concerned, more afraid, more upset – in my jargon, more outraged) is a good way to motivate vaccination.

An exception worth noting: If people have already developed a precautionary habit such as wearing a seatbelt or getting an annual flu shot, they don’t need to remain frightened about car accidents or influenza to sustain the protective behavior. The same is true of people who have already made a firm decision about how they’re going to manage a particular hazard. It’s a misapplication of effort to frighten people who are already doing what you want them to do, or already committed to doing it.

Still, when people are failing to take action because they’re not frightened enough, then fear appeals work – sometimes. But there is an extensive research literature exploring the conditions under which fear appeals don’t work, or even backfire. For an early and still wonderful introduction to this complicated issue, see Kim Witte’s 1992 monograph.

Here are some of the drawbacks of fear as a motivator:

  • People get desensitized to fear appeals, so it takes more and more horrific messages to arouse sufficient fear to motivate action – and eventually nothing is horrific enough to do the trick. (That’s what happens to gory driver’s ed movies.)
  • Excessively strong fear appeals trigger a psychological circuit-breaker. People go into denial rather than experiencing the fear. (That’s what happened to a lot of anti-nuclear weapons messaging in the 1950s through the 1980s. Instead of opposing nukes, people avoided thinking about them.)
  • Efficacy determines people’s reaction to fear appeals. People who feel highly efficacious can tolerate a lot of fear, but people who feel little ability to protect themselves – either there’s nothing to be done or nothing they feel capable of doing – can tolerate less fear before going into denial. (That may be an issue for Contagion, which has little to say about actionable preparedness.)
  • The sort of fear that can be aroused by mass media – a newspaper article, a web post, a health department pamphlet – tends to be transitory. The new fear occupies our attention only until newer ones get piled on top, so if we don’t act fast we probably won’t act at all. (That may be an issue for Contagion as well.)
  • Not all fear appeals are credible. People look for reasons not to be fearful, and skepticism about whether the fear-monger is knowledgeable or unbiased is one likely rationale for continuing apathy. (Lots of people resist taking climate change alarmism seriously.)
  • Fear appeals that strongly imply that the feared event is imminent and near-certain are falsifiable; when it doesn’t happen, the result can be “warning fatigue.” (SARS was stopped; swine flu was mild; bird flu hasn’t learned efficient human-to-human transmission (yet) – and many people are now much less inclined than previously to take pandemic warnings to heart.)

Some of these drawbacks are relevant to vaccination campaigns.

For many people, insufficient fear of infectious diseases is not the main deterrent to vaccination.

In a sense, every decision to take or not take a precaution is grounded in a comparison: the pros and cons of taking the precaution versus the pros and cons of going unprotected.

But quite often one of the two – the risk or the precaution – is the main focus of attention. So when people aren’t taking a precaution you want them to take, it’s worth asking whether the core of your problem is that the risk is insufficiently upsetting or that the precaution is excessively upsetting.

When workers in heavy industry don’t wear their personal protective equipment (hard hats, safety glasses, etc.), for example, the core of the problem isn’t usually that they’re too apathetic about getting hurt; it’s that the PPE is itchy or hot or geeky or otherwise aversive. The same goes for health care workers who resist wearing masks.

Obviously, there are lots of people who are apathetic about flu, and therefore don’t bother to get vaccinated or get their kids vaccinated.

But there are more and more people – worried new parents and others, not just hard-core anti-vaccination activists – who are actively anxious about vaccine risks. Their reluctance to vaccinate is more about the vaccine (and distrust of the public health establishment) than about the disease. They’re not apathetic; they’re skeptical or hostile or nervous.

Of course a severe pandemic or a serious and highly publicized measles outbreak could change that. Even those who had previously focused on the vaccine would probably refocus on the disease. In a scenario like that depicted in Contagion, for example, the vast majority of anti-vax people would grit their teeth and get vaccinated.

But in normal times, I think, vaccination proponents will need to talk more and more about vaccines, not predominantly about infectious diseases.

Here’s one reason this distinction is important: In any risk controversy, the alarming side is freer than the reassuring side to tell an unbalanced story – to dramatize the part of the truth that favors their side and neglect the part that favors opponents. Not that alarmists have a free pass to exaggerate with impunity; even the alarming side of the controversy is constrained by ethics, credibility, and other factors. But within limits, dramatic and even unbalanced warnings are acceptable (ask Greenpeace); dramatic and unbalanced reassurances, on the other hand, are thoroughly unacceptable (ask the oil industry).

So when the conversation was mostly about the risk of infectious diseases, public health got to play offense. I still think public health professionals were unwise to cherry-pick which vaccination facts to emphasize and which to deemphasize. But as long as the audience was apathetic and the goal was to frighten people into action, public health mostly got away with one-sided vaccination sales pitches. Now, as the focus shifts more and more to vaccine safety worries, public health needs to play defense instead … which means being much more punctilious about acknowledging facts that don’t advance the cause.

My website has lots of material about how public health agencies can sometimes be sloppy with facts, so I won’t belabor the point here. See particularly:

It’s clear from the rest of the article you’re citing that Dr. Troutman knows there are people who mistrust vaccines. But he seems to think they’re a small group of hardcore unreachable ideologues and paranoids. I think there’s a much larger and growing group of mainstream nervous skeptics, whose vaccine nervousness and skepticism are fueled as much by proponents’ overstated sales pitches as by opponents’ even-more-overstated warnings. Fear appeals can easily backfire on that group.

The bottom line for me: Yes, fear appeals can help us motivate vaccination in people who are too apathetic about infectious diseases. But we need to be careful about the drawbacks of fear appeals. Even more importantly, we need to learn how to address the growing audience of people for whom the main issue is worry about vaccine safety, not apathy about infectious diseases.

Getting your organization to use information you have gleaned from public participation exercises

name:Cindy
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Health agency public participation officer
date:September 16, 2011
location:Australia

comment:

I recently completed a training course on “Emotion, Outrage, and Public Participation” with IAP2 in Sydney.

The issue that we are struggling with in our health organization is feeding back what the community has said to us in various community consultations, etc. – i.e. capturing what they said, so that it is accurate; interpreting it (as sometimes it goes on and on and we aren’t always sure what is the point); and making it sound interesting to our planners and top management so they actually think it is worthwhile information.

Do you have any ideas we could use?

peter responds:

What use an organization makes of public comments depends on why it sought the comments in the first place. So let’s start there.

Three public participation purposes

Public participation exercises have three main purposes:

  • To find out what people think so you can learn from them and improve your project.
  • To give people an opportunity to tell you what they think, even though you probably won’t take their input onboard very much – for the record, maybe, or to meet a legal obligation.
  • To give people an opportunity to tell you what they think – and then to be visibly responsive to at least some of their input.

We’re talking about different “people” in the first bullet point than in the second and third.

If finding out what people think – eliciting useful input – is your principal purpose, you’ll need to spend a lot of time trying to get uninvolved community members to tell you what they think. Many public participation professionals work very hard at this task. They know that everyone has a perspective and everyone has wisdom to impart. More importantly, they know that the perspective and wisdom of people who don’t care much about the issue (whatever it is) will be different from the perspective and wisdom of committed stakeholders. A meeting attended only by committed stakeholders is going to yield a very distorted picture of “what people think.”

But if your principal purpose is to give people an opportunity to tell you what they think, then the only people who count are those who want to avail themselves of this opportunity. It no longer matters much that the input you get won’t be representative of the perspectives and wisdom of “the public.” It will be the input of the people who really wanted to give you input.

The difference between the second purpose and the third, of course, is whether you expect to pay much attention to the input.

The three purposes sometimes overlap in unexpected ways. Even if an organization started out just wanting to let people vent (the second purpose), it may end up ameliorating their concerns by making some of the changes they’re demanding (the third purpose), and management may ultimately conclude that the project got better in the process (the first purpose). But learning what the public thinks, letting stakeholders vent, and being responsive to criticism are nonetheless very different public participation purposes.

You may well be an advocate of the first purpose: improving the project by consulting the public. Most public participation professionals are – which explains why they lean toward phrases like “public participation” and “community consultation,” even though the vast, uninterested majority stays home and what’s really going on, ninety-nine times in a hundred, is stakeholder participation/consultation. When managements really want to know what the public thinks, they don’t rely on a public participation exercise; they commission a survey or a series of focus groups.

Despite the everyone-has-wisdom-to-offer convictions of public participation professionals, they spend most of their time trying to address the concerns of outraged stakeholders. That’s why IAP2 and I collaborated on the course you just completed.

Too often, in my judgment, what managements care most about (if they care about the public participation process at all) is the second purpose: letting stakeholders vent. They don’t want to amend the project; they can’t imagine the changes being improvements, and they can’t abide compromising with critics. They simply want to punch their community consultation ticket. If that somehow calms the waters of stakeholder opposition, so much the better.

I don’t want to paint with too broad a brush here. I know nothing about your employer except that it’s a health agency. And health agencies are likelier than most kinds of organizations to take the first purpose seriously – that is, to seek public input in order to improve project design and implementation. But if you’re having trouble getting your management to pay attention to your report, if nobody seems very interested in what you learned from your stakeholders, this might explain why. If your management just wants to let stakeholders vent, I can understand why it might not be very interested in reading your report on what got vented.

If venting is all your management hopes to accomplish – I’m not saying that’s true, but if it’s true – then management’s public participation purpose was accomplished when you scheduled a meeting, publicized it adequately, got a bunch of agitated stakeholders to come, and gave them a chance to vent. From this perspective, your meeting was its own reward, and your report on what this unrepresentative group of malcontents had to say is superfluous.

As you know from the IAP2 course, I stand for the third purpose: using public participation to mitigate stakeholder outrage.

Unlike most public participation professionals (and like most of their employers and clients), I think it’s okay to focus on stakeholders – to pretty much ignore the large majority of the public who are pretty much ignoring you. In a 2008 column on “Meeting Management: Where Does Risk Communication Fit in Public Participation?”, I lay out two paradigms of the ideal meeting. The public participation paradigm is a calm, substantive discussion by a diverse group representative of the public. The outrage management paradigm is an emotional venting of grievances by an unrepresentative collection of upset stakeholders. The rest of the column explores five differences between the two paradigms, differences regarding:

  • The value of venting
  • Who you want at the meeting
  • Whose side you’re on
  • The relative importance of substantive issues versus process issues
  • What skills you need

But I’m fervently on your side in my conviction that a meeting with outraged stakeholders is not its own reward. Letting people vent is necessary to effective outrage management, but it is not sufficient. There has to be a meaningful response too. And with rare exceptions that response has to come from senior management – from the organization itself, not from its public participation officer.

To spell it out a little more, here’s what’s needed, in this order:

  1. Stakeholders need a chance to vent – ideally in the presence of each other, representatives of your organization, and perhaps the media.
  2. They need to know that they were heard accurately – which means you have to echo what they said and give them a chance to correct any errors or omissions.
  3. They need an outrage-related response to what they said. Echoing is a start, but you also need to validate that some of what they said is right. Among other things, that usually means acknowledging past misbehaviors and current problems.
  4. Finally, they need a hazard-related response, a substantive response.

The first three steps make it likelier that outraged stakeholders – no longer quite so outraged – will notice your organization’s substantive response and find it adequate (if it is adequate). But you still need the fourth step. And it’s that fourth step, the substantive response, that requires management’s attention.

Getting management to pay attention to stakeholder input

I think this is the key to making your meeting report meaningful to management. Here’s what you need to get across – maybe in the report itself; maybe in a cover letter; maybe in conversations before you submit the report:

Yes, the meeting was unrepresentative. My report isn’t a summary of “public opinion”; it’s what I heard from stakeholders unhappy enough about our plans that they bothered to come to the meeting. But this unrepresentative group of unhappy stakeholders can exercise a lot of influence on whether we succeed in accomplishing what we set out to accomplish. Reducing their outrage is an important organizational objective. Just having the meeting helped, but it’s not enough. Now we have to show them that we have heard them, and that we’re willing to make some changes in response to their concerns. We can’t do everything they want, of course. But I hope we will find some things in this report that we can do, and that we can give them credit for making us do.

Here are five other things that might be worth saying to your management in order to increase the likelihood that it will take your public participation outputs seriously:

  1. Make it clear that you’re not endorsing everything in your report. “This report is a summary of what I heard from stakeholders. Some of their factual claims are simply true; some are flat-out mistaken; some are debatable. Some of their recommendations (or “demands”) are feasible; some are technically or economically impossible; some are a stretch but not out of the question. Some of their values and concerns will strike a responsive chord; some will seem misguided or even crazy. It’s a mixed bag. We should be looking for input we can accept, not for input we need to reject. There is plenty of the latter, but it’s the former that will add the most value.”
  2. Explain why all stakeholder input is important. “These are the people who care most about our organization – who care so much they gave up an evening and came to a meeting. Their views can have a lot of impact on their neighbors, on regulators and politicians, and ultimately on us. We need to try to address their concerns to the extent that we can. But we also need to know about the concerns that we cannot address to their satisfaction. If we decide to pursue certain pathways despite their opposition, we should do that with our eyes open. If we decide to try to convince them that they’re wrong, or try to convince others in the community that they’re wrong, we still need to start by understanding what they think.”
  3. Mention the possible substantive value of stakeholder input, but as a secondary factor. “Sometimes outside stakeholders, even hostile ones, come up with genuinely useful suggestions. They’re thinking outside our box; their assumptions, experience base, and blind spots are different from ours. They may suggest an option that we never thought of, or one that we dismissed too early with too little serious attention. But that’s gravy. We’re not mostly looking for input we actually like. We’re mostly looking for input we can live with, perhaps reluctantly, in order to earn their support or at least reduce the fervor of their opposition. And even the input we can’t live with needs to be heard and understood in order to plan our path forward.”
  4. Acknowledge that management may not want to listen. “Some of this stuff was hard for me to hear, and may be hard for others to read. It’s often difficult to absorb input from critics who may come across as hostile or ignorant or both. It’s tempting to dismiss what they say, to feel angry or contemptuous or merely bored instead of trying to take it all in. And lurking beneath the anger, contempt, or boredom, sometimes, is resentment that opinions like these might actually exert some influence, that we might need to compromise in ways we’d rather not. I know I felt some of that, even though listening to stakeholder concerns is at the heart of my job. I can imagine that absorbing some of the criticisms in this report might be even harder for someone with deep substantive expertise and decades of commitment to our organization.”
  5. Give management a job to do. “This report is actionable, and I believe it requires action. I have promised the people I met with that I would convey their concerns to my management. I didn’t promise they’d get everything they wanted – but I did promise they’d get a fair hearing. I need to bring back to them evidence that I kept my promise. So I need a response. I need to know some things you’re willing to do (and give them credit for pushing you to do); and some things you’re willing to talk about maybe doing; and, yes, some things you’re taking off the table entirely. If the report provokes new ideas from our end, that too is useful proof that we’re taking them seriously. In addition to these substantive responses, I also want evidence of a human response. If something in this report triggers a recollection, or a sense of recognition and commonality, or a feeling (any feeling), please let me know. Just as I am reporting to you on what stakeholders had to say, I will be reporting back to these stakeholders on what my management had to say in response. I will schedule a meeting with you soon to get your responses.”

Keeping stakeholders apprised

As the last point suggests, public participation professionals occupy a crucial intersection between stakeholders (especially unhappy stakeholders) and management. You’re a key source of information in both directions.

So you shouldn’t just meet with stakeholders and then report to management. You should report back to the stakeholders on what you heard from your management.

Ideally, this is how it should work: Everybody at the meeting knows you’re going to report to management; everybody gets to see what you’re reporting to management; everybody gets a report back from you on how management responded to your report … and if that leads to another stakeholder meeting and another series of reports, so much the better.

I’m not suggesting that the whole process should be completely transparent and completely symmetrical. Most stakeholders expect that everything they say at a public meeting may be reported back to management; that’s largely why they said it. But your organization’s officials don’t necessarily expect that everything they say in a private conversation may be reported back to stakeholders – and if that’s what’s going to happen, a lot that needs to get said won’t get said.

If an angry stakeholder at a public meeting announces that your organization is run by a bunch of dishonest idiots, you may well decide to carry the attack back to management verbatim. The criticism is too broad to be actionable, so you’ll want to supplement it with more explicit grievances. But it does capture the degree of outrage you heard at the meeting, and that’s something management needs to know.

But if an angry senior official of your organization tells you that a particular activist group is run by a bunch of dishonest idiots, that’s not something you should report back to all the stakeholders – or, indeed, to any stakeholders. It was probably useful for that senior official to vent his or her outrage in-house, mostly in order to get it under control so it’ll be less likely to leak in public settings. For you to make it public would be counterproductive.

Nonetheless, we can and should get closer to transparency and symmetry than we usually get. Here’s what I would like to hear at the start of a typical public meeting (really stakeholder meeting):

Part of tonight’s meeting is for me to answer your questions if I can. If you have questions I can’t answer, I’ll try to get the answers after the meeting and get back to you.

But our main goal tonight isn’t for me to tell you things; it’s for you to tell me things. Specifically, our main goal is for you to give me input on X, Y, and Z.

I want to make sure I understand your input, so I will be asking a lot of follow-up questions. “Here’s what I think I’m hearing,” I’ll say. “Am I hearing that right?” Please correct me when I get something wrong, and elaborate your point when my understanding is incomplete.

I will try to produce an accurate summary of what I hear tonight for my management. If there are points you especially want me to tell my management, please flag them for me and I will be sure to pass them on. If there’s anything you want to tell me in confidence and don’t want me to pass on, that’s tougher. Mostly I see myself as a conduit for accurate information in both directions. But I’m happy to pass on what you say without mentioning who said it. If being anonymous is important to you, please make sure I know that. If being named is important to you, please make sure I know that.

I will write up what I hear tonight – or what I think I hear – and feed it back to everybody who was here tonight and signed in. That will be like the minutes of the meeting. I hope you will write me or email me or call me with any corrections or elaborations – or second thoughts, if you have any. The revised minutes will be my report to my management. I’ll send it out to everyone here, so you will know exactly what I’m reporting.

I don’t know how my management will respond to what you say tonight. But part of my job is to find out how my management responds and let you know. It may take a few weeks or even a few months, but it will happen. I can’t promise that you will like the response, but you will get a response! That I can promise.

I don’t want to leave the impression that facilitating feedback from stakeholders to management and from management to stakeholders is the only important thing public participation professionals do. Your own relationships with stakeholders are also important; in real time, how you listen and how you respond have a major impact on stakeholder outrage. You’re not just a conduit. But you are a conduit.

The risk communication in Contagion and Contagion as risk communication

name: Bruce Hennes
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

field:Crisis communications consultant
date:September 12, 2011
email:hennes@crisiscommunications.com
location:Ohio, U.S.

comment:

In 1971, it was The Andromeda Strain. In 1995, Outbreak.

Starting tomorrow, in a theater near you, Hollywood brings us Contagion.

With an all-star cast that includes Matt Damon, Gwyneth Paltrow, Kate Winslet, Laurence Fishburne, Marion Cotillard and Jude Law, “the movie tracks the global spread of a lethal flu-like virus, [resisting] the sheen of science fiction or fantasy and instead stresses the chilling plausibility of its nightmare situation,” according to The New York Times.

There were two paragraphs in the Times movie preview I found to be particularly encouraging:

“Scott and I were fascinated by the science,” Mr. Soderbergh [the director] said. “I don’t know how you could make a film about a subject like this without wanting it to be accurate.”

“It’s not that Warner Brothers is in the habit of making $60 million P.S.A.’s,” Mr. Soderbergh said, “but I do want people to come out of this film with an understanding, were this to happen again, of what’s going on.”

Time Magazine’s preview is equally encouraging: “…ultimately that’s what makes Contagion so scary – it shows what would actually happen in the case of a severe pandemic, from the panic-buying of bogus herbal cures to international squabbling over limited vaccine supplies. The film’s realism makes it a rarity in Hollywood, where most disease movies would have flunked high-school biology.”

Hopefully, Contagion is indeed true to the science. Here’s the question for you, Peter: Is it true to the practice of risk communications? Is this a Hollywood blockbuster we can use for training purposes? Will this movie help us next time we have a pandemic, especially one more virulent than last year’s bout with H1N1?

I hope you get a chance to see it this weekend and to read your thoughts after.

peter responds:

Spoiler Alert: If you haven’t seen the movie yet and don’t want your reactions contaminated, stop reading.

Your question focuses on how Contagion depicts risk communication. Another colleague sent me an email asking what I think the movie will accomplish – essentially, my assessment of Contagion as risk communication itself. I’ll try to answer both questions … starting with the other one.

Early in Contagion, CDC Director Laurence Fishburne briefs epidemiologist Kate Winslet before dispatching her to Minneapolis to cope with the fast-growing epidemic. “What’s your Single Overriding Communication Objective?” he quizzes her.

Along with “be first, be right, be credible,” “SOCO” is CDC communication jargon – and the line undoubtedly gets a knowing laugh from CDC communication habitués.

The movie itself has a straightforward SOCO: to demonstrate that a pandemic like this could really happen. Virtually all the previews and reviews I have looked at stress the same point as the two previews you quote, that Contagion director Steven Soderbergh worked hard not just to make the movie realistic-looking, but to make it scientifically sound. It boasts so many technical advisors that the credits divide them into ordinary technical advisors and “senior technical advisors.” And despite some quibbles, virologists, epidemiologists, CDC top brass, and pandemic-obsessed experts and laypeople have mostly given the movie high marks for verisimilitude.

My first reaction was that this is a disappointingly unambitious SOCO. But my wife and colleague Jody Lanard, who saw the movie with me, convinced me that it’s SOCO enough: to hit people over the head with the reality that new infectious diseases do emerge; that they occasionally go pandemic; that when they do they can be far more severe than H1N1 and far less stoppable than SARS; and that when that happens (as it will sooner or later) we’re really, really going to need a functioning public health apparatus.

Little or nothing on preparedness

Contagion certainly doesn’t tell people what they should do about the threat of contagion. There is little or nothing in the movie about pandemic prep – no citizen role model who stockpiled necessities for prolonged sheltering in place, for example, and nobody wishing in hindsight that he or she had done so. (It would have been easy enough to cut away from the short scene of people rioting when the MREs ran out to a montage of stockpiled food in the homes of Mormons, survivalists, and pandemic preppers.)

Nobody in the movie even bemoans our having underfunded public health for decades. And given the unrealistic speed with which the movie’s experts develop, test, and mass-produce a vaccine, public health funding doesn’t seem to have been a problem (even though Winslet was sent to Minneapolis without a team to back her up). There are scenes of more people piling into hospitals than the hospitals can treat. But I don’t recall anybody mentioning “surge capacity.”

You can infer things about preparedness from Contagion, but they’re not underlined.

Actually, very little in Contagion is underlined. The movie is sketchy, episodic – like a 13-hour television serial condensed down to 105 minutes. Dozens of complex pandemic issues are raised, almost as if Soderbergh were working from a checklist, but no issues are really explored. The sole exception – the only thing the camera lingers on – is fomites: the sinister threat of door knobs, whiskey glasses, and other objects that can transmit contagion from one person to another.

But preparedness didn’t seem to make Soderbergh’s checklist.

If Contagion really were a PSA, that would be my main complaint: It’s scarily plausible, but not actionable. Research on fear appeals has shown that scaring people without offering them credible things to do can easily backfire; we’re a lot likelier to go into denial when we feel powerless to protect ourselves, powerless even to try. The movie offers neither its non-scientist characters nor its non-scientist audience much that they can do.

I’m not actually worried about denial, though, at least not in the sense of people getting “scared stiff,” paralyzed by futility. Lacking an actionable message, Contagion strikes me as most likely to add pandemics to audience members’ worry agenda for a few days or a few weeks, until it is supplanted by other problems that are more immediate and more obviously actionable.

There are people working to convert momentary Contagion-inspired worry into something more lasting. Probably the most ambitious effort so far comes from Participant Media, one of the movie’s producers. Participant Media has built a Contagion webpage that tries to take “viral media” literally, offering both background information and a range of action opportunities – most of them easy, entertaining “gateway” actions (tweet about the movie, play with a pandemic simulation, etc.). Participant Media has done the same thing for Al Gore’s An Inconvenient Truth and a couple dozen other “message” films. I don’t know how much success the group has had with previous movies, or how things are going with Contagion, but it’s a worthy effort.

The CDC’s Contagion webpage, on the other hand, concentrates on links to other CDC pages about what the CDC is doing; there’s nothing really on what the public can do.

But on another CDC webpage, crisis communication expert Barbara Reynolds (with whom I worked on the CDC’s “Crisis and Emergency Risk Communication” training) talks about the importance of giving frightened people things to do, as well as about the crucial truth that people rarely panic in crises.

Prosocial and antisocial behavior

Perhaps worse than its failure to address preparedness is Contagion’s failure to offer models of communitarian coping. Nobody seems to be looking in on neighbors; nobody organizes a neighborhood watch to deter looters; nobody even bags the garbage that’s collecting on the streets.

CDC head Fishburne tells us that three-quarters of the pandemic’s victims recover. So there’s potentially an army of recovered people who are now immune and available as volunteers – not to mention those who are naturally immune, like Matt Damon, and those who win the birthday lottery to get the first available vaccine doses. But nobody tries to organize a Recovered Victim Volunteer Corps; nobody even volunteers.

Instead, the movie shows us panic, rioting, and looting. The social disruption is weirdly spotty. In an iconic but not really typical Contagion image, blogger Jude Law walks the littered, abandoned streets of San Francisco in his homemade moonsuit, looking like the sole survivor of a nuclear Armageddon.

But he’s still blogging (the wifi is up and running), and the authorities are still organized enough and motivated enough to arrest him for fraud because he’s peddling forsythia as a cure.

Maybe this degree of social disruption is realistic; I’m not sure. Maybe it’s even understated, as the University of Minnesota’s Michael Osterholm has commented.

As disaster researchers have long known, most people respond to emergencies with their best selves. Antisocial behavior is the exception, not the rule. But it does happen, and the worse (and scarier) the emergency, the likelier it is that some people will lose control. According to CDC Director Fishburne, the movie virus was expected to infect one-twelfth of the population unless a vaccine could be developed. With a 25% case fatality rate (again Fishburne’s number), that means the virus could kill about two percent of the population, making it twice as bad as the 1918 Spanish Flu pandemic, which killed about one percent.
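Fishburne’s figures are easy to sanity-check. Here is a quick back-of-the-envelope calculation (mine, not the movie’s):

```python
# Back-of-the-envelope check of Fishburne's numbers, as quoted above.
attack_rate = 1 / 12        # fraction of the population expected to be infected
case_fatality_rate = 0.25   # fraction of the infected expected to die

population_fatality_rate = attack_rate * case_fatality_rate
print(f"{population_fatality_rate:.1%} of the population")  # about 2.1%

# The 1918 pandemic killed roughly 1% of the population, so the
# movie's virus is about twice as deadly at the population level.
print(f"{population_fatality_rate / 0.01:.1f}x the 1918 toll")
```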

1918 saw very little pandemic panic, rioting, or looting. But maybe American society is less resilient today than it was a hundred years ago. And twice as bad is twice as bad. It’s not the presence of antisocial behavior that bothered me about the movie so much as the absence of prosocial behavior. The movie’s only heroes are scientists.

Better risk communication might also have prevented some of the antisocial behavior the movie shows us. Citizens lined up in a drugstore start to riot when the store runs out of forsythia; citizens lined up at a food distribution point start to riot when the authorities run out of MREs. In neither case did those in charge forewarn people about how much was available, explain when they expected to get more, hand out numbers to preserve people’s place in line, or do anything else to help keep the outrage under control. (They did at least say they were sorry.) This may be realistic too, but it is far from best practice.

Although scientists are the only heroes in Contagion, they are heroes who keep violating protocol. More often than not, the violations work out well for the world. I can’t tell whether this rules-are-made-to-be-broken theme is intentional. As far as I know, none of the real-world scientists who have praised the movie have commented on this aspect of it.

Examples:

  • Even though the virus is already spreading rapidly and widely, the CDC decides not to allow work on it except in BSL-4 labs. Scientist Elliott Gould, who has been looking for a way to culture the virus in his BSL-3 lab, is ordered to shut down his investigation. He ignores the order … and he is the one who figures out how to culture the virus, thus making vaccine research possible.
  • CDC scientist Jennifer Ehle sees that a rhesus monkey she has given a test vaccine isn’t coming down with the disease, suggesting she might have found a vaccine that works. Does she test more monkeys? No, she immediately jabs the vaccine into her thigh and goes to visit her sick father without her protective gear – thus, the movie tells us, saving months of delay and millions of lives.
  • Knowing that Chicago is about to be quarantined (I’m not sure why), CDC Director Fishburne tells his fiancée to get out of town. He gets into trouble for making private use of this secret information – but says he would do it again (and the movie is clearly on his side). Later, perhaps to make up for his earlier selfish violation of protocol, Fishburne commits a selfless violation. He gives the vaccine dose intended for him to the son of the CDC janitor who had overheard him warning his fiancée. Then he shakes hands with both son and janitor (so much for social distancing), and puts on his vaccinee bracelet anyway.

CDC Director Fishburne also breaks an important risk communication “rule.” Interviewed by Sanjay Gupta of CNN (Gupta plays himself), Fishburne lashes out at Jude Law for his irresponsible blog, accusing him of being more dangerous than the virus itself. A risk communication advisor would have urged Fishburne to be much more respectful, allowing Law to hang himself out to dry with his own extremism.

Law arguably gets the better of the exchange when he backs up his (false) accusations of a CDC/Big Pharma conspiracy with the (true) accusation that Fishburne helped get his fiancée out of Chicago before the National Guard could implement the quarantine that trapped Matt Damon and millions of fellow Chicagoans. Law also serves up to Gupta’s audience the worst case scenario that Fishburne doesn’t want to discuss – using Fishburne’s own data to calculate how many might conceivably sicken and die in the months ahead.

Depictions of risk communication and the media

I’m not sure what Fishburne – and Soderbergh – hold against Law the most: his profiteering on a naturopathic remedy that doesn’t work, his rabble-rousing against the vaccine and the public health establishment, his insistence on the pandemic worst case scenario, or his revelation of Fishburne’s misbehavior.

For sure, Law is the movie’s only villain. The movie comes close to blaming the panic, rioting, and looting on Law’s blog. If only the social media were as respectful and compliant as CNN’s Gupta, Contagion seems to be telling us, the public might behave too.

It’s too bad that Soderbergh made Law’s blogger not just a conspiracy theorist, an anti-vaccination activist, and a worst-case-scenario alarmist, but also a sleazy, self-interested, hypocritical, dishonest crook. The combination is easy to hate and easy to scapegoat. In the real world, most conspiracist bloggers are earnest and sincere.

And it’s too bad that Law is the movie’s only blogger. As the real-world CDC fully recognizes, the social media aren’t just a huge potential source of dangerous mischief. They are also a huge potential force for good. A large percentage of the pandemic preparedness information available today comes from social media. And if a severe pandemic ever starts to look imminent, we’ll all be looking to websites like Flu Wiki Forum, Crawford Kilian’s H5N1 blog, and Avian Flu Diary for crucial just-in-time preparedness information. (All three sites have posted comments on Contagion, of course.)

Aside from Gupta and Law, Contagion dwells surprisingly little on the media, for good or for ill. An early CDC news conference is a model of good crisis communication, with Fishburne calmly stressing that everything is still very uncertain, that the fatality numbers will surely grow, and that the situation may get a lot worse in the days ahead. (Fishburne’s comments may have been modeled on the superb way Acting CDC Director Richard Besser opened his April 24, 2009 news conference on swine flu.)

But that’s the movie’s last news conference. After his debate with Law on Gupta’s news segment, Fishburne is ordered to stay away from the TV cameras. The movie doesn’t say who, if anybody, takes his place.

In a real pandemic emergency like this one, of course, the White House would be making all the key public communication decisions, and the principal spokespeople would be in Washington, not at CDC headquarters in Atlanta.

Bottom line: Contagion models only a little good risk communication, and only a little bad risk communication. Modeling risk communication wasn’t Soderbergh’s SOCO (single overriding communication objective).

I don’t mean to sound churlish. Contagion shows us a possible severe pandemic in a way that’s pretty entertaining, pretty scary, and pretty accurate. If it doesn’t say much about citizen preparedness, at least it teaches that we need our virologists, epidemiologists, and other public health professionals (on the national and international level, anyway; state and local public health people don’t do much of value in the movie).

Contagion will probably do some good, some useful consciousness-raising – and I can’t see it doing any harm. How much more should we expect from a movie?

Hurricane Irene risk communication: public service or weather porn?

name:Bruce Hennes
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Crisis communications consultant
date:September 4, 2011
email:hennes@crisiscommunications.com
location:Ohio, U.S.

comment:

As I write, Hurricane/Tropical Storm Irene has already passed over Manhattan and is on its way upstate. The latest news reports cite a handful of deaths, flooding in some areas not usually prone to flooding, and millions without power on the East Coast. Thankfully, this Category 1 hurricane turned out to be less destructive than it could have been.

Sitting in Cleveland, far from harm, after watching hours of coverage over the last two days on the national cable stations and sampling a few newspapers online, here are my initial impressions: FEMA and the governors and mayors of the major cities caught up in Hurricane Irene did a fine job of communicating risk, and most residents seem to have heeded government instructions. I’m even prepared to give the media decent marks for much of what I heard and saw on my screen. For the most part, the words they used were appropriate, though the wall-to-wall coverage and screaming graphics are likely to lead many to the conclusion that Hurricane Irene was over-hyped by the media, and that government agencies, too, overreacted.

The second-guessing will start in earnest tonight, I’m sure, with various pundits accusing officials of overselling Irene’s dangers, making things tougher for government and crisis communicators next time. Even the well-respected Poynter Institute has jumped in early on the conversation with an article titled “Public service or weather porn, how much coverage of Hurricane Irene has been valuable, how much hype?” – with comments already spanning the public-service-to-porn continuum. You can read the article here.

Shades of last year’s H1N1, another dodged bullet!!

I’m interested to read your thoughts about how government agencies and the media handled Hurricane Irene.

peter responds:

Your prediction about second-guessing has been borne out. Aside from the Poynter Institute article, lots of other articles and blog posts have complained about official and media overreaction to Hurricane/Tropical Storm Irene.

In fairness, most of the “hype” articles and blog posts date from the immediate aftermath of the storm, before inland flooding roughly doubled the number of Irene-related deaths and greatly exacerbated the property damage and economic impact. And readers have done a fine job of correcting the record. The Telegraph entry, for example, has 1,407 comments so far, most of them hostile to the hype theme.

Let me concede at the outset the critics’ strongest point. Yes, a lot of the TV coverage was unduly breathless, even when there wasn’t much onscreen to be breathless about. It got tiresome hearing reporters in raingear marveling that a fallen tree was actually blocking a road somewhere, then pausing portentously so the viewer could take in the sound of the chainsaw. It got tiresome hearing reporters continuing to talk about “Hurricane Irene” hours after it had been downgraded to Tropical Storm Irene.

More than a few anchormen and anchorwomen – and not just on the Weather Channel – came across as more disappointed than relieved that Irene was turning out less fierce than many had feared. (I have to admit to a sense of anticlimax myself, peering out my New Jersey window on Sunday morning at what looked like a pretty ordinary rainstorm. When a nearby tree came crashing down and my power stayed out for days, I felt … better?)

But the core of the hype complaint isn’t that reporters enjoy disasters. That’s old news. Decades ago, A.J. Liebling reminisced about his days as a cub reporter, thanking God for giving him a big fire to cover. It’s not even that reporters get overexcited during minor disasters, and sometimes take a while to notice how minor they really are. The core of the hype complaint is that experts, officials, and the media exaggerated how dangerous Irene was likely to be, and thereby encouraged more fear and more preparedness than the situation justified. How valid is that complaint?

Did Irene get more attention than it deserved?

Pollster Nate Silver wrote what I consider the definitive rebuttal of the claim that Irene got more media attention than it deserved. Silver’s August 29 New York Times blog is entitled “How Irene Lived Up to the Hype.” Silver compared Irene to previous hurricanes and tropical storms on two dimensions: how much U.S. coverage they got (measured by the percentage of all archived news stories published during the storm that mentioned the storm by name) and how much harm they did (measured by the number of U.S. fatalities and the amount of U.S. economic damage).

In amount of news coverage, Silver found, Irene ranked 10th among the 92 storms that have made landfall in the U.S. since 1980. Of all the stories published during the storm, 22.5% mentioned Irene by name. (Hurricane Katrina, which devastated New Orleans and the U.S. Gulf Coast in 2005, ranked only 14th at 15.6%; most of the Katrina coverage was about the disastrous aftermath, not the storm itself.)

In fatalities, Irene also ranked 10th when Silver wrote his blog on August 29, killing 21 people. (Katrina ranked first by far, killing 1,500+ people.) As I write this on September 4, Irene’s U.S. death toll has risen to 46, making it the third deadliest U.S. hurricane since 1980, after Katrina and Floyd, which killed 56 Americans in 1999.

It’s too early to have a decent measure of Irene’s economic impact, but on August 29 Silver put it between $14 billion and $26 billion. Using the low end of the range, and adjusting for inflation and the growth in wealth and population, Silver ranked Irene as the 8th most destructive U.S. hurricane since 1980. (Katrina was again #1.)
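The “normalization” step Silver describes can be sketched in a few lines. The adjustment factors below are illustrative placeholders, not Silver’s actual figures:

```python
# A sketch of damage "normalization": scale each storm's nominal damage
# for inflation and for growth in wealth and population, so that storms
# from different decades can be ranked on a comparable basis.
# All three factors below are invented for illustration.
def normalized_damage(nominal_damage, inflation, wealth_growth, population_growth):
    return nominal_damage * inflation * wealth_growth * population_growth

# e.g. a hypothetical $5 billion storm from the early 1980s
print(normalized_damage(5e9, 2.3, 1.2, 1.4) / 1e9)  # ≈ 19.3 (billion dollars)
```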

So: 3rd in fatalities and 8th in economic damage, but only 10th in news coverage. By these measures, Irene actually got less coverage than it deserved.
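Silver’s comparison reduces to a simple rank test: does a storm rank worse in coverage than in impact, or the reverse? A minimal sketch of that logic, using only the ranks quoted above:

```python
# Silver's test reduced to its core logic: compare a storm's coverage
# rank with its impact ranks (rank 1 = most coverage / most harm).
def coverage_verdict(coverage_rank, impact_ranks):
    """'under-covered' if the storm ranks lower in coverage than on
    every impact measure; 'over-covered' if the reverse."""
    if all(coverage_rank > r for r in impact_ranks):
        return "under-covered"
    if all(coverage_rank < r for r in impact_ranks):
        return "over-covered"
    return "roughly proportionate"

# Irene: 10th in coverage vs. 3rd in fatalities and 8th in economic damage
print(coverage_verdict(10, [3, 8]))  # prints "under-covered"
```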

Of course Silver is leaving aside the contention that all hurricanes may get more coverage than they deserve. Certainly there are risks that kill a lot more than 46 people without getting anything like as much coverage as Irene got. My 1994 article on “Mass Media and Environmental Risk” proposed seven principles that seem to govern media coverage of risk. The first principle was: “The amount of coverage accorded an environmental risk topic is unrelated to the seriousness of the risk in health terms. Instead, it relies on traditional journalistic criteria like timeliness and human interest.” I added:

Journalists are in the news business, not the education business or the health protection business….

Seriousness (or “consequence”) is only one of a host of traditional journalistic criteria for newsworthiness. Most others – timeliness, proximity, prominence, human interest, drama, visual appeal, etc. – make a big controversy intrinsically newsworthy even if it is not a serious health threat.

I have used “hazard” and “outrage” to refer, respectively, to technical and nontechnical (a composite of such factors as control, fairness, familiarity, trust, dread and responsiveness) seriousness of a risk. In these terms, the mass media are in the outrage business: They don’t create it, as my clients sometimes suppose, but they amplify it.

A big hurricane, like a big controversy, is newsworthy for reasons other than death and property damage. Among other things it’s exciting; and it’s concentrated, wreaking its havoc over a period of days, not years. And even though hurricanes are far from the most dangerous risk we face in our lives, a big hurricane causes a lot more death and property damage than many high-visibility risk controversies I have worked on over the years.

Unlike journalists, experts and public officials are supposed to be more interested in death and property damage than in newsworthiness. But only within their own bailiwicks. It’s hard to imagine a weather forecaster trying to put an approaching hurricane into “context” by pointing out that hurricanes kill orders of magnitude fewer Americans every year than tobacco, alcohol, and automobile accidents.

In sum, Irene certainly didn’t get more attention than it deserved compared to other hurricanes. Whether hurricanes in general deserve the attention they get is a tougher question.

Is hindsight bias an appropriate standard?

Silver’s assessment focuses on Irene’s coverage versus Irene’s impacts, and concludes that by normal hurricane standards the two are proportionate. But let’s imagine that Irene had fizzled entirely, threatening disaster but then veering out to sea with no U.S. impacts at all. Would that mean that officials and the media had overreacted?

No – not unless you think hindsight bias is an appropriate standard for deciding how much attention experts, officials, journalists, and the public should pay to a risk.

After every disaster, hindsight bias leads us to imagine that the people in charge should have known the disaster was imminent and taken all possible precautions. And after every false alarm, hindsight bias leads us to imagine that the people in charge should have known the outcome would be benign and avoided taking unnecessary precautions.

My favorite example of hindsight bias is the London editorialists who published two simultaneous complaints about the U.K. government: It failed to anticipate that the swine flu pandemic of 2009–2010 would be mild and bought far too much vaccine, and it failed to anticipate that the winter of 2009–2010 would be severe and bought far too little road salt.

The concept of “risk” entails uncertainty. You cannot definitively predict the severity of a pandemic or a winter – or a hurricane.

So the measure of how aggressive a warning should be isn’t what ends up happening. It’s the experts’ best assessment of the size of the risk before anyone knows what’s going to happen – that is, the estimated probability of outcomes of various magnitudes, and the amount of uncertainty surrounding those estimates. The goal is to get people to take appropriate precautions (PRE-cautions) against what might happen.

Here’s a more complete list of criteria for deciding how aggressive a warning should be:

  • How bad is the likeliest outcome?
  • How likely are the really bad outcomes?
  • How uncertain are you about those assessments?
  • How useful is forewarning? How feasible is it to take precautions? How effective are those precautions likely to be, and how costly will they be if they turn out not to be needed?
  • How much urging will people need to get them to take precautions? Are they inclined to be overly upset or overly apathetic?

On all these measures, hurricane warnings need to be aggressive.

  • Irene could have been a lot worse. As Silver points out, a slightly worse storm would have meant a lot worse impacts. To some extent we “dodged the bullet” by taking sensible precautions – and now hindsight bias tempts us to imagine that those precautions weren’t needed. To some extent the precautions we took really did turn out unnecessary, because we got lucky. (It’s always hard to distinguish an effective precaution from an unnecessary precaution, snapping your fingers to keep away the elephants; in both cases the bad outcome is avoided.) Some people evacuated or avoided areas that were soon flooded; some people evacuated or avoided areas that stayed above water. And of course what actually happened wasn’t a walk in the park; it was devastating for tens of thousands and a major hassle for tens of millions.
  • There were obvious things to do before and during the storm to protect life and property. It’s hard to read Irene tragedy stories without encountering some really stupid deaths, along with others that were unavoidable. There’s a huge literature on how to make hurricane warnings more effective. But the key bottom line is that in developed countries with good mass communications systems, hurricane warnings are very effective already.
  • People were inclined to under-react, not overreact. The highly populated U.S. northeast thinks it knows all about hurricanes, but it hasn’t actually experienced a bad one for several decades. This is a recipe for insufficient outrage: high familiarity, low memorability.

What critics call “hype” and “weather porn” is easily defensible if it motivates precautions that sometimes save lives, even if other times those precautions turn out not to be needed. But the criticism would be valid if the hype led people to over-prepare, taking excessively cautious, excessively burdensome precautions; or if the hype led people to become skeptical about hurricane preparedness generally, thus undermining future warnings.

If we define “over-preparedness” sensibly – not as taking more precautions than turned out to be needed, but as taking more precautions than made sense based on the forecasts – then I don’t see much evidence of hurricane over-preparedness in general or Irene over-preparedness in particular. There are always some individuals on the over-prepared tail of the normal curve, of course. But did the U.S. northeast over-prepare for Irene? I don’t think so.

Skepticism is a concern I do have about some kinds of hype. I think flu vaccination hype, for example, can lead to vaccine skepticism. But I don’t see that happening with regard to extreme weather events. In fact, hurricanes are one of the classic counterexamples to the “warning fatigue” hypothesis. Gulf Coast and North Carolina residents are routinely warned to prepare for an approaching hurricane. Not infrequently, the hurricane changes course or loses steam, and their preparations turn out unnecessary. A few weeks later, another hurricane is approaching, new warnings are issued – and most people heed the warnings and prepare again. Hurricane warnings and even hurricane hype don’t seem to be leading to hurricane skepticism.

Like you, I worry that the “hype” criticism itself could induce some skepticism. (And yes, I have the same worry with regard to my vaccination hype criticism.) If people take on board the meme that hurricanes are hyped, that might actually discourage preparedness for future hurricanes.

In a thoughtful, fair-minded, but I think ultimately mistaken August 30 piece entitled “What the hurricane hype reveals about NYC,” “Spiked” columnist Sean Collins argues that New York City Mayor Michael Bloomberg was overly cautious to shut down the city’s subway system and impose mandatory evacuations of low-lying areas of the city – both unprecedented hurricane precautions – against a storm that was already losing power. In a follow-up article on August 31, “The politics of fear blows into New York,” Collins’s colleague Tim Black goes even further, viewing Bloomberg’s timidity as emblematic of dangerous “worst-case thinking” that he considers increasingly entrenched in the minds of politicians and bureaucrats.

This isn’t just hindsight bias. Collins and Black concede that Irene might have turned out worse than it did. But they think society has become far too risk-averse – a frequent leitmotif in “Spiked” articles.

But I rode out Irene in central New Jersey, with no power for 60-plus hours and no rail service to New York for five days (because of actual flooding, not preemptive rail shutdowns). My daughter and her family were evacuated at midnight from a weekend cabin in upstate New York when a nearby stream unexpectedly flooded. And Paterson, NJ (among other places) faced very serious flooding for many days. If Battery Park City and other parts of New York City’s “Zone A” had flooded as severely as nearby Paterson did, plenty of commentators would be demanding Mayor Bloomberg’s head for failing to enforce his mandatory evacuation.

How well do hurricane communicators proclaim uncertainty?

The best answer to warning fatigue, I think, is to make sure people know the warning is uncertain – grounded in the risk’s high severity, not in confidence that the risk will materialize. You want people to take the risk seriously, but you don’t want them to feel misled if the risk fizzles.

This is always a balancing act. My advice to clients is to ground their warnings in two key data points:

  • What’s the likeliest scenario (or scenarios) – what do you think will probably happen?
  • What’s the credible worst case scenario (or scenarios) – what’s the worst outcome that’s not so vanishingly unlikely it would be foolish to worry about?

These are the two main questions we ask our doctor or our plumber: What do you think you’ll find, and what are you worried about finding?

Of course other possibilities should also be mentioned – the complete range of possibilities, from the fizzle scenario to the vanishingly unlikely disaster scenario. And it’s crucial to give people a sense of your confidence level – how certain or uncertain you are about which outcomes are likely and which are unlikely.

When warnings go awry, it is usually because officials made one or more of three mistakes:

  • Focusing too much on the likeliest scenario and neglecting the worst case.
  • Focusing too much on the worst case, making it sound like it is the likeliest scenario.
  • Focusing too little on uncertainty, so a tentative warning comes across as a confident prediction.

One common way of making all three mistakes at once is to “compromise” your worst case scenario and your likeliest scenario into a single prediction (or what comes out sounding like a prediction) that is more severe than what you think will probably happen, less severe than what you’re worried might happen, and more confident than you meant to sound. I think that was true, for example, of the CDC’s warnings in the early weeks of swine flu.

So in the run-up to Irene’s landfall the public needed to hear aggressive warnings of how bad it might be – accompanied by equally aggressive proclamations of uncertainty. In a recent column for this website, Jody Lanard and I made the case for “Explaining and Proclaiming Uncertainty.” Our case study was Germany’s May-through-July 2011 outbreak of foodborne E. coli. German and European food safety authorities, we argued, did a poor job of proclaiming their uncertainty about which salad ingredient was responsible for the outbreak.

In contrast, official U.S. weather forecasters did quite a good job (as usual) of proclaiming their uncertainty about Irene’s course and strength. Time and again, National Weather Service forecasts and advisories contained boilerplate language like this: “It is vital that you do not focus on the exact forecast track. To do so could result in bad decisions and place you or those you are responsible for at greater risk.” This really is boilerplate. If you don’t believe me, search the National Weather Service “Hurricane IRENE Advisory Archive” for the phrase “focus on the exact forecast track” and see how many hits you get.

In fact, I have long used official U.S. hurricane forecasting as a reliably good example of uncertainty communication. Not only does the National Weather Service routinely proclaim its uncertainty about the things it’s uncertain about; it also routinely proclaims near-certainty about the things it’s nearly certain about. Here are some quotes from one utterly typical National Weather Service advisory about Irene, this one dated August 23 at 5 a.m.:

A pretty confident forecast: The 23/00z G-IV jet aircraft and Air Force C-130 dropsonde data appear to have settled down the models…and there is considerably less difference among the various model solutions now. The overwhelming consensus is that Irene will gradually turn northwestward over the next 2–3 days and then move northward through a developing break in the subtropical ridge over the southeastern United States. The official forecast track is similar to the previous advisory and lies very close to consensus models TVCN and TVCA.
A somewhat less confident forecast: The overall appearance in satellite imagery has changed little since the previous advisory. As a result…the initial intensity is being held at 85 kt…which could be generous. Irene is forecast to remain in a relatively low vertical wind shear environment and over SSTS near 30C. That combination…along with expanding outflow in all quadrants…should allow for Irene to become a major hurricane within the next 24 hours once the cyclone clears the effects of Hispaniola…and probably maintain major hurricane status throughout the remainder of the 5-day forecast period.
A strong uncertainty warning: It is important to remind users not to focus on the exact forecast track…especially at days 4 and 5…since the most recent 5-year average errors at those forecast times are 200 and 250 miles…respectively.

As uncertainty proclamations go, that last paragraph is as good as it gets.

What’s the pattern of hurricane crisis communication generally?

Overall, I think official U.S. hurricane crisis communication is some of the best crisis communication around.

Aside from doing a pretty good job of acknowledging and even proclaiming uncertainty, hurricane forecasters manage to avoid three extremely common crisis communication misbehaviors:

  • Hurricane forecasters are immune to the absurd injunction not to speculate. Hurricane forecasting is all speculation – responsible speculation, which means speculation that is candidly uncertain and isn’t masquerading as prediction, and that focuses on both the likeliest scenarios and the worst case scenarios. The National Weather Service routinely refers to an early hurricane forecast based on specific technical data as its “first guess.”
  • Hurricane forecasters are also immune to the “speak with one voice” mantra that leads so many crisis communicators to hide expert disagreement. The National Weather Service uses several different storm behavior models to suss out what’s likely to happen in the days ahead. It systematically reports the predictions of all the models, emphasizing where there’s a consensus and where the models diverge. (If only climate change scientists were that candid in their public pronouncements.)
  • Hurricane forecasters are extraordinarily open about saying so when they change their minds, and acknowledging the ways in which they turned out wrong. These are things I usually fail to convince my clients to do – but the National Weather Service does them automatically. In fact, its frequent weather advisories invariably begin with what has changed since the previous advisory. “We thought X would happen but Y happened instead so now we think Z.”

Irene was no exception. I am tempted to give Irene-related examples of these three crisis communication virtues (and others). But this is already a long answer. Feel free to spend some time reading in the “Hurricane IRENE Advisory Archive.”

To be sure, media hurricane coverage doesn’t always meet the high standards of crisis communication set by the National Weather Service. With regard to uncertainty, for example, journalists inevitably lose track of some of the uncertainty the forecasters showcase. Just as inevitably, some readers and viewers lose track of the uncertainty that makes it through the media filter.

And in hindsight, people tend to misremember uncertainty communications. Some of those who took precautions that turned out unnecessary (or that they think turned out unnecessary) are likely to feel foolish, so they project the feeling, blaming officials and the media for hyping the risk. Some of those who didn’t take precautions feel vindicated, as though being lucky were the same thing as being smart; forgetting that the predictions were uncertain makes them feel even smarter.

Here’s the pattern that Irene followed. It’s a common pattern:

  1. Experts and officials issue their warnings, focusing (as I think they should) not just on likeliest scenarios but also on worst case scenarios that are likely enough to be worth preparing for. They acknowledge their uncertainty, though they don’t always stress/proclaim it as strongly as I think they should – except for the consistently good National Weather Service forecasters.
  2. The media carry the warnings, but putting less emphasis on uncertainty than the sources did. The worst case scenarios sound likelier than they are.
  3. Some audience members take appropriate precautions. Some don’t.
  4. The event turns out less awful than the warnings (rightly) said it might – or at least less awful in certain places. In Irene’s case, New York City became the poster child for major precautions followed by modest impact.
  5. The media are a little slow on the uptake, hanging onto the worst case for a while even after the evidence starts suggesting that the worst case isn’t happening. Officials and experts sometimes do that too. The World Health Organization, for example, was slow to acknowledge that swine flu was turning out mild. But weather experts are always fast to say so when a hurricane is getting weaker or heading out to sea. Even so, the media always gear up for a big weather story, and then they’re reluctant to see it peter out.
  6. Eventually, it becomes clear even to television journalists that the worst case didn’t happen, or at least it didn’t happen where their cameras were. Hindsight bias takes over. Whether they took precautions or not, audience members blame the media and their sources for having overstated the risk; some sources blame the media; some media blame the sources. Hype becomes the meme: “Remember when they said we were all gonna die?” – even though no one ever said that.
  7. As you point out, the hype meme sometimes makes it harder for officials and experts next time – harder to talk themselves into issuing sufficiently emphatic warnings, and harder to get the warnings heeded. For example, bird flu didn’t turn into a human pandemic (yet) and the swine flu pandemic turned out mild. Millions think both risks were exaggerated – which probably means warnings about the next pandemic will be less emphatic and less effective than they ought to be.
  8. Sometimes – this happened with Irene – the media belatedly notice significant impacts and reconsider the hype meme. In the case of Irene, the news focus shifted from wind damage to inland flooding. And sometimes the media belatedly acknowledge that “better safe than sorry” isn’t a bad guideline for public officials and even for ordinary citizens – that excessive precautions are preferable to insufficient precautions. On August 31, NPR even ran a story on hindsight bias, pegged to Irene.

The “better safe than sorry” message (excessive precautions are preferable to insufficient precautions) can sound defensive when it’s deployed retrospectively, after a hurricane or other crisis has turned out anticlimactic. The message is a lot more effective when used prospectively. In the run-up to Irene, New York’s Mayor Bloomberg got it close-to-right with what The New York Times called “his oft-repeated mantra of precaution and prudence.” The August 28 Times story offers a number of Bloomberg Irene-related better-safe-than-sorry quotations, including this rhetorical question aimed at citizens reluctant to evacuate: “Can you imagine … looking back and saying we could have avoided a tragedy because you just didn’t want to get going until you had to?”

But it’s possible and useful to go further than Bloomberg went. The reality of preparedness isn’t “better safe than sorry.” You’re sorry either way: Either you’re sorry about the disaster and wish you’d taken and urged more precautions, or you’re sorry you took and urged precautions that turned out unnecessary. But the first “sorry” is a lot worse than the second. As my wife and colleague Jody Lanard likes to put it: You’re not damned if you do [take precautions that turn out unnecessary] and damned if you don’t [take precautions that would have saved lives]. You’re darned if you do and damned if you don’t.

Here’s what I would have liked Bloomberg to say:

The experts tell me that Irene may weaken or miss the city. Let’s hope it does both. But we can’t count on it. The experts also say that if Irene stays as powerful as it is now and continues on its most likely track, things could get really bad here – and that’s what we’ve got to prepare for. Yes, if we get lucky I’ll take some heat for evacuating people and shutting down the subways. We’ll all feel a little foolish if we take precautions that turn out not to be needed. But I’d rather take some heat and feel foolish than risk a disaster we could have prepared for and didn’t.

As a cyclone named Katia strengthens or weakens out in the Atlantic Ocean, following its still-unknown course to wherever, here’s the weather crisis communication bottom line as I see it.

  • Official weather forecasters should keep emphasizing their worst and most likely predictions, and the degree of uncertainty or expert disagreement that accompanies those predictions.
  • Officials and journalists should keep emphasizing how bad hurricanes can get and how important it is to take precautions.
  • Officials and journalists already do a better job of acknowledging uncertainty with regard to extreme weather events than with regard to most risks – but they could do better: proclaiming their uncertainty more aggressively and talking in advance about how foolish we’ll feel if the hurricane fizzles and how much worse-than-foolish we’ll feel if it’s a bad one and we didn’t prepare properly.
  • Officials and journalists really need to focus more on the better-safe-than-sorry uncertainty theme, not just afterwards (if the hurricane turned out not so bad) but beforehand (when we don’t know how it will turn out).

And some advice for crisis warnings generally – whether you’re warning about a hurricane or a pandemic:

  1. Warn us how bad it might be, and how wise it is to take precautions.
  2. Remind us that it might not be that bad, and how foolish we may feel about those precautions (even though we weren’t actually foolish) if it fizzles.
  3. Emphasize that hindsight bias is a misleading guide to action – that the proper test of precautions isn’t whether they turn out necessary, but whether they’re proportionate to the risk as it looked beforehand. We take precautions because of what might happen, not because we’re confident it will happen.
  4. If it fizzles (or just starts looking less bad than it might have been), don’t hesitate to say so, with appropriate attention to the uncertainty of that as well; it could get bad again.
  5. When it’s all over, put it into context: how bad it was, how bad it would have been if we hadn’t taken precautions, how bad it might have been if we hadn’t gotten lucky, and how bad it might be next time.

Getting apathetic or resentful health department people interested in crisis communication

name:Heather Kost
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Precaution Advocacy index       link to Outrage Management index

field:Public information officer, county health department
date:September 1, 2011
location:Ohio, U.S.
 

comment:

I have worked with our preparedness coordinator to put together a risk communication plan for emergencies. Part of the plan covers what the health department staff will do. The other part covers what role the health department will play in communicating to county residents.

Overall, we have had a lot of employees uninterested in emergency preparedness. They wonder why they have to be burdened with it.

I am looking for a fun and creative way to introduce the importance of this to them – and more than that, I am looking for a way to present this to them so that they will understand their role and will discover that they will play a vital role in a county emergency.

Do you have any suggestions? I read through your article “Games Risk Communicators Play” and in your terms the population we are dealing with here are “donkeys.”

Any suggestions would be great!

peter responds:

In the wake of 9/11 and the anthrax attacks that followed immediately afterwards, the CDC paid me to give a series of bioterrorism-focused crisis communication seminars for state and county health departments. Most attendees were minimally interested. With some justice, they felt that they were understaffed and under-budgeted for the serious problems they were already facing, that they were unlikely ever to face a bioterrorism threat, and that preparing them to communicate better during such a threat was an unwelcome distraction.

My best shot, I found, was to acknowledge that that was how they felt, concede that they had a point, and ask whether there were areas of overlap between the federal mission that had brought me to town and the various missions to which they felt the deepest commitment. In essence, I asked participants to consider whether there might be anything in bioterrorism crisis communication training that they could use to address natural disasters and other emergencies they actually confronted from time to time. (I wasn’t prescient enough to use flu pandemics as an example.)

Doing this helped only a little. And I get it that you’re having trouble arousing your colleagues’ interest in how to communicate better about emergencies your county is actually likely to face, let alone ones like bioterrorism that it probably won’t face.

But the strategy behind the story may be helpful to you: Figure out why your audience is resisting your message, acknowledge the resistance, validate its sources to the extent that you honestly can, and then invite the audience to help you find a way to make your time together useful.

Conceptually, your goal is to teach people crisis communication – what to do in a high-hazard, high-outrage situation. Odds are your emergency risk communication plan focuses more on logistics than on what I sometimes call “meta-messaging” to guide the public through upsetting crises. Here are some of the key crisis communication meta-messaging principles: avoiding over-reassurance, acknowledging uncertainty, sharing dilemmas, being willing to speculate, and focusing more on denial than on panic. If you’re interested in a meta-messaging approach to crisis communication, see my 2004 column entitled “Crisis Communication: A Very Quick Introduction.” See also my crisis communication seminar handouts. Or look for this website’s other relevant resources in my Crisis Communication Index.

Even though crisis communication is your topic, it isn’t your problem. Your problem is getting your health department colleagues to want to learn crisis communication. That’s really two problems.

Insofar as your audience is apathetic about crisis communication, your problem is “precaution advocacy” – trying to arouse more outrage about what you consider a high-hazard low-outrage risk: the possibility of future health emergencies and the need to communicate effectively during such emergencies. Your focus on finding “a fun and creative way” to present your material suggests that you think apathy is your main problem. You may well be right. For a primer on how to arouse more concern in apathetic people, see “‘Watch Out!’ – How to Warn Apathetic People.” For a list of more resources on this topic, see my Precaution Advocacy Index.

But to the extent that your audience members resent your efforts to distract them from their daily focus on obesity or foodborne illness (or whatever they’re focused on), the main problem may not be precaution advocacy. Rather than being insufficiently outraged (concerned) about the possible need to function well in an emergency, they may be excessively outraged (irritated) about being asked to take on yet another task … and one they consider pretty low-priority. So maybe your problem is “outrage management.” You may need to reduce your colleagues’ outrage about your presentation as a precondition for any effort to increase their outrage about health emergencies. If so, check out the resources listed in my Outrage Management Index.

I hope this helps a little. Please let me know what you come up with.

Sarcasm isn’t an effective way to persuade parents to vaccinate their kids

name:Ken
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

field:Public information officer
date:August 21, 2011
location:California, U.S.

comment:

I would like to hear your thoughts on a strategy for encouraging a specific behavior among those who reject it: accepting their rejection and advising them to adjust by adopting other behaviors that may appear less appealing.

Here’s what I mean. To parents who refuse to immunize their child because of fears about adverse health effects, what do you think of the following message?

If your child is not immunized, then please refrain from taking him to Europe because of a huge measles outbreak; teach him how to properly wear an N-95 mask and encourage him to wear it during flu season; if he is an infant, keep him secluded during his first year of life and allow only individuals who are recently immunized against pertussis to have contact with him.

This strategy gives these parents actions that they can take to protect their child from vaccine-preventable diseases while respecting their decision not to immunize their child. It also challenges them by saying, in effect: since your decision to refrain from immunization is based on your belief that doing so protects the health of your child, why wouldn’t you take these other health-promoting actions?

peter responds:

Years ago a Canadian health agency sought my advice about ways of persuading motorcyclists to wear helmets, short of legally requiring them to do so. I suggested they might want to offer motorcyclists a choice between accepting a mandatory helmet law and signing a waiver. The waiver would not only acknowledge the danger of helmetless motorcycle-riding; it would explicitly waive the signer’s right to government-financed medical care for any head injury sustained while riding without a helmet.

I didn’t actually support my own suggestion, and the agency wisely didn’t take it. Like Jonathan Swift’s eighteenth century satirical essay A Modest Proposal, my suggestion was basically sarcastic.

I suspect yours is too. The first sentence of your comment accurately describes what you have in mind: convincing people to vaccinate their kids by “suggesting” obviously unacceptable (“less appealing”) alternatives. (If you’re serious about recommending N-95 masks for children not vaccinated against the flu, then to be consistent you should recommend them for all children – because the flu vaccine has a relatively low efficacy rate compared with most childhood vaccines, especially in young children. But I doubt you’re serious.)

By contrast, your last paragraph implies that the unacceptable alternatives are real alternatives, that you are “respecting” the decisions of anti-vax parents by suggesting other ways for them to protect their children’s health. I don’t buy it, and I don’t think anti-vax parents would either.

Sarcasm is a notoriously ineffective communication strategy. It appeals to the speaker, providing a deniable way to vent his or her own outrage – and often his or her contempt for the other person. It may appeal to third parties who share the speaker’s outrage and contempt. But it’s exceedingly unlikely to appeal to the target of the sarcasm.

Especially if it’s denied! Openly acknowledged sarcasm may sometimes be effective, especially if there’s already a well-established good relationship in which teasing plays a role: “Yes, I’m making fun of your nutty idea….” Even then, sarcasm is risky. Sarcasm that’s disingenuous (fake-sincere sarcasm) can only make things worse.

But offering alternatives that aren’t sarcastic – that aren’t obviously unacceptable – is a very promising approach. As I’m sure you know, the number of childhood vaccinations recommended by the CDC has ballooned in recent decades. Parents who aren’t doctrinaire vaccination opponents but are leery of subjecting their kids to so many vaccines would benefit enormously from an alternative vaccination schedule to guide their decisions about which vaccines are safest to skip or delay.

Several such alternative schedules exist, but the CDC has steadfastly refused to develop one. As far as the CDC is concerned, either you give your kids all the recommended vaccinations at the recommended times or you’re on your own.

I understand the concern that an alternative “vaccination lite” schedule will attract some parents who might otherwise have adhered to the complete schedule. On the other hand, it will also attract some parents who might otherwise have decided not to vaccinate their kids at all. And it will improve parents’ “selective vaccination” decisions by enabling them to lean on the CDC’s prioritization preferences. Above all, it is genuinely respectful of parental vaccination concerns. Over the long haul, I believe, this respect will itself encourage more parents to begin to accept the assurances of public health professionals that childhood vaccines are safer than childhood vaccine-preventable diseases (mostly preventable – no vaccine is 100% effective).

Similarly, I would welcome thoughtful advice from public health agencies about ways to protect unvaccinated children from vaccine-preventable diseases. Such advice is already available with regard to children who aren’t eligible for certain vaccinations for medical reasons. The same advice is presumably relevant to children whose parents are anti-vax or selective vaccinators, but it isn’t routinely offered to them. It should be. All-or-nothing is neither sound medicine nor sound risk communication.

Of course a brochure on “What to Do If You’re Not Vaccinating Your Children” would need to stress that vaccination is the first, best line of defense. But it would go on to discuss a range of alternatives – ways of achieving some protection, some harm reduction, for those who can’t vaccinate their kids or choose not to vaccinate their kids … and even for those who are vaccinating their kids but are concerned that the vaccine might not take.

The purpose of offering alternatives isn’t to convince parents to vaccinate because the alternatives are more onerous and less effective. The purpose of offering alternatives is to offer alternatives, the best alternatives you can think of – and to offer them respectfully, not sarcastically. If some parents respond to the deficiencies of the suggested alternatives by deciding to vaccinate their kids after all, that’s a good outcome. But if that’s the intended outcome, the sarcasm will probably show and the message will probably boomerang.

Ken responds:

Thank you for your response to my earlier question. Although I had not intended the message to the public to appear sarcastic, I can certainly see how it could be taken that way and have a very negative impact.

Perhaps a better example of the strategy that I was asking about is when emergency workers in the Southeast go door-to-door to urge residents to evacuate because of an impending hurricane. Residents who refuse to evacuate are asked to use a magic marker and write identifying information on various body parts, in the event that they lose their life from the hurricane. Emergency workers report that some residents, upon hearing this request, change their mind and choose to evacuate.

What are your thoughts about this strategy?

Peter responds:

I have heard this anecdote too, and have sometimes used it as a good example.

It’s possible that a prospective hurricane victim could hear this request as sarcastic, as an intentional reductio ad absurdum aimed at demonstrating the foolishness of not evacuating – in which case it would probably boomerang too. But if the request is sincerely meant, I think it stands a good chance of convincing the recipient that the speaker thinks the danger is severe. And that in turn might make the recipient rethink his or her decision to stay put.

As far as I can tell, the anecdote first went public in 2005, in the wake of Hurricane Katrina. Joy Buchanan of Virginia’s Daily Press wrote a September 3 story attributing the strategy to a local emergency response official, Jim Judkins. Three days later, New York Times columnist John Tierney featured it (with credit) in a widely quoted column entitled “Magic Marker Strategy.”

Figuring out how risky it is to fly your own airplane – the pesky “denominator problem”

name:Wayne
field:Electrical engineering professor
date:August 21, 2011
location:Indiana, U.S.

comment:

I am considering learning to be a private pilot. An obvious concern is safety. In trying to evaluate the wide range of information on the web, I have found various (apparently conflicting) answers.

Many are oriented toward the question of how safe driving is compared to flying.

Unfortunately, the many possible assumptions (per hour, per mile, per vehicle, per passenger, per participant, all general aviation vs. single-engine piston, etc.) allow the numbers to support almost any position. Flying enthusiasts would like to see a relatively low number (general aviation is 10 times safer than driving), while armchair statisticians on user forums get numbers like 5–10 times the fatality rate of driving.

Obviously, comparisons with things like boating, swimming, caving, rock climbing, and motorcycle riding are instructive, if subject to many of the same ambiguities.

I would love to see an effort to communicate the complexities of reality without overwhelming (I found the Nall Report a bit more than I could chew).

I’d like to know roughly how much more risk I would be assuming by flying my family a few hours in a small plane vs. driving, and how that compares with things like swimming or boating, but I know that the answer will vary with my number of hours of experience, etc.

I haven’t found the information I’m looking for yet, but I think you seem to be in the business of developing this kind of information. I think there’s a real gap and a lot of interest in the general aviation safety question. Perhaps you could help.

peter responds:

First of all, this response isn’t going to give you a usable answer to your question. I routinely help clients communicate how risky something is – but the clients have to provide the data. If I were to delve into the available information on the dangers of flying (versus driving, boating, etc.), I would quickly become immersed in the same morass that you found so discouraging.

And I’d reach the same conclusion you reached: It all depends on the denominator.

Every representation of risk is at least implicitly a fraction. The numerator of the fraction is how much of some undesirable outcome you counted. The denominator is the universe you looked at in order to find that number of undesirable outcomes.

Choosing your numerator is controversial enough. Should you count the number of accidents, the number of injuries, the number of deaths, the number or total dollar value of resulting insurance claims, the number or total dollar value of successful insurance claims, or what?

Some potentially useful numerators are awfully hard to count. The number of near-misses, for example, might be a good risk measure for your civil aviation question … if we had a decent way to determine how many times pilots have almost lost control. For high-magnitude, low-probability risks that seem impossible until they happen (think Fukushima and Deepwater Horizon), near-misses are just about the only meaningful risk measure, even though they’re exceedingly difficult to count.

When looking at toxic chemical risks, to pick a different example, it’s fairly easy to measure poisoning deaths from acute exposure; much harder to measure cancer deaths from chronic exposure; and harder still to measure long-term health impacts other than death, ecosystem impacts, impacts on future generations, synergistic impacts in combination with other chemicals, etc. Understandably but perhaps unwisely, we tend to focus most on the numerators we can measure most reliably. We should be careful not to overvalue the numerators we’re good at measuring, and not to discount the risks we don’t know how to measure at all.

And of course some numerators are scarier than others. The number of tons of toxic waste at a hazardous waste dumpsite, for example, usually sounds a lot more alarming than the actual exposure of the dump’s neighbors to the toxicants onsite. Communicators tend to pick the numerators that will best make their point. Once you’ve decided whether the risk is serious or trivial, that decision helps you choose either an alarming numerator or a reassuring one: whichever numerator will help people see that you’re right.

If you’re really trying to enlighten your audience rather than corner it into sharing your conclusion, you’ll choose several numerators. Unfortunately, risk communicators don’t do it that way very often.

Often the choice of numerator is fairly straightforward. Odds are you’re most interested in the number of people who died in civil aviation accidents. That’s your numerator. Now comes the tough part: choosing your denominator.

Consider the occupational risks of coal mining, for example. The numbers look very different depending on whether you focus on underground mining (relatively dangerous) or strip mining (relatively safe) or both. Similarly, coal mining is orders of magnitude more dangerous in places like China than in places like the United States. And coal mining is a lot safer today than it was a hundred years ago.

So decisions about what universe to look at as you count coal mining deaths will have a huge impact on your conclusions about coal mine safety – U.S. strip mines in the last decade on one extreme, versus worldwide underground mines over the last century on the other.

Here’s a more complicated coal mining factoid: The number of accidental deaths per million tons of coal mined in the U.S. decreased steadily over the last half of the twentieth century, while the number of accidental deaths per thousand coal mine employees actually increased. These are not incompatible results. Automation enabled the coal industry to mine more coal with fewer people, but some of the remaining employees ended up in very dangerous jobs indeed (often in mines too small to automate properly).

Which denominator makes more sense, tonnage or employment? It depends on your purpose. If you’re trying to advise your children about career choices, deaths per thousand employees is the relevant measure. But if you’re trying to assess the pros and cons of coal as an energy source, you’re better off with deaths per million tons of coal (or better yet, deaths per million BTUs of energy).
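The divergence between the two coal-mining denominators is easy to reproduce with a quick sketch. Every number below is invented purely for illustration (none is an actual mining statistic), and Python is just a convenient calculator here:

```python
# Hypothetical illustration of how automation can push the two coal
# mining fatality rates in opposite directions. All figures invented.

def deaths_per_million_tons(deaths, tons):
    return deaths / (tons / 1_000_000)

def deaths_per_thousand_employees(deaths, employees):
    return deaths / (employees / 1_000)

# Labor-intensive era: many workers, modest output.
early_tons_rate = deaths_per_million_tons(500, 500_000_000)      # 1.0
early_jobs_rate = deaths_per_thousand_employees(500, 400_000)    # 1.25

# Automated era: far fewer workers mine far more coal, but the
# remaining jobs are concentrated in the most dangerous mines.
late_tons_rate = deaths_per_million_tons(100, 1_000_000_000)     # 0.1
late_jobs_rate = deaths_per_thousand_employees(100, 70_000)      # ~1.43

# Per ton mined, the industry got ten times safer;
# per employee, it got riskier.
assert late_tons_rate < early_tons_rate
assert late_jobs_rate > early_jobs_rate
```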

That’s the problem you had trying to assess the risks of civil aviation. You put it perfectly: “Unfortunately, the many possible assumptions (per hour, per mile, per vehicle, per passenger, per participant, all general aviation vs. single-engine piston, etc.) allow the numbers to support almost any position.”

For assessing your individual risk, moreover, any single denominator you find in the literature is going to be, to a greater or lesser extent, the wrong denominator.

  • You can improve your risk estimate by confining yourself to recent U.S. data (since you’re a U.S. pilot and you won’t be flying into the more dangerous past).
  • You can probably narrow your denominator a bit more by looking for data on the sorts of planes you’re planning to fly. (Bigger corporate jets have a better safety record than small private planes.)
  • You might even be able to get data for pilots who resemble you in some other key ways: age, hours of training, hours of experience, etc.

But only you know whether you tend to be careful or reckless when operating machinery; whether you’re meticulous or a bit casual about equipment maintenance; whether your family is likely to be a help or a distraction onboard; whether you usually heed warnings (bad weather ahead, for example) or shrug them off; whether you’re inclined to push yourself even if you’re feeling tired or a bit sick; and whether you might give in to the temptation to have a drink or two before taking off.

Even if you can manage to be completely honest with yourself about these important risk estimation variables, you won’t find the appropriate data to create a denominator that truly represents pilots like you in situations like yours.

Risk comparisons between flying and other means of transport compound the problem further; now you need comparable denominators for completely different risks. Comparisons of driving to flying commercial, for example, are often expressed in terms of deaths per hundred million miles traveled. Commercial aircraft carry dozens or hundreds of passengers at once, of course, while cars carry at most three or four. So putting vehicle miles in your denominator will make cars look safer; putting passenger miles in the denominator will make planes look safer. Most of the flight risk is confined to a few minutes taking off and landing, whereas automotive risk is mostly on the highway. So the safest airplane trips (if the denominator is mileage) are the long ones, racking up lots of miles with only one takeoff and one landing. The safest car trips are the short ones, with the fewest high-speed, high-risk highway miles and the most low-speed, low-risk miles on local roads.

But why use mileage for your denominator at all? Vacation travel decisions are more time-dependent than distance-dependent. The real choice may be driving a few hours to a nearby holiday destination versus flying the same few hours to a more distant destination. If so, deaths per hundred million passenger hours (or deaths per million passenger journeys) are a better measure than deaths per hundred million passenger miles.

Here’s another factoid about time versus distance I can’t resist adding: Drunk driving, I am told, is actually safer than drunk walking – for the drunk, at least. Suppose you’ve had too much to drink and nobody is available to drive you home. Your only choices are to drive home drunk or walk home drunk along the same highway. In this extremely hypothetical example, let’s leave out risks other than your own death – the risk of killing other people, the risk of getting stopped by the cops, etc. Per unit time, drunk driving is obviously more dangerous than drunk walking. But it’ll only take you a few minutes to weave your way home in your car, whereas you’ll be staggering down the road for an hour or more if you walk – at constant risk of veering into the path of an oncoming car. Per unit distance, the relevant variable if you’re committed to getting home, drunk driving is safer (for you) than drunk walking.
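A hypothetical calculation makes the per-hour versus per-mile reversal concrete. The per-hour fatality risks below are invented for illustration; only the speed difference does the real work:

```python
# Hypothetical sketch of the drunk-driving-versus-drunk-walking
# factoid. The per-hour risks are invented; the point is the ratio.

drive_risk_per_hour = 0.0010   # driving drunk: more dangerous per unit time
walk_risk_per_hour  = 0.0002   # walking drunk: less dangerous per unit time

drive_speed_mph = 30
walk_speed_mph  = 3

# Per-mile risk = per-hour risk / speed. The walker takes ten times
# as long to cover each mile, so the per-hour advantage reverses.
drive_risk_per_mile = drive_risk_per_hour / drive_speed_mph
walk_risk_per_mile  = walk_risk_per_hour / walk_speed_mph

assert drive_risk_per_hour > walk_risk_per_hour   # per unit time: driving worse
assert drive_risk_per_mile < walk_risk_per_mile   # per unit distance: walking worse
```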

Speaking of drunkenness, we know that a very high percentage of driving deaths are attributable to teenagers driving between midnight and four a.m. after consuming alcohol. If you’re middle-aged and do your driving sober in the daytime, driving is a lot safer than the overall statistics suggest – which means flying isn’t as safe, comparatively, as you might think.

Also relevant here is the distinction between absolute risk and relative risk. Once you have picked both a numerator and a denominator for your risk fraction, you have calculated the absolute risk. Suppose it’s 1.4 deaths per hundred million passenger miles flown (a number I just picked out of thin air). Suppose the comparable driving risk fraction is 2.1 deaths per hundred million passenger miles (another arbitrary number). Those are both absolute risks.

Given these fictional assumptions, now we can calculate the relative risk of flying versus driving. Here are four mathematically equivalent ways to express this relative risk:

  • Driving is 50% more dangerous than flying. (2.1 is 50% higher than 1.4.)
  • Driving is 150% as dangerous as flying. (2.1 is 150% of 1.4.)
  • Flying is one-third less dangerous than driving. (1.4 is one-third smaller than 2.1.)
  • The risk of flying is two-thirds the risk of driving. (1.4 is two-thirds of 2.1.)

Although they are mathematically equivalent, these four expressions sound different to most people, even people who are quite comfortable doing the math. (People who have trouble doing the math may find it hard to believe they’re equivalent at all.) All four are expressions of the relative risk of flying versus driving. Relative risk is very often what risk managers and ordinary citizens are most interested in.
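Readers who find the equivalence hard to believe can check it numerically. This sketch (in Python, using the invented 1.4 and 2.1 figures from above) confirms that all four phrasings describe the same ratio:

```python
# Numerical check that the four relative-risk phrasings are the same
# ratio, using the column's invented absolute risks
# (deaths per hundred million passenger miles).

fly, drive = 1.4, 2.1

assert abs(drive / fly - 1.50) < 1e-9           # driving is 150% as dangerous (50% more)
assert abs((drive - fly) / drive - 1/3) < 1e-9  # flying is one-third less dangerous
assert abs(fly / drive - 2/3) < 1e-9            # flying's risk is two-thirds of driving's
```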

In this flying-versus-driving example, we stipulated the two absolute risks first and then calculated the relative risk. But suppose somebody just told you the relative risk of two activities whose absolute risk was completely unknown to you. The information would be impossible to interpret sensibly. Discovering that X (driving) is 50% more dangerous than Y (flying) isn’t terribly useful unless you already know how dangerous one or the other of them is in absolute terms.

Risk communicators often make this error, citing relative risks without the underlying absolute risks. Sometimes they do so intentionally, using a big relative risk to make a small absolute risk sound big, or using a small relative risk to make a big absolute risk sound small.

Suppose chemical A is ten times as dangerous as chemical B – that is, 1,000% as dangerous. That’s the relative risk. Consider two different sets of absolute risks that both yield this 1,000% (10×) relative risk:

Situation One:
  A: 10 deaths per million people exposed
  B: 1 death per million people exposed

Situation Two:
  A: 100,000 deaths per million people exposed
  B: 10,000 deaths per million people exposed

In both situations, A is ten times as risky as B. The two situations have the same relative risk. But in Situation One, both A and B are small risks. In Situation Two, both A and B are huge risks.

Now let’s look at two different relative risks. As before, chemical A is ten times (1,000%) as dangerous as chemical B. In addition, chemical X is only 110% as dangerous as chemical Y (that is, X is 10% more dangerous than Y).

At first glance, the difference between A and B seems huge, while the difference between X and Y seems almost trivial. That’s true in relative risk terms. But it may or may not be true in absolute risk terms. Let’s invent some absolute risks to add to the picture:

A: 10 deaths per million people exposed
B: 1 death per million people exposed

X: 1,100 deaths per million people exposed
Y: 1,000 deaths per million people exposed

Suddenly things look very different. The difference between A and B is only 9 deaths per million people exposed, while the difference between X and Y is 100 deaths per million people exposed. As a public health improvement or an individual risk management decision, replacing X with Y may be a lot higher priority than replacing A with B.
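The contrast between a big relative gap and a small absolute gap is quick to confirm in code. A short Python sketch with the same invented rates:

```python
# Invented rates, in deaths per million people exposed.
a, b = 10, 1        # A is 10x (1,000% of) B
x, y = 1100, 1000   # X is 1.1x (110% of) Y

print(f"Relative risks: A/B = {a / b:.1f}, X/Y = {x / y:.1f}")
print(f"Absolute gaps:  A-B = {a - b}, X-Y = {x - y} deaths per million")
# The 10x pair differs by only 9 deaths per million exposed;
# the 1.1x pair differs by 100.
```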

But we’re not done yet. We still don’t have our denominators: how many people are actually exposed to A, B, X, and Y. Depending on what those numbers are, the risk comparison of A to B and X to Y may change yet again. Here is one out of an infinite number of possible sets of denominators:

A: 10 deaths per million people exposed; 10 million people are exposed annually. Expected number of deaths per year: 100
B: 1 death per million people exposed; 100 million people are exposed annually. Expected number of deaths per year: 100
X: 1,100 deaths per million people exposed; 10 million people are exposed annually. Expected number of deaths per year: 11,000
Y: 1,000 deaths per million people exposed; 100 million people are exposed annually. Expected number of deaths per year: 100,000

Now we really know something. In this specific example, A and B kill the same number of people per year, 100; A is ten times as deadly to the people who are exposed, but only one-tenth as many people are exposed. X is more than a hundred times as deadly as either A or B, killing 11,000 people a year. But the real killer is Y. Even though Y is slightly less deadly than X to the people who are exposed, ten times as many people are exposed, yielding an annual death toll of 100,000 people. Focus your risk mitigation efforts on Y!
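The whole chain of reasoning – rate times exposed population equals expected deaths – fits in a few lines of Python. All the numbers below are the invented ones from the example:

```python
# Invented figures: (deaths per million exposed, millions of people exposed annually)
risks = {
    "A": (10, 10),
    "B": (1, 100),
    "X": (1100, 10),
    "Y": (1000, 100),
}

for name, (deaths_per_million, millions_exposed) in risks.items():
    expected_deaths = deaths_per_million * millions_exposed
    print(f"{name}: {expected_deaths:,} expected deaths per year")
# A: 100, B: 100, X: 11,000, Y: 100,000 -- Y dominates,
# even though per person exposed it is the less deadly of the X/Y pair.
```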

Suppose flying isn’t 50% safer than driving, it’s 50% more dangerous. Should you fly anyway, simply for the pleasure and convenience? The relative risk data give you very poor guidance for this decision. “Fifty percent more dangerous” sounds like a lot, but wait till you calculate the absolute risks. Assume flying adds up to 2.1 deaths per hundred million passenger miles while driving is only 1.4 (I made up the numbers, remember). And assume you’re planning to fly a total of 100,000 passenger miles over your entire life as a pilot (four people in the plane for 25 trips of a thousand miles apiece). 100,000 is one-thousandth of a hundred million. So if you end up driving those 100,000 passenger miles, you’re looking at 0.0014 statistical deaths; if you fly them instead, the number goes up to 0.0021 statistical deaths.

The lifetime safety penalty to you and your family for deciding to fly instead of drive: 0.0007 deaths. In other words, the chance of somebody in your family dying because you decided to go be a pilot is less than one-in-a-thousand. That’s a risk you may or may not consider acceptable – but an additional lifetime absolute risk of 0.0007 deaths sounds a lot smaller than that 50% relative risk sounded.
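The lifetime calculation above is simple enough to script as well. A Python sketch, again using the made-up 2.1 and 1.4 rates:

```python
# Invented rates, in deaths per hundred million passenger miles.
flying_rate = 2.1   # the "flying is 50% more dangerous" version
driving_rate = 1.4

lifetime_miles = 100_000            # 4 people x 25 trips x 1,000 miles
scale = lifetime_miles / 100_000_000  # one-thousandth of a hundred million

flying_deaths = flying_rate * scale    # 0.0021 statistical deaths
driving_deaths = driving_rate * scale  # 0.0014 statistical deaths
penalty = flying_deaths - driving_deaths

print(f"Lifetime safety penalty for flying: {penalty:.4f} statistical deaths")
# 0.0007 -- less than one chance in a thousand of a death in the family.
```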

Of course some of those 25 thousand-mile flights you’re looking forward to aren’t replacing equivalent car trips at all; if you decide not to get your pilot’s license you’ll probably stay home more. I’ll leave it to you to calculate the risk of hanging out around the house.

For more on the denominator problem and absolute versus relative risk, see this superb high school lesson plan from American Biology Teacher.

I know your question was about risk estimation, not risk communication, but let me end with some relevant risk communication conclusions.

1. Always provide a denominator. After every holiday weekend, the media run stories about how many people died on the road. Sometimes they compare the number to the average number of weekend traffic fatalities. That’s better than no comparison at all, but it’s still missing the essential denominators. How many people were on the road over the holiday weekend, and how many are on the road most weekends? Is holiday travel really more dangerous, or were there more accidents simply because there were more cars?
2. Explain why you chose the denominator you chose. What’s the best measure of tornado risk, for example? Just the raw number of people who died last year from tornadoes (no denominator)? The chances of a U.S. citizen dying in a tornado (national population as denominator)? The chances of dying that way if you live in a tornado-prone state (getting people out of the denominator who live where tornadoes rarely appear)? If you live in a tornado-prone state and don’t take cover during tornado warnings? If you actively go out looking for tornadoes to photograph? The last couple of possibilities aren’t as silly as they may sound. Lots of risks are statistically low mainly because most people take sensible precautions – and then risk-takers may use those statistics to justify not taking sensible precautions. If very few people die every year playing golf during a lightning storm, is it because it’s pretty safe to play golf during a lightning storm or because most people know enough to get off the golf course when a storm is coming?
3. Don’t choose an unrealistic denominator to “prove” that your side is right. A risk estimate for industrial air pollution is bound to be more alarming for the plant’s fence-line neighbors than for the overall community. Averaged across the entire state’s population, just about every factory is a trivial risk. Calculated for a hypothetical trespasser who climbs the stack naked and sits in the plume 24–7 for a lifetime, just about every factory is dangerous. Any denominator you pick will favor one side or another, even if you’re not intentionally making your choice to “prove” your case. So if you’re really trying to help people understand the risk, use several different denominators that point in several different directions.
4. Think hard about your numerator too. The denominator problem is the biggie, but the numerator in the risk fraction can pose problems too. In particular, be careful not to neglect important aspects of the risk simply because you don’t have a good way to measure them. Otherwise, risk communication best practice for numerators is a lot like best practice for denominators: explain why you picked the one you picked, try not to let your bias determine your choice, and provide balance by using more than one.
5. When talking about relative risk, always explain absolute risk too. Relative risk is informative. It’s the most straightforward summary of the difference between two risks. I’m not advocating against its use. But a small relative risk can be important if the absolute risks are big, and a big relative risk can be inconsequential if the absolute risks are small. Since lung cancer is a huge killer, anything that increases or decreases lung cancer risk by even a few percentage points is a big deal. A much larger percentage change in a much less common disease matters far less. Absolute risk should be the crucial risk information when you’re making decisions about what to do. But after a bad outcome, we all tend to focus on relative risk when assessing blame. Thus I may willingly accept a medicine with a small absolute risk of a dangerous side-effect; but if the side-effect materializes, I will probably be furious at my doctor for prescribing a medicine that had such a large relative risk of that side-effect.
6. Don’t forget about outrage. This Guestbook answer has been unusual for me; I haven’t used the word “outrage” once yet. But I would be remiss to end my answer without reminding readers of two basics:
  • Outrage determines people’s reaction to risk data far more than the data determine their outrage. Don’t get so preoccupied with your risk fraction that you neglect to address the outrage. If outrage is high, trust is low, and you’re trying to coerce, corner, or bamboozle people, your risk communication efforts are unlikely to succeed no matter how expertly you calculate and explain your risk fraction.
  • Contrariwise, if outrage is low, trust is high, and you’re really trying to help people understand the risk, you’ll probably come out okay even if you violate some of the recommendations in this answer.

Why U.K. nurses resisted swine flu vaccination – and why health care workers resist flu vaccination

name:Michael
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

field:Risk analysis master’s student, King’s College London
date:June 30, 2011
location:United Kingdom

comment:

During the 2009–10 swine flu pandemic, in a survey in the U.K., only 33% of nurses said they would agree to being vaccinated against H1N1 influenza. Considering the importance of immunising frontline nurses during a pandemic, I was wondering whether you had any explanations for this low uptake. Also, are there any improvements in terms of communication strategy that could be made in order to produce a greater uptake in the future?

Could the nurses have become apathetic to infectious diseases given their familiarity with the risk or as a result of warning fatigue?

Or is the issue more closely related to trust (i.e. nurses do not trust the information they are being given, despite it being the advice they are providing the patients)?

From a risk communications perspective, has the issue been complicated by the individual’s professional obligations (i.e. nurses feel pressured into accepting the immunisation due to their professional responsibilities)?

Here are two links to news stories with survey data:

peter responds:

Whether during a pandemic or not, persuading health care workers (HCWs) to get a flu shot is simultaneously a precaution advocacy challenge and an outrage management challenge. That is, some HCWs are insufficiently worried about the risk of influenza, while other HCWs are excessively worried about the risk of influenza vaccine. Some, of course, are both.

The same is true of ordinary citizens, and of vaccinations other than flu. But I’ll focus my response on the effort to convince health care workers to get a flu shot, and on what I consider the three key reasons why HCWs might resist flu vaccination.

Before designing a risk communication strategy to increase vaccine uptake, obviously, you’d want to know which of these reasons (and what other reasons) are high-priority in the situation you’re planning for.

Flu usually isn’t perceived as very dangerous, and after the early weeks the swine flu pandemic wasn’t either.

This is always the core vaccination precaution advocacy issue: Are people worried enough about the disease to overcome their resistance to getting vaccinated? It’s an issue with regard to both diseases too unfamiliar to take seriously yet (like HPV) and diseases too familiar to take seriously anymore (like measles). It’s especially an issue for influenza, the Rodney Dangerfield of infectious diseases: It “can’t get no respect.” Many health care workers (and others) imagine that getting the flu is no worse than catching a bad cold, so getting vaccinated against the flu feels like more trouble than it’s worth.

The swine flu pandemic was initially much more alarming than the seasonal flu, but pretty soon most people (and most HCWs) got the accurate impression that it wasn’t turning out very severe as pandemics go. They stopped responding to it as an oh-my-God pandemic and reverted to their normal ho-hum flu response.

The two articles you cite show this effect. In the first article, published on August 18, 2009, 30% of the U.K. nurses surveyed replied “no” when asked if they’d seek to get vaccinated when the vaccine became available; 37% said “yes” and 33% said “maybe.” That was bad enough. But in another poll of U.K. nurses nearly two months later, reported in the second article on October 11, 2009, 47% said no and only 23% said yes. What changed in the interim? Increasing evidence that swine flu was pretty mild for most people who got it, and was beginning to look less deadly overall than the average flu season.

You ask whether nurses might have become apathetic about infectious diseases in general and swine flu in particular because of excessive familiarity, or because of “warning fatigue.” Both are possibilities, I think.

Familiarity certainly does tend to diminish outrage (in this case, concern). But the effect is mostly a result of familiarity with the potentially risky situation, not familiarity with the bad outcome. If you drive a car for years without having a serious accident, for example, you may lose your fear of car crashes and become a less careful driver. But if you’ve actually been in a serious car crash, the risk of a car crash becomes much more memorable. That sort of familiarity tends to make people more fearful and more careful, at least for a while. (On the other hand, if you’ve survived a whole bunch of car crashes, you might get desensitized and careless again.)

I’d expect a nurse who had spent time with really sick swine flu victims to feel an increased desire to get vaccinated. A nurse who had spent time with mildly ill but very worried patients unnecessarily crowding hospital emergency rooms, on the other hand, might well see getting the swine flu jab as unimportant – even if doing so might help uncrowd the emergency rooms.

As for warning fatigue, it’s true that people who have heard too many warnings about risks that didn’t materialize are likelier to shrug off future warnings. This is a huge problem right now, I suspect, for pandemics in general. SARS and bird flu were prospective pandemics we were warned about that didn’t happen (yet, anyway); swine flu was a pandemic that actually happened but turned out mild – by some measures, milder than the average flu season. In the wake of these false alarms, the next pandemic warning will face considerable public skepticism at the start. But by the time vaccine is available, people will probably have a pretty clear sense of the pandemic’s severity. If it turns out severe, I doubt they’ll have any problem overcoming their warning fatigue.

Were U.K. nurses skeptical as they listened to repeated warnings about the need to get vaccinated against swine flu? Probably so. But my guess is that their skepticism was less a result of having been warned too often than of the mounting evidence that the swine flu pandemic was turning out mild.

Warning fatigue notwithstanding, the proper risk communication response to apathy is more warnings … and smarter, more effective warnings. For an all-purpose primer, see my 2007 column, “‘Watch Out!’ – How to Warn Apathetic People.” For a list of recommendations geared specifically to getting health care workers vaccinated, see the last section of my 2009 column with Jody Lanard on “Convincing Health Care Workers to Get a Flu Shot … Without the Hype.”

A lot of people, including health care workers, were nervous about the safety of the new swine flu vaccine.

This, of course, is always a key vaccination outrage management issue: Are people excessively worried that the vaccine might be dangerous?

Statistically, it is hard to think of a vaccine that’s more dangerous than going unvaccinated when an infectious disease is circulating widely. (Unless there’s some kind of screw-up, vaccines that are more dangerous than going unvaccinated don’t get licensed.) Nonetheless, the grounds for concern about vaccine safety are many and varied. Among them:

  • Fear of needles.
  • Nervousness at the counterintuitive idea of voluntarily subjecting yourself to a weakened or killed version of the very disease you’re supposed to be protecting yourself against.
  • Awareness of vaccine safety controversies, even well-debunked controversies – e.g. whether the MMR vaccine or the vaccine preservative thimerosal might be linked to autism.
  • Awareness of actual vaccine scandals – e.g. the “Cutter incident,” when a U.S. company in the 1950s manufactured an inactivated polio vaccine that wasn’t completely inactivated and caused paralytic polio in scores of vaccinees. (The head of the U.S. National Institutes of Health had ignored warnings from his staff that the Cutter vaccine was incompletely inactivated.)
  • Awareness that serious vaccine side effects, though uncommon, are not unknown.

New vaccines understandably (and rationally) feel riskier. And the pandemic swine flu vaccine, though manufactured the same old-fashioned way flu vaccines are conventionally manufactured, was arguably a new vaccine, aimed at a new strain of flu. When the first survey of U.K. nurses was conducted, it hadn’t even been licensed yet – and there were concerns that corners might be cut in the rush to get it licensed. The last time swine flu was a source of widespread human health concern was 1976, when a U.S. outbreak provoked President Ford to order a massive vaccination campaign. Early vaccinees showed a higher-than-normal incidence of Guillain-Barré syndrome, leading the government to cancel the program. All this might be in a nurse’s mind when deciding whether to be one of the first in the U.K. to get the new swine flu vaccine.

Of course if people were dying in droves from swine flu, all but the most vaccine-phobic health care workers would quickly overcome their concerns and line up to get vaccinated. In making the yes-or-no vaccination decision, people compare their fear of the disease with their fear of the vaccine. The most vivid fear wins.

I found it interesting that you didn’t mention vaccine safety concerns in your comment. You may think that nurses know too much science to imagine that a vaccine might be more dangerous than going unvaccinated. But survey after survey has shown that this isn’t the case. The August 2009 survey of U.K. nurses discussed in the first article you cited found that 60% of those who said they wouldn’t get vaccinated mentioned vaccine safety concerns as their main reason, while only 31% attributed their decision to the low swine flu risk.

(The other 9%, by the way, focused on inconvenience, claiming that they wouldn’t be able to take the time out of work to get vaccinated. Convenience issues are likely to be crucial when people aren’t especially worried about either the disease or the vaccine. I doubt the hassle factor matters much when one or the other is a source of serious concern.)

Since worries about vaccine safety are an important reason why HCWs resist flu vaccination, should we explain more forcefully how safe the flu vaccine really is? Yes and no. Reassuring information can certainly help, but only if the audience actually finds it reassuring. Over-reassuring information isn’t reassuring; it tends to backfire, especially when trust is low. So if the evidence is 90% on your side that X is safe, or even 99.9% on your side, you need to acknowledge the other 10% (or 0.1%) of the evidence that suggests there might be some risk to X after all. In a 2008 Guestbook entry, I made this case even with regard to the pretty thoroughly discredited link between vaccination (not flu vaccination) and autism.

Similarly, reassuring information is more reassuring when it comes from sources who are trusted. Hospital managements and public health professionals are known to be deeply committed to vaccination. So their assurances that a particular vaccine is safe may well be seen as biased and self-serving, rather like a factory management’s assurances that the emissions coming out the stack are safe. It would help to involve vaccination skeptics (if not actual critics) in the vaccine risk assessment process.

Health care workers are not necessarily inclined to do whatever their employers and public health authorities recommend.

Pretty much everybody who writes about the problem agrees with my first two reasons why many health care workers resist getting vaccinated: They’re insufficiently worried about the disease and excessively worried about the vaccine. But then most commentators move on to reasons I consider distinctly secondary (though still important): busyness, inconvenience, and the hassle factor; beliefs – largely justified – about the low efficacy of the influenza vaccine; beliefs about personal invulnerability or the efficacy of alternative precautions; etc.

HCWs’ disinclination to do what they’re told – by their employers or by public health authorities – rarely makes the list. I think it’s crucial, and I was pleased to see that it made your list. In fact, your comment raises both the possibility that nurses “do not trust the information they are being given” and the possibility that they might “feel pressured into accepting” the swine flu immunization. I agree on both counts – though the data are limited, because surveys rarely ask HCWs if their resistance to vaccination has anything to do with mistrusting the experts or disliking the pressure.

I could go on forever about the ways public health authorities have sacrificed trust. (See my speech transcript entitled “Trust the Public with More of the Truth: What I Learned in 40 Years of Risk Communication.”) But I’ll confine myself here to one aspect of the problem: the fact that public health authorities routinely overstate the risk of flu.

Here’s my favorite U.S. example. For years, public health officials tried to convince young people to get vaccinated by citing the estimate of 36,000 U.S. deaths per year, nearly always without mentioning that over 90 percent of those deaths were in the elderly and that non-elderly healthy people were vanishingly unlikely to die of the flu. Last year the CDC recalculated the estimate based on a different range of years, and came up with just 20,000 U.S. flu deaths per year. Some state and local health departments didn’t make the switch in time for the 2010–2011 flu season; we’ll see if they make it for 2011–2012.

Once it became clear that the swine flu pandemic was turning out mild, public health professionals were extremely reluctant to say so – for all sorts of reasons, from their worry that it might get worse to their fear of being accused of having hyped it earlier to their desire to find a use for all the vaccine they had ordered. For a U.S. case study, see “Why did the CDC misrepresent its swine flu mortality data – innumeracy, dishonesty, or what?” For a World Health Organization case study, see “The ‘Fake Pandemic’ Charge Goes Mainstream and WHO’s Credibility Nosedives.” (I haven’t done a U.K. case study.)

Exaggerated warnings work only as long as people don’t smell a rat. Then they backfire. How many health care workers in the U.K. smelled a rat by the later stages of the swine flu pandemic? And how many smell a rat during routine seasonal flu vaccination campaigns?

In a 2009 column, Jody Lanard and I presented three extended examples of what we called “flu prevention hype”: the need to get a flu shot every year, the value of cough etiquette and hand-washing, and the quality of the match between the flu strains that are circulating and the strains the vaccine protects against. We argued that flu prevention hype leads to learned mistrust, which leads health care workers (and others) to reject the experts’ advice to get a flu shot.

I want to emphasize that I am not claiming the expert advice is wrong. I get my flu shot every year, and urge family and friends to do the same. I’m claiming that hype – even on behalf of a valid recommendation – undermines trust and therefore diminishes compliance.

Pressure to comply has a similar effect. I have written about mandatory flu vaccination for health care workers three times:

There’s no question that more HCWs get vaccinated when they have to (or feel pressured to) than when it’s only recommended. But there are sizable collateral costs, I think: damage to morale, resentment of management, and above all an increased inclination to disapprove of the vaccine (and perhaps to tell patients so: “I had to get vaccinated but thank God you’re free to refuse”).

And I’m sure many HCWs notice the hypocrisy implicit in the fact that there is no comparable pressure on hospital visitors. A hospital policy that urged visitors to get vaccinated and asked them to wear masks if they were unvaccinated would suggest that hospital management was really worried about all nosocomial transmission of flu, not just iatrogenic transmission. (For those unfamiliar with these terms: nosocomial = originating in a hospital; iatrogenic = caused by medical treatment.) But if loved ones sit for hours next to the patient, coughing and sneezing at will, then potential herd immunity is majorly compromised, and so is the patient-health rationale for demanding that HCWs get vaccinated. Maybe management is worried mostly about absenteeism, not patient health. Maybe it’s just a power struggle.

HCWs are left feeling not just misled, but also abused.

The solution is obvious, though difficult. Public health authorities need to be scrupulous about acknowledging the small parts of the truth that don’t support their recommendations; that is, they need to apply the principles of informed consent even to vaccination ads and brochures. The employers of health care workers need to do likewise. Health officials shouldn’t overstate the risk of influenza or the efficacy or safety of the flu vaccine. Hospital administrators shouldn’t overstate the evidence that vaccinating health care workers reduces influenza incidence among patients; and if their real concern is staff absenteeism, they shouldn’t pretend it’s patient health. Perhaps most painfully, both groups need to acknowledge, first to themselves and then to their audiences, that they have sometimes undermined trust by overstating their case.

Am I making the same mistake here, overstating the case that HCWs’ resistance to flu vaccination is largely grounded in relationship issues like trust? Perhaps so. I have an enormous amount of evidence, far more than I can summarize here, that flu prevention hype (including vaccination hype) is widespread. But I have very little evidence that the hype makes health care workers less willing to get vaccinated. It makes sense, but I can’t prove it’s true.

Media coverage of Three Mile Island versus Fukushima: Getting experts versus vetting experts

name: Sharon M. Friedman
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Director, Science & Environmental Writing Program,
Department of Journalism and Communication,
Lehigh University
date:June 22, 2011
email:smf6@lehigh.edu
location:Pennsylvania, U.S.

comment:

As someone who worked on the TMI task force and a noted expert on media and risk, would you care to comment about what you thought of the media coverage of the Fukushima radiation issues versus the TMI coverage?

peter responds:

I haven’t got much to say about media coverage of Fukushima – which is itself striking to me as I review my five previous website Guestbook entries on that crisis, two of them quite long:

They’re all about how government and corporate officials managed their crisis communication efforts and how the public (especially the Japanese public) reacted or was likely to react. And yet I found little occasion to consider how the Japanese and international media were covering the crisis.

I’m not sure if this is because I was able to read only the English-language coverage, or because I have become so focused on risk communication by sources that I’m no longer sensitive to what journalists bring to the table, or because journalists were bringing very little to the table, just uncritically regurgitating what their sources were saying.

Of course the main communication difference between Fukushima and TMI is that nobody trying to make sense of Fukushima was confined to what was available in the mainstream media. A lot of what I read as I collected impressions of official Fukushima risk communication was online commentary that wouldn’t have existed when the Three Mile Island nuclear plant went haywire in 1979 – and wouldn’t have been accessible to me had it existed. (Even my reading of the media coverage, at least the English-language media coverage, was enormously quicker and more thorough online than it could have been in 1979 without an elaborate, expensive, and exhausting collection effort.)

This difference was perhaps most evident in efforts to explain radiation risk and other science and engineering aspects of the story. The best technical explanations were online, prepared pro bono by unaffiliated experts working in their proverbial pajamas.

That’s true of detailed background information about how radiation is measured and the radiation dose-response relationship. It’s true of infographics that summarized the comparative risk of various radiation exposures, including Fukushima exposures. It’s true of efforts to deduce more than officials were revealing about the state of the reactors from what they said about radiation releases, and to deduce more than officials were revealing about radiation releases from what they said about the state of the reactors. It’s true of efforts to assess the probability of various radiation release scenarios – both to figure out what was most likely to happen in the coming days and to judge the seriousness of the risk of a widespread radiation disaster. And it’s true of efforts to map the available radiation information, fill in the blanks, and guesstimate how much exposure people in various locations were likely to be accumulating.

Reporters weren’t doing all that. Expert volunteers were doing it – online. And they weren’t doing it in a vacuum. On many (though obviously not all) websites, there was a respectful and illuminating dialogue among volunteer experts and ordinary observers (some very alarmist, others very Pollyanna, and many in the middle).

Of course there was a lot of Fukushima science and engineering crap online as well. It’s safe to say that both the worst Fukushima technical information and the best Fukushima technical information were in blogs and listservs, not in the mainstream media.

The best media explanations were borrowed from these online explanations. Very early in the crisis, an expat in Japan put online a long email he had received from a relative in the U.S. with some nuclear expertise; it was widely linked and reposted (and challenged and revised), and then became the focus of a useful flurry of mainstream coverage. A bit later a risk comparison infographic made the same transition: One person offered it up as an unsolicited contribution to the global effort to understand Fukushima; others suggested improvements, some of which were incorporated in successive iterations of the graphic; it was widely linked, reposted, massaged, and discussed online; within days, various versions of it had been reprinted in mainstream media and linked on their websites.

I’m sure you will have encountered other examples of the same phenomenon.

As a result of the Web, the main technical information problem facing journalists covering Fukushima was exactly the opposite of the problem that faced journalists covering Three Mile Island. At TMI, reporters had great difficulty finding expert sources to help them understand and explain what they were seeing and hearing. Reporters with good Rolodexes (remember Rolodexes?) like Stu Diamond of Newsday had a huge advantage over those who were starting from scratch; news teams that brought along their own health physicists to monitor reporters’ exposure and keep them safe ended up using them and farming them out as sources. (Remember Harry Astarita, “Radiation Harry” to hundreds of reporters hanging around Middletown, Pennsylvania, trying to cover the TMI accident?)

At Fukushima, by contrast, the problem wasn’t getting expert sources; it was vetting expert sources. Everyone with broadband had access to more expertise than a reporter could possibly read and absorb. The problem was distinguishing genuine expertise from bogus “expertise”, the nuggets from the crap. And given that the reporter’s audience had the same access as the reporter to the same cornucopia of good and bad expert information, the only way journalists could add value vis-à-vis all that online expertise was by doing a better job than their audience could do of distinguishing the good from the bad. A mainstream media reporter who intelligently winnowed what knowledgeable people were saying online could add enormous value. But intelligent winnowing itself took considerable expertise … and an awful lot of time.

It is perhaps not surprising that most reporters chose instead to do something their audience couldn’t do, something more traditionally journalistic: ask officials questions and report their answers.

And that was certainly a useful thing for them to do! Most of the Fukushima-specific raw material the experts were chewing on came not directly from official sources, but via journalism. Without reporters at the scene, online experts thousands of miles away would have had far less to say, and what they said would have been far less relevant, authoritative, and useful. The Web may have destroyed the business plan of the mainstream media, but it hasn’t obviated the need for journalists on the ground!

It seems to me that reporters could have done far more than they did to harvest this crop of expert commentary. They fertilized it with facts they gleaned from interviews and official news conferences, but they didn’t harvest it nearly as much as they could have. (I apologize for the metaphor.) There was a greater flow of content from Fukushima reporters to online experts than from online experts to Fukushima reporters.

Still, I’ll bet the main difference between outstanding coverage and routine coverage of Fukushima wasn’t the extent to which each reporter unearthed new facts on the ground. It was the extent to which each reporter made intelligent, discriminating use of the glut of online expertise.

Note: Sharon Friedman, a journalism professor at Lehigh University, is a renowned expert on media coverage of risk. In 1979 she and I worked together on the Public’s Right to Information Task Force of the President’s Commission on the Accident at Three Mile Island.

Research on the trust/communication relationship – and the paradoxical role of trustworthiness and accountability

name:Lori Geckle
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Federal government health and environmental
risk communication
date:June 16, 2011
location:Maryland, U.S.

comment:

We’ve exchanged a few emails over the years, and I’ve found your insights very helpful. What I’m looking for today is any sort of academic research that has looked into the possible association between trust and communication. I’ve read many of the standard research articles by Covello, Slovic, Rowan, etc., but was wondering if you knew of anything specific.

The reason I’m asking is that my organization has been separating efforts to build a culture of trust from strategic communications, which I think is completely counterproductive. How can you have trust without effective communication skills, and vice versa?

I just know that the folks I work with would respond much better if I can reference/cite academic research in this area.

peter responds:

I am a notoriously poor bibliographer. I used to read most of the published risk communication research but failed to keep track of what I read where (and what I read nowhere but thought up – or made up – myself). Now I can’t even claim that. As you know, the riskcomm research literature has burgeoned, and my reading hasn’t.

And I confess I find the research literature to be fairly low-grade ore – too many methodologically careful examinations of narrow questions of greater theoretical interest than practical importance. I’m not terribly interested in relationships that are statistically significant but small, accounting for only ten or twenty percent of the variance.

I also have less confidence than I used to have that citing research support for riskcomm claims makes a big difference to technically oriented clients (or in your case colleagues). It seems to me that my clients’ objections to my recommendations are more grounded in their egos and their own outrage than in my failure to provide enough evidence. I have gotten further as a consultant by addressing these unstated motives to dismiss my advice than by taking clients’ demands for proof too much to heart. (See my 2007 column entitled “Talking with Top Management about Risk Communication” for a bit more along these lines.)

Still, you’re obviously right that it would be good to have more and better evidence to support our assertions about the relationship between trust and communication.

It’s a bit hard to imagine a study to show that lying to people leads them to mistrust you! I’m sure it’s provable, but it’s almost self-evident. On the other hand, I’d love to have studies showing that exaggerating/overselling X leads people to mistrust you about Y. I say this all the time – for example, that people mistrust public health claims about flu vaccine safety in part because public health officials so routinely exaggerate flu vaccine efficacy. I can provide plenty of examples besides this one that I think illustrate the point … but I don’t have any quantitative studies that prove it (though there may be some).

Similarly, it would be nice to have proof that candor (especially admissions against interest) increases trust.

I do remember one study that’s somewhat on point: “Presenting Uncertainty in Health Risk Assessment: Initial Studies of Its Effects on Risk Perception and Trust” by Branden B. Johnson and Paul Slovic. (This article, published in Risk Analysis in 1995, is available online to subscribers.) Johnson and Slovic found that acknowledging uncertainty increased perceived trustworthiness and decreased perceived competence. As I recall, the study authors interpreted this finding as one good effect (increased perceived trustworthiness) and one bad effect (decreased perceived competence). I would tend to see it as two good effects – because excessive public confidence in the “competence” of officials to work miracles is unsustainable and therefore harmful; it’s a precursor to disappointment and feelings of betrayal. But I have rarely succeeded in persuading my clients that they shouldn’t want the public to assess their abilities more highly than they assess them themselves.

In this context, it’s worth wondering whether Germany’s E. coli outbreak (ongoing as I write this) might have done less damage to the credibility of public health if German authorities had aggressively insisted from the outset that they weren’t sure they had nailed the source of the outbreak correctly and might very well have to back up and correct themselves in the days to come. My best guess is that people would have been just as cautious about eating cucumbers, lettuce, and tomatoes, and better prepared to learn later that they should have been avoiding sprouts instead.

I’m sure there are more studies out there that address the relationship between trust and communication, and even the relationship between trust and risk communication. But I’m the wrong person to ask for bibliographic help!

Lori responds:

Thanks so much for your response and insights, Peter. I always find them helpful.

I fully agree with everything you’ve said, with the exception of your point that research is minimally helpful. In my world, most non-communication people have responded very well to evidence-based research.

Do you have any specific suggestions that might help my case in arguing why communications SHOULD be included in the trust-building initiative?

I continue to be flabbergasted that these have been separated completely and absolutely. I have tried my best to convince “the powers that be” to reconsider, but to no avail. For example, the trust team is conducting a pilot project starting next month to assess the baseline level of trust at several sites. To me, this would also provide the perfect opportunity to assess baseline perceptions of communication effectiveness, preferred communication methods, level of knowledge/understanding, trusted sources of information, etc. But I’ve had no luck.

At this point, I’m ready to give up the fight for now since the project starts next month. But I feel quite strongly that at some point leadership has to reconsider this segmentation of efforts, particularly when it comes to the research the trust team is conducting. I also fully agree that ego and/or personalities may have something to do with the decision, so it will take me some time to suss all that out.

As always, I appreciate your taking the time to respond to me. I’ll be sure to continue monitoring your website (which I find very useful), and will likely be in touch again.

peter responds:

As I read your second comment, I began to suspect that your organization may be focusing too much on trust and too little on trustworthiness.

I’m not sure I like the idea of a “trust team” busy assessing the extent to which people trust your organization, presumably in order to figure out how to convince people to trust you more. I’d rather the team focused more on asking respondents about the sources of mistrust: “What are the things we have done or said that made you trust us less?” Learning how your organization has earned its stakeholders’ mistrust strikes me as more actionable than learning how much they trust or mistrust you. It could lead you to reconsider some aspects of the way you have acted and communicated. It might even lead you to consider acknowledging and apologizing for some of your prior actions and communications.

I believe that organizations that work hard to be trusted rather than working hard to be trustworthy are barking up the wrong tree. This isn’t just a matter of needing to earn trust by being trustworthy – of seeing people’s trust in your organization as more a characteristic of you (your behavior and communication) than a characteristic of them. On a more fundamental level, I think wanting to be trusted is a mistake.

There are good reasons why people should be skeptical about the risk claims of government agencies and corporations. This is especially true of government agencies and corporations that are not neutral observers but interested parties: that are sources of the risk in question or at least deeply committed to their stance on that risk. Even if your organization has a pretty good record for integrity, you may still face understandable temptations to cut corners. Wise stakeholders should be aware of those temptations and hang onto their skepticism.

Moreover, I think it’s dangerous to be trusted. The more trusted an organization is, the more tempted it is to take advantage of that trust – exaggerating its confidence and understating its uncertainty, making debatable positions sound inarguable, suppressing contrary factoids, etc. This is part of the reason for my controversial and even paradoxical contention that public interest organizations (activists, academics, public health agencies, etc.) are actually likelier than multinational corporations to cut ethical corners in their risk communications. Bad guys know they usually can’t get away with dishonest messaging, so they’re less likely to try. Good guys cut more corners both because they know they probably won’t get caught and because they think their prosocial ends justify their not-so-punctilious means. For a cavalcade of examples from my 40 years as a risk communication consultant, see my 2009 Berreth Lecture to the National Public Health Information Coalition, “Trust the Public with More of the Truth.” For a single extended example, see my 2010 Guestbook entry, “Why did the CDC misrepresent its swine flu mortality data – innumeracy, dishonesty, or what?”

Trust, in short, tends to sow the seeds of its own undoing: The more trusted we are, the more likely we are to betray the trust, get caught, and no longer be trusted.

That’s why I advise my clients to forswear trust as a goal. Instead they should aim for trustworthiness and accountability. I advise them to act in trustworthy ways, and to set up accountability mechanisms so stakeholders don’t have to trust them; instead, stakeholders can “verify” their trustworthy behavior. (As Ronald Reagan famously said about arms control negotiations: “Trust, but verify.”)

The paradox of accountability: If you know you have mechanisms available to you to catch me if I cheat, and you know I know it too, then you have reason to think I probably won’t dare to cheat, and therefore you have reason not to resort to those accountability mechanisms very often.

Accountability and trustworthiness go together. If you know I’m accountable you can afford to trust me. Trust and trustworthiness don’t go together. The more trusted I am, the more tempted I will be to behave in untrustworthy ways.

Try talking the management of your organization into this perspective on the actual relationship between trust and trustworthiness. It carries with it the marriage of trust issues with communication issues that you are trying to promote. An organization that has thought deeply about the relationship between trust and trustworthiness would be less interested in asking stakeholders how much they trust it, and more interested in asking them what are the things it does and says that lead them to trust or mistrust it. Such an organization would also put a priority on asking stakeholders what accountability mechanisms they are aware of, and what accountability mechanisms they would like to see added or strengthened. Above all, perhaps, such an organization would want to ask stakeholders how confident they are that there are sufficient accountability mechanisms in place to keep the organization communicating honestly.

Even without rethinking the relationship between trust and trustworthiness, a sensible organization understands that both trust and trustworthiness are fundamentally communication variables. I think you’re right that it’s “completely counterproductive” to ask your stakeholders how much they trust you without simultaneously asking them what they think about your communication efforts … which are bound to be at the core of why they trust or mistrust you. Mistrusting you pretty much means seeing or sensing a discrepancy between what you say and what you do, or between what you say and the external, objective truth, or at least between what you say and what more trusted sources say. So exploring whether people trust you without exploring what they think about what you say is kind of silly.

It’s also worth noting, in support of your argument, that trust in individuals is very different from trust in organizations. Typically but not invariably, stakeholders trust the spokespeople they know more than the organization those spokespeople speak for. So it’s silly to ask people whether they trust or mistrust your organization without asking whom they trust or mistrust among those who speak for it.

Another thought, along entirely different lines: Is it possible that your management is resisting yoking trust to communication because they want to be trusted for doing the right things, not just for saying the right things? We have all encountered this view, grounded in the aphorism that “actions speak louder than words.” Some non-communicators genuinely believe that communication is almost intrinsically untrustworthy, that wise stakeholders ignore mere words and focus exclusively on deeds.

However naïve, this is an honorable perspective, and if you think it’s part of the resistance you’re encountering, you need to rebut it respectfully: “Of course what we do is paramount; that’s where we earn or forfeit trust. But people experience what we do through the lens of what we say, and if they sense a discrepancy between the two, trust suffers. If we’re saying the right things but doing the wrong things, we will end up mistrusted. This is characteristic of ‘bad guys’ who try to paper over bad acts with smooth words. But if we’re doing the right things but saying the wrong things, we will also end up mistrusted. This is characteristic of ‘good guys’ who undermine trust in their genuinely good behavior by exaggerating their case and denying their opponents’ share of the truth. We can’t assess trust in our organization without assessing how people see both what we do and what we say.”

But even if you succeed in convincing your organization’s “trust team” to pay attention to communication, I think it will still go off-track unless it focuses more on being trustworthy and being accountable, rather than being trusted.

Lori responds:

You again raise very good points. Although I don’t know exactly what the trust-building team is focusing on (e.g. trust vs. trustworthiness) in their project, my current frustration lies in what may only be my perception that communication is not an integral part of their upcoming study.

Your suggestions to leadership (as well as links such as “Talking with Top Management about Risk Communication”) will be great assets to my toolbox. Thanks again for your thoughts, and for furthering this conversation on my behalf.

Pushing for a new murder investigation: precaution advocacy or outrage management?

name: Cheryl
This guestbook entry
is categorized as:

      link to Outrage Management index      link to Precaution Advocacy index

field:Family member of unsolved homicide victim
date:June 2, 2011
location:Canada

comment:

Peter, I am a huge fan of yours. I’ve attended a couple of your seminars and have the utmost respect for your expertise in the area of risk communications and outrage management. I often utilize your website as an excellent resource.

The purpose of my email is to seek your recommendation on a matter that involves the Ontario Provincial Police (OPP), who are a policing authority in Ontario, Canada. The matter involves an unsolved 1974 homicide of a 14-year-old teen named Karen Caughlin.

Yesterday, a press conference was held by the family. The purpose of the press conference was to publicly request an independent review of the case. Karen’s body was found outside of her community, and as a result the policing authority where her body was found [not where she lived] has had control of the investigation. There is good reason to believe that many mistakes have been made, and the family is seeking approval from the Commissioner of the OPP to order an independent review.

This has been an ongoing struggle for several years now. Recently we’ve been working with a cold case team from the U.S. (Sarnia is a border city.) They have been extremely helpful and continue to provide good insight. Although the OPP were invited to the press conference, they did not show.

Immediately following the press conference, the OPP publicly denied the family’s request for an external review. Meetings with area politicians have been set up over the next couple of days, in hopes of winning their support to move this forward. It’s a sensitive issue. The OPP are concerned about their reputation and the family is seeking justice for Karen.

Can you suggest or recommend some material or perhaps the best approach to have the OPP agree to an independent review? All authorities are passing the buck and claiming they are not accountable for such decisions.

A couple of sites with some good info related to recent happenings on the case are:

peter responds:

I read the two links you provided, but of course that’s not nearly enough information for me to develop an independent opinion about whether the OPP “bungled” this murder investigation, as the clips say the family charged at the recent news conference. Nor do I know anything about Ontario law governing police investigative jurisdiction, who can authorize an independent investigation for what reasons, etc.

From the outside, it looks like the OPP has tried twice, and failed twice, to find Karen’s murderer. I can’t tell whether the first investigation was badly flawed, or whether the second investigation might have been contaminated by a fear of exposing the inadequacies of the first. Since you think the first one really was a botch, I can see why you would want a new investigation to be in the hands of someone unafraid of embarrassing the OPP.

From a risk communication perspective, it seems to me that you have a choice between two approaches – and in some ways you look like you’re caught in the middle where there is no viable approach at all.

Your first option is the precaution advocacy approach: Raise hell. Harangue provincial (and even federal) politicians about the inadequacies of the OPP. Enlist the support of NGOs dedicated to cold-case investigation, and especially NGOs dedicated to exposing anything negative they can find about the OPP. Try to arouse community outrage about the authorities’ failure to get justice for Karen, about their failure (as you see it) to do a decent job of trying, about their cowardly unwillingness even to meet with Karen’s family. See if you can interest a muckraking journalist in doing a long magazine article or even a book about the case. Build an NGO of your own, aimed at pressuring other authorities to take the case away from the OPP, in hopes that the new investigation will both find the murderer and expose the OPP.

Your second option is the outrage management option: Conciliate the OPP. Emphasize how much effort it has devoted to the case. Instead of pointing to evidence of bungling, find examples of good OPP investigative work you can praise. Make a big deal out of how much forensic technology has improved since 1974. Sympathize with how hard it must be, even with modern forensics, to pursue such an old case. Suggest that perhaps the family’s strong criticism of the OPP might have been counterproductive, triggering a natural defensiveness that could easily have made investigators less open to new angles and new interpretations. Note that any organization would find it hard to reexamine its own past work with total neutrality, especially in the glare of hostile publicity. Offer to start anew in a spirit of cooperation, empathy, and mutual respect, gently urging the OPP to turn the case over to a different investigative agency for a third and final effort, in the hope that new eyes may see new clues. Look for ways that this can happen without embarrassing the OPP.

I’m guessing that the first approach has a lot more appeal to the family and its allies than the second, because it enables you to vent your own outrage at the OPP. But the second approach – reducing the OPP’s outrage at you – may have better prospects for breaking the case and finding Karen’s murderer.

It probably feels to you like it’s too late to “make friends” with the OPP. It may be too late, of course, but I doubt it. If the family can outline a path forward that protects the OPP’s reputation while renewing the search for Karen’s murderer, the OPP may find that a more attractive path than the alternative, where the family keeps demanding an independent investigation that could threaten the OPP’s reputation and the OPP must choose between capitulation and intransigence.

I’m pretty sure there is no middle path. The family is currently attacking the OPP while simultaneously complaining that the OPP isn’t cooperating with it. From a risk communication perspective, that makes no sense at all! Since the family is treating the OPP as the enemy, how surprising is it that the OPP has trouble remembering that its real enemy here isn’t the family, but Karen’s murderer?

Let’s assume the family is right that the OPP has mishandled the case so far. Then the family must choose:

  • Attack the OPP, try to expose its inadequacies, and mobilize anti-OPP outrage to secure a new investigation (the precaution advocacy approach); or
  • Conciliate the OPP, express empathy for the difficult situation in which it finds itself, and look for ways to make a new investigation less of a reputational threat (the outrage management approach).

I’m not sure which approach is more promising. But I think your own outrage is understandably propelling you to choose the precaution advocacy approach without even considering the other option. Think about whether managing the OPP’s outrage might be a feasible way to go.

Bear in mind that if the family does choose the second approach (outrage management), family members will still be endlessly tempted to revert to expressing their own outrage instead. Keeping family communications focused predominantly on empathy for the OPP will be a tough, tough challenge.

Research on the risk communication seesaw (and other untested Sandman ideas)

name:Knut I. Tønsberg
field:Public relations
date:May 26, 2011
email:123bratt (at) gmail.com
location:Norway

comment:

The time must have come to establish the seesaw principle as a scientific theory.

I am working on a master’s degree at the University of Oslo’s Department of Education, and want to build on your work. I have followed your website regularly since 2003 and the outbreak of SARS.

I would be happy to get in touch with others interested in linking your “principles” to social sciences.

peter responds:

I would love to see more research on my ideas about how to do risk communication, including the risk communication seesaw. I hope your master’s thesis will help fill the gap.

If any readers of this Guestbook entry know of some relevant research, or want to do some, I hope they will get in touch with you – and copy me!

Over the years, a number of academics and their students have studied various aspects of the “Risk = Hazard + Outrage” formula and its implications. But nearly all the work I have seen is either unpublished or behind paywalls, so I can’t provide links.

You’re right to put the word “principles” in quotation marks as a descriptor of my claims. They’re not principles until they have been thoroughly tested and validated. Even though they’re grounded in 40 years of experience, they’re still just hypotheses. A lot of what we all think we know from experience turns out to be wrong.

Outrage management for a mining company in the Maghreb

name: Yvette
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Mining company community and environmental officer
date:May 18, 2011
location:Algeria

comment:

So much of what you talk about rings so true, and I can see how your findings align with the direction our company will have to take as it develops.

Here in the Berber region of the Maghreb, though, we have a population with over 50% unemployment, a good 26% of that (approximately) among people under 25. So when the company gets going and employs only 400-odd people, plus the indirect employment, we are going to have an extremely high-outrage situation on our hands (and a very unfair one from the company’s point of view).

In addition, we are the only mining company around, the first, a small one (financially), and all eyes will be on us.

It is going to be very hard to find a middle path to tread. Already we are unpopular with our host community, and we haven’t even started mining. We can’t implement all the outreach until we actually get the go-ahead to mine. The EIS is finished but is awaiting the approval stages.

In addition, we have a local partner who is not really cooperating with the project!

On the issue of fair vs. unfair: Any advice on how, in a male-dominated society like this, one reaches the women and the vulnerable? (Maybe it is not so important to reach them for the local people’s point of view, because it will be mostly male fanatics who will drive the roadblocks when they occur.) We certainly plan to use the media, since women confined to the home listen to the radio, and they may respond to texting and to websites (in French, of course).

peter responds:

Twenty years ago I would have found it a little surprising and maybe even a little frightening that much of what I have written “rings so true” to you regarding your interactions with Berbers in the Maghreb region of Algeria. I’ve never been to Algeria and have had only a little contact with Berbers elsewhere. Who am I to advise you on how to talk about risk? But I have become more and more confident that the principles of risk communication really are universal. The details aren’t; I don’t know how Berbers are likeliest to express their outrage, for example, or what form they expect an apology to take. But I’m no longer surprised that the principles ring true to you.

You probably know this, but just in case you don’t: The term “Berber” is sometimes considered insulting by the people to whom it is applied. Their word for themselves in northwest Africa is “Imazighen.” “Berber” is the conquerors’ name for them.

Your comment raises several interesting issues.

Employment demands

In developing countries where subsistence living, unemployment, and underemployment are all endemic, the first mining company in a region is usually a huge source of social disruption. The company comes in promising jobs and other economic benefits. Often the company and its local partners exaggerate those benefits in what they say or imply; often the local populace exaggerates them in what it thinks it hears. No matter who is to blame for the community’s unrealistic expectations, when they are dashed the result is community outrage, especially on the part of young men who briefly saw a glimmer of hope and then saw it flicker out.

This pattern is so close to universal that it strikes me as a bit naïve for a company to see it as “unfair” (to the company). I agree that one small mining company can’t singlehandedly solve the daunting economic problems of even your corner of the Maghreb. But I’ll give odds that your company has done a lot of messaging about economic benefits to come, and relatively little messaging suggesting that those benefits would be sorely limited. Now you will begin to reap what you sowed.

It’s not just that only a small percentage of the local populace ends up with good mining jobs or service contracts. What’s also nearly universal is that those jobs and contracts tend to destabilize the community. Very likely they increase the gap between the “haves” and the “have-nots” among the locals. Moreover, they create a new class of “haves,” a new elite of previously poor young men who can now lord it over their former peers and even their former social superiors. Depending on the local culture, some of these young men may spend their newfound wealth in ways that exacerbate social tensions: on alcohol, drugs, or prostitutes. And some of the best jobs, typically, will have no local applicants the company considers qualified. So you’ll end up hiring outsiders, perhaps members of a different tribe or ethnic group, whose presence in the community will intensify the tensions.

The company will probably consider it unfair when it is blamed for these impacts as well. I can’t agree. It’s predictable that certain groups will be outraged by the outcomes of your company’s presence in their midst. Unsuccessful applicants for jobs and contracts will be outraged that they didn’t get picked. Successful applicants who are later let go (for whatever reasons) will be outraged that they were dropped. Previously high-stature individuals will be outraged that they have lost stature compared to your newly affluent employees. Community leaders and elders will be outraged that a once-stable social structure has suddenly gone to hell.

Of course community unrest surrounding the arrival of the mining industry in a previously unmined part of the world is not attributable only to outrage. A lot of it is greed, and a lot of it is politics. Some of the participants in the anti-mine roadblocks you anticipate will be looking for a payoff from you, or looking for an advantage in a political battle in which you are merely a pawn – or will be getting paid to participate by someone looking for a payoff or a political advantage.

You’re at little risk of missing the politics and the greed. But you may be tempted to miss (or misattribute) the outrage – and thus do less than you should to address it.

What can you do to ameliorate employment-related outrage? Here are a few approaches that might help a little:

  • Diminish expectations. Be as explicit as you can about how many – that is, how few – jobs you actually expect to have on tap, and how many (how few) of them you think will go to locals. Of course it’s a crapshoot (and you should say that too). The mine may prove uneconomic and never open at all. Or you may be on the brink of a huge discovery that will keep your company and dozens of competitors hiring for decades. But the short-term prognosis is just 400 jobs or so, plus the indirect employment, minus the jobs that you’ll end up giving to skilled workers you import from elsewhere. It will help to acknowledge that you have sometimes let people imagine a much bigger bonanza than this, and that a lot of job-seekers will inevitably end up disappointed and perhaps angry.
  • Share control. For a lot of reasons – greed and politics as well as outrage – local leaders usually want some say over local hiring. You have to hang onto the final say, of course, so you can be sure everyone you hire can do the job. And any sharing of control will greatly complicate the hiring process, not just logistically but also politically and legally. Nonetheless, it’s usually wiser to find a compromise that doesn’t completely exclude local leaders from this newly important source of power. Make sure you know the community well enough to identify who usually has a say in such decisions. Then give them a say – not total control, but a say. For example, ask them to nominate individuals they think you should consider, or to provide input (pro or con) on people you are considering. And if there are key divisions in the local community (different tribal groups, for example), you might want to develop some kind of proportional system so no group gets an unfair edge and no group gets frozen out.
  • Think about training/apprenticeship programs. If the local labor pool lacks key skills, set up some kind of training/apprenticeship program, so that locals are being taught to do the jobs for which you’re importing outsiders. Do the same thing for the most disempowered groups within the local community, such as certain ethnic groups … and perhaps (if it’s not too socially disruptive) for women as well. But before you initiate any sort of training/apprenticeship program, make sure you’re reasonably confident that it’s not a dead end, that you’re preparing locals for jobs that will be there when they’re ready to start work.

Broader social impacts

I know that mining companies vary substantially in their social and environmental responsibility, and in particular they vary enormously in how much they “give back” to the local community and how effectively they manage the negative impacts of their presence. But I have seen very few persuasive examples of local communities in a developing country that were better off after the mine was operating and after it was closed than they were before it opened. The country as a whole may be better off – depending on what use the national government makes of royalties and other payments. The world is arguably better off (depending on environmental impacts), since we need the ore. But I doubt the locals are – despite the jobs and other economic benefits, and sometimes significant health and educational benefits as well.

If mining and the social disruption it brings are a given, then it matters that the local community may suffer less at the hands of Company X than at the hands of Company Y. As a rule, I think, large publicly owned multinational mining companies do more good and less harm than smaller independents or locally owned companies, whether their owners are private or government. This is mostly because large publicly owned multinationals are more vulnerable to pressure from NGOs.

Or to put the point differently: Every company has to manage local outrage. Publicly owned multinationals have to manage non-local outrage as well; they have to worry about shareholder resolutions, hostile presentations at their AGMs, angry blogs and websites, and negative coverage in the mainstream media. So multinationals tend to be more responsive to stakeholder concerns, including the concerns of local stakeholders – whose concerns nonlocal NGOs may adopt. (I don’t know if your company is connected to a large multinational or not.)

I am happy to help mining companies honestly ameliorate local outrage – about employment, social impacts, environmental impacts, or anything else. But even with good will, good skills, good ethics, a good budget, and a good deal of activist pressure, it’s still hard to run a mine in a developing country in such a way that the local community ends up glad you came.

It will help a bit to share dilemmas and share control with local leaders and the community at large. Your mine will bring new benefits, new opportunities, and new problems. Be candid about the problems. If you think you can make a case that the pluses will outweigh the minuses, make your case; if not, make the case that you will do more to ameliorate the minuses than some other mining company might do. Above all, open yourself to local expertise about what the minuses are likely to be and how they can best be ameliorated. Even if you have a staff of anthropologists (and I’ll bet you don’t), local leaders know better than you do what’s likely to inflame local outrage and what’s likely to please the local community. Seek their advice, and follow as much of it as you can.

Better yet, look for ways to empower them (and fund them) to ameliorate the minuses on your behalf. I’m surprised how often my clients pass up valid opportunities to duck responsibility for mitigation of social harms. Your company wants to run the mine; that’s its mandate and its expertise. If it can put community leaders in charge of the XYZ Corp Mitigating Social Harms Foundation, so much the better. Of course you need some sort of oversight/advisory role to make sure the company’s money isn’t misspent or stolen. But local leaders are bound to have more ability and more credibility at this task than you have. If it goes well, you’re a lot better off if they get the credit. And if it goes badly, you’re a little better off if they get the blame.

Here’s a final recommendation: Be a good chief. This is hard to explain briefly but it’s often critically important, not just in thinking through employment conflicts but in planning all aspects of your relationship with the community.

In many developing countries, local social relations are dominated by the chief-tribesperson relationship. There are strong unwritten rules governing how powerful people (chiefs) and ordinary people are supposed to relate to each other. Ordinary people owe chiefs certain kinds of respect and obedience. Chiefs owe ordinary people certain kinds of generosity. This isn’t unique to tribal societies, of course; consider the French term noblesse oblige, the literal translation of which is “nobility obliges.”

When a mining company comes into a tribal community, the mine manager (and earlier, the exploration manager … and perhaps even the community relations manager) becomes in effect another chief in that community. One of the most insightful comments I ever heard during a mining industry outrage management consultation came from a Ghanaian community relations manager, hired locally to help a multinational company address his neighbors’ concerns about a proposed new mine. “We’re acting like a bad chief,” he said.

He went on to explain that the company was being generous in ways a Ghanaian chief would be rigid, and rigid in ways a Ghanaian chief would be generous. Moreover, the company was framing its community relations in terms of transactions (“we’ll give you X if you give us Y”) in situations where a Ghanaian chief would unconditionally offer people X as his obligation and expect them to give him Y as his due. It wasn’t the mine the villagers resented most, he said. It wasn’t even the mining company’s power. It was the alien ways the company expressed its power.

Very little of my advice in this response should wait till the mine is launched. You write: “We can’t do all the outreach implementation until we actually get the go-ahead to mine.” Obviously you can’t do the actual hiring yet. And just about everything you say now has to be conditional: “If we get the go-ahead….” But your community relations budget shouldn’t be conditional. You need your own company’s go-ahead for a full-scale outrage management effort now. It would be a huge mistake to let outrage go unmanaged till you have nailed all your permits and approvals.

Recalcitrant local partners

The most alarming sentence in your comment is this one: “In addition, we have a local partner who is not really cooperating with the project!” Your exclamation point tells me you find it alarming too.

I don’t know if your local partner is another company or a government entity. I don’t know if it’s the majority partner or you are. But I do know this: Every partner in a mining project gets blamed for what every other partner does, or says, or fails to do or say, regardless of how much of the joint venture each partner owns.

So if your partner doesn’t understand reputational risk, you need to find a way to educate your partner. Otherwise, your partner is likely to make some bad outrage management mistakes that will endanger both your organizations. This is especially problematic if your partner controls the community relations budget or has say-so over implementation for both of you – that is, if your partner can keep you from doing what you know needs to be done to ameliorate local outrage.

But that’s not the worst case scenario. The worst case scenario is if your partner doesn’t need to understand reputational risk and outrage management, because your partner is immune. The worst case scenario is to be a reputation-sensitive company whose reputation is linked to the actions of a partner that doesn’t have to worry about its reputation – because it’s privately owned, perhaps, or because it’s a government monopoly, or simply because its reputation is already so bad it can’t get any worse.

Several times in my career I have worked with clients in this situation. Two examples come immediately to mind:

  • A multinational oil company had no choice but to partner with the autocratic government of the country in which the oilfield was located (no choice if the company wanted to produce oil there, that is). When the government launched a systematic (and arguably genocidal) attack against the minority tribe that dominated the area surrounding the oilfield, the company was aghast. It was nonetheless accused of complicity by international human rights organizations, and its share price plummeted.
  • A multinational mining company bought a minority interest in a highly profitable mine, the majority owner of which was a much smaller, privately owned company. The majority owner ran the mine, and periodically ran it in a way that the multinational company considered unwise, highly controversial, and potentially dangerous to social welfare or environmental protection. When the multinational made a list of worldwide threats to its reputation, its minority stake in this mine topped the list.

A partner whose actions and inactions endanger the host community and therefore your company’s reputation is too great a liability for your company to tolerate.

The gender issues

In traditional, male-dominated societies, a modern mining company with Western values (and Western stakeholders) is inevitably caught in the crossfire.

Your sympathy for local women who are denied anything like an equal role is obvious. You want to reach out to them, include them, involve them, and perhaps even employ them in non-menial, nontraditional jobs. Many of your employees (the nonlocal ones, at least) probably feel that way too. And if your company is a publicly owned multinational, so do many of your shareholders, NGO critics, and other stakeholders.

On the other hand, your mining operation is already a threat to local values and mores in countless ways you can’t help. Do you really want to take on the gender issue as well?

You say that it won’t be women manning the anti-mine roadblocks you expect to see sooner or later. You are probably right that it will be young men, for the most part. But there have been innumerable anti-mine protests in developing countries (including Muslim countries) where women have not only marched but in some cases also blocked the roads to the mines. You are right to want to address the women as well as the men.


Bangladeshi women protest the Phulbari coal mine

The problem isn’t how to gain access to local women. Your proposal to focus on radio sounds right. Nor is the problem what to say to them about the mine’s impacts. The problem is what to say to them, and what to say to their men, about gender – starting with what to say about the fact that you yourself are a woman in a high-stature, high-visibility job in a culture that prefers to keep its women (as you so succinctly put it) “incarcerated at home.”

I’m not sure how far I’d go. But here are a few tentative suggestions:

  • Share the dilemma candidly but diplomatically. You’re a woman from a culture that insists aggressively that women should be free to do everything men do. That’s what you believe, what your company believes. (I’d bet the majority of your industry’s community relations professionals are women.) But you’re also a visitor here, and you know the culture you have entered has very different ideas. You don’t want to abandon your own values and you don’t want to offend the values of your hosts. You are looking for a middle road.
  • When you reach out to local women (woman-to-woman, not just company-to-tribesperson), seek guidance on how to find the middle road. Ask which specific issues local women want you to push, and which issues they want you to steer away from – either because they judge that their men aren’t ready for those issues, or because they themselves aren’t there yet.
  • Don’t neglect to talk to local women about mining issues unrelated to gender. Making them aware and asking their views about such issues (normally closed to them) is itself a step toward gender equality. And even in the most patriarchal societies, women have a powerful (albeit subtle and invisible) impact on the views and actions of the men in their lives. The potential roadblock troublemakers you’re worried about are their brothers, husbands, and sons.

Persuading children to take precautions

name:Stephen
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

field:Retired teacher
date:April 29, 2011
location:Washington, U.S.

comment:

For us adults (though I barely qualify despite my 67 years), the biggest challenge in communicating about risk is in dealing with children.

Our granddaughter is seven years old, very big and very bright for her age, and has just learned to ride a two-wheel bicycle. On a recent visit, she proudly demonstrated her skills to grandma and me, riding around the quiet neighborhood, wearing a safety helmet, and staying on the sidewalk. When it comes to intersections, she has been instructed to walk her bicycle across the street.

Although she has been instructed many times to “look both ways” before crossing the street as a pedestrian and now a bicycle rider, she often forgets.

What is your advice for parents and grandparents?

peter responds:

There is no field of pediatric risk communication. Riskcomm principles are identical for kids and adults.

The best strategies for persuading children to take precautions are the same ones that work best for persuading adults. Or, rather, the best strategies for adults are the same ones that work best for children. It’s better put that way for two reasons:

  • Most people’s risk communication experience is largely with their kids and grandkids.
  • Our “adult” responses to risk and to risk communication tend to be pretty childlike.

The most basic principle of risk communication strikes many people as decidedly childlike. Instead of figuring out how much danger a particular situation poses based on the data available to us, we tend to become concerned about a risk in proportion to the strength of a set of emotion-arousing factors like mistrust, unresponsiveness, lack of control, unfairness, etc. In the terminology I have popularized, these are “outrage factors,” and our response to risk is mostly a response to “outrage,” not to “hazard.”

Your problem is that the risk of crossing the street engenders very little outrage in your granddaughter. If vehicles weren’t permitted in your residential neighborhood, for example, your granddaughter might experience some outrage at drivers who flouted the rule. Her outrage would greatly increase her inclination to check for vehicles, whether she was crossing the street or not.

“Precaution advocacy” is my label for risk communication designed for high-hazard, low-outrage situations – that is, situations where people are less upset by a risky situation than the situation justifies technically. For a list of articles on ways of motivating people (including granddaughters) to take precautions in high-hazard, low-outrage situations, see my “Precaution Advocacy Index.” See particularly my article entitled: “‘Watch Out!’ – How to Warn Apathetic People,” an introductory catalog of precaution advocacy strategies.

Arousing outrage

The most obvious precaution advocacy strategy, and often the best, is to arouse some outrage. You probably don’t want to undertake a risk communication campaign to convince your granddaughter that she is being oppressed by evil drivers who maliciously ignore her preference to monopolize the road, unfairly and unpredictably intruding on her space. But some of the outrage factors on my list – dread and memorability, for example – are more closely related to the fear component of outrage than to its anger component, and are readily available to you as tools to get your granddaughter to focus more on bicycle safety.

In other words, you can probably scare your granddaughter into better street-crossing compliance by making use of vivid (memorable, dread-inducing) imagery of looming trucks, squished little girls, sterile intensive care units, and the like. (In another decade or so, she’ll probably take a high school drivers ed class that will pursue exactly this approach.)

Frightening your granddaughter about street-crossing doesn’t mean turning your granddaughter into a more fearful child than she is today. Her fearfulness is pretty much a constant, a stable part of who she is. (I call this “the Law of Conservation of Outrage.”) So all your vivid imagery can do is reallocate some of her preexisting fearfulness from other objects (thunderstorms, maybe, or parental arguments) to street-crossing. This is important to bear in mind, lest you decide (as many health agencies have decided) that frightening children is out-of-bounds.

Here’s another way of seeing the Law of Conservation of Outrage. People have a “worry agenda” – and everyone’s worry agenda (even a seven-year-old’s) is so overcrowded that many worries never make it to the top of the stack. Getting hit by a car when she crosses the street is already on your granddaughter’s worry agenda, thanks to you, but it keeps getting supplanted by other worries that are more emotionally impactful or have been brought to her attention more frequently or more recently. One way to keep your granddaughter focused on her street-crossing worry, therefore, is to raise the level of emotional arousal attached to that worry.

As a strategy of precaution advocacy, fear appeals have three problems.

First, you’ve got to offer people things to do to reduce their fear. Arousing fear without providing an action outlet is unkind – and ineffective. People are strongly motivated to reduce their fear. If they can’t reduce it by taking precautionary action, they’ll reduce it by deciding that you’re a foolish worrywart and shrugging off the danger. You’ve got this one knocked; your granddaughter can reduce her fear of street-crossing by looking both ways before she crosses.

The second problem with fear arousal is that it can overshoot the mark, ending in denial instead of bearable and actionable fearfulness. The relationship between fear appeals and precaution-taking is an inverted-U-shaped curve. If you arouse too little fear, street-crossing doesn’t make it to the top of your granddaughter’s worry agenda and she “forgets” to look both ways before she crosses the street. If you arouse too much fear, on the other hand, the mere act of looking both ways may stir up intolerably strong emotions. So your granddaughter doesn’t cross the street at all. Or, worse yet, she crosses the street with her eyes tightly shut so she won’t have to think about getting squished by a truck. Or she simply “forgets” to look – only now it’s unconsciously motivated forgetting; thinking about crossing the street is too scary, so she crosses without thinking about it. If you’re going to scare people as a way to motivate precaution-taking, make sure you don’t scare them too much, all the way into denial.

The third problem is usually the biggest one: You might not scare your granddaughter enough, at least in the long run. People tend to get desensitized to fear-arousing messages. Over time, it takes grosser and grosser imagery to sustain the fear. Of course events in the real world may retrigger your granddaughter’s street-crossing fear – witnessing a pedestrian accident, for example, or experiencing a near-miss of her own. But you obviously can’t plan for those sorts of events, nor do you hope for them. So you need to see fear appeals as an interim strategy. Your goal is to use your granddaughter’s street-crossing fear to help her build a habit of safe street-crossing that will kick in as her fear begins to dissipate.

There’s a fourth problem that’s sometimes mentioned: the inability of young people to imagine their own deaths. Most seven-year-olds don’t yet understand that death is permanent; most 17-year-olds know it’s permanent but are somehow convinced that it’ll never happen to them. I don’t think this is a big precaution advocacy problem. Seven-year-olds understand perfectly well that getting hit by a car is a catastrophe. If they can imagine hospitals and pain and missing a best friend’s birthday party more easily than mortuaries and nonexistence, that’ll do.

Habits, rewards, and punishments

Over the long haul, one of the most effective ways to instill a precaution into people of any age is to make it a habit. Most people who put on their seatbelts or lock their doors, for example, don’t have to think about it every time; it’s habitual. For that matter, most adults habitually look both ways before crossing the street (whether on foot, on a bike, or in a car). They don’t have to decide to do it. They just do it.

Habits are acquired by practice. If you’re there often enough when your granddaughter walks her bike across the street, and each time you remind her to look both ways first, looking both ways will eventually become habitual for her. Better yet, if your granddaughter will put up with it, drill her: a bunch of times in quick succession, have her ride her bike to a specified point, get off, look both ways, and then (if nothing’s coming) cross the street.

Positive reinforcement is also an effective way to get people to take precautions. A quarter (or a gold star, or just an “attagirl” smile from grandpa) every time she stops and looks both ways will probably motivate your granddaughter to remember better. Nor is “every time” an essential prerequisite. In fact, B.F. Skinner discovered many decades ago that rodents and people alike respond best to what he called a “variable reinforcement schedule” of periodic rewards.

Of course we get desensitized to rewards the same way we do to fear. The time will come when quarters and gold stars fail to motivate your granddaughter, though hopefully your approval will still mean something.

Don’t take the concept of “reward” or “positive reinforcement” too literally. Fun is positively reinforcing too. Each time your granddaughter stops and looks before crossing the street, try asking her what she sees. This can turn into an observation-training game, as she learns to notice more and more details she missed in previous go-rounds. Or it can turn into an imagination game, when she claims she sees a unicorn climbing a beanstalk and you say no, that’s a baby rhinoceros practicing its circus balancing act. As long as the game isn’t so engrossing that your granddaughter forgets to take note of an approaching car, it’s a good positive reinforcer for the stop-and-look behavior you’re trying to encourage.

The best positive reinforcements are intrinsic rather than extrinsic. There’s a good chance your granddaughter likes her bike helmet, for example. Maybe wearing the helmet makes her feel like a big girl. Maybe she got to choose it herself and likes its appearance: its color, its decals, etc. So you don’t have to reward her for helmet-wearing; helmet-wearing is its own reward. I can’t think of a way to make it similarly self-reinforcing to get off your bike at every corner and cautiously walk it across the street. It sounds like a drag, frankly, something I’m unlikely to do unless motivated by fear, extrinsic reward, or something. But if you can make the stop-and-look pattern intrinsically rewarding for your granddaughter, so much the better.

Adult bicycle riders, by the way, are less taken than children with the intrinsic rewards of bike helmets – especially if they recall happy childhoods with the wind in their hair as they raced their bikes downhill … and no sweaty scalp at the end of the ride. A good friend of mine had a serious bike accident a few weeks ago riding without a helmet; maybe now fear – and her grown daughter’s razzing – will get her to acquire the helmet habit. Even so, adults aren’t immune to positive reinforcement as a strategy of precaution advocacy. Back when hardhats were new and controversial, one key to getting construction workers onboard was decals. A hardhat that displayed the insignia of your outfit and the name of your spouse (or your motorcycle) became a symbol of both group membership and individual identity, not a symbol of being afraid of head injuries.

On the whole, it’s best to think of rewards as a short-term measure, like fear appeals. By the time the rewards you’re willing to dispense are no longer a strong enough motivator, hopefully your granddaughter will have acquired the habit of looking before she crosses.

Punishment – what many people loosely call negative reinforcement – is also a tool that belongs in your precaution advocacy toolkit. Most research suggests that rewards work better than punishments, but the combination works better than either alone. “You didn’t stop and look before you crossed the street just then, so we’re going to quit for today. You can ride your bike again tomorrow.”

Instilling norms

The next best thing to instilling a habit is instilling a norm. Norms are unwritten rules within a group about what members of that group commonly do or believe, and should do or believe, in specified situations. Parenting and grandparenting are largely about inculcating norms, so our children and grandchildren will know the “right” way to drink soup, deal with handicapped people, express their anger or frustration, cross the street on a bicycle, etc.

Once people have acquired an appropriate norm, fear arousal is no longer needed, and neither is positive reinforcement. Like habits, norms are self-sustaining.

Adults already have norms; they know (or think they know) the right way for members of their group to behave and think in most situations. Trying to replace these norms with new ones is exceedingly difficult. So the principal use of norms in adult precaution advocacy is to try to hook the behavior you’re recommending to an existing norm. When anti-fluoridation activists talk about “fluoride pollution” and anti-powerboat activists talk about “noise pollution,” for example, they’re trying to hook their causes to people’s existing norm that pollution is wrong. Adults are wide open to new norms only when they see themselves as in new situations. The newbie at a job site, for example, will pay serious attention to the orientation class, and even more concentrated attention to the casual advice (and actual behavior) of old-timers.

Kids are newbies at almost everything. They don’t know yet what the norms are, and they’re actively trying to find out – from their peers, their parents, their grandparents, everyone. That’s why it’s important to be a good role model for your granddaughter – to let her see you looking both ways before you cross the street, and to let her hear you expressing some criticism (privately to her) of strangers you see crossing the street without looking.

There’s an interesting – and extremely useful – connection between rewards and norms, and it’s a connection that applies to both adults and children. The more a particular behavior is rewarded, the likelier people are to do that behavior. But smaller rewards are actually better at inculcating norms than larger rewards are. Here’s how it works. If you offer your granddaughter a huge reward – ice cream, say – every time she looks before she crosses, she’ll certainly look … and if you ask her why, she’ll tell you candidly, “for the ice cream.” By contrast, your smiling approval may well be enough to motivate your granddaughter to stop and look, but it’s probably not enough for her to tell herself that she stops and looks simply to please grandpa. So she searches for a better reason to account for her behavior, and comes up with a norm: “Bicyclists are supposed to stop and look for cars before they cross the street so they won’t get run over.”

Thus, precaution advocacy is often a two-step strategy: first you motivate the precautionary behavior with a motivator that’s strong enough to produce the behavior but not strong enough to justify it cognitively. Then you offer people cognitive support – reasons why the behavior makes sense – and help them build a norm that will sustain the behavior over the long haul. For more on this two-step strategy, see “Using cognitive dissonance to get apathetic people moving.”

Warning people about swine flu … again

name:Rema Venu
This guestbook entry
is categorized as:

      link to Precaution Advocacy index       link to Pandemic and Other Infectious Diseases index

field:Health project officer, UNICEF
date:April 29, 2011
location:New York, U.S.

comment:

On April 20, PAHO issued an Epidemiological Alert warning: “Since the beginning of 2011, in the region of the Americas, there have been significant outbreaks of influenza A(H1N1) 2009.” The same day Crawford Kilian of Crofsblog headlined the story: “H1N1: Yesterday's pandemic is coming back.”

If you get a few minutes, please read the Alert and then the almost sensational title that the “pandemic is coming back” (even if only in the Americas). I would like your thoughts on the PAHO Alert and the blog post, and on how we should be communicating this ongoing risk.

As far as pandemic communication is concerned, already many of the agencies may be in the process of winding down their communication units. Media, donor and national/government interest are all subsiding. So in such contexts, how do we continue to push preparedness? – for in the event there is a pandemic we will quite probably be back to Square 1.

peter responds:

Note: My wife and colleague Jody Lanard collaborated on this response.

Like you, we took great interest in the April 20 PAHO Epidemiological Alert – which as far as we can tell was not coordinated with any comparable alert from WHO or its other regional offices. We know that PAHO and the other World Health Organization regional offices are quite autonomous. So it’s not surprising that PAHO would decide on its own to issue an alert for the Americas. Of all the WHO regions, moreover, PAHO is the one that has done by far the best job of integrating risk communication into its operations. Like all WHO branches, it works in close collaboration with member state governments, which vary widely in their commitment to risk communication. But if any WHO region will understand how to get the word out, it’s PAHO.

That said, PAHO’s Alerts are aimed mainly at its member states’ health departments from the ministry level on down, and only secondarily at anyone else who might be interested. The publicly issued April 20 Epidemiological Alert was undoubtedly accompanied (and preceded) by private communications between PAHO country offices and member state governments, particularly in those countries with recent or current outbreaks of H1N1.

Sounding the alarm

Later in this response, we want to offer some speculation on why PAHO issued an Alert at all, given that swine flu isn’t doing anything unexpected or unusual right now. But first we want to address the issue you raise in your comment: “how we should be communicating this ongoing risk.”

Two ongoing risks, actually. There are two quite different pandemic-related warnings we should be trying to communicate in 2011:

  • The warning that swine flu isn’t gone and isn’t trivial (and neither are the other circulating flu strains) – and that people should take appropriate precautions. As winter approaches – as it is doing right now in the southern hemisphere – those with access to flu vaccine should get vaccinated. And in tropical and semi-tropical regions where flu follows a less seasonal schedule, people should pay attention to influenza guidance directed to them by knowledgeable sources.
  • The warning that another, more severe pandemic is just as likely as ever. We were not “overdue” for a pandemic in 2009, and we are not “underdue” for a pandemic now. Avian flu is still entrenched in the bird populations of several countries, and still makes the occasional jump from birds to humans, with a very high case fatality rate when that rare jump occurs. An entirely new flu virus of initially unknown severity might appear with no warning, the way novel H1N1 did, and might turn out a lot worse than H1N1 turned out. Even swine flu could mutate into something worse than we experienced in 2009–2010; the virulence of the relatively mild 1968 H3N2 pandemic virus waxed and waned as a seasonal strain during the decades that followed.

Both of these warnings will be difficult to communicate – and not just because most health agencies have refocused on other priorities than pandemic communications.

Your comment suggests that you are worried largely about the second warning – being back to Square 1 with a new pandemic. We too think the second warning is both tougher and more important. At least in the minds of people who were paying attention, there have been two pandemic flu “false alarms” in recent years: H5N1 (bird flu) didn’t go pandemic and H1N1 (swine flu) went pandemic but wasn’t such a big deal. Then add SARS, a non-influenza viral epidemic that looked like it had pandemic potential (and had a case fatality rate far higher than that of the 1918 Spanish Flu), but turned out to be much less transmissible than influenza and was successfully beaten back. So in the past decade we have had three warnings that a severe pandemic might be imminent – and we haven’t yet had a severe pandemic.

Of course three false alarms won’t stop people from taking a severe pandemic seriously once it’s clear that people really are dying in droves this time. But it will surely make it harder to sell pandemic preparedness in advance.

The PAHO Epidemiological Alert focuses on the first of the two warnings. It’s not about the possibility of a new pandemic; it’s about no-longer-pandemic swine flu, and the fact that it is still a threat to be reckoned with. So we’ll focus there as well.

The problem of warning people about swine flu is different from the problem of warning people about the next flu pandemic. The swine flu pandemic of 2009–2010 was mild as pandemics go – certainly mild compared to the 1918 flu pandemic we were told so much about or the avian flu pandemic we were warned to get ready for. Most people had no category in their minds for a mild pandemic; to normal people who had been following the news about bird flu, “pandemic” meant severe. Although swine flu was deadlier to children and young adults than the average flu season (and most deadly to adults 50–64), it was less deadly overall than the average flu season. Except for victims, their families, and hospital emergency rooms and ICUs, it didn’t disrupt our lives much.

And that was when swine flu was at least a certified pandemic. Now that WHO has decreed that the pandemic is over, swine flu is just the newest circulating seasonal flu strain. So except when there’s a particularly noticeable outbreak, any warning that swine flu is “still around” or “coming back” is likely to be seen as a ho-hum story. Worse yet (especially if the source has anything to do with the World Health Organization), it’s likely to be seen as beating a dead horse – as WHO trying one more time to convince people that swine flu is a big deal.

In much of the world (especially Europe and North America), flu is once again the Rodney Dangerfield of infectious diseases: It can’t get no respect.

We asked Crawford Kilian what he intended when he wrote the headline “H1N1: Yesterday's pandemic is coming back.” Peter thought Crof might have meant to be a bit tongue-in-cheek. “‘Yesterday’s pandemic’ – so 2010.”

Crof responded:

I was probably trying to pack too much meaning into the “yesterday's pandemic” headline – irony doesn’t always travel well online. As I’ve seen over the life of the blog so far, we tend to forget threats pretty quickly. I was using “yesterday” in that “so 2010” sense you mention. My intention was to suggest that we’re not paying attention but H1N1 is still around.

In Europe and North America, in short, warning people about formerly pandemic and now seasonal H1N1 probably requires acknowledging at least three things:

  • that the swine flu virus has turned out comparatively mild (at least so far);
  • that there has been criticism of WHO and many governments for “hyping” the risk – which is better seen as criticism for taking the risk more seriously than turned out necessary; and
  • that many people are therefore inclined to put a low priority on protecting themselves and their families from flu in general and swine flu in particular.

Having acknowledged these three truths, it is possible to make a case that flu vaccination is a wise precaution – not because swine flu is especially awful but rather because ordinary flu is awful enough. Of course it will also be necessary to address people’s concerns about the safety and the efficacy of the flu vaccine – respectfully rebutting the former (serious side effects are rare) and candidly acknowledging the latter (the vaccine often fails to take, especially in sick and elderly vaccinees).

Our argument here is highly controversial. Public health agencies in Europe and North America have made considerable progress over the past decade in persuading more and more people to get vaccinated against the flu. They have done it largely by overstating both flu vaccine efficacy (for all groups) and flu risks (for lower-risk groups). That has done some serious collateral damage – increased mistrust that has contributed to a growing anti-vaccination movement – but even so, it has worked.

The key questions now are whether it will keep working, and whether it is worth the loss of trust. We are not in the majority when we urge a switch to a more candid strategy.


It’s different south of the (U.S.) border

The question of how to manage swine flu warnings plays out very differently in the Latin American and Caribbean nations that are the dominant stakeholders for the Pan American Health Organization.

In much of Latin America and the Caribbean, flu has traditionally been a nonissue – not because people were inclined to shrug it off but because they were almost entirely unaware of its existence. For many people in that part of the world, the arrival of the swine flu pandemic (thought to have started in Mexico) was the beginning of their awareness of influenza. And it initially sounded terrifying – partly because the early reports from Mexico really were terrifying, and partly because flu, while prevalent, was previously little-known in most PAHO countries, and any “new” infectious disease is intrinsically scarier than a familiar one.

For this reason and others, only some of which we understand, at least a few Latin American and Caribbean nations never quite figured out that the swine flu pandemic was milder than the early warnings had implied. Instead, swine flu was and to some extent remains a new and scary disease. So in much of PAHO’s territory, warning people that swine flu is still around and urging them to take precautions doesn’t require addressing the widespread skepticism that we believe must be addressed in North America and Europe. Some of PAHO’s member states are dealing with excessive public alarm over a few recent H1N1 outbreaks … not public skepticism about the fact that swine flu is still a serious disease. This certainly makes precaution advocacy an easier task.

In many countries, most notably in Asia, governments’ initial responses to the emerging pandemic in 2009 included extreme, doomed-to-fail containment efforts: Keep it out of our country at all costs! Containment conveyed the message that the new swine flu virus must be horrendous indeed. Many of these countries were still doing containment (airport screening of passengers, isolation of patients) more than a year after the pandemic began. We wrote about the early stages of “Containment as Signal: Swine Flu Risk Miscommunication” in June 2009.

Few if any Latin American countries overreacted to swine flu as extremely as some in Asia did. But there were – and still are – significant numbers of overreactions. The most extreme recent example is Venezuela’s official reaction to several H1N1 outbreaks in March 2011. In an early but not otherwise atypical local outbreak in the state of Mérida, there were “three confirmed deaths in and around the city of about 200,000, with 56 cases confirmed and many more suspected.” In response, the Governor of Mérida “ordered the closing of all public spaces covered by a roof, including bars, movie theaters and nightclubs.”

This excessive reaction recapitulated Mexico City’s much more justifiable reaction at the very start of the pandemic, when the virulence of H1N1 was not yet known.

It is in this context that PAHO issued its Epidemiological Alert. Crof experienced the Alert as a warning that swine flu is still around. You experienced it as that, and also as an implicit warning that a new pandemic could arrive at any moment. We don’t disagree. Warnings are always most effective during “teachable moments,” when there is a readymade news peg to hang the warning on. The local reaction to the Venezuela outbreaks, and the subsequent media coverage, provided PAHO with just such a teachable moment.

In at least one way, the PAHO epi alert is extraordinarily alarming. The full text of the Alert (a PDF file) includes this unequivocal statement: “It is recommended that all of the countries activate their National Preparedness Plans for the pandemic and follow the WHO and PAHO recommendations.” We see no way to read this sentence except as a recommendation that countries in the PAHO region act as if the swine flu pandemic were still ongoing, notwithstanding the fact that WHO has declared the pandemic to be over. A number of flu blogs picked up on this sentence as a significant ratcheting up of official concern. We wonder how it went down in Geneva!

But PAHO’s epi alert is a more double-edged document than we think you realize, simultaneously warning governments (and their people) about swine flu and reassuring them about swine flu.

The full text of the Epidemiological Alert (a PDF file) begins this way:

Since the beginning of 2011, in the region of the Americas, there have been significant outbreaks of influenza A (H1N1) 2009, that while geographically limited, have generated a significant demand on health services….

This situation is not unexpected. Since the end of the pandemic (2009–2010), the influenza A (H1N1) 2009 virus, continues to circulate on a global level like a seasonal strain, periodically causing important outbreaks in various continents….

In the Americas, the level of circulation and the impact caused by the influenza A (H1N1) 2009 virus, during the pandemic varied. In the countries of the Southern Cone and the southern region of Brazil the circulation of the virus was very intense during the pandemic, then resulting in a low detection of the virus during the 2010 winter. In other places of the tropical regions, where the predominance is not as defined, the circulation of the virus was not as intense; consequently, the proportion of the population susceptible is still high and this favors the appearance of geographically limited sporadic outbreaks.

Considering the possibility of an outbreak occurring on account of the influenza A (H1N1) 2009 virus in the countries of the Region, national authorities should be prepared to mitigate the resulting impact.

In other words: Expect some sporadic swine flu outbreaks here and there, governments, especially in places that didn’t see much swine flu during the pandemic. Be prepared to cope with the demand these outbreaks will place on health services; do your best to mitigate their impact. And don’t freak out.

The Alert’s risk communication recommendation has more to do with reassuring the public than with warning the public. “Implement a risk communication plan,” it advises governments, “to prevent and/or reduce the population’s anxiety.” In particular, it urges member states to tell the media that “the large majority of infections are asymptomatic or present non-specific symptoms. Only a fraction of those affected develop a medical case that requires seeking medical assistance. An even smaller fraction develops difficulty breathing which requires hospitalization. Deaths are very infrequent.”

PAHO’s advice to reduce public anxiety by reminding people that swine flu is rarely deadly sits uncomfortably beside its advice to activate “National Preparedness Plans for the pandemic.” Taken as a whole, the Epidemiological Alert – a scant two pages in length – has to be called a mixed message.

Vaccination isn’t even mentioned in the Alert – partly, we assume, because many PAHO countries still have a limited supply of vaccine, and partly because vaccination is almost irrelevant by the time a local outbreak is in full swing. Instead, the alert stresses non-pharmaceutical interventions:

The population must be informed that the primary form of transmission is via interpersonal contact. Washing ones hands is the most efficient way to diminish transmission. An understanding of “respiratory hygiene and cough etiquette” also helps to avoid transmission. Persons with fever should avoid going to their workplace or to public places until their fever has disappeared.

Note that this paragraph overstates the efficacy of hand-washing, much as most flu vaccination advocacy overstates the efficacy of vaccination. (See “Convincing Health Care Workers to Get a Flu Shot …. Without the Hype.”)

However PAHO intended its Epidemiological Alert to be received, the evidence suggests that it was received mostly as a warning. On April 27 alone, the following headlines hit the English-language media in PAHO countries:

So why did PAHO decide to issue an Epidemiological Alert about swine flu at a time when the overall level of influenza in its region is low, but when flu season is approaching in temperate southern hemisphere countries?

Probably the dominant reason is the one we have discussed: PAHO, we speculate, may have wanted to warn governments in Latin America and the Caribbean to be prepared for sporadic and not-very-deadly swine flu outbreaks, and to tell their people to be prepared as well, in the hope that neither governments nor people would overreact, as Venezuela did, when such outbreaks materialize.

There is a second possible reason (also speculative on our part) for PAHO’s release of the Epidemiological Alert. It is related to many PAHO member states’ perception that the World Health Organization wanted to declare the pandemic over a couple of months earlier than it finally did, in August 2010. An earlier end-of-pandemic declaration would have come just as some countries in the southern hemisphere were gearing up for their winter flu season prevention-and-precaution campaigns; several of these countries pushed WHO to delay its “post-pandemic” declaration in order to avoid undermining the campaigns.

As noted in the PAHO epi alert, some of the more tropical PAHO countries had experienced only low levels of H1N1 transmission during their first round of pandemic outbreaks, leaving a high percentage of their populations with no pre-existing immunity. In advance of their 2010 flu seasons, these countries could easily picture new pandemic-level waves of H1N1 occurring in the months ahead. Some of them had finally received flu vaccine that included the still-new H1N1 strain. (Remember, most developing countries got very little vaccine during the height of the pandemic.) They wanted to get as much vaccine into people’s arms as possible – a goal that might have been undermined, many officials felt, if WHO had declared the pandemic over in June 2010.

WHO could have declared instead that the pandemic was “post-peak,” defined by WHO (in a PDF document) as the pandemic phase when “pandemic activity appears to be decreasing; however, it is uncertain if additional waves will occur and countries will need to be prepared for a second wave.” But WHO chose to declare “post-pandemic,” defined as a period in which “[i]t is expected that the pandemic virus will behave as a seasonal influenza A virus.” In their autumn of 2010, some PAHO countries were concerned that the pandemic virus might still behave like a pandemic virus. Other pandemic flu strains have sometimes caused full-fledged “pandemic waves” after moving further down the road than this one has traveled so far.

In sum: As Latin America was gearing up for its 2010 flu season, many officials feared that WHO might undermine their influenza prevention-and-precaution campaigns by declaring an end to the swine flu pandemic. They succeeded in getting WHO to delay that declaration for a couple of months, but failed (if they tried) to get it to declare “post-peak” instead of “post-pandemic.” Now, nearly a year later, Latin America is starting to gear up for its 2011 flu season. Perhaps PAHO decided to bolster this year’s prevention-and-precaution campaigns with an Epidemiological Alert – one that virtually converts “post-pandemic” into “post-peak” by warning that many populations still have little immunity and by recommending that governments reactivate their National Preparedness Plans … albeit one that simultaneously stresses that swine flu outbreaks will be “sporadic” and swine flu deaths will be “very infrequent.”

Four bottom lines

We see four bottom lines here. Three are about the question you raised at the outset – how to warn people about flu risks, pandemic and seasonal. The fourth is about the peculiarities of the April 20 PAHO Epidemiological Alert you cited.

1. Warning people about seasonal influenza (which includes swine flu) in North America and Europe is tough because most people have learned to be skeptical about flu warnings in general and swine flu warnings in particular. We think the skepticism needs to be acknowledged and addressed, not ignored.
2. Warning people about seasonal influenza (which includes swine flu) in Latin America and the Caribbean is easier, since there is less swine flu skepticism and more swine flu concern. PAHO and its member states would do well to avoid overstating their warnings and the benefits of precautions, and thus arousing the sort of flu skepticism that is already rampant in the Northern Hemisphere. In its April 20 Epidemiological Alert, PAHO mostly avoids the problem of over-warning. But it falls into the trap of exaggerating the benefits of one precaution, hand-washing – a trap it should try to avoid in the future in order to build and maintain its credibility.
3. Warning people about the next pandemic is the toughest task of all. SARS came and went without going pandemic. Bird flu came (and is still around) without going pandemic. Swine flu did go pandemic … and was a decided anticlimax compared to what we’d been led to expect. It will take extraordinary risk communication skill to persuade people to care about preparedness for the next pandemic, or even to persuade public health agencies to take up the cause.
4. PAHO’s April 20 Epidemiological Alert is a peculiar mélange of conflicting messages.
  • Some parts of the Alert are a role model for doing exactly what we recommend: trying to arouse appropriate levels of swine flu concern without resort to exaggerated warnings.
  • Other parts of the Alert seem to be pursuing a different goal entirely: trying to reassure member states and their people that swine flu is less horrific than they might imagine. That’s an important task in countries where governments and the public remain excessively rather than insufficiently worried about swine flu – though assessing the strengths and weaknesses of the Alert in accomplishing that task goes beyond the scope of this response.
  • At least one recommendation in the Alert, that member states should reactivate their pandemic plans, seems to take the unilateral (and implicitly alarmist) position that in the Pan American region, at least, swine flu is not post-pandemic as the World Health Organization has declared it to be, but only post-peak … that another swine flu pandemic wave may be on its way.

Cultural differences regarding Fukushima crisis communication

name:Erik
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Scientist at government-funded research institute
date:April 20, 2011
location:Japan

comment:

I’m a Swedish scientist (molecular biology) living outside of Tokyo. I follow the events at Fukushima carefully – and all the surrounding commentary. I have read your insightful comments about crisis communication as it relates to Fukushima with great interest.

I have a question about the impact (if any) of cultural context on crisis communication, or more specifically on the need for speculation that you have stressed in your comments.

As you may know, Japanese culture is quite different from “Euro-American” or “western” culture. One aspect of this is a (sometimes strong) reluctance to speculate.

An anecdotal example: When asked to make a quick estimate of energy use at our office (in anticipation of scheduled blackouts over the coming months), a colleague of mine painstakingly went around to all computers (hundreds), read the power specs and added up an exact number, rather than just counting the computers and estimating a reasonable average energy consumption. The task of course took a lot longer than needed, and didn’t provide a significantly more accurate result – but it’s the way things are done here.

Another example: It is always pointless to ask technical staff to give an estimate for when a certain task will be finished – you are always answered with a question: “When SHOULD it be finished?”

I often find that asking Japanese colleagues or technical staff to speculate, give estimates, etc. is met with a certain amount of anxiety.

As you may know, many foreigners left the Tokyo area during the emerging nuclear disaster. At my workplace, more than 80% of the foreigners left (most of them are back now), while not a single Japanese (out of hundreds) left. Tentative interpretation: Crisis communication was subpar from the foreigners’ point of view, but maybe adequate from the Japanese perspective.

In a nutshell, many (not all) foreigners reacted as you have predicted (distrust in the government etc.), whereas Japanese (not all) reacted differently.

Of course, in a globalized world and when dealing with radiation which really has no borders, crisis communication will have to be tailored to a domestic as well as an international audience. Maybe the lowest common denominator should be the international audience. But it would be interesting to hear your thoughts on the cultural aspect of crisis communication, if there is such an aspect.

Thanks for a very interesting website, which I will undoubtedly return to!

peter responds:

Thank you for raising two very important questions:

  1. the narrow question of whether my advocacy of responsible speculation (particularly responsible speculation in the alarming direction) in crisis situations might be grounded in western culture and inapplicable (or less applicable) in Japan; and
  2. the broader question of how cultural differences affect crisis communication.

I cannot answer the narrow question authoritatively. I’m not entirely sure of my answer to the broader question either. But I’m going to take a crack at both.

I have never worked in Japan, and never helped manage a Japanese crisis. I’ve worked for Japanese companies, but always on problems arising out of their operations in the west. I have been asked to work on controversies and crises in many other countries, including Asian countries, but never in Japan. I have long wondered why not Japan, and have surmised (without actually knowing) that prospective clients in Japan believe their culture to be sufficiently different from all others that my culturally ignorant recommendations would be unlikely to add value.

My wife and colleague Jody Lanard has had a similar experience. She does a lot of risk communication training for the World Health Organization and its member states in Asia. Several times the Japanese government has helped fund Jody’s Asian work – but it has never sent high-ranking people to her seminars.

Moreover, neither Jody nor I can read Japanese. We have followed the Fukushima crisis in the English-language media only. We’ve been able to watch Japanese officials giving news conferences in English and Japanese citizens explaining their reactions in English, but we are missing important nuances that would be far more detectable in their mother tongue. Worse yet, nearly everything we know about Fukushima was written in English by journalists and commentators whose understanding of Japanese language and culture (while surely superior to ours) may be shaky.

This is the same way we follow and respond to crises in other Asian countries. Many of those countries have repeatedly sought risk communication guidance, and found some of the guidance very useful. But Japan may be a special case.

So read what follows with appropriate skepticism.

I’ll organize my response in four sections:

  • My professional assessment of the frequency and value of alarming speculation, both generically and with regard to Fukushima.
  • My very tentative, amateur assessment of Japanese cultural expectations regarding government crisis communications.
  • My understanding, also tentative, of how the Japanese public has responded to the government’s crisis communications about Fukushima.
  • My conclusions from decades of consulting on the broader question: whether crisis communication principles are universal or culture-specific.

Fukushima and alarming speculation

My generic argument on behalf of speculation is that people in crisis situations don’t like to be blindsided by bad news, and don’t like to be left alone with their fears. It is therefore advisable for officials to provide two sorts of “anticipatory guidance” (a less pejorative synonym for speculation about the future):

  • Events that would constitute a high-probability but low-magnitude worsening of the situation – e.g., “We wouldn’t be surprised to find increased radioactivity in locally grown vegetables and in milk from local cows.” (This is a genuine economic catastrophe for the farmers and their communities; it feels like a serious worsening of the radiation risk to the rest of the society as well.)
  • Events that would constitute a low-probability but high-magnitude worsening of the situation – e.g., “We remain worried about the possibility of another explosion, conceivably one that might damage a reactor’s containment enough to release large amounts of radiation into the atmosphere.”

Here’s what I think happens when these two kinds of speculation are withheld from the public:

  • When a likely bad possibility happens, it takes people by surprise. They are more liable to overreact than if they had been forewarned, which would have enabled them to get through their adjustment reaction and put the already-anticipated new problem into context more quickly.
  • Because unlikely bad possibilities are not being openly discussed by officials, people imagine their own worst case scenarios, or latch on to those suggested by unofficial sources, including some who may be biased or ignorant. Silent officials cannot help people bear their fears about these dire possibilities; their very silence, in fact, makes worst case scenarios seem both more likely and more horrific.
  • With bad things happening that officials didn’t warn them might happen, and with worse things obviously possible that officials won’t talk about at all, people lose trust in officials – trust in their candor and trust in their ability to manage the crisis. Fear and pessimism increase; compliance with official guidance decreases; “precautions” that may be unnecessary or even harmful become more attractive.

It’s not that the public likes to be burdened with officials’ alarming speculations. Uncertain information about bad things that might or might not happen is upsetting; often people do seem to want officials to over-reassure them instead. But if something bad does end up happening, responsible alarming speculation in advance about what might happen yields less stress and less mistrust than the alternative: failure to warn. Indeed, there is research showing that official acknowledgments of uncertainty lead simultaneously to increased perceived credibility and decreased perceived competence. “It makes me anxious to learn how uncertain you are about managing this situation, but I’m grateful that you’re telling me what you’re worried about.” Since blindsiding people decreases both perceived credibility and perceived competence, forewarning them about potential bad news is the crisis communicator’s best bet.

How much of this applies to Fukushima?

It is clear that the Japanese government was extremely reluctant to speculate in the early weeks of the Fukushima crisis. (It is doing much better now, I think.) Or, rather, the government was extremely reluctant to speculate about possible (both likely and unlikely) bad news; officials were quite happy to speculate optimistically – for example, about the prospects for restoring power and pumping out all that radioactive water. As I wrote in “More on Fukushima crisis communication: The failure to speculate”:

But they failed to predict that there would probably be increasing radiation levels in local milk, vegetables, and seawater; that Tokyo’s drinking water would probably see a radiation spike as well; that plutonium would probably be found in the soil near the damaged plants; that the evidence of core melt would probably keep getting stronger; that all that water they were using to cool the plants would probably become radioactive, probably make repair work more difficult and more dangerous, and probably begin to leak; etc. After each of these events occurred, the government told us they were predictable and not all that alarming. But it failed to predict them….

Officials not only failed to speculate responsibly about their gloomy but still tentative expectations. They also failed to address still more alarming (and still less likely) worst case scenarios.

Consider this: On March 11, the day it all started, Prime Minister Naoto Kan ordered the evacuation of everyone within three kilometers of the damaged reactors. On March 12 the evacuation zone was expanded to 10 kilometers, and then (on the same day) to 20 kilometers. By March 13, 170,000–200,000 people had been evacuated from around Fukushima. They’re still gone; in fact, the evacuation radius was later expanded to 30 kilometers, and still later the government identified other hotspots outside that radius to be evacuated within the coming month.

The later evacuations were explicitly a response to the long-term risk from radioactive fallout. So these evacuees almost certainly know they won’t be back soon. Even so, the government remains reluctant to provide clear information about how long it might be – or even clear information about the algorithms that will determine how long it might be.

Even clarity about uncertainty would be helpful: “We are hoping that in the next three to six months we will have more confidence that the reactors are stable and the major releases of radioactivity are over. And by then we’ll have better data about how contaminated various areas are, how quickly the radioactivity is decaying, and what our prospects are for cleanup. Then we can develop a rational schedule for allowing evacuees to return to their homes. Will it be less than a year, a few years, decades, or even longer? Sadly, we can’t even begin to answer that question until the crippled power plants are stable.”

And what about the people who were evacuated in the first few days? Were they warned as they gathered a few belongings that their homes might be unsafe for decades? Or were they told (or allowed to assume) that the evacuation was a short-term thing, a response to a transitory radiation spike and the possibility of more such spikes in the days ahead? The news coverage I saw implied the latter, though officials were surely worried about both. Imagine the feeling of betrayal if you’re told to move out … presumably for a few days as a temporary precaution … and then slowly you begin to figure out, pretty much on your own, that your home may have become permanently uninhabitable … and still the government won’t tell you that you’re right, or that you’re wrong, or even that it can’t tell yet whether you’re right or wrong.

Erik, you’re not questioning my contention that the Japanese government avoided alarming speculation. To the contrary, you’re hypothesizing that Japanese officials may be less willing to speculate than western officials. That’s the significance of your anecdotes about your colleagues’ reluctance to estimate the organization’s energy needs or the time required to complete specific tasks. Japanese culture, you’re suggesting, is more concrete than western cultures, less willing to go beyond the data – that is, less willing to speculate.

Bear in mind that western officials (corporate as well as government) are also extremely reluctant to speculate about possible bad news to come. Like a lot of my crisis communication and outrage management recommendations, this advice is profoundly counterintuitive in east and west alike.

Some of my best crisis communication “good examples,” including examples of what-if speculation, come from Asia – for instance, from Singapore’s superb management of its 2003 SARS crisis. For some of these good examples, see “SARS communication: What Singapore is doing right,” written jointly with my wife and colleague Jody Lanard for The (Singapore) Straits Times. For more good examples, see Jody’s keynote presentation at a 2004 World Health Organization global consultation on “Outbreak Communications,” held in Singapore to honor that country’s outstanding SARS communications effort … and to symbolize that good crisis communication is not a western cultural artifact.

One of Jody’s examples is this April 21, 2003 dialogue between Goh Chok Tong, then Singapore’s Prime Minister, and BBC reporter David Bottomley:

Bottomley:
“By talking in terms of this being potentially the worst crisis that Singapore has faced, aren’t you in danger of stoking up [public] fear?”
Mr Goh:
“Well, I think I’m being realistic because we do not quite know how this will develop. This is a global problem and we are at the early stage of the disease. If it becomes a pandemic, then that’s going to be a big problem for us.”
Bottomley:
“How are you going about deciding where to strike the balance between warning people and making people aware of the virus and actually going so far that you’re actually worrying them?”
Mr Goh:
“At the moment, I’d rather be proactive and be a little overreacting so that we get people who are to quarantine themselves to stay at home. The whole idea is to prevent the spread of the infection.”

Singapore Health Minister Lim Hng Kiang also speculated alarmingly about the danger of SARS. He said, “We’re facing an unprecedented situation, this is a 9/11 for health. … We’re not going to go back to the pre-SARS situation for some time. We’re in for the long haul.”

It’s not just Singapore. In the wake of a devastating 2008 earthquake in Sichuan, China, earthquake evacuees needed to be relocated a second time as the risk of flooding became apparent. Officials speculated candidly in the People’s Daily about the alarming possibilities:

“If the flood comes, newly-built temporary settlements for homeless quake survivors in low-lying plain will be inundated,” said worried officials in Aba Tibet and Qiang Autonomous Prefecture, where the earthquake’s epicenter of Wenchuan County was.

They moved over 110,000 quake survivors out of the mountainous areas to a 20-km valley strip to protect them from secondary geological disasters such as landslides and mud flows in the wake of the earthquake.

If embankments could not hold flood water in Minjiang River, they would have to carry out another round of massive evacuation of people living in tents and makeshift houses at any moment.

Whenever we give western audiences good examples from Asia – like these – they’re quite likely to retort that “maybe that works in Asia, but it wouldn’t work here in North America [or “here in Europe”] – it’s just not in our culture.” When we present western good examples in Asia, we get the same response.

This much, at least, is universal: Regardless of culture, officials don’t want to speculate alarmingly, and they routinely assert that doing so is incompatible with their particular culture.

That still leaves open the question of whether alarming speculation might really be incompatible with Japanese culture. Maybe Japanese officials are right. Maybe the Japanese public really doesn’t mind being blindsided by bad news and left alone with its terrors about worst case scenarios. Or at least, maybe the Japanese public expects no better from its government, and therefore experiences no loss of trust when its government shuns alarming speculation.

Japanese cultural expectations regarding government crisis communications

I write this next section with great trepidation. I’m an expert in crisis communication, but I know next-to-nothing about Japanese culture. I haven’t found an expert in Japanese crisis communication, at least not one who writes in English. So I’m stuck reasoning by analogy.

The strongest analogy I can find is doctor-patient relations. Much has been written about what sorts of communications Japanese patients expect from their doctors in a medical crisis. It supports your hypothesis that in Japan alarming speculation may be so unwelcome that it actually does more harm than good. Here are some quotations drawn from two 1996 articles: “Barriers to Informed Consent in Japan” by Atsushi Asai and “Death and Dying in Japan” by Rihito Kimura. These are 15-year-old sources. The situation they describe may (or may not) have changed significantly since then, especially in the younger generation.

  • “It is still not uncommon for physicians to make unilateral decisions in the clinical setting. Some physicians would withhold relevant medical information from patients, on which they can judge what they need and what they want to avoid. Disclosing a true diagnosis to a patient with cancer is still controversial. Several recent surveys showed that only a minority of physicians disclosed a diagnosis of cancer to the patient….” [Asai]
  • “In the 1991 nationwide opinion poll by the Yomiuri Shimbun, 65 percent of the participants said that they would like to be given full diagnostic information about themselves even if they were terminally ill. Nevertheless, only 22 percent of the people questioned said they themselves would definitely be prepared to disclose such information to a family member…. Traditionally, the physician-patient relationship in Japan is based on a complete and unquestioning trust of the physician by the patient, such that the physician acts to make health care decision on behalf of the patient…. To complicate matters even further, although a patient’s family is informed of the incurable nature of the disease affecting their family member, the patient, as noted above, usually is not told of the terminal diagnosis.” [Kimura]
  • “A survey conducted by the MHW [Ministry of Health and Welfare] revealed that 40% of 1600 physicians who responded deemed it appropriate that physicians decide how much medical information should be given to a patient, while only 26% responded that they would give as much medical information as a patient wants…. Japanese patients may be unwilling to decide by themselves, even with a physician’s advice. One survey on inpatients in a community hospital showed that about 60% of respondents thought that a physician should make the final decision for the patient….” [Asai]

At least as of the mid-1990s, then, Japanese patients were far likelier than western patients to expect and accept – and perhaps even prefer – a benignly authoritarian physician who decides what course of treatment is best without burdening the patient with the choices, the prognosis, or even the diagnosis. In that sort of doctor-patient relationship, obviously, the doctor is unlikely to do a lot of alarming speculation. A doctor who plans not to tell the patient s/he has cancer if tests confirm the diagnosis certainly wouldn’t precede the tests by telling the patient s/he might have cancer!

I haven’t nailed down how much this situation has changed in the past 15 years. Nor have I been able to find comparable examples or counterexamples from other aspects of Japanese life. Do Japanese employees prefer not to know that their employer is going through tough economic times and considering layoffs? Do Japanese householders prefer not to know that the neighborhood crime rate has been climbing alarmingly and police are worried that the trend might continue to worsen?

And do Japanese citizens prefer not to know that their government is anticipating radioactive contamination of food and water, and secretly testing the crops and the milk around Fukushima, as happened during the early days of this crisis? Farmers were clearly angry that they were not told about the crop testing until the government announced the contamination. Were the residents of Tokyo also angry that the government gave them no forewarning? Or was no forewarning what they expected, even what they preferred?

It’s a long stretch from observing that Japanese patients expect less than total candor from their doctors to concluding that Japan is a society in which people don’t want officials to warn them about what might happen tomorrow or next week in the midst of a huge public crisis.

Note also that Dr. Asai’s article says that Japanese patients’ trust in their doctors is quite low. He cites a survey in which 75% of respondents said they “did not trust Japanese physicians,” and points out that “one of the main complaints patients have is insufficient explanation by their physicians.” So even if Japanese patients expect to be blindsided by their doctors, they don’t necessarily like it.

Trust in government is also notoriously low in Japan. A 2004 study of “Japanese Attitudes and Values toward Democracy” found that only 22.2% of respondents expressed “a great deal” or “quite a lot” of trust in the national government. The comparable figure for newspapers was 67.2%. In a follow-up study in 2007, Japanese respondents once again expressed very low trust in political institutions, including the “Prime Minister,” “the national government,” “political parties,” and “Parliament.” Compared to the figures for other Asian countries, Japan was second-lowest in trust in government after South Korea. This is all the more surprising since Japan scored the most trusting of any Asian country when asked: “Generally speaking, would you say ‘Most people can be trusted’ or ‘that you must be very careful’?” Japan was the only country where the majority thought that “most people can be trusted” – but not the government.

I’m not entirely sure what Japanese mistrust in government means. I’d like to think it means that the government is paying the price for its citizens having learned that it can’t be trusted to be candid – and in particular to forewarn them about possible bad news. But a 2010 article by Soonhee Kim on “Public Trust in Government in Japan and South Korea” found that Japanese trust in government was most highly correlated with economic performance, political corruption, national pride, and official attention to citizen input. There was virtually no correlation between trust in the national government and “satisfaction with the right to be informed about the work and functions of government.” Trust in government, in other words, doesn’t seem to mean trust that the government will tell you what’s going on – at least not in Japan.

On the other hand, there is some evidence that the Japanese public does want its government to tell it what’s going on. A 2005 paper on “Information Disclosure in Japan” traces the development of a Japanese freedom of information movement. Author Jeff Kingston argues that there has been substantial progress in governmental openness since the 1970s – though there are still “teething problems” and considerable bureaucratic “foot-dragging and stonewalling.” The political landscape, he says, “has changed dramatically in favor of open government, as revelations of abuses have generated a momentum in support of transparency.” According to Kingston, this change was achieved despite strong opposition from government itself, mostly because citizen groups demanded it, media supported it, and the public slowly came to expect it. “Citizens are propelling Japan’s quiet revolution,” he writes, “by exercising their new power to monitor government officials and hold them accountable.”

What I keep looking for, and haven’t found, is survey data on whether the Japanese public expects and whether it wants candor from its government officials – especially candor during crisis situations about possible future bad news (that is, candidly alarming speculation). Western publics do expect and want that kind of candor, and when they don’t get it (as they often don’t) they tend to become more anxious, overreact, and lose trust in officials. But Westerners also expect and want their doctors to be candid about their medical prognosis, whereas many Japanese patients apparently do not. Is there a comparable difference with regard to alarming speculation from government officials during public crisis situations?

My intuition, based largely on experience in other Asian countries and on media quotations from Japanese citizens during the Fukushima crisis, is that Japanese people, like westerners and other Asians, want to be warned about what might happen in a crisis, so they can brace themselves and make decisions about precautions. But I haven’t found convincing evidence that this is true.

Japanese reactions to Fukushima communications

On April 18, three Japanese newspapers published polls that addressed public satisfaction with the government’s handling of the Fukushima crisis. I can’t read the actual polls, or even the articles on them in their parent Japanese newspapers. I am relying instead on English-language news stories in Reuters, Japan Today, and AFP.

Here’s what I have learned:

  • In a poll by the Nikkei business daily, 70% of respondents said the government’s response to the nuclear crisis was not acceptable, while 19% praised its handling of the crisis. Nearly 70% said Prime Minister Kan should be replaced. Even so, support for the Kan government stood at 27%, up five points from February (before the earthquake).
  • In an Asahi Shimbun poll, 67% expressed disapproval of the government’s response, against 16% who approved of it. The overall approval rate for the Kan government was 21%, up one percentage point from the previous survey in February.
  • In a Mainichi Shimbun poll, 68% disapproved and 28% approved of the government’s management of the crisis. Seventy-eight percent said Kan had not shown leadership. Moreover, 58% said they did not trust government information on the Fukushima crisis, while about a third said they believed what the government told them. Support for the Kan government was 22%, up 3 percentage points from February.

The most relevant information here, obviously, is the 58% of Mainichi Shimbun respondents who didn’t believe what the government was telling them about Fukushima. Is this number higher than it would have been if the government had followed my strategy of anticipatory guidance/alarming speculation in the early weeks of the crisis? Is it higher than it was after Japan’s much less serious Tokai nuclear accident in 1999? Is it higher than the level of distrust in what the Japanese government tells the Japanese people on an ordinary day? I don’t know.

(A study of public reactions to the Tokai accident found a substantial increase in opposition to nuclear power but no effect on distrust of the government. But I know nothing about how the government handled its communications during that earlier crisis.)

As you note in your comment, Erik, foreigners living or working outside the government’s evacuation radius were far likelier to move still further (or flee the country entirely) than Japanese residents and workers were. But I’m reluctant to attribute that (as you do) to the hypothesis that foreigners were more distressed than the Japanese public by the Japanese government’s failure to warn – that is, its dearth of alarming speculation. There are lots of other possible explanations, starting with the fact that many foreign governments advised their citizens to put more distance between themselves and Fukushima than the Japanese government thought necessary. In addition, foreigners were likelier than Japanese to have another place to go; they were freer to go, less firmly rooted in their homes and workplaces; they probably felt less loyalty to the group’s (and country’s) shared crisis; they were less constrained by Japan’s famous work ethic; etc.

The Japanese government recommended against stockpiling water, food, or other essentials, insisting (correctly, at least so far) that there would be no need. Many people disobeyed this recommendation, leading to temporary shortages. Many others obeyed it (or had no choice). Was the compliance rate lower than in comparable situations in Japan or elsewhere? Was the compliance rate lower than it would have been if the Japanese government had been more candid about its worries? To what extent did the government’s avoidance of alarming speculation lead some Japanese citizens to decide that things were probably worse than the government was admitting, and to what extent did that mistrust lead some Japanese citizens to disobey the government’s recommendations and stock up on bottled water and uncontaminated vegetables? I would love to see research addressing these questions. All we have so far is guesses.

In recent days, similarly, the government has launched a public relations effort to persuade people in the Tokyo metropolitan area to eat vegetables from the Fukushima region. I have read news stories that say the Fukushima veggies still aren’t selling, and other news stories that say many people are overcoming their fears and making a conscious effort to support Fukushima farmers. Again, I would love to see a survey that asked people why they were or weren’t willing to buy Fukushima vegetables, testing the chain of causality I have hypothesized:

  • The government failed to forewarn people that Fukushima vegetables might show increased radiation levels.
  • When the radiation levels increased, people were more surprised, more alarmed, and more mistrustful than they would have been if they had been forewarned.
  • People were therefore less inclined to accept the government’s later assurances that Fukushima vegetables were safe to eat.

There have been many news stories reporting that the Japanese public is enraged, terrified, and mistrustful about the Fukushima crisis and how the government is handling it. And there have been many stories reporting that the Japanese public remains calm, though I have seen far fewer suggesting that the Japanese people trust their government with regard to Fukushima. Everything I have seen is in English, and most of it comes from non-Japanese reporters and commentators whose own expectations undoubtedly color their observations about Japanese reactions to the crisis.

A handful of news stories and columns are worth examining, though skeptically.

On March 19, just a week into the crisis, The Asahi Shimbun published a column by Hirotada Hirose, an expert in disaster psychology at Tokyo Woman’s Christian University. Professor Hirose said he was concerned that “up to now, there have been inadequate explanations about the failed nuclear power plant in Fukushima Prefecture. If a serious situation is developing, the people need to be properly informed of it. Failure to do so will spark anxiety and aggravate stress.”

A March 28 article from the German news agency Deutsche Presse-Agentur is entitled “Japanese bemoan lack of information.” Written by Lars Nicolaysen, the article relies partly on the views of Professor Hirose. Nicolaysen’s article begins as follows:

In Japan fear is combining with outrage over the lack of exact information about the nuclear accident at the Fukushima nuclear power plant.

Local government leaders bemoan the fact that neither the authorities nor the media provided them with clear insights into the scope of the crisis even after it became known that radiation levels were far higher.

The result is increasing scepticism in the face of continual governmental assurances that the radiation poses no immediate threat to the public.

To assuage growing public concerns, experts are calling for more concrete and updated information on radiation levels.

“The biggest problem is suspicion,” warns Hirose Hirotada, professor of psychology at Tokyo Woman's Christian University.

“Regardless whether information actually is false or being kept under wraps, the fact is that if doubts arise, it can lead to panic.”

For that reason, it is vital that accurate and continual information be available to the public, Hirotada says. Only that way is it possible to counter over-reactions by a fearful public.

I don’t agree with Professor Hirose (mistakenly called “Hirotada” in the article) that Fukushima-related panic might result from the government’s failure to provide “accurate and continual” information about radioactivity. Panic is rare in crisis situations. And I think Nicolaysen’s story neglects to acknowledge that often the only available information is incomplete and uncertain – and that under such circumstances the only way to provide “accurate and continual” information is to keep telling people what little you know, keep telling them how uncertain that knowledge is, keep telling them you wish you knew more, and keep telling them that as you learn more in the coming days a lot of what you learn is likely to be alarming.

But I agree fervently that the government mishandled Fukushima information (and especially information about radiation) in the early weeks of the crisis. The government may or may not have told people what it knew. It gave the impression of failing to tell people what it knew because it demonstrably failed to tell them what it didn’t know, what it expected, and what it feared. Most importantly for this discussion, I agree with Nicolaysen and Professor Hirose that the government’s crisis communication failures in the early weeks led to “outrage,” “skepticism,” “concerns,” “suspicion,” and “doubts” in the minds of many people in Japan.

But not everyone agrees. From the English language edition of the South Korean newspaper Chosun Ilbo comes an April 20 column by reporter Cha Hak-bong with the intriguing title “What Makes Japanese People Trust Their Government in Times of Crisis?” Cha writes that “the Japanese public is overcoming the unprecedented disaster with amazing orderliness” because of “a deep-rooted trust in the Japanese government.” He continues:

From a foreigner’s point of view, the Japanese government gets a failing grade in terms of handling the crisis. Even though refugees were complaining of hunger and cold, officials prevented cars from transporting relief supplies citing safety regulations. They also showed a lack of expertise by dumping huge amounts of contaminated seawater into the ocean because they were unable to find a large ship that could store them, and they failed to tap into emergency fuel reserves to power relief supply trucks, while there was a lack of information flow between government agencies.

But although the Japanese people have undoubtedly been angered by this, they still have confidence in what their government tells them. When the government said vegetables and fish from the Fukushima region pose no hazards to the human body, people launched a drive to consume more products from that area. Voluntary efforts to save electricity allowed Japan to overcome a major power shortage crisis. And although websites are filled with criticism of the government, Prime Minister Naoto Kan’s approval rating has climbed 10 percent since the earthquake.

It is foreigners who do not trust the Japanese government’s announcements. Some foreign businesses temporarily moved their Japanese headquarters to Osaka or packed up and left for Hong Kong, and some foreign baseball players forfeited hundreds of thousands of dollars to quit their teams and leave.

The Japanese public, however, appear to have decided to trust their government to overcome the crisis. This is possible because of the public’s firm belief that their government will not lie to them no matter how inept it may be.

I have no idea what basis Cha has for his claim that the Japanese people trust their government. As we have seen, that’s not what the empirical research suggests. The last sentence of Cha’s column, in fact, makes me wonder if the whole thing might be tongue-in-cheek.

Nonetheless, Cha illustrates a leitmotif that characterizes a fair amount of Fukushima commentary, and that resonates with your comment: the notion that the Japanese are more communitarian, patient, fatalistic, and trusting than westerners, and that these traits have led them to take the events since the March 11 earthquake (and the government’s handling of those events) more in stride than westerners would have done.

There is surely some truth to this. The Japanese term gaman is usually translated as “perseverance,” but a better definition (from Wikipedia) is “enduring the seemingly unbearable with patience and dignity.” The term is deployed by foreign observers to explain Japan’s reaction to any crisis, and it has been much-used with regard to Fukushima. (A Google search for “Fukushima gaman” on April 19 got 184,000 hits.)

Writing in the Jakarta Post, Ashwini Devare puts it this way:

So much has been written about gaman and its significance in the Japanese context. “Calm endurance”, “control for the sake of others” and “bearing the unbearable” are some interpretations of this word, widely credited for the incredible stoicism displayed by the Japanese during this crisis.

What karma is for Hindus, gaman is for the Japanese. It is not something they practice consciously as they go about their day-to-day lives, rather, it is intuitive and therefore, intrinsic to their behavior and thinking.

“We don’t really stop and think about gaman too much, because for us, there’s no second way,” says Yusei Watanabe, an expatriate Japanese whose family lives 60 kilometers from the nuclear plant in Fukushima.

“You know there is hardship and you just plough through. I think it is difficult for foreigners to understand the nuances of gaman, but for us, it is just something we have grown up with.” …

“It is a very simple one-sentence philosophy,” explains Watanabe. “We were taught that whatever conveniences you, inconveniences another. My family is in Fukushima and yes, it is dangerous for them to be there, but there is no concept of just getting up and leaving town. They will never abandon the community.”

I doubt that gaman implies trust in government – although when I Googled “gaman trust” I did unearth this March 16 tweet: “‘Gaman’ is based on trust in authority. Alternative – to shout or complain, will yield no positive results.” That tweet notwithstanding, my best guess is that, yes, the Japanese are more communitarian and more patient than westerners; and yes, they may show more fortitude, more gaman, in crisis situations – but no, they don’t trust their government. And they trust it less when it fails to tell them what it knows and fears about the ways a crisis may worsen.

A communitarian society committed to a philosophy of gaman may well have less need for trust in government than more individualistic societies; people can trust each other and rely on each other for mutual support. But in a massive societal crisis like the one Japan has been enduring since March 11, only the government has the information the public needs to decide what to expect, what to do, and even what to feel. Under those circumstances, distrust of the government is a severe handicap – a handicap greatly exacerbated when the government is reluctant to say alarming things about what may happen next.

Today, I think, the Japanese government is doing a much better job of responsible alarming speculation. But it took officials nearly a month to acquire the wisdom or courage to speak openly about the ways things might get worse. (They did so just as the probability was increasing that things might not get worse.) I believe the delay did damage – not just to trust in government in Japan, but also to the ability of the Japanese people to cope with the crisis. They have coped nonetheless, and earned the world’s admiration. But the government’s failure to speculate about alarming possibilities made coping harder than it had to be.

The broader question

On the narrower question you raise – whether alarming speculation in a crisis is less important when talking to Japanese publics than when talking to western publics – my tentative answer is no.

But that’s the conclusion I was hoping and expecting to reach … so don’t put too much trust in it.

The broader question is whether the principles of risk communication, especially crisis communication, are universal or culture-specific.

I started my career thinking they were probably culture-specific. I used to be very reluctant to give risk communication advice in unfamiliar cultures; I would beg the client to bring in a communication professional from within that culture to partner with me. I still think that’s a good idea, but not because the principles vary.

Based on decades of experience (not just mine but also Jody’s, much more of which is in Asia – though not Japan), I’m pretty confident now that risk communication principles are universal. The fact that outrage determines hazard perception is universal. The resistance of outraged people to evidence that hazard is low is universal.

As for crisis communication when hazard is high, the World Health Organization held an international meeting in Singapore in 2004 (funded by the governments of Japan, Ireland, Singapore, and Canada) to develop crisis communication principles for infectious disease outbreaks. Jody prepared draft background materials for the meeting and delivered one of the keynotes – but the majority of participants were non-westerners, including two Japanese officials. The consensus document that resulted, entitled “WHO Outbreak Communication Guidelines,” link is to a PDF file was divided into sections on “Trust,” “Announcing early,” “Transparency,” “The public,” and “Planning.”

There is nothing in the WHO Outbreak Communication Guidelines that needs an asterisk to limit its applicability to certain cultures. WHO has had considerable difficulty getting member states to obey the guidelines – and considerable difficulty obeying them itself – but never to my knowledge because they were inapplicable to the culture in question. They are universal. And they are universally resisted by officials when they’re under the gun … not infrequently on the fallacious grounds that “that wouldn’t work in this culture.”

The WHO guidelines don’t quite nail the speculation issue; they don’t instruct officials explicitly to tell the public what’s likely to happen and what they’re worried might happen. This excerpt from the section on “Announcing early” comes close:

People are more likely to overestimate the risk if information is withheld. And evidence shows that the longer officials withhold worrisome information, the more frightening the information will seem when it is revealed, especially if it is revealed by an outside source….

Early announcements are often based on incomplete and sometimes erroneous information. It is critical to publicly acknowledge that early information may change as further information is developed or verified.

Although I think the principles of risk communication and crisis communication are universal, the applications are not. The codes are different from one culture to another – the way people apologize, for example, or the way they express their outrage.

Are there any exceptions to my claim that the principles are universal? There may be; that’s why I needed to take your comment very much to heart. But I haven’t found any yet.

I thought for a while that maybe you had found one … but now I don’t think so. No one likes being blindsided by bad news the authorities anticipated but failed to mention. No one likes being left alone with imagined worst case scenarios and the fear they engender. Everyone copes better with crisis if given anticipatory guidance – that is, if the authorities are willing to share their alarming, as well as their reassuring, speculations.

Note: My wife and colleague Jody Lanard contributed to this response.

Erik responds:

Thanks a lot for taking the time to provide such a thorough reply! – a very interesting read that makes a lot of sense.

From my own experience and countless commentaries, there are indeed a lot of cultural differences between Japan and the rest of the world. But I also think that language barriers may inflate the image of Japan as uniquely different. And it certainly makes sense, in the absence of good data, to assume that what holds true for the rest of the world also is applicable in Japan. I guess major elements of your field of crisis communication are basic human emotions like fear, anxiety, hope, etc., which really may follow very similar mechanics even across widely different cultures.

I do try to ask my Japanese friends about these matters from time to time, but it’s difficult to form a solid opinion from anecdote. And my proficiency in the language is unfortunately not strong enough to get a clear understanding of general public opinion as it is expressed through Japanese media.

As you point out, the Japanese have a notoriously low trust in government. My knowledge about political life in Japan is not great, but I believe one of the reasons may be that there’s a wide gap between “normal people” and politicians in Japan. In my understanding, high-ranking officials seldom come out of any grassroots movement, labor union, etc. They are more likely to come from an elite background, attend specific universities, and sometimes even “inherit” positions from their parents (i.e. fathers, since most Japanese politicians are male). Compare that to Sweden. Although a political elite certainly exists there also, it is virtually impossible to reach the higher echelons without paying dues and slowly rising through the ranks.

Also, as you may be aware, Japanese politics are not very stable. Five different prime ministers in five years probably adds to the disconnect between ordinary people and politicians.

It is quite interesting that, as you point out, the low trust in Kan’s government does not seem to have dropped much during the crisis; if anything it seems to have increased ever so slightly. People may generally feel that the non-nuke aspects of the crisis have been dealt with in a good way, whereas the nuke part has been handled badly (I remember seeing such a poll). So maybe the negative trust from Fukushima communication is balanced out to some degree by an increase in trust from communication regarding tsunami victims, etc.

One thing is for sure: This three-tiered disaster will be the subject of study for many years to come, across all disciplines ranging from the humanities to engineering.

Fukushima mistrust

name:Declan Butler
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Reporter, Nature
date:April 11, 2011
location:France

comment:

I’ve read your most recent Fukushima updates with interest:

I’m following up for Nature on both data issues (see for example “Fukushima update: Data, data, everywhere”) and risk communication issues.

We may well cite some of your recent posts; I just thought I’d check if you have any draft posts in preparation giving an update of your thinking.

For all our coverage of Fukushima see our news special at http://www.nature.com/news/specials/japanquake/index.html.

Peter responds:

Jody and I are not on the verge of publishing anything right now – we’re just finishing up a vacation in Spain!

But if we were going to write something new tonight, it would emphasize that far too many commentators are misinterpreting as “irrationality” or “excessive concern” what is actually rational and proportionate mistrust … especially mistrust about the future.

In a sentence: Officials have no right to consider the public irrational, hysterical, or radiophobic once they have taken into consideration the public’s justified, cognitive (not just emotional) mistrust of official pronouncements about anything to do with Fukushima.

The public (both in Japan and elsewhere) has figured out two things about Fukushima.

  • What might happen next is a potentially bigger problem than what has happened so far.
  • Governments, experts, and authorities of all sorts have been consistently behind the curve in talking openly about what might happen next.

The authorities have pretended that a situation that is not disastrous YET (except locally) must be not disastrous period, which means that anybody who’s worried is being “irrational” or “hysterical” or “radiophobic.”

Given the authorities’ record of consistently understating how bad things may get, the public has learned to mistrust. People have rightly learned to mistrust that the current situation is as bad as things will get and have rightly learned to mistrust that the authorities will tell them if things are likely to get worse.

As a result, people have (perhaps rightly, perhaps mistakenly) also learned to mistrust that the authorities will tell them the truth if things do in fact get worse. So a variety of precautions that might be considered excessive if you trusted the authorities – avoiding Japanese foods, for example, and seeking out a supply of potassium iodide – become rational.

Experts and officials who consider themselves trustworthy interpret these precautions as irrational, hysterical, or radiophobic. Once you get it that experts and officials have not been trustworthy, these precautions seem pretty sensible. (Of course we don’t know yet whether they will turn out to be necessary or not.)

Note: My wife and colleague Jody Lanard collaborated on this response.

More on Fukushima crisis communication: The failure to speculate

name:Patrick Tucker
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Senior editor, The Futurist
date:April 1, 2011
location:Washington, D.C.

comment:

It’s been about two weeks since the nuclear power plant in Fukushima, Japan entered a state called partial meltdown. In that time, Japan has seen an exodus of foreign executives, a nearly 16% drop in its stock market (which has since partially rebounded), and shortages of food linked to hoarding. All this is occurring despite the fact that nuclear experts say that even the effects of a full-scale meltdown would be limited to the area around the plant.

I’ve been watching the ongoing news coverage of Fukushima Daiichi both in Japan and the United States. The Kan administration has been roundly criticized for its PR response to the situation. Have you been watching this situation at all? If so, I hope you might be willing to answer a couple of questions.

  1. What were the specific crisis management missteps of the Kan administration?
  2. What should they do now?
  3. What is the best corollary to the present situation? Does another incident come to mind that could be instructive?

Even if you haven’t been watching the situation, I hope you might answer the following:

  1. How would you advise a public official dealing with a potential nuclear meltdown to communicate the risks to the public without alarming them?

Peter responds:

I’ll answer your four questions in the order you posed them.

What were the specific crisis management missteps of the Kan administration?

I will confine myself to crisis communication, the only aspect of crisis management where I have some expertise. And I will focus on one specific Fukushima crisis communication failure, the one I consider most serious: the government’s failure to speculate publicly about what-if scenarios it was certainly considering privately.

Every crisis raises three key questions:

  • What happened – and what are you doing to respond to it, and what should we (the public) do?
  • What’s likely to happen next – and what are you doing to prepare for it, and what should we do?
  • What’s not so likely but possible and scary, your credible worst case scenario – and what are you doing to prevent it (and prepare for it in case prevention fails), and what should we do?

There are other questions as well, of course – notably questions about how the crisis will affect “me” (my health, my family, my home, my income, my community). And there will be still other questions once the crisis ebbs or stabilizes – especially questions about blame.

But regarding external events – regarding the crisis itself – the three key questions the public asks are always: what happened, what do you expect to happen, and what are you worried might happen.

With regard to Japan’s March 11 earthquake and tsunami, the first question was far more important than the second and third. March 11 was a tragedy but not a cliffhanger.

The resulting nuclear crisis at the Fukushima power plants, on the other hand, was a cliffhanger but not a tragedy (at least not early on) – so the second and third questions were the crucial ones. As I write this on April 1, they still are. “What happened” remains a largely unanswered question; the answers keep changing as the situation evolves. But as long as the Fukushima crisis threatens to deteriorate, the most important questions will be about the future, not the past.

By far the biggest crisis communication error of the Japanese government has been its failure to answer the second and third questions satisfactorily: its failure to forewarn people about tomorrow’s and next week’s probable headlines, and its failure to guide people’s fears about worst case scenarios.

The second and third questions – what are you expecting (“likeliest case”) and what are you most worried about (“worst case”) – are the inevitable forward-looking questions in any crisis, personal as well as societal. They are what we ask our plumber or our doctor. They are what we were asking about Fukushima from Day Two. (Day One was mostly about the first question, what happened.) And the government of Japan rarely gave us adequate answers.

Talking about what’s likely and what’s possible is necessarily speculative. Some commentators and even some crisis communication professionals have argued that authorities shouldn’t speculate in a crisis. This is incredibly bad advice.

Imagine a weather forecaster who refused to say where the hurricane was likely to be headed: “Here’s where it is now. We can’t be sure where it’s going, and we certainly wouldn’t want to speculate. Tune in again tomorrow and we’ll tell you where it went.” Or imagine a weather forecaster who refused to say how bad the hurricane might get: “It’s only Category 3 now. Any mention of the possibility of it strengthening to Category 4 or 5 and what that could mean to the cities in its path would be strictly speculative.”

In fact the Japanese government didn’t shy away from reassuring speculation, only from alarming speculation. Officials were happy to predict that they would probably get power to the pumps soon and would then be able to cool the plants properly, for example.

But they failed to predict that there would probably be increasing radiation levels in local milk, vegetables, and seawater; that Tokyo’s drinking water would probably see a radiation spike as well; that plutonium would probably be found in the soil near the damaged plants; that the evidence of core melt would probably keep getting stronger; that all that water they were using to cool the plants would probably become radioactive, probably make repair work more difficult and more dangerous, and probably begin to leak; etc. After each of these events occurred, the government told us they were predictable and not all that alarming. But it failed to predict them.

My guess is that officials did in fact predict most of these events – privately. But they failed to predict them publicly.

It would have been a mistake – the opposite mistake from the one they made – for officials to predict all these events to the public confidently. They may have been confident about some of them but highly uncertain about others. Whatever their actual level of confidence/uncertainty was, whatever level of confidence/uncertainty they were expressing in their private planning meetings, that’s the level of confidence/uncertainty they should have expressed to the public. This is what responsible speculation requires: telling the public that X is probably going to happen, Y is a 50-50 shot, and Z wouldn’t be a huge surprise but it’s not really likely either.

My 2004 column on “Acknowledging Uncertainty“ offers specific advice on how to communicate your level of confidence/uncertainty. It’s not terribly hard to do, once you decide to do it. But especially when you’re talking about more bad news in what is already a devastating crisis, it’s very hard to persuade yourself and your organization to do. The nearly universal temptation is to keep quiet about your gloomy but still tentative expectations, hoping against hope that they won’t happen … until they do happen and you have no choice but to say so.

Officials not only failed to speculate responsibly about their gloomy but still tentative expectations. They also failed to address still more alarming (and still less likely) worst case scenarios: what if there’s an explosive breach of a containment, propelling nuclear fuel fragments high into the atmosphere; what if fuel elements melt together and achieve recriticality; what if we have to think seriously about evacuating Tokyo; what if we can never reoccupy a significant chunk of Japan; etc.

Because the government avoided alarming speculation, the people of Japan (and the world) kept learning that the situation at Fukushima was “worse than we thought.”

This violates a core principle of crisis communication. In order to get ahead of the evolving crisis instead of chasing it, crisis communicators should make sure their early statements are sufficiently alarming that they won’t have to come back later and say things are worse than we thought. Far better to be able to say instead: “It’s not as bad as we feared.”

Among the reasons why officials have been reluctant to speculate alarmingly is, undoubtedly, a fear of panicking the public. But despite some condescending newspaper columns about “radiophobia,” there is little evidence of nuclear panic in Japan or elsewhere. Nuclear skepticism, nuclear distrust, nuclear dread … but not nuclear panic.

Crisis communication experts know that panic in emergency situations is rare. People may very well feel panicky, but panic isn’t a feeling. Panic is a behavior. Panic is doing something dangerous and stupid – something you know is dangerous and stupid – because you are in the grip of intolerable emotion and can’t stop yourself.

It isn’t panic to try to get your hands on some potassium iodide, even if you’re thousands of miles from Fukushima and vanishingly unlikely to need it. Panic is when you knock down your grandmother in your haste to get to the drugstore before it runs out of potassium iodide.

And it certainly isn’t panic to stockpile food and water in case these necessities become contaminated or their supply lines are disrupted. That’s simply prudence. It is what disaster experts often recommend – right up until the disaster happens. Then suddenly their tone turns disapproving, and they call the stockpilers “hoarders” and accuse them of panicking. The government has been appropriately empathic about the suffering of victims – the families of the dead and missing, the evacuees sheltering for weeks in school gymnasiums. But there has been precious little empathy for the millions who were rationally worried that they might be among the next victims.

Moreover, the Japanese government’s failure to speculate alarmingly didn’t “protect” the public from alarming speculation. It simply left people speculating on their own, and listening to the speculations of outside experts (and outside non-experts). Knowing that public and media speculation are inevitable, a wise crisis manager guides the speculation, rather than boycotting it or condemning it.

The bulk of the criticism of the government’s crisis communication has assailed it for failing to provide information promptly and honestly. (The same charge has been leveled against TEPCO, by the government among others.) There is doubtless some truth to this charge … and a few months or years from now we may know that withholding information was the most serious sin of Fukushima crisis communication.

But I doubt it. For the most part, I suspect, the government has told us what it knew for certain. Its biggest sin has been failing to tell us enough about what it guessed, what it predicted, and what it feared.

These failures felt dishonest. And in a sense they were dishonest. We kept hearing alarming speculations from outside academics and anti-nuclear activists and even the U.S. government and the International Atomic Energy Agency that we weren’t hearing from the government of Japan. We kept waking up to bad news that the Japanese government hadn’t told us might be coming. We rightly judged that the government was failing to keep us on top of the situation – but not, I think, because it wasn’t telling us what it knew; rather, because it wasn’t telling us what it guessed, predicted, and feared.

When a bad thing happens without warning midway through an evolving crisis, there are only three possible explanations:

  • The authorities had reason to think it was going to happen, and decided not to forewarn people – not to give the public time to prepare (emotionally as well as logistically).
  • The authorities knew they didn’t know what was going to happen, and decided not to tell the public that – not to tell us that the situation is unpredictable and warn us to expect scary surprises.
  • The authorities thought they had a better handle on the crisis than they actually had – and the new development is as shocking to them as it is to the public.

The truth is usually some mix of the three.

To the extent that the Japanese government had reason to expect particular bits of bad news, it should have said so. It is absurd, for example, that the 12 million people in Tokyo were not warned to stockpile at least a modest supply of a readily available resource – tap water – in advance of potential contamination.

And to the extent that the Japanese government knew it didn’t know what to expect, it should have said that. Acknowledging uncertainty, ignorance, and the resulting inevitability of scary surprises is itself a kind of forewarning. Even if you can’t prepare logistically for what you don’t know is coming, you can at least prepare emotionally to be surprised.

What harm has resulted from the Japanese government’s unwillingness to speculate?

There has been damage, obviously, to the credibility of the Japanese government, and therefore to its ability to lead its people through the hard times ahead. There has been damage to the future of nuclear power, exacerbating the damage done by the crisis itself.

The worst damage may be the public’s growing sense that the Fukushima crisis is out of control and uncontrollable, that it cannot be predicted and is therefore greatly to be feared. Perhaps that very frightening assessment will turn out to be an accurate one. But if the crisis does stabilize and begin to ebb, if we stop waking up every morning to further bad news from Fukushima, if worst case scenarios start coming off the table in the minds of experts, will the public notice and believe? If it doesn’t, that will be largely a legacy of the Japanese government’s unwillingness to speculate.

What should they do now?

Obviously, be willing to speculate – and learn how to speculate responsibly. Jody Lanard and I entitled our 2003 essay on crisis speculation “It Is Never Too Soon to Speculate.” It is never too late to speculate either.

In recent days, I think, the Japanese government has been more willing to address my second and third questions – to tell people what bad news it considers likely in the days ahead and what worse scenarios it is taking seriously, preparing to cope with, and trying to avert.

The short-term effect of this increased candor about likely and possible futures may well be increased concern. Journalists and the public are picking up on the change in tone, and some are interpreting it as evidence that the situation at Fukushima is worsening. When even the government says things look bad, some people figure, things must look very bad indeed. This is inevitable when officials switch from stonewalling and over-reassuring to responsible speculation.

I hope the government stays the course. In fact, I hope it focuses even more on becoming Fukushima’s Cassandra, not its Pollyanna. If predictable bad things happen (as they surely will), the government’s having predicted them will help keep people from overreacting to them. If the crisis worsens (as it may), the government’s pessimism will at least have alerted us to this real possibility. And if the crisis eases (as we all hope it will), I look forward to the day when the Japanese government will have earned the right to say to the public, “it’s not as bad as we feared.”

Then it will be time to address the much smaller problem of being accused of having “fear-mongered.” That accusation is almost inevitable when crisis communication has been well handled.

That’s the crisis communicator’s choice. Either you over-reassure people, fail to forewarn them about likely bad news to come and possible worst case scenarios, and leave them alone with their fears. Or you treat them like grownups, tell them what you expect and what you’re most worried about, and help them bear their fears. In the former case, they are forced to endure scary surprises, lose their trust in you, and have trouble noticing when the crisis is over. In the latter case, they prepare for the worst, manage their fears (and the situation itself) better … and end up a little irritated at you for having been so alarmist.

What is the best corollary to the present situation? Does another incident come to mind that could be instructive?

The most compelling precedent for Fukushima is of course Three Mile Island. Like Fukushima (and unlike Chernobyl), it was a cliffhanger too.

Seven years ago, on the 25th anniversary of the 1979 Three Mile Island nuclear accident, I wrote an article entitled “Three Mile Island – 25 Years Later.” link is to a PDF file In it I listed what I saw as the most enduring crisis communication lessons of the Three Mile Island Accident.

Several of these lessons strike me as relevant to Fukushima, and the rest of this section is adapted from that article.

Pay attention to communication.

Three Mile Island was technically much less serious than Fukushima; it was a near miss, but very little radiation was actually released. No local crops were contaminated. Pregnant women and young children were evacuated, but that turned out to have been unnecessary. What went wrong at TMI – really, really wrong – was the communication.

Communication professionals were minor players at TMI. I was at Three Mile Island, first for the Columbia Journalism Review (covering the coverage) and later for the U.S. government commission that investigated the accident. In the latter capacity, I asked Jack Herbein, the Metropolitan Edison engineering vice president who managed the accident, why he so consistently ignored the advice of his PR specialist, Blaine Fabian. (Risk communication hadn’t been invented yet.) He told me, “PR isn’t a real field. It’s not like engineering. Anyone can do it.”

That attitude, I think, cost MetEd and the nuclear power industry dearly. And that attitude continues to dominate the nuclear industry, contributing to one communication gaffe after another. Nuclear power proponents keep shooting themselves in the foot for lack of risk communication expertise.

I don’t know if TEPCO or the Japanese government has any in-house risk communication or crisis communication professionals, and I don’t know if either brought in outside risk communication or crisis communication advisors. I’m guessing the answers were no and no, at least in the first couple of weeks. There have been some signs of improved “uncertainty communication” and “worst case communication” in the last few days.

Don’t lie – and don’t tell half-truths.

Companies and government agencies try hard not to lie outright, but they usually feel entitled to say things that are technically accurate but misleading – especially in a crisis when they are trying to keep people calm. Ethics aside, the strategy usually backfires. People learn the other half of the truth, or just sense that they aren’t being leveled with, and that in itself exacerbates their anxiety as it undermines their trust in officialdom.

Here is one spectacular example of a not-quite-lie from Three Mile Island. (We don’t know yet if there are comparable examples from Fukushima.)

The nuclear power plant in central Pennsylvania was in deep trouble. The emergency core cooling system had been mistakenly turned off; a hydrogen bubble in the containment structure was considered capable of exploding, which might breach the core vessel and cause a meltdown.

In the midst of the crisis, when any number of things were going wrong, MetEd put out a news release claiming that the plant was “cooling according to design.” Months later I asked the PR director how he could justify such a statement. Nuclear plants are designed to survive a serious accident, he explained. They are designed to protect the public even though many things are going wrong. So even though many things were going wrong at TMI, the plant was, nonetheless, “cooling according to design.”

Needless to say, his technically correct argument that he hadn’t actually lied did not keep his misleading statement from irreparably damaging the company’s credibility.

Get the word out.

Most government agencies and corporations respond to crisis situations by constricting the flow of information. Terrified that the wrong people may say the wrong things, they identify one or two spokespeople and decree that nobody else is to do any communicating. In an effort to implement this centralized communication strategy, they do little or nothing to keep the rest of the organization informed.

There is certainly a downside to authorizing lots of spokespeople; the mantra of most crisis communication experts is to “speak with one voice.” But I think the disadvantages of the one-voice approach outweigh the advantages. This approach almost always fails.

It failed at Three Mile Island. Reporters took down the license plate numbers of MetEd employees, got their addresses, and called them at home after shift. Inevitably, many talked – though what they knew was patchy and often mistaken. The designated information people for the U.S. Nuclear Regulatory Commission and the utility, meanwhile, had trouble getting their own information updates; those in the know were too busy coping with the accident to brief them. (The lesson here: There need to be technical experts at the scene whose designated job is to shuttle between the people who are managing the crisis and the people who are explaining it. As far as I can tell, nobody was assigned that role at Fukushima.) The state government felt its own information was so incomplete that Press Secretary Paul Critchlow asked one of his staff to play de facto reporter – trying to find out what was going on so Critchlow could tell the media … and the Governor.

In today’s world of 24/7 news coverage and the Internet, the information genie is out of the bottle. If official sources withhold information, we get it from unofficial sources; if official sources speak with one voice, we smell a rat and seek out other voices all the harder … and find them.

But crisis information wasn’t controllable three decades ago in central Pennsylvania either. As my wife and colleague Jody Lanard likes to point out, even in the pre-Gutenberg era, everyone in medieval villages knew when troubles were brewing. The information genie never was in the bottle. Keeping people informed and letting them talk is a wiser strategy than keeping them ignorant and hoping they won’t.

Err on the alarming side.

This is the Three Mile Island crisis communication lesson of greatest relevance to Fukushima.

In the early hours and days of the Three Mile Island accident, nobody knew for sure what was happening. That encouraged Metropolitan Edison to put the best face on things, to make the most reassuring statements it could make given what was known at the time. So as the news got worse, MetEd had to keep going back to the public and the authorities to say, in effect, “it’s worse than we thought.”

This violated the cardinal rule of crisis communication I discussed in my first answer: Always err on the alarming side, until you are absolutely 100% certain the situation cannot get any worse.

In the three decades since TMI, I have seen countless corporations and government agencies make the same mistake. Its cost: The source loses all credibility. And since the source is obviously underreacting, everybody else tends to get on the other side of the risk communication seesaw and overreact.

That’s why Pennsylvania Governor Dick Thornburgh ordered an evacuation of pregnant women and preschool children. MetEd was saying the amount of radiation escaping the site didn’t justify any evacuation – and MetEd, it turns out, was right. But MetEd had been understating the seriousness of the accident from the outset. When the head of the Pennsylvania Emergency Management Agency misinterpreted a radiation reading from a helicopter flying through the plume, thinking it was probably an offsite reading of exposures reaching populated areas, Thornburgh didn’t even check with the no-longer-credible utility (which could have told him PEMA had misunderstood the situation). He decided better safe than sorry and ordered the evacuation.

In contrast to Metropolitan Edison, the Pennsylvania Department of Health adopted an appropriately cautious approach. The Health Department was worried that radioactive Iodine 131 might escape from the nuclear plant, be deposited on the grass, get eaten by dairy cattle, and end up in local milk. Over a two-week period health officials issued several warnings urging people not to drink the milk. Meanwhile, they kept doing assays of the milk without finding any I-131. Their announcements moved slowly from “there will probably be I-131 in the milk” to “there may be I-131 in the milk” to “there doesn’t seem to be I-131 in the milk, but let us do one more round of testing just to be sure.”

By the time the Health Department declared the milk safe to drink, virtually everyone believed it. While the Health Department’s caution hurt the dairy industry briefly, the rebound was quick because health officials were credibly seen as looking out for people’s health more than for the dairy industry’s short-term profits.

By contrast, the Japanese government said nothing in advance about even the possibility of radioactive milk, and then it suddenly announced that it had tested the milk from around Fukushima (apparently secretly), found more radioactivity than it considered acceptable, and decided to ban its sale. If and when the milk is deemed safe again, I wonder how soon anyone will believe it.

How would you advise a public official dealing with a potential nuclear meltdown to communicate the risks to the public without alarming them?

I wouldn’t! Why on earth wouldn’t you want to alarm people about a potential nuclear meltdown?

There is a purpose to alarming people, after all. You want to motivate them to put aside more ordinary concerns and focus on the crisis. You want them to start thinking about what they should do to protect themselves, their loved ones, and their community – what they should do now, and what they may need to do soon if the situation gets worse. You want them to get through their adjustment reaction (a brief over-reaction to a new risk), gird up their loins, and prepare themselves not just logistically but also emotionally.

My crisis communication clients often want the public to take precautions … but don’t want the public to get alarmed. But the main reason people take precautions is that they are alarmed.

One crucial goal in risk communication, therefore, should always be to achieve a level of public concern commensurate with the actual risk – or at least commensurate with the experts’ level of concern, since the “actual risk” may be unknown. When the actual risk (or the experts’ concern) is low, you want people to stay calm (or calm down); you don’t want them focusing undue attention on a tiny risk. But when the actual risk (or the experts’ concern) is high, the level of public concern should be high too – perhaps too high for the word “concern” to capture. (You don’t install “fire concerns” in buildings; you install “fire ALARMS.”) Even “alarm” may not capture it. Sometimes, in really bad times, you should be aiming for fear.

That’s true even if the current situation isn’t very serious. Don’t forget the “pre” in “precaution.” Ideally, precautions are things you do (or at least prepare to do) before the risk is imminent. Since a key goal of alarming people is to motivate precaution-taking, you need to alarm them about what might happen, not just what’s already happening. Japan’s earthquake and tsunami were so deadly mostly because there was no time for precautions, no time to alarm people before their risk was imminent.

The Fukushima crisis has allowed plenty of time to ramp up people’s alarm … and preparedness. One of the most frequent non-sequiturs in Fukushima crisis communication has been to assure the public that there’s no reason to be alarmed because the current level of radiation (except right near the plants) isn’t dangerously high. But what’s most frightening about Fukushima (except right near the plants) isn’t the level of radiation so far; it’s what might happen that could send the radiation literally through the roof.

In crisis communication, the goal isn’t to keep people from being fearful. The goal is to help them bear their fear (and the situation that provokes it), and to help them make wise rather than unwise decisions about precautions.

Arguably the cardinal sin in crisis communication is to tell people not to be afraid. If your false reassurance succeeds, they are denied the time they need to prepare. If your false reassurance fails, all you’ve accomplished is to leave people alone with their fear – prompting them, justifiably, to take guidance from sources other than you, and frittering away your own credibility and thus your capacity to lead them through the worsening crisis that may be coming.

My clients hate this advice. Their fear of fear – their reluctance to frighten the public even when the situation is legitimately frightening – results partly from what I call “panic panic”: the mistaken tendency of officials to imagine that the public is apt to panic or is already panicking.

Publics rarely panic in emergencies. They are especially unlikely to panic when they feel they can trust their leaders to tell them scary truths … that is, when they feel their leaders are trusting them to bear scary truths.

There is a downside to frightening the public, but it isn’t panic. The downside is that the crisis may ease instead of worsening, and with 20-20 hindsight people will blame you for frightening them unnecessarily. In the winter of 2009–2010, the U.K. went through an unexpectedly severe winter and an unexpectedly (and blessedly) mild swine flu pandemic – and the U.K. media reproached officials (sometimes on the very same day) for having bought too little road grit and salt and too much vaccine. But it’s not damned if you do and damned if you don’t. The repercussions (and thus the recriminations) of under-preparedness are a lot more harmful than those of over-preparedness. When it comes to warning – and frightening – the public about a crisis that could get worse, it’s darned if you do and damned if you don’t.

Note: My wife and colleague Jody Lanard provided input to these responses.

Mental models in risk communication – and mental models about risk communication

name:Brian
field:Information assurance
date:March 26, 2011
location:Iowa, U.S.

comment:

Thank you for making your articles available to the general public. There are a lot of facets of your risk communication taxonomy that are universally relevant. I’ve only just stumbled across your resources and work on risk communications in my efforts to better describe and prioritize data security threats. Your online articles on “Accountability” and “Explaining Risk” have proved very insightful lately.

Beyond the random “thank you,” I thought you might be interested in another paper I spotted being discussed on a prominent computer security researcher’s blog. In “Folk Models of Home Computer Security,” the author (Rick Wash) investigates how non-experts think about and work with computer systems and complex technology.

I’m curious what “folk models” you’ve encountered in working with mere mortals in the context of risk management and how they affect different stakeholders in their learning and management of risk.

Peter responds:

Thanks for your kind words about the articles on my website. It always pleases me enormously when somebody from a different field stumbles across my work and finds it useful.

I enjoyed Rick Wash’s paper on “Folk Models of Home Computer Security” – all the more so as I am a heavy computer user who falls into several of the cognitive traps Wash identifies.

The Mental Models Approach

There is a well-developed literature in risk communication that addresses similar issues, the “mental models” work done by Granger Morgan, Baruch Fischhoff, and colleagues at Carnegie Mellon University. See “Risk Communication: A Mental Models Approach” for the classic book in the area – but of course there are plenty of articles as well, some of them available online.

The mental models approach is much more cognitively oriented than my “outrage”-grounded approach to risk communication. It focuses on finding out the content and structure of an audience’s belief systems about a risk issue in order to develop communications that address those belief systems systematically – conforming your message to the audience’s preexisting beliefs when you honestly can, and when you must contradict those beliefs, doing so consciously and carefully rather than blunderingly.

The key principle here: If you don’t know your audience’s preexisting beliefs, you are likely to produce messages that run afoul of those beliefs without knowing you are doing so, without thinking hard about how to do so, and often without having needed to do so in order to make the points you were trying to make.

The notion that people in fact have preexisting beliefs is simultaneously obvious and profound. So much risk communication – and indeed communication of all kinds – seems to assume that the audience is a tabula rasa, an empty vessel. That misperception is a huge strategic defect, since correcting misinformation is a different and more difficult task than merely providing new information.

It is also an ethical defect. At its best, the mental models approach pushes risk communicators to realize that the “audience” is made up of people who want to participate in the conversation, have a right to participate in the conversation, and have something of value to add to the conversation.

The mental models approach can also push risk communicators to realize that people’s belief systems are grounded not just in factual claims but also in values. (That might be less true of belief systems regarding home computer security than it is of belief systems regarding, say, nuclear power.) Both the goals of communication and its means change radically when we come to realize that our disagreements aren’t just about the facts.

One of the core assertions of the Carnegie Mellon team is that it takes research to learn the content and structure of other people’s beliefs. When we guess, they keep telling us, we’re almost certain to guess wrong. They have lots of examples to demonstrate that our belief systems about other people’s belief systems are usually deficient.

I don’t deny this claim. But the biggest deficiency is failing to take into account that other people actually have beliefs, and not even trying to guess what they are. “Why don’t they want to live near our factory?” “Why do they think our data can’t be trusted?” “Why are they trying to defeat our company and even our technology?” Data-based answers to these questions are surely better than guesses. But guesses are enormously better than never asking the questions and brainstorming potential answers. The process of taking a client from unconscious assumptions to explicit guesses is useful, even if the client doesn’t find the time and budget to test the guesses. (Of course it helps if the client remembers that the guesses haven’t been rigorously tested, and keeps revising them based on experience.)

Playing Donkey

My concept of the game of “Donkey” (see “Games Risk Communicators Play: Follow-the-Leader, Echo, Donkey, and Seesaw”) is relevant to Wash’s work, I think. “Donkey” is the game a communicator plays when trying to convince someone who believes Y that he or she ought to believe X instead. It is played in two steps.

First, you must validate the other person’s belief in Y, without validating Y itself (since Y is what you wish to challenge). People resist learning that they’re mistaken when they feel like “mistaken” is tantamount to “stupid.” So the goal of the first step is to make explicit why the other person isn’t stupid to believe Y – it’s common sense, it’s widely believed, it’s what we all learned at our mothers’ knees or from the media, it used to be true, whatever.

In the second step, you take the other person on a journey from Y to X. All sorts of rhetorical tools are available: data, anecdotes, experience (a field trip), emotional appeals, third-party endorsements, etc. What matters is that the topic isn’t X, as it would be if you were talking to someone with no prior opinion. The topic is the journey from Y to X.

Mental Models about Risk Communication

Your email and Wash’s paper also provoked me to do some thinking about my clients’ mental models of the public and the communication process.

Seeing the public as a tabula rasa or an empty vessel with no mental models of its own is itself a mental model – a very damaging one. Seeing the communication process as one-way – writing on the tabula rasa, filling the empty vessel – is a closely associated mental model, and it too is pernicious. But just as my clients must learn to respect the mental models of their audience, even those they are trying to alter, as a consultant I need to respect the mental models of my clients. I probably need to think more deeply than I have about my clients’ belief systems about risk communication.

Better yet (following the advice of the Carnegie Mellon team) would be systematic research into what corporate and government communicators believe about risk communication. There has been some research along these lines, but a lot more is needed.

In fact, that’s arguably the single most important risk communication research priority, especially in the areas of outrage management and crisis communication. Although risk communication experts have some remaining disagreements about what works and what doesn’t (see for example “‘Speak with One Voice’ – Why I Disagree”), there’s actually a fair amount of expert consensus about most of the key risk communication principles and strategies. The problem is that many front-line communication practitioners don’t agree with the expert consensus.

Some aren’t aware of it, of course. But when made aware of it, they often find it unconvincing.

Recent events in Japan have underlined for the umpteenth time the unlikelihood of true panic (see “Tsunami Risk Communication: Warnings and the Myth of Panic”), the importance of informing early (see “When to Release Information: Early – But Expect Criticism Anyway”), the value of anticipatory guidance/responsible speculation (see “It Is Never Too Soon To Speculate”), and other risk communication basics – and have underlined for the umpteenth time how often practitioners ignore these basics.

More than research proving that the basics are correct, we need research exploring why they are so hard to learn and so hard to teach. That is, we need research on communicators’ mental models of their audience and the communication process.

“Mental models” – not mental model. There will be more than one! For example, we need to distinguish communicators’ mental models of “the public” from their mental models of “opponents” – the subset of the public that is giving them a hard time.

And I suspect there are systematic differences depending on whose mental models we’re looking at. The “communicators” in my clients’ organizations that I’m likeliest to work with fall into five main groups:

  • Technical experts (engineers, public health experts, etc.)
  • Managers
  • Lawyers
  • Health, safety, and environmental professionals
  • Full-time communicators (public relations people, community consultation people, former journalists, etc.)

It would be interesting to correlate these five occupational categories with the five main risk communication approaches my clients are typically considering. (We’ll probably need some occupational sub-categories as well – PR and public consultation are very different mindsets, for example.) How do people in the various occupational categories think companies and government agencies should address their stakeholders?

  • Educate them – explain the data.
  • Ignore them – keep a low profile.
  • Attack them – tell everyone else that they’re either liars or fools.
  • Persuade them – find the right arguments (and suppress the facts that don’t fit).
  • Ameliorate their concerns – make concessions, give away credit, share control, etc.

The real payoff would be to dig deep into these various mental models. I could be a better risk communication consultant if I knew more about my clients’ belief systems about risk communication. At the moment, I am relying mostly on my decades-of-experience guesses or – worse yet – on my unconscious assumptions (which I sometimes dignify in my own mind as “expert intuition”). Data would be better.

Going one level deeper: What are the belief systems of risk communication consultants (me, for example) about their clients and their communications with their clients? Which of the five bullets just above best describes the consultant-client relationship? I don’t often ignore or attack my clients, but the other three are contenders. Am I trying to educate them? Am I trying to persuade them? Or, as the last bullet would recommend, am I taking their concerns (and their outrage) onboard, doing risk communication on behalf of risk communication? That’s what I try to do, but how often do I fall into one of the other – and I think lesser – approaches?

That may be deeper than I want to go right now….

Japan’s nuclear crisis: The need to talk more candidly about worst case scenarios

name:Anonymous
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Newspaper reporter
date:March 16, 2011
location:Canada

comment:

I’m a reporter working on a story that takes a look at the rhetoric/fear-mongering surrounding the disaster in Japan. I figured you’d have some awesome insights based on your field of work.

The story is pegged to these comments, from Europe’s Energy Commissioner Guenther Oettinger, on Tuesday:

  • “There is talk of an apocalypse and I think the word is particularly well chosen. Practically everything is out of control. I cannot exclude the worst in the hours and days to come.” [from: www.telegraph.co.uk]
  • “The site is effectively out of control. In the coming hours there could be further catastrophic events which could pose a threat to the lives of people on the island.” [from: msnbc.tumblr.com]

Do you have any reactions?

Peter responds:

Note: My wife and colleague Jody Lanard collaborated on this response.

Oettinger’s statements are at the upper end of dramatic. “Practically everything” isn’t out of control.

Statements like his result from the failure of officials to follow two crucial precepts of crisis communication.

  • Be incredibly respectful of and empathic about normal people’s fears and fantasies.
  • Get out in front of worst-case-scenario speculation by sharing the worst case scenarios that officials consider possible and worth planning for.

The main communication problem results from the public’s inability to know how much of the situation is under how much control, and what might happen if things get worse. Japanese officials have not helped us to understand that. Worse, they have not communicated in ways that encourage us both to trust that they are telling us everything they know and everything they’re worried about, and to trust that they know what they are doing.

What are the worst cases their experts are worried about and working to prevent? The world has a right to know that, and the world has a right to judge them harshly for not revealing that. More importantly, the world has no choice but to try to figure out on its own what the worst case scenarios might be that officials are either too irresponsible to consider or too cowardly to reveal.

Under such conditions, outside speculation about worst case scenarios justifiably gains traction – especially since the trajectory of the story has been to keep getting worse in the face of official assurances that things were not likely to get worse.

Ideally, officials would have preempted much outside speculation by sharing their worst fears publicly. Our crisis communication advice is always that officials should try to recreate in the public the same level of concern that they themselves are experiencing, and should focus about equally on the two most important questions about the short-term future:

  1. What do you think is most likely to happen?
  2. What’s the worst outcome that you haven’t dismissed as too unlikely to be worth worrying about?

Since officials have avoided answering the second question, speculation has taken off.

So now officials should steer into the skid of speculation, and talk about the very worst things they are worrying about and trying to avert. Their communications should aim for the trust-building goal of getting to a next-day story that says: “Of the bad things the experts were worrying about yesterday, none has happened yet.” Or, more realistically: “Of the bad things the experts were worrying about yesterday, this one happened but these other three haven’t. Those three are still on their watch list, and here’s a new one they have just started to worry about.”

Apart from refusing to discuss credible worst case scenarios, officials’ main risk communication error has been to present justifiably reassuring information in a way that showed disrespect for normal people’s fears. Most people are not freaking out or panicking. They are trying to figure out what to do. They’re not being helped by reassuring information from officials who seem to believe that they are all fools.

Several articles you might want to look at and are free to quote from:

Unempathic over-reassurance re Japan’s nuclear power plants

name:James Donnelly
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Outrage Management index

field:Senior Vice President, Crisis Management, Ketchum
date:March 14, 2011
location:New York, NY

comment:

I thought you’d be interested in reading William Tucker’s piece in today’s Wall Street Journal, where he tries to inject “calm context” on the potential threats of the Japan reactors.

In my opinion, his attempt is wrongheaded. It sounds like he’s minimizing the threat so much that the public won’t buy it. Of course, he should first attempt to connect with the public by saying “…hey this is scary stuff, and we experts are really focused on this too….”

Just thought I’d send it to you for your ongoing study and analysis.

Peter responds:

There are more and more analytically solid but emotionally unempathic pieces like Tucker’s appearing on op-ed pages, explaining oh-so-patiently and just-a-little-patronizingly why what’s happening to Japan’s nuclear power plants is exceedingly unlikely to be a health or environmental catastrophe – though it may very well be an economic catastrophe for TEPCO and for Japan, and a reputational (and therefore economic) catastrophe for the nuclear industry worldwide.

From what I know and what I can find out, I basically agree with Tucker’s conclusion, even though it would be more credible coming from a less predictably one-sided proponent of nuclear power.

It’s always possible that TEPCO is hiding information that would convince experts the situation is less controllable than they now believe. And even if TEPCO is being scrupulously honest, it’s always possible that the situation will deteriorate dangerously. Given how many things have gone wrong already, further failures (in a post-earthquake, post-tsunami reality of malfunctioning systems and countless aftershocks) couldn’t be called unexpected. Very few commentators are claiming that it was foolish to evacuate nearby residents and distribute iodine tablets to the evacuation centers – so obviously a serious radiation release isn’t inconceivable.

Moreover, no one can have followed the news from Japan over the past few days without noticing that the nuclear plant situation keeps looking like it’s about to get better and then gets worse instead. One of the basic principles of crisis communication is to be sufficiently alarmist in your early communications that you’re confident you won’t have to come back later and tell people “it’s worse than we thought.” On the whole, the public communications of TEPCO and Japanese government officials haven’t struck me as over-reassuring in tone or content; certainly they’ve been less reassuring than Tucker’s op-ed. And yet things keep getting worse – “more worse” than we were led to expect.

Nonetheless, the nuclear disaster scenarios I have read sound improbable, and the arguments I have read explaining why they are improbable sound convincing. Although a serious radiation release isn’t inconceivable, most experts think it remains unlikely. It’s a possible catastrophe threatening an area that is still experiencing an actual catastrophe.

So I agree with Tucker that “[w]ith all the death, devastation and disease now threatening tens of thousands in Japan, it is trivializing and almost obscene to spend so much time worrying about damage to a nuclear reactor” … or even a bunch of nuclear reactors. I understand that the nuclear story is breaking news, a real cliffhanger, while the earthquake and tsunami stories are mostly about coming to grips with what has already happened. Even so, the real repercussions of a three-day-old tragedy ought to command more attention than the possible repercussions of an ongoing melodrama. It will be illuminating to look at the Google Trends data in a few days and see whether or not the nuclear story really got disproportionate attention, and if so, for how long.

My wife and colleague Jody Lanard disagrees. She sent me this comment on my draft response:

Disaster porn is disaster porn. People focusing on distant nuclear malfunctions may actually be distracted somewhat from their usual tendency to focus on disaster gore. Another justification for that focus: The nuclear power story applies to “me” more than the people-squashed-under-buildings-with-arms-sticking-out story. Powerful institutions want “me” to accept more nuclear plants in my own country. So “I” am more of a stakeholder in the nuclear story, not just a “wish-I-could-go-help-I-feel-so-bad” imaginary responder (and target of “please donate” ads).

However you feel about the relative importance of these competing stories, there are predictable reasons why people in general and Japanese people in particular are primed to obsess over nuclear risks. And we know a lot about what it takes to help overwrought people put their fears into context. Dismissive, disrespectful articles like Tucker’s aren’t what it takes.

I’m afraid a piece like his may actually widen the gulf between those who are fearful of nuclear power and mistrustful of its proponents and those who are defensive of nuclear power and contemptuous of anybody who isn’t a proponent.

As you say, we need explanations that start from an empathic base: conceding that people’s nuclear fears are not just natural and understandable, but even wise. We need explanations whose aim is to reassure people who are excessively upset, not to over-reassure people who are appropriately upset. Tucker’s aim seems to be mostly to show how foolish it is for anybody to be upset. Even if he is just preaching to the choir, he is setting them a bad example – providing debating points with which to overwhelm the opposition rather than the sorts of more moderate comments that might actually comfort the fearful or convince the doubtful.

And we need to keep in mind that it is hard to know right now, in this evolving crisis, just how worried is worried enough.

Perhaps most nuclear power proponents are too angry (at ignorant media coverage that caters to people’s fears, and at the fearful people themselves) and too frustrated (at the inevitable damage this event is doing to the worldwide prospects for a nuclear power renaissance) to think strategically … far less empathically.

Proponents should be asking themselves and their communications professionals how best to respond to people’s fears in this scary time. I doubt they’re doing that.

Nuclear power proponents have been their own worst enemy for a long time. They can’t help scoffing at the creepy horror a lot of people feel about nuclear anything. They can’t help discounting the past – not just accidents like Chernobyl (a genuine disaster) and Three Mile Island (a near miss), but also the industry’s heritage of baseline opacity punctuated by periodic dishonesty.

Not to mention its heritage of over-optimism. The “Nuclear Power and Earthquakes” page of the World Nuclear Association website was last updated in January 2011. It’s mostly about various nuclear plants, mostly in Japan, that have been safely shut down when earthquakes struck, and then uneventfully reopened a few days later. Here is the entirety of the two-paragraph section on “Tsunamis,” the last section on the page:

Large undersea earthquakes often cause tsunamis – pressure waves which travel very rapidly across oceans and become massive waves over ten metres high when they reach shallow water, then washing well inland. The December 2004 tsunamis following a magnitude 9 earthquake in Indonesia reached the west coast of India and affected the Kalpakkam nuclear power plant near Madras/Chennai. When very abnormal water levels were detected in the cooling water intake, the plant shut down automatically. It was restarted six days later.

Even for a nuclear plant situated very close to sea level, the robust sealed containment structure around the reactor itself would prevent any damage to the nuclear part from a tsunami, though other parts of the plant might be damaged. No radiological hazard would be likely.

The nuclear power industry is among the few industries that still don’t begin to understand the need to treat people’s fears as valid, the need to keep acknowledging that those fears are not baseless. Even with what’s happening right now in Japan, the nuclear industry is still having trouble acknowledging that nuclear fears are not baseless.

Note: My wife and colleague Jody Lanard contributed to this response.

Restoring confidence after the Christchurch earthquakes

name: Erica Seville
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Risk management specialist at Resilient Organisations
date:March 13, 2011
email:erica.seville@canterbury.ac.nz
location:New Zealand

comment:

You may have heard that Christchurch, New Zealand experienced a major earthquake on 22 February 2011, causing multiple building collapses in which as many as 200 people may have been killed.

I am directly involved in the response to this earthquake and would like to ask for your advice. I am a risk management specialist and teach risk assessment at the post-graduate level at the University of Canterbury. I regularly refer my students to your work regarding risk communication.

A major challenge facing us is managing people’s perception of the risks of moving back into multistory buildings after this most recent event.

To give you some background, we had a major earthquake on 4 September 2010. There was extensive damage to older buildings, but no lives lost in September. Processes were put in place following the September event for structural engineers to assess buildings and deem whether they were safe to reoccupy.

The latest earthquake on 22 February had a vastly different character to the 4 September earthquake. While our most recent February quake had a smaller magnitude, the epicenter was much closer to Christchurch city and much shallower than in the September event. The exact numbers are still jumping around a bit, but to give you an indication of the size of this event, the accelerations that buildings experienced within the Christchurch CBD were 50% greater than what new buildings are designed for, and even exceeded the Maximum Credible Event for which we design our buildings to avoid collapse. In other words, buildings in the CBD experienced very severe shaking.

Several buildings that received “green tags” (deemed safe to enter) after the 4 September earthquake subsequently collapsed in the 22 February event, causing a major loss of life. From a building code perspective these collapses were not unexpected (i.e. we would have expected at least some collapses with this exceedance of the Maximum Credible Event). However, this now presents us with a major risk communication challenge.

We also have a task ahead to restore confidence in the engineering profession, confidence that when engineers say something is safe it really is safe (safe in this context being such a relative term).

As the community seeks to reoccupy buildings and reestablish a sense of normalcy, institutions including schools and universities are going through a robust process (more robust than after the 4 September event) for assessing that buildings are safe to reoccupy. Regardless, many staff, students, and parents will be naturally dubious of any assurances given about the safety of buildings they are asked to reoccupy.

While people here are equipped to provide the technical assurances, I believe that we need to focus carefully on managing our risk communication efforts to provide people with the information they require.

Initial ideas for how to communicate effectively to provide the public with information beyond assurances (many taken from your work) include offering “road shows” to explain how individual buildings are designed to cope with earthquake loads, details of how the assessments have been completed, and the reasons why buildings have been deemed safe. We could also be providing fact sheets and FAQs about the assessment process and safety concerns. That focuses on “how” to deliver the message, but perhaps the more important question for us is “what” messages to deliver.

We would greatly appreciate ideas, case studies, or examples of how these challenges have been resolved elsewhere. Any advice you can offer would be very welcome.

Peter responds:

Thank you for writing to me. My heart goes out to you and to everyone in Christchurch right now. I hope you are taking good care of yourself and your loved ones as you try to help the people of your city.

At the same time, I am glad to hear that some of my work has been helpful to you in your Resilient Organisations research program – and what a wonderful name “Resilient Organisations” is!

I doubt anything I can say about the building reoccupation issue will break new ground for you, but I welcome the chance to try to help. Here are some points you might want to consider.

Acknowledge the correctness of people’s fear and skepticism.

Your comment focuses largely on two difficulties: credibly assuring people that buildings in Christchurch are safe to reoccupy, and restoring people’s confidence in the engineering profession’s judgments about building safety.

Paradoxically, I think the key to both tasks is not to focus so much on them.

I would aim for a very different goal: to convince people that you (and the government officials and engineering authorities on whose behalf you are working) accept their reluctance to reoccupy buildings in Christchurch – and even their reluctance to trust the experts’ safety assurances.

It’s not enough to acknowledge that people’s fear and skepticism are natural. That’s certainly a step in the right direction – but I think officials must go a very painful step further: They must acknowledge that people’s fear and skepticism are fundamentally correct with regard to many buildings.

This is a crucial distinction. Telling people that it is “natural” for them to feel the way they feel validates that these feelings are psychologically understandable and widely shared. But this halfway validation is very likely to be experienced as an implicit invalidation – as a claim that however natural people’s feelings may be, they are nonetheless technically unsound, fundamentally mistaken. Telling people that some reaction is “natural” is very often experienced as damning with faint praise. If you doubt this, try turning it around. How would a structural engineer respond to assurances from citizens that it is “natural” that he or she feels capable of assessing the integrity and safety of a building?

Everyone who lives or works in Christchurch faces a series of difficult safety decisions. You can provide information to help guide these decisions, but you cannot preempt the decisions. And your information is likely to be seen as useful only if you have clearly accepted that they are difficult decisions – that reoccupying a high-rise building in Christchurch is decidedly not a no-brainer.

Some people have left Christchurch forever. Some others are planning to leave if and when they can. Many, many others are wondering if they should leave … or maybe I should say they are wondering if they must leave – if they can ever again be safe (not “feel safe” – be safe) in Christchurch. Officials must show that they see these reactions not as overreactions, not even as natural overreactions, but as sensible responses to what has occurred.

It’s okay – better than okay, it’s valuable – to talk about how resilient the people of Christchurch are, to predict that the city and its people will get through this horrific time together. The language of resilience and determination is fine, as long as it is leavened with respect for those who are considering whether to go elsewhere instead.

In contrast to the language of resilience and determination, the language of reassurance – telling people they are mistaken to feel unsafe – is self-defeating.

Apologize for getting it wrong.

Christchurch just went through an earthquake that was much worse – about 50% worse – than the Maximum Credible Event on which the authorities based their building standards. In other words, the people in charge thought what happened on February 22 was so unlikely that they deemed it not “credible” and therefore not worth planning for.

I am a risk communication expert, not an earthquake expert or a building safety expert. I don’t know whether there are things earthquake experts and building safety experts in Christchurch should have known or should have done.

The fact that their maximum credible worst case turned out wrong doesn’t prove that they did anything wrong. I understand that exceedingly unlikely events do happen. As Scott Sagan put it in his book The Limits of Safety, “things that have never happened before happen all the time.” And I understand that it is impractical to plan for exceedingly unlikely events. I can believe that earthquake experts had good reasons to discount the possibility of a big quake in Christchurch, and I can believe that building codes in Christchurch (and New Zealand generally) were admirably conservative and would have had to be insanely over-conservative to protect people from what actually happened on February 22. Not to mention that no city in the world, as far as I know, has retrofitted every one of its older buildings to meet “new building” earthquake codes.

On the other hand, events that are classified as rare sometimes turn out not so rare after all. There are too many communities where “hundred-year floods” happen every ten years or so.

There are really three separate technical questions here:

  1. Should Christchurch have worried more and done more about earthquake preparedness before the September 4, 2010 quake, the one the city got through so well?
  2. What if anything should Christchurch have learned from the September 4 quake that might have led to greater preparedness in time for the far more devastating February 22 quake?
  3. What if anything should Christchurch learn now from the February 22 quake and the subsequent aftershocks? Is the prospect of another severe quake pretty credible now? Should the rebuilding effort meet tougher standards than the standards previously in effect? Or is there reason to think Christchurch is good for centuries of geological calm before the next time? (Not that any normal Cantabrian would be likely to believe such a reason.)

The last of these three questions is obviously crucial to the decisions that must be made – not just communal decisions about whether and how to rebuild Christchurch, but individual decisions about whether and how to stay in Christchurch … and to reoccupy its remaining multistory buildings.

It must be tempting for earthquake experts and building safety experts to keep asserting their innocence with regard to the first two questions. I think that’s a mistake. Even assuming that they did nothing wrong, the risk communication bottom line here is that they got it wrong. I wouldn’t get caught up in a debate over whether they should have been expected to get it right. I’d just keep apologizing, abjectly, for getting it wrong.

I accept that the government and the experts had what seemed like good reasons (based on history, modeling, and science) not to expect anything as severe as February 22. Assuming that is true, it needs to be said with enormous anguish, not with defensiveness. There are two truths here: the experts got it wrong, and nobody could possibly have realized what was going to happen. In the dialogue between experts and public, the experts need to emphasize that they got it wrong, leaving the public to emphasize that nobody could possibly have realized….

Stop saying any building is “safe.”

Here is some of what normal, sturdy people who live or work in Christchurch may be thinking, based on the last half year’s experience:

  • The September 4 earthquake shook us up but we got through it pretty well. We thought that was evidence of good engineering and good preparedness. In hindsight, it looks like it was evidence that the September 4 earthquake just wasn’t a very severe quake.
  • We thought the green tags attached to buildings after the September 4 earthquake meant they were “safe.” But the February 22 earthquake killed lots of people in those buildings. So we need to rethink what a green tag means. Maybe it just means that the building hasn’t been so damaged that it’s going to collapse on its own. Maybe it means that the building will survive some typical aftershocks, but not necessarily another significant quake. Maybe it means the building will survive another quake like September 4, but not necessarily a “Maximum Credible” quake. Maybe it means the building will survive a Maximum Credible quake, but not necessarily another quake as bad as February 22.
  • It’s pretty clear now that the maximum credible earthquake is a worse earthquake than the authorities imagined was credible. Is another quake as bad as February 22 now credible? A worse one? What’s the worst quake that’s credible now? And how might that sort of quake affect a green-tagged building? It’s not clear that the authorities have real answers to these questions. Sometimes they seem to be claiming they have some answers. Sometimes they seem to be evading the questions.

The bottom-line lesson here is that the surviving multistory buildings in Christchurch aren’t “safe.” Structural engineers can presumably identify which of them are so unsafe people won’t be allowed back in. The rest are safe enough that you’re willing to let people use them, even desperate to persuade people to use them. But most of them are probably not as safe as the next-generation buildings that will be designed and constructed to replace those that cannot be repaired. And even the next-generation buildings won’t be unequivocally safe. Nothing is unequivocally safe.

In a 2005 column entitled “Risk Words You Can’t Use,” I wrote:

Saying something is “safe” means that it is risk-free. But nothing is risk-free, so nothing should ever be called safe. Safer than something else? Sure. Or safer than a standard. Or safer than it used to be. Or even “pretty safe” (though not “acceptably safe” – what’s acceptable to me isn’t up to you). But not “safe.” … [C]laiming that something is “safe” allows people to imagine (or pretend) that you’re promising zero risk. Then when the risk turns out greater than zero, they can protest: “But you said it was safe!” “There is no such thing as zero risk,” you belatedly explain. “Now you tell us!”

But now that an earthquake worse than the authorities considered credible has killed nearly 200 people, this isn’t just a theoretical point. Whether to inhabit a high-rise structure in Christchurch is now obviously a judgment call. The job of the city’s engineers is to inform that judgment, not to make it. They need to tell people how safe they believe various buildings will be in the event of a future earthquake of specified severity. They should avoid telling people that any building is simply, unequivocally “safe.”

People know already that nothing is simply, unequivocally safe. The people of Christchurch are especially aware of that fact in the wake of what they have just lived through. But the human hunger for reassurance leads many people to beg to be told that things are safe – that there will be no more earthquakes, no more collapsing buildings, no more violent deaths. People imagine that these sorts of reassurances will make them feel better, so they put out signals to officials asking to be reassured.

But in fact empty reassurances do not make people feel better. Instead, they smell a rat, lose confidence in the officials who are saying over-reassuring things, and are left all the more alone with their fears.

Your goal can’t be to make the people of Christchurch feel safe; they’re not safe – at least they’re not simply, unequivocally safe. Your goal needs to be to help them bear this reality and the anxiety it causes. In order to do that, it helps if officials show that they themselves are bearing this reality and the anxiety it causes. A leader who is able to bear appropriate fear is a far more useful role model than a leader who keeps issuing empty claims that everything is safe and there is nothing to fear.

Make use of the risk communication seesaw.

The risk communication seesaw will doubtless be at work here. That is, if people feel pressured to reoccupy one of Christchurch’s surviving multistory buildings, they are likely to resist the pressure. But if the authorities are respectful of people’s reluctance to come back – affirming not just that their reluctance is “natural” but that it is sensible and wise – people will overcome that reluctance more quickly.

Not everyone, of course. Some people will get stuck in a post-earthquake “adjustment reaction” and may need clinical help before they are once again able to enter a multistory structure. Others will decide on the merits that they prefer to avoid spending time in old and possibly somewhat damaged buildings in a city that seems to be vulnerable to more severe earthquakes than previously thought likely.

But assuming no more disastrous quakes, most people will eventually come reluctantly to terms with the risk. They’ll do that even if they’re pressured to come back. But they’ll do it more quickly if they’re not pressured.

Every effort should be made, therefore, to avoid forcing people to choose now between reoccupying buildings they consider potentially dangerous and severing their ties with Christchurch. Help them find alternative workplaces or living spaces they consider safer. Let them spend a few months in a different office or on a different campus. And wait for them to get past their adjustment reactions and become willing to contemplate coming back.

Far from pressuring people to come back before they’re ready, officials should validate that even some very sturdy people may take quite a while to feel confident enough in their guts, as well as confident enough in official assessments, to go into Christchurch’s remaining multistory buildings. For those who are nowhere near ready to come back, such comments will be experienced as empathic. For those who are nearly ready to come back, they may well trigger a seesaw response: “I’m sturdier than you think! I’m going to reoccupy that building now!”

A few years ago I worked with a U.S. government agency many of whose employees were experiencing health symptoms they attributed to indoor air quality problems in a particular high-rise office building. Parts of the building were evacuated while the agency tried to effect some repairs. When the workplace was reopened, hundreds of employees expressed strong resistance to reentering the building. As long as the government was insisting that the building’s air quality was fine now, the resistance kept increasing. But when the government promised that anyone who wished to do so could move to a different building, most people opted to come back to the old building instead. They preferred the old neighborhood; they didn’t want to disrupt their carpool and lunchtime arrangements; they would miss coworkers who were staying. These factors were far less salient than the health and safety issues – until the agency got on the other side of the seesaw and insisted that no one should come back to the old building who felt that doing so constituted an unacceptable risk. With the agency firmly on the health-and-safety side of the seesaw, far more employees found themselves migrating to the lifestyle-and-convenience side.

Much the same thing happened in the weeks and months after the September 11, 2001 attack on the World Trade Center in New York City. Hundreds of thousands of people lived or worked within a few city blocks of the Twin Towers. Some nearby buildings were too damaged to reoccupy. But many were lightly damaged or completely undamaged, though potentially contaminated with dust from the Twin Towers’ collapse. So there were at least four issues to be addressed:

  • Was my building damaged to the point where it might not be structurally sound?
  • Was my building contaminated with dust from the Twin Towers that might threaten my health?
  • Was my building contaminated with “corpse dust” from the thousands who perished when the Twin Towers collapsed?
  • Is my building likely to fall prey to another terrorist attack on lower Manhattan?

According to anecdotal evidence, employers and landlords found that their employees and tenants came back more quickly when these concerns were taken seriously than when they were ignored or pooh-poohed.

Yes, winter is coming. People need shelter; they need to reoccupy the safer buildings in Christchurch. It must be incredibly difficult for officials to resist pushing too hard (and the over-reassurance that naturally accompanies pushing too hard). But pushing too hard will backfire. Instead, officials must find ways to keep people warm and reasonably safe from non-quake dangers while they get through this phase of their adjustment reaction and slowly come to terms with the horrendous new reality that their world can fall down around them in an instant.

Get clear on what the green tags mean.

If you’re going to continue to use green tags at all, you need to get much clearer about what they mean, and that includes taking into account the way people felt about green-tagged buildings after the disastrous February 22 quake.

I understand the point you make in your comment: The green tags used after the September 4 quake meant only that the tagged buildings hadn’t been rendered uninhabitable – not that they had somehow been made more earthquake-proof than they’d been to start with. To people who understood that, it came as no surprise that some green-tagged buildings – and some people who were in them – didn’t survive the February 22 quake. The February 22 quake itself was a surprise, but the failure of some green-tagged buildings to withstand it was not.

I get that. And the people of Christchurch are capable of getting it too – but not if you seem excessively intent on explaining it. Consider the difference between these two sentences:

A.   “It’s awful that we gave some people the impression that green-tagged buildings would be safe even in a disastrous earthquake.”

B.   “We never claimed that green-tagged buildings would be safe even in a disastrous earthquake.”

If you keep saying B, people will keep thinking A. If you start saying A, people will start thinking B. That’s a seesaw too.

Miscommunications are always mostly the source’s fault. If some people got the misimpression that a green tag meant a building was guaranteed to withstand even an off-the-charts earthquake, you need to take responsibility for having given them that misimpression – even if you think you were pretty clear and they shouldn’t have misunderstood. And if you weren’t all that clear, if the meaning of a green tag really was explained overconfidently in an effort to reassure people quickly, I’d admit that and apologize for it.

Remember to emphasize that “we gave” that misimpression, not that “people got” that misimpression. You will probably be under official pressure not to say “we gave.” Resist that pressure; fight it – hard.

Going forward, I’d think twice before using color-coded tags again. But if you take responsibility for the prior misunderstanding, you could perhaps turn that misunderstanding into a good teachable moment to explain what a green tag means – and what it doesn’t mean. The time to tell people you’re not guaranteeing that a building will withstand another earthquake isn’t after another earthquake in which that building collapsed.

The main focus of my comments so far has been on the importance of not over-reassuring people. You can’t assure people that any building will be safe if there’s another earthquake, and you can’t assure people that there won’t be another earthquake. At most, you can assure people that particular buildings have been carefully assessed for earthquake damage and found sound – as safe (or nearly as safe) as they were before the two quakes, but obviously no safer.

Instead of a green tag, I’d like to see a sign something like this:

This building survived the two recent earthquakes with minimal structural damage. In ordinary terms, it is a safe building – as safe as it was before the quakes. It meets or exceeds all building code standards, including the earthquake standards. However, this does not mean that the building will necessarily withstand another earthquake as severe as the one that occurred on February 22. That quake was significantly more severe than the standards were designed to withstand. The fact that a building made it through one severe quake is not a guarantee that it would make it through another as severe … if we ever have another as severe.

Sound unduly pessimistic? Good. That’s the best way to sound. Aim to inspire letters-to-the-editor and blogs complaining that you’re far too worried about the remote possibility of another severe earthquake in Christchurch. That’s the seat on the seesaw you want to occupy.

And yes, that long message is not as crisp as a green tag. Perhaps you can put it in fairly large print on a green sign. But the words are much more important than the color. Right now, making your building safety message easy to understand isn’t as important as making it virtually impossible to misunderstand.

Be empathic even about fears that aren’t technically justified.

As I have emphasized above, fears that are technically justified (such as the fear that another earthquake might collapse more buildings) need to be validated in exactly those terms – as correct, not just as natural or understandable.

But even fears that are technically unjustified (such as the fear that a green-tagged building might collapse all by itself, without any earthquake) need to be validated as natural and understandable.

Try to address technically unjustified fears with a ratio of nine parts empathic validation for every one part data-based reassurance. (This 9:1 ratio is just a rule of thumb, not an empirical finding.)

Like much of what I have written already, this may very well be advice that you and Christchurch officials don’t need to hear. In fact, I have seen some excellent examples of empathic support for people experiencing post-quake stress reactions. After the September 4 quake, the New Zealand Ministry of Health put together a series of fact sheets on how people respond to the stress of emergency conditions and how they can help themselves and each other feel better. In a series of ads distributed after the February 22 quake, local celebrities reassure people that their stress reactions are normal … a far cry from “reassuring” them that there’s nothing to feel stressed about!

Shortly after the September 11 terrorist attacks of 2001, the U.S. experienced a bioterrorist attack via a series of anthrax-laced letters. Although the odds against any individual encountering such a letter were astronomical, many Americans found themselves fearful of touching their mail. The problem, then, was how to provide reassuring information in a way that respected and empathized with people’s natural and understandable fears.

Here’s what I wrote about that problem in my column on “Anthrax, Bioterrorism, and Risk Communication.” It may be a model for trying to give people reassuring information – assuming the reassurance is technically sound – about entering buildings they fear might collapse on them.

Timothy Paustian at the University of Wisconsin has a page on his website on anthrax. [Note: This page is no longer online.] My wife, Dr. Jody Lanard, happened on the site in early November 2001, and sent Dr. Paustian some unsolicited risk communication advice. He changed the site. Here is one before-and-after comparison. (The breezy tone isn’t an addition; the original had the same tone.)

Before:
However, it will be very unlikely that you will receive one of these letters. Think about how many pieces of mail go out and how many people there are. Your chances are very low.

After:
You know it’s unlikely that you will receive one of these letters, but you’re still scared. You know how many pieces of mail go out, and how many people there are, but you can’t completely shake that inner worry. You know your chances are very low, but you find yourself reaching cautiously for the envelope, and you feel … just a little nuts. Welcome to the human race.

Part of being empathic about people’s fears is to guide them through a variety of emotional rehearsal and desensitization exercises. Instead of urging people to “get over it” and move back in totally, invite people to attempt more modest progress:

You might want to try imagining going into a tall building for the first time since the February 22 quake. Don’t actually do it, just imagine it. Imagine going in just for five or ten minutes – not for the whole day, just a few minutes to see how it feels. Then imagine staying longer – don’t actually do it, not yet, just imagine it.

Now try something tougher: Imagine being in a tall building during an aftershock – or even when there’s a slight vibration because a big truck is rolling past. Very scary at first, probably! Imagine feeling the fear. Imagine that some of the people around you are also afraid, while others are calm (or pretending to be calm) and might even tease you a little. And imagine getting through the aftershock; imagine noticing that the building stayed solid.

Somewhere down the road, it will be time for you to invite people to try actually entering a multistory building for a few minutes. Or they may surprise you and conduct their own real-world desensitization exercises without your invitation. But that’s down the road. Start by asking people to imagine it.

It goes without saying that nobody should ridicule anybody else for being afraid. Scoffing is the opposite of empathy! If any scoffing does occur, consider showing some strategic anger at the scoffers.

In a nutshell: Give people permission to take a long time to “put their foot in the door.” Don’t demand that they skip their adjustment reaction. Help them get through it.

Think about developing accountability mechanisms.

Even if you carefully confine yourself to assurances you’re entitled to offer – that a building isn’t structurally damaged and isn’t going to fall down on its own or in a minor aftershock – you may still encounter some skepticism.

After all, the assurances are coming from some of the same city departments and technical experts that thought a really severe earthquake wasn’t a credible possibility. And the assurances are coming from organizations dedicated to rebuilding the city and reestablishing normalcy – organizations that have an incentive to accentuate the positive.

For maximum credibility, therefore, think about partnering with people and organizations that don’t have that baggage.

Is there an activist who has been shouting from the rooftops that Christchurch officials and experts should have known a severe earthquake was a distinct possibility? Put her on an oversight team to help you assess which buildings can safely be reoccupied. Is there a maverick engineer who thinks the city’s building codes should have been much tougher? Put him on the team too. And how about that cranky columnist or blogger who thinks the whole crisis has been badly handled?

You don’t want to ground your accountability mechanisms in the judgments of crazy people, obviously, or of people so hostile they’ll claim a building is unsafe even if they know it isn’t. You’re looking for people who have sound technical credentials and a strong sense of integrity – but who are your critics, not your allies, and whose endorsement therefore carries a lot of weight with the skeptics.

Give it time.

Even if you do superb risk communication, it is going to take time for many people in Christchurch to feel okay about spending their days or nights in one of the city’s surviving multistory buildings. If you rush them or over-reassure them, it will take longer.

Give it time. And let me know if there’s any way I can help further.

Erica answers:

Thank you so much for taking the time to write us such an insightful reply. Your advice is already being factored into the response of several organizations that I work closely with, and I have also forwarded your email far and wide to people active in the Christchurch response and recovery effort.

With the awful events in Japan the last few days, I am sure your advice will now be read with keen interest much further afield than New Zealand. I will keep in touch and let you know how things are going. We have a long road ahead of us in this recovery effort, but the resilience of the community is already emerging.

Will the shale gas industry try risk communication?

name:Pam Walaski
This guestbook entry
is categorized as:

      link to Outrage Management index

field:SH&E consultant
date:March 8, 2011
email:pam@jcsafety.com
location:Pennsylvania, U.S.

comment:

I wanted to share with you an article that appeared recently in my home city of Pittsburgh.

We are in the midst of a royal battle regarding the Marcellus Shale play and the drilling efforts that have been occurring for the past few years. There are definitely two sides to the issue – those who think drilling is the economic savior of our region and those who think it will destroy life as we know it through environmental damage.

I’m trying to be an interested observer of the process, watching the risk communications (and sometimes crisis communications) of both sides, with an eye to a case study/article.

The linked article emphasizes a local coalition’s efforts to keep the community informed when there are incidents. The idea that all mistakes should be immediately acknowledged and corrected stands out, and this article is going into my file for future reference.

Peter responds:

First a quick tutorial for people unfamiliar with shale gas.

The map below shows how much of the United States – including most of Pennsylvania – has harvestable natural gas deposits bound up in underground shale. Some shales are naturally fractured, making it easy to get to the gas; those deposits have been harvested for decades. What’s comparatively new is a technique called hydraulic fracturing (“fracking”) – using fluid under high pressure to crack the shale and liberate the gas. Innovations in fracking have opened up huge quantities of gas for commercial production in just the past few years.

(Graphic source: EnergyindustryPhotos.com)

The good news: lots of new local jobs, substantial royalties for lucky property owners, and a new tax base for state governments; a new domestic energy source that can hold down energy prices and reduce U.S. dependence on foreign sources; a plentiful energy supply that from some perspectives (not all) is cleaner than oil and much cleaner than coal.

The bad news: the risk that fracking fluid or the gas itself can contaminate local well water, groundwater, and surface water; the risk of gas explosions and other accidents that can injure workers and neighbors and lead to air contamination; the need to pollute large quantities of water with various chemicals to make fracking fluid, and the need to put the fracking fluid someplace when its job is done; the proliferation of thousands of drilling sites in people’s backyards, pristine wilderness areas, and other places where they may not be wanted.

I’m neither an energy policy expert nor an environmental expert, but as near as I can tell the potential benefits of shale gas are huge and the risks are considerable – maybe not huge, but far from negligible.

The goal should be careful, cautious, candid, and highly regulated exploitation of the resource.

We may manage to fumble our way toward that goal, but we’re not off to a particularly good start. Like other new industries with huge potential benefits and considerable risks – think biotech – the shale gas industry has too often overstated the benefits, understated the risks, lobbied hard against regulation, and built a reputation for carelessness, callousness, and deceitfulness.

(Opponents of shale gas development haven’t always been scrupulous communicators either. The much-praised documentary film “Gasland” is superb propaganda but hardly a reliable guide to shale gas realities and controversies.)

In this context, the Pittsburgh Post-Gazette article you link to is very encouraging. The Marcellus Shale Coalition counts among its members nearly all the major shale gas companies with operations in Pennsylvania. In late 2010 the Coalition brought in former Governor Tom Ridge as a “strategic advisor,” and Ridge has been crisscrossing the state talking about the Coalition’s commitment to transparency.

Of course it’s easy to voice a theoretical commitment to transparency. What I like about this article is that it focuses on a specific local accident at a MarkWest Energy Partners compressor station. The simple fact that the Coalition saw the accident as an opportunity to communicate is a refreshing change. Instead of going mum, Tom Ridge came to town to talk about the accident.

The article includes ample evidence that the Coalition is serious about transparency … and that the companies it represents have a long way to go:

Mr. Ridge told the group – which included local state representatives, township supervisors and aides to congressional representatives – that the coalition hopes that even “if there are missteps, acknowledge it, correct it.”

The coalition believes MarkWest handled the incident well, acknowledging and detailing to the public quickly what they believe happened.

And Thursday, the company’s chief operations officer, John Mollenkopf, told the roundtable the company “will figure out what it was about that design” of a small heater that caused the fire “and we will fix that and it’s not going to happen again.”

But Jim Penna, district director for U.S. Rep. Mark Critz, whose district, which stretches from Johnstown to Washington, Pa., includes much of the Marcellus Shale region, said that kind of openness is not what constituents have found.

Mr. Ridge acknowledged that there is too often an information gap between what the industry knows and what the public has been informed about that “and we have to fill that gap.”

I love the ambiguity of that final quote from Tom Ridge. Does he mean the industry needs to do a better job of telling people how safe shale gas development is? Or does he mean the industry needs to do a better job of telling people about its problems and missteps?

A look at the Coalition’s website isn’t encouraging on that score. Despite its admirable “Commitment to the Community” Guiding Principles, the website itself is predictably one-sided.

Everyone I talk to in the shale gas industry seems to agree that the industry needs to do better risk communication. Some of them seem to think that means what you and I think it means: acknowledging prior misbehaviors and current problems, sharing control, becoming more accountable to critics, etc. Others obviously think it means doing a better job of explaining the industry’s strengths and rebutting opponents’ claims.

This question splits not just the industry at large, but individual companies as well. I have done seminars and consultations for a number of shale gas companies. More often than not, my recommendations divided the room, and the room was still divided when I packed up and went home. (In fairness, this reaction isn’t confined to the shale gas industry. My recommendations often divide the room, whether I’m talking to a gas company or a public health agency.)

Industry associations like the Marcellus Shale Coalition will play a key role in determining which way the industry goes. As a rule, industry associations aren’t leaders. Whether the innovation in question is technological or social – a better way to manage fracking fluid or a better way to address stakeholder outrage – individual companies usually lead the way. Then, once it’s pretty clear that the innovation is a good idea, trade associations help the laggards catch up.

But there are exceptions. In the late 1980s, for example, the Chemical Manufacturers Association (now the American Chemistry Council) genuinely led the way in the U.S. chemical industry’s adoption of the Responsible Care® program. It would be wonderful to see the Marcellus Shale Coalition leading the way in the shale gas industry’s move toward transparency and responsiveness.

One reason why trade association leadership may be needed: The shale gas industry is in danger of a race to the bottom that could undermine its credibility (and perhaps even its viability) for a generation or more.

In the long term, companies that address stakeholder concerns honestly and effectively will earn a competitive advantage. But in the short term, companies that overpromise, cut corners, and hide their problems have an edge. And shale gas companies are in a short-term competition right now to nail down the most promising leases. Even a company with a long-term vision that wants to set high standards for itself may succumb to this competition.

Stringent regulation could keep the bad actors from ruining it for everybody. But in Pennsylvania and many other states, shale gas regulators are playing catch-up, and the bad actors have more leeway than is good for the community, the environment, or the industry itself. And activist groups, more often than not, are focused on stopping shale gas development, not policing it. Can trade associations like the Marcellus Shale Coalition fill the void? It’s a long shot, but it’s worth a try.

Using cognitive dissonance to get apathetic people moving

name:Sandra Cohen
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

field:Research scientist at NJDEP
date:March 5, 2011
email:sandra.cohen@dep.state.nj.us
location:New Jersey, U.S.

comment:

I was one of your students in 1985 as part of the Cook College Certificate Program in Social Strategies of Environmental Protection and I have since credited you (and Dr. Bill Goldfarb) as major influences throughout my 25-year career in environmental protection.

While I initially had qualms about your “Persuasion Theory” (I thought it too Machiavellian), I eventually came to understand its brilliance and have successfully applied it in many areas of my work, including stormwater regulation, air quality education and outreach, and watershed management.

I have recommended this approach to many of my colleagues in their efforts to communicate with the public about environmental issues such as sustainability, climate change, greenhouse gas reduction, and other new and legacy environmental issues. I have also tried, with limited success, to use this concept to influence messaging over union-management relations and other public interest issues.

While it still seems counterintuitive to my scientist colleagues, information alone does not change behavior. This has been proven over and over again in the history of environmental protection and is currently playing out in the political realm.

Peter responds:

The two-step approach to behavior change that I taught you in 1985 is one I still teach. And it still arouses the same ethical reservations you felt then. I’m glad you have found ways that you think are both effective and ethical to use this approach in your environmental protection work – but I wouldn’t urge anyone to use it in a way that he or she considered unethical.

The approach is grounded in Leon Festinger’s Theory of Cognitive Dissonance. As you point out, it is usually pretty ineffective to give people information that conflicts with their current views and actions, in the hope that the information will change their attitudes and the new attitudes will lead to new behaviors. We all tend to resist learning that we’re wrong.

The dissonance-based approach starts by persuading people to do something they haven’t previously done, not by using information but by appealing to their existing needs – for example, the need to be like some much-admired celebrity who endorses the recommended behavior. Once they have taken this action, people tend to feel uncomfortable, since they have no adequate cognitive rationale for what they did. This discomfort (cognitive dissonance) leads them to start looking for information to justify the new behavior.

Now your information is no longer seen as challenging people’s old behavior; instead, it helps them feel better about their new behavior. The information is therefore far likelier to be accepted, and to become the basis for new attitudes compatible with the new behavior. These new attitudes make the behavior independent of the initial need-based motivation; they also make it generalizable to other behaviors grounded in the same attitudes.

Here it is in a flowchart:

  1. Trigger an existing need to get initial behavior change.
  2. Unsubstantiated new behavior arouses dissonance, which motivates the search for substantiating information.
  3. Information reduces the dissonance, leading to supportive attitudes and stable, generalizable behavior.
  4. Without information, the new behavior stops, unless the need is repeatedly triggered.

In a nutshell: We are not a rational species, but we’re very good rationalizers. So we often need a silly or irrelevant reason to motivate us to do something new (reduce our greenhouse gas emissions, say), followed by information to show us why what we did (albeit for silly reasons) was actually pretty smart.

For more on this approach, see Brian Day’s 2000 book chapter on “Media Campaigns” (a PDF file). (Brian was my student a decade before you were, in the 1970s! He is now Executive Director of the North American Association for Environmental Education.) You might also want to look at the “Cognitive Dissonance” section of my 2009 column on “Climate Change Risk Communication: The Problem of Psychological Denial.”

How not to play into the hands of extremists

name:Donald Cho
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Pharmacy cashier
date:February 20, 2011
location:Canada

comment:

I read your article on “Managing the Outrage of Extremists” and it was amazing.

Can you do me a favor by answering this question? What social, political or other factors could potentially help animal or environmental rights extremists have greater success in achieving their political goals?

You can post your answer on your website – but should it be one that can inspire animal or environmental rights extremists, then I prefer you don’t. I am just curious about what is happening and their intentions.

Peter responds:

I deduce from your last paragraph that your interest isn’t in how to help animal rights and environmental extremists succeed; you want to know how to avoid helping them succeed. Which isn’t surprising. As I mention in the 2009 Guestbook entry you’re writing about, extremism is in the eye of the beholder. Though U.S. presidential candidate Barry Goldwater famously intoned in 1964 that “extremism in the defense of liberty is no vice,” it’s virtually never a term we apply to our own views and actions.

Goldwater’s protest notwithstanding, the word “extremist” is intrinsically pejorative. It means someone who goes further in some direction than the speaker considers appropriate. And how far someone can go before getting labeled an extremist depends on what the speaker thinks about the direction itself. If I approve of environmental activism overall, I’ll reserve the label “extremist” for people who are way, way out there; if you disapprove of environmental activism overall, people who look pretty moderate to me may well look awfully extremist to you.

Thus there are two social science literatures on what factors hinder the success of extremists. One is all about how to stop social movements, period – how to defend the status quo. The other is about how to ally with more moderate change agents in ways that disempower their radical wing. I think everything starts there. Whether we’re talking about the democracy movement in Egypt or the animal rights movement in North America – or legions of other causes, from anti-racism to anti-vaccination – the core question is whether you’re a moderate sympathizer trying to stop your movement from going the extremist route or an establishment strategist trying to stop somebody else’s movement from getting anywhere at all.

The risk communication principle that connects these two perspectives is this: By far the best way to disempower extremists is to empower moderates.

I made this point, too, in the 2009 Guestbook entry. Though it was entitled “Managing the outrage of extremists,” that entry was really about managing the outrage of their more moderate followers. Building on my distinction between stakeholders who are “fanatics” (extremists) and those who are “attentives” (moderates), I argued:

Trying to win them [fanatics/extremists] over is a waste of time, and even trying to negotiate a mutually acceptable compromise is a long shot. Trying to persuade the attentives that you’re right and the extremists are wrong (and extremists!) is also very difficult. In fact, it can easily backfire. Your best course is to convince the attentives that you respect the extremists and their views, that they have already successfully forced you to make some important concessions, and that it’s time to move on.

When interacting with people you consider extremists, it’s wisest to think of the interaction as a kind of theater. Your goal is to dramatize your responsiveness to what the extremist fanatics are telling you, so the moderate attentives don’t feel the need to get extreme themselves. The fanatics almost certainly won’t think you’re being responsive enough, but that’s not a problem. As long as you sound pretty responsive to the attentives, you’re wooing them away from the fanatics, and thus undermining the extremist cause.

To keep change from getting too extreme, in short, you need to convince moderate change activists that you have been and continue to be suitably responsive.

Note that the best way to do that isn’t by responding to the moderates while leaving the extremists out in the cold. My clients are endlessly tempted to try that. “We’ll deal with you,” my clients want to say to their moderate, respectful critics, “but we’ll have nothing to do with those crazies over there.” More often than not, this approach backfires. It makes the extremists look less crazy and more powerful to the moderates. Far from splintering your critics, it tends to unite them behind the extremist agenda.

Instead of leaving the extremists out in the cold, you need to offer them a seat at the bargaining table … making sure the moderates are watching as you do so. If the extremists accept your offer, they’re starting down the path to compromise; they risk alienating their radical wing and turning into moderates themselves. If they reject your offer, they’re demonstrating their extremism, and the moderates can’t fail to take notice that you’re really trying to accommodate the extremists’ concerns and they’re just being intransigent and unsatisfiable.

Of course you have to be serious about your willingness to compromise with the extremists. Your half-a-loaf offer can’t be a bluff. Assuming the extremists reject your offer and insist on staying out in the cold, sooner or later the moderates will take their place at the table. Your offer to negotiate with the extremists isn’t just theater. You need to be okay about ending up in serious negotiations with the moderates.

What should you do if you’re trying to defeat the extremists’ goals altogether? They’re not just extremists, in your view, they’re extremists on behalf of a bad cause. I don’t know. I guess maybe you stand tough; you refuse to negotiate; you call out the army; you try to convince the rest of the world to support you in your intransigence. But if that’s what’s going on, don’t pretend to yourself that your problem is the other side’s extremists. Your problem is the other side’s movement, moderates and extremists alike.

If your problem really is the other side’s extremists, the optimum strategy is to keep giving those extremists chances to act moderate, and let the moderates watch while the extremists choose between getting coopted (as they’re likely to see one choice) and getting marginalized (as the moderates are likely to see the other choice).

I think this advice is common sense. But there is an opposing “common sense” that asserts you should never offer to compromise with extremists. This opposing view is nicely captured in the proverb, “give them an inch and they’ll take a mile.” I’m simply not worried that animal rights or environmental extremists are going to take a mile. They would if they could, no doubt. But offering them an inch doesn’t empower them to take a mile. Instead, it forces them to make a tough choice: Accept the inch and bargain for a few more inches. Or reject the inch because it’s not a mile. My recommended strategy is to keep offering the extremists that choice, making sure the moderates are there to watch them choose.

The worst mistake you can make in dealing with extremists is to ignore them.

For one thing, being ignored (or thinking they’re being ignored) arouses extremists’ outrage, and inspires them to escalate. During the U.S. war in Vietnam, for example, President Johnson was paying assiduous attention to the domestic antiwar movement. But he carefully pretended he wasn’t. He succeeded in convincing the movement (I was on its fringes) that he wasn’t listening … so the movement escalated. Also, ignoring the extremists (or pretending to ignore them) tends to convince the moderates that the extremists are right, that they need and deserve moderate support. When powerful institutions look like they’re ignoring their extremist critics, it’s a great recruiting tool for the extremists.

My clients do this all the time – pretend unresponsiveness. They claim they do it in order not to empower the extremists, but I think ignoring the extremists – while the moderates are watching – is what empowers them most. I think my clients’ ego and my clients’ own outrage are the main reasons they refuse to deal with extremists.

Secret responsiveness is always stupid. Consider the foolishness of a company that assiduously monitors its critics’ websites, blogs, and tweets, but always through an anonymizer so the critics can’t tell they’re being monitored. Several of my corporate clients are practically obsessed with their extremist critics. They read everything their critics write, and sometimes they even make changes in company policy because of something a critic said. But they consider it crucial to pretend they don’t even know those critics exist. They never respond – not on their critics’ virtual turf and not on their own websites either. So the extremists think they’re not getting heard and escalate their rhetoric. And third parties, from moderate critics to undecideds to merely interested observers, get the distinct impression that the company is arrogant, impervious, and completely unresponsive.

Is there ever a time when it makes sense to ignore extremists? I can think of two such times. If they’re a really, really tiny group, and guaranteed to remain so, you can afford to ignore them. Why give them the status of your attention if they’re nobodies and no one else is paying them any attention? And if they’re really, really marginalized and the moderates/attentives have already written them off, you can afford to ignore them. Why rehabilitate them after they’ve overplayed their hand so badly? But these exceptions are comparatively rare. And you need to be careful not to let wishful thinking lead you to a misdiagnosis.

Generally, extremists deserve your respectful attention – your very public respectful attention for all the moderates to see.

Going to war with extremists is nearly as self-defeating as ignoring them. (I’m talking metaphorically here. If the extremists are throwing Molotov cocktails and you’re launching tear gas canisters, I’m out of my depth.) In a battle of extremes, the alarming extreme, the outrage-provoking extreme, has a natural advantage. So if you’re on the reassuring, establishment side of a controversy, you need to stake out the middle. That means putting a higher priority on demonstrating your responsiveness to the extremists’ valid claims than on rebutting their invalid ones.

What’s really at work here is an ecosystem. The extremists’ niche in the ecosystem is to stay pure (that is, extremist) and refuse to compromise. The moderates’ niche is to make the deal for half a loaf. Thoughtful moderates know that the existence of extremists is essential to moderate clout. Establishments compromise with moderates in large measure because extremists loom in the background.

In U.S. labor history, the extremist Industrial Workers of the World (the “Wobblies”) made more moderate union organizing efforts look legitimate by contrast. In U.S. civil rights history, Stokely Carmichael’s Student Nonviolent Coordinating Committee (and later the Black Panthers) helped legitimize Martin Luther King’s Southern Christian Leadership Conference. Bra-burners legitimized moderate feminists; Queer Nation and Act Up legitimized moderate gay activists.

It is the nature of extremists not to understand that their main contribution is to inspire, legitimize, and empower moderates. The moderates end up cutting the deal and getting the credit. The extremists end up feeling like they failed. But even if the moderates don’t publicly acknowledge their debt to the extremists, they should realize the extent to which moderate clout piggybacks on extremist threat. And the corporate and government establishments that end up reforming their policies in response to moderate critics shouldn’t forget that they were willing to listen to the moderates largely because extremists loomed in the background.

Humility: why senior executives have trouble addressing their misbehaviors

name: Philip Connolly
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Corporate communications
date:February 6, 2011
email:philip.connolly@ntlworld.com
location:United Kingdom

comment:

I enjoyed the “Two Kinds of Reputation Management” article a lot. I have nightmares about a reputation management conference I went to in Amsterdam many, many years ago. I think I even met the newly sainted Charles Fombrun there. What was awful about the delegates from third-rate business schools with their “models” was the complete lack of any meaning for the world of work.

Your article was thoughtful and I can see how it might shape corporate communications programs.

After journalism I cut my corp comms teeth in the oil and pharmaceuticals industries. It has been a source of continuous surprise (I’m clearly not a quick learner) to see how senior folk just cannot accept that their company may not be liked and there are (legitimate) points of view that differ from ours. So getting them to address the negative reputation is always going to be a challenge – they just want to be loved.

Anyway I will gird my loins as we merge and hope that the new company will be different.

Peter responds:

Thanks for your kind words. “Two Kinds of Reputation Management” seems to be striking a responsive chord in people who have known (or sensed) all along that there’s something wrong with the idea that companies can make up for their misbehaviors with good works, without the painful need to address the misbehaviors themselves.

Of course doing something good when people are angry at you is better than doing nothing at all. In an ongoing relationship that’s generally positive, flowers, candy, and breakfast in bed are time-tested ways to say you’re sorry. But they are no substitute for talking through what made the other person angry in the first place.

Similarly, corporate philanthropy isn’t a very effective response to stakeholder disapproval, much less to stakeholder outrage. Purely in business terms, a philanthropy dollar buys a lot less outrage reduction and reputation recovery than a reparations dollar.

What I’m wondering about now is why so many corporate and government leaders don’t get this – why, as you put it, “senior folk” tend to ignore reputational problems and “just want to be loved.” I didn’t say much about this question in the column. I settled for a pretty facile, one-sentence answer: “It feels a lot better – safer, more comfortable, easier on the ego – to burnish your reputational strengths than to try to remedy your reputational weaknesses.”

But I think it’s got to go deeper than that. If you and I are right that mitigating negative reputation is a lot more profitable than polishing positive reputation, then why do smart, profit-driven top managers keep focusing on the latter instead of the former? We all want to be loved. We would all rather spend time with people who approve of us than with people who disapprove. We would all rather “make up” for our misbehaviors with irrelevant good works than own up to them in painful confrontations. Even so, most of us bite the bullet when we have to.

Do senior executives find this harder than most people? It seems to me that they do. If so, why?

The answer must have something to do with humility. Senior executives have succeeded big-time. Perhaps they unconsciously interpret their success as evidence of their worthiness, their overall excellence, even their infallibility. Perhaps they lose the sense most of us have that we screw up a lot. So when a company’s stakeholders are angry because the company has screwed something up, that company’s senior executives may imagine that it simply can’t be so, that the stakeholders’ grievances have got to be misguided and unfair.

We all feel defensive when we’re under attack, but I suspect senior executives are likelier than most to feel offended when they’re under attack. “How dare you attack me?” Other people’s outrage, I suspect, gets senior executives outraged – and their outrage makes it harder for them to tackle the problem.

Of course when other people’s outrage really is unjustified, then it’s pretty normal to feel outraged right back at the unfair attack. But people with a healthy level of humility force themselves to consider the possibility that the attack might not be completely baseless after all, that the other person could have a point, and that trying to see the situation from that other person’s perspective is important.

Outrage management is grounded in the principle that even when stakeholders’ technical concerns are completely mistaken, there are always things your organization has done that provoked your stakeholders’ outrage – if not technical things, then “relationship things” like being arrogant and unresponsive. And outrage management is grounded in the principle that identifying, acknowledging, and remedying the things your stakeholders are right about is more useful than rebutting the things your stakeholders are wrong about.

Senior execs may find that harder than the rest of us.

This is all just a hypothesis. I haven’t seen any research showing that senior executives are more likely than lower-ranking people to ignore negative reputation, much less that the reason they do so (if they do) is a shortage of humility. But it feels right to me. And it’s consistent with what I have sometimes called the triangle of greed, outrage, and ego. (Although I have written about the greed/outrage/ego triangle from time to time, the discussion I like best is in my video entitled “Sixth Outrage Management Strategy: Get the Underlying Issues into the Room.”)

Here’s the hypothesis in a nutshell. An executive who was appropriately “greedy” – that is, focused on what’s good for the company – would want to address the company’s negative reputation. He or she would understand that being less hated was more central to the company’s goals than being more loved. But people at the top of the corporate heap are likelier to have inflated egos, which makes them likelier to get outraged at their stakeholders’ outrage, which makes it harder for them to stay focused on their own greed. So they tend to fight back at their critics instead of trying to ameliorate their critics’ outrage. And they keep trying to mend the reputational damage with irrelevant good works.

What implications might this have for a corporate communications manager (like you) or a risk communication consultant (like me)? We need to take seriously the possibility that senior executives may be too entangled in their own ego and outrage to think straight about the best way to manage the company’s reputation. Instead of just telling them again and again how important it is to address critics’ concerns, we may need to look for ways we can address senior executives’ ego and outrage needs, in order to help clear their minds. My 2007 column on “Managing Management’s Outrage at Outrage Management” has some preliminary thoughts along these lines.

Full disclosure and personal reputation

name:Melodie Selby
field:Former state government employee, current professor
date:February 3, 2011
email:melodie.selby@wallawalla.edu
location:Washington, U.S.

comment:

Thank you for your article on “Full Disclosure.” It clearly summarizes and explains my observations and experience over the last 20 years.

I especially want to underscore your point about the ongoing effects of hiding information. It is incredibly rare that you have only one chance and don’t care how many bridges you burn.

Even if your company is disposable, you will want a future career, and these choices will follow you forever. It’s about winning the war, not individual battles.

I think some graphics to illustrate some of your concepts would be helpful – such as the communications seesaw. Overall, though, this is a great site. I recommend it regularly.

Peter responds:

I appreciate your point that failures of full disclosure don’t just burn an organization’s reputation. They burn the reputation of individual communicators as well – making the next job (hopefully for a more candid organization) harder to get and harder to do.

And of course you’re totally right that I am graphically challenged. There’s no reason why I couldn’t have included a photo like this one in the original column – except, of course, I never even considered doing so. I did decide recently to start posting videos (thus entering the 21st century about a decade late). But my videos consist mostly of me standing there talking without any graphics!

The reputation “bank account” and reputational redemption

name:Tony Jaques
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Issue and crisis consultant
date:February 3, 2011
email:tjaques@issueoutcomes.com.au
location:Australia

comment:

I am glad to see someone finally challenging the validity of the much-beloved idea of the reputational bank account.

I believe the problem here lies in assuming that good actions and bad actions are measured in the same currency. It has been proven time and again that years of positive reputation can be destroyed in weeks by unacceptable or improper behavior. The bank account metaphor suggests that badly behaved organizations can “buy back” reputation with some high-profile good citizenship. But it just ain’t so.

Moreover, if reputational problems are repeated, withdrawals are not just dollar for dollar, but multiply with accumulated penalty interest.

When failed Australian tycoon Alan Bond was jailed for dishonesty, some of his supporters tried to mitigate his record corporate collapse by emphasizing that he helped Australia win the America’s Cup yachting trophy. It made a good story of reputational redemption, but it meant nothing to the investors who had lost millions. They knew the real meaning of an empty bank account. And it was no metaphor.

Peter responds:

For me, what matters most in my column on “Two Kinds of Reputation Management” is my argument that increasing an organization’s positive reputation calls for very different actions than decreasing its negative reputation – and that the latter is a lot more important for most purposes than the former. My clients enjoy striving to be loved more by their supporters when they should be working on how to be hated less by their detractors.

This is certainly compatible with your point that the currencies are different.

And I like your point about repeated infractions, which you expanded on in the most recent issue of your Managing Outcomes newsletter, “When corporations get a touch of the ‘Charlie Sheens’.” It works in both directions, I think. Old misbehaviors make new ones more newsworthy and more objectionable. New misbehaviors give old ones a second (third, fourth, fifth) life. And as you say, the combined effect is certainly more than additive.

(Readers not familiar with Tony Jaques’s twice-monthly e-newsletter can look at some past issues – and subscribe – on his website, “Issue}Outcomes.” It’s free, I think it’s excellent, and unlike my work it’s invariably brief. Tony’s blog, which covers the same ground as well as additional topics, is also worth following.)

So how can an organization achieve reputational redemption? Good works do help some, I think. The currencies of positive and negative reputation aren’t completely unconnected.

As I said in the column, the evidence suggests that prior good acts are only marginally useful in diminishing the reputational damage of more recent misbehavior. It works a bit better in the other direction: More recent good acts can contribute to public forgiveness for prior misbehavior. Compensating your victims is especially important. (As you point out, helping Australia win the America’s Cup did nothing to compensate Alan Bond’s victims.) But even “irrelevant” good acts can serve as a penance, demonstrating that you know you have sinned, that you feel appropriately ashamed, and that you’re trying to reform and give back.

There are two prerequisites, however. First, order matters. Even victim compensation doesn’t confer reputational redemption until after you have acknowledged your misbehavior, been berated for it, and apologized.

And second, good works don’t constitute much of a penance unless you say they do. A factory that has polluted the neighborhood, for example, might decide to build the neighborhood a park. If the company calls the park philanthropy, its outrage-reducing, forgiveness-inducing capacity is negligible. Management may feel generous but the neighbors feel bribed. If the company calls the park compensation for all that prior pollution, its reputational value is much higher. For still greater reputational impact, the company should stress that aggrieved neighbors demanded the park. It’s okay if management feels blackmailed; what matters is that the neighbors feel vindicated and victorious. The more explicitly the company uses terms like “reparations” and “penance,” the better.

For more on the forgiveness process and reputational redemption, see my 2001 column, “Saying You’re Sorry.” A more recent take on forgiveness can be found in my video, “Second Outrage Management Strategy: Acknowledge Prior Misbehavior.”

Outrage at nuclear power versus the “solar power halo” – and a postscript on carbon capture and storage

name:Gogo Erekosima
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Consultant/CEO
date:January 2, 2011
location:Colorado, U.S.

comment:

I recently read an article online on the disadvantages of solar energy relative to other clean energy alternatives that I thought highlighted some of the issues I first came across on your website.

I would love to hear your thoughts on the risk communication issues associated with the public preference for solar energy over nuclear.

Peter responds:

William Pentland’s December 30 Forbes blog post that you linked to makes three basic assertions:

  • Even after 30 years of subsidies, solar power still isn’t cost-effective. There are far wiser technologies than solar for transitioning to low-carbon power – among them nuclear technology.
  • But the U.S. public hates and fears nuclear power. By contrast, the public loves solar power so much that people will pay a hefty premium for it. That’s the solar power halo versus nuclear NIMBY.
  • Ultimately, the public’s reactions matter more than the economic realities. Even though solar “sucks” economically, solar skeptics who care about global warming should set aside their skepticism in deference to “the profound psychological appeal that drives the public’s support.”

I have no expertise with regard to Pentland’s first assertion. I think everyone pretty much agrees that solar isn’t competitive today without continued subsidies. What’s not so clear – at least to me – is whether solar will always suck economically, or whether the subsidies will lead to breakthroughs that will make solar competitive in the future. Since it’s hard to tell which technologies will pan out, I’m inclined to think we ought to invest in as many alternative energy options as we can: solar, nuclear, wind, carbon capture and storage, etc. But I have no wisdom to offer on the tough decisions about how much of whose money to invest in which energy technologies.

Pentland’s second assertion is mostly true, though only a small fraction of the U.S. public actually pays extra for solar power. In terms of my hazard-versus-outrage distinction, solar power is intrinsically pretty low-outrage and nuclear power is intrinsically pretty high-outrage. Any normal person is going to feel more comfortable collecting some sunshine than splitting the atom. It’s notoriously difficult to get people worried enough about sunburn or calm enough about uranium.

But the difference isn’t all intrinsic. Lots of factors contribute to the high outrage Americans feel about nuclear power – the association with nuclear weapons; the stalwart opposition of environmentalists; the periodic dishonesty and continual arrogance of the nuclear industry; etc. At least some of these factors are alterable. There are places (like France) where nuclear power is far more acceptable than it is in the U.S. It’s not inconceivable that nuclear power could be far more acceptable in the U.S. a decade from now than it is today.

That’s why I disagree with Pentland’s third assertion.

Most of my career has been grounded in the conviction that outrage is alterable. When a risk is high-hazard, I help clients try to increase people’s outrage so they’ll take (or accept) more precautions. And when a risk is low-hazard, I help clients try to decrease people’s outrage so they won’t take (or demand) excessive precautions. My clients’ efforts to manage outrage upward or downward aren’t always successful, of course. But they’re not foreordained to fail.

For me it’s a fundamental principle that the proper response when outrage is too high or too low is to try to alter the level of outrage – not to manage the hazard as if the level of outrage weren’t alterable (or weren’t mistaken).

I’m not convinced that Americans are excessively fond of solar power. But if they are, the solution isn’t to live with it and over-invest in solar power; the solution is to arouse some solar outrage. I am convinced that Americans are excessively leery of nuclear power … and the solution there isn’t to give up on nuclear technology but rather to plan a campaign to diminish nuclear outrage.

I hasten to add that a campaign to diminish nuclear outrage wouldn’t focus on telling people they’re stupid to mistrust nuclear technology. It would concentrate far more on acknowledging the validity of many nuclear concerns and on cleaning up the nuclear industry’s act. I’d love to work on such a campaign – but for the most part the nuclear industry prefers telling people they’re stupid to mistrust nuclear technology. It has been more than a decade since a nuclear utility sought my advice on outrage management.

But I did get contacted recently about an energy option that’s potentially every bit as stigmatized as nuclear power: carbon capture and storage (CCS). Fossil fuel power plants, cement kilns, and many other industrial facilities emit huge amounts of carbon dioxide. Instead of releasing this greenhouse gas into the atmosphere, where it contributes to global warming, CCS proponents want to pump it underground. They don’t deny that the gas will eventually escape anyway, but “eventually” may be decades or even centuries – by which time other greenhouse gas emissions will presumably be lower. CCS thus buys time (if it works), reducing global warming in the short term while the world transitions away from a fossil-fuel-based economy.

Like solar power, CCS is uneconomic today. There are plenty of unanswered questions about whether it will ever be cost-effective compared with other ways to mitigate global warming, about whether it will work at all, and about whether pumping all that carbon dioxide underground might lead to new problems. But the big CCS issues are outrage issues.

Many environmental activists hate CCS not just because it might not work but also because it might work. Their fear is that instead of easing the transition away from fossil fuels, CCS might alleviate the pressure to transition away from fossil fuels at all. Their endless attacks on CCS are reminiscent of the green movement’s long opposition to nuclear power.

Meanwhile, the principal public proponents of CCS in the U.S. are coal companies, which promote it under the oxymoronic and outrage-provoking label “clean coal.” They’re hoping for exactly what the greens fear: a renaissance in coal-burning power plants and factories. Leaving CCS advocacy in the hands of Big Coal could easily doom it – partly because of the coal industry’s earned reputation for environmental intransigence; partly because of the industry’s obvious self-interest in the issue; and partly because of the industry’s insistence on overselling CCS technology as a done deal instead of a promising possibility. All this is reminiscent of how the nuclear power industry has undone nuclear power. With friends like these, a technology doesn’t need enemies.

My CCS client, based outside the U.S., seeks to facilitate carbon capture and storage pilot projects around the world. It sought my advice on ways of increasing public awareness and stakeholder engagement, in order to respond better to the local opposition that usually arises. I don’t know yet whether this will be a continuing consultation for me or just a one-shot, but here’s a summary of my perspective that I recently sent to the organization:

Your goal is to facilitate the implementation of CCS pilot projects capable of assessing how much CCS technology can contribute to slowing global climate change and buying time for more permanent solutions. For that goal to be worthwhile, you need the pilot projects to be widely credible – that is, a wide range of observers (from moderately skeptical to moderately enthusiastic) need to believe that if there are problems the pilot projects will find them and be candid about them, and that if no major barriers are found that will constitute significant reason to proceed toward scaled-up deployment.

If that’s accurate, then public awareness and stakeholder engagement are not goals. They are hurdles that must be cleared to achieve your goals.

Local public awareness of particular pilot projects is far likelier to arouse anxiety than support. Those championing the pilot projects cannot avoid that anxiety (it’s a necessary adjustment reaction); rather, they have to figure out how to get people through it in a way that manages the outrage and avoids stopping the projects. Similarly, the stakeholders who choose to engage with pilot project champions will do so mostly because they’re concerned, not because they’re enthusiastic.

The goal for the general public is to nurse it from its current pre-awareness apathy through awareness (and the resulting anxiety) to post-awareness apathy – where people make an “aware” decision to stand aside and let the pilot project proceed, having become convinced that its champions and managers are honorable and competent, and are being closely watched by critics and skeptics who will make sure they don’t cut corners or hide problems.

The goal for stakeholders is to woo them from the destructive role of outsider opponents to the constructive role of insider skeptics, prepared to let the pilot project move forward while they watch, knowing they will be able to detect and expose serious problems if they arise, and knowing they will be called upon to certify success if no such problems arise.

I would like to see nuclear power outrage reduced, but I’m not optimistic that it will happen. I’m more hopeful about CCS. It’s off to a bad start, in outrage management terms, but unlike nuclear it hasn’t spent decades digging a deeper and deeper hole for itself. I’d also like to see the solar power experiment continue … but not simply in deference to its low outrage. Instead of giving up on the higher-outrage alternatives that look technologically, economically, and environmentally promising, let’s try to resuscitate them.

name:William Pentland
field:Researcher
date:January 7, 2011
email:wpentland@law.pace.edu
location:New York, U.S.

William Pentland responds:

Thank you for your thoughtful response to my blog post about solar energy.

One point I would like to stress more emphatically here than I did in my post is that solar technologies have established a compelling record of incremental improvements and are likely to continue to do so as investment increases. In other words, solar will deliver over the long run. But the total cost of getting over the global warming goal line will still be far higher if solar is the primary instrument that allows us to do so.

In addition, it bears mention that most consumers DO pay for solar energy in the form of higher electricity rates. Utilities are sinking vast sums into expensive solar power projects and passing the cost off to ratepayers.

With regard to your assessment of nuclear energy, I believe you hit the nail on the head with your assessment of its public relations problems. On the other hand, I would suggest that these sentiments cannot be fully disaggregated for purposes of trying to reverse the public’s perception of nuclear energy. Dual use breeds dishonesty and dishonesty breeds environmentalist scorn. These issues are compound and, from my perspective, unlikely to be sufficiently resolved before it is too late to avert climate change.

Copyright © 2011 by Peter M. Sandman

Contact information page:   Peter M. Sandman     


Website design and management provided by SnowTao Editing Services.