2010 Guestbook
Comments and Responses

Why – and how – the United Nations should have admitted its forces may have brought cholera to Haiti

Name: Lisa Pogoff
Categorized as: Pandemic and Other Infectious Diseases
Field: Continuing education specialist, University of Minnesota School of Public Health
Date: December 6, 2010
Location: Minnesota, U.S.

Comment:

If you were hired by the U.N., or the government of Haiti, what would you do to communicate (and have people believe) that the U.N. didn’t bring cholera to Haiti?

Peter responds:

The right question isn’t how the U.N. (or the government of Haiti) could have credibly claimed the U.N. didn’t bring cholera to Haiti. Most experts think it is credible – though certainly not proven – that Nepalese U.N. peacekeepers did bring cholera to Haiti. If so, they did it unknowingly, and perhaps the U.N.’s early denials were more knee-jerk than dishonest. That can’t be said about its later denials. For weeks, U.N. spokespeople refused to acknowledge that the Nepalese might be responsible, and avoided making a serious effort to find out for sure.

So the interesting questions are how the U.N. could have acknowledged the truth (that it was possible the Nepalese troops might have brought cholera to Haiti), why it didn’t, what price it (and Haiti) paid for its dishonesty, and what it should do now.

But first, here’s a little of the evidence about the U.N.’s unsuccessful cover-up:

The U.N. said the Nepalese peacekeepers couldn’t be responsible for Haiti’s cholera because their sanitary systems were so excellent – until the Associated Press did some on-the-ground reporting to show otherwise:

The U.N. issued a statement on Tuesday defending the base. It said the Nepalese unit there uses seven sealed septic tanks built to U.S. Environmental Protection Agency standards, emptied every week by a private company to a landfill site a safe 820 feet (250 meters) from the river.

But those are not the conditions AP found on Wednesday.

A buried septic tank inside the fence was overflowing and the stench of excrement wafted in the air. Broken pipes jutting out from the back spewed liquid. One, positioned directly behind latrines, poured out a reeking black flow from frayed plastic pipe which dribbled down to the river where people were bathing.

The U.N. also said it couldn’t be the Nepalese because no Nepalese soldier had come down with cholera – an irrelevant fact since many cholera carriers are asymptomatic and it would take only one to launch an outbreak. From the same October 29 AP story:

The mission strongly denies its base was a cause of the infection. Pugliese [U.N. spokesman Vincenzo Pugliese] said civilian engineers collected samples from the base on Friday which tested negative for cholera and the mission’s military force commander ordered the additional tests to confirm. He said no members of the Nepalese battalion, whose current members arrived in early October for a six-month rotation, have the disease.

On November 4 the Nepalese army and the U.N. expanded on the denial:

The DPR [Nepalese Army Directorate of Public Relations] also clarified that no symptoms of cholera were found in Nepali peacekeepers during their health check up following the allegation that they are the source of the epidemic. “All the Nepalese peace keepers had their health checked up instantly after the allegations. The investigation did not find any evidence that Nepalese peace keepers were the source of the cholera; there is no evidence that the disease has been caused or carried by Nepalese peacekeepers,” the statement points out….

Asked about UN efforts to determine whether its peacekeepers have cholera, Nesirky [Martin Nesirky, spokesman for U.N. Secretary-General Ban Ki-moon] said that all the soldiers in the Nepalese contingent underwent all necessary medical tests and were found innocent of the allegations. “If they had diarrhea or any other cholera-related symptom, they would have undergone further tests, including for cholera. But none of them had to do that, as they were all healthy and remain that way now,” Nesirky said.

Perhaps most damningly, the U.N. (and the World Health Organization, a U.N. agency) said figuring out the cause of the cholera outbreak was unimportant and the focus should be on controlling the outbreak and treating its victims – as if there weren’t a universal need to know why; as if the outbreak had no implications for other U.N. missions in other countries; and as if the accused perpetrator of a horrific event had a right to the opinion that its guilt or innocence wasn’t worth investigating.

Consider this from a November 3 AP story:

A spokesman for the World Health Organization said finding the cause of the outbreak is “not important right now.”

“Right now, there is no active investigation. I can’t say one way or another (if there will be). It is not something we are thinking about at the moment. What we are thinking about is the public health response in Haiti,” said spokesman Gregory Hartl.

I don’t know whether the U.N.’s denials were provoked chiefly by a desire to save face or chiefly by a concern that acknowledging the truth might provoke violence that could make things even harder for Haitians and for those trying to address Haiti’s many emergencies, including its cholera emergency.

Insofar as it was the latter, I think U.N. officials miscalculated. It’s true that Haiti was and remains a powder keg. But the U.N.’s denials were not believed, and didn’t deserve to be believed – and so the denials almost certainly exacerbated the violence they were perhaps intended to forestall. As the AP reported on November 19:

When riots broke out across northern Haiti this week, the U.N. blamed them on politicians trying to disrupt the upcoming vote. But observers say the U.N.’s early stance fanned the flames.

“If the U.N. had said from the beginning, ‘We’re going to look into this’ ... I think that, in fact, would have been the best way in reducing public anger,” said Brian Concannon, director of the Institute for Justice & Democracy in Haiti. “The way to contribute to public anger is to lie.”

In the early days of the cholera outbreak, U.N. officials might usefully have said something like this:

We feel horrible that a member of the U.N. peacekeeping force from Nepal might have brought cholera to this tortured island. It is awful for all of us who are trying to help to realize that we may unwittingly have done harm instead of good. But our experts tell us that that could very well be the case. In the weeks ahead, we pledge to do everything we can to discover the truth, to report the truth, and to learn from the truth whether changes are needed in the way U.N. forces are deployed in the future.

We haven’t found convincing evidence yet that the U.N. is the source of Haiti’s cholera – we haven’t found a member of the Nepalese detachment infected with cholera, for example. But there are reasons to think that someone in the detachment might have been asymptomatically infected and carrying the disease when the troops arrived in October. The kind of cholera now circulating in Haiti for the first time in decades is a strain that circulates in much of South Asia, including Nepal. And Nepal did have cholera outbreaks last August and September. So it is a reasonable hypothesis that some Nepalese troops could have been exposed before they were deployed to Haiti.

While the evidence is still uncertain, it is obviously possible that Haiti’s cholera is our fault – and for that possibility we apologize, with great anguish. We accept that this puts our Haiti aid effort in a different light. We must do all we can for Haiti, not just as fellow humans trying to help our neighbor in an emergency, but as potential perpetrators trying to atone for our own possible mistake.

The U.N. should then have left it to non-U.N. officials to point out that some of the many aid workers entering Haiti also came from places where cholera is endemic, and might also have had asymptomatic cholera during their early days in Haiti. It is largely in response to the U.N.’s categorical denials that outsiders mostly ignored this possibility and instead took the “blame Nepal” side of the seesaw.

I find it much harder to decide what the U.N. should say now. Its denials helped provoke violence, and that violence has made U.N. aid efforts (and others’ aid efforts) more difficult. But at least for the moment, the violence has mostly receded. A belated U.N. acknowledgment now that it is quite possible the U.N. brought cholera to Haiti, and that its previous denials were dishonest, seems pretty likely to provoke a new wave of violence.

As a risk communication consultant, I always advise clients to tell the truth. And I almost always advise that the sooner the better. But in Haiti right now, I couldn’t in all conscience advise the U.N. to come clean tomorrow. It should have come clean at the start, and it will need to come clean eventually – and when it does it will rightly pay a price for its prior dishonesty. But I think I would rather see the U.N. continue to postpone the overdue moment of truth than see its belated candor make it even harder to meet the needs of Haiti’s thousands of cholera victims (and its millions of earthquake victims).

At the very least, U.N. and WHO officials should stop insisting that it is not important to try to learn where Haiti’s cholera came from. And at long last it looks like they will. In a December 3 speech to the General Assembly, U.N. Secretary-General Ban Ki-moon reversed course in a big way:

Finally, let me say a few words as an additional matter. Let me say directly to you that I am determined to understand and address the manner in which the cholera outbreak occurred and was spread.

The people of Haiti are suffering. They are suffering enormously, and they are asking legitimate questions. Where did this come from? How did this happen?

We may never be able to fully answer these complex, difficult questions, but they deserve our best efforts….

The people of Haiti deserve nothing less.

My wife and colleague Jody Lanard provided additional research for this response.

Talking about CEO compensation

Name: Alan Crawford
Categorized as: Outrage Management
Field: Writer/editor
Date: December 3, 2010
Location: Virginia, U.S.

Comment:

For an article in Impact, the Public Affairs Council newsletter, I'd like your advice to public affairs practitioners in view of the fact that CEO compensation – already controversial – continues to increase.

Peter responds:

Public outrage at CEO salaries and benefits is widespread, and in my judgment it is justified.

The outrage is justified in part because the gap between top executives' compensation and ordinary workers' compensation is so enormous – far wider than it was a few decades ago, and far wider in the U.S. than in most of the developed world. And the outrage is justified in part because top executives’ compensation is so thoroughly unresponsive to the fact that for millions of people the economy still stinks.

As CEO compensation continues to soar, so does the outrage – creating a serious problem for corporate public affairs practitioners.

In 2008 I wrote a long column about “Managing Justified Outrage.” One of the main points I made there was the importance of acknowledging that the outrage is, in fact, justified – rather than adding insult to injury by pretending that there’s no reason why people should be upset.

This needs to be said every time a company talks about its CEO’s astronomical salary: “Of course it can make people pretty angry that our CEO, or any CEO, earns so much more than the average working person. And there’s also understandable anger that the compensation of CEOs is soaring while many people are having a hard time finding work and even more have seen their earnings stagnate.”

Only after this has been said, I think, can a company hope to be heard about why it just gave its CEO another few million dollars:

So why did we give our CEO a hefty raise this year? The truth is going to sound awfully lame, more like a teenager than a huge corporation, but it is the truth nonetheless. Everybody else is doing it.

A great CEO who keeps making the right decisions for her company is worth so much to the company’s shareholders that her salary and benefits, however high they may be, shrink to insignificance compared to her value. And even though this past year was not such a good year for the overall economy, it has been – paradoxically – a very good year for corporate profits and the stock market.

So what would happen to a company that didn’t give its CEO a big raise in such a year? That might signal that the company wasn’t pleased with its CEO’s performance – which could hurt the company’s reputation and even its share price. Or it might signal that the company didn’t appreciate its CEO as much as it should – which could tempt another company to try to woo the CEO away, and could tempt the CEO to be willing to be wooed.

A company that is delighted with its CEO – and we are certainly delighted with ours! – had better show it the same way other companies are showing it: with a big raise.

For extra credit, consider going one step further:

This explanation obviously leaves the most important question unanswered: What happened to the rising tide that lifts all boats? How do we reorganize the economy so everybody benefits – not equally, because that would be the death of the motivation to excel, but more equally, more like things used to be? Anyone who can figure out the answer to that question, and put it into action, will earn the gratitude of us all. A bit unfairly (it’s beyond her pay grade), we put this question to our CEO, and here’s what she said….

Telling people explicitly what you don’t mean

Name: Stephen L. Brown
Field: Retired risk assessor
Date: December 2, 2010
Location: California, U.S.

Comment:

The Institute of Medicine has just released its latest report on recommended dietary intake levels for Vitamin D and calcium. See http://www.nationalacademies.org/morenews/20101129b.html.

What is interesting to me is how the press is spinning the recommendations. The New York Times headline is “Report Questions Need for 2 Diet Supplements,” while the Wall Street Journal headline is “Triple That Vitamin D Intake, Panel Prescribes.”

Although both articles acknowledge the opposite interpretation, the Times article emphasizes conclusions like, “For most people, taking extra calcium and vitamin D supplements is not indicated” (attributed to a member of the panel, not the report itself) while the Journal article emphasizes ones like, “Most Americans and Canadians need to get much of their vitamin D from supplements.” I suppose those could be reconciled if the word “extra” in the first means “over the amount they are already taking” rather than “over the amount they get from food and sunlight exposure.” But how is the ordinary reader to know?

I only read a little from the actual IOM report brief, but it seemed to me somewhat more consistent with the Times spin. I suppose supplement doubters will accuse the Journal of shilling for the supplement industry. I haven't figured out yet what motive the supplement supporters will ascribe to the Times.

Having worked for the National Research Council (the worker bees that staff committees of the IOM, National Academy of Sciences, and National Academy of Engineering), I know that members of the panel were probably deliberately chosen to represent a range of views on supplements and that the report constitutes a compromise among those views. If one were to read the entire report, I suspect that (s)he could find text to support almost any view within that range.

But if you use the press for your health advice, you’ll certainly arrive at different conclusions depending on what source you see. Because I don’t take supplements and won’t start unless I’m convinced by my internist, I like the Times article better.

Does this gross difference in reportage mean that the IOM didn’t do a good job of communication, or is it just the way things are?

Peter responds:

I think you’re right on both counts: It’s just the way things are, and the IOM didn’t do a good job of communication.

One tough-but-standard piece of a communicator’s job is to anticipate likely misinterpretations and forestall them. Instead of just saying “X,” say “X, not Y.” Better yet, say “A lot of people are tempted to think Y, but the truth is X.” Even better: “A lot of people may think I mean Y, and I can understand how they’d think that. But that’s not what I’m trying to say. My point is X, not Y.”

This is especially important – and rare – when talking to journalists. What I sometimes call “going meta” works pretty well with reporters, and I’m always surprised how reluctant my clients are to do it. “Some of your editors may want to interpret today’s story as meaning Y. I know you don’t control headlines, but please try to make it as clear as you can when you write your story and when you pitch it to your editor that the key news here, at least from our point of view, is X, not Y.” And in a continuing story: “A lot of news yesterday emphasized Y. I’m sorry I wasn’t as clear as I wanted to be yesterday. Y isn’t the main focus of what I was trying to say yesterday, and it’s not an accurate interpretation of what our organization believes is important here. The key story in our judgment is X.”

In the case at hand, the key message probably wasn't “X, not Y.” It was “both X and Y, not just one or the other.” That's doable too. “Our research findings have led us to two conclusions, which at first glance might seem mutually contradictory. Reporting either one at the expense of the other would be a mistake. Let me summarize the two findings and explain how they relate to one another. I hope you’ll try to make sure your stories – and your ledes – achieve a good balance between the two.”

Of course none of this is achievable if the two key messages really are mutually contradictory, and if you’re unwilling to call attention to the contradiction – for example, if you’re announcing a committee compromise that has some language to please some members and different language to please other members. Your comment suggests that that’s very likely to be what happened with the IOM report on Vitamin D and calcium. If the report was a compromise and the IOM didn’t want to say so, it was stuck issuing a self-contradictory document and letting reporters figure out for themselves what was going on.

Under those circumstances, it was predictable that some reporters would grab onto one set of quotes and others would grab onto the competing set of quotes. Only a pretty savvy reporter on what that reporter considered a pretty big story would tease out the contradiction and write about it … or even do follow-up interviews about it. Which side of the contradiction each story focused on would be determined partly by chance and partly by the values/bias of the individual reporter or publisher. But it would be determined mostly by two other factors: which half of the contradiction was most emphasized in the news release accompanying the report, and which half of the contradiction came across to most journalists as more interesting, understandable, credible, and actionable. Since the latter two factors are relatively invariant, I would expect most of the coverage to lean in the same direction.

Note: Stephen L. Brown’s comment was originally posted on the RISKANAL listserv on November 30, 2010.

Prospects for persuading activists and public health officials to be more honest

Name: George Vigileos
Field: Volunteer activist
Date: December 2, 2010
Location: Oregon, U.S.

Comment:

I am trying to be, not just a critic/watchdog, but a constructive, contributing citizen to our civic planning processes, and I find myself very unschooled in the issues and practices in public involvement – what's good practice and what’s hype and abuse.

I am very fortunate in that I have a good friend who retired from a long career as a communications specialist for various governmental bodies in the northwest region. In his many years of practice, he was most appreciative of your own approach to public communication issues. He recommended that I study your site. He especially recommended that I read your “written speech.”

I have done so, enjoyed it, and have been very encouraged by its message.

But simply being honest is a difficult message to sell to people, perhaps more cynical people, who are accustomed to “spinning” while understandably rationalizing that they are spinning for a greater good.

The big question I walk away with is this: What indicators can one offer people to suggest that the payback – the benefit of unvarnished truth and honesty – is achievable in their lifetime?

Even for well-meaning people, that is an understandable concern. I am sure you have gotten this question before and am very interested to hear your comments on it.

Peter responds:

Thank you for your kind words about my 2009 Berreth Lecture, “Trust the Public with More of the Truth: What I Learned in 40 Years in Risk Communication.” A couple of hundred people a month read that lecture or listen to the audio on my site. And in the 13 months since the National Public Health Information Coalition posted the video, about 2,000 people have watched it. It’s a long way from viral, but for me those are big numbers. (In 2011, I posted the video on Vimeo as well.)

The message of the Berreth Lecture – that “good guys” (such as public health officials and public-interest activists) ought to be more honest – obviously touches a nerve. So does my claim, based on 40 years of consulting experience, that in practice good guys are not more honest than bad guys, and are surprisingly often less honest than bad guys. I think many people who have devoted their professional lives or their volunteer hours to communicating on behalf of good causes do feel diminished by how often they end up telling less than the whole truth, and resonate to my argument that a more honest approach would be more effective in the long run.

But when I get on my high horse about a specific case of good guys’ dishonesty, I usually lose. For example, I started the 2009–2010 swine flu pandemic working closely with top officials of both the World Health Organization and the U.S. Centers for Disease Control and Prevention. I ended the swine flu pandemic pretty much persona non grata with both organizations. I think the reason they stopped seeking my risk communication counsel (at least my pandemic communication counsel) is that I pushed hard for candor about aspects of the pandemic that they wanted to be less-than-candid about. (For more than you want to know about a couple of specific examples, see “The ‘Fake Pandemic’ Charge Goes Mainstream and WHO’s Credibility Nosedives” and “Why did the CDC misrepresent its swine flu mortality data – innumeracy, dishonesty, or what?”)

To assess the prospects for persuading public-interest activists and public health officials to be more honest, let’s review the three main reasons why dishonesty is tempting to good guys.

At least in the short term, dishonesty often works very well for good guys – much better than it works for bad guys.

It’s no secret that everybody is tempted to lie – and almost everybody sometimes gives in to the temptation. (I know I do.) Even more seductive is the temptation to mislead without lying – to construct carefully worded messaging that showcases some parts of the truth and hides or distorts other parts, thus giving a false impression without actually saying anything false.

The main reason we don’t all lie and mislead constantly is that we have learned that both the probability of getting caught and the cost of getting caught are high. (Yes, ethics are another reason – but I think the fear of getting caught is the biggie.)

I routinely point out to my corporate clients that their dishonesty has done them more harm than good. Not only are their dishonest communications frequently exposed; even their honest communications are routinely disbelieved. In a world where you have opponents watching your every move, and a world where your opponents have access to research tools like Google and FOIA, and a world where your opponents can significantly affect your ability to achieve your goals, dishonesty is simply a bad bet.

My corporate clients aren’t always persuaded. And even when they’re persuaded, they aren’t always honest. Hope springs eternal, so maybe they discount the risk of getting caught sometime in the uncertain future and go for the short-term benefit of misleading people right now. Or maybe they see their low credibility as a sunk cost, figuring that if people are going to mistrust them anyway, they might as well be hanged for a sheep as a lamb.

Even so, I stand a fair chance of actually persuading my corporate “bad guy” clients to be more truthful.

But what if you don’t have opponents watching your every move? What if your credibility is high enough that nobody really wants to hear about your misleading half-truths – what if reporters don’t want to run the story and the public doesn’t want to read it or watch it? What if your opponents are themselves so low-credibility that nobody believes them when they expose your dishonesty?

Then the empirical case for honesty falls apart, and all that’s left is the ethical case.

Something close to that is reality for many public health officials and many public-interest activists. Not all, obviously. There are plenty of activist groups whose opponents have more mainstream credibility than they have (think about ACORN, for example), and even public health officials are periodically under fire.

Still, my terms “good guys” and “bad guys” are shorthand for the reality that we cut slack for altruistic organizations like the CDC and Greenpeace in a way that we don’t for profit-motivated organizations like BP. And we cut slack for the local health department and the local environmental group in a way that we don’t for the local corporate polluter.

Dishonesty is therefore a better bet for good guys than for bad guys.

Good guys often feel more psychological pressure to be dishonest than bad guys feel.

The decision to mislead isn’t always grounded in a rational calculation of the benefit if you get away with it versus the risk of getting caught. Underlying psychological factors can distort the calculation, or can replace the calculation as a motivation to mislead.

Consider for example the belief that your cause is both just and desperate. If you’re fighting to rid the world of polio or to protect the world from global climate change, a little verbal sleight-of-hand may not seem so objectionable. Even ethicists acknowledge that there are times when it’s right to lie. Public-interest activists and public health officials are a lot likelier than corporate flacks to judge that right now is such a time.

And then, typically, a more internal sleight-of-hand comes into play. Having decided that their cause is so crucial that dishonesty is justified, good guys often persuade themselves that dishonesty in such a cause isn’t really dishonest at all. They’re not misleading people when they suppress a fact that might diminish the public’s willingness to get vaccinated (for example). They’re leading people. It’s that awkward fact that would have misled people, if they had been so foolishly punctilious as to publicize it. Getting vaccinated is good, so leaving out a fact that might keep people from getting vaccinated is also good. Since it’s good, how could it possibly be dishonest?

This kind of self-righteous self-deception is common even among corporate spinners. But they know their “cause” is selfish. And periodically they get caught and excoriated, which may serve to remind them that they were actually being dishonest. Good guys know their cause is altruistic. And they get caught less often, and get punished less harshly when they’re caught. So self-righteous self-deception about their own dishonesty is epidemic among the good guys. When they’re 90% right and don’t want to acknowledge their critics’ 10%, they convince themselves that they are 100% right. When science alone won’t take them where they want to go, they go beyond the science … and continue to believe (and claim) that everything they say is grounded in “sound science.”

Another psychological factor conducive to good-guy dishonesty is the psychodynamics of self-sacrifice. Compared to corporate executives, public-interest activists and public health officials are obviously underpaid. Often they feel under-appreciated as well, especially by the public they’re desperately trying to protect. The combination of insufficient economic reward and insufficient appreciation can lead to resentment and even contempt, which can fuel the impulse to mislead. Not always, but all too often good guys have a chip on their shoulder. They feel entitled to cut a few corners.

Finally, consider the urge to fight fire with fire. All of my clients, good guys and bad guys alike, justify their own dishonesty by pointing to the other side’s dishonesty. But bad guys are renowned for their dishonesty, which provides societal support for the good guys’ rationalizations. Since everyone knows corporations are all liars, why should anti-corporate campaigners have to hobble themselves by sticking to the truth?

The battle between public health officials and anti-vaccination activists is especially interesting on this dimension. Each side sees the other side as obviously the bad guys. Each side rightly notices that the other side is often dishonest, and sees that as “permission” to be dishonest in return. By contrast, corporate spokespeople are often outraged at how unfair it is that their critics mislead and exaggerate with impunity and they’re not allowed to do likewise – but they do usually manage to remember that they’re not allowed to do likewise.

Dishonesty – or at least exaggeration – is genuinely more acceptable when trying to warn people than when trying to reassure them.

One of the core principles of risk management is conservativeness. When we’re not sure how dangerous something is, we consciously try to err on the alarming side, on the grounds that over-protecting people is wiser than under-protecting them. If a smoke alarm goes off and there’s no fire, for example, that’s a minor problem. But if there’s a fire and the smoke alarm doesn’t go off, that’s a major problem. So we calibrate smoke alarms to go off too much, so they won’t miss a fire.

For similar reasons, we also calibrate activists to go off too much.

In a typical risk controversy, a company wants to do something that activists claim could be dangerous. People know the activists’ warnings are probably exaggerated; they generally approve of the exaggeration. People also know the company’s reassurances are probably exaggerated, and consider that a much more serious problem. The asymmetry is built in: Exaggerated warnings are a public service, while exaggerated reassurances are a public disservice. So Greenpeace gets to be more dishonest than BP. (For more on this built-in asymmetry, see my 2006 column on “The Outrage Industries: The Role of Journalists and Activists in Risk Controversies.”)

Although there are certainly exceptions, in general the good guys are on the alarming side of most controversies, while the bad guys are on the reassuring side. Because they’re on the alarming side, the good guys are entitled to more leeway to exaggerate and mislead.

Not surprisingly, good guys can get into bad habits and end up abusing the leeway they have been given. This is especially a problem when the good guys end up on the reassuring side of a controversy. Accustomed to being on the alarming side, where exaggeration is conservative, they may neglect to recalibrate.

Public health officials, for example, spend endless hours trying to convince apathetic people to get vaccinated – essentially in the activist role. Since they’re warning about the dangers of infectious diseases, exaggerating those dangers is a conservative bias. So is exaggerating the benefits of vaccination. But when public health officials start talking to people who are worried about the dangers of vaccines, they’re no longer on the alarming side of the controversy. Now they’re on the reassuring side, the corporate side. Their exaggerations (and omissions, distortions, and lies) are no longer conservative, and therefore no longer nearly as appropriate. But it’s a rare public health official who is sensitive to this distinction.

I don’t see good ways to diminish the psychological factors that impel and justify good-guy dishonesty. Nor do I see much prospect of persuading good guys that they should recalibrate their impulse to exaggerate when they’re on the reassuring side of a controversy.

Our best shot, I think, is to focus on trying to make dishonesty work less well for good guys – and trying to convince them that it already works less well than they imagine.

Every institution needs watchdogs. Corporations are more honest when there are anti-corporate campaigners around; governments are more honest when there are opposition politicians and anti-government activists yipping at their heels. Public health officials, then, need more (and more powerful) critics to keep them honest. And public-interest activists need more (and more powerful) counter-activists.

Of course it’s possible for opponents to get too numerous and too strong, not just keeping you honest but undermining everything you do. Arguably that’s already the case for corporations and politicians, our traditional bad guys. I don’t think so; I think anti-corporate and anti-government activism is still doing far more good than harm. But it’s arguable. It’s not arguable that public health and public-interest activism face excessively strong opposition. They face insufficient opposition, and that’s a big piece of why they’re excessively dishonest.

I’m not arguing that public health officials and public-interest activists are too powerful. In fact they’re often ignored, and certainly underfunded. But they’re not often bird-dogged by hostile stakeholders waiting to pounce on their every misstep. That’s what they need to keep them honest.

The increasingly universal skepticism of the public is an important step in the right direction. I don’t share the widespread worry that the public has become too cynical and mistrustful. I think we need more mistrust, not less – leading to more accountability and more honesty. (Yes, honesty under pressure, honesty motivated by the fear of getting caught. That’s the most reliable kind of honesty, I think.) People are wise to mistrust political and corporate leaders. I look forward to the day when people will mistrust the CDC and Greenpeace as well – if not as much as political and corporate leaders, at least a little more than they do today.

In a nutshell: We need to make dishonesty more costly to good guys by holding them to the same high standard to which we already hold bad guys.

In the meantime, we can focus on convincing good guys that dishonesty is already more costly than they think. The two pandemic examples I mentioned earlier in this response are instructive.

  • The U.S. CDC was dishonest about the age-specific death rates of the swine flu virus, eliding from the true claim that swine flu was more dangerous to young people than flu usually is to the false implication that swine flu was more dangerous to young people than to their parents and grandparents. It got away with it.
  • The World Health Organization was dishonest about the severity of the swine flu pandemic (reluctant to concede it was mild), and about changes in its flu pandemic definitions and descriptions just before swine flu arrived. It didn’t get away with it. Instead, many people, especially in Europe, came to believe that WHO had manufactured a fake pandemic in deference to Big Pharma.

It’s important to publicize the CDC’s successful dishonesty in order to teach the public to mistrust public health officials more – so the officials will have to become more honest. It’s just as important to publicize WHO’s unsuccessful dishonesty in order to teach public health officials that the public mistrusts them already – so the officials will realize that (like corporations) they’ll do better if they tell the whole truth, even the inconvenient bits.

For me, good-guy honesty is not an ethical precept. It’s an empirical proposition. I concede that saving lives by getting people vaccinated may be ethically more important than telling the whole truth. And I concede that dishonesty is less costly to good guys than to bad guys, less costly to public-interest activists and public health officials than to corporations and politicians. But even for good guys, I believe, the costs of dishonesty are unacceptably high … and they’re mounting.

In the very short-term, of course, hiding or distorting the parts of the truth that don’t help your case works better than admitting those parts of the truth. That’s true even for corporations. But corporations are learning to think longer-term than that, and activists and public health officials should do so too.

I see seven serious long-term bad effects of good guys’ dishonesty:

  • Good guys’ dishonesty undermines their credibility on the specific issue they were caught being dishonest about. That’s the most direct and most obvious effect.
  • Good guys’ dishonesty undermines their credibility on the issue they’re being dishonest about even if they’re not actually “caught.” Surprisingly often, people smell a rat. They don’t know which facts were omitted or distorted. They just know the messaging felt one-sided and aroused their mistrust.
  • Good guys’ credibility problem spreads to their allies on that issue – and to the issue itself. If you can’t be trusted on the issue, maybe your side of the issue can’t be trusted period.
  • Good guys’ credibility problem spreads to their positions on other issues – and to their reputation as an organization. If you can’t be trusted on this issue, maybe you’re misleading people about other issues as well.
  • Good guys’ credibility problem spreads more broadly – to the whole enterprise of which they are a part. In general, do public-interest activists tell the truth? Do public health officials?
  • Good guys’ credibility problem spreads still more broadly to issues beyond the honesty or dishonesty of their messaging. If you’re not honest with us, why should we assume you have our back? Fundamental doubts may begin to arise. Are public-interest activists actually acting in the public’s interest? Are public health officials actually protecting the public’s health?
  • Finally, all this dishonesty and all this suspicion undermine good guys’ self-image and self-esteem. Being dishonest takes a toll, even when it’s in a good cause. Struggling not to notice your own dishonesty takes a toll. Watching as others learn to mistrust you (even when you’re telling the truth) takes a toll. Ultimately, good guys’ dishonesty has a profoundly corrupting effect on the good guys themselves.

Am I optimistic about convincing public-interest activists and public health officials to be more honest? “Optimistic” goes too far. But I am cautiously hopeful that it’s worth trying.

The toughest part of the job, I think, isn’t convincing good guys that their dishonesty does themselves and their causes more harm than good. The toughest part is convincing them that they are in fact dishonest. That has to be done empathically – probably more empathically than I have done it in this Guestbook response. (I may have done it a bit better in the Berreth Lecture, where I harped less on the word “dishonesty” and more on the advisability of “trusting the public with more of the truth.”)

It is important to put a lot of emphasis on all the compelling reasons good guys have for being less than completely candid when they’re trying to save lives or save the world. I recommend giving lots of examples where the good guys’ dishonesty really has helped save lives or save the world. Only after my audience has reluctantly conceded that “yeah, we do that a lot” would I introduce the possibility that we might be doing it too much, and paying a higher price than we realize.

Mandatory flu vaccination for health care workers (again)

Name: Bill Borwegen
Categorized as: Pandemic and Other Infectious Diseases
Field: Occupational Health and Safety Director, SEIU
Date: November 5, 2010
Email: bill.borwegen@seiu.org
Location: Maryland, U.S.

Comment:

Today patients suffer 1.7 million hospital-acquired infections (HAIs) resulting in 99,000 patient deaths each year, according to the CDC. Comprehensive infection control programs can dramatically stem this epidemic. Instead, many hospitals are fixated on mandatory flu vaccine programs for their employees – one way to reduce the disease burden of one disease.

Many of these same employers fight against the use of adequate respiratory protections against the most likely route of transmission of the flu, airborne droplets.

Of course we all support voluntary flu vaccination programs. But unfortunately there is little empirical data demonstrating flu transmission from health care workers to patients, and last year the H1N1 vaccine was only 62% effective.

Why are health care employers and many infection control associations overemphasizing this very narrow and less-than-perfect way to control one disease (a vertical program), when we could make much more progress in reducing HAIs with a horizontal program that would address the full spectrum of diseases that pose risks to patients in health care settings? Such a program would include everything from vigorously promoting (and rewarding) routine hand-washing to ensuring there are a sufficient number of well-trained environmental services (housekeeping) workers.

My own answer to the question I asked: It has to do with the balance of workplace power and Hospital Marketing 101. Why make the employer spend all the time and energy to have an infection control program that is real, detailed, and complicated to explain? Why not put all of the burden of infection control on the lowly health care worker and then issue a nice sound-bite press release saying how much you care about patient safety (when, if you really did, you would promote a comprehensive program) – with a vaccine that is at best only two-thirds effective?

I’m not sure I am posing a “communications” question per se. But there are risk communication issues here. What do you think they are?

Peter responds:

I have written about mandatory flu vaccination for health care workers (HCWs) twice before: in my October 2009 Guestbook entry on “Mandatory flu vaccination for health care workers” and my March 2010 Guestbook entry on “Making health care workers get vaccinated against the flu.” I’m a believer in flu vaccination and I go get my own shot every year, but like you I’m opposed to making it mandatory, even for HCWs. Some of my reasons are outlined in those two entries.

You’re asserting two core points here, it seems to me:

  • Forcing HCWs to get vaccinated against the flu isn’t as effective for protecting patients as other infection control measures that hospitals and other health care employers are ignoring; and
  • Health care employers are focusing on mandatory vaccination in order to look like they’re taking action, as a strategic distraction from the infection control measures they ought to be taking instead.

You’re right that neither of these is a risk communication issue – and also right that there are important risk communication issues lurking beneath the surface, issues I didn’t fully address in my two previous go-rounds.

Debatable Effectiveness

I certainly agree with you that the flu vaccine is far from perfect. And I’m sure broad hospital infection control programs accomplish more than vaccinating HCWs against the flu accomplishes. Broad programs exist already, of course – but I take your point that expanding and improving them should be a higher priority than mandatory flu vaccination. Still, you have to start somewhere. The greater value of a more comprehensive, more expensive program doesn’t prove that small, narrow programs are valueless. We have to assess mandatory HCW flu vaccination on its own merits.

The evidence is strong that flu vaccination often fails to generate protective immunity. The CDC says the flu vaccine is 70–90 percent effective in healthy young adults, and much less so in older vaccinees and those with chronic diseases. But despite its high failure rate, flu vaccination does significantly reduce the incidence and severity of influenza in vaccinees. It’s a sure thing that vaccinated HCWs are much less likely to get the flu than unvaccinated HCWs.

The tougher question is whether vaccination significantly reduces flu incidence in others – in close contacts of vaccinees who were themselves unvaccinated or whose vaccinations didn’t take. This kind of third-party protection apparently requires a high level of vaccination in surrounding individuals. If 80–90 percent of the people around you have been vaccinated against the flu, you get meaningful protection, but if it’s only 50–60 percent of your contacts, it may not help you much. The tipping point is thought to be somewhere in the 70 percent range. That may explain why the evidence is so weak that vaccinating HCWs actually reduces patient mortality. It seems to do some good in nursing homes, where there aren’t many people around (sadly) other than HCWs and elderly long-term patients. There’s no proof it helps significantly (or maybe at all) in hospitals with lots of visitors and new patients all the time – where there are plenty of contacts other than HCWs to give you the flu.

It is well documented that more HCWs get vaccinated when they’re told they have to than when they’re told it’s up to them. Some voluntary programs achieve a vaccination rate of almost 90 percent, but the national rate of HCW vaccination remains below 50 percent – even after thirty years of CDC urging. Most mandatory programs achieve around 98 percent compliance. The difference between a mandatory program (98%) and a typical voluntary program (<50%) may have a significant impact on employee absenteeism (and “presenteeism” – HCWs who go to work with flu-like symptoms). But there’s no proof that this difference has a significant impact on patient health in a busy hospital. The difference between a mandatory program (98%) and a state-of-the-art voluntary program (90%) seems unlikely to have much impact on patient health in a busy hospital.

Claims of Hypocrisy

That’s probably why it feels hypocritical for hospital managements to put so much stress on vaccinating their employees. If patients are likely to catch the flu from HCWs, they’re presumably also likely to catch it from visitors and other patients. Yet most mandatory vaccination programs for HCWs are not accompanied by any effort to get visitors and patients vaccinated as well. Perhaps mandatory vaccination of visitors and patients would be too much to expect – but why not at least urge visitors and patients to get vaccinated? And why not urge (if not require) unvaccinated visitors and patients to wear masks, as unvaccinated HCWs are sometimes required to do? If meaningful third-party protection requires vaccinating more than 70 percent of the patient’s contacts, what is the point of a halfway program that focuses exclusively on health care workers?

It’s also worth examining how HCW flu vaccination programs address the problem of unsuccessfully vaccinated employees, as opposed to the problem of those who decline to be vaccinated. Since the CDC says flu vaccination is 70–90 percent effective in healthy young adults, let’s generously assume 80% for HCWs. So if a particular program gets 98% of employees vaccinated, the vaccination worked for 78.4 percent of all employees (80 percent of 98 percent). Who’s left to give patients the flu? The 2 percent who weren’t vaccinated and the 19.6 percent whose vaccinations didn’t take. In this hypothetical hospital, unsuccessfully vaccinated employees are more than nine times as dangerous to patients as unvaccinated employees.
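
To put numbers on that comparison, here is a minimal sketch of the arithmetic above in Python. The 80 percent effectiveness and 98 percent uptake figures are the same illustrative assumptions used in the preceding paragraph, not measured data.

```python
# Illustrative arithmetic only: effectiveness and uptake are the hypothetical
# values assumed in the paragraph above, not measured hospital data.
effectiveness = 0.80   # assumed vaccine effectiveness among HCWs
uptake = 0.98          # assumed share of employees vaccinated under a mandate

protected = effectiveness * uptake              # vaccinated and the vaccine "took"
vaccine_failed = (1 - effectiveness) * uptake   # vaccinated but still susceptible
unvaccinated = 1 - uptake                       # declined or exempt

print(f"Protected:                 {protected:.1%}")       # 78.4%
print(f"Vaccinated, not protected: {vaccine_failed:.1%}")  # 19.6%
print(f"Unvaccinated:              {unvaccinated:.1%}")    # 2.0%
print(f"Ratio (failed : unvaccinated): {vaccine_failed / unvaccinated:.1f}x")  # 9.8x
```

Under these assumptions the vaccinated-but-unprotected group (19.6 percent) is nearly ten times the size of the unvaccinated group (2 percent) – which is the point of the comparison.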

Yet HCW flu vaccination programs typically ignore the former risk, while many such programs force employees who decline vaccination to wear masks or take antiviral prophylaxis during flu season. The discrepancy doesn’t necessarily mean the programs are hypocritical or punitive. Unvaccinated employees are lower-hanging fruit than unsuccessfully vaccinated employees. Identifying the latter would be difficult; making all employees wear masks during flu season or flu outbreaks would be burdensome (and would undermine the case for vaccination), while feeding all employees antiviral drugs at such times would be bad public health policy. Still, a hospital administration focused rationally on patient health would have to think hard about the wisdom of inviting a bitter controversy over forced HCW vaccination and forced masking of the holdouts, while leaving the much larger problem of unsuccessful vaccination unaddressed – and unacknowledged.

I don’t feel qualified to judge the motives of health care employers who impose mandatory flu vaccination rules. I suspect they’re usually sincere rather than hypocritical, even if their programs are sometimes inconsistent and not very well grounded in effectiveness data. But I certainly agree that charges of hypocrisy sometimes arise when mandatory flu vaccination is imposed. One such charge is your accusation that management is trying to shift the burden to employees to avoid pressure to implement across-the-board infection control measures. Reasonable rebuttals to this charge include: “Don’t let the best be the enemy of the good” and “Let’s use flu vaccination as a launch-pad for additional infection control campaigns.” Besides, broad infection control programs also make lots of demands on employees.

There are more pervasive risk communication issues here than hypocrisy, I think. They include resentment, skepticism, and distrust, all of which contribute to health care workers’ outrage.

Mandatory flu vaccination arouses resentment. Mandatory anything arouses resentment – but the resentment is far greater when the compulsion is (or is perceived to be) inconsistent, unproved in its effectiveness, and in some cases hypocritical. Employers are sometimes right to impose requirements on their employees regardless of the resentment they incur. But they should pick their fights wisely. Is mandatory flu vaccination of health care workers a wise fight?

Impact on HCW Vaccination Attitudes

Besides evidence showing whether or not fewer patients actually catch the flu, one key to answering this question is the impact of the requirement on the vaccination attitudes of HCWs – and therefore on their future acceptance of flu vaccines and other vaccines when the choice is theirs … and also on what they say to patients when the topic of vaccination arises. Consider two diametrically opposed possibilities:

  • Getting vaccinated (even under duress) could arouse cognitive dissonance, and thus over time could increase support for vaccination in general and patient vaccination in particular: “I get vaccinated against the flu every year. So I must believe in vaccination. You should get vaccinated too.”
  • Being forced to get vaccinated could arouse resentment, and thus could decrease support for vaccination in general and patient vaccination in particular: “They made me get vaccinated against the flu. But thank God I’m free to refuse most vaccines. You should refuse them too.”

I lean toward the second hypothesis, at least in the short term. Cognitive dissonance occurs when people do things that don’t make sense even to them, and therefore look for information to make sense of the behavior. So coerced behavior doesn’t arouse much cognitive dissonance: I know why I did it – because I had to!

The second hypothesis also makes more sense in terms of the dynamics of outrage. Coerced risks are high-outrage risks; high outrage leads to high hazard perception. In this case, then, being forced to get a vaccination I don’t want increases the odds that I will consider the vaccine (and vaccines in general) dangerous. Certainly people are likelier to interpret any post-vaccination health problems as side-effects of the vaccine if they didn’t want to get vaccinated in the first place.

I am reminded of the 2003 U.S. smallpox vaccination campaign. (See “Public Health Outrage and Smallpox Vaccination: An Afterthought.”) Intelligence agencies pushed smallpox vaccination out of a concern that terrorists might acquire the ability to launch a smallpox epidemic. The public health establishment opposed the program, unconvinced about the risk of a smallpox attack and worried about the risk of the smallpox vaccine itself. The President compromised with a program of voluntary smallpox vaccination for health care workers and emergency responders. Forced to implement (and pretend to support) a program they had vigorously opposed, public health professionals found ways to undermine it, and achieved a much lower level of vaccination than proponents had sought. It’s hard not to see the failure of the smallpox vaccination program as a success (perhaps unconscious; certainly unacknowledged) for its public health opponents.

In much the same way, HCWs forced to get vaccinated against their will can find ways to undermine patient vaccination.

I’m less sure about the long term. As you know well from your own safety work at SEIU, mandatory programs that are hotly controversial when first promulgated often become “ordinary” and widely accepted over time. Coercion can lead to attitude change in the end. One good example is the regulation requiring dentists to wear gloves to protect against bloodborne pathogens. Similarly, most people who started using seatbelts because they had to eventually came to accept seatbelts as normal; still later they came to assume (and sometimes even actively advocate) that seatbelts were worth wearing.

Mandatory behavior becomes habitual; we forget that it started out compulsory, and that we may have resented it enormously at the time. Forgotten resentment is similarly a linchpin of parenting: Force your kids to clean their rooms, endure their resentment, and there’s a good chance they’ll grow into adults who like their rooms clean.

But what happens when you force your kids to do something that’s arguably unreasonable, unfair, and hypocritical? Sometimes they grow into adults determined not to make the same mistake with their kids.

Impact on HCW Morale and Mistrust

The broader impacts of mandatory vaccination on HCW morale also need to be studied. For example, will compliance decline for procedures that are not mandatory or not easily monitored (such as hand-washing)?

Among the broader impacts on morale, I think, will be an increase in mistrust. The technical case for mandatory vaccination of HCWs is surprisingly weak, a lot weaker than hospital policy statements typically imply. In January 2009, Jody Lanard and I wrote a column about “Convincing Health Care Workers to Get a Flu Shot … Without the Hype.” After documenting some of the ways in which flu vaccination is oversold, we argued that the hype leads health care workers (and others) to mistrust what the campaigners are telling them, and that the mistrust probably reduces their willingness to get vaccinated. It’s bad enough from a trust perspective when proponents exaggerate the case for choosing to get vaccinated. The mistrust is even worse when HCWs catch them exaggerating in their rationales for mandatory vaccination.

The strongest proponents of mandatory vaccination often overstate the known benefits. In a 2010 editorial, for example, the Editor-in-Chief of Vaccine, the Mayo Clinic’s Gregory Poland, wrote:

Further, studies have now demonstrated the relationship between levels of HCW influenza immunization and mortality among the patients they care for [3,4].

Poland’s two footnotes in support of this statement lead to articles showing that HCW vaccination protected elderly patients in long-term care facilities. He cites no studies showing a similar protective effect in the general hospital population. The title of Poland’s editorial is worth contemplating in the context of his own overstatement: “Mandating influenza vaccination for health care workers: Putting patients and professional ethics over personal preference.”

Vaccination Proponents’ Outrage

So if mandatory flu vaccination has so many drawbacks (and there are others mentioned in the two earlier Guestbook entries I cited at the start of my response), why is the trend so strong? Why are more and more hospitals imposing the policy, and more and more public health organizations urging them to do so?

I think that’s attributable to outrage and resentment too – the outrage and resentment of flu vaccination proponents at the opposition, plus their frustration at their own inability to make their case more persuasively. Many vaccination proponents, in fact, are outraged at the mere existence of opposition, outraged at the need to promote – let alone defend – vaccination.

Vaccination has had a tough decade – not just flu vaccination; all vaccination. Anti-vaccination activism is up. Public skepticism is up. Trust in officials (including health officials) is down. Easy, automatic compliance is down.

Nearly all public health professionals (and hospital administrators) consider vaccination an obvious good. For many, it follows that prospective vaccinees who don’t think vaccination is an obvious good are obviously irrational, and so reasoning with them is obviously a waste of time. This isn’t a reasoned conclusion. In their calmer moments nearly all vaccination proponents will concede that it’s better (if you can) to win over the doubters than to coerce them. But in their more outraged moments, they don’t want to talk (far less listen). And over many years, their persuasion efforts have mostly failed. No wonder they want to coerce.

Deep in their hearts, many vaccination proponents would dearly love to make all recommended vaccines required for everyone, so they wouldn’t have to spend precious time and emotional energy trying to coax reluctant vaccinees. Their outrage makes them want to coerce everyone. But they can’t get away with coercing everyone, at least not yet (thank goodness). HCWs are one of the few groups they can try to coerce. Add to that the contempt of too many public health leaders and medical administrators for working-class HCWs, and the emotional appeal of making HCWs get their flu shots becomes even clearer.

Note: My wife and colleague Jody Lanard contributed to this response.

Optimism, “vision,” and crisis communication

name:Matthew Leitch
This guestbook entry
is categorized as:

      link to Crisis Communication index

Field:Independent writer, researcher, consultant
Date:September 30, 2010
Location:United Kingdom

Comment:

I recently published an article on my website about optimism and alternatives to it. A number of wise people responded with comments, and I’ve improved the article based on their feedback. It’s here:

http://www.managedluck.co.uk/objectivist/index.shtml

However, one of these responses raised something that seems to be related to risk communication and leadership. The argument given was that the ordinary employee, the follower, is best off being unbiased and open-minded, but leaders are best when they have a vision. Having a vision and an unbreakable expectation of success is a powerful part of leadership, it was argued. Winston Churchill was cited as an example of a leader who demonstrated this.

To me, persistent optimism and a refusal to recognize or admit to uncertainty are a frightening combination and all the more so when seen in a powerful person. I have seen research (sorry but I can’t remember who) which suggested that many leaders feel under pressure to have a vision, but in practice they actually get their results by incrementally working with what they have in front of them to steer people in a good direction.

But others say a vision is necessary for change.

Must a “vision” be used, and if it is, must it be accompanied by a refusal to consider possible ways that failure might occur, or ways that something else, perhaps even more desirable, might occur instead?

Peter responds:

Like everyone else, I have heard the phrase “failure is not an option” (though I don’t think I’ve ever used it myself). I don’t hear this as an optimistic thing to say. To me it doesn’t mean “we can’t fail.” Rather, I think it means that failure would be so devastating that it makes sense to keep struggling no matter how damaging the effort might become. It is thus an expression of determination and perhaps even desperation, not optimism. (Or it could be just part of a pep talk, sort of like “Go Team Go!”)

I am not a Winston Churchill scholar, but I think Churchill’s World War Two speeches are fundamentally about failure not being a viable option for Britain – that is, about determination, not optimism. This is true even when Churchill uses optimistic phrases. Here, for example, is the last paragraph of his famous “fight them on the beaches” speech:

I have, myself, full confidence that if all do their duty, if nothing is neglected, and if the best arrangements are made, as they are being made, we shall prove ourselves once again able to defend our Island home, to ride out the storm of war, and to outlive the menace of tyranny, if necessary for years, if necessary alone. At any rate, that is what we are going to try to do. That is the resolve of His Majesty’s Government – every man of them. That is the will of Parliament and the nation. The British Empire and the French Republic, linked together in their cause and in their need, will defend to the death their native soil, aiding each other like good comrades to the utmost of their strength. Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this Island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.

I would certainly not call that an optimistic speech. It is a determined speech.

I do think leadership entails having a vision and being able to articulate that vision publicly. Leaders need a vision in order to figure out where to lead, and they need to articulate that vision in order to inspire the public to follow.

Having a vision isn’t incompatible with incremental decision-making. Having a vision means knowing where you hope to end up, and it means having some kind of rough “map” of the territory you’ll need to traverse in order to get there. But it doesn’t mean you have plotted your course irrevocably; the course and the map itself should keep changing as you learn more about the territory. Even the destination, the vision, may need to be revised – “re-visioned” – though leaders who change visions too often may lose their followers.

And I certainly don’t think vision requires optimism. The territory you need to traverse may be perilous indeed, as it was for Churchill. Recognizing the perils of the journey is essential to good leadership.

So I share your judgment that overconfident over-optimism is exceedingly dangerous, especially among leaders and especially in crisis situations.

Pretty much everyone agrees that leaders shouldn’t be overconfident or over-optimistic in their crisis management decision-making. What’s controversial is whether leaders need to sound overconfident and over-optimistic in their crisis communications. The argument that they should is grounded, I think, in a contemptuous assessment of the public: a mistaken judgment that people cannot handle bad news about what has happened so far or alarming and uncertain speculations about what is likely to happen next, that the public will lose heart and perhaps even panic if told too many unpleasant truths.

It is Churchill’s determination, and his expectation that his stakeholders will also maintain their determination, that empowers his speeches, not his optimism or his expectation that others will also be optimistic.

Decades of experience and research in crisis communication demonstrate that officials are wrong to mistrust the public’s resilience. In a 2003 column, Jody Lanard and I wrote about “fear of fear” and “panic panic” – the unfortunate and unjustified tendency of officials to fear the public’s fearfulness and feel panicky at the prospect that the public may panic. The public can tolerate its fearfulness in crisis situations, we wrote, especially if officials validate and guide that fearfulness instead of trying to avert it. And the public rarely panics in crisis situations; people in crisis often experience panicky feelings but usually manage to act appropriately – especially if officials resist the temptation to mislead and over-reassure in a misguided effort to “allay panic.” In a 2005 follow-up column on “Tsunami Risk Communication: Warnings and the Myth of Panic,” we reviewed some of the research that underlies these assertions.

When I do seminars on crisis communication, I concentrate on a list of 25 key recommendations. Four of them bear particularly on the issues you are raising:

  • Don’t over-reassure.
  • Err on the alarming side.
  • Acknowledge uncertainty.
  • Be willing to speculate.

Although crisis managers often violate all four of these recommendations, most crisis communication experts give the same advice on the first three. The fourth is controversial even among the experts. In a 2003 column, Jody and I made our case for why “It Is Never Too Soon to Speculate.”

You and I are in agreement about the merits of being unbiased and open-minded, rather than optimistic and over-confident. Since over-optimistic overconfidence is among the most common mistakes of crisis managers in their public communications, I urge my clients to avoid over-reassurance, err on the alarming side, acknowledge uncertainty, and talk more freely about speculative worst-case scenarios.

But there is another aspect of your article that I would quarrel with.

I am not convinced that your article and the accompanying graph accurately capture actual optimism and pessimism. You assume that optimists and pessimists are necessarily overconfident. But I think an optimist may be biased in favor of a good outcome without being overconfident with regard to that outcome; ditto for a pessimist and a bad outcome. A broader, asymmetrical curve could capture the reality of a positive or negative bias that isn’t overconfident.

Neil Weinstein’s research on “optimistic bias” defines optimism as estimating oneself to be less likely than “most people” to have specified negative outcomes: less likely to get fired, get mugged, get cancer, etc. Most people are optimistic by this definition; 85% of the respondents in one study (if I remember right) considered themselves better-than-average drivers. Other research has found that this sort of optimism is correlated with longevity, happiness, and various measures of success. It isn’t necessarily overconfident, only biased.

Also worth examining is the research of Julie Norem and others on defensive pessimism (and the related Singaporean concept “kiasu”) – a stance that systematically combines a kind of pessimism with a high level of determination. At least in the U.S., defensive pessimism is less common than Weinstein’s optimistic bias, but I don’t think it’s less successful. It’s a different approach that works for a different subset of the population.

A lot of what we think of as optimism and pessimism is actually “as if” thinking. That is, a person’s best guess at the probabilities of various outcomes is distinguishable from his or her stance toward those outcomes.

I may choose to think optimistically – to imagine/rehearse successful rather than unsuccessful scenarios and images – without losing track of my probability estimates.

Similarly, I may choose to think pessimistically – to dwell on worst case scenarios and determine what precautions I want to take to be prepared for those scenarios – without forgetting that they are unlikely. Some of the risk managers I work with routinely refer to themselves as “professional pessimists.” But they don’t actually “predict” disaster (though they are often accused of having done so when the disaster doesn’t materialize); they prepare for possible disaster, and when appropriate they urge others to prepare as well.

In the interests of preparedness, I see real merit to focusing on what might go wrong. In the interests of mental health and enthusiasm, I see real merit to imagining the most hoped-for outcomes. The trick is to do both without losing one’s realistic assessment of the actual probabilities – that is, to remain unbiased and open-minded while simultaneously considering, even emphasizing (and sharing) both optimistic and pessimistic scenarios.

Goals and impacts of my website, especially the postings on the BP oil spill

Name:Phoebe Rowell
Field:Public relations executive
Date:September 29, 2010
Location:United Kingdom

Comment:

I’m studying the impact of social media – of all kinds – in crisis communications, using a small-scale case study of the BP oil spill earlier this year.

As someone who blogged about the situation and BP’s response, I thought it would be interesting to find out more about your motivation and the impact you intended or thought you may have as a contributor to online responses.

  1. What are your reasons for writing a blog, particularly about the BP oil spill earlier this year?
  2. What, if anything, did you hope to achieve by writing your blog?
  3. What impact did you intend, or were you surprised by any impact it had?
  4. On a larger scale, in what ways do you think social media impact organisations’ activities in times of crisis?

Peter responds:

I don’t see myself as writing a blog. I think bloggers are supposed to post far more frequently than I do.

What I’ve got is a website devoted to risk communication. It has four main kinds of content on it:

  • I have posted a sampling of articles I wrote on risk communication before I had a website. And when I write something now for publication elsewhere, I post it on my website as well. (I generally decline to publish anything elsewhere that I won’t be permitted to post here.) All this content is listed at http://www.psandman.com/webpubs.htm.
  • I have also posted a sampling of articles others have written about me or my approach to risk communication. I add new articles sparingly, when they say something not already on the website. All this content is listed at http://www.psandman.com/articles/articles.htm.
  • Seven or eight times a year I write something especially for the website, sometimes jointly with my wife and colleague Jody Lanard. I call these articles “columns,” but they’re often V-E-R-Y long. Some are about recent events; others are about enduring principles. They’re listed at http://www.psandman.com/col/columns.htm.
  • Three or four times a month somebody sends a comment or question (like yours) to the website Guestbook. I write an answer – sometimes short, sometimes long. These Guestbook entries are listed at http://www.psandman.com/guestindex.htm.

My writing on the BP oil spill includes one column written expressly for the website, three commentaries that appeared elsewhere first, and four website Guestbook entries. Here’s a complete list (so far):

I haven’t got very good answers to your questions about the goals and impacts of these particular postings. My goal for everything on the website is to help readers understand the principles and strategies of risk communication. I try (not always successfully) to avoid getting seduced into commenting on other aspects of high-profile events.

As for impacts, I always find those difficult to judge. The four freestanding postings have had a total of around 2,000 visitors so far. They led to the four Guestbook comments. (I can’t tell how many visitors to my 2010 Guestbook read those four entries in particular.) They also led to a handful of emails, including a few interview requests and one reprint request.

As far as I can tell, they led to absolutely no reaction from BP. I have no idea if anyone at BP has even seen them in the welter of commentary on the Deepwater Horizon explosion and spill.

With regard to your broader question, social media are obviously crucial in crisis situations. More and more people get their information from social media – not just information about crises they’re watching from the sidelines (like the BP spill) but also information about crises they need to figure out how to respond to personally (like the swine flu pandemic … though it turned out to be a pretty mild crisis).

The rise of social media has some obvious pros and cons. Among the pros: Information gets out quicker than ever before. Among the cons: It’s increasingly difficult to tell which information to trust. People used to have a hard time getting information. Now we have a hard time vetting information. Please note: I am not suggesting that official information is always the most trustworthy. Often it is not. Determining whether official information is trustworthy and accurate is a very difficult task we can’t blame on social media.

Social media have become critically important to any organization trying to guide affected people through a crisis, or trying to influence what bystanders think of the way it is managing the crisis.

None of that has much to do with my website – which I think I can safely say is profoundly unimportant in crisis situations … though it has lots of guidance on how corporate and government officials ought to communicate in such situations.

Let me go beyond the scope of your questions and comment briefly on the goals and impacts of my website overall.

Most of what’s on this website addresses one or more of three kinds of risk communication:

  • Precaution advocacy – arousing greater concern in high-hazard, low-outrage situations
  • Outrage management – calming excessive concern in low-hazard, high-outrage situations
  • Crisis communication – helping people bear justified concern and act wisely in high-hazard, high-outrage situations

(Reality is of course more complicated than this neat trichotomy suggests. For instance, a very important type of risk communication is responding to justified outrage about hazards that occurred in the past, or hazards that are technically low but nevertheless justifiably outrageous.)

The website dates back to 2001. As it has grown bigger and richer in content, I have grown older and closer to retirement. Even in the beginning, my goal wasn’t to promote more consulting and speaking gigs. To the contrary, I was hoping the website would replace a lot of gigs, enabling me to slow down gracefully without having to discipline myself to say no to prospective clients.

It hasn’t worked that way. Instead, people run across the website and then want me to come to their organization to give a seminar or consult on a specific problem. As a way to ease me into retirement, the website has been a total flop.

Even people who have used the website extensively tell me they find it different (and presumably more valuable, though they rarely say so explicitly) to hear me present live than to wade through all that reading material. And of course a lot of people who would like to learn more about risk communication have no intention of wading through all that reading material. In the coming year I plan to post a batch of video clips for website users who would rather watch than read. I also hope to put together a proposed “curriculum” so those who want to read can take a guided tour if they wish instead of browsing on their own.

Though it will remain primarily a repository of written commentary on risk communication, the website will be my principal legacy when I retire. I left academia in 1994 so I don’t have graduate students to carry on my work. I never built a company or took on apprentices. I have so far failed in my efforts to launch a master class or a partnership affiliation with a management consulting or PR firm. So the website is really all I’ve got to leave behind … aside from a few tens of thousands of people who have heard me speak or worked with me for a day or two over the past four decades.

Averaging roughly 500 visits a day, 15,000 a month, the website is hardly a pop favorite. But in the tiny (but growing) field of risk communication, it’s a player. It comes up very near the top in Google searches for “risk communication” and related terms. Wherever I go, I encounter people who tell me on my way in the door that they’ve been using my website for years, and people who tell me on my way out that they’re going to look at my website to learn more.

I don’t think the website has raised up a new generation of risk communication consultants who do things my way. That was my fondest hope, and I doubt I have achieved it. Still, within days after I post something new, especially a column, it often begins to come to the attention of the most directly affected corporate or government officials, the people whose risk communication efforts I am commenting on. Sometimes they call or write. Sometimes I can tell from my webstats that they’re looking … and even passing my commentary along to others within their organization. Sometimes I imagine I can see some impact on how they handle the next phase of ongoing situations. And sometimes, as in the case of my writing on the BP spill, I get no indication that the most directly affected officials have seen it at all.

“Corn sugar” and other euphemisms

name:Nick
This guestbook entry
is categorized as:

    link to Outrage Management index

Date:September 27, 2010
Location:U.S.

Comment:

Have you seen the recent corn sugar ads on TV? It’s an interesting communication strategy by the corn lobby to switch from the negative connotation of “high-fructose corn syrup” to the more benign-sounding “corn sugar.”

How effective do you think this re-branding strategy will be for the industry?

Peter responds:

I have two quite different answers to the question of renaming “high-fructose corn syrup” as “corn sugar.”

In outrage management, euphemisms almost always backfire. People who are already upset about an organization’s activities or products aren’t going to be fooled by a name change; the change will simply add another reason for irritation and mistrust.

Among the hundreds of examples I have encountered over the years:

  • “rapid oxidation” for explosion
  • “biosolids” for dried human excrement – I don’t require “shit”; “sewage” would do
  • “thermal oxidizer” for incinerator
  • “American Chemistry Council” for the Chemical Manufacturers’ Association
  • “cold pasteurization” for food irradiation
  • “tailings impoundment” or “containment facility” for slimes dam (and various other less benign terms)
  • “corporate communications” for public relations (which was in its time a euphemism for publicity)

My advice to clients is always to use their opponents’ terminology. That’s a good idea, I think, even if the terminology is mistaken. The oilsands of Alberta are routinely called “tar sands” by critics. I fully understand that “tar” is another substance entirely. Nonetheless, there are lots of reasons for supporters of oilsands development to address the connotation of the “tar sands” term head-on, starting with the term itself. (Among the reasons: Anyone who Google searches “tar sands” will find almost exclusively opponents’ URLs, because the proponents so studiously avoid the term.) It’s fine to correct the technical error in passing: “The oilsands – which many people call ‘tar sands’ because of their tar-like consistency….”

That’s the first answer, the outrage management answer. If the corn industry and the sweetened food industry want to address people’s concerns about high-fructose corn syrup, they ought to use the term. Then discuss the issue on the merits.

But millions of people have no such concerns. What about them?

The name change will still do more harm than good (from the perspective of the affected industries) with regard to people who are going to hear about the issue sooner or later. Why give opponents a new argument? “The chemical that food manufacturers deviously call ‘corn sugar’ is really high-fructose corn syrup, and here’s why health-conscious people should avoid it….”

But there are two other audiences: people who have already heard about these concerns and discounted them, and people who are unlikely ever to hear about them. For those audiences, “corn sugar” is an obvious improvement over “high-fructose corn syrup.” It’s short and sweet.

For those two other audiences, outrage management is the wrong frame. Public relations and marketing communications are the right frames. And for those audiences, “corn sugar” is a good euphemism.

So the industry has to choose between two competing goals: nurturing a mutually respectful dialogue with people who are or may become concerned about the product, or avoiding any such dialogue with people who aren’t concerned and aren’t likely to become concerned. Replacing “high-fructose corn syrup” with “corn sugar” is choosing the latter goal. That may be a sound choice if the issue is both foolish and ephemeral.

But if opponents’ concerns have some technical merit, or if opponents’ efforts to arouse concern and even outrage in others are likely to prove fruitful, or both, then euphemism is a bad choice for the industry because it is bad outrage management.

Blowing the whistle: when a low-status employee sees a risk

name:Anonymous
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

Field:Student
Date:September 27, 2010
Location:Tanzania

Comment:

I am a student, doing my field work at a particular mining operation as part of my studies. I have recognized some hazards that were not foreseen. What can I do to communicate the risk to the community, knowing that it is not my responsibility and I have no authority?

Peter responds:

You face a very difficult problem here: deciding whether and how to blow the whistle about a set of risks that your superiors on the job either don’t see, or don’t consider significant, or don’t want to acknowledge openly.

In principle, obviously, everyone has a moral responsibility to alert others to risks of which they are not aware. If you see something wrong in your workplace that endangers the surrounding community (or your coworkers), the obvious thing to do is to tell your manager. If he or she doesn’t take the matter seriously, you go over your manager’s head to a more senior company official, or you go around your manager to the site’s HSE manager. You keep pushing your concerns higher and higher in the corporate hierarchy until somebody responds appropriately. If your internal efforts fail – or if you consider the risk too urgent to escalate so slowly – you go public. That probably means informing the appropriate government regulatory agency, or the media, or both. It may also mean informing local, national, or international NGOs that are hostile to the company you’re working for and will sound the alarm for their own reasons.

In other words, you make as much fuss as you need to make to ensure that your concerns can’t be suppressed.

Everywhere in the world, following this advice is potentially costly. In some workplaces, it could get you imprisoned, assaulted, or even murdered. In many workplaces, it could get you fired. In most workplaces, it could get you labeled a troublemaker. There has been a fair amount of research on what happens to whistleblowers. Even if a whistleblower is proved right and becomes a bit of a hero, future employment prospects are likely to be damaged; no company wants to hire someone who made a fuss in his or her last job. Your prospects are worse still if you’re proved wrong, or if the legitimacy of your concerns remains a matter of conjecture.

Little wonder employees often report unsafe conditions anonymously … or not at all.

Of course anonymous complaints have little credibility. But so do complaints from a student whose employment is temporary, low-level, and part of his or her studies.

Your best bet may be to tell your professor. He or she has some responsibility for the predicament you now face, having put you in that job in the first place. He or she may be able to help you decide if the risks you think you see are really there, or if you’re making a mountain out of a molehill. (Ultimately, however, that decision is going to have to be yours.) And if you can convince your professor that your concerns are serious, he or she has the stature to push the issue more effectively than you could alone, both inside the company and externally. Even if your professor is unwilling to champion your cause, he or she may still have valuable advice for you.

Much probably depends on the company you’re working for. If it’s a western multinational corporation, it probably has whistleblower provisions built into its employment procedures. There may be safety hotlines to which you can report dangerous situations, with your identity protected from those in authority over you. You may even work at a site where all employees (even students) have Stop Work Authority (SWA): the right and obligation to cause all work to come to a halt while your safety concerns are investigated.

These and similar protections don’t necessarily work as well as they should. They obviously didn’t work well at BP and Transocean; survivors of the April 2010 Deepwater Horizon explosion now say they saw lots of safety problems, but as far as we know nobody exercised SWA or made the sort of fuss that might have prevented the disaster that followed.

Still, employees of BP and Transocean had more whistleblower protection than employees of a nonwestern mining company would typically have.

Whatever path forward you choose, I urge you to be simultaneously persistent and tentative – and of course respectful. You are a novice and you know it. You’re not trying to be a hero or a troublemaker. You would dearly love to see your concerns disproved so you can go back to your job and your studies. You are reluctantly raising the issue, even though you’re not sure you’re technically correct, because you have been taught in your studies that it is everyone’s obligation not to stay silent about a possible threat to health or safety. And so on….

The precaution advocacy index of this website will steer you to articles on how to warn people about risks. The trick is figuring out how to do so without doing yourself too much career damage.

Did the BP spill in the Gulf of Mexico create a crisis for the oil and gas industry?

name:Dave Johnson
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Editor, ISHN [Industrial Safety & Hygiene News]
Date:September 12, 2010
Location:Pennsylvania, U.S.

Comment:

I am writing a paper to be delivered at an oil and gas industry executive closed-door meeting in Houston in November. The theme of the paper: nine questions senior leaders should ask to assess their organizations’ current exposure to low-frequency, high-consequence events.

I want to start the paper by providing context – the nature of the times that oil and gas execs are operating in. Is the industry in a crisis as a result of the BP oil spill? Or is the crisis over now that the leak is contained? Or is “crisis” too strong a description for the circumstances the industry finds itself in:

  • Public outrage to a degree
  • Declining public trust and confidence
  • Increased media scrutiny
  • Increased NGO attacks
  • Increased federal regulatory oversight and intervention
  • Hostile political pressure for punitive legislation
  • Threats to brand reputations and credibility
  • Prospects of increased drilling costs in the Gulf.

If these are not elements of a crisis, what do they constitute?

I get the sense the oil and gas industry has hunkered down with a siege mentality. I see a lot of anger. But I don’t know if they would say they are confronting a crisis.

What do you say, Peter?

Peter responds:

This is my seventh comment on the BP oil spill. The others (in chronological order) are:

You’re raising a different issue than any I’ve discussed so far: how the spill has changed things for the oil and gas industry generally – and whether it constitutes a “crisis” for the industry. Here are a few off-the-cuff reactions, for whatever they might be worth:

number 1

Mistrust in the oil and gas industry has been high for a long time. But the nature of the mistrust has changed, I think, as a result of the BP spill.

The public used to believe the industry was “evil” – that is, self-interested rather than altruistic and dishonest rather than candid – but at least it was competent. People’s assessment of the industry’s evil probably hasn’t changed much. Tony Hayward’s many gaffes did some harm, but the public wasn’t that surprised by the things he said, only that he was stupid enough to say them. What has changed, I suspect, is people’s assessment of the industry’s competence.

It is patently obvious now that BP didn’t know how to prevent a blowout, didn’t know how to prepare for one, and didn’t know how to respond to one.

number 2

If I’m right about competence, it has two implications.

First, the industry needs to improve its competence with regard to black swans (low-frequency, high-consequence events). That’s the main topic of your November presentation in Houston.

Second, and nearly as important, the industry needs to convince the public that it has improved its competence with regard to black swans. That second implication has two prerequisites (other than actually improving). Before it can credibly claim improved competence, the industry needs to acknowledge two things – and do so with contrition:

  • that the BP spill does in fact show that the industry (the whole industry, not just BP) didn’t know how to prevent a blowout, didn’t know how to prepare for one, and didn’t know how to respond to one; and
  • that the industry was deceptive and self-deceptive about its competence in these regards, claiming far more than the facts justified and then half-believing its own propaganda.

It will be difficult for the industry to find the will to address these two prerequisites … but I doubt it can get credit for improved competence (even if its competence actually improves) without first apologizing credibly for its prior incompetence and its dishonesty about that incompetence.

number 3

There may be a silver lining here.

Over the long haul, I think it is in the oil and gas industry’s interests to be seen publicly as trying to do a good job but screwing up a lot … as opposed to being seen as capable of doing a good job but preferring not to bother.

People have long seen the industry as more evil and more competent than it actually is. Copping to incompetence and struggling visibly and genuinely to become more competent may actually help the industry seem less evil. (See my column on “The Stupidity Defense.”)

number 4

Nonetheless, an “evil” industry – or one that is thought to be evil – has to prove more than competence. It has to prove that it is well-policed – that it must restrain its evil inclinations because it can’t get away with much.

This is the second big implication of the BP spill, I think. We have learned just how incompetent and/or evil the regulators are! The MMS was in bed with the oil industry, both before and after Obama’s election. Obama says he set about cleaning up Bush’s regulatory cesspool – but in the meantime (prior to the spill) he pushed deepwater oil development more aggressively than Bush ever did. The Coast Guard managed the spill pretty competently, I think, but it was partnered with BP, not policing it.

The industry’s biggest communication challenge now, I think, is to convince the public that the fox is no longer in charge of the henhouse – that government oversight of oil development is serious and effective. I think onerous regulation is better for the industry than lax regulation in at least three ways:

  • It will lead to fewer spills and other accidents.
  • It will rid the field of the small players who can’t afford the new regs.
  • It will help reconcile the public to continuing oil development.

The industry needs to overcome its kneejerk ideological opposition to all regulation and realize that it needs credible cops with their boots on its neck. The worst of all possible worlds for the industry – and not an unlikely outcome – is ending up with more burdensome regulations while the public continues to consider it essentially unregulated.

number 5

One other competence question needs to be addressed. In what ways was BP typical, and in what ways (if any) was it different from the rest of the industry?

After the Exxon Valdez accident, some of my oil industry clients told me it was ironic that Exxon was involved, since it had a premier safety culture. After Deepwater Horizon, some of my oil industry clients said they weren’t surprised to see BP involved, since its safety culture had deteriorated significantly.

I don’t know whether these offhand comments represent widespread industry opinion, much less whether they represent the actual situation. But the industry needs to have a coherent, consistent position on this issue: Was BP a bad actor or was this an accident that could have happened to anyone?

The media coverage hasn’t been very helpful on this important point. We have been told in detail all the bad decisions that helped lead to the Deepwater Horizon disaster – but we have learned comparatively little about whether other rigs and other companies had a similar pattern of bad decisions without any disaster. (This is a universal problem; it is similarly hard to figure out from the media whether other egg producers have hygiene and regulatory records as bad as the ones that gave us the current salmonella outbreak.)

But the industry doesn’t need to wait for a journalism renaissance. It can and should tell us – loudly and clearly – the ways in which BP lagged the industry norm and the ways in which it was typical or even exemplary. I doubt this will ever happen, but it would help a lot.

number 6

Does all this add up to a crisis? I don’t know. It’s certainly a major downturn.

But the real crisis is that the U.S. public still prefers to give the oil industry its head rather than consider serious conservation. That’s a crisis for the climate, the economy, and national security … but it’s the oil industry’s ace in the hole.

The oil industry is still hated and mistrusted, as it has been for many decades; and now it’s seen as less competent and less well-regulated than prior to the spill. Maybe that adds up to a reputational crisis; certainly it’s a reputational slump. I think it’s worth a serious effort at reputational repair … but of course I would think that, given what I do for a living. Still, even I have to admit that the oil industry is a lot like the financial industry; it can be widely hated and immensely profitable at the same time.

Talking about risk reduction when risk elimination isn’t possible: the case of dengue

name:Subraminiam Thavaraj
This guestbook entry
is categorized as:

      link to Precaution Advocacy index       link to Pandemic and Other Infectious Diseases index

Field:Health promotion researcher and trainer
Date:September 12, 2010
Email:s_thavaraj@yahoo.com
Location:Malaysia

Comment:

We have been working on dengue prevention and control for the last three years. In Malaysia, dengue is a worrisome problem for us. Public health measures include COMBI (Communication for Behavioural Impact), fogging, antilarval activities, spots and trailers in the electronic media, and distribution of health education materials, among others.

We have trained our staff in risk communication. We have conducted several focus group discussions based on the Health Belief Model among the staff and the community. The community is well aware of the problem. The public health personnel are doing their best on the ground. Case management is stepped up in the hospitals.

From your experience, and the experiences of other countries facing similar problems, are we fighting a losing battle against Aedes aegypti and Aedes albopictus?

Peter responds:

Your experience with dengue in Malaysia is of course much greater than ours in the United States. (Dengue is endemic in Puerto Rico, but local transmission was rare in the continental U.S. until a recent outbreak in Key West, Florida.) We may start catching up with you as our temperatures get hotter. But for the foreseeable future, the U.S. has more to learn from Malaysia about fighting dengue – and communicating about dengue – than it has to teach.

And I have no personal experience whatever with dengue risk communication. My wife and colleague Dr. Jody Lanard has worked on dengue from time to time in Asia and Latin America. I haven’t worked on it at all. (Jody suggests that your “dengue communication” people talk with the HIV/AIDS “harm reduction” people in the Malaysian Ministry of Health. Some of their risk communication strategies may be generalizable to the dengue problem.)

Your comment does raise a question that goes way beyond dengue and is all too familiar to me. You have a long list of steps you’ve taken – technical steps and risk communication steps – and yet dengue remains endemic in Malaysia. Are you fighting a losing battle?

If “winning” is defined as eliminating dengue in Malaysia, then I think the answer is probably yes, you are fighting a losing battle. It doesn’t seem likely to me that you will find a way to rid Malaysia of the mosquitoes that carry the disease, nor that you will manage to rid Malaysia of humans who have the disease and can transmit it to a mosquito that bites them, which then transmits it to one or more of their neighbors.

So your goal can’t be dengue elimination. It has to be dengue reduction and dengue “harm reduction” – dengue control. You have to aim for fewer cases than you had last year – and sometimes, perhaps, for an even more modest goal than that: a smaller increase in the number of cases than would have occurred if not for your efforts.

This raises an important risk communication issue – an issue not just for dengue but for a wide range of risks that can’t be eliminated but can be reduced.

The issue in a nutshell is this: Risk reduction is enormously less attractive than risk elimination. That’s true for your agency, for the legislatures and philanthropies that fund your agency, and for the stakeholders and publics whose behavior your agency is trying to influence.

There are of course sound medical reasons for preferring to eliminate a disease rather than merely controlling it. If the disease is really and truly gone, no longer in existence anywhere in the world (except perhaps in a few labs), presumably it can’t stage a comeback. So health authorities don’t have to keep fighting it; they can reallocate their resources to other problems.

But medical reasons aside, the psychology of risk elimination versus risk reduction is extremely important. Eliminating a disease feels like a huge achievement. Reducing one hardly feels like an achievement at all. Just as important, trying to eliminate a disease feels like a heroic effort, while trying to reduce one feels like an endless, Sisyphean chore.

In a famous experiment, psychologists Daniel Kahneman and Amos Tversky asked people to imagine two equally deadly diseases and two different vaccines. Vaccine A provides perfect protection against one of the two diseases, but doesn’t touch the other. Vaccine B is 50% effective against both. Mathematically, the two vaccines prevent an equal number of deaths. But nearly everyone is willing to pay substantially more for Vaccine A than for Vaccine B.

Cutting two problems in half feels much less exciting than getting one of the two problems off the table altogether.
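
To make the arithmetic explicit – using illustrative numbers of my own, not figures from the study itself – suppose each of the two diseases would otherwise kill 10,000 people:

  • Vaccine A prevents all 10,000 deaths from the first disease and none from the second: 10,000 deaths prevented.
  • Vaccine B prevents half the deaths from each disease: 5,000 + 5,000 = 10,000 deaths prevented.

The expected benefit is identical; only the framing differs – certainty about one disease versus partial protection against both.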

That’s what I’m sensing – perhaps mistakenly – in your comment. Obviously you are doing a lot to fight dengue. Perhaps you could do more, or find more effective ways to do what you’re doing. But mostly, I sense, you may be feeling a bit frustrated about the endless Sisyphean chore of dengue control and dengue risk communication.

It’s not just you and your colleagues. Your funders may be wondering when if ever you’ll finally stop asking them for more and more money to fight dengue.

And the community may be fed up hearing the same dengue control messages year after year: get rid of standing water, wear long pants and long sleeves, etc. Why bother when nothing you recommend seems to get the job done? Mightn’t it be better to try a massive pesticide fogging campaign that (people wrongly imagine) might actually finish off the dengue-carrying mosquitoes once and for all?

So what can risk communicators do when risk reduction is all that’s feasible? At the end of this response, I will list a few strategies for making risk reduction less unpalatable – strategies grounded in acknowledgment and empathy.

But first I want to underscore that it is very hard for officials to come right out and admit that they are probably never going to beat dengue – or any unbeatable problem. Instead, officials all too often overpromise what the government can do about the problem, while simultaneously expressing contempt for the public’s concern about that problem.

If you are trying to convince your officials to do better dengue risk communication, it might help to show them a bad example. Then you can explain why it is bad and what you would recommend trying instead.

Here is an extreme example of How Not to Do Dengue Risk Communication: Lessons from India.

With the Commonwealth Games fast approaching in Delhi, Indian officials are doing what they so often do:

  • over-reassuring the public;
  • disrespectfully trashing public concerns;
  • minimizing the hazard; and
  • overconfidently overpromising results.

Consider for example this September 2, 2010 article from India’s Central Chronicle, headlined “Dengue will be contained soon.”

The government today once again denied that dengue was taking epidemic proportions in the city, stressing that the disease was only an outbreak that “will be contained soon.” …

“All civic agencies are well into the job and soon we will be able to announce greater control over the situation,” an official said, adding, “Things were not as serious as was being made out.” …

Dismissing fears of the dreaded vector-borne disease tightening its grip over the capital, the official said when sincere efforts are underway there was no reason to fear or spread panic.

The government is very much into the job and dengue will be under control “very soon,” he added.

With 77 new cases surfacing yesterday, the official figure for the number of patients suffering from dengue rose to 1,014 – a figure which is more than nine times regular levels. …

The reporter paraphrased the official as saying that “special drives have been launched across the city to see that no fresh cases came to the fore.” Unnamed doctors told the reporter that “Fatality rate can be prevented with early detection and proper management of cases.” “No fresh cases” is of course an unachievable goal. Preventing fatalities is more feasible – under ideal conditions. Would the average poor citizen of New Delhi feel reassured to learn that with prompt and superb medical care her child would probably survive a dangerous bout of Dengue Hemorrhagic Fever?

Nothing in this article suggests that there is anything ordinary citizens can do to reduce the risk of dengue to themselves, their loved ones, or their communities. The article focuses entirely on what the government claims it has accomplished or expects to accomplish.

A similar article, this one from the Hindustan Times, is entitled “Free from dengue in 14 days.” It inspired this right-on sarcastic comment from an online reader:

These battle-hardened politicians of India think that they can bribe the mosquitoes to go away. They must be looking for the Mosquito Union Leader to bribe? How else do they feel so certain of containing a biological hazard given the rains, the puddles of water and the overcrowding? Pass an executive order or send in the Muscle?

So what would be a better approach, more in line with the Health Belief Model and other precaution advocacy models that promote behavior change?

number 1
Tell everyone that completely eliminating dengue (or whatever risk you’re working on) is almost certainly beyond your capability. For the foreseeable future, your goal is risk reduction, not risk elimination. Say this not just once, but frequently. It’s a huge mistake to let anyone imagine that you’re going to rid Malaysia of dengue, only to feel disappointed, discouraged, and dispirited (and perhaps even deceived) when you fail.
number 2
Acknowledge that risk reduction is a lot less exciting than risk elimination. (That, of course, is why it’s so tempting to ignore #1 and let your funders and your audiences expect too much.) Express your own wish that you could get rid of dengue once and for all, and acknowledge that you know we all wish you could too – but, sadly, you can’t.
number 3
Especially when talking to the public, acknowledge that it’s harder to adhere to dengue management instructions when doing so is only going to lessen the problem, not solve it. Let people know that you know it’s a pain in the neck to keep putting on mosquito repellant, removing standing water, and doing everything else you’re asking them to do, all for the sake of a mere halfway solution – harm reduction, a stalemate rather than an all-out victory.
number 4
Since all you have to offer is a halfway solution, give people “permission” to participate halfway in that solution. It shouldn’t be all or nothing for them either. “Of course there will be times when you go out unprotected, even though you know you shouldn’t. Wearing long sleeves most of the time is better than wearing long sleeves only occasionally. Checking for standing water once a week is better than checking once a month. The more often you take action, the better for the whole community. But you don’t have to be a fanatic to be part of the program.”
number 5
Document what the dengue management program is accomplishing – and do it in a way that dramatizes the real value of partial progress. “Every case of dengue we prevent means…. Reducing the total number of cases by just ten percent would save this much money and this many lives and this many days of illness.” But don’t focus too quickly on #5. People are in no mood to learn the benefits of risk reduction until you have validated their disappointment that you can’t just eliminate the risk altogether.
number 6
Set achievable targets that you can meet and then celebrate – and invite your funders and communities to celebrate with you. Since eliminating dengue is probably impossible, and “controlling” dengue feels unsatisfyingly endless, how about fighting to reduce dengue by X percent over the next five years? Have some shorter-term targets too; five years is too long to wait for a success to celebrate.
number 7
Perhaps most important, try to take these messages to heart yourself. Keep reminding yourself that fighting dengue is a marathon, not a sprint. Like any smart marathon runner, pace yourself.

Do we need safety activism and safety outrage (like environmental activism and environmental outrage)?

Dave Johnson, editor of Industrial Safety & Hygiene News, has a group of 100-odd advisors whom he emails from time to time for guidance on a question he intends to write about in his publication. On August 17, 2010, he emailed us all that he was composing his annual reader survey, and wanted a “fresh infusion of ideas and areas of inquiry.” One theme in the flurry of emails that followed was the role of the U.S. Occupational Safety and Health Administration (OSHA). Several people’s comments about OSHA, not reproduced here, provoked Frank White to write the comment that follows, comparing OSHA’s enforcement “twig” with the much more powerful enforcement “stick” of the U.S. Environmental Protection Agency. That provoked me to write about the possible impact of activism and outrage on the relative clout of the two agencies and the two professions, which in turn provoked Tom Lawrence to disagree with my contention that more safety activism and more safety enforcement might be a good thing.

Here, with permission from Frank and Tom (and Dave, who started it all), are the relevant emails.

name:Frank White
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

Field:Safety consultant
Date:August 23, 2010
Location:Washington D.C., U.S.

frank white comments:

Damn it, Dave – I was passively enjoying the dialogue and doing my best to stay disengaged, but I’ve been sucked in, I regret to say, by what I consider to be a misguided notion expressed by a few of your correspondents in one way or another that somehow OSHA is a) a political annoyance that distracts us from our real work, b) an irrelevant anachronism, or, worst of all, c) the enemy of the profession that we must distance ourselves from in order to maintain our “true” identities. That’s just silly – again, in my not-humble-enough opinion.

Don’t get me wrong – I heartily subscribe to the line of reasoning advocated by [several dialogue participants] that our primary focus must be on engaging leadership at all levels of our organizations in the collective effort to assess and reduce, control and mitigate risk. I also want to second (or third) [one participant’s] observations about the need to do a better job of evaluating the root causes of near misses and similar pre-injury indicators.

But when did “compliance” become a discredited notion? Isn’t compliance with all laws and regulations embedded in the basic policy statements that establish our risk management systems, and doesn’t it complement the risk reduction focus?

PLEASE don’t reply that “compliance isn’t safety” or some such platitude – we know that to be the case, but it is an integral element of a safety and health management effort nonetheless. And our political earth-mother OSHA, for all her sometimes bizarre and counterproductive behavior disorders, is at the end of the day our ally, without which, candidly, most of us – individually and collectively – would have been much less effective in achieving the steady progress in improving worker health and safety than we actually have been.

It is surely true that the agency is, and has been for the past decade or so, a laggard in many of the areas – risk management, in particular – that we now view as essential to future success. But shouldn’t we all, whether from industry, labor or the profession in general, share some of the responsibility for that situation? There have, indeed, been institutional and political barriers to more progress by OSHA, but all of us should be engaging the agency, not distancing ourselves from it, and doing what we can to help it “catch up” and be part of the solution.

Finally, the notion that somehow, our environmental colleagues have succeeded in aligning themselves with some sort of “alternate universe” to EPA through focusing on Sustainability is ludicrous.

Organizations in general are much more attuned to compliance with EPA requirements than with OSHA mandates because EPA’s stick is so enormous relative to OSHA’s enforcement twig. Sustainability may have reached a tipping point where it stands on its own merit – not as a reaction to EPA’s influence, or irrespective of it, but because sustainability is increasingly grounded in principles that are becoming more successfully integrated into business strategies and processes.

But make no mistake, environmental professionals ignore compliance at their, and their organizations’, peril. The challenge for safety and health professionals is to find our own “sustainable” links that align us more closely to our organizations without needing to reject our legitimate compliance roles.

Peter responds:

I fervently agree with Frank. U.S. companies spend a lot more money per life saved on environmental protection than on safety in large part because EPA has so much more clout than OSHA. I love his comparison of EPA’s “stick” and OSHA’s enforcement “twig.”

Another factor worth considering: U.S. environmental activist groups have a lot more clout than U.S. safety activist groups … all the more so as unions have lost ground and become more narrowly focused on job protection. Corporate environmental officials exist as much to cope with the demands of Greenpeace et al. as with EPA’s demands.

I wonder if it would be useful for Dave to ask respondents what role they see for safety activism. Is there any safety activism where they work? Would it help if there were more? Are there ways they – or the profession at large – could encourage more?

A related factor is what I call “outrage.” Corporate environmental misbehavior is widely seen as sinful, as evil; the victims (whether neighbors, creatures, or ecosystems) are seen as innocent and worth protecting. By contrast, on-the-job accidents and even worker exposure to toxicants tend to be seen as inevitable side-effects of the work itself. Affected employees may be viewed as complicit in what went wrong, as having assumed the risk voluntarily, and/or as having been paid for assuming it.

Dave might want to ask respondents what they think about the wisdom, efficacy, and feasibility of trying to arouse more safety outrage among employees, their families, and the general public. Arousing outrage is of course what activists do – which is why activists may be a more potent force for risk reduction than either regulators or risk management professionals.

name:Tom Lawrence
Field:Safety consultant
Location:Missouri, U.S.

Tom Lawrence comments:

On September 1, 2010, I begin my 41st year as a practicing safety professional. Just still learning.

I don't know anything about this radicalism you mention. How do you do radicalism in a professional role and who do you do it to?

Thanks.

Peter responds:

I don’t think safety activism needs to be “radical” … not unless you think it’s radical to believe people should be outraged that workplace safety norms are as lax as they are. For those who think safety standards ought to be tougher, it’s not so radical to think a little outrage might help them get tougher.

But I agree that it’s a bit of a conundrum to figure out how safety professionals inside organizations could encourage such outrage without creating another sort of outrage altogether: their managements’ outrage at them for stirring up dissatisfaction and mobilizing pressure. Safety professionals working for advocacy organizations – unions, for example – might have more freedom of action in this regard.

The insufficiency of safety outrage/dissatisfaction/pressure does seem to be part of the problem. That’s why I thought it might be a productive topic for Dave to survey people about.

But mine is an outsider’s perspective, and perhaps an ignorant one. I am not a safety professional. I do spend a lot of time working with clients who are trying to respond to outrage, dissatisfaction, and pressure, the substance of which is far more often environmental risk than employee safety risk.

Tom responds:

Let me see if I understand this. We need radicals to drive OSHA to impose more standards on the workplaces. Some say workplace safety and health standards are not tough enough. As a safety professional who has worked there and supports those who do now, I take strong issue with that.

We have forty years’ worth of OSHA standards that we deal with – thousands and thousands of requirements to manage. The radicals’ view is that that is old stuff. OSHA should be adding more standards and adding more requirements to those already required. Just pile it on.

Of course the radicals have zero responsibility and accountability for how those imposed requirements are met. But who does? Safety and health professionals have that responsibility and share the accountability with the management teams we work with and for.

So as soon as the ink is dry on the Federal Register for more imposed requirements, the radicals can retire to their Beltway latte parlors and celebrate their political agenda victory.

Meanwhile, back in the workplaces, the S&H professionals are trying to find time in their already hectic schedules to get this new compliance requirement management piece in place. Oh, we wanted to work on transformation issues for excellence in our workplace – but there is no time. We have to spend all of our time on compliance – on basic safety and health, not excellence: transactional safety and health, not transformational. And thus our profession is driven by OSHA and its politically driven agenda.

Dr. Sandman, do I have this process described correctly?

Would it be out of the question for at least some of us to feel that, after 40 years of dealing with a politically obsessed and driven OSHA, we have had enough of it and want to separate our profession and its image from the Beltway OSHA process – except to try to protect ourselves and our workplaces from the radicals?

Peter responds:

I don’t have an opinion – at least not a professional opinion – on whether OSHA is currently too strong or too weak or just right. I’m not an expert on safety or safety regulation. I get your point that many OSHA regulations impose onerous compliance obligations on management without having much actual payoff in terms of workplace safety. I don’t have the knowledge to agree or disagree. There are plenty of people on Dave’s list, you among them, who are far more expert on that point than I will ever be.

My point is simply that what I call outrage – stakeholder unhappiness / concern / anger / fear about the status quo – is a powerful driver of change. Or to put it differently: Professional problem-solvers gain clout when powerful institutions become convinced that the problem needs solving. And one important way powerful institutions become convinced that a problem needs solving is when stakeholders whose opinion matters are deeply dissatisfied with the status quo.

Of course there are other drivers of change as well. I’m not claiming that arousing more safety outrage is the only way to win more clout for safety professionals. But I do think the greater clout of environmental professionals is largely attributable to the greater outrage attaching to the issues they manage. Safety professionals who judge their own clout to be insufficient might find this a hypothesis worth considering.

As a risk communication professional (not a safety professional), I am often asked to help a client try to influence the level of stakeholder outrage about a specific issue so it is more commensurate with what my client considers the actual hazard – that is, to try to increase stakeholders’ outrage about serious problems that are receiving insufficient attention, and to try to decrease stakeholders’ outrage about small problems that are receiving excessive attention (in my client’s opinion, at least).

I don’t know which safety problems you would put in which category, nor do I know whether you’d say most safety problems are in one category or the other. I do know that managing stakeholders’ outrage upward or downward is a pretty effective way to motivate more or less management attention to a problem.

The process of arousing outrage about X (or diminishing outrage about Y) doesn’t have to be driven by “radicals,” or even by activists who aren’t radical. And it doesn’t have to be implemented via regulators. I am intrigued by the possibility that – perhaps – it could be driven by safety professionals and implemented by companies in ways that were more transformative than merely compliant.

Tom responds:

Why do safety professionals need clout – provided by OSHA?

Line management and employees together own safety and health. My role is as a staff professional. If I as a safety professional am considered to “own” safety, that will ultimately be bad news. It is conceptually incorrect but it is also practically not achievable. There is not enough of me to go around. And my owning safety would separate it from the work processes – making it an off-line staff issue rather than a line issue. All of that would lead to poorer performance.

If clout means that I open my mouth and everybody does what I say, then they don’t own safety, I do – and we have already premised that that is bad. I don’t covet that kind of clout.

I want to earn respect and will work to do that – not just because I have a safety professional title. It comes from being a servant leader and demonstrating that my job is to help them do what needs to be done in their view. If changes to their view are needed from what I am seeing, we will have that discussion at some point.

The litany from some is that “management is the problem” in safety. Some of them are. I just have to work harder and more creatively with those. But I guarantee you that all of them are the solution – sustainable, lasting safety progress does not happen without them.

In my 40 years of experience, I can count on one hand the number of times I have invoked OSHA as my means of influence. To be sure, I help my clients understand the minimums – the requirements and the interpretations in the applicable OSHA regulations. That is knowledge that they want from me. And I definitely want them to know in that process that I am not OSHA or any extension of OSHA. I am working to help them accomplish their safety objectives, not because of some OSHA agenda that I represent or even my personal agenda. I am on their team and I let them know it.

That is not some unique-to-me skill. I talk about my approach because it is doable by any safety and health professional. We don’t have to put on the OSHA cloak as professionals to have influence for safety and health and for improvements in protection of people in the workplace.

President Obama’s handling of the Deepwater Horizon oil spill

name:Anonymous
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Outrage Management index

Field:Freelance writer
Date:August 8, 2010
Location:Germany

Comment:

I am currently writing an article on Obama’s crisis communication during the oil spill in the Gulf of Mexico. It will be a guest article published in an upcoming issue of a German monthly magazine on political communication.

The article will deal with Obama's strategy throughout the spill and analyze his successes and setbacks. Given your expertise in the field of crisis communication and your in-depth knowledge of both communication theory and the circumstances in the U.S., I would be happy if you could respond to the following questions:

  • According to your approach, the two factors “hazard” and “outrage” determine the dimension of a crisis situation and should define the adequate reaction.
    • Where would you see the Deepwater Horizon case on your risk communication “map”?
    • What characterizes this kind of a crisis?
    • What are the critical reactions everyone involved in the crisis should take? (For example, is calming the public the first priority, or communicating every fact of the crisis, or what?)
  • President Obama was often criticized by the media for reacting dully, without emotions, for not being personally involved.
    • How important is emotional/personal involvement in such a crisis situation?
    • What impact does the impression of being distanced have on Obama’s perception as a leader who can help the nation solve the crisis?
    • Would it have been better if Obama had been seen more often on site, washing birds or talking to locals?
  • With the first face-to-face meeting on June 16, President Obama met very late with BP executives at the White House.
    • Did Obama need the time to “get control over the situation”?
    • From a risk communication perspective, was it effective to attack BP as extensively as President Obama did?

Peter responds:

These are the comments I have posted on the BP oil spill:

But I haven’t posted anything since early June, and I haven’t posted anything at all about President Obama’s handling of the crisis. Let me take your three questions in order. I also want to add a comment on what seems to be happening now with regard to “good news” from the Gulf of Mexico.

What kind of risk communication does Deepwater Horizon require?

Deepwater Horizon is high-hazard, high-outrage – people are upset and right to be upset. That makes it a classic crisis, as opposed to the far more frequent low-hazard, high-outrage reputational “crisis.”

The difference is crucial. When people are more upset than the actual hazard justifies, the core risk communication task is to help them get the risk into perspective and calm down. But when the hazard is serious and people are right to be as upset as they are, as in the case of Deepwater Horizon, the core task is to validate their concerns, help them bear the situation and the way it makes them feel, and help them take (or support) wise rather than unwise actions to cope with what’s happening.

Corporate and government officials often think they must “calm the public” in a crisis. This is mistaken. Crisis management professionals know that panic is comparatively rare (at least when people are sober; a high percentage of actual cases of panic occur in nightclubs and sports stadiums among people who are not sober). Paradoxically, an overwhelming impulse to calm the public often leads officials to do things that exacerbate the public’s concern: withholding alarming information, giving false reassurance, expressing contempt for people’s fears, infringing on people’s freedom, etc. When people are rightly upset – a defining characteristic of a real crisis – the task is to help them bear what they’re feeling, not to try to talk them out of feeling it.

In any crisis, expressing compassion for the victims is also paramount; it’s a precondition for credibility in your efforts to validate people’s feelings and guide their actions. If your organization caused the crisis, expressing contrition is also paramount, and also a precondition. Even expressions of compassion aren’t credible from a “perpetrator” whose contrition isn’t clear. And if your organization is responsible for addressing the crisis (whether or not you caused it), expressing determination to stay the course is a third essential crisis communication requirement.

In the case of Deepwater Horizon, BP’s failure to express contrition made it impossible for the company’s expressions of compassion and determination to ring true. This is also the case for President Obama, though to a lesser extent. He hasn’t taken sufficient blame for the government’s failure to regulate the oil industry’s emergency preparedness adequately, and that has undermined his expressions of compassion and determination as well as his expressions of anger at BP.

All of this, of course, is in addition to the rudimentary obligation to tell people what you know about what has happened so far, what is happening now, and what is likely to happen next. There are four usual pitfalls here:

  • Over-reassurance – sources should always err on the alarming side, so later messaging takes the form of “it’s not as bad as we feared,” not “it’s worse than we thought.”
  • Overconfidence – sources should always make sure their tentative messages are expressed tentatively, so the inevitable errors don’t come out sounding like lies.
  • Refusal to communicate under conditions of uncertainty – sources should always do their best to fill the crisis information vacuum, so refusing to “speculate” until all the facts are known is almost as bad an error as speculating overconfidently.
  • Outright dishonesty or secrecy – the discovery that you said things you knew weren’t true or refused to say things you knew were true is, of course, devastating to the credibility of a crisis information source … as it should be.

Obama’s leadership and emotional expressiveness

The conventional wisdom is that an American president must be not just Chief Executive but also Emoter-in-Chief – that he (and eventually she) must voice the public’s feelings. In the case of Deepwater Horizon, that means voicing both our sadness and devastation at what has happened to the Gulf, its residents, and its creatures and our anger at those who made it happen or let it happen.

Public emotional expressiveness obviously isn’t President Obama’s strong suit. He projects cool wonkiness, not emotionality.

The question is whether he should try to overcome this weakness or should play to his strengths. If I had been advising the President in the early days of the Deepwater Horizon crisis, I would have pushed him to be more visible, more personally involved. But I don’t think I would have pushed him to be more emotional.

He should have addressed the nation on the crisis earlier and more often. He should have met with BP leaders earlier and more often.

Above all, I think, he should have made greater use of his extraordinary ability to teach. When he’s at his best, President Obama can explain complicated and emotionally unsettling truths in ways that come across as genuine, insightful, and paradoxically calming – calming not because he is trying to calm us, but because he is calm himself as he explains something upsetting. He did it on race during the campaign. He needed to do it on the Deepwater Horizon spill. (He also needed to do it on the financial crisis, but that’s another matter.)

There are many things the President could have taught us about this crisis. Three that strike me as especially crucial are these:

  • That Deepwater Horizon is simultaneously worse than we pretend and not as bad as we imagine – that it will be impossible to “put things right” completely (environmentally or economically) but that ecosystems and economies do recover and the sites of previously grievous oil spills give grounds for hope.
  • That BP and its contractors are very much to blame but there is ample blame to go around – that the rest of the industry’s emergency response plans were just as inadequate as BP’s; that government (Obama’s government as well as Bush’s government) approved those patently inadequate plans; that we all made a foolish choice to develop oil where it’s most dangerous but least visible, far offshore, instead of opting for land and shallow-water development.
  • That there are lessons that must be learned – lessons about alternative energy and conservation; lessons about domestic energy development; above all, lessons about taking worst case scenarios and emergency preparedness more seriously. Deepwater Horizon is the third inadequately anticipated risk management crisis of the Obama presidency. The other two were the financial crisis and the influenza pandemic (we got lucky on that one; it turned out very mild). We need to stop shrugging off dire possibilities and start taking more seriously the twin jobs of preventing them and preparing to respond to them.

No President can be successful if he comes across as unfeeling … or, worse yet, as insensitive to what the public is feeling. But Obama is never going to be a good Emoter-in-Chief. When he tries – as when he tried to excoriate BP – he sounds forced and insincere. He should play to his strengths.

Scapegoating BP

I’ve already answered this one. President Obama should have met with BP earlier and attacked it less vigorously.

Deepwater Horizon isn’t about a rogue company. Yes, BP has a safety culture problem – a problem that got worse during John Browne’s tenure as CEO and that Tony Hayward promised to correct, tried to correct, and failed to correct in time. There are other oil companies whose safety cultures are better than BP’s – other oil companies that might have cut fewer safety corners in the years before the Deepwater Horizon explosion and might have taken the warning signs more to heart in the days and weeks before the explosion. But there are also lots of oil companies less attentive to safety than BP, and lots of industries less attentive to safety than the oil industry.

The biggest failure revealed by Deepwater Horizon was the failure to accept that serious accidents can happen, that preparing to manage serious accidents is essential, and that if a serious deepwater oil spill cannot be prepared for adequately then the whole society needs to know that and decide together whether we want to take the risk or meet our energy needs a different way. That failure was shared by the whole oil industry and the government agencies charged with regulating the oil industry. Scapegoating BP doesn’t help us remedy that failure.

Good news in high-outrage situations

In late July and early August, the news from the Gulf got better – though maybe only temporarily; it’s too soon to tell. The well was capped; the “static kill” was successfully implemented; the amount of oil (at least on the surface) started declining; some previously closed waters were reopened for fishing; some experts backed off their previous predictions of catastrophe.

President Obama and others in his administration hailed what the White House called on August 4 “the beginning of the end.”

Responses to this new tone of optimism – or at least reduced pessimism – were decidedly mixed. Some in the Gulf region saw it as evidence that the Obama administration was preparing to abandon them, just as they feel the Bush administration did after Katrina. Administration estimates of how much oil had been recovered, how much had naturally degraded or been eaten by microorganisms, and how much remained to do further damage were hotly disparaged as over-reassuring political spin.

This is perfectly typical. Outraged people do not like good news. When we’re angry and upset, our desire to be found right in our grievances becomes deeper than our desire to discover that things are better than we feared. Just as neighbors of a polluting factory normally dispute any evidence that the emissions probably aren’t going to give them cancer after all, Gulf residents and their sympathizers around the world are attached to what they endlessly call “the worst environmental disaster in U.S. history” and are extremely skeptical about anything resembling good news.

Of course this skepticism often proves prescient. The tendency of outraged publics to be highly skeptical about good news is matched by the tendency of crisis managers to be overconfident and over-reassuring – to over-emphasize good news even when the bad news is a lot more plentiful and reliable.

Risk communication professionals know something about how to cope with this phenomenon:

  • Make the good news very tentative. In fact, put it in a subordinate clause: “Even though it looks like more oil than we expected may be degrading naturally, that evidence is very uncertain so far. We still don’t know how bad the spill’s impact may turn out in the end.”
  • Express the good news as a hope, and explicitly deny that it’s a conviction. “There are grounds for hope that this awful spill may have less lasting impacts than we originally feared. But the experts are far from unanimous and far from certain. There may still be some very bad surprises in store for us.”
  • Acknowledge the concern that the good news may be an excuse for reneging on your promises. “Some people naturally worry that our administration might pivot on this preliminary good news and back off our obligations to the people of the Gulf. That’s not going to happen.”
  • Emphasize the known downsides. “Even the best-case scenario is no picnic. The economy, environment, and reputation of the Gulf region have all taken a big hit. We don’t know yet whether things are as bad as we initially feared, but they are plenty bad enough!”
  • Go out of your way to disavow the “no-harm no-foul” position. “Regardless of the ultimate impacts of this spill, it will stand forever as a stunning example of irresponsible corporate risk-taking and inadequate government regulation. People’s righteous anger remains as justified as ever – anger at BP and the other companies involved, and anger at the Bush administration and my own administration.”
  • Above all, subject the good news to as rigorous a quality control process as the bad news – maybe even more rigorous. Coming back later to say the situation is “worse than we thought” is a lot more damaging to credibility and morale than coming back to say it’s “better than we feared.” So err on the alarming side. Don’t lean very heavily on preliminary good news you might have to take back later.

Over the long haul, one important goal of risk communication in situations like the Deepwater Horizon spill is to validate and address people’s outrage. Only when the outrage declines will the public become more open to evidence that perhaps the hazard isn’t so horrific after all.

Why aren’t people more worried about cell phone health risks?

name:Sharon Begley
This guestbook entry
is categorized as:

      link to Precaution Advocacy index       link to Outrage Management index

Field:Science Editor, Newsweek
Date:August 8, 2010
Location:New York, U.S.

Comment:

I am working on a story about little signs here and there that concerns about the health risks of cell phones might be gaining traction (SF’s labeling law, Kucinich planning to introduce a bill mandating federal labeling), and am wondering something that you might have some thoughts on: Why does the public not seem to care/believe that there might be a health risk?

Peter responds:

Note to readers: Sharon’s August 5 article on cell phone risk quotes briefly from the response below, but focuses mostly on the endless uncertainties in the technical debate.

The first thing that comes to mind in response to your question is the role of uncertainty.

The evidence about the health risk of cell phones is exceedingly uncertain. After literally hundreds of studies, there is nothing approaching a smoking gun. Most studies have found no evidence of an impact. But a few studies have found weak indications of possible problems. Most experts look at this pattern and conclude that the risk is either small or nonexistent; if it were big it wouldn’t keep disappearing and reappearing from one study to the next. (Leave aside the important question of latency. If cell phones caused brain cancer after 30 years’ use, we wouldn’t have a clue yet.) A few experts look at the same pattern and find it worrisome.

People don’t need to read up on the studies to get the accurate impression that the experts simply aren’t sure. The question is how to react when the experts aren’t sure.

In determining whether people will take a risk seriously or shrug it off, uncertainty is a swing vote. When people are upset about the risk for other reasons, uncertainty adds to their outrage: “How dare you make me the unwitting subject of your experiment!” But when people are inclined not to worry, uncertainty becomes a reason for calm: “Even the experts don’t know whether it’s dangerous or not, so why sweat it!”

We rely on our cell phones – to get work done, to stay in touch with friends and family, to fill time when we’re bored or lonely. We really, really don’t want to learn that there are health reasons to restrict our use of this miraculous invention! Uncertainty gives us a reason to stay unconcerned.

Cell towers are a different story. A tower very close to your home isn’t essential to your wellbeing; your phone would work fine if the tower were further away. So people would rather worry about the nearby tower than the instrument clamped to their ear. From the outset, cell tower health and safety controversies have been a lot hotter than cell phone health and safety controversies. (Irony: The further the tower is, the harder your phone works to bring in a signal; cell phones may actually be less risky when the nearest tower is close.)

Another way of framing the same explanation is in terms of the risk/benefit ratio. I don’t know the actual correlation between the risks and benefits of technologies. It’s probably positive, and probably pretty small. But people want the correlation to be big and negative. That is, people want to believe that technologies with high benefits are low-risk, and that technologies with high risks are low-benefit. We want to avoid the dilemma of a high-benefit high-risk situation: the uncomfortable choice whether to endure the risk or forgo the benefit.

So if we’re already convinced that something is dangerous, we resist learning that it’s valuable. If we’re already convinced that it’s valuable, we resist learning that it’s dangerous. On controversies from deepwater oil exploration to genetic engineering, people polarize, with almost nobody taking the “high-benefit high-risk” position.

Cell phones are obviously high-benefit. So people want them to be low-risk. As long as the evidence of risk is uncertain, most people will shrug off that evidence and hang onto their cell phones. (Earlier in the cell phone adoption curve, when lots of people loved them but lots of other people hated them and chose not to use them, the latter group tended to be more inclined than the former to take cell phone health concerns seriously.)

Cell towers are like living near a polluting factory: You face the risk (big risk or small risk or unknown risk) and somebody else benefits. Cell phones, on the other hand, are like microwave ovens, coffee, computers, Wifi, and smoking. You face the risk (again, big or small or unknown) but you also get the benefits. So uncertainty about cell tower health effects leads to concern and even opposition, while uncertainty about cell phone health effects leads to apathy.

What would change that? Obviously, strong, unequivocal evidence of a serious risk might do the trick. But it would need to be really strong and really unequivocal; look how long smokers held onto their wishful conviction that cigarettes might not cause cancer after all. A viable alternative technology would lower the resistance a lot. Imagine a new generation of cell phones that used a radically different technology. Should I buy one of those new phones or stick with my old one? Now, suddenly, evidence that the old one might be dangerous isn’t arousing risk-versus-benefit cognitive dissonance anymore; it’s resolving spend-versus-save cognitive dissonance.

WHO: Hyping the pandemic or helping the world prepare?

name:Andy Schwarz
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Pandemic and Other Infectious Diseases index

Field:Environmental policy consultant
Date:June 11, 2010
Email:ams@indecon.com
Location:Massachusetts, U.S.

Comment:

I have been developing a training for officials on communication around climate change adaptation, and have found your writings on riskcomm and climate change helpful.

On another topic, I did have one question about the “hyping” of the H1N1 virus. Could the agencies involved be more in line for some credit for raising awareness and limiting the impact of the pandemic, rather than only being seen as crying wolf?

Peter responds:

I certainly agree with you that it was a major service, not a disservice, to warn the public about the H1N1 pandemic.

If swine flu had turned out more serious than it did – which it might well have done – the warnings would have been very important. And if swine flu had turned out extremely serious, there would have been justified (and unjustified) criticism for the failure to warn more aggressively.

Did the warnings do actual good or actual harm, given the mild pandemic we have faced so far? I think the answer is a little of both, and not too much of either – except for the credibility crisis now facing the World Health Organization.

Most developed countries ended up buying more vaccine than they needed, for example, and didn’t have it available till the pandemic was already receding. So pandemic vaccination turned out more expensive than useful. (That’s not blameworthy. At the time various governments were deciding how much vaccine to order, they couldn’t possibly have known the future course of the pandemic or its eventual virulence.) Another negative: Some developing countries were distracted from endemic diseases of substantially greater importance to the health of their citizens.

On the positive side, there is documented evidence in some countries of increased hand-washing and increased flu vaccination uptake. Hand-washing presumably reduced the incidence of salmonella, shigella, norovirus, and other contagious diseases, whether or not it had any effect on swine flu incidence. Both increased flu vaccination and increased hand-washing may lead to reduced disease burden for years, if the behaviors can be successfully reinforced after the pandemic ends.

The proper standard for judging a warning isn’t how necessary and useful it turned out to be, but rather how likely it was to turn out necessary and useful, based on what was known when it was issued. Hindsight bias makes people segue from the accurate perception that the H1N1 pandemic has been surprisingly mild so far to the nutty charge that officials should have downplayed it from the start, and that their failure to do so constitutes hype.

Many national governments around the world have faced some criticism for responding more aggressively to the H1N1 pandemic than its later development (so far) has justified. But the charge of “hype” – accompanied by a charge of complicity in the economic interests of Big Pharma – has been leveled most insistently and effectively against the World Health Organization.

What most justifies that nutty charge is the failure of WHO to concede – even to this day – that the pandemic has been mild so far. Over the long term, the ability of officials to issue credible public health warnings depends on their ability to stand down from those warnings when the risk is found to be lower than initially feared, or when the risk is mostly gone (even if it may return – which is something else they can’t credibly warn about when they’re not yet saying it’s mostly gone for now).

It is as if hurricane forecasters were unwilling to say so when a hurricane veered off its likeliest course or decayed into a much milder storm.

When officials point to a serious or potentially serious risk, they’re warning, not hyping – even if that risk eventually becomes minor or never materializes at all. But when officials fail to downgrade their warnings as new data show the risk to be lower than it might have been, or lower than it looked at first, that’s a huge risk communication error. It might even be called hyping.

In talking about the status of an evolving risk, officials need to answer four main questions – and need to answer them over and over:

  1. What has happened so far?
  2. What is happening right now?
  3. What seems likeliest to happen next?
  4. What’s the worst case planning scenario – what might happen that’s likely enough to be worth preparing for, even though it’s not the likeliest scenario?

In many risk situations, including influenza pandemics, all four answers keep changing. (Even the first: Officials get new evidence that alters their assessment of past events.) And all four remain uncertain, often highly uncertain. So every update needs to be tentative. It needs to include the information that the update itself may turn out wrong, that the next update may be very different from this one. Ideally it should also include the information that the range of possible outcomes is even broader than the difference between the likeliest scenario and the worst case planning scenario – that the full range runs from anticlimactically minor to devastating.

For a hypothetical hurricane, the answers might go like this:

  1. What has happened so far: “Last week, we reported that Tropical Storm Alison had quickly become a Category Three hurricane as it passed over warm ocean waters.”
  2. What is happening right now: “Now, after passing over a large island land mass, Alison has weakened to become a Category One hurricane.”
  3. What seems likeliest to happen next: “Overall, weather forecasts suggest conditions are not ripe for much strengthening in the next few days, so the hurricane is likely to remain at Category One. But there is some disagreement and great uncertainty in the various forecast models, so we will be monitoring this closely and issuing frequent updates.”
  4. What’s the worst case planning scenario: “Certain atmospheric changes, not currently expected, could permit gradual strengthening to a Category Three hurricane, given the warm water temperatures in Alison’s projected course.”

For the 2009-2010 H1N1 pandemic, the current answers should sound like this:

  1. What has happened so far: Looked scary at first; turned out pretty mild. “In April, reports were received from Mexico of a novel influenza virus that appeared to be causing serious illness with a possibly high case fatality rate, much higher than for the seasonal flu. As more data became available, and as the disease spread around the world, it became clear that the case fatality rate was much lower overall than the seasonal flu case fatality rate, but higher than the seasonal flu case fatality rate in people under 65. In terms of severity, this pandemic so far has been less deadly than we originally feared, and far less deadly than the infamous 1918 pandemic. It has been more like some other previous flu pandemics that we have referred to as ‘mild,’ ‘comparatively mild,’ or ‘very mild.’”
  2. What is happening right now: Not much flu around. “In most countries with adequate surveillance, the incidence of H1N1 flu has dropped below peak levels, and below national epidemic baselines. Much less is known about swine flu incidence in countries where surveillance is spotty, but we are not hearing reports of extensive outbreaks.”
  3. What seems likeliest to happen next: Becomes a seasonal flu strain. “Based on the course of the best-studied past pandemics, we believe it is unlikely, this far along in the pandemic, that the H1N1 virus will become significantly more virulent in the near future. Nor do we expect the virus to disappear, though that too is possible. It is highly likely that this virus will gradually mutate and continue to circulate as a seasonal flu strain, at least until the next pandemic.”
  4. What’s the worst case planning scenario: Could still surprise us and turn more virulent. “Although we think it is unlikely this far along in the pandemic, it is always possible for a pandemic influenza strain to become much more virulent. For this reason we recommend continued vigilance in the form of laboratory and epidemiological surveillance, and general pandemic influenza preparedness.”

This is quite different from what WHO is currently saying.

The paradox is that accusations of hype make it harder for WHO officials to acknowledge that the pandemic has been mild so far and the risk is currently minimal. That’s no excuse, of course – and it doesn’t explain why WHO didn’t stand down much earlier on. Still, I can understand why WHO is disinclined to look like it is “caving” in response to pressure from its critics – pressure that combines far-fetched accusations of conflict of interest with valid assertions that the pandemic did in fact turn out much milder than the early justified warnings suggested, and that WHO should damn well say so.

The ethics of risk communication consulting and the BP oil spill

name:Ashley
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Public health RN
Date:June 6, 2010
Location:Florida, U.S.

Comment:

I eagerly await your updated critique of BP’s crisis communication and actions in the Gulf disaster.

If you were helping BP as a consultant now, is there a point at which you would consider not lending the company your expertise on ethical grounds? Is this a question that practitioners should ask as they help organizations? If a client does not tell the truth or intentionally violates other core principles of risk communication, is there a responsibility to walk away?

Peter responds:

These are the comments I have posted on the BP oil spill:

These four have far from exhausted the list of things worth saying about BP’s oil spill risk communication, even its oil spill risk communication so far.

For example, I was aghast at CEO Tony Hayward’s offhand comments that the environmental impact on the Gulf may end up “very, very modest” because it’s a “very big ocean.” Even if this turns out true, it is incredibly insensitive … rather like a child molester pointing out that the kid will probably get over it.

On the other hand, I was pretty impressed with BP’s reaction to the satiric comments posted on a fake BP Twitter account. Instead of asking Twitter to take down the imposter account, BP said it had no complaint about what it saw as a legitimate expression of people’s understandable anger and frustration.

The last six weeks have provided lots of examples of BP violating what I consider to be core principles of outrage management and crisis communication. There are also more than a few things BP has done that strike me as excellent outrage management and crisis communication. Most commentators have given BP extremely low marks overall – unfairly low, in my judgment. Any company that was responsible for a huge oil spill it couldn’t find a way to plug (at least so far) would be facing inevitable and justified public outrage no matter how good or bad its risk communication skills (or advisors) were.

On May 3 I told the BBC I would give the company a B in risk communication so far. Today I’d give it a C-.

Do I quit when clients do bad risk communication?

But even if BP were earning an F in risk communication, I wouldn’t necessarily feel an ethical obligation to walk away. (So far I have nothing to walk away from. BP hasn’t asked for my advice.)

My wife and colleague Jody Lanard is a psychiatrist. “If you’re not willing to work with crazy people,” she has often told me, “you shouldn’t be a psychiatrist.” Similarly, if you’re not willing to work with screwed-up organizations, you shouldn’t be a risk communication consultant.

My typical client accepts and implements around 15 to 25 percent of my advice. That means 75 to 85 percent of the time the client ends up doing something I consider less than best practice in risk communication. If I were going to walk away every time a client turned down some of my advice, I wouldn’t have any clients. Who would want to hire a consultant who was foreordained to quit the first time his or her advice was rejected?

The specific example you give is a client that “does not tell the truth.” I routinely advise my clients that withholding information from the public isn’t just unethical; it’s stupid. If the information comes out anyway and people feel they should have been told sooner, it does the client roughly twenty times as much harm as it would have done if the client had been candid in the first place. Mathematically, then, secrecy is a bad bet unless the client can sustain a 95% success rate at keeping its secrets … which I doubt any company or government agency in the developed world can sustain. (My “twenty times as much harm” figure isn’t based on actual data. It’s a guesstimate I use to drive home how foolish it is for clients to let upsetting information be belatedly revealed by third parties, instead of candidly and quickly revealing it themselves.)
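For readers who want to see that arithmetic spelled out, here is a minimal sketch of the break-even calculation. The 20-to-1 harm multiplier is the guesstimate from the paragraph above; the harm units, function names, and sample leak probabilities are illustrative assumptions added for this sketch, not data.

```python
# Minimal sketch of the secrecy break-even arithmetic described above.
# The 20x multiplier is the guesstimate from the text; everything else is illustrative.

def expected_harm_of_secrecy(leak_probability, baseline_harm=1.0, leak_multiplier=20.0):
    """Expected harm if the client stays silent and the secret may come out anyway."""
    return leak_probability * leak_multiplier * baseline_harm

def harm_of_candor(baseline_harm=1.0):
    """Harm if the client reveals the upsetting information itself."""
    return baseline_harm

# Secrecy "pays" only when its expected harm is below the harm of candor:
#   p * 20 * H < H   =>   p < 1/20 = 5%   =>   a 95% success rate at keeping secrets.
break_even_leak_probability = 1.0 / 20.0
print(f"Break-even leak probability: {break_even_leak_probability:.0%}")

for p in (0.01, 0.05, 0.10, 0.50):
    secrecy = expected_harm_of_secrecy(p)
    candor = harm_of_candor()
    verdict = "secrecy cheaper" if secrecy < candor else "candor cheaper"
    print(f"leak probability {p:.0%}: secrecy={secrecy:.2f}, candor={candor:.2f} -> {verdict}")
```

Under these illustrative assumptions, secrecy only “wins” when the chance of the secret leaking is below 5 percent – the 95% success rate mentioned above.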

But virtually every client I have ever worked for has at some point said, no, we’re not going to tell people that. My clients don’t often lie outright, as far as I know. But they very often craft their messages carefully to avoid sharing the part of the truth they’d rather people didn’t know. Both sides in risk controversies do this. If anything, “good guys” do it more than “bad guys,” in part because they feel more virtuous and in part because they’re less likely to get caught.

So no, I don’t feel an ethical obligation to quit when a client tells less than the whole truth … not even when a client says things I consider flat-out lies. I always advise my clients not to lie, and I always advise them not to shade the truth, and I try hard to persuade them why it is in their interests to follow this advice – at least their long-term interests, but usually their short-term interests as well. I usually succeed in part and fail in part. And I don’t believe the partial failure means I have to quit.

What ethical obligations do I obey?

What ethical standards guide my risk communication consulting? I have three I’m clear on. There may be others I’m acting on without quite conceptualizing them, but these are the three that are codified in my mind:

  1. I won’t let a client keep dangerous information secret. Virtually all my clients withhold some information, usually because the information is embarrassing, because it will increase the client’s legal vulnerability, or because the client thinks it will be misrepresented by opponents and misunderstood by the public. That’s very different in my judgment from withholding information that could save lives if it were revealed. If a factory is emitting toxicants its own analysis says are deadly, for example, it has to say so, enabling regulators and neighbors to take the appropriate actions. If I suspect that’s true but can’t prove it, I must quit. If I can prove it but can’t convince the client to come clean, I must blow the whistle. (Does worrying that I might blow the whistle encourage clients to come clean, or does it encourage them to make sure I won’t get access to proof of dangerous information? Some of each, I believe.)
  2. I won’t let a client talk me out of giving the best risk communication advice I can. Clients hate hearing advice they don’t want to take. So there’s always pressure on consultants not to give such advice – or at least not to keep giving it once the client has made it clear that particular advice isn’t welcome. My most common ethical stand is simply to keep reiterating unwelcome advice … frequently in writing. (Putting it in writing creates a paper trail – these days usually an electronic one. It also protects me if the company publicly misrepresents my advice.) Fairly often my insistence on reiterating unwelcome advice spares me the decision about whether or not to quit. I get fired. Fairly often the client keeps stonewalling but lets me keep insisting. And fairly often the third or fourth time I say something, the client winces, sighs deeply, and does what I’m recommending.
  3. I won’t let a client keep me from going public with my opinions about publicly available information. Most clients’ confidentiality agreements forbid me to reveal their secrets. Some forbid me to reveal my own recommendations. Some forbid me even to identify them as a client. I willingly sign such agreements, but only after adding a provision that if the client reveals my involvement and misrepresents my advice, I can correct the record. And I make sure there’s nothing in any confidentiality agreement that prevents me from going public with my own opinions about the client’s public behavior. I am willing to keep my client’s secrets (as long as they’re not dangerous secrets) – but I always reserve the right to comment freely on things that aren’t secret. Some clients (soon to be ex-clients) think it’s unethical for me to take their money and then criticize them on my website or in the media. I think it would be unethical to allow their money to buy my silence.

So what about a client that simply isn’t taking any of my advice – a client that continues to do crappy outrage management or crappy crisis communication? I often stop working for such clients simply because the work isn’t useful and isn’t fun. It’s only remunerative. But I don’t consider that an ethical principle. It’s a self-indulgence. At the tail end of a successful career, I can afford to pass up work, even highly remunerative work, that isn’t useful or fun.

I have also consistently refused work for the tobacco industry. I sometimes tell myself this is an ethical principle; tobacco is the only legal product I know that, properly used, kills its best customers. But I’m not sure this decision isn’t more about cowardice than principle. Consultants who work for the tobacco industry are widely condemned. I get enough of that just working for polluters. It would be much worse if I worked for tobacco companies. (The World Health Organization, for example, requires contractors to sign sworn statements as to whether or not they have done any tobacco industry work; I don’t know how WHO would react if I checked the “yes” box, but at a minimum there would be some more hoops to jump through.) A year or so ago I was invited to be an expert witness for a tobacco company. The limited risk communication and risk perception points the company wanted me to testify to were valid, in my judgment. I declined anyway. I didn’t feel especially ethical about declining. I felt self-protective. Even though the company was in the right about the specific points it wanted me to make, I decided that helping Big Tobacco win a lawsuit wouldn’t be useful … and I sure as hell knew it wouldn’t be fun.

Ethics, of course, is intensely personal. Ethical questions often come up at my outrage management seminars. Half of my response is to explain what feels ethical to me and why. The other half of my response is to urge everybody at the seminar never to do anything he or she considers unethical just because Peter Sandman thinks it’s okay. Some people, for example, tell me they think it’s unethically manipulative for a company to be more responsive to its stakeholders when the company’s only reason for doing so is to reduce stakeholder outrage. I can’t see where they’re coming from – to me, wanting other people not to be angry at you seems like a pretty good reason for treating them better. But again, I advise seminar participants not to do it if it feels unethical to them.

The ethics of outrage management itself

Here’s the ethical challenge I hear most frequently from critics of my work. Helping truly bad companies do a better job of managing a crisis or a controversy isn’t improving the world, they say; it’s covering up the evil. The ethical standards I have articulated don’t include the standard my critics would recommend: not helping bad guys. And my critics think many companies – perhaps most – are bad guys that shouldn’t be helped.

Suppose a company is willing to follow my advice (at least some of it) on how to respond better to public outrage. It’s willing to be more transparent about its past sins and current problems; it’s willing to share more control with stakeholders and negotiate accountability mechanisms instead of demanding to be trusted; it’s willing to listen to criticism more openly and more empathically. But it has a bad safety record, worse than the records of most of its peer companies. Or it has a bad carbon footprint, spewing greenhouse gases into the atmosphere in ways it could easily (though expensively) reduce. Or it has other serious substantive blemishes: employing child labor in developing countries, purchasing raw materials from farms carved out of the tropical rainforest, buying and selling collateralized debt obligations, whatever.

It is my fervent conviction that helping such companies be responsive to their stakeholders – including the very activists who excoriate me for working with those companies – is a significant step toward substantive improvement. That is also my experience after 40+ years as a risk communication consultant.

Consider two hypothetical oil companies producing oil in a developing country. One is a publicly owned western company, the other a nationalized company controlled by a corrupt dictator. Activists are in a pretty good position to put pressure on the western company, largely by arousing outrage among regulators, shareholders, and the general public. Then I can try to show the western company why it should be responsive to that pressure, and how to be responsive to that pressure. It’s much harder for activists to find a way to put pressure on the corrupt dictator’s oil company. The dictator’s oil company is profoundly unlikely to hire me to help it address stakeholder outrage when it can afford to ignore stakeholder outrage.

The western oil company doesn’t just talk a better game than the dictator’s company. It is also more honest, more transparent, and more likely to make substantive improvements on everything from land stewardship to labor practices. Obviously the behavior of publicly owned western oil companies is sometimes pretty bad. Does anyone actually doubt that the behavior of nationalized oil companies controlled by corrupt dictators is worse?

The only organizations that seek my help are those that need to be responsive to outside pressure. I am part of an ecosystem that includes activists, shareholders, and regulators – an ecosystem that functions only because of my clients’ vulnerability to public outrage.

My clients’ vulnerability to public outrage doesn’t always result in significant substantive improvements. One way a western oil company can respond to pressure is by selling out to a dictator’s oil company – which may be a good outcome for the western oil company, but is unlikely to be a good outcome for the people who live near the oilfield.

And sometimes clients with serious substantive problems get better at managing stakeholders’ outrage without doing much of anything about the substantive problems. But as I invariably warn my clients, that’s a very temporary achievement. When there are serious hazards that need to be addressed, a company that learns how to talk better and listen better but doesn’t actually change its behavior is only setting itself up for higher outrage to come when its stakeholders eventually learn they have been gulled.

The situation is more complicated when the substantive problems are comparatively minor – in my terms, when the hazard is low. Then outrage management may be all that’s necessary. Or nearly all that’s necessary. Even in low-hazard high-outrage situations, a company that becomes more responsive in order to reduce the outrage usually ends up agreeing to some of its stakeholders’ substantive recommendations as well. So the high outrage gets lower, and the low hazard gets lower too. Still, most of the focus in low-hazard high-outrage situations stays on the outrage. I think that’s as it should be. But my critics don’t necessarily agree.

What my critics are often worried about, I think, is something like this. Suppose a company’s stakeholders don’t really understand what the company is doing that poses a significant hazard. There may be sizable stakeholder outrage (perhaps provoked in part by activist efforts), but it’s focused on comparatively minor substantive issues. So when I help the company address the outrage more effectively, the stakeholders calm down about those low-hazard issues … and still don’t know about the high-hazard ones.

I think this situation is more the exception than the rule, but it does happen. Some issues are easy to arouse outrage about, while others are harder, even though they may pose a more serious ultimate threat to health or the environment. Activists not unreasonably focus on the issues with the highest outrage potential. When I help a company manage that outrage, I am helping it defuse a situation that, if I hadn’t been there to help, might eventually have led to pressure to improve the serious problem.

Another valid criticism, I think, is the charge that outrage management is intrinsically reformist and therefore counterrevolutionary. I used to hear this language a lot more often in the early years of my career. It derives from Marxist theorists like Herbert Marcuse, who argued pretty persuasively back in the 1960s that powerful institutions often make minor concessions in order to relieve the pressure that might otherwise have forced real change. I help powerful institutions become more responsive. Marcuse was right that powerful institutions become more responsive mostly when they decide they must in order to hang onto their power. I think responsiveness under pressure leads to meaningful change – more meaningful than Marcuse believed, and more meaningful than many of my critics believe, and for that matter, more meaningful than many of my clients believe (they don’t always realize what they’re getting themselves into). But to those who think that incremental change tends to be illusory and “real change” needs to be revolutionary, I really am part of the problem, not part of the solution.

Jim Joyce, Tony Hayward, and how to apologize

name:Stew Thornley
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Government communications
(and baseball official scorer)
Date:June 5, 2010
Location:Minnesota, U.S.

Comment:

I’ve read some of your columns on how to apologize.

I wonder if you’d consider doing something on how well umpire Jim Joyce did this week when his mistake (calling Jason Donald safe at first with two out in the ninth) cost Armando Galarraga a perfect game. The comments about Joyce that I read on Facebook that night were venomous. It seemed like everyone hated him.

After the game, when he saw the replay and knew he was wrong, Joyce admitted it with no excuses, acknowledged the impact of his mistake, and asked to see Galarraga so he could apologize to him. The next day Galarraga brought out the lineup card before the game to publicly indicate that he didn’t have hard feelings toward Joyce, a classy gesture.

I think a lot of people still hate Joyce, but his response caused many to take a step back, think about how hard umpiring can be and how umpires put themselves on the line, and at least feel some empathy toward him.

Peter responds:

I agree. Jim Joyce’s apology was as perfect as Armando Galarraga’s pitching.

A lot of commentators used the same word to describe both Joyce’s and Galarraga’s behavior: “class.” Galarraga didn’t explode in anger, and readily accepted Joyce’s apology. Joyce apologized not once but several times. He focused (rightly) not just on his mistake but on the damage it did: “I just cost that kid a perfect game,” he said, reportedly with tears in his eyes.

It’s instructive to compare Joyce’s apology with that of Tony Hayward, CEO of BP.

I don’t doubt for a moment that Hayward regrets the Deepwater Horizon oil spill in the Gulf of Mexico. He has tried to apologize repeatedly. But his most memorable apology itself required an apology. On May 30, speaking of the families of the eleven oil workers who died in the April 20 explosion, Hayward said – on tape – “We’re sorry for the massive disruption it’s caused their lives. There’s no one who wants this over more than I do. I would like my life back.”

It is human for Hayward to feel sorry for himself. I imagine Joyce does too; he would rather not go down in baseball history as the umpire who blew his most important call. But Joyce managed to stay focused on what he did to Galarraga. Hayward – like Exxon’s Larry Rawl after the 1989 Exxon Valdez spill – has seemed more preoccupied with the harm to himself and his company than to the Gulf and its residents (or to the families of the eleven who died).

This may be an injustice. I don’t know Hayward, and I certainly don’t know how he really feels. Maybe it’s just his technical training that keeps him from showing that he feels awful … that he feels like the perpetrator, not the victim. Even when he says exactly the right thing – for example, when he acknowledged recently that BP hadn’t been sufficiently prepared for a deep-water blowout – he sounds more like a technocrat pointing out an interesting fact than a CEO confessing a grievous corporate sin.

Jim Joyce, I suspect, could do better.

Note: I have also written other pieces commenting on the Deepwater Horizon oil spill.

The role of public affairs professionals in enterprise risk management

name:Tom Price
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Freelance journalist
Date:May 30, 2010
Location:Washington DC, U.S.

Comment:

I’m preparing a report, to be published by the Foundation for Public Affairs, about the role of public affairs professionals in enterprise risk management. I’d be interested in your expert thoughts on what role they tend to play now, what role you think they should play, what they can contribute to the risk management process, and whether risk management professionals and other executives tend to understand what that contribution could be.

The report will be published this fall and can be obtained online at www.pac.org/fpa.

Peter responds:

Enterprise risk management has two principal components:

  • Identifying things that might go wrong, and assessing their probability and magnitude; and
  • Figuring out how to cope with each item on the list – ways to make it less likely to happen, insure against it, mitigate its magnitude if it happens, etc.

Too many organizations still see the role of public affairs in this process very narrowly. After a bad thing happens, public affairs people are expected to use their communications abilities in ways management hopes will mitigate the bad thing’s impact on the organization. Sometimes that means trying to minimize stakeholder and public awareness that the bad thing happened, or its connection to the organization. Sometimes it means “explaining” why the bad thing wasn’t so bad after all, or why it’s not really the organization’s fault.

In organizations that understand the distinction between spin and risk communication, the risk mitigation function of public affairs people is seen differently: acknowledging what went wrong; apologizing for the organization’s role in it; providing all the details, even the ones that are still tentative or that reflect badly on the organization; listening hard while people vent their outrage; collaborating with critics on steps to reduce the damage, compensate the victims, and prevent recurrences; etc.

But even this much-improved vision of how public affairs people can best mitigate organizational damage is still too narrow. What’s missing is a deeply embedded understanding that the very concept of risk has two components, which I term “hazard” and “outrage.” Hazard is substantive risk – how dangerous X is, how much actual harm it is likely to do. Outrage is cultural risk – how upsetting X is.

The correlation between hazard and outrage is exceedingly low; that is, people often fail to get upset about risks that seriously endanger them, and often get very upset about risks that endanger them far less. But the correlation between hazard perception and outrage is quite high; if people are upset about a risk, they’re very likely to think it is doing a lot of harm. Much is known about the factors that determine stakeholder and public outrage (control, trust, dread, responsiveness, etc.), and much is known about the organizational behaviors that exacerbate or mitigate that outrage.

Why is this central to enterprise risk management? Because public and stakeholder outrage is a hazard to the organization. In many cases public and stakeholder outrage is the principal hazard to the organization.

Most organizations have figured out that reputational risk is one of the main categories of risk they face. And most organizations intuitively understand that reputational damage isn’t highly correlated with actual, substantive, hazard-related damage. Sometimes organizations are responsible for awful outcomes and their reputations suffer only a little; sometimes organizations end up crucified for comparatively trivial infractions. (And sometimes, of course, organizations do enormous harm and rightly pay enormous reputational costs.) Finally, most organizations hold their public affairs people chiefly responsible for mitigating reputational damage.

What’s too often missing is a solid understanding of the dynamics of reputational damage – that is, the dynamics of outrage.

So public affairs people need to bring an “outrage perspective” to every stage of enterprise risk management:

  • When an organization is listing the risks it faces, public affairs people need to make sure that reputational risks (and reputational components of substantive risks) are on the list – and need to keep reminding management that under high-outrage conditions even small substantive problems can constitute huge reputational risks.
  • When an organization is assessing the magnitude and probability of each risk listed, public affairs people need to insist on including reputational damage – especially the reputational damage outrage can do even when hazard is small.
  • When an organization is deciding how to make enterprise-threatening risks less likely or less damaging, public affairs people need to argue on behalf of outrage-based public affairs strategies – for example, organizations that worry visibly and warn proactively about a risk encounter less outrage if that risk eventuates than organizations that ignore or over-reassure about the same risk.
  • When an organization is planning how to respond to the risks on the list if they happen, public affairs people need to fight for plans that are responsive to the dynamics of outrage – for example, plans that put a premium on promptness, candor, and contrition.
  • When bad things happen and these plans must be implemented, public affairs people need to manage the implementation in a way that prioritizes outrage mitigation as well as hazard mitigation – staving off the instincts of the legal department and top management (and perhaps even their own instincts) to circle the wagons instead.

This requires two changes. First, public affairs people need to master the ways in which outrage management in high-controversy situations differs from traditional public relations. And second, public affairs people need to be centrally involved in every stage of enterprise risk management – not just implementing damage control at the final stage after earlier stages have gone awry.

There has been progress on both of these fronts over the past few decades. But there is still a long, long way to go.

Consider some high-visibility risk controversies as I write this in May 2010: Did Goldman Sachs deceive investors in ways that exacerbated the world financial collapse? Did BP take insufficient precautions against a deepwater oil spill in the Gulf of Mexico? Did Toyota design dangerous cars and lobby against government efforts to make them safer? Did the Catholic Church cover up for child-molesting priests and let them continue to ruin the lives of children? I think these four organizations are very different in how much harm they actually did, and in how much responsibility they actually bear. But none of the four has shown much mastery of outrage management. And I would wager that public affairs people played at best a peripheral role in assessing the risks that now confront these four organizations.

Further debate on whether the CDC misled people about age-specific death rates of pandemic H1N1

name: Jim Dukelow
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

Field:Semi-retired risk analyst
Date:May 21, 2010
Email:jsdukelow@yahoo.com
Location:Washington, U.S.

Comment:

In two recent postings on RISKANAL (an Internet discussion list for risk professionals), Peter Sandman has argued that CDC has been “intentionally misleading” in its discussions of the 2009 H1N1 pandemic. Throughout this comment I use “Sandman/Lanard” and “they” and “their,” since Peter Sandman credits his colleague and spouse Dr. Jody Lanard in the development of the ideas in the links he provides.

I believe that Sandman/Lanard are being unintentionally misleading when they accuse the leadership at CDC of being intentionally misleading. Their assertion that the 2009 H1N1 2009–2010 flu season is mild depends on the word “risk” having a specific, single meaning. Like Humpty-Dumpty, “risk” means what they want it to mean, nothing more, nothing less.

I have read their posting “Why did the CDC misrepresent its swine flu mortality data – innumeracy, dishonesty, or what?” and the first 50 pages or so of their “Archive of Swine Flu Pandemic Communication Updates.” I find most (perhaps 95%) of what they write on the swine flu pandemic knowledgeable and reasonable, but will focus on the other 5%, because that is what leads Sandman/Lanard to criticize the CDC characterization of the 2009 H1N1 pandemic as intentionally misleading.

When Sandman/Lanard assert that the 2009 H1N1 flu season is mild, they assume, and assume that the reader will share their assumption, that “risk” in this context means the population mortality rate (PMR), which they define, reasonably enough, as the product of the case attack rate (CAR) and the case fatality rate (CFR). They use (in their 2 December 2009 update) CDC estimates from 12 November 2009 to calculate PMRs for children (7.49 dead per million children, age 0–17), adults (14.88 dead per million adults, age 18–64), and seniors (11.44 dead per million seniors, age 65+). They contrast this with the PMRs for seasonal flu: children and adults (13 dead per million non-seniors, age 0–64) and seniors (830 dead per million seniors, age 65+).
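As a rough illustration of that arithmetic, here is a minimal sketch in Python; the inputs are hypothetical, not the CDC estimates cited above:

  # Population mortality rate (PMR) as the product of the case attack rate (CAR)
  # and the case fatality rate (CFR). The inputs below are invented for illustration;
  # they are not the CDC's 12 November 2009 estimates.

  def pmr_per_million(case_attack_rate, case_fatality_rate):
      # deaths per million population = CAR * CFR * 1,000,000
      return case_attack_rate * case_fatality_rate * 1_000_000

  # Hypothetical group: 20% infected, 1 death per 13,000 infections
  print(round(pmr_per_million(0.20, 1 / 13_000), 2))  # prints 15.38

Under this definition, a group’s PMR can be small even when its CFR is high, if relatively few members of the group get infected at all.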

They note that CDC has been careful to emphasize the comparison of PMR for non-seniors in 2009 H1N1 and non-seniors in seasonal flu and careful to include the phrase “than in a usual flu season” when they claim the 2009 H1N1 is worse for non-seniors than seasonal flu. So, they are not accusing CDC of lying about the pandemic flu impact, but rather feel that the public misinterprets the claim and incorrectly believes that 2009 H1N1 is more dangerous (higher PMR) to children than to seniors. They are concerned that misled seniors will not feel the necessity to be vaccinated when 2009 H1N1 vaccine is available to them. They are concerned about CDC and WHO losing public health credibility when the public and the media discover they have been “misled.”

This brief summary does not do justice to the subtlety and comprehensiveness of their argument.

Unfortunately, if you use different consequence measures than the CFR, the Sandman/Lanard argument falls apart. They recognize in their 2 December 2009 update that “risk” is determined using two factors (quoting):

  • Probability (or frequency) – how often does it happen; how many people does it happen to?
  • Magnitude (or consequence) – when it happens, how bad is it?

If instead of CFR as a consequence measure, we use the product of CFR and the number of years of life lost (YoLL), the 2009 H1N1 pandemic is almost certainly worse than the average seasonal flu season. Why should we prefer years of life lost as a consequence measure? Sandman/Lanard give us some of the reasons in their 15 December 2009 update where they write:

There are several coherent reasons for prioritizing children (and the youngest adults) even if they are less at risk than their parents and grandparents:

  • Young people have more years of life expectancy left.
  • Young people are our future.
  • Young people may be more likely to transmit infections than the elderly.
  • Young people usually have a better antigenic response to influenza vaccines than the elderly.
  • Surveys show that most Americans would rather protect the young than the elderly.

I can add a few additional reasons we should use YoLL as the consequence measure for determining/understanding the “risk” of influenza and other diseases.

  • We commonly realize that grief at the loss of a young parent or spouse or grief at the loss of a child is an order of magnitude worse than our grief at the loss of an elderly relative or friend.
  • That realization is displayed in the success of cable news channels that exploit, 24/7, crimes against children and young adults and diseases affecting children and young adults.
  • Nature itself uses this consequence measure. Diseases and metabolic conditions that affect only those past the age of reproduction exert only indirect evolutionary selection pressure. For instance, evolution has provided an array of devices to maintain the integrity of the genome and prevent the development of cancer in the young. These protections gradually deteriorate as people age past capability of reproduction.

Sandman/Lanard combine their insistence on use of a specific meaning of “risk” with an insistence that CDC exhibit the sort of intellectual purity demanded by the far left of the Democratic Party and far right of the Republican Party. This shows up in their insistence that CDC management/spokespeople tell only the “truth” in all cases and push for only policies that are consistent with that “truth.” Otherwise, they predict disastrous loss of credibility.

An example is their use in the 15 December 2009 update of a 1996 Wall Street Journal article by Amanda Bennett and Anita Sharpe (winner of a 1997 Pulitzer Prize and available at www.pulitzer.org/archives/5997). Sandman/Lanard note that Bennett and Sharpe describe the U.S. government’s 1986 decision to conduct a societally broad-based anti-HIV campaign rather than a campaign targeted at high-risk populations. They quote medical ethicist George Annas (“When the public starts mistrusting its public health officials, it takes a long time before they believe them again.”) and former NY City Health Commissioner Stephen Joseph (“Political correctness has prevented us from looking at the issue squarely in the eye and dealing with it. It is the responsibility of the public-health department to tell the truth.”).

Sandman/Lanard say nothing about Bennett and Sharpe’s extensive discussion of why CDC and Health and Human Services chose a broad-based campaign, rather than the more targeted campaign that CDC understood was called for. In the ten years between 1986, when the policy was put in place, and 1996, when Bennett and Sharpe were writing, the levers of power of public health policy were fully controlled for all but two years by Republican administrations and/or Republican Congresses, including Senator Jesse Helms of North Carolina, who was adamantly opposed to, and able to block, any program targeting gays and intravenous drug users. CDC chose to get the policy they could, which had some benefit for the high-risk populations and “wasted” a lot of money on low-risk populations.

Sandman/Lanard say nothing about the courage of CDC managers in the early 1980s, who reprogrammed funds, without the knowledge of their Reagan administration superiors at HHS, to do the epidemiological research that established the mechanisms of transmission of HIV and established the basis for coherent policies to counter HIV infection. Other CDC managers were harshly punished a decade or so later for doing the same thing in the context of another health emergency.

Peter responds:

First let me note that Jim Dukelow’s comment is nearly identical to one he posted a few days ago on the RISKANAL listserv. This response, on the other hand, is significantly expanded from the response I posted on RISKANAL, largely because my wife and colleague Jody Lanard collaborated on this one. To join RISKANAL, send the following email message to lyris@lyris.pnl.gov: SUBSCRIBE RISKANAL First_Name Last_Name

Jody and I have no quarrel with the YLL (years of life lost) calculation as an alternative to considering all deaths equally unfortunate and equally worth preventing … postponing, really. YLL certainly adds useful information.

But where was the YLL perspective for all those years when public health agencies were shouting from the rooftops that flu is a grossly neglected disease that kills 36,000 Americans a year, without mentioning that about 90% of those 36,000 are 65 or older, mostly a lot older? The 36,000-a-year factoid was deployed endlessly as a core argument for urging young people to get their flu shots.

Each risk measure points to something different. Population mortality rates (which we should have called “age-specific death rates,” clearer and more customary nomenclature), case fatality rates, attack rates, hospitalization rates, YLL calculations – they’re all useful. The question is whether public health agencies provide several such measures to help us understand ways that a particular disease outbreak is bad and ways that it’s not so bad … or whether the agencies cherry-pick the measure they can most easily use to convince us that “this one” is bad (or not so bad).
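To make that concrete, here is a minimal sketch, using invented counts for a hypothetical age group (none of these numbers come from the CDC), showing how the same outbreak data yield different “risk” figures depending on which measure you pick:

  # Several risk measures computed from the same invented outbreak counts.
  # All numbers are hypothetical, for illustration only.

  population = 10_000_000        # people in the age group
  cases = 1_500_000              # estimated infections in the group
  deaths = 150                   # estimated deaths in the group
  avg_years_lost_per_death = 40  # assumed average remaining life expectancy of decedents

  attack_rate = cases / population                        # share of the group infected
  case_fatality_rate = deaths / cases                     # share of cases that die
  deaths_per_million = deaths / population * 1_000_000    # age-specific death rate
  years_of_life_lost = deaths * avg_years_lost_per_death  # YLL-weighted burden

  print(f"attack rate:        {attack_rate:.1%}")          # 15.0%
  print(f"case fatality rate: {case_fatality_rate:.3%}")   # 0.010%
  print(f"deaths per million: {deaths_per_million:.1f}")   # 15.0
  print(f"years of life lost: {years_of_life_lost:,}")     # 6,000

Two groups can rank one way on deaths per million and the opposite way on years of life lost, which is exactly why presenting a single cherry-picked measure can mislead.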

As for whether the CDC and HHS misled people (and misled state and local health agencies) about who was most at risk from pandemic H1N1, Jody and I do hold these two federal agencies responsible for the fact that lots of state and local agencies (and lots of journalists) got from the feds and passed along to the public the clear impression that kids and young adults up to age 24 were among those at particularly high risk. This is not a matter of our decreeing what the term “risk” ought to mean; we are simply reading what was said and written.

Here is one of many examples, a New York City health department press release from January 2010, a time when H1N1 vaccine was going begging. I have bolded the parts of this excerpt I think explicitly claim that young people are likelier than older people to get very sick or die.

“The best way to protect yourself or your loved ones from becoming very ill is to get vaccinated,” said Dr. Thomas Farley, New York City Health Commissioner. “People in priority groups are at higher risk of hospitalization and death if they get sick. So don’t take the risk – get the vaccine today.”

Last month, the Health Department lifted any remaining restrictions on H1N1 vaccine eligibility, while continuing to target those in high-priority groups. Those groups include pregnant women, anyone between 6 months and 24 years old, and adults with chronic health conditions, such as asthma, diabetes, or immune deficiency.

Examples like this one make a pretty unambiguous claim about age-specific death rates, not years of life lost.

I would have had no objection to a sentence like this:

Even though the pandemic is killing a much higher percentage of people 50–64 than any other age group, as long as the vaccine is in short supply we are prioritizing all children for vaccination rather than the 50–64 group, because children have their whole lives ahead of them.

I don’t feel qualified to question the vaccine prioritization decisions of the CDC’s Advisory Committee on Immunization Practices. They’re certainly debatable decisions. Very few (if any) other developed countries prioritized all children and young adults up to age 24. Canada, the United Kingdom, France, and Australia did not. But that’s not my field. My concern isn’t which target groups the CDC most wanted to vaccinate; my concern is what the CDC told people in order to get them vaccinated.

It is a truism of risk communication – of all communication – that widespread audience misunderstandings are by definition the communicator’s fault. In 2009 most Americans came to believe that swine flu was more dangerous to children than to their parents or grandparents. That belief was false. The CDC knew it was false. But that belief was conducive to getting kids vaccinated, so the CDC encouraged the false belief.

The CDC chose to take a short cut. Instead of emphatically offering an accurate rationale for prioritizing children (YLL, contagion, higher attack rate, higher vaccine efficacy, etc.), it implied a false rationale. And when local and state health officials, themselves misled, said flat-out false things about which age groups were most at risk, the CDC chose to do nothing to correct the error.

I believe this was not ethical, and in the long term I believe it was not wise either.

It was, however, successful. The percentage of all U.S. children vaccinated against pandemic H1N1 was far higher than the percentage of high-risk prioritized adults vaccinated.

In January 2010, the CDC published survey data about H1N1 vaccination uptake. Here’s what it reported:

By the end of December 2009, an estimated 61 million persons (20% of the U.S. population) had been vaccinated, including 27.9% of persons in the initial target groups, 29.4% of children, 11.6% of adults aged 25–64 years with underlying medical conditions, 22.3% of health-care personnel, and 13.9% of adults caring for infants aged <6 months.

In April 2010, the CDC published updated survey data about H1N1 vaccination rates through the end of January, and included this discussion about the lower rate in the high-risk 25–64 age group compared with the higher rate among children. By this time, the uptake in kids had risen to about 37%:

The 2009 H1N1 vaccination coverage rate among adults at high risk aged 25–64 years was lower (median: 25%) than the rate among children. Reasons for this might include a lesser emphasis on vaccination of this population compared with children, lack of preexisting relationship of state immunization programs with providers who serve adults at high risk, difficulty in implementing a risk-condition-based recommendation for persons in this age group (resulting in vaccination program implementation challenges), and historically low seasonal influenza vaccination rates in this population.

Jody and I were glad to see the authors consider the possibility that “a lesser emphasis on vaccination of this population compared with children” might account for the lower pandemic vaccination rate of adults with high-risk conditions than of children, even though adults with high-risk conditions were far likelier than children to suffer serious illness and death from pandemic H1N1.

But we were surprised to see “historically low seasonal influenza vaccination rates in this population” on the list of possible explanations. In recent history, adults with high-risk conditions have had a much higher seasonal influenza vaccination rate than children. The rate in children rose from 24% in 2008–2009 to about 40% in 2009–2010.

As the authors probably know already, high-risk adults 18–64 got the 2009–2010 seasonal flu vaccine at a much higher rate than they got the pandemic flu vaccine: 36% of high-risk 18–49-year-olds, and 45% of 50–64-year-olds, got the seasonal vaccine (about the same as in the previous flu season). So it’s not that high-risk adults were unaccustomed to flu vaccination. Rather, high-risk adults apparently saw pandemic vaccination as a high priority for their children but a lower priority for themselves.

Two important side notes:

First, I think Jim Dukelow rightly describes the Bennett/Sharpe work on the CDC’s decision in the 1980s to do broad rather than targeted HIV outreach. He does more justice to this decision than Jody and I did in the website post he quotes.

Second, there is one point about which I think we did unintentionally mislead Mr. Dukelow. He writes that our “assertion that the 2009 H1N1 2009–2010 flu season is mild depends on the word ‘risk’ having a specific, single meaning.” Because there were fewer pandemic H1N1 flu deaths in the U.S. than in average flu seasons and past pandemics, he says, we insist on calling the pandemic mild – and then we criticize health agencies for failing to adopt this single standard and failing to call the pandemic mild. A years-of-life-lost standard, he says, undermines this claim.

It does seem to be true – so far – that the number of deaths from pandemic H1N1, while lower overall than in the average flu season, is about three times higher in people under 65. Whereas about 90% of U.S. seasonal flu mortality is in people 65 and older, most of the estimated pandemic deaths in the U.S. were in middle-aged adults, and about 10% were in children under 18 … far more children than the average flu season kills.

But our assessment of the 2009 flu pandemic as “mild” is based mostly on years of official use of the term “mild” to describe previous pandemics (1847–1848, 1889, 1968). In “Swine Flu Communication Challenges and Lessons Learned,” Jody and I recently noted:

Because the H1N1 pandemic was killing more children than seasonal flu, it became anathema to call the pandemic “mild.” In the past, the CDC has routinely labeled certain seasonal flu years as “mild,” “mild to moderate,” etc. And public health officials have routinely called the 1968 pandemic “mild” or “relatively mild.”

Our use of the term “mild” is also based on criteria U.S. officials developed in advance of the H1N1 pandemic to characterize pandemic severity … and then decided not to use to describe the pandemic they actually faced. The proposed Pandemic Severity Index (PSI), developed by the Department of Health and Human Services, described a “Category 1” pandemic – the lowest category on the proposed severity scale – as one that would cause fewer than 90,000 deaths in the U.S.

Just as a Category 1 hurricane is still a hurricane and can still do considerable damage, a mild pandemic is not a walk in the park – especially for the most severely affected age groups. More widespread use of YLL can help clarify that this generally mild pandemic has had a comparatively severe impact on kids. But pretending that children are likelier to die than their parents and grandparents isn’t an appropriate way to “explain” that children’s deaths are especially devastating and therefore worth prioritizing.

Was it wrong to warn people even though the swine flu pandemic was turning out mild?

name:Sean G. Kaufman
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

Field:Public health training director
Date:May 5, 2010
Location:Georgia, U.S.

Comment:

Regarding pandemic preparedness, I have some strong feelings about how this issue was communicated.

From a behavioral standpoint, I know if we take a person, inject artificial noise and point to a behavior which mitigates this noise, we typically see an increase in adherence to that behavior. In June of 2009, it was clear to most public health experts that the H1N1 pandemic was not as severe as expected. However, the communication (in my opinion) continued to inject noise above that which was scientifically demonstrated (or as I like to put it – ethically guided).

I am interested in your opinions and insights. I know being a public health organization is tough. You communicate too little, you get hit. You communicate too much, you get hit like this. But I honestly feel (as a public health professional) that we lost our way on this one. In my opinion, our job is to communicate risk – and let people make the decision how to behave. However, on this one our communication caused excessive noise … leading to a behavior which has not yet demonstrated it has true value.

Peter responds:

I share your judgment that from June 2009 onward, public health warnings about pandemic H1N1 were more alarming than the actual severity of the pandemic justified. As you suggest, that wasn’t so in the early weeks, when too little was known about its actual severity. But by June there was ample evidence that the pandemic was surprisingly mild so far. And that assessment still holds today.

But I don’t think you judge a warning by whether it’s commensurate with what’s happening, or with what actually happens later. You judge it by whether it’s commensurate with what might have happened. I think public health officials were justifiably concerned that the pandemic might become more severe over time, as some other flu pandemics have done.

That concern was justified in June when there was little individuals could do except wash their hands and cover their coughs, and it remained justified in the fall when vaccine began to be available in some developed countries. Some experts say it remains justified today, since there is no way to be sure yet if the new H1N1 virus might still return in a more virulent form.

When it comes to warnings and preparedness about potential catastrophes, it’s not “damned if you do and damned if you don’t.” It’s darned if you do and damned if you don’t. If officials had downplayed a pandemic that turned out severe (and ordered far too little vaccine), the criticism would have been much more vituperative than the criticism they face now for having hyped a pandemic that turned out mild (and ordered far too much vaccine).

So I am not critical of health officials for continuing to sound the alarm after June 2009, despite the increasingly clear evidence that the pandemic was much milder than they had initially feared. I am, however, critical of other things:

  • Officials didn’t concede – and in many cases still have not conceded – that the pandemic was in fact very mild so far … even though it was deadlier to the young than the seasonal flu, which sickens all age groups (young people more than older people) but kills mostly the elderly.
  • Officials didn’t explain clearly enough or often enough that their warnings weren’t about what had happened so far, but about what might happen next. They did little to help the public imagine the severe pandemic they feared, and to contrast it with the mild pandemic we were already experiencing.
  • Officials didn’t share the dilemma of crisis preparedness, predicting (and hoping) that they were probably ordering more vaccine than we would end up needing, and explaining that that was a wiser error than running out if the virus took a turn for the worse.
  • Officials didn’t suggest that seasonal flu vaccination was a lower priority than in most years, since one of the likelier scenarios was that the pandemic strain would supplant seasonal strains in 2009–2010. And officials rarely mentioned early signs that the worst seasonal strain still circulating in small amounts, H3N2, had mutated to become a bad match with the 2009–2010 seasonal vaccine strain.
  • U.S. officials in particular didn’t adjust their messaging regarding the most vulnerable age groups when it became clear that adults (especially adults with preexisting conditions) were far more endangered by pandemic H1N1 than children.

For more than you want to read on these and other criticisms, see my index of “Swine Flu: Substantial Articles, Interviews, and Guestbook Entries” and my “Archive of Swine Flu Pandemic Communication Updates.”

I think continuing to warn was a justified public health decision. But I think the warnings could have been a lot more precise, more helpful, and more honest.

Sean responds:

Your response really helped me identify just what I have been upset about for the past four months. The warnings could have been more precise, helpful, and honest. That sums it up perfectly. The cherry on the top of the sundae would be if organizations acknowledged this.

How did Goldman Sachs become a scapegoat?

Name:Michael Laurence
Field:Owner, Micro-Metrics Company
Date:May 5, 2010
Location:Georgia, U.S.

Comment:

You ask: What Did Goldman Sachs Do Wrong?

It failed to avoid being scapegoated:

  • For the ideologically-based policy failures of recent Presidential Administrations in their fore-doomed strong-arm promotion of universal home-ownership.
  • For the exacerbation thereof by the unfortunate deflationary tendencies of a debt-based currency that was instituted with the formation of the Federal Reserve System itself.
  • For the prior disappearance of Congressional (Glass-Steagall) segregation and protection of depositor funds amidst the effort to maintain the global competitiveness of American financial institutions.
  • For the long-established general reliance upon the socialization of large-institution financial losses amidst an emerging climate of bureaucratic deregulation and of enhanced opportunity for private profit.
  • And for shamefully failing to incompetently but equitably share in the general distress induced by the senior institutions identified above.

Show trial, anyone?

Peter responds:

Well yes, various arms of the government acted in ways that caused or exacerbated or permitted the economic collapse. So did Goldman Sachs and the rest of the investment banking industry. So did most of us, one way or another. But government bears the lion’s share of the responsibility simply because preventing economic collapse is one of the government’s core tasks. It’s not my task, or your task, or Goldman Sachs’s task.

And yes, your sarcastic last bullet point is sound: Part of why people are outraged at Goldman Sachs is that it has survived and even flourished, while so many others have suffered and are still suffering. People and institutions that profit from the misfortune – or imprudence – of others can expect a fair amount of heat. That’s true even if they’re not actually causing the conditions from which they profit. Betting against the general welfare isn’t likeable.

So what should Goldman Sachs have done to avoid the outrage, and what can it do now to mitigate the outrage? Suffer more? Avoid being smarter than the government and the rest of us? Restrict itself to profit opportunities that don’t come at the expense of the overall society?

I think there are more generic outrage management lessons to be harvested here. After all, John Paulson is the character in this drama who most clearly profited from betting correctly that the subprime housing market was about to tank, while most of the “real experts” were betting that it would stay healthy. Goldman Sachs just facilitated Paulson’s gamble and went along for the ride. And yet there’s not nearly as much public outrage at Paulson as at GS.

Scapegoating is a kind of projection. We project our own guilt, shame, and responsibility onto the scapegoat, making it bear the weight of everyone else’s sins as well as its own. The risk communication question here is: What could Goldman Sachs have done to “stay out of the projective field” – to avoid being the lightning rod for all this projection, the most attractive potential scapegoat around? And what could it do now to “get out of the projective field” – to allay some of people’s outrage, help people accept our collective responsibility, and persuade people to prefer the benefits of sound regulation to the satisfactions of punishing the scapegoat?

I think the principles of outrage management offer some answers.

Are we learning the right lessons from the Goldman Sachs controversy?

Name:Melodie Selby
Field:Professor, former consulting engineer, former environmental regulator
Date:May 5, 2010
Location:Washington, U.S.

Comment:

This is in response to your email exchange about Goldman Sachs.

I really liked seeing how opinions can change and shift as a result of research and discussion. That’s how it should be so I was glad there weren’t any accusations of “flip-flopping.”

My specific comment has to do with your fourth question – are we learning the right lessons? I don’t think we are.

I think a key lesson from the whole mess is the importance of regulation. As you say, corporations have a duty to their shareholders to do everything legal to make money. I tell my students, if it is not illegal to dump waste irresponsibly, arguably a corporation is violating its fiduciary duty to its shareholders if it spends more to “do the right thing.” A certain amount of going beyond the law can be justified in the name of public relations, but if we, the people, think something is damaging to society, we need to make it illegal.

Instead, there is a perception that regulation and capitalism are opposites. People think that the way for capitalism to flourish is to eliminate regulation. And that seems to help in the short term – but didn’t work out so well in the long term for the financial markets. (Or so I understand. I don’t fully understand how the deregulation in the ’90s led to the mess, but it seems to be a contributing factor.)

The thing that’s important about regulation is doing it correctly. In environmental regulation, we’ve found that telling people exactly what they have to do doesn’t work well. It’s better to tell them what we will and will not accept and let them figure out how. That’s where capitalism shines – figuring out innovative ways to reach goals more efficiently. So I worry that our financial regulations will end up outlawing the specific instruments that are implicated in this crisis, but not saying what we will and won’t accept. If that’s the case, we’ll just do it all again in a few years, because you can’t write laws as fast as people can come up with new ideas.

Unfortunately, I don’t know enough about the stock market to know what’s acceptable and what’s not. It seems to me that just betting whether stocks will go up or down without actually providing capital to get things done should not be part of where people’s pensions get invested. If people want to do that on the side, fine. But it would take someone who understands the system to know how to write things like that into law.

Wouldn’t it be nice, though, to be able to have that debate on a national level, rather than a debate about who should get the blame?

Peter responds:

I agree with both of your points: (a) that corporations are impelled to maximize profitability, which means we need regulation to make sure profitable but socially harmful corporate behaviors are illegal; and (b) that regulation works better when we tell companies what outcomes to achieve rather than how to achieve them (in engineering terms, setting “performance specifications” rather than dictating “technical specifications”).

You wrote: “A certain amount of going beyond the law can be justified in the name of public relations.” That relates to where outrage (and thus risk communication) fits in.

Part of the answer, clearly, is that outrage is the other main constraint, besides regulation, on what companies may do to make money. If Goldman Sachs had been outrage-savvy, it might have realized that its high-visibility, master-of-the-universe arrogance and its astronomical profits made it an ideal target for public outrage. It might have judged that painting a target on its own back was especially inadvisable while it was heavily engaged in financial transactions (like going short on much of the U.S. economy) that ordinary people would find extremely unappetizing. It might have predicted that if the economy collapsed (or even contracted significantly) in ways that were arguably related to the new and poorly understood conventions of investment banking, the government and the public would be actively searching for suitable scapegoats, and GS would be a prime candidate. It might therefore have decided to make more of an effort to explain how the business of banking had changed, and less of an effort to conduct that business without regulatory constraints.

But part of the answer, too, is that outrage makes it much, much harder for people to learn the right lessons. The more outraged the public becomes, the stronger the pressure on government to punish instead of regulating – and to regulate in ways that exacerbate the punishment instead of ameliorating the problem. We have just been through a bailout that was profoundly unpopular because it helped big banks and other institutions widely seen as undeserving. The government did a pretty poor job of explaining – regretfully – that it was necessary to bail out (rescue) the banks in order to rescue (bail out) the economy. Now people are in a mood to punish the banks, even if the punishments we settle on might also punish the economy rather than protecting it. Unfortunately, we’re in a mood to prefer tech specs to performance specs. We want the government to micro-manage the investment banking industry, not because we really think Congress understands investment banking, but because we will get pleasure from seeing investment bankers tied up in knots.

So the better job Goldman Sachs does of managing the public’s outrage – outrage that is richly justified but partly misdirected – the likelier we are to end up with regulations that work instead of regulations that punish.

My wife and colleague Jody Lanard adds the following:

Professor Selby writes: “A certain amount of going beyond the law [in the direction of voluntarily refraining from legal, profitable, but harmful corporate behaviors] can be justified in the name of public relations, but if we, the people, think something is damaging to society, we need to make it illegal.”

If one company unilaterally refrains from the legal, profitable, but harmful behavior, that might give it a public relations benefit, but it still harms its shareholders relative to similar companies that continue doing the legal, profitable, but harmful behavior.

This reality is an argument for corporations – not just activists and “we, the people” – to lobby for regulation that restrains or prohibits the harmful but profitable behavior. This kind of lobbying is “precaution advocacy” risk communication, as opposed to “outrage management” risk communication.

Corporations often want a level playing field of regulations, especially when their behavior is better for the world but worse for their own shareholders (in terms of short-term profits) than the behavior of competitors. Corporations sometimes have to beg regulators to “stop all of us from sinning,” in order not to lose customers or shareholder value as a result of voluntarily behaving well.

I would add that my corporate clients often hesitate to do the kind of precaution advocacy Jody is recommending, even though they know she’s right. I once had a paper industry client that had built a new mill with state-of-the-art dioxin controls, anticipating that the regulations would soon require such controls. When regulatory fervor declined, my client was left with an unusually expensive paper mill. The CEO acknowledged to me that it made economic sense to lobby for more stringent dioxin regs – regs his new plant would already meet. But, he said, it just felt wrong to lobby for tougher regulations.

What kind of game-changer might it be if Goldman Sachs were to lobby for tougher regulations on investment banking?

When a government decides swine flu is mild: Talking about crisis management policy changes

name: Rebecca Tooher
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Pandemic and Other Infectious Diseases index

Field:Health researcher
Date:April 25, 2010
Email:Rebecca.tooher@adelaide.edu.au
Location:Australia

Comment:

Thanks for all your insightful and considered commentary about the H1N1 pandemic. I recently came across your forensic dissection of the then Australian Health Minister’s speech about pandemic preparedness back in 2005. (We have had a change of government since, and Mr. Abbott has recently become the opposition leader.)

I wonder whether you have monitored the current Australian government response to the H1N1 2009 pandemic.

Although the government initially emphasized the seriousness of the pandemic, by June 22 its messaging was that “the disease is mild in most, severe in some and moderate overall,” and that the aim of public health policy was to protect the most vulnerable. In fact the government modified its initial phased pandemic response and created a new “PROTECT” phase which acknowledged that the existing phases were not appropriate given the mildness of the pandemic.

The official government website says: “PROTECT recognises that infection with pandemic (H1N1) 2009 is not as severe as originally envisaged when AHMPPI [pandemic management plan] was written in 2008, and that although this new disease is mild in most cases, it can be severe in some. PROTECT puts greater focus on treating and caring for people in whom the disease may be severe.”

So far, it looks to me like they were getting the message about right.

However, we have been studying the community response to the management of the pandemic and it seems that the perception that the government overemphasized the seriousness of the pandemic is quite common. Do you have any thoughts about what might have got lost between the official messaging and the community perception?

Peter responds:

My wife and colleague Jody Lanard collaborated on this response.

The main recent flu vaccination issue in Australia has been a shortage of seasonal vaccine. The southern hemisphere’s winter is fast approaching, vaccine demand is up (largely because of swine flu), and vaccine supply is down (for a range of reasons: the priority given to the swine flu vaccine, manufacturing problems, shipping holdups because of Iceland’s volcano, etc.). You’ve also got a breaking story about possible side effects in some young children who received the 2010 seasonal flu vaccine produced by the Australian company CSL.

But the issue you raise is an interesting and important one. Australian journalists and the Australian public had no trouble figuring out that the swine flu pandemic was pretty mild – certainly a lot milder than the bird flu pandemic for which health officials in developed countries like Australia had been trying to prepare. But a lot of reporters and a lot of ordinary citizens didn’t quite get it that the Australian government had noticed too, and had changed its policies in response to the unexpected mildness of the pandemic.

The question, then, is how can a government agency talk about crisis management policy changes in a way that sinks in?

The Australian government certainly tried. Your quote from the Department of Health and Ageing website is impressive. Before swine flu, Australia’s pandemic phases were labeled ALERT, DELAY, CONTAIN, SUSTAIN, CONTROL, and RECOVER. The CONTROL phase would have included broad-based H1N1 vaccination as soon as there was sufficient vaccine available. Because swine flu turned out mild in most cases, Australia explicitly abandoned that phase. Instead, it implemented a newly created PROTECT phase that focused on “treating and caring for those more vulnerable to severe outcomes.”

Vaccination in the PROTECT phase was similarly aimed at “people at increased risk of severe outcomes.” Instead of struggling to vaccinate everyone, or triaging limited vaccine to protect people needed to keep critical infrastructure up and running, Australia decided that most people could probably weather a fairly mild swine flu pandemic without medical help. So it decided to concentrate on treating – and vaccinating – the most vulnerable. And it said so.

By contrast, many other governments refused to acknowledge until very late in the game that the pandemic was looking rather mild, compared with their initial fears and compared with previous flu pandemics. Some have yet to make this acknowledgment.

Many countries, especially in the developing world, put far too much emphasis for far too long on containment, which sent two false signals: that the pandemic could be contained and that it had to be contained – that it was stoppable and severe. Others, like the U.S., said from the outset that containment was unachievable, but still emphasized how dangerous the pandemic was, even as their own emerging data showed it to be less dangerous in many ways than an ordinary flu season.

The mere use of the word “mild” came to be seen as a sign of callousness toward those who had died, ignoring the fact that for years the World Health Organization and national governments around the world had routinely referred to the “mild” or “relatively mild” pandemics of 1957 and 1968.

Ultimately, WHO paid the highest price for this reluctance to acknowledge the ways in which the pandemic was mild. Largely because it failed to integrate severity – low severity – into its pandemic messaging, WHO was accused of having created a “fake pandemic” that unduly frightened millions. National governments around the world were similarly castigated for having wasted precious healthcare funds on pandemic vaccine nobody wanted.

Australia’s government was less criticized than many – and rightly so. But as you point out, even in Australia many people got the impression that the government had made the pandemic sound more severe than justified.

So what could Australia have done better? The list that follows makes some of the same points I made in my February 2010 column, “Telling People You Got It Wrong.” I recommend that column as a companion piece to this response.

1. Roll with the punches. To some extent, the criticism was unavoidable. All governments, Australia’s included, rightly started out alarmist, not just because the threat of bird flu was on their minds but also because the early information from Mexico was genuinely alarming. And all governments, Australia’s included, rightly remained cautious (to varying degrees) even as the data started looking less scary. Most figured that it was more important to worry about not having enough vaccine than about having too much; most figured that it was better to risk being accused later of hyping a minor problem than of shrugging off an imminent disaster.

Outcome-biased thinking or “hindsight bias” is a fundamental cognitive heuristic. English translation: We all tend to be unfair in the way we judge other people’s uncertain decisions after the uncertainty has been resolved. We think they should have guessed right. Two controversies raged simultaneously in the United Kingdom a few months ago: Why did the government buy more vaccine than it needed for a pandemic that turned out mild, and why did the government buy less road salt and grit than it needed for a winter that turned out severe?

The rest of this list discusses some things you can do to mitigate outcome-biased thinking. But in the end you can’t prevent it altogether. It may help a little to acknowledge from time to time that outcome-biased thinking is a natural human tendency we must all try to control. But that will help only if you can manage to make it sound empathic (and perhaps a bit resigned), not defensive, hostile, or cynical.

2. Repeat the mildness message. My guess is that the Australian government didn’t say nearly often enough and emphatically enough that the pandemic was turning out less severe than expected, and that it was altering its pandemic policies in response. I haven’t done the content analysis it would require to document this point. But the unexpected mildness of the pandemic was the sort of message that usually doesn’t get said very often or very emphatically. It took courage for the Australian government to promulgate that message at all, especially after it had ordered a lot of vaccine. Most governments (and WHO) systematically avoided or even explicitly disavowed the mildness message.

Even if the government had tried hard to emphasize that the pandemic was turning out pretty mild so far, getting the media to cooperate wouldn’t have been easy. Reporters prefer alarming memes to reassuring memes, especially when nothing really awful has happened yet. (Media sensationalism usually goes into remission in actual catastrophes.) So a government that tries to balance the “this could get much worse” message against the “this could fizzle out” message is likely to see only the first of the two messages in the headlines. And then when “this” fizzles out, the media segue to a devastatingly unfair feature story: “Remember when officials said we’re all gonna die?” Repeating the mildness message often and emphatically is only partial protection against this media one-two punch, but it does help some.

3. Repeat the policy change message. More often than not, government health agencies change their minds quietly. It’s awkward to admit that “we used to believe X, but now we believe Y.” So what usually happens is the health agency website page gets silently “updated”: X disappears and Y is there in its place, with no mention of the change. This strategy may save the agency some embarrassment, but it undermines the agency’s ability to communicate the new policy. When X turns into Y without warning or explanation, people who were aware of X tend to miss the change, thinking that X must still be the policy. Or they disbelieve the change: “That can’t be right….” Meanwhile, people who were unaware of X miss the significance of the change; Y is less noticeable, less interesting, and less newsworthy as a timeless factoid than it would have been as a shift in policy. Perhaps most important, if an agency isn’t willing to describe the change as a change, it loses the opportunity to clarify the rationale behind the change.

Australia got all this stunningly right on its website. It’s clear on the website that Australian pandemic policy changed because swine flu was turning out milder than expected. What I don’t know is how often the government managed to get the policy change message repeated in the media, or how hard it tried to do so.

A quick perusal of Australian media stories from mid-to-late June 2009 shows that the PROTECT phase was introduced just before numerous news articles covered Australia’s first H1N1 deaths. Read through this sample article and decide for yourself if it conveys mostly the Health Minister’s “milder than expected” message or mostly the country’s fear and alarm in response to its first death. It was predictable from other countries’ experience that Australia’s first few swine flu deaths would provoke a national oh-my-God moment (an “adjustment reaction”). It was very difficult for the “milder than expected” message to prevail over that oh-my-God moment.

number 4

Talk about uncertainty. One key to crisis communication is anticipatory guidance: telling people what to expect. And one main thing to expect in most emerging crisis situations is uncertainty. That’s certainly true for flu pandemics. As infectious disease experts like to say, “When you’ve seen one pandemic, you’ve seen one pandemic.”

Uncertainty is the right context for explaining policy changes. “We planned for a more severe pandemic than the one that has so far developed. So we’re changing some policies to adapt to the comparative mildness of the pandemic so far. But just as we couldn’t be sure at the outset that the pandemic would be severe, we cannot be sure now that it will stay mild. If swine flu takes a turn for the worse, our policies will need to change again.” Even the Australian Department of Health and Ageing webpage you quoted doesn’t stress uncertainty nearly enough. It hints at it only in one line describing the RECOVER phase: “Pandemic controlled in Australia but further waves may occur if the virus drifts and/or is re-imported into Australia.”

number 5

Share the crisis policymaking dilemma. Policymaking in highly uncertain situations is a gamble. You do your best to plan for a wide range of possibilities. But some decisions are necessarily inflexible. Deciding how much vaccine to buy is an example. If you buy too much, you waste precious resources; if you buy too little, you leave people unprotected, and some of them die. Similarly, the decision about how emphatically to warn the public must try to strike the right balance, yet the decision must be made before anyone knows how serious the pandemic will become.

In making these decisions, wise officials try to err on the alarming side … within reason. It isn’t feasible to prepare for the worst conceivable outcome, but it is unwise to prepare only for the likeliest outcome. So you locate your policies in the range between what’s worst and what’s likeliest. By definition, that means you usually over-prepare. And so you usually end up criticized for over-preparing – which is far preferable to being criticized for failing to protect public health. When it comes to taking precautions, the fundamental rule is: darned if you do, damned if you don’t.

The Australian government – and every government – needed to say this, all of it. It’s what governments need to say again and again, in crisis after crisis, about policymaking under conditions of uncertainty.

Yes, I know how hard that is to accomplish. The message is too complicated. It’s not newsworthy. It sounds too defensive. But the only time it is even remotely possible to get people to understand the policymaking dilemma is before the outcome is known. Once a pandemic turns out mild or severe, outcome-biased thinking rules the roost. Waiting until after everybody knows what happened and then explaining that you couldn’t have known all that when you were making your policy decisions sounds really defensive. Your only real shot at explaining the policymaking dilemma is beforehand, and beforehand is when you should explain it – not once but ad nauseam.

number 6

Distinguish scenarios from predictions. To talk meaningfully about the crisis policymaking dilemma, you have to talk about what might happen. You have to address a range of scenarios, from fizzle to disaster. The problem: Your statements about scenarios are very likely to be perceived, reported, and later remembered as predictions. Of course if you have actually been making predictions about unpredictable things, you should stop. And you will owe the public an apology if you turn out wrong (and in principle even if you turn out right) – regardless of whether your predictions were over-confidently reassuring or over-confidently alarming.

But assuming you’ve been offering people hypothetical scenarios that they tend to hear and remember as confident predictions, what can you do to underline the distinction? These three strategies should help (a little).

  • Pair your scenarios. Try never to discuss a severe scenario without mentioning a mild scenario in the same breath – and vice-versa. It’s harder to misinterpret your scenarios as predictions when they’re pointing in conflicting directions.
  • Point out explicitly that a scenario is not a prediction. “This is a scenario, not a prediction. I’m talking about one of the things that might happen, and what we’re doing to prepare for it. I’m not predicting that that’s what will happen. We don’t know what will happen.” Some reporters (and especially headline writers) will ignore the distinction even if you harp on it, but harping on it does improve your odds of getting it across.
  • “Go meta” on the problem. That is, talk about how people are getting it wrong – and take some of the blame for being misunderstood. “Some reporters and some of the public seem to think I’m making a prediction. I must sound like I’m making a prediction. I’m sorry – that’s not what I’m trying to say. I am trying to describe hypothetical situations for which we have a responsibility to prepare.”

number 7

Go meta. The last bullet point in #6 opens up a useful but little-used strategy for not being misunderstood. Go meta. Don’t just say X. Explain that you mean X, not Y, but somehow you’re not being clear enough and people keep thinking you’re saying Y, but you’re not, you’re trying to say X. And then explain it again. And yet again.

This strategy can be applied not just to the distinction between scenarios and predictions, but also to the crisis policymaking dilemma, uncertainty, the policy change message, and the mildness message – everything I have focused on in this response. But I’ll illustrate it with the distinction between scenarios and predictions:

I have been trying to explain that if the swine flu pandemic turns out even half as severe as 1918, many thousands of Australians will die. That is not a prediction. It is a hypothetical scenario. But I have noticed that my hypothetical scenarios about the pandemic are frequently reported and later remembered as predictions. I have an obligation to consider severe scenarios, and try to prepare for them. But I am so sorry that a hypothetical scenario sometimes ends up first frightening people, and later angering them. There’s another scenario we need to consider too: The swine flu pandemic could end up very mild. In fact, for most people so far it is very mild, but we can’t count on it staying that way.

I want to apologize at the start of this week’s news conference for miscommunicating last week. I thought I was explaining to you that we were considering various severe pandemic scenarios for planning purposes, not as forecasts. But the headlines after last week’s press conference showed me that I must have sounded like I was saying that the pandemic was definitely going to be very severe. I am not saying that. I have no idea at this point how severe the pandemic is going to be. I hope I can explain our what-if scenarios better this week, and I hope that tomorrow’s headlines do not make it sound like I am predicting that the pandemic is definitely going to be severe.

I am still not being clear enough that we simply don’t know yet how severe the pandemic is going to be. We have a responsibility to try to prepare for severe scenarios. But it is my fondest hope that six months from now, most of our preparations will not have been necessary – although they will have been expensive and they will have been unnecessarily alarming if we are lucky and the pandemic turns out mild. Reporters keep writing as if my agency were making predictions about the pandemic. I’m just not being clear enough. We don’t have a prediction about the pandemic. All we have is a bunch of planning scenarios, ranging from very severe to very mild.

The essence of “going meta” here is talking explicitly about the ways you are being misreported and misunderstood. What makes it work (sometimes) is taking the blame for the misreporting and misunderstanding. What officials usually feel like doing is haranguing the press for getting it wrong! But it’s more effective – and more accurate – to blame yourself for not making your points clearly, emphatically, and repeatedly enough. Reporters will rarely cover what you say when you go meta. But they will (sometimes) listen more acutely to the nuances you are trying to convey … at least the ones you keep saying you’re not being clear enough about. Feel free to think of this privately as “haranguing the press.”

Rebecca responds:

Overall, I think that the government has been saying most of what you recommend.

Health Minister Nicola Roxon repeated the “mild in most, severe in some, moderate overall” theme every time she talked about swine flu. However, our work suggests that the message might have got through more clearly if it had been delivered by someone other than a politician.

I have just one thing to note about your appraisal of the government’s website. I agree it says mostly the right things. The problem is that very few people in our surveys (less than 5%) reported using the internet to get pandemic information, and even fewer preferred it as a source. We have also done focus groups and interviews, and these suggest that people would go to these websites only if directed or if specifically looking for something. (Some of our focus group participants had obviously visited the websites to bone up on swine flu before the focus group.)

The government recently launched a new advertising campaign called “The Facts about Swine Flu,” which is designed to encourage people to have the vaccination by highlighting that H1N1 hit younger people (including pregnant women) more severely, and that more people ended up in ICU than in a normal flu season. However, I don’t think it’s working that well, because at the moment we seem to have some expert or other in the media nearly every day talking up the possibility of a return of H1N1 this winter. It feels coordinated.

Of course, the government paid for 21 million doses of vaccine, of which only around 8 million have been ordered so far. So they obviously want to use it up. On the other hand, in the last few days there have been stories saying that H1N1 will dominate seasonal flu, that probably 30% of kids had swine flu last year, and that with the 20% or so of adults who have been vaccinated we might have already got herd immunity. So, as with most of the swine flu communication, it is quite confusing. What the general public (who are probably not taking much notice) are expected to make of it all, I really don’t know.

I do think the idea that swine flu was not even as bad as normal flu has taken hold. Since vaccination rates for seasonal flu are only about 40% anyway (mainly in older and at-risk people), it’s unlikely that most people would see it as necessary to get the swine flu vax. Also, there is a mismatch between rhetoric and action. The government is saying everyone should get the swine flu vaccine, but at the same time saying H1N1 is mild, and not providing the seasonal flu vax (which now includes H1N1 as well) for free. So for your average person there’s a contradiction (given they think that swine flu isn’t as bad as normal flu). Also, I noticed today that information websites and media stories about “how to tell if you have swine flu” say something along the lines of “the symptoms are similar to those of seasonal flu.” So little wonder if people are a bit confused by the messaging.

So the problem for the Australian government is:

  • People don’t think H1N1 is bad enough to be vaccinated against.
  • We have not a shortage but rather a surplus of H1N1 vaccine.

From a risk communication perspective I guess this is a case where precaution advocacy is required. However, I think it is a bit of a losing battle.

Last year’s experience has already led most people to consider the virus as mild and that’s exactly what the government said. Now they are trying to convince people that: “Well yes we did say that it was mild but it’s still worth getting the vaccine because some people were severely affected and that could be you, even if you are not in a high-risk group, even though last year you would have heard a lot that the people dying had underlying conditions and we were focusing our efforts on only the most vulnerable. Now we know (now that the “facts about swine flu” are in) that around 30% of people that were hospitalised and who died had no underlying condition and that they were on average much younger than normal seasonal flu victims. And we expect a second wave to happen any day and it could be worse than last year.”

Of course, this is all presented in the context of the many doses of vaccine that have already been paid for. I’m sure this leads many people to think that the government is now trying to talk the pandemic up just to get rid of the vaccine, not necessarily to make sure that the community stays healthy.

So it seems to be a bit of a low-hazard, low-outrage scenario, which probably wouldn’t call for any risk communication at all – except that there is excess vaccine, and the situation could transform into a high-hazard, low-outrage or even high-hazard, high-outrage situation if the virus changes or a second wave comes that is worse. I wonder whether some dilemma-sharing might be the best way to encourage vaccine uptake and counter complacency?

Peter and Jody respond:

The Australian government must feel like it can’t win! It was more candid than most governments about the mildness of H1N1 so far, but the Australian public still got the impression that it was hyping the pandemic. Now it’s trying to convince the public that getting vaccinated against H1N1 is a good idea as an extra margin of safety, and people are saying, “But you told us it was mild.”

It sounds to me like the Australian government’s main messages right now should be these:

  • So far, the swine flu pandemic has been mild, milder than we initially expected. For most people who have gotten sick, it’s been pretty much like the seasonal flu.
  • We keep saying “so far” because the story isn’t over. Swine flu could come back in a more severe form. We have no way to know if this will happen. There are no signs of it yet.
  • There have been some important differences between swine flu and seasonal flu so far. The seasonal flu kills mostly the elderly, but swine flu has killed a wider range, including a lot more children and young adults than the seasonal flu. Most of the swine flu victims so far have had preexisting health problems, but about 30% were previously healthy.
  • Back when we worried that swine flu might turn out more severe than it has been so far, we made the decision to buy 21 million doses of swine flu vaccine. We have a lot left, and it would be a shame to waste it. We think it makes sense for people to get vaccinated against swine flu. The H1N1 vaccine is free (except for your doctor’s consultation fee) and readily available.
  • Even though the odds that swine flu will do you serious harm are pretty low so far, the odds that the H1N1 vaccine will do you harm are much, much lower. We can’t honestly say it’s a top public health priority to get vaccinated against swine flu – not compared with making sure your children are up to date on their childhood vaccinations; not compared with wearing your seatbelt, or quitting smoking, or taking precautions against sexually transmitted diseases. It isn’t a top priority, at least not right now. But it is sensible. Why risk the small chance of serious illness and the bigger chance of an unpleasant week in bed when a simple jab can cut both risks substantially?
  • As for the seasonal flu vaccine…. [Here’s where it gets a bit complicated. Presumably getting the seasonal vax makes more medical sense than getting the swine flu vax, since it offers protection against H1N1 plus two other flu strains. But at the moment there’s plenty of swine flu vaccine in Australia and it’s free except for the GP consultation fee; by contrast, there’s a shortage of seasonal flu vaccine, people in low-risk groups have to pay for it, and there’s some question now about its safety for young children. It seems to me that that’s exactly what the Australian government should be saying, sharing the policy dilemma regarding tradeoffs between the two vaccines.]

Education and training for risk communication

Name:Rusty Cawley
Field:Public relations
Date:April 25, 2010
Email:rcawley@tamu.edu
Location:Texas, U.S.

Comment:

You describe your career as “muddling” into risk communication, a discipline that did not exist formally when you started out. What combination of education and training would you recommend for a risk communicator starting out today?

Peter responds:

The first national conference with the phrase “risk communication” in its title took place in January 1986 in Washington DC. Today there are too many risk communication conferences to count, let alone attend. I now actually meet people from time to time who tell me they’re in risk communication and then ask me, “and what do you do?”

Even today, most risk communication practitioners learned on the job. They studied something else in college (and perhaps in grad school): risk assessment, mass communication, psychology, public health, health education, etc. My own academic background, for example, is in psychology and mass communication.

I think this is still a viable career path, especially for people who have already finished their schooling and aren’t enthusiastic about doing it again. Read a lot of risk communication – in journals, on websites like this one, etc. Read the news with a risk communication perspective in mind. (This week’s question: What would you have done differently if you were helping European authorities talk about the air travel problems caused by Iceland’s volcanic eruptions?) Start getting your risk communication opinions into other people’s hands: Write letters to the editor, trade journal articles, unsolicited emails full of unpaid advice, blog entries, even comments on this Guestbook. Then look for a job where you’ll actually get paid for your risk communication opinions.

But the time is fast approaching when new risk communication practitioners will need formal risk communication credentials. Now that risk communication exists as a field, I suppose the proper education and training for that field is education and training in risk communication.

More and more academics are taking an interest in riskcomm, and some have managed to cobble together undergraduate and graduate programs. I don’t know most of these programs well enough to assess them. There are also a large number of individual risk communication courses, which can be housed virtually anywhere, from journalism to public health.

The best way to judge these programs and courses, I think, is to look at course reading lists. I would focus especially on two questions:

number 1

Are the assigned readings mostly real work in risk communication, or are they recycled and relabeled work from fields like health education and public relations? There’s nothing wrong with learning from long-established collateral fields, but if you’re trying to get grounded and credentialed in the comparatively new field of risk communication, make sure that’s really what you’re getting.
number 2

Are the assigned readings actionable and readable? Academic researchers tend to be interested in doing (and reading) research that makes a new contribution to theory, even if the topic being studied isn’t terribly practical. And academic researchers tend to write in a style that’s highfalutin, abstract, and often muddled. Try to study with teachers who write clearly and interestingly … and who write for practitioners, not only for other scholars.

In spite of what I said in #1 above, a solid academic program in risk communication should push students to take courses in collateral fields as well. The three most important, in my judgment, are these:

  • Social psychology and persuasion theory (and perhaps even advertising, the discipline that takes applied persuasion most seriously)
  • Risk assessment (so you can understand what your employer or client is telling you about whether something is dangerous or not)
  • Mass media studies (including social media and public relations)

If you’re in a risk communication program that lets you get through without much social psych, risk assessment, and mass media, supplement the program on your own. If you’re considering entering such a program, look elsewhere.

A doctorate is a very different animal than an undergraduate or master’s degree. What matters most when studying for a Ph.D. is the individual faculty member you hope to work closely with – TA his or her courses, RA his or her research, help with his or her consulting, and do your dissertation under his or her guidance. The rest of the program matters, but the mentor matters more – so much more that it’s not crazy to jump through the hoops of a program that’s not really your cup of tea in order to work with the person you most want to work with. To find the right mentor, you need to read the work of university profs until you find someone you can imagine modeling yourself on for a few years. Then start a correspondence. Then ask what sort of doctoral program might be arranged.

The other side of the argument: You may change your mind. The prof you came to study with may leave, or retire, or turn out to be a jerk. It’s nice if there’s a decent program to fall back on. A third side of the argument: You may be more interested in the credential than the mentoring, and mostly looking for a place that won’t make you jump through too many hoops. Still, in a small field like risk communication, I think picking a Ph.D. program is mostly picking a mentor. You want to end up feeling and functioning more like an apprentice than a student.

I have three jaundiced, undocumented, and I hope incorrect prejudices about why it’s difficult to get really useful education and training in risk communication.

  • I think too many “risk communication” practitioners are really just relabeled PR people or science educators – two fields that I consider very different from risk communication (and from each other).
  • I think “risk communication” educators tend to be closer to the mark than practitioners in understanding what’s new and different about risk communication. But too many riskcomm educators are excessively focused on theory-building and data-dredging, as opposed to what practitioners actually need to know. I wouldn’t advise studying with anybody who isn’t out there consulting in the real world.
  • I think the exceptions, the people who are good risk communication practitioners/consultants themselves and should be spending some time in colleges and universities preparing the next generation of practitioners/consultants, are having too much fun (and making too much money) trying to help solve big problems instead. They don’t get around to teaching a lot of courses.

I should add that the “you” to whom this answer is addressed isn’t Rusty Cawley, who posted the question. Rusty is a 20-year veteran PR man who has made the transition to risk communication and now works as communications coordinator for the Integrative Center for Homeland Security at Texas A&M University. He is a fan of my work, and I’m a fan of his. Check out his risk communication website, “ColdCrisisTV.”

How to do experimental research to test risk communication principles

Name:Rebecca Tooher
Field:Health researcher
Date:April 25, 2010
Email:Rebecca.tooher@adelaide.edu.au
Location:Australia

Comment:

This comment relates to a Guestbook post from September 2009 regarding measuring the success of your risk communication principles in an experimental setting. You suggested that in the real world focus groups and possibly a survey would be the only practical way to assess communication effectiveness. However, I think there may be some occasions where a randomized design would be ethical and appropriate. I am specifically thinking of a cluster randomized design which compares standard practice (i.e. the communication strategy that would be used anyway) with one based on your risk communication principles.

To be ethically acceptable, randomization should only be used when there is equipoise – when there is genuine uncertainty about which course of action is more effective. It seems to me, given the lack of empirical evidence about risk communication outcomes, that equipoise would quite often exist. (Arguably, proponents of each approach might believe that their approach is better, but if that belief is based on nothing more than gut feeling, equipoise would still exist – that is, they genuinely don’t know which is better.)

Something like this would probably be best tested in a situation where the hazard is not extremely high. It’s difficult to see a research ethics committee agreeing to a study where a failure of the risk communication strategy could be seen to lead directly to deaths – although logically, since we don’t know that the standard communication plan will work either, unproven communication strategies are in practice permitted all the time. Ideally, the study might involve a situation where the desired outcome is a behavior change and where precaution advocacy is probably needed and can be leveraged by using your theory of increasing outrage. If this was successful, you might then use the lessons learned in undertaking the study to apply the same methodology to try to reduce outrage.

Practically, I can see this working in more closed settings rather than necessarily at a community level – for example in a health service, where the units of randomization would be individual administrative units (e.g. hospitals or community health centers). Ideally, you would want a setting where people wouldn’t be likely to discover that the communication method was different for different randomized units. It might also be possible to test the theory with community members in a general practice setting (i.e. primary health care physicians), where individual practices could be randomized but the outcome would be measured in the patients. I’m sure there would be similar administrative units in areas outside of health care.

Of course there are specific methodological challenges in using cluster randomized designs (the sample size needs to be adjusted because the individuals in each unit that is randomized are more similar to each other than to individuals in other units). But there is a growing methodological expertise on how to address these challenges. These kinds of designs are being used, in health care at least, to assess complex packages of care and particularly behavior change interventions.
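
As a rough, hypothetical illustration of the adjustment Rebecca describes – a sketch added here, not part of her comment – the usual approach multiplies an individually randomized sample size by a “design effect” of 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intra-cluster correlation. A minimal Python version, with purely illustrative numbers:

    import math

    def clusters_needed(n_individual, cluster_size, icc):
        """Clusters per arm after applying the design effect
        DEFF = 1 + (cluster_size - 1) * icc to an individually
        randomized sample size."""
        design_effect = 1 + (cluster_size - 1) * icc
        n_adjusted = n_individual * design_effect
        return math.ceil(n_adjusted / cluster_size)

    # Illustrative assumptions only: a trial that would need 300 people per
    # arm under individual randomization, run instead across clinics of
    # about 50 patients each with an assumed ICC of 0.02, needs roughly
    # 12 clinics (about 600 patients) per arm.
    print(clusters_needed(300, 50, 0.02))  # -> 12

The point of the sketch is simply that the larger the clusters and the more alike the people within them, the more the required sample size grows.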

Ideally, focus groups and surveys could be embedded in a study like this to explore some of the more complex responses and barriers/enablers to uptake of the various communication methods. This would strengthen the findings of the study.

Peter responds:

Together with my wife and colleague Jody Lanard, I have long fantasized about creating a foundation that would fund research only if advocates of competing hypotheses applied jointly, stipulating in advance that both sides would abide by the results. So I very much like your point that if you haven’t got a clue which works better, X or Y, there’s no compelling ethical reason not to test the question in a real-world experiment – whereas if you were already pretty confident that X was preferable, you’d arguably be mistreating the part of your sample you assigned to Y.

And I accept your counsel that a cluster randomized design is usable (and used) to get around the problems of real-world randomization. Because you randomized, your conclusions can be a lot more confident than conclusions drawn from focus groups or surveys, or from case studies, or from simulation experiments.

Nonetheless, let me say a word in defense of focus groups, surveys, case studies, and simulations … and even less rigorous methodologies than those. Many of my clients despair of being able to do methodologically state-of-the-art research on the behavioral impacts of alternative messaging strategies. They lack the requisite expertise, budget, or time (or all three) to design and implement a cluster randomized experiment. In fact, they may lack the expertise, budget, or time to field a fully professional focus group or survey.

Too often they conclude that there’s no point in doing any research at all. So they go with standard practice, or their intuition, or my intuition.

Rather than doing no research at all, I’d like to see them ask a few key questions of a dozen people on the bus or in the supermarket. (We can call this a “sample of convenience” so it doesn’t feel so amateur.) Of course a carefully done focus group or survey is far more reliable than this kind of slapdash, informal “study” – and a true real-world experiment is far more reliable – though less rich – than a focus group or survey. But even an extremely casual (and totally unpublishable) piece of research is a huge improvement over no research at all.

Every time I have managed to convince a client to spend an hour or two “testing” a proposed communication strategy with a handful of miscellaneous people, the strategy has improved markedly. Or at least it has changed markedly. To be certain the change was an improvement, we’d probably need a cluster randomized experiment.

Meeting the needs of relatives of disaster victims

name: Paul
This guestbook entry
is categorized as:

      link to Crisis Communication index

Field:Student
Date:April 25, 2010
Location:U.K.

Comment:

I have found the material on your website very informative and helpful in making sense of risk communication.

I am presently completing my master’s degree, which is risk-based. I am studying the conflict that arises between managing the needs of victims’ relatives and the requirement to gather evidence at disaster scenes – i.e., keeping determined relatives away from disaster scenes.

I am trying to address this conflict from a risk communication perspective and would be grateful for any guidance which may assist.

Peter responds:

Anyone who watches cop shows on television is familiar with the problem. The relatives of the victims of crimes and natural disasters rush to the scene and get in the way. The authorities have to keep them from interfering with rescue and recovery efforts, from contaminating evidence, and from getting hurt themselves.

It is of course natural for the relatives to want to get to the scene, see for themselves, look after the bodies and possessions of their loved ones, and begin the process of grieving. It is also natural for those in charge of an already chaotic situation to want to get the relatives out of the way with a minimum of additional effort – especially if they’re feeling anxious and stressed themselves. Maybe they can spare a minute or two to explain why it’s so important for the relatives to leave them alone. No more than that.

But brief, one-sided “explanations” are not what we need most when we’re coming to grips with a devastating and totally unexpected loss.

So the situation is a setup for conflict. People in these sorts of situations aren’t at their best. That’s obviously true of the relatives, but it may also be true of the authorities, if coping with catastrophe isn’t their daily fare. Under stress we often regress; we become more childish, self-centered, demanding, and unreasonable. We look for a scapegoat. And so the people in charge may project their feelings of discomfort and inadequacy into anger at the relatives. And the relatives may project their grief into anger at the people in charge. The relatives are “interfering” with essential work. The authorities are “insensitive” and “mismanaging” the crisis.

Many hospitals, airlines, morgues, police and fire departments, and similar organizations have crisis management protocols that include a brief section on communicating with the relatives of victims. Some are available online. I don’t know if there has been a systematic review of the recommendations contained in these protocols. (I didn’t find one.) If not, that would be a valuable project.

The protocols cover a wide range of communication issues: how to handle the initial notification; how to conduct interviews aimed at eliciting necessary information; how to arrange for crisis counseling when needed (and how to tell if it’s needed); how to manage the formal identification of bodies; etc. I’m sure some of the protocols must have something to say about how to keep the relatives out of the way – and maybe even how to do so empathically.

Here are my thoughts on that last task:

number 1

Make sure talking to the relatives is part of somebody’s job.

This is probably the most important recommendation I’ve got. Organization is crucial at disaster scenes. Everybody has to focus on his or her job, trusting that colleagues will do theirs. Too often, talking to the relatives isn’t anybody’s job; it’s everybody’s distraction. Crisis managers understandably (even rightly) put a higher priority on getting the situation under control than on communicating about it. If you’re responsible for the recovery effort or the evidence-gathering, you’re not going to have a lot of time left over for anything else.

So somebody who understands the recovery effort and the evidence-gathering but isn’t managing them needs to be explicitly charged with the communication job – with talking to top management, political leaders, regulators, the media … and the relatives of victims. (All those audiences may require more than one communicator.)

If you haven’t got any communication professionals on hand and can’t spare an emergency responder, sometimes volunteers can be asked to stand with the families and run interference for them where possible. A volunteer who is physically unable to dig in the rubble may be perfectly able to help the desperate relatives bear the agony.

number 2

Acknowledge and share the angst.

“Here’s why you need to stay out of the way” is a huge improvement over “Get out of the way!” But the explanation is still too impersonal to get through to many distraught relatives of victims, too oblivious to their needs. “I’m sure you must want to get as close as you can to your loved one” has the opposite problem. It’s too intrusive. People don’t like to be told how they feel – especially in high-stress situations.

So deflect it: “In situations like this, many relatives understandably feel a strong need to get as close as they can to their loved ones.” And then talk about how you feel: “I wish I could let you closer to the scene. If only it were safe to go in!”

number 3

Explain what’s happening and what’s going to happen.

Keeping relatives out of the way is essential. Keeping them out of the loop is not. In stressful situations, we all tend to become highly invested in knowing what’s happening and what will happen next. Knowing gives us a sense of control, and control makes awful situations a little more bearable. Wise doctors tell their patients what they’re doing and what they’re about to do; wise emergency managers give the victims’ relatives the same information for the same reasons.

Especially important is answering the “when” questions: “When will I be able to see my husband?” “When will I be able to bury my wife?” It’s important to address these questions even if you don’t know the answers. You can at least share the desire to know: “If only we could predict how long it’s going to be! It is horrible not to know. I hesitate to speculate, because it may take much longer than my best guess, but I think possibly within a few hours. I hope that doesn’t sound like a promise. It’s really only a guess.”

number 4

Think about how to answer tough questions.

Undoubtedly the toughest question regarding the death of a loved one is “Did he [or she] suffer?” If the victim suffered horribly, most experts suggest avoiding the truth without actually lying. “It’s very hard to know,” for example, is a time-tested answer. But it is also thought that the relatives intuitively know what they need: the truth or the sugar-coated version; knowing (and perhaps even seeing) what actually happened or not having that awful image burned into their brains forever. You get different questions, of course, regarding victims who might not be dead, starting with the two most difficult: “Is there a chance that he [or she] is alive?” and “Can’t you do more?”

Some of the not-quite-so-tough questions deserve to be answered even if they’re not asked. Experienced emergency managers know better than I do what questions the relatives of victims are reluctant to ask but very anxious to have answered. Certainly these two are among them: “How do you know it’s my son in there?” “Who’s going to make sure my daughter’s wedding ring doesn’t disappear?”

number 5

Give relatives things to do – at least a chance to talk.

Most people find crises more bearable if they have things to do. At the scene of a disaster, there is typically not much you can ask the victims’ relatives to do … though it’s worth considering whether they can play some kind of volunteer role. At a minimum, they can talk.

It’s obviously important to debrief the relatives for information you actually need. But many relatives will want to tell you more than you need to know. They may want to talk about what their loved one was like, what the relationship was like, how devastated they are, how unfair the fates are, how badly they think the crisis is being managed. Of course they may not want to talk about any of that. But if they do want to talk, it’s important to have someone available to listen.

number 6

If you can do it over coffee, so much the better.

A bit of familiar comfort helps people bear awful situations. And people who can bear what they have to bear are likelier to let you get on with doing what you have to do.

Bottom line: The best way to keep the relatives of victims from interfering with your efforts to manage the crisis and collect the evidence is to try to respond empathically to their needs. Some needs you can’t meet (and you should try to say so empathically). Meet the ones you can – especially their need for another human being who talks to them and listens to them.

The Catholic Church’s pedophilia scandal: contrition, dilemma-sharing, and accountability

name:John
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Pharmaceutical manager
Date:April 25, 2010
Location:Taiwan

Comment:

I’m a big fan of your site. I’ve been combing through it for a few years now, ever since I saw your name in Freakonomics.

I was wondering what advice you would give the Catholic Church for the current pedophilia scandal. In recent news articles, it sounds like the Church is taking the “hazard” approach, logically stressing that such abuse happens everywhere, and priests are no more likely to commit such acts than the average person.

The media have of course focused on the “outrage” portion – the outrage that someone as trustworthy as a priest would commit such acts, and that the organization would then cover it up quietly.

I think this would be a case where inviting some people “inside” would be a way to show they are opening up (like when you suggest chemical companies can appoint oversight boards from the local community to get people involved).

Your thoughts?

Peter responds:

There’s too much to say about the communication issues raised by the Catholic Church’s pedophilia scandal to encapsulate fully in a Guestbook response. I will focus on three aspects: contrition, dilemma-sharing, and accountability.

But first, let me echo what you said about hazard versus outrage. Child molesting is a huge outrage, among the hugest. When the molester is a priest, an agent of God entrusted with children’s moral and spiritual wellbeing, the outrage is exacerbated. When the Church has turned a blind eye for decades, the outrage is exacerbated further.

And when Church officials emphasize that other institutions also have a child molesting problem, and complain that it is unfair to single out the Catholic Church for criticism, they are, as you say, unwisely focusing on the hazard instead of the outrage. “Everybody does it” is never an effective defense; it’s not a defense a priest would accept from a misbehaving parishioner or child. But that’s not the point. The point is that “defense” isn’t what you do when people are rightly outraged at you. You apologize.

Contrition

The most obvious and most important failure of the Church’s communication has been the insufficiency of its expressions of contrition. The Church can’t protect its moral authority by understating its sins. It must regain its moral authority by acknowledging them wholeheartedly.

The irony here is that no institution in Western society understands the dynamics of contrition and forgiveness better than the Catholic Church. Both Catholic doctrine and secular experience make forgiveness conditional on the following steps:

  1. Admit you did it.
  2. (In the secular process, shut up while we berate you – a step not required by God but absolutely essential to secular forgiveness.)
  3. Apologize – expressing regret, sympathy for the victims, and moral responsibility/fault.
  4. Explain what failings in your organization let it happen.
  5. Make it right – in particular, compensate your victims and improve your organization so it’s less likely to happen in the future.
  6. Do a penance.

I discussed this process in more detail in a 2001 column entitled “Saying You’re Sorry.” The only step I’ve added since then is explaining what failings in your organization let it happen. That step has its own column (also written in 2001), “The Stupidity Defense.” Note that the stupidity defense is grounded in my conviction that organizational misbehavior is usually more stupid than evil. Acknowledging you were stupid is a good way to keep people from unfairly assuming you were evil. But covering up pedophilia to preserve the Church’s reputation really was evil – even if it was stupid too, since the cover-ups were bound to emerge eventually and lead to the crisis the Church now faces. To be forgiven, the Church will need to own up to more than stupidity.

These six steps on the road to forgiveness must be taken in the right sequence. You get no credit for apologizing until you have admitted precisely what it is you’re apologizing for, and until the rest of us have had a chance to yell at you for what you did. And until you have apologized sufficiently, explaining what went wrong sounds like you’re making excuses, and compensating your victims feels like you’re trying to bribe them.

It’s fairly clear that the Church has yet to go through some of these steps with regard to pedophilia. Obviously, it took the Church far too long to own up to what was going on. But it has done so now, at least to some extent. And it has certainly been berated, and continues to be berated. Where the Church is most obviously falling down is in Steps 3 and 4. It hasn’t been nearly apologetic enough (Step 3), and it hasn’t been candid about the organizational failures that contributed to the problem (Step 4). As a result, it gets very little credit for its efforts to make it right (Step 5) – to compensate the victims of priestly pedophilia and to implement changes that will make future abuses less likely to happen and less likely to get covered up.

Instead of focusing on its own sins, the Church has taken offense at its critics. A high-ranking Vatican official compared the widespread criticism to anti-Semitism. Many other Church leaders attributed it to anti-Catholic sentiment. And the Pope himself lashed out against the news media, especially The New York Times, asserting that the Church would not be “intimidated by the petty gossip of dominant opinion.”

This is exactly the opposite of contrition.

Undoubtedly, the outrage of Church officials is getting in the way of their ability to express remorse, perhaps even their ability to feel it. It is also getting in the way of our ability to hear the Church’s remorse, even when it is expressed (as in the Pope’s letter to the Irish clergy). We hear mostly the Church’s defensiveness, which inevitably increases our outrage.

Of course the outrage of the Church at its critics has some foundation. It is true that many good priests have been damaged by a broad-brush indictment; it is true that in some cases specific allegations have been bandied about without convincing evidence; it is true that other institutions that work extensively with children face similar problems. There are valid points to be made in the Church’s defense. But defense and counterattack have interfered with what should be Church officials’ top communication priority: contrition.

Importantly, what most requires expressions of contrition isn’t the pedophilia itself. Child abuse is horrifying, but that sin is individual, not institutional. The Church’s sin is choosing not to see – and when forced to see, choosing not to tell.

Far too often, Church officials averted their eyes from instances of priestly pedophilia. Far too often they ignored, or ridiculed, or paid off those who dared to come forward with accusations. Far too often they took no action, or inadequate action, to prevent recurrences. The sin of willful obliviousness has been substantially corrected. The Church is no longer oblivious to child abuse; if anything it is now hyper-aware. But that sin needs to be acknowledged more often, more dramatically, and more contritely than has been the case so far.

And even when the Church has taken appropriate action to protect children from priests it had reason to know were pedophiles, it has often done so secretly. This seems to have had more to do with protecting the reputation of the Church itself than with trying to save the reputations, vocations, and souls of the offenders. Its preoccupation with secrecy too often kept the Church from doing what it should have done to aid the recovery of victims and to identify additional victims so that they, too, could be helped. I believe it is as important for the Church to acknowledge, atone for, and correct its self-protective secrecy as its obliviousness.

We can feel some empathy for senior Church officials. Their reluctance to express contrition for their obliviousness and secrecy regarding pedophilia is fueled by more than outrage at their critics. I would guess it is also fueled by self-loathing. Some Catholic leaders must find it unbearable to contemplate what they did and failed to do, the extraordinary damage their obliviousness and secrecy caused to thousands of children … and to the Church itself. Facing up to their role in perpetuating the horror must be agonizing. But it must be done.

And it must be done again and again. Even for offenses far smaller than these, it is rarely enough to apologize once or twice – and never useful to respond to continuing criticism by pointing out that you already apologized. I am certain that many Church leaders feel they have abased and humiliated themselves sufficiently; I am certain that some believe the apologizing has gone on too long already. But “let’s put that behind us” should always be the victim’s line, never the perpetrator’s. It is not for the Church to decide when no further expressions of contrition are required.

The Church will also need to punish those who contributed most egregiously to the obliviousness and the secrecy. Punishing individual wrongdoers is less important to forgiveness than acknowledging and correcting institutional failings; the core problem here is with the barrel, not a few apples. Nonetheless, punishment is part of “making it right”; it is also part of making institutional contrition credible. I think the Church has made substantial progress in taking action against pedophiliac priests, and I think it has largely stopped trying to keep the problem secret. Where the Church has made the least progress is in punishing the bishops and archbishops who sustained the willful obliviousness and the cover-ups. The Church’s pedophiles are being dealt with a lot more aggressively today than those in the Church who allowed them to flourish. I think the scandal will not subside until some of these higher-ups are punished.

Is the Pope himself one of the higher-ups who must be punished? I don’t know. I have read the allegations that in previous Church positions he was at least complicit in a pattern of cover-up. Given how widespread the pattern was, and how powerful Cardinal Ratzinger (now Pope Benedict XVI) was, I assume he was part of the pattern. How could it be otherwise? I have also read that the Pope (when he was Cardinal Ratzinger) was instrumental in rebalancing due process for accused priests against protection of children, reversing centuries of Church law and decreeing that insufficient evidence was no longer sufficient justification for doing nothing.

My impression is that the Pope was part of the obliviousness and part of the cover-up ... but saw the light more quickly and took action more responsibly than many in the Church. As the head of the Church he must now express not just the Church’s institutional contrition, but also his own personal contrition. He needs to say more about his role in the scandal over the decades, and how he sees that role today. I am not a Roman Catholic, but I am told that public papal expressions of contrition – and of sinfulness – are neither unprecedented nor contrary to doctrine.

Now would be an excellent time for Pope Benedict to acknowledge publicly his sins and the sins of his Church with regard to ignoring and covering up priestly pedophilia, and to begin working his way through the steps Catholic doctrine says are required for forgiveness. I don’t know what penance is most appropriate for the Pope, or for other Church fathers who tolerated the intolerable for decades. But I am sure the Pope will be able to think of some appropriate penances, and so will the public.

On April 15, just as I was completing this response, the Pope referred publicly (though obliquely) to the pedophilia scandal for the first time since his March 20 letter to the Irish clergy. He talked about penance:

I must say, we Christians, even in recent times, have often avoided the word “repent,” which seemed too tough. But now, under attack from the world which talks to us of our sins, we can see that being able to do penance is a grace and we see how necessary it is to do penance and thus recognize what is wrong in our lives.

The Pope saying that Christians should recognize the value of doing penance isn’t the same as the Pope publicly doing penance for the Church’s tolerance of pedophilia. And doing penance for this sin would be premature. Penance is the last step in forgiveness, and the Church is still stuck at the third step; it hasn’t apologized enough yet. The Pope’s April 15 statement is nonetheless a good sign.

Dilemma-sharing

Eventually, the Church needs to make more use of dilemma-sharing.

I say “eventually” because in situations like this contrition must precede dilemma-sharing. An institution that hasn’t sufficiently acknowledged that it often ignored the problem or covered up the problem is disqualified from explaining that it’s a tough problem to solve. Dilemma-sharing without contrition just looks like another way of making excuses.

But contrition without dilemma-sharing is sterile. Part of the forgiveness process is to “make it right.” It’s not enough to regret the problem; the Church needs to make a determined effort to solve the problem. It dare not leave the misimpression that priests will stop abusing children as soon as the Church owns up to its history of obliviousness and cover-up. There are more scandalous pedophilia revelations to come – both of past horrors and of new horrors. The Church should say so, often – not matter-of-factly but with anticipatory anguish and sorrow. Any implication that the scandal is over will backfire badly when new chapters emerge.

The essence of dilemma-sharing is this: The Church needs to help us see that there are no easy answers.

In particular, Church leaders need to talk about the awful tradeoffs between protecting children from a priest against whom an accusation has been made and protecting priests from false accusations. Complicating this conflict is the reality that children who have been molested at home do sometimes accuse someone else; taking such accusations at face value abandons both the priest and the child. False accusations can result under other circumstances as well – such as when a malicious adult persuades a malleable child to invent a story or embroider an essentially innocent encounter.

I don’t know what percentage of pedophilia charges against priests are false. Surely there are more genuine cases that go undetected than false accusations. Nonetheless, the dilemma posed by the risk of false accusations is real.

In each individual case, proof of child abuse is extremely difficult to come by. One solution is to wait until a pattern has emerged. But waiting for a pattern means allowing a suspected priest to keep doing what he is doing – an incredible injustice to the second or third child whose complaint proves the pattern. The alternative is to expose, or punish, or at least isolate a suspected priest against whom the evidence is scanty. That too is an injustice.

For a long time, the Church has “solved” this dilemma with secret investigations. It looked into pedophilia accusations quietly, informally, in a way that protected the suspected priest’s reputation and due process rights. The fatal flaw here was that the Church was also attending to its own reputational concerns (not to mention legal and financial concerns). If the Church could convince itself that the evidence was insufficient, it could make the problem go away … at least until another victim came forward. And so a legitimate worry about false accusations slid easily into a pattern of cover-up.

The Church could wash its hands of the responsibility for policing priestly misconduct, and simply refer all complaints to the cops. This is what many critics have recommended, but it has its own drawbacks. The legal system is far more strictly bound by the rules of evidence than a Church investigation; a pattern sufficient to persuade a bishop to remove a priest might still be insufficient to persuade a district attorney to file charges. (And should a priest acquitted because of insufficient evidence continue in his pastoral duties?) Moreover, many children and parents who can find the courage to tell one priest what another priest has done would simply refuse to come forward to the police and the courts.

As it shares this dilemma, the Church needs to acknowledge that thoroughly justified outrage at pedophiliac priests makes us all understandably uninterested in fairness to suspected offenders. But no good would come if the Church were to lurch from cover-up to witch hunt. The Church has been there before too, and Church leaders should say so.

Another possible approach is simply to keep priests away from children as much as possible. This is the solution taken by many schools and daycare centers today. Kindergartners no longer get to sit in the teacher’s lap, and a child who seems troubled is unlikely to be invited into the cloakroom for a private conversation. (There is now a camera in the cloakroom.) We pay the price of reduced innocent intimacy rather than risk the horror of child abuse … and, for teachers and schools, the horror of a false allegation of child abuse. Think back over some of the hallowed Hollywood movies of priests helping kids – and try rewriting the scripts without touching or privacy.

I don’t know the best way for the Church to cope with the child abuse issue. I’m not sure the experts know – and I’m not an expert. I do know that the Church has failed to make clear that the answers aren’t obvious. And I know that it cannot do so until it has first apologized more fully for its longtime failure to see the problem, own up to it publicly, and start looking for answers.

For the foreseeable future, every dilemma-sharing effort will need to be accompanied by a reminder of the Church’s contrition: “Notwithstanding the genuine dilemma of what to do with unprovable charges of pedophilia, it is clear that even when the evidence was substantial, the Church has too often ignored or even covered up that evidence instead of acting to protect the children in our care. Now that we are addressing the problem at last, we are coming to realize that there are no easy answers. But this does not in any way excuse our failure for so long even to look for the answers.”

Although I am not an expert in pedophilia, I would bet my mortgage that the priesthood attracts more pedophiles than, say, plumbing – for two reasons:

  • The obvious reason – priests get to have a lot of close, intimate, emotionally charged contact with children.
  • The less obvious reason – some people who are struggling with unacceptable sexual urges seek out churches (and priestly celibacy) in an effort to control themselves and avoid wrongdoing. Sometimes that effort succeeds, but sometimes it fails.

Unless it has persuasive evidence that this is false, the Church needs to acknowledge that it is true, or probably true, and needs to think through publicly its implications. This is yet another dilemma that deserves public discussion.

Accountability

You said it perfectly when you posed the question: The Church should fashion a role for outsiders – and especially for critics – in its response to pedophilia ... just as the oil and chemical industries have had to fashion a role for critics in their response to pollution.

Insularity is one of the most vivid characteristics of the Catholic Church. It is clearly both a strength and a weakness. But in coping with pedophiliac priests, the Church’s insularity has been a weakness. I doubt there are many sex abuse experts in high places in the Vatican hierarchy. Or many women. In an April 12 cover article in Newsweek, Lisa Miller writes: “The cause of the Catholic clergy's sex-abuse scandal is no mystery: insular groups of men often do bad things.”

Remember, the Church’s institutional sins are obliviousness and cover-up – not pedophilia itself. A secretive, self-protective, inward-looking, unworldly culture (that is, an insular culture) is especially vulnerable to obliviousness and cover-up.

Even if the Church’s inwardness and isolation were not part of the problem, accountability to critics outside the Church – or at least outside the clergy – should certainly be part of the solution. When I try to convince companies to establish Community Advisory Panels (CAPs) in environmental controversies, I usually stress the importance of the CAP as a source of credibility. “When outsiders are allowed inside,” I tell my corporate clients, “they get to see how difficult your environmental problems are, how hard you are working to try to solve those problems, how much money you’re spending, how much progress you’re making. These are not things you can easily explain to people who are disposed to mistrust you. When they see these things for themselves, they slowly come to trust you more.”

Though I typically make the point less emphatically, the Community Advisory Panel is also a source of wisdom. Outsiders not only learn about all the good things you’re doing (and discover some bad things they hadn’t known about before). They also point out some additional good things you ought to do. Some of their advice isn’t feasible, of course – and some that might be feasible you’re simply unwilling to take. But every CAP I have ever watched operate managed to come up with suggestions the company was able and willing to implement. The result wasn’t just improved corporate credibility. It was also improved environmental protection.

I am confident that the Catholic Church has much to learn from its critics about how best to address its twin problems: pedophilia itself, and the culture that too often ignored, tolerated, and covered up pedophilia.

Letting outsiders in isn’t just a way of showing the world that the Church is changing. It is a way of changing the Church.

Applying “Risk = Hazard + Outrage” to financial markets

name:Anonymous
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Technology risk officer
Date:March 17, 2010
Location:New York, U.S.

Comment:

Thank you very much for the very interesting presentation at the technology risk forum. It was very nice to understand the underlying logic behind the “Risk = Hazard + Outrage” equation.

As part of internalizing it, I can clearly relate the “hazard” part to a loss distribution, which captures the expected value – probability times impact. On top of that we have the “outrage” variable, which, as we have seen in the recent past, is at least as important.
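
To make that concrete, here is a minimal, purely illustrative sketch – every number below is made up – of how the “hazard” side reduces to an expected loss and to a percentile of a simulated loss distribution (a VaR-style figure):

    # Purely illustrative: hypothetical loss scenarios, each an (annual probability, impact in $M) pair.
    import random

    scenarios = [
        (0.20, 5.0),    # $5M loss with 20% annual probability
        (0.05, 40.0),   # $40M loss with 5% annual probability
        (0.01, 250.0),  # $250M loss with 1% annual probability
    ]

    # Expected annual loss = sum of probability x impact = 1.0 + 2.0 + 2.5 = $5.5M
    expected_loss = sum(p * impact for p, impact in scenarios)
    print(f"Expected annual loss: ${expected_loss:.1f}M")

    # A VaR-style figure: the 99th percentile of a crude Monte Carlo loss distribution.
    random.seed(0)
    simulated = [sum(impact for p, impact in scenarios if random.random() < p)
                 for _ in range(100_000)]
    var_99 = sorted(simulated)[int(0.99 * len(simulated))]
    print(f"99th-percentile annual loss: ${var_99:.1f}M")

Neither number says anything about the “outrage” side.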

While you mostly spoke of “outrage” in terms of the public at large, a similar effect happens in smaller specialized groups such as financial market makers. In fact, it is a big part of the explanation of how contagion spreads – your fellow dealers stop dealing with you because your reputation has taken a knock (rightly or wrongly), counterparties rush to terminate trades and unwind positions, investors want out, and suddenly you have a crisis that can bring your firm down first and then spread to other market participants. And all of this happens outside the “hazard” variable.

The points you made about communication are just as valid in this kind of situation: even though the audience is slightly different, the same dynamics apply as in the “town hall meeting” example you gave. When every other weekend was bringing news of a large institution’s collapse, those who survived owed as much to their communications as to their economic fundamentals. Communication is an important part of risk management in that sense.

Now my question is: if this is so, why is all financial risk management and regulation focused solely on expected values or on a certain percentile of loss (as in VaR)? Why isn’t some of the “outrage” concept being baked into the Basel Framework, or the regulatory responses we have seen since the beginning of the recent crisis? (In fact Basel II goes to the extent of defining operational risk as specifically excluding reputation and strategic risks.) We also do not see any references to the “outrage” factor (regardless of the verbiage used) in the recent papers on liquidity risk and stress tests put out by the BCBS, or by the FSA in the U.K. In the U.S., the supervisory stress tests done in the early part of 2009 were based solely on math and “shock factors” applied to economic variables, and completely ignored the impact of “outrage” (though admittedly the manifestation of outrage among market participants is different from that of people in a town hall meeting).

Why would someone like you not try to propose including this in a rational and consistent way when these regulatory bodies come out with proposals, consultation drafts and discussion papers? Surely this is worthy of being addressed in the interest of our financial system and the economy at large. Have you ever made proposals to the Fed, the FSA, BIS or others? If so, what has their response been?

Peter responds:

I thoroughly agree with you that both individual responses to a trader’s reputation and overall market responses to an economic situation are hugely influenced by outrage.

And as far as I can tell, you’re right that the dynamics are pretty much the same as in my “angry town hall meeting” paradigm. Outrage affects hazard perception, and hazard perception affects precautionary behavior – and in economic systems (micro- or macro-), precautionary behavior affects market conditions and thus the cycle (or spiral!) continues.

This isn’t exactly news to economic theorists. Behavioral economics isn’t a hot new iconoclastic subfield anymore. It’s a fundamental part of economic thinking. Theories grounded in the “rational man/woman” assumption have been, if not discredited, at least identified as accounting for only a part of economic behavior. Princeton psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his work exploring the ways various heuristics lead economic decision-makers to “wrong” conclusions. Kahneman’s work since 2002 has made emotion more and more central.

I’m not nearly knowledgeable enough about market regulation to have strong opinions about how outrage should be integrated into regulatory standards. I barely know what the Basel Framework is. The closest I have come previously to writing about the questions you’re raising was a March 2009 Guestbook entry on “Credit default swaps, financial meltdown, and risk communication.”

But it does seem to me that if a government is developing “stress tests” aimed at assessing the likelihood of a repeat of the market meltdown of 2008–2009, it would want to pay some attention to outrage.

Certainly I push my industrial clients to think about the ways in which their behavior could trigger outrage widespread enough to damage their reputations and depress the price of their shares. The example I gave at the presentation you attended was an oil company that aroused human rights outrage over its investments in an oppressive developing country, leading the socially responsible investment community to dump (and badmouth) its stock. Either the company should have avoided such investments or it should have taken seriously the task of mitigating the outrage those investments were likely to trigger.

It’s not too great a jump to wonder what sort of behavior by a banking company might lead to a similar loss of reputation and share price … and what sort of behavior by the entire banking field might trigger another loss of faith in the market itself.

Has anyone asked me to consult on these issues? No, never. I have had banking clients over the years, but always on a peripheral issue: how to prepare for a possible pandemic, for example, or how to address outrage at the bank’s investment in a controversial industrial facility. It’s worth noting that you heard me speak at a meeting of banking business continuity and information security professionals. The people who are worried about restoring the reputation of your industry and lessening the public’s hatred haven’t called.

Like the Fed, the FSA, and the BIS, they probably don’t know there’s a field there, a field I call “outrage management.”

Intentionally irritating opponents as a tactic

name:Anonymous
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Technology risk officer
Date:March 16, 2010
Location:New York, U.S.

Comment:

Thanks for your presentation on risk communication. It was insightful, and it was great to hear your approach firsthand.

I was wondering whether you have any thoughts on whether it is ever a good outrage management strategy to rile up a group, rather than using the responses you mentioned.

The reason I ask is the stratagem from Sun Tzu’s Art of War: “If your enemy is angry, irritate him.” It seems clear this would at best be a risky strategy for a public company, although maybe there are examples where it might be effective.

Peter responds:

Of course increasing outrage is the core task of what I call “precaution advocacy,” when people are insufficiently concerned about a serious risk.

But that’s not what you’re asking about. You’re wondering whether I think it ever makes sense to “irritate your enemy” (as Sun Tzu suggests), presumably because an irritated enemy makes more mistakes.

While this may make sense in war, I don’t think it makes sense in outrage management, for two reasons:

  • The goal in outrage management isn’t to provoke opponents into acting unwisely so you can defeat them. Rather, it is to ameliorate their outrage enough that they can accept a win-win or a compromise – an outcome that works for you both, and one they may even view as tantamount to defeating you. The assumption is that defeating opponents is extremely difficult and extremely costly; “defeated” opponents too often rise like a phoenix from the ashes, or their defeat inspires an ever-larger cohort of new opponents. It’s better to aim for a settlement.
  • When opponents are unwilling to settle and insist on continuing the battle instead, the fallback in outrage management is to allow them to marginalize themselves. This is best accomplished by persuading attentive stakeholders (potential opponents) that you have made major concessions to your opponents, that the serious issues have largely been resolved, and that your opponents are unreasonably raising ever-more-extreme and ever-more-peripheral new points of contention. Acting provocatively toward your opponents – goading them into unreasonableness – is likely to leave these attentive stakeholders hostile to you for being provocative instead of hostile to your opponents for being unreasonable.

Bottom line: A conciliatory approach via outrage management aims at reconciling opponents who are reconcilable, and at isolating and marginalizing opponents who are not. This is incompatible with intentionally irritating opponents into further outrage.

I have sometimes advised clients to try to make sure the rudest, most obstreperous opponent comes to the meeting, so calmer, more courteous opponents (and attentive stakeholders) can watch, think to themselves “I’m not like Susan,” and become more open to reconciliation than they might otherwise have been. But that is fruitful only if the client is visibly treating Susan with respect. There’s a seesaw at work here. If the company is respectful toward Susan, others are free to roll their eyes and complain that she’s hijacking the meeting. But if the company becomes angry or contemptuous, or if the company does things to goad Susan into greater irritation and unreasonableness, then everybody else’s outrage will be directed at the company instead of at Susan.

Making health care workers get vaccinated against the flu

name:Kathleen
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

Field:Public health
Date:March 12, 2010
Location:California, U.S.

Comment:

I read your October 2009 comment on “Mandatory vaccination for health care workers” with great interest.

Even though I was once militant that HCWs should be mandated to receive influenza vaccination, I have been influenced by the ethical arguments put forward by George Annas and others.

However, the one problem I have with your response is that in my experience about 30% of HCWs have a belief system about influenza vaccine that causes them to reject it. You can get 40% without much trouble and maybe another 30% with mandatory declination, but then you have the group that won’t get vaccinated no matter what you do. I worked in a hospital for 20 years and can tell you I did everything I could – including one-on-one education – to reach this recalcitrant group, to no avail. As you know, hundreds of studies have been published about methods to increase uptake among HCWs. I have come to the conclusion that mandating vaccination is the only thing that will work for this last group of holdouts.

That said, I don’t like the “get vaccinated or wear a mask” approach either. To me it is disingenuous because if hospitals were really worried about HCWs spreading influenza, they would have had HCWs wearing masks throughout the H1N1 pandemic before either seasonal or H1N1 vaccine was available, but they didn’t. It seems punitive, and when pushed, most hospitals will admit they use it as a “stick” to get HCWs vaccinated.

The issue of mandating vaccination has some real legs now. I think it will be helpful if you and others can speak to this issue more.

One thing I find interesting is that most hospitals have mandated immunity to measles, mumps, and rubella (and more recently varicella) for a long time. I think the difference is that most HCWs were vaccinated or had these diseases when they were children and didn’t have to make the vaccination decision for themselves (every year in the case of influenza). HCWs have accepted this requirement, which is usually in place for students entering health care fields.

The psychology is interesting.

Peter responds:

I appreciate your point that if hospitals were really all that worried about HCWs spreading flu to patients, they would have required masks before there was an available vaccine. So making HCWs choose between vaccine and mask is arguably hypocritical.

Of course it’s possible for a hospital to argue something like this:

Masks are uncomfortable, inconvenient, potentially hazardous (as fomites), and expensive. Making HCWs wear masks would protect some patients from influenza, but the collateral damage is too high. Vaccines have much less collateral damage, so we push vaccination on HCWs in situations where we don’t normally push masks. But if an individual HCW is resistant to vaccination for whatever reason, then it makes sense to offer masks as a less desirable option, an accommodation to that HCW’s desire not to be vaccinated. Requiring masks in situations where there’s no vaccine has too big a downside; requiring vaccination for HCWs who fear the vaccine has a different but still unacceptably big downside. But requiring the choice – with vaccine preferred but masks available for conscientious objectors to vaccination – is reasonable.

I’m not sure I buy that argument, but it’s not obviously foolish or hypocritical.

The bigger question for me is the rationale for requiring HCWs to get vaccinated against the flu.

If it’s for the HCW himself/herself, then it’s unconscionable coercion. Making employees do things for their own good is pretty obviously wrong. We don’t (yet) make other people get vaccinated against flu. Why coerce HCWs for their own good more than we coerce people in other jobs? When officials tell HCWs “this is for your own good,” I think they’re undermining their own case.

If it’s for the hospital, aimed at reducing absenteeism and thus the cost of health care, then one wants to see the data. How much is actually saved? Are there bigger savings available with less collateral damage that the hospital isn’t pursuing? Is the hospital including morale issues in its cost-benefit calculation? Does the benefit justify the coercion? Moreover, in a unionized setting battles between what’s good for the employer and what’s good for the employee are the classical venue of labor-management negotiation. It would save the hospital money to pay HCWs less, too, but that’s not enough reason to countenance unilateral pay cuts. If vaccination is for the sake of the hospital, it ought to be a contract negotiation issue.

If it’s for the patient, the rationale for mandatory vaccination is stronger. Hospitals are entitled to regulate employee behavior for the benefit of patients. But here we really need data. My impression is that there are pretty good data that HCW flu vaccination reduces hospital costs, but not very good data that HCW flu vaccination reduces hospital-acquired flu in patients. Patient health is the strongest rationale for coercing HCWs, but only if the evidence is strong. Is it? And as you pointed out, if HCWs really give lots of patients the flu, you’d expect different hospital mask policies too. So officials end up trying to argue that the impact on patients is enough to justify making HCWs get vaccinated, but not enough to justify masking them when there’s no vaccine (or when the vaccine is a bad match). That’s a pretty narrow window. Similarly, why aren’t hospitals requiring visitors to prove that they have been vaccinated? Unvaccinated family hang around the patient all day with impunity … but the orderly has to get vaccinated?

Sometimes my clients get into fights with their employees (or other stakeholders) that started out over a real substantive issue (usually a fairly small one) … and morphed into something that’s really more about power and ego. I wonder how much of that is playing out in the HCW vaccination battle. “Whose hospital is it anyway?” “How dare someone without an M.D. question my judgment that the vaccine is safe?” “If we let them win this fight, what other policies will they decide to flout?” Of course the same could be true on the other side of the battle lines. When HCWs insist on their right to go unvaccinated, they may be bringing to that fight animus that comes from other labor-management issues, from pay to parking.

Kathleen responds:

You hit the nail on the head with your question about whether vaccinating HCWs is a patient safety issue. A few years ago I would have said yes, absolutely, because there were a couple of studies that seemed to support this view (Carman and Potter). However, now I’m not so sure the data are as strong as I might wish for, particularly in settings other than nursing homes. Thomas Jefferson, who has done Cochrane reviews on this subject, doesn’t seem to think the data are there. However, he appears to be a polarizing figure and seems to be a zealot about RCTs, so it’s hard to know what to make of some of his more public comments on this issue.

I think what most mandatory vaccination proponents think is this: Influenza vaccination will prevent influenza in most vaccinated HCWs, and infected HCWs have been associated with nosocomial transmission of influenza (that is, transmission in a health care setting). Hence vaccinating HCWs will make it less likely for patients to be infected by a HCW. There certainly are reports of nosocomial outbreaks of influenza caused by HCWs. But are there enough data to mandate vaccination? I don’t know.

This seems like an issue where for many people the train has left the station and it doesn’t really matter what the data are.

I’m surprised by how I’m starting to think about this issue. Ten years ago I thought there should be mandates. I still think mandates are the only thing that will bring vaccination rates to 90–100%, but now I wonder if the juice is worth the squeeze. I think most people are newer to thinking about this topic, still in the process of thinking through a lot of different issues.

As for your argument for mask versus vaccine, I’m not sure I would buy it either, but at least it’s honest.

Outrage at proposed wood burning regulations

name:Colin
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Environmentally concerned citizen
Date:March 12, 2010
Location:Alaska, U.S.

Comment:

I took your class on risk communication in April 2008, and have been wondering how the principles would be applied to a case facing the local community currently, an interesting scenario of high hazard and high outrage.

Fairbanks has recently become a non-attainment area for PM2.5 in the ambient air. Most attribute this to increased wood burning for home heating, which drastically increased after the 2008 oil price spike. The outrage in this case hasn’t primarily been from those outraged about the air pollution, but instead from a very vocal minority that has opposed proposed limits on wood burning, leading to some rather remarkable allegations that the EPA (through the state and local governments) is trying to limit our freedom, etc. People obviously feel squeezed on their heating bills, and don’t like what they perceive to be federal intrusion on their rights.

There is some outrage about the air quality problems amongst the general public, but it seems more diffuse. Only people that are right next to smoky problem areas get really steamed, but they don’t tend to be as vocal as the aforementioned outraged wood burners.

I guess my main point is that the outrage has been more significant so far against the attempts to solve the problem (bad air quality) than the problem itself.

The local government’s attempts to pass nuisance rules to clamp down on the worst offenders seem so far to have increased the outrage, even though most people in the community seem to like, or at least feel okay about, the proposed new rules. The approach now taking shape seems to be an education campaign, which doesn’t seem likely to reach the outraged constituents.

Any quick advice that you could offer in terms of analysis or proposed directions?

Peter responds:

Outrage about regulation of wood burning has a long history. I first encountered it personally in the 1970s (I think it was), when regulators tried to shut down a famous Seattle restaurant that featured wood-smoked salmon for violating air pollution regulations. I’ve encountered the same issue several times since, especially in areas where wood burning for heat was widely practiced and strongly cherished, not just for economic reasons but also as emblematic of a close-to-the-land eco-friendly lifestyle. The idea that getting your heat from a utility company might actually be less damaging to air quality than chopping and burning your own energy supply simply didn’t compute.

Regulations against leaf burning used to raise similar protests for similar reasons. I grew up loving the autumn smell of everybody’s burning leaf piles. My middle-aged children have rarely smelled that smell and don’t miss it.

Conceptually, this is not about high-hazard, high-outrage scenarios. A true high-hazard, high-outrage situation is a crisis. People are both endangered and upset by whatever’s happening, and the key risk communication task is to help them bear their outrage and choose wise rather than unwise emergency response actions. A factory explosion, for example, simultaneously endangers and frightens people.

In the case you’re describing, the high hazard is the particulate emissions from incomplete combustion of wood. The high outrage isn’t about the effects of wood burning at all; it’s about interfering with people’s desire to burn wood. As you rightly point out, what’s arousing people’s outrage is the precaution, not the risk.

This is quite common. When industrial employees won’t wear their respirators, eye gear, and other PPE, for example, the problem could be insufficient outrage about workplace risks. But it’s likelier to be excessive outrage about the PPE itself; the respirators itch, the glasses look geeky, etc. I was interviewed on this problem in a 2005 article entitled “Getting Workers to Wear PPE: Communication Is Key.” I argued that precaution advocacy aimed at increasing outrage about workplace threats to the lungs and eyes would be less effective than outrage management aimed at reducing outrage about the respirators and the glasses.

The usual outrage management strategies are your best bet: sharing control with your critics, acknowledging their sound arguments, giving them credit for changes you make in response to their concerns, etc. (See especially “Reducing Outrage: Six Principal Strategies.”)

Educating people about the hazards of PM2.5 pretty obviously won’t help assuage the outrage of those who hate losing a cherished piece of their lifestyle (and a good way to save on heating costs) at the hands of an intrusive government bureaucracy.

Working to increase the overall community’s outrage about PM2.5 may be worth doing for a variety of reasons, especially to increase their support for the proposed regulation and their compliance after it’s in force. But you have to decide whether you think mobilizing the majority’s outrage at the minority’s intransigence is a desirable path forward. It has powerful pros and powerful cons, but I’d certainly think twice before trying to split the community. The collateral costs are high even if you succeed. And you might not succeed; instead, those who are okay with regulating PM2.5 might rally to the support of the old-timers who want to keep their home fires burning.

I suspect I’d look to ameliorate the opponents’ outrage rather than trying to make them the object of everyone else’s outrage. One thing I’m sure I’d do is to express regret at what’s being lost. You need to be visibly saddened that a venerated piece of Alaskan culture and a way to live inexpensively off the bounty of nature has turned into a threat to the ability of asthmatics and others to breathe free.

Using a risk matrix to measure enterprise risk perception

Name:Murat Andac
Field:Internal auditor
Date:February 12, 2010
Email:muratandac@yahoo.com
Location:Turkey

Comment:

I am an internal auditor in Turkey. I want to prepare a report for my office on ERM (enterprise risk management).

First of all I want to measure risk perception of our office workers (managers, engineers, inspectors, doctors and the other lower level staff). I want to learn how they perceive the risk (and hazard). So I want to measure this perception by using survey methods.

I need an example survey form. Will you please help me?

Peter responds:

It’s a little hard to advise you on survey methodology or to suggest possible survey designs and questions you could borrow from without knowing a lot more about what you’re studying and why.

The most generic way to study enterprise risks is to present people with a “risk matrix.” Typically, there are two dimensions, probability and consequence. (These are the two components of conventional definitions of risk.) Each dimension typically has four or five gradations.

Here’s a sample risk matrix borrowed from Wikipedia. The rows are probability, ranging from “certain” to “rare.” The columns are consequence, ranging from “negligible” to “catastrophic.”

Probability \ Consequence    Negligible    Marginal    Critical    Catastrophic
Certain                      High          High        Extreme     Extreme
Likely                       Moderate      High        High        Extreme
Possible                     Low           Moderate    High        Extreme
Unlikely                     Low           Low         Moderate    Extreme
Rare                         Low           Low         Moderate    High

Participants are asked either to assess a preexisting list of risks according to these two parameters, or to identify risks on their own and locate them in the matrix. The assessment can be done individually and the results tabulated, or groups can debate their views and try to reach a consensus assessment. Whichever procedure is used, it’s useful to keep track of the range of opinion. Don’t average away the views of outliers.
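
If it helps to see the mechanics, here is a minimal sketch in Python – using the ratings from the sample matrix above and an invented set of assessments – of how individual ratings might be tabulated without averaging away the outliers:

    # A sketch only: tabulate individual risk-matrix ratings, preserving the spread of opinion.
    from collections import Counter

    # Ratings from the sample matrix above: RATING[probability][consequence]
    RATING = {
        "certain":  {"negligible": "High",     "marginal": "High",     "critical": "Extreme",  "catastrophic": "Extreme"},
        "likely":   {"negligible": "Moderate", "marginal": "High",     "critical": "High",     "catastrophic": "Extreme"},
        "possible": {"negligible": "Low",      "marginal": "Moderate", "critical": "High",     "catastrophic": "Extreme"},
        "unlikely": {"negligible": "Low",      "marginal": "Low",      "critical": "Moderate", "catastrophic": "Extreme"},
        "rare":     {"negligible": "Low",      "marginal": "Low",      "critical": "Moderate", "catastrophic": "High"},
    }

    def rate(probability, consequence):
        """Look up one participant's rating of one risk."""
        return RATING[probability.lower()][consequence.lower()]

    # Four hypothetical participants assess the same (invented) risk.
    assessments = [("likely", "critical"), ("possible", "critical"),
                   ("possible", "catastrophic"), ("unlikely", "marginal")]
    print(Counter(rate(p, c) for p, c in assessments))
    # Counter({'High': 2, 'Extreme': 1, 'Low': 1}) -- report the range, not just the most common rating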

The outcome of a risk matrix exercise is information about which risks to the organization participants consider more or less likely and more or less damaging. This is often seen as a way of assessing enterprise risk itself. But I think it’s better seen as a way of getting a handle on perceived risk. Depending on the expertise of the participants, their perceptions may or may not tell you much about the actual probability and consequence of various risks to the organization. Mostly it tells you what your participants think.

There is considerable debate over the pros and cons of risk matrices as a way of identifying and assessing enterprise risks.

Certainly risk matrices are a handy way of finding out what your people think your organization’s biggest risks are (the very-high-consequence ones that aren’t vanishingly low probability, and the very-high-probability ones that aren’t vanishingly low magnitude).

But there’s a lot of evidence that people don’t necessarily interpret adjectives and similar verbal risk descriptions in the same way, even if you try to help them with definitions. Picking labels for your categories is also a far-from-obvious set of decisions. Consider the Wikipedia risk matrix, for example. Is something that’s expected to happen about once every decade “likely,” “possible,” or “unlikely”? And what are we to do with risks whose consequence strikes us as worse than “marginal” but not really “critical”?

Most people also respond differently to risk adjectives than they would to actual quantitative estimates of probability and consequence. When going from risk numbers to risk words, we tend to dichotomize. If the number strikes us as fairly small, we pick a less serious verbal risk descriptor than the number justifies; if it strikes us as fairly big, we pick a more serious verbal descriptor than the number justifies. People asked to translate from words back to numbers do the same thing: they go from not-very-alarming descriptors to very small numbers, and from alarming descriptors to very big numbers. Thus: “Only once a decade? That’s unlikely.” And later: “Unlikely? That’s like once a century.”

Risk matrices are extremely clear … often much clearer than they ought to be. In other words, they tend to “over-communicate” – leaving people with the impression that they know more than they know.

In a 2008 article in the journal Risk Analysis, Tony Cox makes a case that risk matrices can often provide misleading information, leading to poor risk management decisions.

If you know my work, you know I have another objection to risk matrices: Neither axis of the risk matrix explicitly addresses what I call outrage – how upsetting the risk is. In enterprise risk management, stakeholder outrage is arguably a component of the magnitude dimension. A “minor” glitch in a safety system, for example, may have very little direct impact on the organization, but if it terrifies neighbors into wanting to shut you down, its indirect impact could be catastrophic. But there’s nothing in traditional risk matrices to push participants to consider this kind of impact. I’d be happier with a three-axis matrix that asked participants to assess separately the consequence, probability, and outrage potential of the risks they considered.
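
As a sketch of what that third axis might look like – my own illustration, not an established ERM convention – each risk record could carry an outrage-potential rating alongside probability and consequence, so that “minor” items with big reputational exposure get flagged for attention:

    # Illustrative only: a three-axis assessment record that adds an outrage-potential axis.
    from dataclasses import dataclass

    @dataclass
    class ThreeAxisAssessment:
        risk: str
        probability: str        # "rare" ... "certain"
        consequence: str        # direct impact: "negligible" ... "catastrophic"
        outrage_potential: str  # how upsetting to stakeholders: "low", "moderate", or "high"

        def needs_outrage_review(self):
            # Flag small-direct-impact items whose outrage could make the indirect impact huge.
            return self.outrage_potential == "high" and self.consequence in ("negligible", "marginal")

    glitch = ThreeAxisAssessment("minor safety-system glitch", "possible", "negligible", "high")
    print(glitch.needs_outrage_review())   # True: little direct impact, large outrage potential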

Many have proposed replacing the phrases in risk matrices with quantitative estimates. This has different problems. Unlike adjectives, numbers are precise (even if they're precise guesses) – but people often haven’t a clue what they mean. Words are vaguer and people disagree about what they mean (often without realizing it), but at least they think they know!

And if you’re going to use words, a matrix helps you think about two (or three) aspects of risk at the same time, which has got to be better than thinking sometimes about consequence, sometimes about probability, and sometimes about outrage without ever putting them together.

Of course risk matrices are only one of many ways of measuring enterprise risk and enterprise risk perception. I’m not at all confident they're the best one for your purposes (since I don’t really understand your purposes), but they might be a place to start.

My biggest regret: not building a next generation

Name:James Donnelly
Field:Senior Vice President, Crisis Management, Ketchum
Date:February 8, 2010
Location:New York, U.S.

Comment:

I’ve long been an admirer of your body of work and I’m thrilled you’re considering contributing to my inaugural “Three Tough Q’s” feature on my blog, www.jamesjdonnelly.com. I may edit your responses there to the key concepts, but will certainly link back to your full responses here.

Q: Over your lifetime of study, contribution and counsel – any professional regrets?

Peter responds:

My biggest regret by far is that I have not managed to create a cadre of people who do what I do.

On the one hand, “risk communication” is now a recognized field. (I had to laugh when I asked my airplane seatmate a year or so ago what field she was in, and she said “risk communication.”) But most of what goes under that name seems more like conventional public education or public relations to me. The principles of precaution advocacy, outrage management, and crisis communication that I have been trying to articulate for years are still very far from conventional practice. And I don’t really see a new generation of risk communication consultants emerging to push the rock further on up the hill.

Maybe this is my own biased, narcissistic perception of what “counts” as good risk communication. Maybe my approach is just too idiosyncratic (or just plain wrong). There are plenty of people coming out of universities with at least a little exposure to risk communication … and even to my approach to risk communication. (I sometimes meet young people at seminars or consultations who tell me they studied me in college and seem surprised to learn that I’m not long dead.) Still, when clients ask who else does what I do – because I’m busy the day they want me or too expensive or whatever – I don’t have a lot of names to give them.

Back in the 1970s, when I was briefly an “environmental communication” professor at the University of Michigan, I had graduate students – young people who spent a year or two focused on trying to absorb everything I could teach them. Then I moved to Rutgers University, where I taught almost entirely undergrads, who took at most a course or two on what I was coming to call risk communication. And then I moved to independent consulting, where my “students” were practitioners and I was lucky to get them for as much as two whole days.

There are thousands of people out there who have had a day or two of Sandman, but few if any who have had a week or a month or a year. Insofar as what I do is unique and valuable – which isn’t for me to judge – I have been unsuccessful in replicating it. My best “student” by far is my wife and colleague Jody Lanard. Jody has taught me as much as I have taught her. But she’s not much younger than I am.

I am doing what I can to try to remedy this defect. I’m trying to get as much of my approach as possible onto my website, www.psandman.com – and I’m negotiating with a university to host the website and sustain it when I’m not doing so any longer. I was about to launch a “master class” (asking participants to commit a week a year for at least two years) when the economy collapsed – and I plan to restart that effort when the economy recovers sufficiently. I’m also looking for commercial partners – ideally a worldwide public relations agency and a worldwide management consulting firm, both of which would undertake to let me train their key people (and would send a few champions to the master class).

If I had it all to do again, I would make “legacy” a higher priority earlier. Maybe I’d affiliate with a university and have graduate students again. Maybe I’d set up some kind of apprenticeship program. If I’d launched the master class a decade ago, it might have a couple of hundred alumni by now, and maybe they would be linked to each other in some kind of intranet.

Very few people are fortunate to change the world even a little. I have had more impact than I ever expected. And I know all impact is transient. Still, my biggest regret is not finding a way to keep my approach to risk communication going and developing after I’m not doing it anymore.

Is it possible that is already happening, and I just don’t get to see it? I hope so.

Talking about uncertainty when hazard levels are unclear

name:James Donnelly
This guestbook entry
is categorized as:

      link to Outrage Management index      link to Crisis Communication index

Field:Senior Vice President, Crisis Management, Ketchum
Date:February 8, 2010
Location:New York, U.S.

Comment:

I’ve long been an admirer of your body of work and I’m thrilled you’re considering contributing to my inaugural “Three Tough Q’s” feature on my blog, www.jamesjdonnelly.com. I may edit your responses there to the key concepts, but will certainly link back to your full responses here.

Q: Can organizations mitigate public panic – and a current zeal for instant information – when hazard levels are unclear?

Peter responds:

Let me start by saying that panic – real panic – is quite rare. Disaster management experts define panic as what happens when people are so terrified that they just can’t stop themselves from doing something they would otherwise know to be useless or harmful. People in horrible situations often feel panicky, but they almost always manage to control themselves and act sensibly. For more on how rare panic actually is, see my 2005 article with Jody Lanard, “Tsunami Risk Communication: Warnings and the Myth of Panic.”

In crisis situations and even in ordinary risk controversies, officials and the media often mistakenly think people are panicking, a phenomenon I have sometimes called “panic panic.” But what you’re calling “public panic” isn’t panic at all, I think. It’s rationally high levels of concern about a situation that may be genuinely dangerous. Organizations shouldn’t want to “mitigate” that concern. They should want to guide it.

Let’s take a concrete example. In late April 2009, a novel influenza virus emerged in Mexico and California that looked likely to launch the first flu pandemic of the twenty-first century. It did in fact launch such a pandemic, but so far it has been a very mild one. But nobody knew that in the early months. So there was reason to be concerned – in two senses: Concern was justified by the real possibility of tough sledding ahead; and concern was useful because it motivated people to take precautions, and to seek information that would help them determine which precautions to take.

Many people were insufficiently concerned and therefore not taking any precautions – so one job for pandemic risk communicators was to try to arouse this audience to greater concern. Some people were appropriately concerned and very anxious to take precautions – so another job for pandemic risk communicators was to try to help this audience figure out which precautions were feasible and useful and which were not. And a few people were excessively concerned (perhaps even feeling panicky) and inclined to take really inappropriate precautions – so a third job for pandemic risk communicators was to try to reassure this audience. Even for this third audience, the smallest of the three, the job wasn’t to “mitigate” their distress, at least not in the sense of trying to convince them to stop worrying; rather, the job was to help them bear their distress and put it into perspective, so they could make wiser choices about how best to protect themselves and their loved ones.

Uncertainty about the extent of the hazard was inevitable in the early months of the pandemic. We had only partial and uncertain information about how bad it was; and we didn’t have a clue how bad it might get. (We still don’t, really; we only know it hasn’t been too bad so far.)

Uncertainty about the extent of the hazard is intrinsic to most evolving risk situations, and especially in the early stages of a crisis … or a possible crisis.

The effect of uncertainty on what I call outrage – on people’s inclination to get upset – is binary. If outrage is low, uncertainty keeps it low: “I won’t worry till the experts are sure there’s really a problem.” If outrage is high, uncertainty makes it higher: “How dare they expose me to this contaminant when they know so little about its long-term health effects!” You can see this dual response to uncertainty in a lot of risk controversies, from global climate change to industrial air emissions.

For the people who are frightened or angry about a particular risk, then, uncertainty exacerbates their feelings – adding to what you call “a current zeal for instant information.” I think the solution is to provide as much information as you can as quickly as you can, while trying to highlight the most important and actionable information.

Obviously, that doesn’t mean providing more information than you have! Some of the most important information you give people in an uncertain situation is information about the uncertainty itself:

  • How much you don’t know.
  • What you are doing to learn more, and when you expect to have some answers.
  • How much you won’t know for years, if ever – and why some questions are difficult or impossible to answer.
  • How awful it is for everybody – for you and for your stakeholders – to have to endure so much uncertainty, and to have to make uncertain decisions knowing that they may turn out mistaken in the end.
  • What sorts of decisions you are making in the face of your uncertainty, and what sorts of decisions you are advising your stakeholders to make.
  • Your guiding principle for decision-making in the face of uncertainty: erring on the alarming side (“better safe than sorry”), but not in such an extreme way that over-preparedness or over-reaction will do as much damage as the hazard itself.
  • Your other guiding principle for decision-making in the face of uncertainty: staying flexible, reconsidering prior decisions as you learn more and as the situation changes.
  • Your advice to your stakeholders to adopt the same guiding principles: to err on the alarming side but within the bounds of reason, and to be ready to gear up or gear down as appropriate in response to new information.

Of course you should also tell people what you do know – and not just what you’re certain about. Acknowledging uncertainty is crucial to good outrage management and to good crisis communication, but acknowledging uncertainty doesn’t mean throwing your hands into the air and claiming total ignorance. Nor does it mean confining yourself to the few things you know for sure. When you brief top management, you say things like this: “We’re certain of A. We’re pretty sure about B. C would be a surprise, but not a shock. D is extremely unlikely. E is impossible. We haven’t a clue yet how likely F is. We’ll probably never know about G.” Be similarly discriminating about levels of uncertainty when you brief the public.

Bottom line: The best response to people’s zeal for instant information is instant information – especially information about uncertainty.

For more on how to talk to upset people in uncertain situations, see my 2004 column on “Acknowledging Uncertainty.”

Outrage management via online social media

name:James Donnelly
This guestbook entry
is categorized as:

      link to Outrage Management index

Field:Senior Vice President, Crisis Management, Ketchum
Date:February 8, 2010
Location:New York, U.S.

Comment:

I’ve long been an admirer of your body of work and I’m thrilled you’re considering contributing to my inaugural “Three Tough Q’s” feature on my blog, www.jamesjdonnelly.com. I may edit your responses there to the key concepts, but will certainly link back to your full responses here.

Q: Has the rise of online social networks changed outrage management (low hazard, high outrage), for better or worse?

Peter responds:

Asking someone over 60 to comment on the impact of online social networks is pretty risky. Asking me is very risky. I’m trying, but I’m not really a netizen.

Still, it’s obvious even to me that an ever-larger segment of the population gets most of its information online from peers. If you want to tell people about your organization, if you want to know what others are telling people about your organization, and if you want to be part of the dialogue about your organization, you have to be on Facebook and Twitter and YouTube … and on whatever may be about to supplant Facebook and Twitter and YouTube as next week’s online social networks.

There is one less obvious impact on outrage management I think is pretty important. To make sense of it, I first have to outline a distinction that’s key in outrage management – the distinction between publics and stakeholders.

Publics are large groups of people peripherally interested in your activities. Collectively they can have a huge impact on your reputation and your operations, of course. But since they’re not about to come to a meeting or even read a newsletter, they have traditionally been reachable only via the mass media.

Stakeholders, on the other hand, do come to meetings and read newsletters; some stakeholders hold meetings and write newsletters. The traditional media matter much less in reaching stakeholders. Traditionally, stakeholder relations is retail, not wholesale; the best stakeholder relations is one-on-one.

Outraged people – people who are angry, frightened, skeptical, distrustful, or otherwise upset about something you’re doing – are stakeholders. So outrage management is more a kind of stakeholder relations than a kind of public relations. In fact, good outrage management often pays a public relations price. To address their stakeholders’ outrage, organizations need to do a lot of acknowledging and apologizing with regard to prior misbehaviors and current problems. Most of what they’re acknowledging and apologizing for is stuff their publics don’t know and don’t especially want to know, and stuff they’d really rather their publics didn’t find out.

Cluing in your publics to your organization’s defects is collateral damage when you’re responding to stakeholders’ outrage about those defects.

The distinction between publics and stakeholders has never been airtight, of course. Your hostile stakeholders do their own outreach to publics they think have the potential to join them as stakeholders; they’re revealing your defects to those publics whether you choose to do so or not. You may (rightly) see a meeting with critics as an opportunity for stakeholder relations and outrage management. But your critics (rightly) see it as an opportunity for recruiting via the media. And reporters who cover the meeting (rightly) see it as a story.

Still, the public/stakeholder distinction has been very important in my outrage management work.

Thanks to online social media, this distinction matters less and less. People who learn things about your organization via online social media are somewhere in the middle between publics and stakeholders. They’re not the passive, barely attentive audience of the mainstream media, glancing at a story about you before moving on to something else. Nor are they active, committed, and comparatively well-informed the way stakeholders are.

Two aspects of this new intermediate space strike me as especially relevant to outrage management:

  • Online social media facilitate participation without requiring serious interest. It’s easy to add an offhand comment to somebody’s blog, to forward an interesting online article to groups of friends, to retweet something you saw on Twitter, etc. The mainstream media provide a much more passive environment, while traditional participation – going to a meeting, for example – requires much more effort. People who don’t really care very much about you, certainly not enough to attend a meeting or write a letter to the editor, may nonetheless add their two cents to a discussion about you in the social media or email the discussion to others. And that, of course, may trigger greater interest and more involvement down the road.
  • Online social media put a premium on opinion rather than information. Or at least the ratio of opinion to information is much higher than in the mainstream media. And the information that is passed along hasn’t been vetted by reporters and editors who have a stake in avoiding inaccuracy. So the ratio of false and misleading information to sound information is also higher than in the mainstream media. Moreover, the role of information is different: A large percentage of participants in online social media are expressing a viewpoint, and selectively marshalling information to support that viewpoint. All this is done in a casual, expressive tone. Conventional definitions of credibility put a lot of stress on authority and expertise. But credibility in social media (I would bet) is grounded much more in emotional expressiveness, and in coming across as similar (in feelings and values) to the other participants in the dialogue.

I don’t fully understand the implications of these changes for outrage management, but I sense that they are important and going to be more so. Many organizations have learned to interact with their outraged stakeholders in ways that are participatory, two-sided, responsive, human, emotionally expressive, and empathic. But when targeting less involved publics, those same organizations tend to churn out the usual one-way, one-sided, just-the-facts “public education.” Now they’ll need to learn to talk to relatively uninvolved people via online social media in ways more like the ways they have learned to talk to stakeholders.

Making pandemic communications (and all crisis communications) provisional

name:Trevor Kerr
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index      link to Crisis Communication index

Field:Retired medical microbiologist
Date:February 8, 2010
Location:Australia

Comment:

Your January 17th update on the H1N1 pandemic is splendid.

I wonder if all current plans to prepare for the next phase of the pandemic could be recast as just that – preparations. So, in the broader context of increasing knowledge and better methods, “we” are always progressing toward doing it better. The lessons learned in the past six months will inform the preparations for the next six, and so on.

Thus the question “What have we learned so far?” would have elicited an earlier “confession” that the morbidity and mortality haven’t been high, i.e., the effect of the pandemic has been “mild.” Earlier statements of consensus on the evident facts would allow earlier discussion about what is to be done next.

In that light, perhaps you may find a bit of time to reflect on the consequences of vaccine purchasing contracts in the USA and Australia.

May I tease apart your comments on viral mutations, as they apply to influenza viruses? I think you may have presented two different concepts.

One, you often suggest that the dreaded mutation of the virus to a highly virulent form has not happened yet. This lays the foundation for a mistaken belief that “mutation” is a rare event. The word itself conjures up visions of the horrible, anyway, so it’s a pity another word couldn’t be used.

Then, two, you do say that mutations are the way of this particular virus. In fact, as I understand it, the reproduction of the viral genome is an imperfect process, and many variants may be churned out of just a single infected cell. Some of the changes in bases may be of no consequence, some may cause that individual virus to “die,” and occasionally a change occurs that has dramatic consequences. A good place to see this variability in action is at the repositories where the maps of individual strains of virus have been laid out for comparison. I am sure a virologist could guess at the likelihood of viral isolates from different times and/or places being identical in every respect – that is, in genetic sequence. I guess, though, there are tight constraints on the resources available for performing the sequencings, so “identity” may be difficult to prove. That aside, the mappings are things of some beauty and they may persuade you about the limitations of the use of “mutation” as a helpful descriptor.

Peter responds:

I thoroughly agree with your first point. In any emerging crisis, it is crucial for governments to keep updating their assessment of the situation. It is crucial for them to update their messaging to match. And it’s crucial for them to keep telling the public that that’s what emerging crises are like, that people need to expect the unexpected … and in particular that people need to expect changes in their assessment and therefore in their actions and recommendations.

As you say, this kind of focus on “what have we learned so far” would have made it easier for authorities to say that H1N1 was looking milder in June than had been feared in May – and to couple that reassurance with a strong warning that the virus could turn more virulent at any time.

Good crisis communication is usually provisional – uncertain about what’s happening and extremely uncertain about what’s going to happen. Authorities who box themselves in with overconfident descriptions and predictions may hesitate to announce new information, fearing that people will become confused or mistrustful if they change their story too much. Thus they get stuck in outdated messaging. They may end up reluctant to say that the pandemic is turning out mild so far (WHO); or reluctant to say that elderly people are more at risk than was initially believed (CDC); or reluctant to say that containment efforts have done all they can and now we must switch over to mitigation strategies instead (public health agencies in many developing countries).

I’m not certain what you think the implications of this are for the vaccine purchasing contracts of the U.S., Australia, and other developed countries. (Developing countries had higher priorities for their limited health budgets.)

But the implications for talking about vaccine purchasing contracts are pretty obvious. Governments had to guess how many people they should be ready to vaccinate. And governments had to guess how much vaccine would be needed per vaccinee. As they guessed, they rightly judged that overestimation was better for public health – and for their careers – than underestimation. I have no quarrel with the way most governments of developed countries (Poland excepted) decided how much vaccine to order.

But these governments rarely put out appropriately provisional messages about their vaccine purchase decisions: “We’re guessing that we’ll probably need two doses per person; we’re guessing that the pandemic will probably be severe enough that most people will want to get vaccinated; we’re guessing that the vaccine will probably be available soon enough that there will be time for most people to get vaccinated before they’ve been sick already or the pandemic is waning. If we turn out wrong about some of these guesses – which we very well may – we will end up with extra vaccine. We believe that’s a lot better than turning away people who want to be vaccinated.”

We all have a natural but perverse tendency, in hindsight, to think that our governments should have purchased exactly as much of every resource as they ended up needing. That’s how the U.K. public and the U.K. media ended up simultaneously blasting the U.K. government for two failures: buying too much pandemic vaccine (when the pandemic was turning out mild) and buying too little road salt and grit (when the winter was turning out severe).

I know of no way to prevent this natural-but-perverse style of thinking. But the best way to minimize it is to anticipate it: “If we guess wrong we could end up with too much or too little vaccine. We are choosing to order more than we may end up needing; that will reduce the risk of running out. We’re unlikely to get it exactly right, and we would rather be criticized for wasting money than for endangering the lives of our people.”

Your final point is excellent. And I can only plead guilty. I get it that influenza viruses are constantly mutating; that most of the mutations are minor and most of the major ones are dead ends; and that only once in a while does a major change take hold and meaningfully affect the character of the disease we are fighting. By invoking the word “mutation” only when I’m talking about such a major change – especially an increase in virulence – I am perpetuating the widespread misunderstanding that all mutations are horrific. I’ll try to do better.

Why did the CDC misrepresent its swine flu mortality data – innumeracy, dishonesty, or what?

name:Robert R. Ulmer
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

Field:Risk and crisis communication expert; coauthor of
Effective Crisis Communication: Moving from Crisis
to Opportunity
Date:February 2, 2010
Location:Arkansas, U.S.

Comment:

I have gone through all three of your “Swine Flu Pandemic Communication Updates” addressing your criticism of the CDC for misrepresenting its data about swine flu severity.

Very interesting reading for sure.

Speaking just as an observer of your commentary, I am left wondering after reading your comments: Is this a case of willful negligence by the CDC (which at times I hear you saying) or innumeracy (which is what I think it probably is)?

I have seen enough statistical misrepresentations of data in research findings to make me skeptical about most of what I read or hear relating to risk and risk communication. I understand how crazy that sounds but honestly I don’t know many people that understand probability very well.

A recent article in the journal Science tackles this very issue. In the article, Victor De Gruttola, the Chair of Biostatistics at the Harvard School of Public Health, argues that statistical measures such as p values are misunderstood by far too many scientists and journalists who report on research studies. Okay, that is a long way of saying I believe there are a lot of misconceptions concerning the application of probability and statistics.

From reading your analysis, I agree that it appears the CDC got the statistics perfectly wrong for the populations of children and those over 65. However, as I mentioned earlier, I am not surprised by what appears to me to be a miscalculation. Without having people like yourself checking the stats and showing your work, we are destined to create meanings about data that are unreliable. As a result, our risk communication becomes dubious as well.

Peter responds:

I find it easy to imagine that CDC communication professionals could construct messages that convey the impression that children are most at risk for complications or death from the H1N1 pandemic without realizing that the CDC’s own data say otherwise. After all, they’re communication professionals, not flu experts.

Similarly, many local health officials, many state health officials, and even the President of the United States have received and reiterated the message that all the prioritized groups are at high risk of severe cases of flu, despite the clear statement of the CDC’s Advisory Committee on Immunization Practices that children were prioritized because they were more likely to catch and spread the flu, not because they were at higher risk of complications. These people are not flu experts either, and I don’t hold them responsible – or at least not very responsible – for the misleading claims they passed along.

And I will grant that it is just barely imaginable that even CDC Director Tom Frieden could reiterate these claims without realizing that they contradict the CDC’s own data.

But it is inconceivable to me that the epidemiologists and other subject matter experts who do infectious disease risk assessment 24–7 for the CDC could undertake the complex calculations required to estimate age-specific pandemic H1N1 cases, hospitalizations, and deaths without also doing the incredibly simple additional calculations required to derive estimated age-specific case attack rates (CARs), case fatality rates (CFRs), and population mortality rates (PMRs). These parameters are fundamental in the training of infectious disease experts. It is virtually automatic to convert infectious disease data – even preliminary, tentative estimates regarding an ongoing pandemic – into these familiar parameters. That’s how experts determine how severe an outbreak is so far, and who is most at risk.
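
To show just how simple those additional calculations are, here is a minimal sketch of the conversions (my own illustration in Python, not CDC code; the input figures are made up for illustration, not actual CDC estimates):

    # Deriving the three familiar parameters for a single age group.
    # The figures passed in below are purely illustrative, not CDC estimates.

    def rates(cases, deaths, population):
        car = cases / population              # case attack rate
        cfr = deaths / cases                  # case fatality rate
        pmr = deaths / population * 100_000   # deaths per 100,000 population (PMR)
        return car, cfr, pmr

    car, cfr, pmr = rates(cases=2_000_000, deaths=300, population=40_000_000)
    print(f"CAR = {car:.1%}, CFR = {cfr:.3%}, PMR = {pmr:.2f} per 100,000")

Anyone who already has age-specific estimates of cases, hospitalizations, and deaths in hand can generate the corresponding rate estimates in minutes.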

Moreover, some of the technical experts who do influenza work at the CDC have in fact calculated estimates of the pandemic’s age-specific CARs, CFRs, and PMRs. A prime example: On November 30, about 43 minutes into a webinar, CDC expert Martin Cetron presented preliminary PMR estimates for various age groups, covering the period from April through November 14, 2009. The webinar itself is online, but when Jody Lanard and I sought permission in mid-December to reproduce one of its tables on my website, Dr. Cetron declined, in part because the data were not yet “validated and ready for prime time.” (However, when we requested the raw data from which such estimates are derivable, for the period from August 30 through mid-December, the CDC promptly arranged to send us a spreadsheet, for which we are grateful.)

An easier-to-access example, from several months earlier in the pandemic, is an August 12 presentation by Sonja J. Olsen, head of the CDC’s Global Activities Team, Division of Emerging Infections and Surveillance Services. Slide 8 reports age-specific population mortality rates based on early confirmed deaths in four countries, including the U.S. The title of the slide is “Deaths from Novel Influenza A (H1N1) Most Common in Adults” – not children.

In addition, several state health departments have openly reported age-specific estimates of H1N1 population mortality rates, using laboratory-confirmed hospitalization and death data from high percentages of hospitals in the state.

Florida, for example, reports age-specific deaths in an easy-to-understand table. The currently posted update clearly shows that Florida’s pandemic death rate has been highest for people 50–64 years old, and lowest for the group for which vaccination was universally prioritized: the 5–24-year-old age group. (Most of the deaths reported here occurred before significant vaccination immunity could have been a factor.) Here is Florida’s table:

On January 22, 2010, the CDC itself (as opposed to individual CDC experts) published age-specific population mortality rates for the H1N1 pandemic … for the first time as far as I can determine. That week’s Morbidity and Mortality Weekly Report (MMWR) includes this paragraph:

During August 30–January 9, a total of 1,779 deaths associated with laboratory-confirmed influenza virus infections were reported to CDC through AHDRA. The 1,779 laboratory-confirmed deaths are in addition to the 593 laboratory-confirmed deaths from 2009 H1N1 that were reported to CDC from April through August 30, 2009. Since August 30, cumulative deaths associated with laboratory-confirmed 2009 H1N1 infection per 100,000 population were 0.31 for persons aged 0–4 years, 0.26 for 5–18 years, 0.38 for 19–24 years, 0.60 for 25–49 years, 1.03 for 50–64 years, and 0.65 for ≥65 years. For the period August 30–January 9, the median number of states reporting laboratory-confirmed deaths per week through AHDRA was 34 (range: 23–38).

When Jody and I converted the CDC data into a bar chart, the age-specific PMRs looked like this:

Note that the lowest population mortality rates are in the three age groups for which universal vaccination was prioritized (except for children 0–6 months, for whom influenza vaccine is not approved).
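
For readers who want to reproduce the pattern themselves, a minimal sketch along these lines (assuming Python and the matplotlib plotting library; this is an illustration, not the steps used to produce the published chart) will regenerate a similar bar chart from the rates quoted in the MMWR paragraph above:

    # Plotting the age-specific PMRs quoted in the January 22 MMWR paragraph above.
    # Requires the matplotlib library.
    import matplotlib.pyplot as plt

    age_groups = ["0-4", "5-18", "19-24", "25-49", "50-64", "65 and older"]
    deaths_per_100k = [0.31, 0.26, 0.38, 0.60, 1.03, 0.65]  # Aug. 30 - Jan. 9, lab-confirmed

    positions = range(len(age_groups))
    plt.bar(positions, deaths_per_100k)
    plt.xticks(positions, age_groups)
    plt.xlabel("Age group (years)")
    plt.ylabel("Deaths per 100,000 population")
    plt.title("Lab-confirmed 2009 H1N1 deaths per 100,000 population")
    plt.tight_layout()
    plt.show()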

CDC Graphs Age-Specific Swine Flu Mortality Data

On February 17, 2010, the CDC posted a webpage entitled “Flu-Related Hospitalizations and Deaths in the United States from April 2009 – January 30, 2010.” The new page included three additional weeks of data beyond what was reported in the January 22 MMWR. More importantly, it showed the swine flu death rates for different age groups in a clear bar graph, instead of buried in the middle of a paragraph. Hospitalization data were also graphed.

(Added: February 18, 2010)

PMRs that are calculated based on laboratory-confirmed deaths are of course underestimates, since not every swine flu death gets lab confirmation. They are underestimates in other ways as well: The state totals omit hospitals that didn’t send in their reports; the nationwide totals omit states that didn’t send in theirs, or reported using a different metric.

That’s partly why the CDC developed the whole-population estimates on which I based my articles. These broad-based population estimates are preferable to the incomplete tallies of laboratory-confirmed cases for assessing how mild or severe the pandemic has been overall (so far) – as well as for other purposes for which a wide-angle view of the pandemic’s total impact is what’s needed. But the two sorts of data show very similar patterns with regard to which age groups have been most at risk (so far). And the laboratory-confirmed data are tabulated with a larger number of narrower age categories, which may make them more useful for many purposes, including vaccination campaign decisions.

Do you believe that most people over 65 (and their doctors) realize that they are at higher risk than children, and at about the same risk as people 25–49, many of whom are considered “young adults”? Do you believe that most people 50–64 (and their doctors) realize that they are the group at highest risk? I do not.

But it is clear that the CDC’s flu experts have known these facts for months – early on from the data on laboratory-confirmed deaths, and later from the CDC’s estimates of total deaths as well.

Another reason I am convinced that the CDC is well aware of the population mortality data: Most CDC statements about the vulnerability of different age groups to swine flu have been misleading rather than flat-out false – for example, sliding from the accurate claim that the pandemic has killed far more children than seasonal flu usually kills to the false implication that it is deadlier to children than to older people. If the CDC didn’t realize what its data meant, it would have stumbled often into overt falsehood. Instead, it has crafted messages that carefully mislead without lying. (State and local health officials, misled themselves, are unable to craft careful messages, and do stumble often into overt falsehood, as you can see if you go to the links in the second paragraph of this response.)

I offered a number of detailed examples in the three articles you mention at the start of your comment. Here’s one from my “Update on the December 2 Update,” on a December 4 Q&A between Bob Roos of the University of Minnesota’s Center for Infectious Disease Research & Policy (CIDRAP) and CDC Director Thomas Frieden.

Bob Roos: Thanks for taking my question. I had a question about the kind of the overall severity of the pandemic and case mortality rate. The CDC recently estimated the case fatality rate at 0.018%. This is much lower than the seasonal flu epidemic. I wonder if you can comment on that.

Tom Frieden: The key point is really age specific case fatality rates. We’ve reported that H1N1 has not affected the elderly significantly. So while some elderly have gotten it and those who are infected sometimes become severely ill and that’s why we have emphasized the importance of prompt antiviral treatment of the elderly and others with underlying conditions who are severely ill. What we’re finding really is this virus is a much worse virus for younger people. The number of people, not just children, but young adults under the age 50, who will get severely ill or die from this virus is much higher than from seasonal flu. The fact is that with 210 laboratory confirmed [pediatric] H1N1 deaths, we are really before the beginning of flu season, we don’t know whether there will be much more H1N1 or not. Already three times the number of deaths among children than we would [see] in a usual flu season.

At the time of this December 4 dialogue, here is what the age-specific population mortality data available to the CDC – but not to reporters or the public – looked like, converted into a bar chart. These are lab-confirmed cases; the CDC estimates Roos and Dr. Frieden were discussing showed a similar but less detailed pattern.


Note: The total number (n=1126) reported from August 30 to November 14 may differ a bit from updated data for the same period, because reporting jurisdictions amend earlier data submissions based on receipt of new information from past weeks, correction of errors, etc.

As the update discusses in greater detail, Dr. Frieden’s answer first bridges away from Roos’s question about overall pandemic severity to age-specific severity. Then he makes some broad claims that “H1N1 has not affected the elderly significantly” and “this virus is a much worse virus for younger people.” These are misleading but legalistically defensible statements. It is true that the pandemic has been less deadly to the elderly than the seasonal flu usually is; and it is true that the pandemic has been less deadly to the elderly than to adults 50–64, who are technically “younger people.” It is decidedly not true that the pandemic has been less deadly to the elderly than to children. Perhaps wishing to imply the latter without actually misstating the facts, Dr. Frieden bridges again, this time to the key fact that seems to support his two propositions, that the pandemic isn’t mild and that it is especially horrible for children: The pandemic has already caused “three times the number of deaths among children than we would [see] in a usual flu season.”

As I wrote in the update:

This is true. The pandemic is (a) killing more children than the seasonal flu kills, though it is (b) killing fewer children (as a percentage of the total number of children or the number of children with swine flu) than adults and seniors.

Dr. Frieden’s response showcases the first fact (a), which supports the CDC’s vaccine prioritization groups, and avoids the second fact (b), which doesn’t.

Let me add two other pieces of evidence that the CDC’s failure to give the public age-specific breakouts of population mortality rates (until the January 22, 2010 MMWR) was intentional.

  • At its press briefings, the CDC routinely tries to correct inaccurate perceptions about vaccine risk and other issues. The national pandemic influenza communication plan recommends that officials “[p]romptly address rumors, inaccuracies, and misperceptions.” But as far as I know, the CDC has never tried to correct publicly the thousands of news stories stating or implying that everyone between the ages of 0 and 24 is at increased risk (compared to other age groups) of complications and death from swine flu.
  • On January 8, Sharon Begley, science editor of Newsweek, wrote a blog post on my criticisms and calculations. Before doing so, she asked Beth Bell (associate director for epidemiologic science at the CDC’s National Center for Immunization and Respiratory Diseases) to go over my numbers. Dr. Bell said, “The basic calculation is right.” But the agency’s messaging didn’t change. National Influenza Vaccination Week happened; HHS officials held press conferences to announce new public service announcements (PSAs) aimed at Indian Country and African-Americans. You can scroll through page after page of swine flu PSAs without finding any targeting Americans 50–64 or 65+.

I conclude that the CDC’s decision not to report estimated age-specific CARs, CFRs, and PMRs when it reported its age-specific estimates of cases, hospitalizations, and deaths was a conscious decision – a decision it has made three times so far in three separate reports. (The latest report and links to the earlier ones can be found at http://www.cdc.gov/h1n1flu/estimates_2009_h1n1.htm.) As I mentioned above, the CDC did publish age-specific population mortality rates in the January 22 MMWR. But as of February 2 there have been no CDC pandemic update press briefings since January 7, and this information has not hit the media.

And I conclude that the CDC’s decision not to change its vaccination priority groups after the age-specific data started to emerge – which occurred prior to the start of the vaccination campaign – was also a conscious decision. Finally, I conclude that its decision to continue to justify the vaccination priority groups with misleading public statements about the relative risk of different age cohorts was also a conscious decision.

Note that I do not object to the decision to keep the vaccination priority groups unchanged – only to the decision to keep the rationale unchanged. There were still good (though debatable) reasons to focus on vaccinating children and to deemphasize vaccinating seniors, even after it became clear that seniors had a higher pandemic PMR than children (and after it became clear that older adults, 50–64, had a higher pandemic PMR than children or seniors). The CDC could have explained those reasons. It chose instead to continue to signal that children and young adults were most at risk, leaving the public and outside officials to infer incorrectly that “most at risk” meant “most at risk of complications and death.”

If your hypothesis were right – if no one at the CDC realized what its own data meant – that would be truly scary. Health officials who occasionally mislead the public are less dangerous (and more correctable) than health officials who are unable to assess basic health metrics. Forced to choose between dishonest leaders and incompetent leaders, I’d have to go for dishonest.

I don’t know why the CDC chose to mislead the public, but it’s easy to speculate. Some possible reasons are simple and even compelling. For example, it’s easier and more “credible” (if you don’t get caught) to stick to one rationale for a policy than to change rationales in the middle of the policy’s implementation. Other possible reasons are prosocial-but-Machiavellian. For example, a successful pediatric flu vaccination program could be a platform for wider public acceptance of vaccination against childhood diseases.

The likeliest hypothesis, in my judgment: The CDC probably felt that more kids would end up vaccinated if it conveyed the impression that children faced the greatest risk than if it said truer but less inspiring things, such as the fact that children catch influenza at a higher rate than older people (which is true for seasonal flu as well as pandemic flu), and that vaccinating them is thought to be among the best ways to protect their parents and grandparents.

My wife and colleague Jody Lanard collaborated on this response.

Robert responds:

Well, at this point with the current evidence I see this ultimately as an empirical question. Is there any way we can determine if my contention (innumeracy) or your contention (dishonesty) is correct?

I find it interesting that we are both cynical but about different things.

Ultimately, it sounds like you are saying CDC wanted more children to be vaccinated so they were dishonest in their communication about vaccinations. I hope this is not true. As you astutely say in your commentary, one can be clear in risk communication about risk levels and still get parents to have their children vaccinated. My guess, without having the evidence to know for sure, is what you mention earlier: that conventional wisdom suggests children are high-risk for the flu and that idea directed CDC’s risk communication. That is why I mentioned innumeracy earlier in my first post.

Risk communicators, journalists, and anyone else must do their due diligence and make sure their research findings and risk communication are congruent. Our common sense and everyday ways of knowing often get us in trouble – particularly when we have empirical data that suggest otherwise.

For these reasons my evaluation favors the innumeracy/miscalculation/misinterpretation explanation of CDC’s risk communication over the conclusion that CDC was dishonest. However, it is an interesting case of risk communication and one that all risk communicators can learn from.

Peter responds:

I do think the CDC has been dishonest about which age groups are most at risk from the swine flu pandemic. Having said that, I should instantly add that I am not accusing the CDC of lying, only of misleading. I think it tries hard not to lie.

Does the distinction between lying and misleading matter? Corporate communicators have long since learned that proving they didn’t actually lie when they misled the public may save them in a courtroom – but it doesn’t protect them from public outrage when their dishonesty is discovered.

On the other hand, the U.S. media and the U.S. public love to uncover and excoriate corporate dishonesty. But similar dishonesty on the part of public health officials seems to go comparatively unremarked and unpunished by journalists and citizens alike, as long as it falls short of outright lying. This may be because the dishonesty of public health officials is motivated less by self-interest; when they mislead us, they usually do so with our interests at heart. It may be because we depend on public health officials and we want to trust them.

In four decades of risk communication consulting, I have learned that “good guys” are actually likelier than “bad guys” to construct misleading communications. (My 2009 Berreth Lecture to the National Public Health Information Coalition addressed this conclusion and gave many examples.) I think there are three reasons why this is so:

  1. The good guys are more self-deceptive than the bad guys. When they’re 90% right and reluctant to acknowledge their critics’ 10%, they convince themselves that they are 100% right. When science alone won’t take them where they want to go, they go beyond the science … and continue to believe that everything they say is grounded in “sound science.”
  2. The good guys are more self-righteous than the bad guys. They know their goals are prosocial, so they continue to feel virtuous even when they are distorting the truth in order to accomplish those goals. When forced to acknowledge that something they have said isn’t quite accurate, they unashamedly explain that it is often necessary to “simplify” the data in order to persuade the public to do the right thing.
  3. As noted above, the good guys are likelier than the bad guys to get away with these kinds of deceptions.

It’s a potentially lethal combination. It leads good guys – officials at the CDC, among others – to take uncompromising stands on behalf of misleading claims they have talked themselves into considering the unvarnished truth, with blowback too infrequent to disrupt the pattern.

By contrast, corporate communicators find it easier to remember that their critics have a point; easier to understand that when their critics are wrong about most things it is still useful to concede the ways in which they’re right or partly right; and easier to worry that even minor deceptions could get them crucified.

Despite these differences, bad guys often mislead too. So it shouldn’t be surprising that good guys mislead a lot.

But I believe – perhaps naively – that even the good guys get caught in the end. And when a credulous public and credulous journalists finally realize that the good guys have been less than punctilious about some uncomfortable facts, we feel betrayed. We pile on. Suddenly even minor peccadilloes become big controversies.

This is what has happened in recent months to climate change scientists and activists, including the Intergovernmental Panel on Climate Change. Since the email scandal widely referred to as “Climategate” (which wasn’t minor), nobody is giving IPCC and its affiliated scientists a free pass on anything.

And this is what is happening in Europe right now to the World Health Organization (WHO), in the controversy – much hotter there than in the U.S. – over whether WHO promoted a “fake” pandemic in cahoots with the pharmaceutical industry. The charge is ludicrous, but it has legs because WHO has been less than candid about the mildness of the pandemic (so far), and about changes in how it treats “severity” in its descriptions/definitions of what constitutes a flu pandemic.

But that’s another story.

How should WHO have integrated severity into its pandemic communications?

name: Karl
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

Field:Policy and planning
Date:January 29, 2010
Location:U.S.

Comment:

I have been reading articles on how the WHO is defending itself against the “fake pandemic” charge.

They do not seem to have taken any communications training. In their statement, they note, “The description of it as a fake is wrong and irresponsible.” How foolish.

They set themselves up for this and are failing to acknowledge the confusion they facilitated by not addressing the severity part of the pandemic equation from the beginning. Much of the world is having to deal with this critique because countries never fully explained as a foundational component of their communications strategy (if they had one) the distinction between the geographic term pandemic and the self-evident need to provide a severity context for their actions.

They said they were going to do this early on, and even looked at the U.S. Pandemic Severity Index (PSI) as a model in May of last year and gave it no credence. But then again, the U.S. chose not to use its own PSI, developed over years as a consensus document for virtually all of U.S. public health and a foundational component of virtually all state plans. The PSI was not perfect, but was an excellent foundation.

I hope everyone uses these lessons to improve their actions in the future. We now have WHO in a reactive, defensive mode they didn’t need to be in, defending their strategic approach while we are still in a tactical mode. How foolish for them and for the Council of Europe committee attacking them. They very likely will adjust their tactical messages to the world as a result – to public health’s detriment.

These are classic mistakes easily foreseen and even warned against and trained for. Human beings seem to never learn from history.

Peter responds:

I agree with you that severity is obviously relevant to flu pandemic response policy. The World Health Organization (WHO) agrees with you too – see for example section 3.2.5 of the April 2009 WHO pandemic influenza guidance document, entitled “Roles and Responsibilities in Preparedness and Response.”

WHO even gave some thought to the creation of a pandemic severity index – or at least to coming up with criteria for labeling pandemics as mild, intermediate, or severe. The question of how to describe severity during a pandemic was repeatedly discussed prior to the issuance of the April 2009 pandemic guidance document, though no conclusion was reached. In a May 2008 presentation on “Updating WHO Guidance on Pandemic Preparedness,” for example, WHO’s top influenza official, Keiji Fukuda, included a slide that starts: “Update #5 Incorporate Pandemic Severity Assessment & Reporting.”

But WHO left severity out of its April 2009 definitions of the pandemic phases, and out of its descriptions of how it would decide about ratcheting up from one phase to the next. Previous guidance documents had considered severity in their discussions of the pandemic phases. Instead of acknowledging this difference and forthrightly discussing why it believed the 2009 pandemic was severe enough to satisfy the earlier definitions/descriptions/discussions, WHO chose to defend itself by asserting (accurately but irrelevantly and misleadingly) that the formal definition of the generic term “pandemic” has nothing to do with severity. This was neither an honest nor an effective defense.

Jody Lanard and I wrote about this controversy at some length a couple of days ago.

In a nutshell: Most WHO documents describing what influenza pandemics are like, and how to respond, have talked about severity in various ways. But a streamlined flu pandemic guidance document that didn’t address severity in its revised descriptions of WHO’s pandemic phases was hurriedly published in late April 2009, shortly after the appearance of novel H1N1. Then the 2009 pandemic turned out milder than most experts expected or feared – although there was absolutely no way to know that when the novel flu virus began its inexorable global spread, and even today there is absolutely no way to know whether it will remain the case in 2010.

WHO would be wise to argue that by June 2009 swine flu was already novel, widespread, and severe enough to qualify as an influenza pandemic, though it has turned out to be a mild one so far.

WHO is much less wise to imply that severity has nothing at all to do with whether a novel, widespread influenza outbreak ought to be called a pandemic. Hiding behind a narrow, generic definition of the word “pandemic” that focuses exclusively on geographical spread is a disingenuous and ineffective way to respond to the false charge that WHO intentionally hyped an outbreak of normal flu (“une grippe tout ce qu’il y a de plus normal”) in order to enrich Big Pharma.

That said, the tougher question is how to integrate severity into our definitions/descriptions of influenza pandemics.

The U.S. Pandemic Severity Index (PSI) pretty much assumes that as long as a novel flu virus is spreading widely, even an extremely mild pandemic is still a pandemic … though this is exactly the issue at the core of the “fake pandemic” controversy. The lowest level of the U.S. PSI (Category 1) is defined as a pandemic with a case fatality rate (CFR) of less than 0.1% – that is, a pandemic less deadly than the average seasonal flu. The CFR of the swine flu pandemic in the U.S. is around 0.02% so far – one-fifth as deadly as the average seasonal flu. Is it a Category 1 pandemic, or not a pandemic at all? What would we do with a novel influenza virus spreading worldwide and killing almost nobody? Would that be a Category 1 pandemic too, or not a pandemic at all? It would still need to be watched closely, because a very mild novel influenza virus can mutate or reassort in ways that dramatically increase its virulence. But as long as it remained really, really benign, would we want to call it a pandemic at all?

In 1977, a 1950 version of a then-seasonal H1N1 virus emerged, probably as a result of a laboratory accident. It soon spread throughout the world. But most experts don’t include 1977 in their lists of flu pandemics, because the 1977 outbreak caused virtually no excess mortality, and most of the people who caught it at all were under age 23. In its 1999 flu pandemic guidance document, WHO refers to this as a “benign pandemic.” But usually WHO and most other experts do not consider the 1977 Russian Flu to have been a pandemic at all. The mildest flu pandemic we routinely call a pandemic was 1968.

Well, if the 2009 pandemic is nearly over – something we don’t know yet, obviously – then it’s turning out milder than 1968 but more severe than 1977. So it either replaces 1968 as the mildest pandemic on record or it replaces 1977 as the poster child of an outbreak too mild to be called a pandemic. (Most experts seem to be voting for the former, given the elevated death rates in people under 65 compared with the seasonal flu.) A pandemic severity index would illuminate this controversy, but it wouldn’t resolve it. The U.S. PSI, which doesn’t really address pandemics less deadly than the average seasonal flu, wouldn’t even illuminate the controversy.

Part of the problem is that there are lots of kinds of severity. The U.S. Pandemic Severity Index defines severity almost entirely in terms of case fatality rates – the percentage of those sickened who die. It assumes a (symptomatic) case attack rate of 30%. This leaves a lot of factors unaccounted for. What about a worldwide flu outbreak that sickened, say, 60% of the world’s population – disrupting everything from education to manufacturing to shipping to policing for weeks at a time as it rolled around the world – but killed almost nobody? We’ve never seen a flu outbreak like that … but as the experts keep telling us, influenza is endlessly surprising, and things that have never happened before happen all the time.

Or consider a more immediately relevant example. Although the two estimates are grounded in different methodologies, so the comparison is a long way from airtight, the swine flu pandemic has so far killed far fewer people in the U.S. than the average seasonal flu. That’s why I keep insisting on calling the pandemic mild. But even so, the pandemic has killed far more children (and non-elderly adults) than the average seasonal flu. In the U.S., CDC models and estimates suggest that the average seasonal flu kills tens of thousands of elderly people and a few hundred children. The CDC’s estimates of U.S. swine flu mortality so far: around 1,360 elderly people and around 1,180 children. Are we just counting deaths here, or should we consider who dies? Doesn’t it matter that children have more years of life left to live? Doesn’t it matter that far fewer children die in non-pandemic flu seasons?

And suppose a novel influenza outbreak sends millions of people to the hospital, burdening intensive care units to the breaking point. Isn’t that a kind of severity too, even if they nearly all survive?

When the World Health Organization thinks about pandemic severity, it is also appropriately preoccupied with the immense differences in physical condition and medical care among the world’s peoples. What do we say about a flu outbreak that the developed world can take in stride but the developing world experiences as incredibly deadly and debilitating? And how will we get accurate measures of how deadly and debilitating the outbreak is in countries whose health surveillance is as deficient as their health care?

These are tough questions – but the toughest question of all is the timing question.

Unless it is very severe from the outset, a pandemic’s initial severity isn’t its most fearsome aspect. What worries public health officials most about a pandemic is the possibility – but not certainty – that it could turn much more severe later. So one of the most important reasons to label a widely spreading flu virus a “pandemic” is to send a signal: “Watch out!”

This is largely a new capability. The 2009 pandemic was the first influenza pandemic the world’s experts were able to monitor from close to its inception. The others were “discovered” after they were well underway. Thanks to improvements in surveillance, we can now give people some warning of an emerging pandemic: a chance to implement social distancing strategies, to prepare to cope with absenteeism problems and hospital overcrowding, to teach people the right way to cough, and above all to begin the laborious process of manufacturing and administering a new vaccine. The more warning time, obviously, the better.

But pandemic severity isn’t something we can expect to know early, at least not with any confidence. Preliminary data from Mexico suggested that the swine flu case fatality rate was going to be around 0.4% – four times as bad as the seasonal flu. By the time WHO issued its pandemic declaration on June 11, we already knew that number was too high … so far. But the 1957 pandemic and even the disastrous 1918 pandemic also started with mild first waves. There was no way to know whether the 2009 pandemic would follow the same pattern: a second wave far deadlier than the first. It didn’t. In most northern hemisphere countries the second wave was mild too; in the southern hemisphere and some northern hemisphere countries, there hasn’t been any second wave yet. But WHO couldn’t have known that when it declared the pandemic.

And what are the odds that the northern hemisphere will see a third wave of swine flu in 2010 that’s far deadlier than the first two? Who knows? There were three pandemics in the twentieth century, and none of them had two mild waves followed by a whopper (unless there was a wave so mild that the much less sophisticated surveillance of the time missed it altogether). How confident does that make you that it won’t happen this time?

Here’s the essence of the dilemma:

  1. Now that we’re capable of warning people about emerging pandemics in time to do some good, it would be insane not to use this ability. Early warnings are invaluable.
  2. But when we issue those early warnings, we won’t know how severe the pandemic is with any certainty, and we won’t have a clue yet how severe it will get.
  3. Still, it’s not foolish to think there ought to be some minimum level of severity to justify calling a flu outbreak a pandemic at all, even though the need to monitor the new virus and prepare for the worst would be unaffected by this labeling decision.

So WHO has two options, neither of them ideal:

Option One: WHO can declare a pandemic early, realizing – and acknowledging – that the pandemic’s eventual severity is unknowable, and might turn out so low as to make the precautions being urged excessive. That means defining a flu pandemic without a severity criterion. There should still be a pandemic severity index, of course, but it would have no floor.

Option Two: WHO can issue its early warnings but frame them in terms of a “possible pandemic” – and later on, when the disease is waning and the data are in, make the determination of whether it turned out to be a severe pandemic, a moderate pandemic, a mild pandemic, or a false alarm. That means having a pandemic severity index with a floor, below which it’s not a pandemic at all.

In its 2009 revised pandemic guidance plan, WHO partly resolved this dilemma by making the actions prescribed for Phase 5 (when it looks like a pandemic is probably imminent) exactly the same as those for Phase 6 (“It’s a pandemic”). That’s fine for the technical aspects of pandemic response.

But the language for communicating an emerging pandemic to the world will need to be flexible, depending on what most people already have in their minds (if anything) when they hear the word “pandemic.” In 2009, the shadow of avian influenza and 1918 loomed large in many non-experts’ (and many experts’) “mental model” of a flu pandemic. A pandemic was a very frightening prospect to contemplate. The widespread impression that the 2009 pandemic has turned out too mild to deserve to be called a pandemic resulted directly from the sort of pandemic people had been conditioned to worry about. WHO’s refusal to validate that the pandemic we got wasn’t that pandemic contributed to widespread skepticism and mistrust, and ultimately to the emergence of cloud-cuckoo-land conspiracy theories.

Next time, the world’s mental model may well be the extraordinarily mild pandemic of 2009-2010 (if it stays extraordinarily mild). And so communication strategies will have to take that very different pre-existing impression into account.

In 2009, WHO did in essence declare a “possible pandemic.” That’s what Phases 4 and 5 are all about. Its reluctance to declare an actual pandemic – Phase 6 – sooner than it did was grounded chiefly in some Member States’ concerns that the novel H1N1 virus hadn’t spread widely enough yet to qualify. There was also concern expressed back in May and June (as WHO pondered ratcheting up to Phase 6) that maybe the novel H1N1 virus wasn’t deadly enough either. But in keeping with its 2009 guidelines, WHO ultimately chose to escalate from possible pandemic to pandemic on the grounds that the virus was spreading efficiently in at least two WHO regions.

The virus was looking pretty mild by June, though certainly not harmless. When it declared the pandemic, WHO did not cite any severity criterion for doing so. In the months that followed it usually referred to the pandemic as “moderate” rather than “mild,” partly because it had so little data on the effects of the disease in most developing countries.

Would it be better if a Phase 6 declaration required specified evidence of severity? Should WHO wait till a novel influenza virus that’s spreading efficiently from person to person has killed enough people? Should WHO wait till the virus shows evidence that it’s probably going to kill enough people? Is it sufficient that the virus might get more severe and then kill enough people? Or is the notion of killing “enough people” the wrong way to define a flu pandemic in the first place?

I’m not sure it matters a lot. I’m okay with Option One: calling swine flu a pandemic that’s mild so far but could get more severe. And I’m okay with Option Two: calling swine flu a near-pandemic so far that could still turn pandemic if it gets more severe. There is precedent for Option Two. In 1977, WHO officials kept an eye on the spreading Russian Flu in case it took a turn for the worse, but they did not in real time call it a pandemic.

Option Two would mean a change in established definitions of the word “pandemic” and conventional descriptions of what constitutes an influenza pandemic. Most of the experts would be happier with Option One, I’m sure. But if a sizable segment of the public has learned (from the experts) that flu pandemics are supposed to be pretty bad, and if that convinces WHO to tweak its next pandemic guidance document so the ones that aren’t so bad yet are called “possible pandemics” or “emerging pandemics” or whatever alternative nomenclature emerges, I won’t consider that a horrible outcome at all.

What matters, it seems to me, is that WHO figure out the best way to warn people that something unusual is happening that might or might not get really bad, and that now is the time to start gearing up in case it does:

  • It was important to say something in late April telling people that swine flu wasn’t widespread yet but was starting to spread and looked really deadly.
  • It was important to say something in June telling people that swine flu was spreading wildly, and that it looked less deadly than at first but that could change.
  • And in the months that followed, it was important to keep people informed about ongoing changes in swine flu’s spread and in its deadliness, and to keep warning them that more changes – including sudden changes for the worse – were among the possibilities for which it was important to prepare.

WHO’s big mistake, I think, had nothing to do with whether or not it considered severity part of the definition of an influenza pandemic. WHO’s big mistake was failing to tell people clearly enough that what was happening so far wasn’t looking remotely like the devastating pandemic the experts were worried about. In places where surveillance was good, it didn’t even look like a “moderate” pandemic. It was either a very mild pandemic or a not-quite-pandemic (I don’t care which) that might turn into the devastating pandemic the experts were worried about … or might not. So far it hasn’t.

Weather forecasters often need to warn people that a devastating storm might be headed their way. But they’re careful not to imply prematurely that the devastating storm has already arrived, or even that it’s a sure bet. They don’t want people looking out the window at gentle squalls and scoffing, “That’s what they consider a devastating storm!” And they don’t want people feeling angry and misled afterwards if the storm dies down or goes elsewhere. They want people preparing for the worst, hoping for the best, and knowing both are possible.

However it integrates severity into its pandemic definition, WHO needs to do a better job of integrating uncertainty about severity into its pandemic communications. I’m comfortable thinking of swine flu as a mild pandemic so far. But I could get comfortable thinking of it as a near-pandemic so far. What matters most is for the public to understand that nobody knows whether or not swine flu will morph into a severe pandemic in the months ahead. The public needed to understand that back in June when WHO issued its pandemic declaration, and the public still needs to understand that today.

My wife and colleague Jody Lanard contributed to this response.

Talking about a shooting … or any awful event that just happened

Name:Lisa Pogoff
Field:University health education specialist
Date:January 27, 2010
Location:Minnesota, U.S.

Comment:

We had a shooting on campus last night. What do you think of this paragraph in terms of risk/crisis communication from the university president?

Acts of violent crime are always unsettling. We’re fortunate to have a relatively low crime rate, especially for an urban campus, but statistics are rarely comforting when members of our community are the victims of crime. Rest assured that the University takes public safety very seriously. In recent years, we’ve increased the size of our police force and increased digital surveillance, with 1,800 cameras around campus. Through continued persistence, investment, and community awareness, we will provide a safe environment for students, faculty, staff, and visitors.

I think it’s good except for the “rest assured.” That sounds patronizing to me.

Peter responds:

I agree that “rest assured” sounds patronizing. Apart from being a stuffy, antiquated phrase, it basically says “we’ll handle it and you shouldn’t worry about it” – and it instructs people to feel “assured” at a time when they should be told instead how natural it is to feel agitated.

I also agree that the rest of the paragraph is pretty good, for a single paragraph. The “statistics are rarely comforting” passage is especially nice – a very empathic way to frame the statistics.

But I wouldn’t advise any client to promise to “provide a safe environment.” That’s okay as an aspirational goal, but not as a promise; it can’t be done … especially since “safe” without any qualifiers implies zero risk to many people. Promising to work to increase safety is a lot wiser.

In a longer statement, I would expect some exploration of whether there are any lessons to be learned from last night’s shooting – or if it’s too soon for that, a promise to look for possible lessons to improve campus safety efforts.

And of course if there are things that went wrong, or if there are rumors of things that went wrong, or if there will soon be rumors of things that went wrong, now is the time to address them. “You may have heard that [we were warned about this guy and did nothing; our system for a campus lockdown and automated cell phone alert failed; whatever]. Here’s what we know so far….”

Copyright © 2010 by Peter M. Sandman
