Posted: April 10, 2002
Article Summary: When things go badly wrong for a company or government agency, there were usually precursors, and the failure to heed these warnings is a familiar feature of post-disaster recriminations. I call the precursors/warnings “yellow flags” – yellow, not red, because in real time it’s usually impossible to tell whether they’ll turn out to be a minor wrinkle or a major flaw. This column addresses the choices companies and agencies face with regard to yellow flags: whether to let yourself know about them at all; whether to investigate the ones you know about; whether to stop what you’re doing while you await the results; and whether to tell the rest of us what’s up. The column focuses on the last of these choices, arguing that transparency about yellow flags is not just the best way to get them investigated properly; it is also the only way to prevent people from imagining afterwards that they were red flags.

Yellow Flags: The Acid Test of Transparency

Virtually every time a company gets into serious trouble – from Firestone to Enron – a major theme in the recriminations that follow is that the company was warned and chose not to take the warnings to heart. It is vanishingly rare for a disaster to have no precursors. Once the disaster occurs, we identify the precursors, establish that you paid them little heed, and hold you responsible for willful ignorance at best, intentional evil at worst. Firestone had data in hand showing a pattern of tire blowouts and rollovers on Ford Explorers; Enron’s Ken Lay met with an accounting executive who told him the company was cooking the books. They chose not to pay attention, we tell ourselves, so they are to blame for everything that followed.

The logical problem here, of course, is that while disasters are almost always preceded by warnings, warnings are often not followed by disasters. If we were to investigate a collection of products and processes that have proved (so far) to be benign, even beneficial, we would find that most of them were also preceded by warnings of possible disaster. The technical challenge is to decide, beforehand, which warnings to take to heart, which to dismiss … and which to accord some sort of intermediate validity, to treat as reasons to move more slowly and cautiously, but keep moving nonetheless, pending further investigation.

The communication challenge, meanwhile, is what (if anything) to say publicly about these warnings whose significance hasn’t been assessed.

In my work with clients, I often call these warnings/precursors “yellow flags.” The analogy is from auto racing, where a yellow flag signals that something ahead merits caution; the driver may keep moving, but carefully, and may not attempt to pass other cars. Racing also has green flags and red flags, which signal, respectively, a return to normal racing conditions and a serious accident ahead. In racing, track managers work to resolve the yellow flag into a more definitive signal. Companies have to make this determination on their own – with or without stakeholder consultation.

Choices When Facing a Yellow Flag

A company deciding how to deal with a particular yellow flag must make two decisions: How to respond to what you know so far, and what efforts to make to learn more. The two are interconnected. Typically, what you have at first is little more than a straw in the wind – a strange anomaly in an otherwise reassuring data set, a dissenting staff member who sees a problem nobody else seems to see, an unsubstantiated claim. You would hardly abandon a promising business venture on such flimsy ground. So either you proceed full-speed-ahead and ignore the warning; or you proceed full-speed-ahead while investigating the warning; or you slow down or even stop temporarily until you have learned more. This is an iterative process, of course. Once you learn more, you must once again decide whether to stop, slow, or maintain speed; and whether to investigate further or settle for what you know already.

Obviously, honorable people can disagree about what sort of yellow flag is trivial or unconvincing enough to dismiss (that is, to treat as green); what sort deserves continuing caution (further research, and perhaps a temporary halt or slowdown until more is known); and what sort is serious enough to derail the innovation (that is, to treat as red). Inevitably, proponents will tend to see green in situations where opponents see red. But everyone ought to be able to agree that all three categories exist in principle. That is, some warnings are dismissible, some merit caution, and some undermine the whole enterprise.

This is a point worth belaboring. I often ask my clients to describe a research result, or a set of research results, that would persuade them that they should retreat permanently. If they cannot specify conditions under which they would abandon the project, by definition they are not qualified to assess whether those conditions have been met – and thus not qualified to assert that they haven’t been met, that the project is acceptably safe. The same is true on the other side. Even though it is much harder to “prove” something safe than to prove it dangerous, opponents of a technology are obliged to be able to specify conditions under which they would concede that it is safe enough to proceed; otherwise their claim that it isn’t has no meaning.

Similarly, there must be some yellow flags that deserve further research, and some that do not; some where caution dictates a moratorium or at least a slowdown until more is known, and some where it is appropriate to dot an i or cross a t while moving full-speed-ahead. Again, people may disagree over where to draw the line (based on their attitudes toward risk-taking in general and this technology in particular) – but anyone who pretends there is nothing on the other side of the line cannot help determine where the line should be drawn.

In a well-ordered universe, all parties would concede that yellow flags need to be judged along the lines described above, and would proceed to debate the specifics: which warnings to shrug off and which to investigate; when to stop or slow the pace of change and when to maintain speed.

Pretending Yellow Flags Are Green

But in the universe we actually inhabit, yellow flags instantly morph into green or red flags, depending on the values and interests of the observer.

Each side blames this on the other side. Companies believe that since activists are bound to pretend the yellow flag is red, they have no choice but to pretend it is green. Activists believe that since companies will claim the flag is green, they have no choice but to insist it is red.

Both sides have a point. What’s important to notice, however, is that the standoff that results works much better for the activists than for the companies. When an activist group interprets uncertain warnings as sure signs of disaster ahead, its exaggeration of the risk is likely to be seen by the public as a forgivable, even desirable (because conservative) sort of exaggeration. But when a company pretends the yellow flags are green – that is, when the company dichotomizes risk, insists that the technology in question is risk-free, hides the yellow flags it can hide and minimizes the ones it cannot hide – it plays into its opponents’ hands.

It does so in at least three ways.

By dichotomizing risk and claiming there are no warning signs whatever, the company sets a standard it cannot meet. Any subsequent evidence of risk – however small the risk, however uncertain the evidence – becomes evidence that the company was either mistaken or dishonest. Ultimately it doesn’t matter which came first, the company’s claim of green or the activists’ claim of red. Either way, the company’s unwillingness or inability to see any yellow establishes that green and red are the only available colors – and thus legitimates and strengthens the activists’ contention that yellow is red. As I incessantly tell my clients, in a battle between “perfectly safe” and “incredibly dangerous,” “incredibly dangerous” is a sure winner. But in a battle between “a little dangerous” and “incredibly dangerous,” “a little dangerous” is a contender. Polarization of risk serves opponents better than proponents. Yellow should therefore be the companies’ favorite color.

Consider for example the finding that genetically modified Bt corn can kill monarch butterflies. There is nothing intrinsically devastating about this finding. The monarch butterfly is not an endangered species, and Bt corn adds little to monarch mortality in the field – corn kills orders of magnitude fewer butterflies than speeding car windshields. Nor is the finding surprising. After all, the reason for adding the Bt gene to the corn’s DNA in the first place is that Bt is a natural pesticide, capable of killing the corn borer; corn borers and monarchs are cousins, both members of the order Lepidoptera. The GM seed industry made collateral damage to monarchs a damaging finding by not reporting it, allowing opponents to report it instead. Not that the companies hid the fact that Bt can kill insects; they just deemphasized the downside. They implied a small problem was a nonproblem, clearing the way for critics to claim it was a huge problem.

As I noted at the start of this column, after a disaster the public always finds the precursors and blames the company for having ignored them. I should add, we blame you also for having concealed them – thus preventing us from making you address them before it was too late. When you minimize the risk of a technology and hide or dismiss the yellow flags, you are thus setting yourself up for maximum blame and maximum punishment if things go wrong.

Worse yet, you are setting yourself up for the rest of us to imagine things have gone wrong even when they haven’t. Consider for example the silicone breast implant story. When doctors first suggested to the silicone industry that it might produce breast implants, this expansion of the product line seemed more a public service than a profit center. And silicone, widely considered to be virtually inert, seemed the ideal material for such a surgical use. So the product was brought to market without much safety research; it seemed self-evidently safe. In the decades that followed, there were occasional yellow flags (an anomalous medical report here, a lawsuit there) suggesting that silicone implants might be causing systemic disorders in patients. For the most part, these yellow flags were dismissed without much thought, certainly without much further research and, importantly, without much public consultation.

The silicone industry didn’t think it was hiding a bombshell; it was ignoring a distraction. Only when the litigation got serious – bankruptcy-level serious – did the research get serious as well. There is now good evidence that silicone breast implants probably do not cause systemic disorders. Of course the research was too late (and too reluctant) to persuade many plaintiffs; silicone is no longer available in the United States for breast implant use, and the silicone industry has paid out billions in claims. That was the penalty for ignoring yellow flags that did indeed turn out green … but that looked red to the public precisely because they were ignored.

The third downside of pretending that yellow flags are green results from corporate efforts to avoid the first two. It is by far the most serious effect, I think, though it is by its very nature difficult to document. Here it is in a nutshell: In order not to get caught hiding or minimizing yellow flags, companies often arrange not to know about them. They thus increase the probability that they will stumble into an outcome disastrous not only to the company but to the rest of us as well. And they also fail to avoid more modest but avoidable problems – because they didn’t dare to recognize that the problems were there.

Willful Ignorance

It was predictable that Bt corn would probably cause some collateral damage among monarchs, and predictable that the damage would probably be modest. Knowing this, put yourself in the position of a company that markets Bt corn seed. What are your options?

To start with, you can conduct an open and collaborative study on the effects of Bt corn on monarchs. This is my preferred option (predictably). But it’s easy to see why a company wouldn’t like it much. If you find a more serious impact than you expect, you’re stuck with it. And if you find the modest impact you do expect, critics will inevitably make much of it. “Even the industry,” they’ll say, “admits that genetically modifying corn seed with Bt can have a devastating effect on monarch butterflies.”

Well, okay, you say, so do the study secretly. Establish to your own satisfaction that you’re not going to eradicate the beautiful monarch. Make sure the effect on monarchs is tolerable, in your judgment; then (if it is tolerable) keep quiet about it so critics won’t be able to exaggerate your data or spin the results to a different conclusion. Readers familiar with my work know already why I’d argue strenuously against this option. There are strong grounds to question the ethics of keeping research results secret – especially “yellow flag” research results where there is room for dispute about the seriousness of the findings. And ethics aside, secrecy is a high-stakes gamble. Secrets tend to come out. Whistleblowers release them, or regulators demand them, or they emerge from the discovery process of a lawsuit. And nothing turns a yellow flag into a red flag more quickly than a failed attempt to keep it secret.

And so companies retreat to a posture of willful ignorance. Better not to study the matter at all, you understandably conclude. Maybe nobody will. (After all, who except the companies involved has much money to spend researching the downside of a new product?) If worse comes to worst and some academic actually does the study – as happened with Bt corn and monarchs – the company can always mobilize then to question the study’s methodology or significance, and if absolutely necessary to commission further research to establish that the problem is small.

It should be obvious that this is the worst of all possible outcomes: Your company deprives itself of crucial knowledge about the riskiness of its product line because it is afraid to know. (In the Bt/monarch case, the company can probably get away with arguing that it did know, based on theory and intuition if not data. But many risk assertions cannot be assessed without data.) A company that is secretly making sure it’s not about to unleash a disaster may be ethically challenged and courting public outrage, but it’s a vast improvement over a company that declines to make sure because the rest of us may find out and misuse what it learns.

How often does this happen? I don’t really know. Much research is required (and made public) by regulators, of course. But much isn’t – and where the research isn’t obligatory, I think, companies are tempted to remain as ignorant as they can. I often ask my clients if they have any data relevant to this or that potential problem, and they say they don’t. Maybe they’re lying; maybe they have data they consider reassuring but think their critics would treat as alarming, data they have therefore decided to keep to themselves. But at least some of the time – and I suspect it’s most of the time – they’re telling the truth. They don’t know something they ought to know … because they have decided that knowing is too dangerous.

I don’t mean to suggest that a large company would intentionally avoid knowing about a huge and imminent problem in the belief that ignorance is more profitable. Ignorance of a huge and imminent problem simply isn’t more profitable. Huge and imminent problems generate huge and imminent liabilities, and companies are smart enough to want to catch them early. What companies want to avoid is collecting data about a small problem – data that their opponents can use in news conferences and courtrooms to make the problem look huge. So if they think the problem is probably small, they tend to ignore it. Once in a while that means they miss a huge problem they thought was probably small. More often it means they aren’t as prepared to deal with the small problem as they could have been, if they had been willing to investigate that yellow flag properly.

It gets worse. Not only are my clients reluctant to ask questions for fear they’ll be stuck with the answers. They don’t even want to be stuck with the questions.

Remember, by definition not all yellow flags deserve to be investigated thoroughly. Some you look at and decide they’re farfetched or trivial or both, and that’s the end of the matter. But a company can get into almost as much trouble for having considered a possible problem and decided not to study it as for having studied the problem and decided to hide the results. This is an obvious disincentive to raise possible problems at all, even inside the company.

I once participated in a series of high-level corporate meetings to address “unanswered questions” about a particular technology. The technology was hotly controversial and much-studied. Acting responsibly, even bravely, the company commissioned its own research department to produce a list of possible risks that had been alleged but not thoroughly studied. Over the months that followed the company began to figure out which unanswered questions it wanted to try to answer and which it was content to leave on the list.

I was amazed at the extraordinary level of security that attended this process. The list of unanswered questions was handed out at the start of each meeting and collected at the end; no notes were permitted, lest they be discoverable. I urged the company to go to the other extreme. Publish the list, I said. That way stakeholders could help determine the research priorities – and begin to understand that not all yellow flags are equally urgent. Management thought I was crazy. In fact, some at the meetings thought they were crazy to produce such a list at all. A less safety-conscious company, I was told, wouldn’t even have begun the process.

And who knows what information never made it onto the list. Lower-level researchers even at this company had told me they were discouraged from passing along minor reservations arising from their various studies. If they found a big problem, they explained, they were certainly expected to report it up the hierarchy, where higher-ranking managers would decide whether to investigate further, whether to proceed more cautiously in the meantime, and whether to tell anyone. But it was clear in the culture that these decisions constituted a big burden, and only big problems merited imposing that burden on management. In other words, fairly low-ranking research staffers were dichotomizing their yellow flags into green and red. The reddish yellow flags they communicated up the hierarchy; the greenish yellow flags they buried without a paper trail.

I think it is impossible – literally impossible – for a company to take environment, health, and safety seriously unless the company consciously looks for possible environment, health, and safety problems it hasn’t yet addressed, and then decides which ones to address and which not to. The line between tolerable risks and intolerable ones is a fuzzy line. So is the line between risks that deserve further investigation and risks that do not. Leaving aside the question of whether stakeholders deserve a role in drawing these lines, there is no question that companies (with or without stakeholder involvement) are obliged to draw them. And you cannot draw these lines unless you consider candidates that end up on both sides of the line.

When I ask a client whether there are any risks the company considered investigating or ameliorating and decided not to, this is a trick question. If the answer is no – and if it’s an honest answer – then the company by definition isn’t managing its risk portfolio properly. You can’t manage risks properly without running into some that you investigate and then decide not to ameliorate, and others that you decide not even to investigate. You can’t be defining these lines sensibly if you’re claiming there is nothing on the other side.

If you have investigated and ameliorated every risk you can think of, then you haven’t thought of all the relevant risks. There are always risks you haven’t investigated and risks you haven’t ameliorated. If you don’t know what they are, you don’t know enough about the risks you face.

And if the reason you don’t know is that you daren’t find out, for fear that the answers won’t stay secret and your stakeholders will torture you with them, then your communication problem is causing a technical problem. In the language of “hazard” and “outrage,” your fear of public outrage is creating a hazard – a hazard to the company and a hazard to the public.

The Solution: Transparency about Yellow Flags

There are only three possible ways to cope with a yellow flag:

  • Deal with it as you think appropriate (investigate or not; slow down or stop in the meantime or not) – but keep it secret.
  • Don’t let yourself know about it.
  • Acknowledge it – and deal not just with the yellow flag itself, but also with the stakeholders’ demand to help make the decision about how to deal with it.

Companies rightly judge the first to be very dangerous. They hate the third too much even to assess its pros and cons. And so they slide into the second … which is by far the worst for all concerned.

I am a specialist in risk communication and outrage management – not risk assessment. I have no special qualifications to judge which yellow flags merit investigation and which don’t. But the reasoning in this column leads me to an inescapable conclusion: If a company isn’t transparent about its yellow flags, it probably isn’t assessing them properly.

Many stakeholders have reached the same conclusion with rather less tortured logic – simply noting that the company has a stake in the outcome and therefore cannot be trusted to make wise judgments. That is, most activists and other critics take it for granted that companies will look at yellow flags through green-colored glasses. Whether consciously dishonest or merely self-deceptive, companies will tend to see what they want to see.

There is a lot of truth in this view. I think conscious dishonesty is less common in corporate decision-making about risk than the activists imagine. But human self-deception is far more common than my clients imagine. (I wrote an earlier column, “The Stupidity Defense,” that touched on this theme, arguing that “evil” and “stupid” are competing explanations for the same mistakes, and that stakeholders would assume evil less often if companies confessed stupidity more readily.) Of course activists are also human, and also vulnerable to bias, conscious and unconscious. But that’s not a reason to avoid transparency. In fact, it is a reason to embrace transparency. Americans are characteristically unwilling to trust any single authority with a monopoly on wisdom. We believe in “checks and balances”; we think truth emerges more reliably from contending biases than from the search for a judge who is bias-free.

That doesn’t mean it’s fun for company managers to open themselves up to criticism from stakeholders, some of whom will criticize unjustly (or at least it’ll feel that way). It isn’t fun. But it is by far the best of the three options – much better than assessing risks secretly and much, much better than failing to assess them at all.

Of course for companies that have been hiding yellow flags for years, there’s going to be a transition problem when they begin to haul them out of the closet. You can’t pretend you just discovered them; you’ll have to admit you recently decided to acknowledge them at last. That in itself will cause some outrage, and justly so.

But once you’ve survived the transition, transparency about yellow flags has only one major cost: The yellow flags that look greenish to you and reddish to your critics will end up getting more attention than you think they deserve. Some money will be wasted proving they’re greenish, just as you thought.

Other outcomes that may look like costs are actually benefits in disguise. In particular, if the yellow flag actually is reddish, you (and the rest of us) will find out sooner. Your worst critics expect you to see that as a cost, a prohibitive cost, in fact. But do you really want to foist a dangerous new technology on the world because you mistakenly thought it was safe? Leave altruism aside. Is that how companies build shareholder value?

Meanwhile, consider the remaining list of benefits:

  • Greenish yellow flags are likelier to look greenish – that is, less likely to be misperceived as red – when they are acknowledged and investigated promptly than when they are belatedly discovered by your opponents.
  • Learning early that a yellow flag is green takes the issue off the table. In fact, the gold standard for transparent research is a study that both sides agree in advance will resolve the question.
  • If studying the yellow flag reveals a problem that can be ameliorated, there is time to ameliorate it. If it can’t be ameliorated, but is small enough to tolerate, there is time to forewarn people so they’re not blindsided.
  • The debate over which yellow flags to investigate and how to assess the resulting data is less distorted by stakeholder outrage when the company sought the debate instead of trying to avoid it.
  • Taking yellow flags seriously – in public – earns you a reputation for caution. This is a bankable reputation; we want to be led into the dangerous future by people who know it’s dangerous. “Half Speed Ahead!”
  • When stakeholders help prioritize the yellow flags, they are forced to learn (and acknowledge) that risk is not a dichotomy, that zero risk is not attainable, and that not every hypothesis is worth testing.

One of the hottest issues today in the world of risk management is the meaning of the so-called Precautionary Principle. This deserves its own column. But I do want to note here that the Precautionary Principle is all about how to handle yellow flags. The extreme version says all yellow flags must be considered red until proven green with certainty. Since it is impossible to prove a negative, much less to do so with certainty, and since there is an infinity of yellow flags, this version of the Precautionary Principle is a recipe for paralysis. Often that seems to be what its advocates have in mind. (In all fairness, activists only occasionally describe their viewpoint this extremely; industry does so routinely, then shoots down the straw man.)

But it makes no sense to argue – as industry sometimes seems to argue – that the impossibility of disproving all possible risks means there is no need to investigate any of them. A company that scoffs at all yellow flags is by definition too incautious to be allowed to proceed. A company that pretends to scoff at all yellow flags, while secretly investigating the ones it thinks might be serious, looks too incautious to be allowed to proceed. You earn the right to dismiss a particular yellow flag by building a record of taking all yellow flags seriously, dismissing only some, and only after thoughtful, public consideration.

Above all, companies need to learn to defend technologies without insisting or pretending or implying that they are risk-free. There is a curious paradox here. Industry often argues, correctly, that the public must learn to live in a risky world, must learn that nothing is risk-free. But industry does more than any other institution to perpetuate our naiveté on this issue. At its worst, industry ignores the risk until it can’t; then it claims the risk doesn’t exist; only when forced to do so does it belatedly acknowledge that, yes, the risk exists but it is small and tolerable. By that time we’re in a mood to assume the worst. Imagine an early electric power industry that first ignored, then “debunked” warnings about possible fire and shock risks. It would have devastated its credibility. Worse, its denial might well have convinced the world that if fire and shock turned out to be real hazards after all, we would have to abandon electricity altogether.

There are yellow flags that aren’t worth investigating and yellow flags that are. Among those that are investigated, many will turn out to be nothing after all. Some will turn out real but manageable, others real and serious. Once in a great while a yellow flag will turn out so horrific that an entire industry becomes infeasible. Companies and industries, especially controversial ones, must show that they know all these categories exist. When they imply that virtually no yellow flags are worth investigating, or that virtually all yellow flag investigations will find no problem, they set the rest of us up to imagine that if a problem is found it spells disaster.

To me the most interesting aspect of yellow flag transparency is that the obvious answer doesn’t seem so obvious to my corporate clients. Activists think they know why:  The companies realize that transparency about yellow flags would reveal the seriousness of the risk, so the companies resist transparency to protect their investments in dangerous technologies. I don’t doubt that this must be true occasionally, but I very much doubt that my corporate clients are routinely stupid enough or evil enough to pursue a technology they themselves consider dangerous.

Nor is it enough to note that companies simply don’t enjoy being forced to collaborate and compromise with the enemy. True, they don’t enjoy it. But they do it, more and more frequently. So why is research one of the last bastions of unwillingness to be transparent, unwillingness even to consider being transparent?

My best guess (and it’s only a guess) is that the key barrier to transparency about yellow flags is ego – in particular, ego about one’s own integrity. Remember, activists rarely make the case that companies are only human, and therefore prey to the same self-justifying, wish-fulfilling biases as other humans. Rather, activists tend to argue that companies are intentionally dishonest, corrupt, evil. For a corporate representative, then, conceding the need to be transparent about yellow flags feels like a confession of evil. Insisting on the trustworthiness and comprehensiveness of one’s own research becomes a claim of honor. This isn’t mostly a matter of reputation in the external world; in the external world, companies would look more honest if they made themselves more transparent. It is about “self-reputation” – which is the kind we all value most.

An observation that supports this hypothesis:  I have urged at least half a dozen corporate clients to develop a systematic process for addressing all yellow flags transparently. In each case, the most fervent opposition came from the researchers themselves, who responded as though their integrity was being questioned. (Even the commonplace psychological observation that unconscious bias is widespread among researchers, which is the rationale for double-blind research in medicine, often leads researchers in the “hard” sciences to take offense.) CEOs and other senior managers, on the other hand, often seem interested in the idea … though in all fairness I’ve yet to have a client move very far in this direction.

Once in a while a group of researchers tries to move in this direction. Perhaps the best-known example is the 1975 Asilomar Conference on genetic modification. Biotech scientists (as they’re now called) were worried that their work might be dangerous, and that it was proceeding too quickly with too little attention to possible risks. So they organized an international conference to discuss what they knew and didn’t know, and to draw up recombinant DNA research guidelines. The guidelines included a moratorium on some kinds of work until more was known. They became the basis for much early government regulation of biotech research.

Asilomar has been called a “milestone of self-regulation in science.” It wasn’t perfect: It lasted only a few days; non-scientist stakeholders weren’t invited to participate; the focus was on just one hazard, the escape of genetically engineered organisms from the laboratory. Still, it could have been a promising start. Where would the biotech industry be today if the Asilomar Conference had been the first of many, if the self-questioning spirit of that conference had led to a systematic program of transparency about biotech yellow flags?

What am I recommending? The three points below will probably strike you as unrealistic, and will strike your attorneys as crazy.

1. Make a list of your yellow flags.

Ask your research people what the unanswered questions are. Examine your critics’ claims with a new eye, looking for testable hypotheses that haven’t been sufficiently tested. Collect anomalies: surprising outliers in a data array, weird findings that were never replicated, off-the-wall and off-the-cuff comments, dissenting opinions.

2. Share your yellow flags, and share your decision-making about them.

Start with a public “data dump” of internal documents (put them on your web site), so you’re sharing not just your list but the source material for your list. Ask stakeholders to add to the list. Then begin the process of developing a collaborative research agenda, and an action plan for the interim.

3. Nurture a corporate culture that takes yellow flags seriously.

Don’t assume you have one. If you haven’t done #1 and #2 yet, you don’t.

Postscript on Tort Reform

When I ask my clients why they’re reluctant to be transparent about yellow flags, reluctant to investigate yellow flags, even reluctant to talk about investigating yellow flags, they often mutter something about “tort reform.” I think this is more an excuse than a reason, but it is still worth a brief discussion.

Note that not every area of tort law puts pressure on prospective defendants to be as ignorant as possible. Consider medical malpractice law, for example. My wife (Dr. Jody Lanard) is a psychiatrist. She tells me that doctors are taught to document right on the patient’s chart the diagnoses they considered and rejected, the evidence pro and con, and why they decided on a different diagnosis. Doctors don’t always do this properly, but the point is that their legal advisors tell them to try. Suppose the patient was treated for Disease X and ends up dying of Disease Y. Now the family is suing the doctor for malpractice. The doctor is in better shape to defend the suit if the chart shows that s/he considered Y and ruled it out for specified reasons than if the chart makes no mention of Y. In other words, doctors are encouraged by tort law to consider all the alternatives, to assess the yellow flags, to “rule out” – consciously and explicitly – even the unlikely diagnoses. They are in far worse trouble if they never thought of Y than if they mistakenly but cogently ruled it out.

With all that may be wrong with medical malpractice law, this is admirable. It is in stark contrast to the legal precedents governing toxic tort litigation. A chemical company, for example, wants to have nothing in its files suggesting that dimethylmeatloaf might be a carcinogen. A chain of memos in which the company’s people debate the issue and ultimately decide, no, it’s not a carcinogen (reasoning cogently even if mistakenly) is seen as ammunition for the plaintiff; far better never to have considered the possibility.

In a fascinating though densely written article on the Internet, Phil Regal offers an instructive analogy. (See http://biosci.umn.edu/~pregal/whatdrives.htm for the full treatment.) Regal is a scientist who has focused much of his work on biosafety; he is not a biotech opponent, but he is certainly a critic of how the industry has dealt with risk issues. Regal compares deciding whether to study biotech yellow flags with deciding whether to install a better lock on one’s front door. Like a better lock, additional research buys additional protection and additional peace of mind – but at a cost. So in deciding how much to spend on locks (or research), you weigh the cost, the riskiness of the neighborhood, and how anxious you are in the first place.

Now, Regal continues, suppose you’re a landlord – worse yet, a landlord whose tenants aren’t free to move. If you acknowledge that there are (or may be) real risks out there, that your tenants are more worried about these risks than you are, that better locks do exist but you think they cost too much, you’re setting yourself up for pressure from tenants to buy the better locks and for lawsuits from tenants if you don’t and they get robbed. Says Regal: “The safest course of action may be for a landlord to claim that s/he believes that the risk of crime in the neighborhood is overrated but that s/he is following police advice anyway and the building is as safe as it can be.” In other words, says Regal, the biotech industry is legally wise to “play dumb,” to keep claiming that allegations of biotech risk are not credible and that the people making them are hysterical, unscientific, and biased.

I am no more an attorney than I am a risk assessor. I don’t know why tort law encourages thoughtful attention to yellow flags by doctors and willful ignorance by chemical companies and biotech companies. But it would help to get that sorted out too.

Copyright © 2002 by Peter M. Sandman
