The typical product liability case once centered on a physical defect arising from a manufacturing flaw. The plaintiff would argue that the product broke or failed to work because of some physical defect, a departure from the stated specifications. The expert analysis examined the materials, assembly process and other physical factors that contributed to producing the product.
The notion of defect eventually broadened from physical defect to design defect. Even if a product behaved as advertised (in fact, because the product behaved as advertised), it could still be held defective if the design exposed the user to injury. The expert analysis then examined a new set of factors, including the degree of risk, the product's utility and benefit, safer design alternatives, the difficulty of employing different designs, etc. The most important issue, however, is whether the design takes likely user behavior into account.
Since attorneys were used to having engineers handle product defect cases, they turned to these same experts to assess warnings and human behavior. However, the expert analysis examines factors such as visual conspicuity, attention, understanding, risk perception, behavioral trade-offs, user mental models, experience and attitudes. These are the domains of perceptual and cognitive psychology, not of engineering. A proper expert must incorporate the total context: user experience, expectations, goals, limitations and predispositions. Unfortunately, many "experts" disembody behavior from its context and arrive at the wrong conclusion. The analysis of behavior toward warnings is a good example.
At first glance, warning cases appear simple. A person uses a product. He either sees the warning or not. If he sees it, he either understands it or not. If he understands it, he either complies with it or not. When the issue reaches court, the arguments usually focus on the warning's content and format: Were the colors correct? Did it have the right words? Was there a symbol? This is an attempt to disembody the behavior from its overall context by focusing attention on one minute part of the entire situation. In fact, there is very little real evidence that these issues matter much in the real world. Of course, the words in a warning must be legible and intelligible. However, content is only one small factor in determining warning effectiveness. The warning must exist in a context that supports it and renders it both credible and relevant. Here is a brief overview of some key issues:
1. People may misperceive the risk.
Research studies find that people are poor at estimating "objective risk." One common bias is overestimating the risk of rare, catastrophic accidents (nuclear power plant explosions) while underestimating the risk of far more likely accidents (highway collisions). As described below, however, "objective risk" is an oxymoron. Risk is inherently subjective and is a function of many factors, including familiarity, sense of control, voluntariness, predictability, immediacy and several other variables.
2. Risk is not an objective quantity.
People in the warning business usually treat risk as an objective quantity (accidents per number of uses, etc.). In fact, risk is a complex concept that cannot be quantified objectively. Epidemiological and similar analyses are suspect when attempting to determine whether a behavior is risky in any given situation. "Accidents per 1000 uses" is merely a statistic; it is not the degree of risk. For any situation there are many statistics (deaths, injuries, severe injuries, deaths + severe injuries, wages lost, accidents in the rain, etc.) that might be computed. Which one expresses the risk? In addition, statistics are often based on data of highly questionable validity and on unwarranted assumptions. Risk also differs for different people. An expert electrician handling electrical connections faces a different risk than a home handyman does. "Objective" statistics collected over many instances are generalities that may not apply to any specific situation.
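To make this concrete, here is a minimal sketch in Python, using entirely invented numbers, of how a single set of incident data supports several different "risk" statistics that differ by orders of magnitude:

```python
# Hypothetical incident data for one product (all numbers invented for illustration).
uses = 1_000_000          # total uses
injuries = 250            # any injury
severe = 40               # severe injuries
deaths = 5                # fatalities
uses_in_rain = 50_000     # uses in wet conditions
injuries_in_rain = 60     # injuries in wet conditions

# Each line below is a legitimate "risk statistic" computed from the same data.
print(f"Injuries per 1,000 uses:        {1000 * injuries / uses:.3f}")
print(f"Severe injuries per 1,000 uses: {1000 * severe / uses:.3f}")
print(f"Deaths per 1,000 uses:          {1000 * deaths / uses:.4f}")
print(f"Deaths + severe per 1,000 uses: {1000 * (deaths + severe) / uses:.3f}")
print(f"Injuries per 1,000 wet uses:    {1000 * injuries_in_rain / uses_in_rain:.3f}")
# The results span orders of magnitude; which one "is" the risk
# is a choice that the statistics themselves cannot make.
```

None of these numbers is wrong, but none of them is "the" risk; choosing among them is a judgment the data cannot supply.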
Most importantly, risk is a perception and is in the eye (cognition, really) of the beholder. Here's a hypothetical example of how a person's goals will drastically affect the degree of risk. Suppose a man can bet $10 against $100 with the odds of winning being 20:1 against. "Objectively," this would seem like a risky bet, since the payoff would only return 10:1. Now suppose the man needed $100 to buy a drug to save his life. The goal's value changes the apparent degree of risk. In fact, he was taking little risk because the $100 allowed him to achieve an important goal while the $10 was essentially useless. (Of course, you would need to know what alternatives he had for reaching his goal and the risk inherent in the alternatives.) This example demonstrates the folly of any attempt to quantify risk without understanding the goals and purposes of the people who are acting. The fundamental mistake made by many engineering and quantitative analyses is to take behavior out of context and to fail to consider mental models, goals and payoffs.
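To see why the bet looks "objectively" bad, here is the arithmetic spelled out, assuming the 20:1 odds against mean a 1-in-21 chance of winning:

```latex
% Expected monetary value of the bet:
% win $100 with probability 1/21, lose the $10 stake with probability 20/21.
E[\text{bet}] = \frac{1}{21}(+\$100) + \frac{20}{21}(-\$10)
             = \frac{100 - 200}{21} \approx -\$4.76
```

In purely monetary terms the bet loses almost five dollars per play, which is what makes it look risky. The point of the example is that this calculation is silent about what the $100 is worth to this particular bettor, and that is exactly the information that determines his perceived risk.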
3. People do not consider risk for most everyday tasks.
Much discussion on warnings assumes that people make conscious, voluntary decisions about whether to comply with a warning or to assume the risk. However, most everyday behaviors are performed in a schematized or automatic mode where there is no conscious thought and risk is not considered. Driving is a relatively risky activity, for example, but when was the last time that you considered the cost-benefit ratio of driving to the store for a quart of milk vs. staying at home and drinking water? The use of familiar products is also typically automatic. Did you consider risk when you last used your stove, hair dryer, oven cleaner, etc.? People have schemata, stereotyped conceptualizations of the way objects work and the way to use them. One study interviewed victims of accidents involving common products and found that 74% said that they had not been aware that they were running any risk.
4. The cost of compliance may exceed the benefit.
People may fail to comply if the cost seems too high. If the person makes a conscious decision, then he will weigh the relative costs and benefits of both complying and not complying. This means that the effectiveness of a warning depends on the user's ability to achieve the desired ends through other means. One library that I recently visited has "Do Not Use Cell Phone" signs, but it has also set aside a small room on each floor where cell phones may be used. This is a good way to encourage compliance.
5. People trust their direct observations.
If a product contains an obvious hazard (sharp edges, flames, moving mechanical parts), then people will behave self-protectively. A warning may be largely superfluous. If there are no obvious hazards, then people are far less likely to perceive or comply with warnings. Some studies find that people are more likely to read warnings if they perceive that product use is risky. This suggests that perception of the risk precedes perception of the warning, so it is unclear how much the warning is really adding to self-protective behavior.
6. People trust their control of the situation.
People perceive less risk if they believe that they can control the hazard. If a sign says "No Diving," for example, the person least likely to comply is an experienced diver. He believes that he can control his dive angle for safe entry into the water. Generally speaking, fear and anxiety are stronger when someone believes the situation is out of control. Control can refer to the ability to change the outcome, or it can merely mean the ability to predict the outcome. In either case, control stems from expertise and experience.
7. People trust their experience.
One of the few robust findings in the warning research literature is that increased experience with a product or environment decreases warning compliance. I have already discussed how repeated exposure creates automatic behavior and separates action from conscious decision-making. Furthermore, repeated behavior in a situation produces learning and a sense of mastery and control. Experience even affects the likelihood of seeing a warning at all. As explained on another page, conspicuity is largely a function of meaningfulness. If a person learns that a warning has little significance, then it will not even be noticed.
8. Warnings are common, accidents are rare.
Each warning must be considered in the context of all other warnings. Most people go through life bombarded by warnings concerning just about everything. Yet most people rarely experience a major accident. Naturally, people become skeptical about warnings, especially since they are aware that warnings often arise from litigation and not safety concerns. People see miles of orange barrels in the absence of any construction workers, for example, and quickly learn to ignore them as well as the accompanying speed reduction signs. Moreover, people will generalize across products in the same category. For example, suppose a person uses one over-the-counter cold remedy and has no safety problems and perceives no risk. If the person switches to another cold remedy, the safety experiences from the first product will likely transfer to the second. People readily form categories and treat objects within a category similarly.
Do Formats And Codes Matter?
There is little reason to believe that particular formats, colors, shapes, etc. affect user compliance.1 Research, consisting primarily of laboratory studies, has produced conflicting results about the best color, shape and even contents. The conflict probably arises because 1) format is a very minor variable compared to the factors described above and 2) meaningfulness is important for attracting attention and interpreting the warning. The laboratory studies use artificial conditions employing naive subjects who have no experience or purpose, so any effect of experience or meaningfulness is absent.
Standardized warning formats may even work against safety. Suppose a person sees several warnings with the same color/shape, etc. format and learns that the information they contain is irrelevant. He can use the format to help filter out future "useless" information without reading the words. The ultimate effect is to reduce warning effectiveness.2 This is an example of the "cue generalization" phenomenon discussed elsewhere.
When Do Warnings Work?
Given the number of factors that decrease perceived risk, it is hardly surprising that there are remarkably few credible studies that document warning effectiveness. Most pro-warning evidence comes from laboratory experiments,3 which generalize little to the real world. Outside the lab, evidence of warning effectiveness is spotty, to say the least. The most notable exception is out-of-order signs. They are often effective, likely because they provide a contingency that is both certain and immediate.
However, there are specific conditions where warnings have a greater ability to modify behavior. The best guide is to examine the "contingency," an operant conditioning concept that refers to the connection between behavior and its consequences. People are most likely to comply when behavioral consequences (see the sketch after this list):
Have greater magnitude;
Have a lower response requirement;
Occur immediately after the response; and
Occur with high probability.
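As a purely illustrative sketch (the functional form, weights and scenario numbers below are my own invention, not an established model), these four contingency factors might be combined into a rough compliance score:

```python
def compliance_score(magnitude, effort, delay_s, probability):
    """Toy compliance score built from the four contingency factors (illustrative only).

    magnitude   -- perceived severity of the consequence, 0..1
    effort      -- response requirement, i.e. the cost of complying, 0..1
    delay_s     -- seconds between the behavior and its consequence
    probability -- perceived chance that the consequence occurs, 0..1
    """
    immediacy = 1.0 / (1.0 + delay_s)  # remote consequences are heavily discounted
    return magnitude * probability * immediacy * (1.0 - effort)

# "Out of order" sign on an elevator: certain, immediate, cheap to comply with.
print(compliance_score(magnitude=0.6, effort=0.1, delay_s=0.0, probability=1.0))  # 0.54
# Cigarette-pack warning: severe but remote, uncertain, and costly to comply with.
print(compliance_score(magnitude=1.0, effort=0.8, delay_s=3e8, probability=0.3))  # effectively zero
```

The numbers mean nothing in themselves; the sketch only encodes the qualitative claim that magnitude, probability, immediacy and response cost multiply, so a consequence that is remote and uncertain drives compliance toward zero no matter how severe it is.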
The effectiveness of warnings on cigarette packages is small, for example, because of the contingencies. If you smoke now, the bad consequences are uncertain and lie far in the future. The effects of a "road closed" sign, or of an "out of order" sign on an elevator, are stronger because the negative consequence is immediate and certain. The power of contingency has even been noted by nonexperts. Writer Dave Barry once quipped that cigarette smoking would end overnight if the package said, "WARNING: cigarettes contain fat!"
Ironically, clinical psychologists sometimes treat patients who fear an object by a method called "systematic desensitization" (a term borrowed from respondent conditioning), in which people are repeatedly and gradually exposed to the "hazard." Over the course of treatment, the fear extinguishes and people habituate to the "hazard." This same process occurs naturally when people repeatedly face the potential but unlikely hazard signaled by a warning. People learn and people adapt. This is perhaps the fundamental property of human nature.
Conclusion: Are Warnings Effective?
The final answer to the question "Are Warnings Effective?" is that there is no general answer. There are documented examples of real-world warning effectiveness, but they are astonishingly few compared to examples where warnings failed to create any clear accident reduction. This is hardly surprising given the contingencies, the low probability and remote effects of most hazards, the reduction in perceived risk that comes with experience, and the development of automatic behavior that removes risk from consideration entirely. At the very least, it cannot be assumed that a label or sign will function as a "warning" merely because someone calls it one, especially if it has not undergone scientific testing. Always remember that function is not an innate property of any object, but rather is viewer-dependent. However, each situation is different, so general conclusions are risky. In any specific case, a proper analysis requires examination of the issues described above and elsewhere. In sum, a genuine examination of warning adequacy requires a thorough knowledge of human perception and cognition that can be applied to each specific instance.
Footnotes
1. All things being equal, yellow is the best color for attracting attention. This is true for several reasons, but the most important is that it has the highest spectral luminosity of any chromatic color. However, its effects can be defeated by the cognitive variables described here.
2. The New York State Thruway started using new construction zone safety signs that say, "Slow Down, My Daddy Works Here" written in a child's scrawl. They are effective in drawing attention because they are unusual. In other words, they are effective because they do not follow standard format. Unfortunately, anyone who regularly drives past these signs will eventually tune them out as well.
3. E.g., Green, M. (2001). Caution! Warning literature may be misguided. Occupational Health & Safety, December, 16-18.