Computer And Medical Device Error
Marc Green
Computers are playing an increasing role in everyday life, so it
is not surprising that incidents involving computers have become a
common matter of litigation. In a wide variety of technical,
financial and other situations, people make decisions and perform
responses based on the appearance of a computer screen. The
increased reliance on computer-controlled devices in medicine is particularly noteworthy because of the critical safety issues involved. As I will discuss below, this produces a conflict between the usual goals of interface design (intuitiveness and ease-of-use) and the safety of patients.
When an error occurs, the question naturally arises as to how one
can apportion responsibility between the human and the computer: was
the computer poorly designed or was the human negligent?
In many cases, the critical component of the computer is its interface.
Users/operators generally do not understand the computer's inner world
of bits, bytes, files, RAM, etc. Rather, they understand the computer through
its interface, the text and images that appear on the screen. Hence, a
popular saying in the computer world is that for the user:
The Interface Is The System
User/operator actions can only be evaluated in the context of the interface.
As an analogy, suppose a driver fails to see a STOP sign, runs an intersection
and hits another car. Is it the driver's fault? If the STOP sign were highly
visible, the answer would likely be "yes." If the STOP sign were hidden
by foliage, then the answer is likely "no" because the information was beyond the driver's ability to perceive and respond to. Similarly, the actions
of a computer user can only be evaluated in reference to the quality of interface
design.
In order to assess responsibility, it is important to understand how
interfaces are designed. Below, I provide a brief outline of the methods
which designers use in creating and evaluating computer and other man-machine
interfaces.
Standard Practice in Interface Design
There are two principal issues which should be investigated when there
is an accident involving computers. First, the computer interface should
be evaluated for adequacy of design. Faulty design could be construed as
negligence on the part of the designers. As discussed below, there are
a number of standard criteria available for evaluating adequacy. Second,
the interface design procedure should be closely examined. Just as there
are standards for nursing practice and for medical care, there is standard
practice for interface design. Although not formally codified, the standard
practice is generally understood by interface professionals and is described
in most design texts. Failure to follow the generally agreed practice is
also a possible source of negligence.
Most interface design occurs in four overlapping phases, plus an optional fifth phase.
1. Requirements Gathering
The designer studies users and their tasks and attempts to develop a
set of requirements, which state what the interface should do. This stage
is critical, since misunderstanding the users (or user classes if there
are different groups) will guarantee a flawed interface design.
2. Prototype Development
The next step is to develop a prototype with which to test users. In
most cases, the initial prototypes are crude (sometimes even paper and
pencil) and become more refined with user testing. (See next phase.)
3. Formative Usability Evaluation
A good designer will continually test users in order to validate the design and to test assumptions. Each design is really a guess until it has been
tested and found adequate by a representative set of users (although there
are other useful techniques - see below). The designer uses data obtained
from each evaluation to refine and form (hence the name "formative") the
next prototype. The design is retested (looping the procedure back
to phase 2), and the design further refined. In theory, testing stops when
a predetermined set of benchmarks is reached and/or when the designer is
simply satisfied. In reality, design typically ceases when time or money
run out.
4. Conversion of Prototype to Final Software
Software engineers (programmers with little or no interface design experience
in most cases) convert the prototype to a final form, usually in a faster
and more efficient programming language. It is not uncommon for the design
to change significantly in this process. Sometimes the software engineer makes what he/she considers small changes (which may, in fact, be major changes) in order to make programming easier, and sometimes the more efficient programming language simply cannot produce the design exactly.
5. Summative Usability Testing
Usability testers evaluate the final interface functionality. This may
occur in-house, or the testers may go to customers in the field. Results
are then used to improve the next software revision.
The scheme outlined here, with some variation, is standard practice among interface design and usability experts. However, this is not how all
interfaces are designed. In many companies, people with little or no design,
usability or human factors experience often create the interface. Some
companies are run by techies who simply slap untested interfaces together
as an afterthought, with software engineers performing interface "design."
Graphic artists often design interfaces, especially on web sites, although
they have no psychology or human factors training. Lastly, usability testing
is expensive, so companies may skimp on this part of the project budget.
Although the importance of human factors is becoming better understood
in the technology world, inadequate design procedures are still common.
A properly conducted design procedure generates a series of standard
documents. The most important are the "requirements document," "design
specification" and the "change control" documents. The requirements document
says what the interface ought to do. The design specification document
describes the interface in detail. It is usually written by the designer(s)
so that the people programming the production code know what the design
should be. There is often a deviation between the design and final product
for the reasons already described. Lastly, the change control document is a formal record of all changes to the design, usually those made after the design specification is written. There are almost invariably some design changes
made up until the last possible second. Usually, there are specific people
who must "sign off" on each document.
Standard Evaluation of Interfaces
Usability testing of the prototype is a critical part of design. There
are many techniques for testing, but they fall into two general classes:
- Those that involve actual users
- Those that do not involve actual users.
The first method, which has many variations, has users sitting at a computer
and working with a prototype. At early development stages, the user may
be merely exploring the software - viewing a screen, clicking buttons, entering
numbers, etc. With more refined prototypes, the user may be solving a simulated
task which resembles his/her real work. The tester may simply write qualitative
observations or may record quantitative data, such as number of errors
and time required for task completion, and perform statistical analysis.
Videotaping of users is common.
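To illustrate the quantitative side of such a test, here is a minimal sketch of how a tester might summarize completion times and error counts and compare them against a predetermined benchmark of the sort mentioned in the formative evaluation phase; the participant data and the thresholds are invented for illustration only.

    from statistics import mean, stdev

    # Hypothetical usability-session data: one record per test participant
    # (task completion time in seconds, number of errors made)
    sessions = [(148, 2), (131, 0), (202, 5), (117, 1), (165, 3)]

    times = [t for t, _ in sessions]
    errors = [e for _, e in sessions]

    print(f"Mean completion time: {mean(times):.0f} s (sd {stdev(times):.0f})")
    print(f"Mean errors per task: {mean(errors):.1f}")

    # A predetermined benchmark of the kind that might end formative testing
    # (the thresholds here are arbitrary, for illustration only)
    BENCHMARK_TIME_S = 150
    BENCHMARK_ERRORS = 1.0
    meets_benchmark = mean(times) <= BENCHMARK_TIME_S and mean(errors) <= BENCHMARK_ERRORS
    print("Benchmark met" if meets_benchmark else "Another design iteration needed")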
The second method is sometimes used to supplement, and occasionally to replace, the first when users are hard to obtain. This occurs when the users' time is very valuable (physicians, lawyers, highly skilled technical employees) or when the users are geographically remote. Moreover, it
is sometimes difficult to know who the users will be, so it may be misleading
to test with any particular group.
A common nonuser method is called "Heuristic Evaluation." A group of
3-5 usability experts and/or nonexperts judges the interface based on a
set of specific criteria. Here are some criteria that would be used to judge most interfaces:
- Simplicity: make the interface easy to use;
- Design for Error: assume that the user will make errors. Make errors easy to reverse and/or find a way to prevent them, e.g., ask for confirmation on important actions (see the sketch after this list);
- Make System State Visible: the user should be able to tell what is happening inside the computer by looking at the interface;
- Speak the User's Language: use concepts with which the user is familiar. If there are different classes of user (e.g., novices and experts), be sure that both groups understand the interface;
- Minimize Human Memory Load: human memory is fallible, and people are likely to make errors if they must remember information. Where possible, keep the critical information on the screen. Recognition and selection from a list are easier than recall from memory;
- Provide Feedback to the User: when the user takes an action, provide feedback that something happened. At the most basic level, the feedback may simply be a beep to indicate that a button press was recorded. At a higher level, the feedback may be a message that describes the consequences of the action in detail;
- Provide Good Error Messages: when errors occur, give the user useful information about the problem. Poor error messages can be disastrous, as in the Therac-25 case (see below); and
- Be Consistent: similar actions should produce similar results, and objects that look the same (colors, shapes) should be related in an important way. Conversely, different objects should be indicated by different visual appearances.
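As a concrete illustration of the "design for error" criterion above, the sketch below asks for an explicit, typed confirmation before an irreversible action and makes it easy to back out. The function names, prompts, and the record-deletion scenario are my own illustrative assumptions, not taken from any real device.

    def confirm_critical_action(description: str) -> bool:
        # Restate the consequence so the user confirms the action, not just a prompt
        answer = input("About to " + description + ". Type YES to proceed, anything else to cancel: ")
        return answer.strip() == "YES"

    def delete_patient_record(record_id: str) -> None:
        # Design for error: require an explicit, non-default confirmation
        if not confirm_critical_action("permanently delete record " + record_id):
            print("Cancelled - no changes were made.")   # easy to back out; nothing happened
            return
        print("Record " + record_id + " deleted.")       # the real deletion would happen here

    delete_patient_record("A-1027")

Requiring a typed word rather than a single keystroke also makes the confirmation harder to perform on autopilot, a point that becomes important in the Therac-25 example below.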
These same criteria can be used to judge the degree to which the interface design is responsible for an accident. From experience, I'd guess that the most common problems are lack of consistency, hiding of the system state, and failure to design for error. (Most interface designers have never heard of Failure Mode and Effects Analysis.)
Ease of Use vs. Safety: An Example of Medical Device Error
One problem in evaluating interface design is that safety and ease-of-use
sometimes conflict. Interface designers are taught to make the interface
"user friendly" and intuitive. Being intuitive, however, is a two-edged
sword. The good news is that users learn the interface quickly and make
fewer errors. In addition, an intuitive interface is more likely to be
properly operated when the user is under stress, the time when people unconsciously
fall back on their innate and highly learned behavior.
The bad news is that the very notion of "intuitive" means that the user/operator
won't have to think too much. In safety-critical situations, this is not
always desirable. People have a tendency to minimize their workload by
using more and more general cues. For example, instead of reading a red
warning label, they may learn to simply respond when they see the red text
- it is much easier and faster to recognize color than to read text. If
there is an unusual or unexpected message in red, the user will not notice
the change because the cue is color, not the actual text. Similarly, users
learn to make their responses "automatic" when they occur with great frequency.
The classic example of an ease-safety conflict is the Therac-25, which
was a computer-controlled device for delivering measured bursts of radiation
to cancer patients. Several patients being treated with the machine accidentally
received fatal doses of radiation.
There were many problems with the Therac-25 (including poor error messages
which failed to make the machine state visible), but I'll just comment
on one aspect of the interface design. In the original version of the machine,
the operator had to enter control parameters twice. First, they were typed
into the computer and sent by hitting the "enter" key. Second, the
user entered the values into a control panel. This provided redundancy
for a critical task. It seemed less likely that the operator would enter
the same wrong values twice. Moreover, the computer could check to make
sure that the values were the same.
From an ease-of-use standpoint, this was a clumsy design. The interface
designers decided to make the user's life easier by removing the need to confirm
values with the control board. As before, the user typed numbers and then
hit the enter key to send the values. Instead of going to the control board,
the values appeared again on the screen, and the user could confirm them
by hitting the "enter" key a second time. This second confirmation was
a replacement for the control board data entry. It was a much faster and more efficient interface design.
Users soon began entering the values and then simply hitting the "enter"
key twice without looking at the screen. The new system was easier, but
the redundant check on the values was gone.
This was highly predictable because of phenomena called "automaticity"
and "response chaining." When a person repeatedly performs a task requiring
a standard and unvarying series of responses, then the responses chain
together and effectively become a single response. Once started, the chain
of responses runs off automatically. For example, a pianist learning a
new piece might have to think about every note before hitting the key.
After practice, the pianist simply runs off the series of responses without
thinking. This reliance on "muscle memory" is obviously much easier, but
thinking is removed from the task. The Therac-25 case was an especially
bad example because the enter and confirmation responses were identical,
which facilitated response chaining.
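To make the difference between the two confirmation schemes concrete, here is a minimal sketch (not the actual Therac-25 software, which is not reproduced here) of why independent re-entry can catch a typing slip while a bare "press enter again" confirmation cannot; the function names and dose values are illustrative assumptions.

    def redundant_entry_ok(console_value: float, panel_value: float) -> bool:
        # Original scheme (simplified): the computer cross-checks two independently
        # entered values, so a single typing slip produces a mismatch and is caught.
        return console_value == panel_value

    def enter_to_confirm_ok(displayed_value: float, second_keypress: str) -> bool:
        # Revised scheme (simplified): the typed value is echoed on screen and the
        # user confirms with another "enter". Nothing forces the user to re-read it,
        # so a habitual, chained double-tap accepts whatever was typed.
        return second_keypress == "enter"

    intended_dose, mistyped_dose = 2.0, 20.0   # illustrative numbers only
    print(redundant_entry_ok(console_value=mistyped_dose, panel_value=intended_dose))   # False: mismatch caught
    print(enter_to_confirm_ok(displayed_value=mistyped_dose, second_keypress="enter"))  # True: error accepted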
Such errors are common. In another case, a nurse accidentally turned off the alarm on computerized equipment that was monitoring a critically ill patient. Normal operation required her to set the alarm to "on" and then to confirm the choice with several more responses on the computer keyboard. She viewed a series of computer screens containing information about the alarm system. After each, she was to press the enter key again to confirm and to see the next screen.

With experience, the responses chained and became automatic. She would set the alarm and begin merely hitting the enter key rapidly - tap, tap, tap - without really monitoring the screen information. She had done this many times before, and the screens had never revealed any important information, so she began (unconsciously) conserving attention and increasing efficiency by ignoring the "irrelevant" information. On this one occasion, she missed the screen saying that the alarm was still off. The response chain, once started, had run off without supervision.
Summary
Since "The Interface is the system", an attorney investigating any accident
involving a computer should examine the interface design. The first questions
that should always be asked about an accident involving computers are:
- Did the interface design meet the requirements of the task?;
- Did the interface meet standard evaluation criteria?;
- Was standard practice followed in the interface design?;
- Was testing adequate?;
- Were the test users appropriate?;
- Were the usability test results properly interpreted and incorporated in the design?;
- Were there "Change Control" and other formal documents on the design procedure?;
- Who designed the interface and what were his/her credentials?; and
- In a safety-critical situation, what was the tradeoff between ease-of-use and safety?
There are also secondary issues that should be examined, such as "Was user training adequate?", "If the user/operator could customize the interface, did he/she reduce interface quality?", etc.
In this article, I have outlined the issues
involved in determining responsibility in accidents involving
computers. Computers and similar devices are truly "man-machine
systems," where both components must function properly to avoid
error. It is perhaps natural to examine only the user/operator
actions, since they represent the visible "sharp end" of the system.
In many cases, however, the major fault lies with the machine and/or
the process used to develop the most important part, the user
interface.