Human Computer Interaction (CS408) - VU
Lecture 42. Communicating Users: Eliminating Errors, Positive Feedback, Notifying and Confirming
Learning Goals
The aim of this lecture is to introduce you to the study of Human Computer
Interaction, so that after studying it you will be able to:
Discuss how to eliminate error messages
Learn how to eliminate notifiers and confirmatory messages
42.1 Eliminating Errors
Bulletin dialog boxes are used for error messages, notifiers, and confirmations, three
of the most abused components of modern GUI design. With proper design, these
dialogs can all but be eliminated. In this lecture, we'll explore how and why.
Errors Are Abused
There is probably no more abused idiom in the GUI world than the error dialog. The
proposal that a program doesn't have the right -- even the duty -- to reject the user's
input is so heretical that many practitioners dismiss it summarily. Yet, if we examine
this assertion rationally and from the user's -- rather than the programmer's -- point
of view, it is not only possible, but quite reasonable.
Users never want error messages. Users want to avoid the consequences of making
errors, which is very different from saying that they want error messages. It's like
saying that people want to abstain from skiing when what they really want to do is
avoid breaking their legs. Usability guru Donald Norman (1989) points out that users
frequently blame themselves for errors in product design. Just because you aren't
getting complaints from your users doesn't mean that they are happy getting error
messages.
Why We Have So Many Error Messages
The first computers were undersized, underpowered, and expensive, and didn't lend
themselves easily to software sensitivity. The operators of these machines were white-
lab-coated scientists who were sympathetic to the needs of the CPU and weren't
offended when handed an error message. They knew how hard the computer was
working. They didn't mind getting a core dump, a bomb, an "Abort, Retry, Fail?" or
the infamous "FU" message (File Unavailable). This is how the tradition of software
treating people like CPUs began. Ever since the early days of computing,
programmers have accepted that the proper way for software to interact with humans
was to demand input and to complain when the human failed to achieve the same
perfection level as the CPU.
Examples of this approach exist wherever software demands that the user do things its
way instead of the software adapting to the needs of the human. Nowhere is it more
prevalent, though, than in the omnipresence of error messages.
What's Wrong with Error Messages
Error messages, as blocking modal bulletins, must stop the proceedings with a modal
dialog box. Most user interface designers -- being programmers -- imagine that their
error message boxes are alerting the user to serious problems. This is a widespread
misconception. Most error message boxes are informing the user of the inability of the
program to work flexibly. Most error message boxes seem to the user like an
admission of real stupidity on the program's part. In other words, to most users, error
message boxes are seen not just as the program stopping the proceedings but as doing so
in clear violation of the axiom: Don't stop the proceedings with idiocy. We can significantly
improve the quality of our interfaces by eliminating error message boxes.
People hate error messages
Humans have emotions and feelings: Computers don't. When one chunk of code
rejects the input of another, the sending code doesn't care; it doesn't scowl, get hurt, or
seek counseling. Humans, on the other hand, get angry when they are flatly told they
are idiots.
When users see an error message box, it is as if another person has told them that they
are stupid. Users hate this. Despite the inevitable user reaction, most programmers
just shrug their shoulders and put error message boxes in anyway. They don't know
how else to create reliable software.
Many programmers and user interface designers labor under the misconception that
people either like or need to be told when they are wrong. This assumption is false in
several ways. The assumption that people like to know when they are wrong ignores
human nature. Many people become very upset when they are informed of their
mistakes and would rather not know that they did something wrong. Many people
don't like to hear that they are wrong from anybody but themselves. Others are only
willing to hear it from a spouse or close friend. Very few wish to hear about it from a
machine. You may call it denial, but it is true, and users will blame the messenger
before they blame themselves.
The assumption that users need to know when they are wrong is similarly false. How
important is it for you to know that you requested an invalid type size? Most
programs can make a reasonable substitution.
We consider it very impolite to tell people when they have committed some social
faux pas. Telling someone they have a bit of lettuce sticking to their teeth or that their
fly is open is equally embarrassing for both parties. Sensitive people look for ways to
bring the problem to the attention of the victim without letting others notice. Yet
programmers assume that a big, bold box in the middle of the screen that stops all the
action and emits a bold "beep" is the appropriate way to behave.
Whose mistake is it, anyway?
Conventional wisdom says that error messages tell the user when he has made some
mistake. Actually, most error bulletins report to the user when the program gets
confused. Users make far fewer substantive mistakes than imagined. Typical "errors"
consist of the user inadvertently entering an out-of-bounds number, or entering a
space where the computer doesn't allow it. When the user enters something
unintelligible by the computer's standards, whose fault is it? Is it the user's fault for
not knowing how to use the program properly, or is it the fault of the program for not
making the choices and effects clearer?
Information that is entered in an unfamiliar sequence is usually considered an error by
software, but people don't have this difficulty with unfamiliar sequences. Humans
know how to wait, to bide their time until the story is complete. Software usually
jumps to the erroneous conclusion that out-of-sequence input means wrong input and
issues the evil error message box.
When, for example, the user creates an invoice for an invalid customer number, most
programs reject the entry. They stop the proceedings with the idiocy that the user
must make the customer number valid right now. Alternatively, the program could
accept the transaction with the expectation that a valid customer number will
eventually be entered. It could, for example, make a special notation to itself
indicating what it lacks. The program then watches to make sure the user enters the
necessary information to make that customer number valid before the end of the
session, or even the end of the month book closing. This is the way most humans
work. They don't usually enter "bad" codes. Rather, they enter codes in a sequence
that the software isn't prepared to accept.
If the human forgets to fully explain things to the computer, it can, after some
reasonable delay, provide more insistent signals to the user. At day's or week's end the
program can move irreconcilable transactions into a suspense account. The program
doesn't have to bring the proceedings to a halt with an error message. After all, the
program will remember the transactions so they can be tracked down and fixed. This
is the way it worked in manual systems, so why can't computerized systems do at least
this much? Why stop the entire process just because something is missing? As long as
the user remains well informed throughout that some accounts still need tidying, there
shouldn't be a problem. The trick is to inform without stopping the proceedings.
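As a rough sketch of this "accept now, reconcile later" idea, the Python below (all names, such as Invoice and Ledger, are hypothetical and not from the lecture) records a transaction even when the customer code is not yet recognized, quietly flags it, and only surfaces unresolved entries when the books are closed:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Invoice:
    number: str
    customer_code: str
    amount: float
    needs_review: bool = False   # an internal note, never an error dialog

class Ledger:
    def __init__(self, known_customers):
        self.known_customers = set(known_customers)
        self.invoices = []

    def post(self, invoice: Invoice):
        # Accept the entry either way; just remember what is still missing.
        if invoice.customer_code not in self.known_customers:
            invoice.needs_review = True
        self.invoices.append(invoice)

    def close_period(self):
        # At day's or month's end, gather irreconcilable items into a suspense
        # list instead of having stopped the proceedings with an error box.
        return [inv for inv in self.invoices if inv.needs_review]

ledger = Ledger(known_customers={"C-100", "C-200"})
ledger.post(Invoice("INV-1", "C-999", 250.0))   # unknown code: accepted, flagged
print([inv.number for inv in ledger.close_period()])
print(date.today())   # the period-close date comes from the clock, not the user
```

The user's work is never interrupted; the flagged invoices are simply reported for tidying later.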
If the program were a human assistant and it staged a sit-down strike in the middle of
the accounting department because we handed it an incomplete form, we'd be pretty
upset. If we were the bosses, we'd consider finding a replacement for this anal-
retentive, petty, sanctimonious clerk. Just take the form, we'd say, and figure out the
missing information. Many of us have used Rolodex programs that demand you enter
an area code with a phone number even though the person's address has already been
entered. It doesn't take a lot of intelligence to make a reasonable guess at the area
code. If you enter a new name with an address in Menlo Park, the program can
reliably assume that their area code is 650 by looking at the other 25 people in your
database who also live in Menlo Park and have 650 as their area code. Sure, if you
enter a new address for, say, Boise, Idaho, the program might be stumped. But how
tough is it to access a directory on the Web, or even keep a list of the 1,000 biggest
cities in America along with their area codes?
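A minimal sketch of that guess, assuming a simple contact list of dictionaries (the function name guess_area_code and the tiny fallback table are illustrative, not part of any real Rolodex program):

```python
from collections import Counter

# Tiny fallback table; a real application might consult a web directory instead.
CITY_AREA_CODES = {"Menlo Park": "650", "Boise": "208"}

def guess_area_code(city, contacts):
    """Return the most common area code among existing contacts in this city,
    or a directory lookup, or None -- never an error dialog."""
    codes = Counter(c["area_code"] for c in contacts
                    if c["city"] == city and c.get("area_code"))
    if codes:
        return codes.most_common(1)[0][0]
    return CITY_AREA_CODES.get(city)   # may still be None; that is acceptable

contacts = [{"city": "Menlo Park", "area_code": "650"} for _ in range(25)]
print(guess_area_code("Menlo Park", contacts))   # -> "650"
```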
Programmers may now protest: "The program might be wrong. It can't be sure. Some
cities have more than one area code. It can't make that assumption without approval of
the user!" Not so.
If we asked a human assistant to enter a client's phone contact information into our
Rolodex, and neglected to mention the area code, he would accept it anyway,
expecting that the area code would arrive before its absence was critical.
Alternatively, he could look the address up in a directory. Let's say that the client is in
Los Angeles so the directory is ambiguous: The area code could be either 213 or 310.
If our human assistant rushed into the office in a panic shouting "Stop what you're
doing! This client's area code is ambiguous!" we'd be sorely tempted to fire him and
hire somebody with a greater-than-room-temperature IQ. Why should software be any
different? A human might write 213/310? into the area code field in this case. The
next time
we call that client, we'll have to determine which area code is correct, but in the
meantime, life can go on.
Again, squeals of protest: "But the area code field is only big enough for three digits!
I can't fit 213/310? into it!" Gee, that's too bad. You mean that rendering the user
interface of your program in terms of the underlying implementation model -- a
rigidly fixed field width -- forces you to reject natural human behavior in favor of
obnoxious, computer-like inflexibility supplemented with demeaning error messages?
Not to put too fine a point on this, but error message boxes come from a failure of the
program to behave reasonably, not from any failure of the user.
This example illustrates another important observation about user interface design. It
is not only skin deep. Problems that aren't solved in the design are pushed through the
system until they fall into the lap of the user. There are a variety of ways to handle the
exceptional situations that arise in interaction with software -- and a creative designer
or programmer can probably think of a half-dozen or so off the top of her head -- but
most programmers just don't try. They are compromised by their schedule and their
preferences, so they tend to envision the world in the terms of perfect CPU behavior
rather than in the terms of imperfect human behavior.
Error messages don't work
There is a final irony to error messages: They don't prevent the user from making
errors. We imagine that the user is staying out of trouble because our trusty error
messages keep them straight, but this is a delusion. What error messages really do is
prevent the program from getting into trouble. In most software, the error messages
stand like sentries where the program is most sensitive, not where the user is most
vulnerable, setting into concrete the idea that the program is more important than the
user. Users get into plenty of trouble with our software, regardless of the quantity or
quality of the error messages in it. All an error message can do is keep me from
entering letters in a numeric field -- it does nothing to protect me from entering the
wrong numbers -- which is a much more difficult design task.
Eliminating Error Messages
We can't eliminate error messages by simply discarding the code that shows the actual
error message dialog box and letting the program crash if a problem arises. Instead,
we need to rewrite the programs so they are no longer susceptible to the problem. We
must replace the error message with kinder, gentler, more robust software that
prevents error conditions from arising, rather than having the program merely
complain when things aren't going precisely the way it wants. Like vaccinating it
against a disease, we make the program immune to the problem, and then we can toss
the message that reports it. To eliminate the error message, we must first eliminate the
possibility of the user making the error. Instead of assuming error messages are
normal, we need to think of them as abnormal solutions to rare problems -- as
surgery instead of aspirin. We need to treat them as an idiom of last resort.
Every good programmer knows that if module A hands invalid data to module B,
module B should clearly and immediately reject the input with a suitable error
indicator. Not doing this would be a great failure in the design of the interface
between the modules. But human users are not modules of code. Not only should
software not reject the input with an
error message, but the software designer must also reevaluate the entire concept of
what "invalid data" is. When it comes from a human, the software must assume that
the input is correct, simply because the human is more important than the code.
Instead of software rejecting input, it must work harder to understand and reconcile
confusing input. The program may understand the state of things inside the computer,
but only the user understands the state of things in the real world. Ultimately, the real
world is more relevant and important than what the computer thinks.
Making errors impossible
Making it impossible for the user to make errors is the best way to eliminate error
messages. By using bounded gizmos for all data entry, users are prevented from ever
being able to enter bad numbers. Instead of forcing a user to key in his selection,
present him with a list of possible selections from which to choose. Instead of making
the user type in a state code, for example, let him choose from a list of valid state
codes or even from a picture of a map. In other words, make it impossible for the user
to enter a bad state.
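A minimal sketch of a bounded gizmo, using standard Tkinter/ttk widgets (the abbreviated state list is an assumption for illustration): because the combo box is read-only, an invalid state code simply cannot be entered, so there is nothing to validate and no error message to show.

```python
import tkinter as tk
from tkinter import ttk

VALID_STATES = ["CA", "ID", "NY", "TX", "WA"]   # abbreviated list for the sketch

root = tk.Tk()
root.title("Bounded entry: no bad state codes possible")

# state="readonly" means the user can only pick from the list, never type freely.
state_box = ttk.Combobox(root, values=VALID_STATES, state="readonly")
state_box.current(0)
state_box.pack(padx=20, pady=20)

root.mainloop()
```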
Another excellent way to eliminate error messages is to make the program smart
enough that it no longer needs to make unnecessary demands. Many error messages
say things like "Invalid input. User must type xxxx." Why can't the program, if it
knows what the user must type, just enter xxxx by itself and save the user the tongue-
lashing? Instead of demanding that the user find a file on a disk, introducing the
chance that the user will select the wrong file, have the program remember which files
it has accessed in the past and allow a selection from that list. Another example is
designing a system that gets the date from the internal clock instead of asking for
input from the user.
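A sketch of the "be smart enough not to ask" idea, with hypothetical helper names: a small most-recently-used list the program can offer instead of demanding a path, and a document date defaulted from the system clock.

```python
from datetime import date
from pathlib import Path

class RecentFiles:
    """Keep a short most-recently-used list so the program can offer choices
    instead of demanding that the user hunt down a file on disk."""
    def __init__(self, limit=10):
        self.paths = []
        self.limit = limit

    def remember(self, path):
        p = Path(path)
        if p in self.paths:
            self.paths.remove(p)
        self.paths.insert(0, p)
        del self.paths[self.limit:]          # keep only the most recent few

    def suggestions(self):
        return [p for p in self.paths if p.exists()]

def default_document_date():
    # Supply today's date from the internal clock rather than asking for it.
    return date.today()
```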
Undoubtedly, all these solutions will cause more work for programmers. However, it
is the programmer's job to satisfy the user and not vice versa. If the programmer
thinks of the user as just another input device, it is easy to forget the proper pecking
order in the world of software design.
Users of computers aren't sympathetic to the difficulties faced by programmers. They
don't see the technical rationale behind an error message box. All they see is the
unwillingness of the program to deal with things in a human way.
One of the problems with error messages is that they are usually post facto reports of
failure. They say, "Bad things just happened, and all you can do is acknowledge the
catastrophe." Such reports are not helpful. And these dialog boxes always come with
an OK button, requiring the user to be an accessory to the crime. These error message
boxes are reminiscent of the scene in old war movies where an ill-fated soldier steps
on a landmine while advancing across the rice paddy. He and his buddies clearly hear
the click of the mine's triggering mechanism and the realization comes over the
soldier that although he's safe now, as soon as he removes his foot from the mine, it
will explode, taking some large and useful part of his body with it. Users get this
feeling when they see most error message boxes, and they wish they were thousands
of miles away, back in the real world.
42.2 Positive feedback
One of the reasons why software is so hard to learn is that it so rarely gives positive
feedback. People learn better from positive feedback than they do from negative
feedback. People want to use their software correctly and effectively, and they are
motivated to learn how to make the software work for them. They don't need to be
slapped on the wrist when they fail. They do need to be rewarded, or at least
acknowledged, when they succeed. They will feel better about themselves if they get
approval, and that good feeling will be reflected back to the product.
Advocates of negative feedback can cite numerous examples of its effectiveness in
guiding people's behavior. This evidence is true, but almost universally, the context of
effective punitive feedback is getting people to refrain from doing things they want to
do but shouldn't: things like driving over 55 mph, cheating on their spouses, and
fudging their income taxes. But when it comes to helping people do what they
want to do, positive feedback is best. Imagine a hired ski instructor who yells at you,
or a restaurant host who loudly announces to other patrons that your credit card was
rejected.
Keep in mind that we are talking about the drawbacks of negative feedback from a
computer. Negative feedback by another person, although unpleasant, can be justified
in certain circumstances. One can say that the drill sergeant is at least training you in
how to save your life in combat, and the imperious professor is at least preparing you
for the vicissitudes of the real world. But to be given negative feedback by software
-- any software -- is an insult. The drill sergeant and professor are at least human
and have bona fide experience and merit. But to be told by software that you have
failed is humiliating and degrading. Users, quite justifiably, hate to be humiliated and
degraded. There is nothing that takes place inside a computer that is so important that
it can justify humiliating or degrading a human user. We only resort to negative
feedback out of habit.
Improving Error Messages: The Last Resort
Now we will discuss some methods of improving the quality of error message boxes,
if indeed we are stuck using them. Use these recommendations only as a last resort,
when you run out of other options.
A well-formed error message box should conform to these requirements:
Be polite
Be illuminating
Be helpful
Never forget that an error message box is the program reporting on its failure to do its
job, and it is interrupting the user to do this. The error message box must be
unfailingly polite. It must never even hint that the user caused this problem, because
that is simply not true from the user's perspective. The customer is always right.
The user may indeed have entered some goofy data, but the program is in no position
to argue and blame. It should do its best to deliver to the user what he asked for, no
matter how silly. Above all, the program must not, when the user finally discovers his
silliness, say, in effect, "Well, you did something really stupid, and now you can't
recover. Too bad." It is the program's responsibility to protect the user even when he
takes inappropriate action. This may seem draconian, but it certainly isn't the user's
responsibility to protect the computer from taking inappropriate action.
The error message box must illuminate the problem for the user. This means that it
must give him the kind of information he needs to make an appropriate determination
to solve the program's problem. It needs to make clear the scope of the problem, what
the alternatives are, what the program will do as a default, and what information was
lost, if any. The program should treat this as a confession, telling the user everything.
It is wrong, however, for the program to just dump the problem on the user's lap and
wipe its hands of the matter. It should directly offer to implement at least one
suggested solution right there on the error message box. It should offer buttons that
will take care of the problem in various ways. If a printer is missing, the message box
should offer options for deferring the printout or selecting another printer. If the
database is hopelessly trashed and useless, it should offer to rebuild it to a working
state, including telling the user how long that process will take and what side effects it
will cause.
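One hedged way to picture this is a "problem report" that carries its own recovery actions, so the dialog can present Defer printout / Choose another printer buttons rather than a bare OK. The names below (Problem, RecoveryAction, printer_missing_problem) are illustrative only.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RecoveryAction:
    label: str                      # becomes a button caption, e.g. "Defer printout"
    perform: Callable[[], None]

@dataclass
class Problem:
    summary: str                    # polite, blame-free description of what happened
    details: str                    # scope of the problem and what was (not) lost
    actions: List[RecoveryAction]   # at least one concrete way forward

def printer_missing_problem(defer, choose_other):
    return Problem(
        summary="The document could not be printed right now.",
        details="The selected printer is not responding. Nothing has been lost.",
        actions=[RecoveryAction("Defer printout", defer),
                 RecoveryAction("Choose another printer", choose_other)],
    )
```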
The figure below shows an example of a reasonable error message. Notice that it is polite,
illuminating, and helpful. It doesn't even hint that the user's behavior is anything but
impeccable.
42.3 Notifying and Confirming
Now, we discuss alert dialogs (also known as notifiers) and confirmation dialogs, as
well as the structure of these interactions, the underlying assumptions about them, and
how they, too, can be eliminated in most cases.
42.4 Alerts and Confirmations
Like error dialogs, alerts and confirmations stop the proceedings with idiocy, but they
do not report malfunctions. An alert notifies the user of the program's action, whereas
a confirmation also gives the user the authority to override that action. These dialogs
pop up like weeds in most programs and should, much like error dialogs, be
eliminated in favor of more useful idioms.
Alerts: Announcing the obvious
When a program exercises authority that it feels uncomfortable with, it takes steps to
inform the user of its actions. This is called an alert. Alerts violate the axiom: A
dialog box is another room; you should have a good reason to go. Even if an alert is
justified (it seldom is), why go into another room to do it? If the program took some
indefensible action, it should confess to it in the same place where the action occurred
and not in a separate dialog box.
Conceptually, a program should either have the courage of its convictions or it should
not take action without the user's direct guidance. If the program, for example, saves
the user's file to disk automatically, it should have the confidence to know that it is
doing the right thing. It should provide a means for the user to find out what the
program did, but it doesn't have to stop the proceedings with idiocy to do so. If the
program really isn't sure that it should save the file, it shouldn't save the file, but
should leave that operation up to the user.
Conversely, if the user directs the program to do something -- dragging a file to the
trash can, for example -- it doesn't need to stop the proceedings with idiocy to
announce that the user just dragged a file to the trashcan. The program should ensure
that there is adequate visual feedback regarding the action; and if the user has actually
made the gesture in error, the program should silently offer him a robust Undo facility
so he can backtrack.
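A minimal sketch of "act silently, confess in place, and keep Undo ready" (the class and status-line names are assumptions): the action is performed without any dialog, reflected in a modeless status string, and pushed onto an undo stack so an erroneous gesture can be backtracked.

```python
class TrashWithUndo:
    """Move items to the trash silently, keep a status line up to date,
    and let the user backtrack instead of confirming or alerting."""
    def __init__(self):
        self.trash = []
        self.undo_stack = []
        self.status = ""          # shown modelessly in the main window

    def move_to_trash(self, item):
        self.trash.append(item)
        self.undo_stack.append(item)
        self.status = f"Moved '{item}' to the trash (Ctrl+Z to undo)"

    def undo(self):
        if self.undo_stack:
            item = self.undo_stack.pop()
            self.trash.remove(item)
            self.status = f"Restored '{item}'"

t = TrashWithUndo()
t.move_to_trash("report.doc")    # no dialog; just feedback in the status line
t.undo()
```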
The rationale for alerts is that they inform the user. This is a desirable objective, but
not at the expense of smooth interaction flow.
Alerts are so numerous because they are so easy to create. Most languages offer some
form of message box facility in a single line of code. Conversely, building an
animated status display into the face of a program might require a thousand or more
lines of code. Programmers cannot be expected to make the right choice in this
situation. They have a conflict of interest, so designers must be sure to specify
precisely where information is reported on the surface of an application. The designers
must then follow up to be sure that the design wasn't compromised for the sake of
rapid coding. Imagine if the contractor on
a building site decided unilaterally not to add a bathroom because it was just too much
trouble to deal with the plumbing. There would be consequences.
Software needs to keep the user informed of its actions. It should have visual
indicators built into its main screen to make such status information available to the
user, should he desire it. Launching an alert to announce an unrequested action is bad
enough. Putting up an alert to announce a requested action is pathological.
Software needs to be flexible and forgiving, but it doesn't need to be fawning and
obsequious. The dialog box shown in Figure below is a classic example of an alert
that should be put out of our misery. It announces that it added the entry to our phone
book. This occurs immediately after we told it to add the entry to our phone book,
which happened milliseconds after we physically added the entry to what appears to
be our phone book. It stops the proceedings to announce the obvious.
It's as though the program wants approval for how hard it worked: "See, dear, I've
cleaned your room for you. Don't you love me?" If a person interacted with us like
this, we'd suggest that they seek counseling.
Confirmations
When a program does not feel confident about its actions, it often asks the user for
approval with a dialog box. This is called a confirmation, like the one shown in Figure
below. Sometimes the confirmation is offered because the program second-guesses
one of the user's actions. Sometimes the program feels that it is not competent to make a
decision it faces and uses a confirmation to give the user the choice instead.
Confirmations always come from the program and never from the user. This means
that they are a reflection of the implementation model and are not representative of
the user's goals.
Remember, revealing the implementation model to users is a sure-fire way to create
an inferior user interface. This means that confirmation messages are inappropriate.
Confirmations get written into software when the programmer arrives at an impasse in
his coding. Typically, he realizes that he is about to direct the program to take some
bold action and feels unsure about taking responsibility for it. Sometimes the bold
action is based on some condition the program detects, but more often it is based on
a command the user issues. Typically, a confirmation will be launched after the user
issues a command that is either irrecoverable or whose results might cause undue alarm.
Confirmations pass the buck to the user. The user trusts the program to do its job, and
the program should both do it and ensure that it does it right. The proper solution is to
make the action easily reversible and provide enough modeless feedback so that the
user is not taken off-guard.
As a program's code grows during development, programmers detect numerous
situations where they don't feel that they can resolve issues adequately. Programmers
will unilaterally insert buck-passing code in these places, almost without noticing it.
This tendency needs to be closely watched, because programmers have been known to
insert dialog boxes into the code even after the user interface specification has been
agreed upon. Programmers often don't consider confirmation dialogs to be part of the
user interface, but they are.
THE DIALOG THAT CRIED, "WOLF!"
Confirmations illustrate an interesting quirk of human behavior: They only work
when they are unexpected. That doesn't sound remarkable until you examine it in
context. If confirmations are offered in routine places, the user quickly becomes
inured to them and routinely dismisses them without a glance. The dismissing of
confirmations thus becomes as routine as the issuing of them. If, at some point, a truly
unexpected and dangerous situation arises -- one that should be brought to the user's
attention -- he will, by rote, dismiss the confirmation, exactly because it has become
routine. Like the fable of the boy who cried, "Wolf," when there is finally real danger,
the confirmation box won't work because it cried too many times when there was no
danger.
For confirmation dialog boxes to work, they must only appear when the user will
almost definitely click the No or Cancel button, and they should never appear when
the user is likely to click the Yes or OK button. Seen from this perspective, they look
rather pointless, don't they?
The confirmation dialog box shown in Figure below is a classic. The irony of the
confirmation dialog box in the figure is that it is hard to determine which styles to
delete and which to keep. If the confirmation box appeared whenever we attempted to
delete a style that was currently in use, it would at least then be helpful because the
confirmation would be less routine. But why not instead put an icon next to the names
of styles that are in use and dispense with the confirmation? The interface then
provides more pertinent status information, so one can make a more informed
decision about what to delete.
42.5 Eliminating Confirmations
Three axioms tell us how to eliminate confirmation dialog boxes. The best way is to
obey the simple dictum: Do, don't ask. When you design your software, go ahead and
give it the force of its convictions (backed up by user research). Users will respect its
brevity and its confidence.
Of course, if the program confidently does something that the user doesn't like, it must
have the capability to reverse the operation. Every aspect of the program's action must
be undoable. Instead of asking in advance with a confirmation dialog box, on those
rare occasions when the program's actions were out of turn, let the user issue the Stop-
and-Undo command.
Most situations that we currently consider unprotectable by Undo can actually be
protected fairly well. Deleting or overwriting a file is a good example. The file can be
moved to a suspense directory where it is kept for a month or so before it is physically
deleted. The Recycle Bin in Windows uses this strategy, except for the part about
automatically erasing files after a month: Users still have to manually take out the
garbage.
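A sketch of that suspense-directory strategy, including the automatic purge the Recycle Bin lacks (the directory path and the 30-day window are assumptions for illustration): "deleted" files are merely moved aside, so the delete is undoable, and only files older than the retention window are physically removed.

```python
import shutil
import time
from pathlib import Path

SUSPENSE_DIR = Path.home() / ".suspense"    # hypothetical holding area
RETENTION_SECONDS = 30 * 24 * 3600          # roughly a month

def soft_delete(path):
    """Move the file aside instead of destroying it, so the action is reversible."""
    SUSPENSE_DIR.mkdir(exist_ok=True)
    shutil.move(str(path), SUSPENSE_DIR / Path(path).name)

def purge_old_files(now=None):
    """Take out the garbage automatically once files have aged past the window."""
    now = now or time.time()
    if not SUSPENSE_DIR.exists():
        return
    for item in SUSPENSE_DIR.iterdir():
        if now - item.stat().st_mtime > RETENTION_SECONDS:
            item.unlink()
```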
Even better than acting in haste and forcing the user to rescue the program with Undo,
you can make sure that the program offers the user adequate information so that
he never purposely issues a command that leads to an inappropriate action (or never
omits a necessary command). The program should use sufficiently rich visual
feedback so that the user is constantly kept informed, the same way the instruments
on dashboards keep us informed of the state of our cars.
Occasionally, a situation arises that really can't be protected by Undo. Is this a
legitimate case for a confirmation dialog box? Not necessarily. A better approach is to
provide users with protection the way we give them protection on the freeway: with
consistent and clear markings. You can often build excellent, modeless warnings right
into the interface. For instance, look at the dialog from Adobe Photoshop in Figure
below, telling us that our document is larger than the available print area. Why has the
program waited until now to inform us of this fact? What if guides were visible on the
page at all times (unless the user hid them) showing the actual printable region? What
if those parts of the picture outside the printable area were highlighted when the user
moused over the Print button in the toolbar? Clear, modeless feedback is the best way
to address these problems.
Much more common than honestly irreversible actions are those actions that are easily
reversible but still uselessly protected by routine confirmation boxes. There is no
reason whatsoever to ask for confirmation of a move to the Recycle Bin. The sole
reason that the Recycle Bin exists is to implement an undo facility for deleted files.
42.6 Replacing Dialogs: Rich Modeless Feedback
Most computers now in use in both the home and the office come with high-
resolution displays and high-quality audio systems. Yet, very few programs (outside
of games) even scratch the surface of using these facilities to provide useful
information to the user about the status of the program, the users' tasks, and the
system and its peripherals in general. It is as if an entire toolbox is available to
express information to users, but programmers have stuck to using the same blunt
instrument -- the dialog -- to communicate information. Needless to say, this means
that subtle status information is simply never communicated to users at all, because
even the most clueless designers know that you don't want dialogs to pop up
constantly. But constant feedback is exactly what users need. It's simply the channel
of communication that needs to be different.
In this section, we'll discuss rich modeless feedback: information that can be provided
to the user in the main displays of your application, which doesn't stop the flow of the
program or the user and which can all but eliminate pesky dialogs.
42.7 Rich visual modeless feedback
Perhaps the most important type of modeless feedback is rich visual modeless
feedback (RVMF). This type of feedback is rich in terms of giving in-depth
information about the status or attributes of a process or object in the current
application. It is visual in that it makes idiomatic use of pixels on the screen (often
dynamically), and it is modeless in that this information is always readily displayed,
requiring no special action or mode shift on the part of the user to view and make
sense of the feedback.
For example, in Windows 2000 or XP, clicking on an object in a file manager window
automatically causes details about that object to be displayed on the left-hand side of
the file manager window. (In XP, Microsoft ruined this slightly by putting the
information at the bottom of a variety of other commands and links. Also, by default,
they made the Details area a drawer that you must open, although the program, at
least, remembers its state.) Information includes title, type of document, its size,
author, date of modification, and even a thumbnail or miniplayer if it is an image or
media object. If the object is a disk, it shows a pie chart and legend depicting how
much space is used on the disk. Very handy indeed! This interaction is perhaps
slightly modal because it requires selection of the object, but the user needs to select
objects anyway. This functionality handily eliminates the need for a properties dialog
to display this information. Although most of this information is text, it still fits within
the idiom.
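A small Tkinter sketch of the same idiom, assuming a desktop Python environment (the layout and labels are illustrative, not Microsoft's): selecting a file in a list modelessly refreshes a details pane in the main window, so no properties dialog is needed.

```python
import tkinter as tk
from pathlib import Path
from datetime import datetime

root = tk.Tk()
root.title("Rich visual modeless feedback")

files = sorted(Path(".").glob("*"))
listbox = tk.Listbox(root, width=40)
for f in files:
    listbox.insert(tk.END, f.name)
listbox.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

details = tk.Label(root, justify=tk.LEFT, anchor="nw", width=40)
details.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

def show_details(event):
    # Selection refreshes the details pane in place; no dialog box appears.
    selection = listbox.curselection()
    if not selection:
        return
    f = files[int(selection[0])]
    stat = f.stat()
    details.config(text=(f"Name: {f.name}\n"
                         f"Size: {stat.st_size} bytes\n"
                         f"Modified: {datetime.fromtimestamp(stat.st_mtime):%Y-%m-%d %H:%M}"))

listbox.bind("<<ListboxSelect>>", show_details)
root.mainloop()
```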