Lecture 15. Interaction Paradigms
Learning Goals
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so that after studying this you will be able to:
· Describe WIMP interfaces in detail
· Discuss different interaction paradigms
We briefly discussed WIMP interfaces in the last lecture. Today we will discuss WIMP interfaces in detail.
15.1 The WIMP Interfaces
In our last lecture we already discussed the four key features of the WIMP interface that give it its name: windows, icons, pointers and menus. Today we will discuss these in greater detail. There are also many additional interaction objects and techniques commonly used in WIMP interfaces, some designed for specific purposes and others more general. Our discussion will cover toolbars, menus, buttons, palettes and dialog boxes.

Together, these elements of the WIMP interface are called widgets, and they comprise the toolkit for interaction between user and system.
Windows
Windows are areas of the screen that behave as if they were independent terminals in their own right. A window can usually contain text or graphics, and can be moved or resized. More than one window can be on a screen at once, allowing separate tasks to be visible at the same time. Users can direct their attention to the different windows as they switch from one thread of work to another.
If one window overlaps the other, the back window is partially obscured, and then refreshed when exposed again. Overlapping windows can cause problems by obscuring vital information, so windows may also be tiled, when they adjoin but do not overlap each other. Alternatively, windows may be placed in a cascading fashion, where each new window is placed slightly to the left and below the previous window. In some systems this layout policy is fixed, in others the user can select it.
Usually windows have various things associated with them that increase their usefulness. Scrollbars are one such attachment, allowing the user to move the contents of the window up and down, or from side to side. This makes the window behave as if it were a real window onto a much larger world, where new information is brought into view by manipulating the scrollbars.
There is usually a title bar attached to the top of a window, identifying it to the user, and there may be special boxes in the corners of the window to aid resizing, closing, or making it as large as possible. Each of these can be seen in the figure.
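To make this concrete, here is a minimal sketch using Python's tkinter toolkit (any widget toolkit would do; the API names are tkinter's and the window title is invented): a movable, resizable window with a title bar and a vertical scrollbar that brings more of a large document into view.

    import tkinter as tk

    root = tk.Tk()
    root.title("Report - Editor")   # the title bar identifies the window
    root.geometry("400x300")        # initial size; the user can move or resize it

    scrollbar = tk.Scrollbar(root)
    scrollbar.pack(side=tk.RIGHT, fill=tk.Y)

    # The Text widget acts as a window onto a much larger document;
    # the scrollbar brings new content into view.
    text = tk.Text(root, yscrollcommand=scrollbar.set)
    for i in range(200):
        text.insert(tk.END, f"line {i}\n")
    text.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
    scrollbar.config(command=text.yview)

    root.mainloop()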
In addition, some systems allow windows within windows. For example, in Microsoft Office applications, such as Excel and Word, each application has its own window and then within this each document has a window. It is often possible to have different layout policies within the different application windows.
Icons
Windows can be closed and lost forever, or they can be shrunk to some very reduced representation. A small picture is used to represent a closed window, and this representation is known as an icon. By allowing icons, many windows can be available on the screen at the same time, ready to be expanded to their full size by clicking on the icon. Shrinking a window to its icon is known as iconifying the window. When a user temporarily does not want to follow a particular thread of dialog, he can suspend that dialog by iconifying the window containing the dialog. The icon saves space on the screen and serves as a reminder to the user that he can subsequently resume the dialog by opening up the window. The figure shows a few examples of icons used in a typical windowing system (Microsoft).
Icons can also be used to represent other aspects of the system, such as a wastebasket for throwing unwanted files into, or various disks, programs or functions that are accessible to the user. Icons can take many forms: they can be realistic representations of the objects that they stand for, or they can be highly stylized. They can even be arbitrary symbols, but these can be difficult for users to interpret.
Pointers
The pointer is an important component of the WIMP interface, since the interaction style required by WIMP relies very much on pointing and selecting things such as icons. The mouse provides an input device capable of such tasks, although joysticks and trackballs are alternatives. The user is presented with a cursor on the screen that is controlled by the input device. A variety of pointer cursors is shown in the figure.
Different shapes of cursor are often used to distinguish modes; for example, the normal pointer cursor may be an arrow, but change to cross-hairs when drawing a line. Cursors are also used to tell the user about system activity; for example, a watch or hourglass cursor may be displayed when the system is busy reading a file.
Pointer cursors are like icons, being small bitmap images, but in addition all cursors have a hot-spot, the location to which they point.
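As an illustration of cursors signaling mode and system activity, the following tkinter sketch (cursor names are standard Tk cursors; the two-second delay merely simulates a busy system) switches between a cross-hair drawing cursor and a watch cursor:

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200, cursor="crosshair")  # drawing mode
    canvas.pack()

    def long_task():
        root.config(cursor="watch")                        # busy: watch/hourglass
        root.after(2000, lambda: root.config(cursor=""))   # restore the default arrow

    tk.Button(root, text="Do work", command=long_task).pack()
    root.mainloop()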
Menus
The last main feature of the windowing system is the menu, an interaction technique that is common across many non-windowing systems as well. A menu presents a choice of operations or services that can be performed by the system at a given time. As we discussed, our ability to recall information is inferior to our ability to recognize it from some visual cue. Menus provide information cues in the form of an ordered list of operations that can be scanned. This implies that the names used for the commands in the menu should be meaningful and informative.
The pointing device is used to indicate the desired option. As the pointer moves to the position of a menu item, the item is usually highlighted to indicate that it is the potential candidate for selection. Selection usually requires some additional user action, such as pressing a button on the mouse that controls the pointer cursor on the screen or pressing some special key on the keyboard.
Menus are inefficient when they have too many items, and so cascading menus are utilized, in which item selection opens up another menu adjacent to the item, allowing refinement of the selection. Several layers of cascading menus can be used.
The main menu can be visible to the user all the time, as a menu bar, and submenus can be pulled down or across from it upon request. Menu bars are often placed at the top of the screen or at the top of each window. Alternatives include menu bars along one side of the screen, or even placed amongst the windows in the main `desktop' area. Websites use a variety of menu bar locations, including the top, bottom and either side of the screen. Alternatively, the main menu can be hidden and will pop up onto the screen upon request. These pop-up menus are often used to present context-sensitive options, for example allowing one to examine the properties of particular on-screen objects. In some systems they are also used to access more global actions when the mouse is depressed over the screen background.
Pull-down menus are dragged down from the title at the top of the screen, by moving the mouse pointer into the title bar area and pressing the button. Fall-down menus are similar, except that the menu automatically appears when the mouse pointer enters the title bar, without the user having to press the button. Some menus remain on screen until explicitly asked to go away. Pop-up menus appear when a particular region of the screen, perhaps designated by an icon, is selected, but they only stay as long as the mouse button is depressed.
Another approach to menu selection is to arrange the options in a circular fashion. The pointer appears in the center of the circle, so there is the same distance to travel to any of the selections. This has the advantages that it is easier to select items, since they can each have a larger target area, and that the selection time for each item is the same, since the pointer is equidistant from them all. However, these pie menus take up more screen space and are therefore less common in interfaces.
The major problems with menus in general are deciding what items to include and how to group those items. Including too many items makes menus too long or creates too many of them, whereas grouping causes problems in that items that relate to the same topic need to come under the same heading, yet many items could be grouped under more than one heading. In pull-down menus the menu label should be chosen to reflect the function of the menu items, and items should be grouped within menus by function. These groupings should be consistent across applications so that the user can transfer learning to new applications. Menu items should be ordered in the menu according to importance and frequency of use, and opposite functionalities should be kept apart to prevent accidental selection of the wrong function, with potentially disastrous consequences.
Keyboard accelerators
Menus often offer keyboard accelerators, key combinations that have the same effect as selecting the menu item. This allows more expert users, familiar with the system, to manipulate things without moving off the keyboard, which is often faster. The accelerators are often displayed alongside the menu item so that frequent use makes them familiar.
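A small tkinter sketch (the Ctrl+S binding and menu labels are illustrative) showing a menu bar, a cascading submenu, and a keyboard accelerator wired to the same command as its menu item:

    import tkinter as tk

    def save(event=None):
        print("saved")

    root = tk.Tk()
    menubar = tk.Menu(root)

    filemenu = tk.Menu(menubar, tearoff=False)
    filemenu.add_command(label="Save", command=save, accelerator="Ctrl+S")

    export = tk.Menu(filemenu, tearoff=False)          # a cascading submenu
    export.add_command(label="As PDF...", command=lambda: print("pdf"))
    filemenu.add_cascade(label="Export", menu=export)

    menubar.add_cascade(label="File", menu=filemenu)
    root.config(menu=menubar)

    # The accelerator text is only a visual reminder; the key binding is separate.
    root.bind("<Control-s>", save)
    root.mainloop()

Note that creating the menus with tearoff=True instead would let the user detach them as floating palettes, an idea we return to under Palettes below.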
Buttons
Buttons are individual and isolated regions within a display that can be selected by the user to invoke specific operations.
These regions are referred to as buttons because they are purposely made to resemble the push buttons you would find on a control panel. `Pushing' the button invokes a command, the meaning of which is usually indicated by a textual label or a small icon.
Radio buttons
Buttons can also be used to toggle between two states, displaying status information such as whether the current font is italicized or not in a word processor, or selecting options on a web form. Such toggle buttons can be grouped together to allow a user to select one feature from a set of mutually exclusive options, such as the size in points of the current font. These are called radio buttons.
Check boxes
If a set of options is not mutually exclusive, such as font characteristics like bold, italic and underlining, then a set of toggle buttons can be used to indicate the on/off status of the options. This type of collection of buttons is sometimes referred to as check boxes.
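The distinction is easy to see in code. In this tkinter sketch (the font options are illustrative), the radio buttons share a single variable, so selecting one deselects the others, while each check box keeps its own independent on/off flag:

    import tkinter as tk

    root = tk.Tk()

    size = tk.IntVar(value=12)    # one shared variable: mutually exclusive choices
    for pts in (10, 12, 14):
        tk.Radiobutton(root, text=f"{pts} pt", variable=size, value=pts).pack(anchor="w")

    # One variable per box: independent on/off toggles
    flags = {name: tk.BooleanVar() for name in ("Bold", "Italic", "Underline")}
    for name, var in flags.items():
        tk.Checkbutton(root, text=name, variable=var).pack(anchor="w")

    root.mainloop()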
Toolbars
Many systems have a collection of small buttons, each with icons, placed at the top or side of the window and offering commonly used functions. The function of this toolbar is similar to a menu bar, but as the icons are smaller than the equivalent text, more functions can be simultaneously displayed. Sometimes the content of the toolbar is fixed, but often users can customize it, either changing which functions are made available, or choosing which of several predefined toolbars is displayed.
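A toolbar can be sketched in tkinter as nothing more than a frame of small buttons docked above the work area (the button names are placeholders, and the commands simply print):

    import tkinter as tk

    root = tk.Tk()
    toolbar = tk.Frame(root, bd=1, relief=tk.RAISED)
    for name in ("New", "Open", "Save"):
        # A real toolbar would use small icon images instead of text labels.
        tk.Button(toolbar, text=name, width=5,
                  command=lambda n=name: print(n)).pack(side=tk.LEFT, padx=1)
    toolbar.pack(side=tk.TOP, fill=tk.X)

    tk.Text(root).pack(fill=tk.BOTH, expand=True)   # the main work area
    root.mainloop()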
Palettes
In many application programs, interaction can be in one of several modes. The defining characteristic of modes is that the interpretation of actions, such as keystrokes or gestures with the mouse, changes as the mode changes. For example, in the standard UNIX text editor vi, keystrokes can be interpreted either as operations to insert characters in the document or as operations to perform file manipulation. Problems occur if the user is not aware of the current mode. Palettes are a mechanism for making the set of possible modes and the active mode visible to the user. A palette is usually a collection of icons that are reminiscent of the purpose of the various modes. An example in a drawing package would be a collection of icons to indicate the pixel color or pattern that is used to fill in objects, much like an artist's palette for paint.
Some systems allow the user to create palettes from menus or toolbars. In the case of pull-down menus, the user may be able to `tear off' the menu, turning it into a palette showing the menu items. In the case of toolbars, he may be able to drag the toolbar away from its normal position and place it anywhere on the screen. Tear-off menus are usually those that are heavily graphical anyway, for example line style or color selection in a drawing package.
Dialog boxes
Dialog boxes are information windows used by the system to bring the user's attention to some important information, possibly an error or a warning used to prevent a possible error.
Alternatively, they are used to invoke a sub-dialog between user and system for a very specific task that will normally be embedded within some larger task. For example, most interactive applications result in the user creating some file that will have to be named and stored within the filing system. When the user wishes to save the file, a dialog box can be used to name the file and indicate where it is to be located within the filing system. When the save sub-dialog is complete, the dialog box will disappear. Just as windows are used to separate the different threads of user-system dialog, so too are dialog boxes used to factor out auxiliary task threads from the main task dialog.
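The save sub-dialog described above can be sketched with tkinter's standard dialog boxes (the file contents and captions are placeholders):

    import tkinter as tk
    from tkinter import filedialog, messagebox

    root = tk.Tk()

    def save():
        # A modal dialog box factors the naming/placing sub-task out of the
        # main task; it disappears once the sub-dialog is complete.
        path = filedialog.asksaveasfilename(defaultextension=".txt")
        if path:                                   # empty string if cancelled
            with open(path, "w") as f:
                f.write("document contents")
            messagebox.showinfo("Saved", f"Wrote {path}")

    tk.Button(root, text="Save...", command=save).pack(padx=20, pady=20)
    root.mainloop()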
15.2 Interaction Paradigms
We believe that we now build interactive systems that are more usable than those built in the past. We also believe that there is considerable room for improvement in designing more usable systems in the future. The great advances in computer technology have increased the power of machines and enhanced the bandwidth of communication between human and computer. The impact of the technology alone, however, is not sufficient to enhance its usability. As our machines have become more powerful, the key to increased usability has come from the creative and considered application of the technology to accommodate and augment the power of the human. Paradigms for interaction have for the most part been dependent upon technological advances and their creative application to enhance interaction.
By interaction paradigm we mean a particular philosophy or way of thinking about interaction design. It is intended to orient designers to the kinds of questions they need to ask. For many years the prevailing paradigm in interaction design was to develop applications for the desktop, intended to be used by a single user sitting in front of a CPU, monitor, keyboard and mouse. A dominant part of this approach was to design software applications that would run using a GUI or WIMP interface.
A recent trend has been to promote paradigms that move beyond the desktop. With the advent of wireless, mobile, and handheld technologies, developers started designing applications that could be used in a diversity of ways besides running only on an individual's desktop machine. We will discuss different paradigms here.
Time sharing
In the 1940s and 1950s, the significant advances in computing consisted of new hardware technologies. Mechanical relays were replaced by vacuum electron tubes. Tubes were replaced by transistors, and transistors by integrated chips, all of which meant that the amount of sheer computing power was increasing by orders of magnitude. By the 1960s it was becoming apparent that the explosion of growth in computing power would be wasted if there were not an equivalent explosion of ideas about how to channel that power. One of the leading advocates of research into human-centered applications of computer technology was J. C. R. Licklider, who became the director of the Information Processing Techniques Office of the US Department of Defense's Advanced Research Projects Agency (ARPA). It was Licklider's goal to finance various research centers across the United States in order to encourage new ideas about how best to apply the burgeoning computing technology.
One of the major contributions to come out of this new emphasis in research was the concept of time-sharing, in which a single computer could support multiple users. Previously, the human was restricted to batch sessions, in which complete jobs were submitted on punched cards or paper tape to an operator who would then run them individually on the computer. Time-sharing systems of the 1960s made programming a truly interactive venture and brought about a subculture of programmers known as `hackers': single-minded masters of detail who took pleasure in understanding complexity. Though the purpose of the first interactive time-sharing systems was simply to augment the programming capabilities of the early hackers, it marked a significant stage in computer applications for human use.
Video display units
As early as the mid-1950s researchers were experimenting with the possibility of presenting and manipulating information from a computer in the form of images on a video display unit (VDU). These display screens could provide a more suitable medium than a paper printout for presenting vast quantities of strategic information for rapid assimilation. It was not until 1962, however, when a young graduate student at the Massachusetts Institute of Technology (MIT), Ivan Sutherland, astonished the established computer science community with the Sketchpad program, that the capabilities of visual images were realized.
Sketchpad demonstrated two important ideas. First, computers could be used for more than just data processing. They could extend the user's ability to abstract away from some levels of detail, visualizing and manipulating different representations of the same information. Those abstractions did not have to be limited to representations in terms of bit sequences deep within the recesses of computer memory. Rather, the abstractions could be made truly visual. To enhance human interaction, the information within the computer was made more amenable to human consumption. The computer was made to speak a more human language, instead of the human being forced to speak more like a computer. Secondly, Sutherland's efforts demonstrated how important the contribution of one creative mind could be to the entire history of computing.
Programming toolkits
Douglas Engelbart's ambition since the early 1950s was to use computer technology as a means of complementing human problem-solving activity. Engelbart's idea as a graduate student at the University of California at Berkeley was to use the computer to teach humans. This dream of naïve human users actually learning from a computer was a stark contrast to the prevailing attitude of his contemporaries that computers were purposely complex technology that only the intellectually privileged were capable of manipulating.
Personal computing
Programming toolkits provide a means for those with substantial computing skills to increase their productivity greatly. But Engelbart's vision was not exclusive to the computer literate. The decade of the 1970s saw the emergence of computing power aimed at the masses, computer literate or not. One of the first demonstrations that the powerful tools of the hacker could be made accessible to the computer novice was a graphics programming language for children called LOGO. The inventor, Seymour Papert, wanted to develop a language that was easy for children to use.
He and his colleagues from MIT and elsewhere designed a computer-controlled mechanical turtle that dragged a pen along a surface to trace its path. In the early 1970s Alan Kay's view of the future of computing was embodied in small, powerful machines dedicated to single users, that is, personal computers. Together with the founding team of researchers at the Xerox Palo Alto Research Center, Kay worked on incorporating a powerful and simple visually based programming environment, Smalltalk, for the personal computing hardware that was just becoming feasible. As technology progresses, it is now becoming more difficult to distinguish between what constitutes a personal computer, or workstation, and what constitutes a mainframe.
Window systems and the WIMP interface
With the advent and immense commercial success of personal computing, the emphasis for increasing the usability of computing technology focused on addressing the single user who engaged in a dialog with the computer in order to complete some work. Humans are able to think about more than one thing at a time, and in accomplishing some piece of work, they frequently interrupt their current train of thought to pursue some other related piece of work. A personal computer system which forces the user to progress in order through all of the tasks needed to achieve some objective, from beginning to end without any diversions, does not correspond to that standard working pattern. If the personal computer is to be an effective dialog partner, it must be as flexible in its ability to change the topic as the human is.
But the ability to address the needs of a different user task is not the only requirement. Computer systems for the most part react to stimuli provided by the user, so they are quite amenable to a wandering dialog initiated by the user. As the user engages in more than one plan of activity over a stretch of time, it becomes difficult for him to maintain the status of the overlapping threads of activity.
Interaction based on windows, icons, menus, and pointers (the WIMP interface) is now commonplace. These interaction devices first appeared in the commercial marketplace in April 1981, when Xerox Corporation introduced the 8010 Star Information System.
The metaphor
Metaphor is used quite successfully to teach new concepts in terms of ones which are already understood. It is no surprise that this general teaching mechanism has been successful in introducing computer novices to relatively foreign interaction techniques. Metaphor is used to describe the functionality of many interaction widgets, such as windows, menus, buttons and palettes. Tremendous commercial successes in computing have arisen directly from a judicious choice of metaphor. The Xerox Alto and Star were the first workstations based on the metaphor of the office desktop. The majority of the management tasks on a standard workstation have to do with file manipulation. Linking the set of tasks associated with file manipulation to the filing tasks in a typical office environment makes the actual computerized tasks easier to understand at first. The success of the desktop metaphor is unquestionable. Another good example in the personal computing domain is the widespread use of the spreadsheet for accounting and financial modeling.
Very few will debate the value of a good metaphor for increasing the initial familiarity between user and computer application. The danger of a metaphor is usually realized after the initial honeymoon period. When word processors were first introduced, they relied heavily on the typewriter metaphor.
The keyboard of a computer closely resembles that of a standard typewriter, so it seems like a good metaphor from which to teach. For example, the space key on a typewriter is passive, producing nothing on the piece of paper and just moving the guide further along the current line. For a typewriter, a space is not a character. However, for a word processor, the blank space is a character, which must be inserted within a text just as any other character is inserted. So an experienced typist is not going to be able to predict the behavior of a word processor purely from a preliminary understanding based on the typewriter.
Another problem with a metaphor is the cultural bias that it portrays. With the growing internationalization of software, it should not be assumed that a metaphor will apply across national boundaries. A meaningless metaphor will only add another layer of complexity between the user and the system.
Direct manipulation
In the early 1980s, as the price of fast and high-quality graphics hardware steadily decreased, designers began to see that their products were gaining popularity as their visual content increased. As long as the user-system dialog remained a command line prompt, computing was going to stay within the minority population of the hackers who reveled in the challenge of complexity. In a standard command line interface, the only way to get any feedback on the results of previous interaction is to know that you have to ask for it and to know how to ask for it. Rapid visual and audio feedback on a high-resolution display screen or through a high-quality sound system makes it possible to provide evaluative information for every executed user action.
Rapid feedback is just one feature of the interaction technique known as direct manipulation. Ben Shneiderman is attributed with coining this phrase in 1982 to describe the appeal of graphics-based interactive systems such as Sketchpad and the Xerox Alto and Star. He highlights the following features of a direct manipulation interface:
· visibility of the objects of interest
· incremental action at the interface with rapid feedback on all actions
· reversibility of all actions, so that users are encouraged to explore without severe penalties
· syntactic correctness of all actions, so that every user action is a legal operation
· replacement of complex command languages with actions to manipulate directly the visible objects
The first real commercial success which demonstrated the inherent usability of direct manipulation interfaces for the general public was the Macintosh personal computer, introduced by Apple Computer, Inc. in 1984 after the relatively unsuccessful marketing attempt in the business community of the similar but more pricey Lisa computer. The direct manipulation interface for the desktop metaphor requires that the documents and folders are made visible to the user as icons, which represent the underlying files and directories. An operation such as moving a file from one directory to another is mirrored as an action on the visible document, which is picked up and dragged along the desktop from one folder to the next.
Language versus action
Whereas it is true that direct manipulation interfaces make some tasks easier to perform correctly, it is equally true that some tasks are more difficult, if not impossible. Contrary to popular wisdom, it is not generally true that actions speak louder than words. The image projected for direct manipulation was of the interface as a replacement for the underlying system as the world of interest to the user: actions performed at the interface replace any need to understand their meaning at any deeper, system level. Another image is of the interface as the interlocutor or mediator between the user and the system. The user gives the interface instructions and it is then the responsibility of the interface to see that those instructions are carried out. The user-system communication is by means of indirect language instead of direct actions.
We can attach two meaningful interpretations to this language paradigm. The first requires that the user understands how the underlying system functions, so that the interface as interlocutor need not perform much translation. In fact, this interpretation of the language paradigm is similar to the kind of interaction which existed before direct manipulation interfaces were around. In a way, we have come full circle. The second interpretation does not require the user to understand the underlying system's structure. The interface serves a more active role, as it must interpret between the intended operation as requested by the user and the possible system operations that must be invoked to satisfy that intent. Because it is more active, some people refer to the interface as an agent in these circumstances. This kind of language paradigm can be seen in querying some internal system database: you ask for the information you want, but you would not know how that information is organized.
Whatever interpretation is attached to the language paradigm, it is clear that it has advantages and disadvantages when compared with the action paradigm implied by direct manipulation interfaces. In the action paradigm, it is often much easier to perform simple tasks without risk of certain classes of error. For example, recognizing and pointing to an object reduces the difficulty of identification and the possibility of misidentification. On the other hand, more complicated tasks are often rather tedious to perform in the action paradigm, as they require repeated execution of the same procedure with only minor modification. In the language paradigm, there is the possibility of describing a generic procedure once and then leaving it to be executed without further user intervention.
The action and language paradigms need not be completely separate. In the above example, the two different paradigms are distinguished by saying that generic and repeatable procedures can be described in the language paradigm and not in the action paradigm. An interesting combination of the two occurs in programming by example, when a user can perform some routine tasks in the action paradigm and the system records this as a generic procedure. In a sense, the system is interpreting the user's actions as a language script that it can then follow.
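A toy sketch in plain Python (the action names are invented) of this combination: each concrete step is both performed and recorded, and the recording can then be replayed as a generic procedure:

    actions = []

    def record(name, *args):
        # Perform one step in the action paradigm and log it.
        print("doing", name, *args)
        actions.append((name, args))

    record("select", "heading")
    record("set_style", "bold")
    record("set_size", 14)

    # The recorded list is now a small `language script' that the system
    # can follow again without further user intervention.
    def replay():
        for name, args in actions:
            print("replaying", name, *args)

    replay()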
Hypertext
In 1945, Vannevar Bush, then the highest-ranking scientific administrator in the US war effort, published an article entitled `As We May Think' in The Atlantic Monthly. Bush was in charge of over 6000 scientists who had greatly pushed back the frontiers of scientific knowledge during the Second World War. He recognized that a major drawback of these prolific research efforts was that it was becoming increasingly difficult to keep in touch with the growing body of scientific knowledge in the literature.
In his opinion, the greatest advantages of this scientific revolution were to be gained by those individuals who were able to keep abreast of an ever-increasing flow of information. To that end, he described an innovative and futuristic information storage and retrieval apparatus, the memex, which was constructed with technology wholly existing in 1945 and aimed at increasing the human capacity to store and retrieve connected pieces of knowledge by mimicking our ability to create random associative links.
An unsuccessful attempt to create a machine language equivalent of the memex on early 1960s computer hardware led Ted Nelson on a lifelong quest to produce Xanadu, a potentially revolutionary worldwide publishing and information retrieval system based on the idea of interconnected, non-linear text and other media forms. A traditional paper is read from beginning to end, in a linear fashion. But within that text, there are often ideas or footnotes that urge the reader to digress into a richer topic. The linear format for information does not provide much support for this random and associative browsing task. What Bush's memex suggested was to preserve the non-linear browsing structure in the actual documentation. Nelson coined the phrase hypertext in the mid-1960s to reflect this non-linear text structure.
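A minimal data-structure sketch in plain Python (the page names and texts are invented) of hypertext's essential idea: each piece of text carries named links, so reading becomes a path through a graph rather than a line:

    pages = {
        "intro":  {"text": "Bush proposed the memex ...",    "links": ["memex", "xanadu"]},
        "memex":  {"text": "An associative store ...",       "links": ["intro"]},
        "xanadu": {"text": "Nelson's publishing system ...", "links": ["intro"]},
    }

    def browse(start, hops):
        # Follow the first link at each page: one of many possible paths.
        page = start
        for _ in range(hops):
            print(page, "->", pages[page]["links"])
            page = pages[page]["links"][0]

    browse("intro", 3)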
Multi-modality
The majority of interactive systems still use the traditional keyboard and a pointing device, such as a mouse, for input and are restricted to a color display screen with some sound capabilities for output. Each of these input and output devices can be considered as a communication channel for the system, and they correspond to certain human communication channels. A multi-modal interactive system is a system that relies on the use of multiple human communication channels. Each different channel for the user is referred to as a modality of interaction. In this sense, all interactive systems can be considered multi-modal, for humans have always used their visual and haptic channels in manipulating a computer. In fact, we often use our audio channel to hear whether the computer is actually running properly.
However, genuine multi-modal systems rely to an extent on simultaneous use of multiple communication channels for both input and output. Humans quite naturally process information by simultaneous use of different channels.
Computer-supported cooperative work
Another development in computing in the 1960s was the establishment of the first computer networks, which allowed communication between separate machines. Personal computing was all about providing individuals with enough computing power so that they were liberated from dumb terminals which operated on a time-sharing system. It is interesting to note that as computer networks became widespread, individuals retained their powerful workstations but now wanted to reconnect themselves to the rest of the workstations in their immediate working environment, and even throughout the world. One result of this reconnection was the emergence of collaboration between individuals via the computer, called computer-supported cooperative work, or CSCW.
The main distinction between CSCW systems and interactive systems designed for a single user is that the designer can no longer neglect the society within which any single user operates. CSCW systems are built to allow interaction between humans via the computer, and so the needs of the many must be represented in the one product.
A fine example of a CSCW system is electronic mail (email), yet another metaphor by which individuals at physically separate locations can communicate via electronic messages that work in a similar way to conventional postal systems.
The World Wide Web
Probably the most significant recent development in interactive computing is the World Wide Web, often referred to as just the web, or WWW. The web is built on top of the Internet, and offers an easy to use, predominantly graphical interface to information, hiding the underlying complexities of transmission protocols, addresses and remote access to data.
The Internet is simply a collection of computers, each linked by any sort of data connection, whether it be a slow telephone line and modem or a high-bandwidth optical connection. The computers of the Internet all communicate using common data transmission protocols and addressing systems. This makes it possible for anyone to read anything from anywhere, in theory, if it conforms to the protocol. The web builds on this with its own layer of network protocol, a standard markup notation for laying out pages of information, and a global naming scheme. Web pages can contain text, color images, movies, sound and, most important, hypertext links to other web pages. Hypermedia documents can therefore be published by anyone who has access to a computer connected to the Internet.
Ubiquitous computing
In the late 1980s, a group of researchers at Xerox PARC, led by Mark Weiser, initiated a research program with the goal of moving human-computer interaction away from the desktop and out into our everyday lives. Weiser observed:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.
These words have inspired a new generation of researchers in the area of ubiquitous computing. Another popular term for this emerging paradigm is pervasive computing, first coined by IBM. The intention is to create a computing infrastructure that permeates our physical environment so much that we do not notice the computer any longer. A good analogy for the vision of ubiquitous computing is the electric motor. When the electric motor was first introduced, it was large, loud and very noticeable. Today, the average household contains so many electric motors that we hardly ever notice them anymore. Their utility led to ubiquity and, hence, invisibility.
Sensor-based and context-aware interaction
The yard-scale, foot-scale and inch-scale computers are all still clearly embodied devices with which we interact, whether or not we consider them `computers'. There are an increasing number of proposed and existing technologies that embed computation even deeper, but unobtrusively, into day-to-day life. Weiser's dream was of technology woven so deeply into everyday life that we would not think of the devices as `computers' anymore, and the term ubiquitous computing encompasses a wide range from mobile devices to more pervasive environments.