Human Computer Interaction (CS408) VU
Lecture 40
Observing Users

Learning Goals
The aim of this lecture is to introduce you to the study of Human Computer Interaction, so that after studying it you will be able to:
· Discuss
the benefits and challenges
of different types of
observation.
· Discuss
how to collect, analyze and
present data from observational
evaluation.
Observation
involves watching and
listening to users. Observing
users interacting
with
software, even casual
observing, can tell you an
enormous amount about
what
they
do, the context in which
they do it, how well
technology supports them, and
what
other
support is needed. In this lecture we
describe how to observe and do
ethnography
and discuss their role in
evaluation.
Users
can be observed in controlled
laboratory-like conditions, as in
usability testing,
or in the
natural environments in which
the products are used--i.e.,
the field. How
the
observation
is done depends on why it is
being done and the
approach adopted.
There
is a
variety of structured, less
structured, and descriptive
observation techniques
for
evaluators
to choose from. Which they
select and how their
findings are
interpreted
will
depend upon the evaluation
goals, the specific
questions being addressed,
and
practical
constraints.
40.1 What and when to observe
Observing
is useful at any time during
product development. Early in
design,
observation
helps designers understand users'
needs. Other types of
observation
are
done later to examine
whether the developing
prototype meets users' needs.
Depending
on the type of study, evaluators
may be onlookers,
participant
observers,
or ethnographers. The degree of
immersion that evaluators
adopt varies
across a
broad outsider-insider spectrum.
Where a particular study
falls along this
spectrum
depends on its goal and on
the practical and ethical
issues that constrain
and shape
it.
40.2 How to observe
The
same basic data-collection tools
are used for laboratory
and field studies
(i.e.,
direct
observation, taking notes, collecting
video, etc.) but the
way in which they
are
used is
different. In the laboratory
the emphasis is on the
details of what
individuals
do, while in the field
the context is important and
the focus is on how
people
interact with each other,
the technology, and their
environment. Furthermore,
the
equipment in the laboratory is
usually set up in advance
and is relatively
static
whereas
in the field it usually must be moved
around. In this section we discuss
how to
observe,
and then examine the
practicalities and compare
data-collection tools.
In
controlled environments
The
role of the observer is to first
collect and then make
sense of the stream
of
data on
video, audiotapes, or notes
made while watching users in
a controlled
environment.
Many practical issues have to be thought
about in advance,
including
the following.
· It is
necessary to decide where
users will be located so that
the
equipment
can be set up. Many usability
laboratories, for example,
have two
or three wall-mounted, adjustable cameras
to record users'
activities
while they work on test
tasks. One camera might
record facial
expressions,
another might focus on mouse
and keyboard activity,
and
another
might record a broad view of
the participant and capture
body
language.
The stream of data from
the cameras is fed into a video
editing and
analysis
suite where it is annotated
and partially edited. Another
form of
data
that can be collected is an
interaction log. This records
all the user's
key
presses.
Mobile usability laboratories, as the
name suggests, are intended
to
be moved
around, but the equipment
can be bulky. Usually it is
taken to a
customer's
site where a temporary
laboratory environment is
created.
· The
equipment needs testing to
make sure that it is set up
and works as
expected,
e.g., it is advisable that
the audio is set at the
right level to
record
the user's voice.
· An
informed consent form should
be available for users to
read and sign
at the
beginning of the study. A script is
also needed to guide how
users
are
greeted, and to tell them
the goals of the study,
how long it will
last,
and to
explain their rights. It is also
important to make users
feel
comfortable
and at ease.
In the
field
Whether
the observer sets out to be
an outsider or an insider, events in the
field can
be
complex and rapidly changing.
There is a lot for evaluators to think about, so many experts have a framework to structure and focus their observation. The framework can be quite simple. For
example, this is a practitioner's
framework that
focuses
on just three easy-to-remember items to look
for:
· The
Person. Who is using the
technology at any particular
time?
· The
Place. Where are they
using it?
· The
Thing. What are they
doing with it?
Frameworks
like the one above help
observers to keep their goals and
questions in
sight.
Experienced observers may, however,
prefer more detailed frameworks, such as the one suggested by Goetz and LeCompte (1984) below, which
encourages observers
to pay
greater attention to the context of
events, the people and the
technology:
Who is
present? How would you
characterize them? What is
their role?
What is
happening? What are people
doing and saying and how
are they behaving?
Does any
of this behavior appear routine?
What is their tone and
body language?
When
does the activity occur? How
is it related to other
activities?
Where is
it happening? Do physical conditions
play a role?
Why is it happening? What precipitated
the event or interaction? Do
people have
different
perspectives?
How is
the activity organized? What rules or
norms influence
behavior?
Colin Robson (1993) suggests a slightly longer but similar set of items (a sketch of a note template based on these items follows the list):
· Space.
What is the physical space
like and how is it laid
out?
· Actors.
What are the names
and relevant details of the
people involved?
· Activities.
What are the actors doing
and why?
· Objects.
What physical objects are present,
such as furniture?
· Acts.
What are specific
individuals doing?
· Events.
Is what you observe part of a
special event?
· Goals.
What are the actors trying
to accomplish?
· Feelings.
What is the mood of the
group and of individuals?
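To show how such a framework can also structure the notes themselves, here is a minimal sketch of a note template whose fields mirror Robson's items. The class name and the example values are invented for illustration; they are not part of Robson's framework.

```python
# Minimal, hypothetical note template based on Robson's (1993) items.
# The field names mirror the framework; the example values are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class ObservationNote:
    space: str          # What is the physical space like and how is it laid out?
    actors: List[str]   # Names and relevant details of the people involved
    activities: str     # What the actors are doing and why
    objects: List[str]  # Physical objects present, such as furniture
    acts: str           # What specific individuals are doing
    events: str         # Is what is observed part of a special event?
    goals: str          # What the actors are trying to accomplish
    feelings: str       # Mood of the group and of individuals

note = ObservationNote(
    space="Open-plan office, desks in clusters of four",
    actors=["support agent", "shift supervisor"],
    activities="Agent logs a customer call while asking the supervisor for advice",
    objects=["headset", "dual monitors", "paper call log"],
    acts="Supervisor points at the on-screen form to show which field to use",
    events="End-of-month reporting rush",
    goals="Close the call record before the next call arrives",
    feelings="Hurried but cooperative",
)
print(note)
```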
These
frameworks are useful not
only for providing focus but
also for organizing
the
observation and data-collection
activity. Below is a checklist of
things to plan
before
going into the
field:
· State
the initial study goal and
questions clearly.
· Select
a framework to guide your
activity in the
field.
· Decide
how to record events--i.e., as
notes, on audio, or on video, or
using
a
combination of all three. Make
sure you have the
appropriate equipment
and
that it works. You need a
suitable notebook and pens.
A laptop
computer
might be useful but could be
cumbersome. Although this
is
called
observation, photographs, video,
interview transcripts and
the like
will help
to explain what you see
and are useful for
reporting the story
to
others.
· Be
prepared to go through your
notes and other records as
soon as
possible
after each evaluation
session to flesh out detail
and check
ambiguities
with other observers or with
the people being observed.
This
should be
done routinely because human
memory is unreliable. A
basic
rule is
to do it within 24 hours, but
sooner is better!
· As
you make and review
your notes, try to highlight
and separate personal
opinion
from what happens. Also
clearly note anything you
want to go back
to.
Data collection and analysis
go hand in hand to a large
extent in
fieldwork.
· Be
prepared to refocus your
study as you analyze and
reflect upon what
you
see. Having observed for a
while, you will start to
identify interesting
phenomena
that seem relevant.
Gradually you will sharpen your ideas
into
questions
that guide further
observation, either with the
same group or with a
new
but similar group.
· Think
about how you will gain the
acceptance and trust of
those you observe.
Adopting
a similar style of dress and
finding out what interests
the group and
showing
enthusiasm for what they do
will help. Allow time to
develop
relationships.
Fixing regular times and
venues to meet is also
helpful, so everyone
knows
what to expect. Also, be aware
that it will be easier to
relate to some
people
than others, and it will be
tempting to pay attention to those
who
receive
you well, so make sure you
attend to everyone in the
group.
Think
about how to handle
sensitive issues, such as
negotiating where you can
go.
For
example, imagine you are
observing the usability of a portable
home
communication
device. Observing in the living
room, study, and kitchen is
likely
to be
acceptable, but bedrooms and
bathrooms are probably out
of bounds. Take time
to check
what participants are
comfortable with and be accommodating
and
flexible.
Your choice of equipment for
data collection will also
influence how
intrusive
you are in people's
lives.
· Consider
working as a team. This can
have several benefits: for instance,
you can
compare
your observations. Alternatively, you
can agree to focus on
different
people or
different parts of the context.
Working as a team is also likely
to
generate
more reliable data because
you can compare notes
among different
evaluators.
· Consider
checking your notes with an
informant or members of the group
to
ensure
that you are understanding
what is happening and that
you are making
good
interpretations.
· Plan
to look at the situation
from different perspectives. For
example, you may
focus on
particular activities or people. If
the situation has a
hierarchical
structure,
as in many companies, you
will get different perspectives
from different
layers of
management--e.g., end-users, marketing,
product developers,
product
managers,
etc.
Participant
observation and ethnography
Being a
participant observer or an ethnographer
involves all the practical
steps just
mentioned,
but in particular the
evaluator must be accepted
into the group. An
interesting
example of participant observation is
provided by Nancy Baym's
work (1997)
in which
she joined an online
community interested in soap
operas for over a year
in
order to
understand how the community
functioned. She told the
community what she
was
doing and offered to share
her findings with them. This
honest approach gained
her
their
trust, and they offered
support and helpful comments. As Baym
participated
she
learned about the community,
who the key characters were,
how people interacted,
their
values, and the types of
discussions that were generated.
She kept all the
messages
as data to be referred to later. She
also adapted interviewing
and
questionnaire techniques to collect additional
information.
As we
said, the distinction between
ethnography and participant
observation is
blurred.
Some ethnographers believe that
ethnography is an open
interpretivist
approach
in which evaluators keep an
open mind about what
they will see. Others
such as
David Fetterman from
Stanford University, see a
stronger role for a
theoretical
underpinning: "before asking
the first question in the
field the
ethnographer
begins with a problem, a
theory or model, a research
design, specific
data
collection techniques, tools
for analysis, and a specific
writing style"
(Fetterman, 1998, p. 1). This may
sound as if ethnographers have biases,
but by
making
assumptions explicit and
moving between different
perspectives, biases are
at least
reduced. Ethnographic study
allows multiple
interpretations
of reality; it is
interpretivist.
Data
collection and analysis
often occur simultaneously
in
ethnography,
with analysis happening at many
different levels throughout
the
study.
The question being
investigated is refined as more
understanding about
the
situation
is gained.
The
checklist below (Fetterman, 1998) for doing ethnography
is similar to the
general
list just mentioned:
Identify
a problem or goal and then
ask good questions to be
answered by the
study,
which may or may not
invoke theory depending on
your philosophy of
ethnography.
Observation frameworks such as
those mentioned above can
help
to focus
the study and stimulate
questions.
The
most important part of
fieldwork is just being there to observe,
ask questions, and
record
what is seen and heard.
You need to be aware of people's feelings
and sensitive
to where
you should not go.
Collect a
variety of data, if possible,
such as notes, still pictures,
audio and video,
and
artifacts
as appropriate. Interviews are
one of the most important
data-gathering
techniques
and can be structured,
semi-structured, or open. So-called
retrospective
interviews
are used after the fact to
check that interpretations are
correct.
As you
work in the field, be
prepared to move backwards
and forwards between
the
broad
picture and specific
questions. Look at the
situation holistically and then
from the
perspectives
of different stakeholder groups and participants.
Early questions are likely
to
be broad,
but as you get to know
the situation ask more
specific questions.
Analyze
the data using a holistic
approach in which observations
are understood within
the
broad
context--i.e., they are
contextualized. To do this, first
synthesize your
notes,
which is
best done at the end of each
day, and then check with
someone from the
community
that you have described
the situation accurately. Analysis is
usually iterative,
building
on ideas with each
pass.
40.3 Data Collection
Data
collection techniques (i.e.,
taking notes, audio recording,
and video recording)
are used
individually or in combination and
are often supplemented with
photos from
a still
camera. When different kinds
of data are collected,
evaluators have to
coordinate
them; this requires
additional effort but has
the advantage of providing
more
information and different
perspectives. Interaction logging
and participant
diary
studies are also used.
Which techniques are used
will depend on the
context,
time
available, and the
sensitivity of what is being
observed. In most
settings,
audio,
photos, and notes will be
sufficient. In others it is essential to
collect video
data so
as to observe in detail the
intricacies of what is going
on.
Notes
plus still camera
Taking
notes is the least technical
way of collecting data, but
it can be difficult
and
tiring to write and observe at the
same time. Observers also
get bored and
the
speed at
which they write is limited.
Working with another person
solves some of
these
problems and provides
another perspective. Handwritten notes
are flexible in the
field
but must be transcribed.
However, this transcription can be
the first step in
data
analysis, as the evaluator
must go through the data and
organize it. A laptop
computer
can be a useful alternative
but it is more obtrusive and cumbersome,
and its
batteries need
recharging every few hours. If a
record of images is needed,
photographs,
digital images, or sketches are easily
collected.
Audio
recording plus still
camera
Audio
can be a useful alternative to
note taking and is less
intrusive than video.
It
allows
evaluators to be more mobile
than with even the
lightest, battery-driven
video
cameras, and so is very
flexible. Tapes, batteries, and
the recorder are
now
relatively
inexpensive but there are
two main problems with
audio recording. One
is
the
lack of a visual record,
although this can be dealt
with by carrying a
small
camera.
The second drawback is
transcribing the data, which
can be onerous if the contents of
many hours of recording have
to be transcribed: often, however,
only sections
are
needed. Using a headset with
foot control makes
transcribing less onerous.
Many
studies do
not need this level of
detail; instead, evaluators
use the recording to
remind
them about important details
and as a source of anecdotes for
reports.
Video
Video
has the advantage of
capturing both visual and
audio data but can be
intrusive.
However,
the small, handheld, battery-driven
digicams are fairly mobile,
inexpensive
and
are commonly used.
A problem
with using video is that
attention becomes focused on
what is seen through
the
lens. It is easy to miss
other things going on outside of
the camera view.
When
recording
in noisy conditions, e.g., in rooms
with many computers running
or outside
when it
is windy, the sound may get
muffled.
Analysis
of video data can be very
time-consuming as there is so much to take note
of.
Over
100 hours of analysis time for
one hour of video recording
is common for detailed
analyses
in which every gesture and
utterance is analyzed.
40.4 Indirect observation: tracking users' activities
Sometimes
direct observation is not
possible because it is obtrusive or
evaluators
cannot be
present over the duration of
the study, and so users'
activities are
tracked
indirectly.
Diaries and interaction logs
are two techniques for doing
this. From the
records
collected evaluators reconstruct
what happened and look
for usability and
user
experience problems.
Diaries
Diaries
provide a record of what
users did, when they
did it, and what
they thought
about
their interactions with the
technology. They are useful
when users are
scattered
and
unreachable in person, as in many
Internet and web
evaluations. Diaries are
inexpensive,
require no special equipment or
expertise, and are suitable
for long-term
studies.
Templates can also be created
online to standardize entry
format and enable
the data
to go straight into a database
for analysis. These templates
are like those used
in
open-ended online questionnaires.
However, diary studies rely on
participants
being
reliable and remembering to
complete them, so incentives
are needed and the
process
has to be straightforward and
quick. Another problem is
that participants
often
remember events as being better or
worse than they really
were, or taking more
or less
time than they actually
did.
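As a rough sketch of how an online template can standardize diary entries and feed them straight into a database, the fragment below uses Python's built-in sqlite3 module. The table layout, field names, and sample entry are assumptions made for illustration, not a prescribed format.

```python
# Minimal sketch of a standardized diary-entry template stored in a database.
# The schema, field names, and sample entry are illustrative assumptions.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("diary_study.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS diary_entries (
        participant_id TEXT,
        entry_time     TEXT,     -- when the entry was made
        activity       TEXT,     -- what the participant did
        duration_min   INTEGER,  -- how long it took (self-reported)
        comments       TEXT      -- what they thought about the interaction
    )
""")

def add_entry(participant_id, activity, duration_min, comments):
    """Store one diary entry; in practice this would be called from a web form."""
    conn.execute(
        "INSERT INTO diary_entries VALUES (?, ?, ?, ?, ?)",
        (participant_id, datetime.now().isoformat(), activity, duration_min, comments),
    )
    conn.commit()

add_entry("P07", "Checked an e-commerce order status by phone", 4,
          "Found the tracking page quickly but the text was too small")
```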
Robinson
and Godbey (1997) asked
participants in their study to record how much time they spent on various activities.
These diaries were completed
at the end
of each
day and the data was
later analyzed to investigate
the impact of television
on
people's
lives. In another diary
study, Barry Brown and
his colleagues from
Hewlett
Packard
collected diaries from 22
people to examine when, how,
and why they
capture
different
types of information, such as notes,
marks on paper, scenes, sounds,
moving
images, etc.
(Brown et al., 2000). The
participants were each given
a small handheld
camera
and told to take a picture
every time they captured
information in any
form.
The
study lasted for seven days
and the pictures were
used as memory joggers in
a
subsequent
semi-structured interview used to
get participants to elaborate on
their
activities.
Three hundred and eighty-one
activities were recorded.
The pictures
provided
useful contextual information.
From this data the
evaluators constructed a
framework
to inform the design of new
digital cameras and handheld
scanners.
Interaction
logging
Interaction
logging, in which key presses and mouse or other device movements are recorded, has been used in usability
testing for many years.
Collecting this data is
usually
synchronized with video and
audio logs to help
evaluators analyze
users'
behavior
and understand how users
worked on the tasks they
set. Specialist
software
tools are
used to collect and analyze
the data. The log is
also time-stamped so it
can
be used
to calculate how long a user
spends on a particular task or
lingered in a
certain
part of a website or software
application.
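To make the idea concrete, here is a minimal sketch of a time-stamped interaction log and of how the time stamps support duration calculations. The event names and the log format are assumptions for illustration; real logging tools capture events through system hooks rather than explicit calls.

```python
# Minimal sketch of a time-stamped interaction log and a task-duration calculation.
# Event names and the log format are illustrative assumptions.
import time

log = []  # each entry: (timestamp, event, detail)

def record(event, detail=""):
    log.append((time.time(), event, detail))

# Simulated session: real entries would come from key-press and mouse hooks.
record("task_start", "Task 1: find the product page")
record("key_press", "s")
record("mouse_click", "search button")
record("task_end", "Task 1")

def task_duration(task_label):
    """Time elapsed between the task_start and task_end entries for a task."""
    start = next(t for t, e, d in log if e == "task_start" and d.startswith(task_label))
    end = next(t for t, e, d in log if e == "task_end" and d.startswith(task_label))
    return end - start

print(f"Task 1 took {task_duration('Task 1'):.3f} seconds")
```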
Explicit counters that
record visits to a website
were once a familiar sight.
Recording
the number of visitors to a site
can be used to justify maintenance
and
upgrades
to it. For example, if you
want to find out whether
adding a bulletin
board to
an e-commerce website increases the
number of visits, being able
to
compare
traffic before and after the
addition of the bulletin board is
useful. You
can also
track how long people stayed
at the site, which areas
they visited, where
they came
from, and where they
went next by tracking their
Internet Service
Provider
(ISP) address. For example, in a
study of an interactive art
museum by
researchers at
the University of Southern
California, server logs were
analyzed by
tracking
visitors in this way
(McLaughlin et al., 1999).
Records of when
people
came to
the site, what they
requested, how long they
looked at each page,
what
browser
they were using, and
what country they were
from, etc., were
collected
over a
seven-month period. The data
was analyzed using Webtrends, a
commercial
analysis
tool, and the evaluators
discovered that the site
was busiest on weekday
evenings.
In another study that
investigated lurking behavior in
listserver
discussion
groups, the number of
messages posted was compared
with list
membership
over a three-month period to
see how lurking behavior
differed among
groups
(Nonnecke and Preece,
2000).
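Analyses of this kind can be approximated with a short script over the server log. In the sketch below, the log format (timestamp, visitor address, page) is a simplified assumption rather than any actual server's format, and the entries are invented; commercial tools such as Webtrends do the same kind of counting at scale.

```python
# Minimal sketch of analyzing a simplified server log: requests per page and
# the busiest hour of the day. The log format and entries are illustrative.
from collections import Counter
from datetime import datetime

log_lines = [
    "2000-03-01T19:05:12 192.0.2.10 /exhibits/main.html",
    "2000-03-01T19:07:40 192.0.2.10 /exhibits/room2.html",
    "2000-03-02T09:15:03 198.51.100.7 /exhibits/main.html",
]

page_visits = Counter()
hour_visits = Counter()
for line in log_lines:
    timestamp, visitor, page = line.split()
    page_visits[page] += 1
    hour_visits[datetime.fromisoformat(timestamp).hour] += 1

print("Most requested pages:", page_visits.most_common(2))
print("Busiest hour of the day:", hour_visits.most_common(1))
```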
An
advantage of logging user activity is
that it is unobtrusive, but
this also raises
ethical
concerns that need careful consideration
(see the dilemma about
observing
without
being seen). Another advantage is
that large volumes of data
can be logged
automatically.
However, powerful tools are
needed to explore and
analyze this data
quantitatively
and qualitatively. An increasing
number of visualization tools
are
being
developed for this purpose;
one example is WebLog, which
dynamically
shows
visits to websites (Hochheiser and
Shneiderman, 2000).
40.5 Analyzing, interpreting, and presenting the data
By now
you should know that many,
indeed most observational
evaluations
generate
a lot of data in the form of
notes, sketches, photographs, audio
and video
records
of interviews and events,
various artifacts, diaries,
and logs. Most
observational
data is qualitative and analysis
often involves interpreting
what
users
were doing or saying by
looking for patterns in the data.
Sometimes
qualitative
data is categorized so that it can be
quantified and in some
studies
events
are counted.
Dealing
with large volumes of data,
such as several hours of
video, is daunting,
which is
why it is particularly important to plan
observation studies very
carefully
before
starting them. The DECIDE
framework suggests identifying goals
and
questions
first before selecting
techniques for the study,
because the goals and
questions
help determine which data is
collected and how it will be
analyzed.
When
analyzing any kind of data,
the first thing to do is to
"eyeball" the data to
see
what stands out. Are
there patterns or significant
events? Is there
obvious
evidence
that appears to answer a question or
support a theory? Then
proceed to
analyze
it according to the goals
and questions. The discussion that
follows focuses on
three
types of data:
· Qualitative
data that is
interpreted
and
used to tell "the story"
about what
was
observed.
· Qualitative
data that is
categorized
using
techniques such as content
analysis (a small counting sketch follows this list).
· Quantitative
data that is
collected from interaction
and video logs
and
presented
as values, tables, charts and graphs and
is treated statistically.
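As a small illustration of categorizing qualitative data and then counting events (the second item above), the sketch below tallies coded observation notes by category so the totals can be tabulated or treated statistically. The notes and category labels are invented for the example; a real content analysis would define its coding scheme carefully.

```python
# Minimal sketch of quantifying categorized qualitative data: each note has
# been coded with a category label (invented here), and the counts are tallied.
from collections import Counter

coded_notes = [
    ("User re-reads the error message twice", "confusion"),
    ("User asks the facilitator what 'sync' means", "confusion"),
    ("User smiles after completing the checkout", "satisfaction"),
    ("User backtracks from settings to the home screen", "navigation problem"),
]

category_counts = Counter(category for _, category in coded_notes)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```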
Qualitative
analysis to tell a
story
Much of
the power of analyzing
descriptive data lies in
being able to tell a
convincing
story, illustrated with powerful
examples that help to
confirm the
main
points and will be credible to
the development team. It is hard to
argue with
well-chosen
video excerpts of users interacting with
technology or anecdotes
from
transcripts.
To
summarize, the main
activities involved in working
with qualitative data
to
tell a
story are:
· Review
the data after each
observation session to synthesize
and identify
key
themes and make
collections.
· Record
the themes in a coherent yet
flexible form, with
examples. While
post-its
enable you to move ideas
around and group similar
ones, they can
fall off
and get lost and are
not easily transported, so
capture the main
points
in
another form, either on
paper or on a laptop, or make an
audio recording.
· Record
the date and time of each
data analysis session. (The
raw data should
already
be systematically logged with
dates.)
· As themes
emerge, you may want to
check your understanding
with the
people
you observe or your
informants.
· Iterate
this process until you are
sure that your story
faithfully represents
what
you observed and that
you have illustrated it with
appropriate
examples
from the data.
· Report
your findings to the
development team, preferably in an
oral
presentation
as well as in a written report.
Reports vary in form, but it
is
always
helpful to have a clear,
concise overview of the main
findings
presented
at the beginning.
Quantitative
data analysis
Video
data collected in usability laboratories
is usually annotated as it is
observed.
Small
teams of evaluators watch
monitors showing what is
being recorded in a
control
room out of the users'
sight. As they see errors or
unusual behavior, one
of
the
evaluators marks the video
and records a brief remark.
When the test is
finished
evaluators
can use the annotated
recording to calculate performance
times so they can
compare
users' performance on different
prototypes. The data stream
from the
interaction
log is used in a similar way
to calculate performance times.
Typically
this
data is further analyzed using
simple statistics such as
means, standard
deviations,
T-tests, etc.
Categorized data may also be
quantified and analyzed
statistically, as we
have
said.
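For instance, task-completion times taken from annotated video or interaction logs might be compared across two prototypes using such statistics, as in the minimal sketch below. The timing values are invented, and the t statistic is computed by hand here; in practice a statistics package (e.g., scipy.stats.ttest_ind) would also report the p-value.

```python
# Minimal sketch of comparing task-completion times (seconds) on two prototypes.
# The timing values are invented for illustration.
from statistics import mean, stdev
from math import sqrt

prototype_a = [35.2, 41.0, 38.7, 44.1, 36.5]
prototype_b = [29.8, 31.2, 35.0, 28.4, 30.9]

def summarize(label, times):
    print(f"{label}: mean={mean(times):.1f}s, sd={stdev(times):.1f}s, n={len(times)}")

summarize("Prototype A", prototype_a)
summarize("Prototype B", prototype_b)

# Welch's t statistic (unequal variances assumed).
na, nb = len(prototype_a), len(prototype_b)
t = (mean(prototype_a) - mean(prototype_b)) / sqrt(
    stdev(prototype_a) ** 2 / na + stdev(prototype_b) ** 2 / nb
)
print(f"Welch's t statistic: {t:.2f}")
```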
Feeding
the findings back into
design
The
results from an evaluation can be
reported to the design team in
several ways, as
we have
indicated. Clearly written
reports with an overview at the
beginning and
a detailed contents list make for
easy reading and a good
reference document.
Including
anecdotes,
quotations, pictures, and video
clips helps to bring the study to
life, stimulate
interest, and make the
written description more meaningful.
Some teams like
quantitative
data,
but its value depends on the
type of study and its goals.
Verbal presentations
that
include
video clips can also be
very powerful. Often both
qualitative and quantitative
data
analysis
are useful because they
provide alternative
perspectives.