Organization Development MGMT 628 VU

Lesson 25

Evaluating and Institutionalizing Organization Development Interventions
This lecture focuses on the final stage of the organization development cycle--evaluation and institutionalization. Evaluation is concerned with providing feedback to practitioners and organization members about the progress and impact of interventions. Such information may suggest the need for further diagnosis and modification of the change program, or it may show that the intervention is successful.
Institutionalization involves making a particular change a permanent part of the organization's normal functioning. It ensures that the results of successful change programs persist over time.
Evaluation processes consider both the implementation success of the intended intervention and the long-term results it produces. Two key aspects of effective evaluation are measurement and research design.
The institutionalization, or long-term persistence, of intervention effects is examined in a framework showing the organization characteristics, intervention dimensions, and processes that contribute to the institutionalization of OD interventions in organizations.
Evaluating OD Interventions:
Assessing organization development interventions involves judgments about whether an intervention has been implemented as intended and, if so, whether it is having desired results. Managers investing resources in OD efforts increasingly are being held accountable for results--being asked to justify the expenditures in terms of hard, bottom-line outcomes. More and more, managers are asking for rigorous assessment of OD interventions and are using the results to make important resource allocation decisions about OD, such as whether to continue to support the change program, to modify or alter it, or to terminate it and try something else.
Traditionally, OD evaluation has been discussed as something that occurs after the intervention. That view can be misleading, however. Decisions about the measurement of relevant variables and the design of the evaluation process should be made early in the OD cycle so that evaluation choices can be integrated with intervention decisions.
There are two distinct types of OD evaluation--one intended to guide the implementation of interventions and another to assess their overall impact. The key issues in evaluation are measurement and research design.
Implementation and Evaluation Feedback:
Most discussions and applications of OD evaluation imply that evaluation is something done after intervention. It is typically argued that once the intervention is implemented, it should be evaluated to discover whether it is producing intended effects. For example, it might be expected that a job enrichment program would lead to higher employee satisfaction and performance. After implementing job enrichment, evaluation would involve assessing whether these positive results indeed did occur.
This after-implementation view of evaluation is only partially correct. It assumes that interventions have been implemented as intended and that the key purpose of evaluation is to assess their effects. In many, if not most, organization development programs, however, implementing interventions cannot be taken for granted. Most OD interventions require significant changes in people's behaviors and ways of thinking about organizations, but they typically offer only broad prescriptions for how such changes are to occur.
For example, job enrichment calls for adding discretion, variety, and meaningful feedback to people's jobs. Implementing such changes requires considerable learning and experimentation as employees and managers discover how to translate these general prescriptions into specific behaviors and procedures. This learning process involves much trial and error and needs to be guided by information about whether behaviors and procedures are being changed as intended. Consequently, we should expand our view of evaluation to include both during-implementation assessment of whether interventions are actually being implemented and after-implementation evaluation of whether they are producing expected results.
Both kinds of evaluation provide organization members with feedback about interventions. Evaluation aimed at guiding implementation may be called implementation feedback, and assessment intended to discover intervention outcomes may be called evaluation feedback. Figure 36 shows how the two kinds of feedback fit with the diagnostic and intervention stages of OD.
The application of OD to a particular organization starts with a thorough diagnosis of the situation, which helps identify particular organizational problems or areas for improvement, as well as likely causes underlying them. Next, from an array of possible interventions, one or some set is chosen as a means of improving the organization. The choice is based on knowledge linking interventions to diagnosis and change management.
In most cases, the chosen intervention provides only general guidelines for organizational change, leaving managers and employees with the task of translating those guidelines into specific behaviors and procedures.
Implementation feedback informs this process by supplying data about the different features of the intervention itself and data about the immediate effects of the intervention. These data, collected repeatedly and at short intervals, provide a series of snapshots about how the intervention is progressing.
Organization members can use this information, first, to gain a clearer understanding of the intervention (the kinds of behaviors and procedures required to implement it) and, second, to plan for the next implementation steps. This feedback cycle might proceed for several rounds, with each round providing members with knowledge about the intervention and ideas for the next stage of implementation.
Figure 36: Implementation and Evaluation Feedback
Once implementation feedback informs organization members that the intervention is sufficiently in place, evaluation feedback begins. In contrast to implementation feedback, it is concerned with the overall impact of the intervention and with whether resources should continue to be allocated to it or to other possible interventions. Evaluation feedback takes longer to gather and interpret than does implementation feedback.
It typically includes a broad array of outcome measures, such as performance, job satisfaction, absenteeism, and turnover. Negative results on these measures tell members either that the initial diagnosis was seriously flawed or that the wrong intervention was chosen. Such feedback might prompt additional diagnosis and a search for a more effective intervention. Positive results, on the other hand, tell members that the intervention produced expected outcomes and might prompt a search for ways to institutionalize the changes, making them a permanent part of the organization's normal functioning.
An example of a job enrichment intervention helps to clarify the OD stages and feedback linkages shown in Figure 36. Suppose the initial diagnosis reveals that employee performance and satisfaction are low and that jobs being overly structured and routinized is an underlying cause of this problem. An inspection of alternative interventions to improve productivity and satisfaction suggests that job enrichment might be applicable to this situation. Existing job enrichment theory proposes that increasing employee discretion, task variety, and feedback can lead to improvements in work quality and attitudes, and that this linkage between job design and outcomes is especially strong for employees who have growth needs--needs for challenge, autonomy, and development. Initial diagnosis suggests that most of the employees have high growth needs and that the existing job designs prevent the fulfillment of these needs. Therefore, job enrichment seems particularly suited to this situation.
Managers and employees now start to translate the general prescriptions offered by job enrichment theory into specific behaviors and procedures. At this stage, the intervention is relatively broad and must be tailored to fit the specific situation. To implement the intervention, employees might decide on the following organizational changes: job discretion can be increased through more participatory styles of supervision; task variety can be enhanced by allowing employees to inspect their job outputs; and feedback can be made more meaningful by providing employees with quicker and more specific information about their performances.
After three months of trying to implement these changes, the members use implementation feedback to see how the intervention is progressing. Questionnaires and interviews (similar to those used in diagnosis) are administered to measure the different features of job enrichment (discretion, variety, and feedback) and to assess employees' reactions to the changes. Company records are analyzed to show the short-term effects of the intervention on productivity. The data reveal that productivity and satisfaction have changed very little since the initial diagnosis. Employee perceptions of job discretion and feedback also have shown negligible change, but perceptions of task variety have shown significant improvement.

In-depth discussion and analysis of this first round of implementation feedback help supervisors gain a better feel for the kinds of behaviors needed to move toward a participatory leadership style. This greater clarification of one feature of the intervention leads to a decision to involve the supervisors in leadership training to develop the skills and knowledge needed to lead participatively. A decision also is made to make job feedback more meaningful by translating such data into simple bar graphs, rather than continuing to provide voluminous statistical reports.
After these modifications have been in effect for about three months, members institute a second round of implementation feedback to see how the intervention is progressing. The data now show that productivity and satisfaction have moved moderately higher than in the first round of feedback and that employee perceptions of task variety and feedback are both high. Employee perceptions of discretion, however, remain relatively low. Members conclude that the variety and feedback dimensions of job enrichment are sufficiently implemented but that the discretion component needs improvement. They decide to put more effort into supervisory training and to ask OD practitioners to provide online counseling and coaching to supervisors about their leadership styles.
After four more months, a third round of implementation feedback occurs. The data now show that satisfaction and performance are significantly higher than in the first round of feedback and moderately higher than in the second round. The data also show that discretion, variety, and feedback are all high, suggesting that the job enrichment intervention has been successfully implemented. Now evaluation feedback is used to assess the overall effectiveness of the program.
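The rounds of implementation feedback described above amount to a simple tracking routine: measure each intervention dimension repeatedly, flag the ones that lag, and move to evaluation feedback only once all are in place. The following sketch illustrates that logic; the dimension scores (survey-style averages) and the "sufficiently implemented" cutoff are hypothetical assumptions for illustration, not figures from the lecture.

```python
# Illustrative sketch: tracking implementation feedback rounds for a
# job enrichment intervention. Scores and the cutoff are hypothetical.

SUFFICIENT = 4.0  # assumed threshold for "sufficiently implemented"

# One snapshot per feedback round: perceived level of each job
# enrichment feature, as might come from questionnaires (1-5 scale).
rounds = [
    {"discretion": 2.1, "variety": 4.2, "feedback": 2.3},  # round 1
    {"discretion": 2.8, "variety": 4.4, "feedback": 4.1},  # round 2
    {"discretion": 4.3, "variety": 4.5, "feedback": 4.4},  # round 3
]

def implementation_status(snapshot, cutoff=SUFFICIENT):
    """Classify each intervention dimension in one feedback round."""
    return {dim: ("in place" if score >= cutoff else "needs work")
            for dim, score in snapshot.items()}

def ready_for_evaluation(snapshot, cutoff=SUFFICIENT):
    """Evaluation feedback begins only when all dimensions are in place."""
    return all(score >= cutoff for score in snapshot.values())

for i, snap in enumerate(rounds, start=1):
    print(f"Round {i}: {implementation_status(snap)}")
print("Begin evaluation feedback:", ready_for_evaluation(rounds[-1]))
```

In this sketch, rounds 1 and 2 would flag discretion as "needs work" (mirroring the supervisory-style problem in the example), and only round 3 clears the way for evaluation feedback.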
The evaluation feedback includes all the data from the satisfaction and performance measures used in the implementation feedback. Because both the immediate and broader effects of the intervention are being evaluated, additional outcomes are examined, such as employee absenteeism, maintenance costs, and reactions of other organizational units not included in job enrichment. The full array of evaluation data might suggest that, one year after the start of implementation, the job enrichment program is having the expected effects and thus should be continued and made more permanent.
Measurement:
Providing useful implementation and evaluation feedback involves two activities: selecting the appropriate variables and designing good measures.
Selecting Variables:
Ideally, the variables measured in OD evaluation should derive from the theory or conceptual model underlying the intervention. The model should incorporate the key features of the intervention as well as its expected results. The general diagnostic models described in Chapters 5 and 6 meet these criteria, as do the more specific models introduced in Chapters 12 through 20. For example, the job-level diagnostic model described in Chapter 6 proposes several major features of work: task variety, feedback, and autonomy. The theory argues that high levels of these elements can be expected to result in high levels of work quality and satisfaction. In addition, as we shall see in Chapter 16, the strength of this relationship varies with the degree of employee growth need: the higher the need, the more that job enrichment produces positive results.
The job-level diagnostic model suggests a number of measurement variables for implementation and evaluation feedback. Whether the intervention is being implemented could be assessed by determining how many job descriptions have been rewritten to include more responsibility or how many organization members have received cross-training in other job skills. Evaluation of the immediate and long-term impact of job enrichment would include measures of employee performance and satisfaction over time.
Again, these measures would likely be included in the initial diagnosis, when the company's problems or areas for improvement are discovered.
Measuring both intervention and outcome variables is necessary for implementation and evaluation feedback. Unfortunately, there has been a tendency in OD to measure only outcome variables while neglecting intervention variables altogether. It generally is assumed that the intervention has been implemented, and attention, therefore, is directed to its impact on such organizational outcomes as performance, absenteeism, and satisfaction. As argued earlier, implementing OD interventions generally takes considerable time and learning. It must be empirically determined that the intervention has been implemented; it cannot simply be assumed. Implementation feedback serves this purpose, guiding the implementation process and helping to interpret outcome data. Outcome measures are ambiguous without knowledge of how well the intervention has been implemented. For example, a negligible change in measures of performance and satisfaction could mean that the wrong intervention has been chosen, that the correct intervention has not been implemented effectively, or that the wrong variables have been measured. Measurement of the intervention variables helps determine the correct interpretation of outcome measures.
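The interpretive logic above can be expressed as a small decision rule: a flat outcome result is read differently depending on whether the intervention variables show the change was actually implemented. The sketch below is purely illustrative; the function name, numeric cutoffs, and result strings are hypothetical assumptions, not part of the lecture.

```python
# Illustrative decision rule: interpreting an outcome result in light
# of how well the intervention was implemented. Cutoffs are assumed.

def interpret(outcome_change, implementation_level,
              implemented_cutoff=4.0, meaningful_change=0.5):
    """Suggest how to read an outcome result.

    outcome_change: shift in an outcome measure (e.g., satisfaction).
    implementation_level: measured level of the intervention variables.
    """
    if outcome_change >= meaningful_change:
        return "intervention producing expected results"
    if implementation_level < implemented_cutoff:
        # The change variables never moved: implementation, not the
        # choice of intervention, is the first suspect.
        return "intervention not yet implemented effectively"
    # Implemented but outcomes flat: the diagnosis, the intervention
    # choice, or the outcome measures themselves may be wrong.
    return "re-diagnose: wrong intervention or wrong variables measured"

print(interpret(0.1, 2.0))  # flat outcomes, low implementation
print(interpret(0.1, 4.5))  # flat outcomes despite implementation
```

The rule makes explicit why outcome measures alone are ambiguous: the same negligible outcome change routes to two different conclusions depending on the intervention measurement.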
As suggested above, the choice of intervention variables to measure should derive from the conceptual framework underlying the OD intervention. OD research and theory increasingly have come to identify specific organizational changes needed to implement particular interventions. These variables should guide not only implementation of the intervention but also choices about what change variables to measure for evaluative purposes. Additional sources of knowledge about intervention variables can be found in the numerous references at the end of each of the intervention chapters in this book and in several of the books in the Wiley Series on Organizational Assessment and Change.
The choice of what outcome variables to measure also should be dictated by intervention theory, which specifies the kinds of results that can be expected from particular change programs. Again, the material in this book and elsewhere identifies numerous outcome measures, such as job satisfaction, intrinsic motivation, organizational commitment, absenteeism, turnover, and productivity.
Historically, OD assessment has focused on attitudinal outcomes, such as job satisfaction, while neglecting hard measures, such as performance. Increasingly, however, managers and researchers are calling for the development of behavioral measures of OD outcomes. Managers are interested primarily in applying OD to change work-related behaviors that involve joining, remaining, and producing at work, and are assessing OD more frequently in terms of such bottom-line results. Macy and Mirvis have done extensive research to develop a standardized set of behavioral outcomes for assessing and comparing intervention results.