Chapter 2
Overview of 50 Software Best Practices
Since not everyone reads books from cover to cover, it seems useful to provide a concise overview of software engineering best practices before expanding the topics later in the book. As it happens, this section was originally created for a lawsuit, which was later settled. That material on best practices has been updated here to include recent changes in software engineering technologies.

These best-practice discussions focus on projects in the 10,000-function point range. The reason for this is pragmatic. This is the size range where delays and cancellations begin to outnumber successful completions of projects.
The best practices discussed in this book cover a timeline that can span 30 years or more. Software development of large applications can take five years. Deploying and customizing such applications can take another year. Once deployed, large applications have extremely long lives and can be used for 25 years or more.
Over the 25-year usage period, numerous enhancements and defect repairs will occur. There may also be occasional "renovation" or restructuring of the application, changing file formats, and perhaps converting the source code to a newer language or languages.
The set of best practices discussed here spans the entire life cycle from the day a project starts until the day that the application is withdrawn from service. The topics include, but are not limited to, the best practices for the 50 subjects discussed here:
1. Minimizing harm from layoffs and downsizing
2. Motivation and morale of technical staff
3. Motivation and morale of managers and executives
4. Selection and hiring of software personnel
5. Appraisals and career planning for software personnel
6. Early sizing and scope control of software applications
7. Outsourcing software applications
8. Using contractors and management consultants
9. Selecting software methods, tools, and practices
10. Certifying methods, tools, and practices
11. Requirements of software applications
12. User involvement in software projects
13. Executive management support of software applications
14. Software architecture and design
15. Software project planning
16. Software project cost estimating
17. Software project risk analysis
18. Software project value analysis
19. Canceling or turning around troubled projects
20. Software project organization structures
21. Training managers of software projects
22. Training software technical personnel
23. Use of software specialists
24. Certification of software engineers, specialists, and managers
25. Communication during software projects
26. Software reusability
27. Certification of reusable materials
28. Programming or coding
29. Software project governance
30. Software project measurements and metrics
31. Software benchmarks and baselines
32. Software project milestone and cost tracking
33. Software change control before release
34. Configuration control
35. Software quality assurance
36. Inspections and static analysis
37. Testing and test library control
38. Software security analysis and control
39. Software performance analysis
40. International software standards
41. Protecting intellectual property in software
42. Protection against viruses, spyware, and hacking
43. Software deployment and customization
44. Training clients or users of software applications
45. Customer support after deployment of software applications
46. Software warranties and recalls
47. Software change management after release
48. Software maintenance and enhancement
49. Updates and releases of software applications
50. Terminating or withdrawing legacy applications
Following are summary discussions of current best practices for these 50 managerial and technical areas.
1. Best Practices for Minimizing Harm from Layoffs and Downsizing
As this book is written, the global economy is rapidly descending into the worst recession since the Great Depression. As a result, unprecedented numbers of layoffs are occurring. Even worse, a number of technology companies will probably run out of funds and declare bankruptcy.
Observations during previous economic downturns show that companies often make serious mistakes when handling layoffs and downsizing operations. First, since the selection of personnel to be removed is usually made by managers and executives, technical personnel are let go in larger numbers than managerial personnel, which degrades operational performance.

Second, administrative and support personnel such as quality assurance, technical writers, metrics and measurement specialists, secretarial support, program librarians, and the like are usually let go before software engineers and technical personnel. As a result, the remaining technical personnel must take on a variety of administrative tasks for which they are neither trained nor qualified, which also degrades operational performance.
The results of severe layoffs and downsizing usually show up in reduced productivity and quality for several years. While there are no perfect methods for dealing with large-scale reductions in personnel, some approaches can minimize the harm that usually follows:
■ Bring in outplacement services to help employees create résumés and also to find other jobs, if available.

■ For large corporations with multiple locations, be sure to post available job openings throughout the company. The author once observed a large company with two divisions co-located in the same building where one division was having layoffs and the other was hiring, but neither side attempted any coordination.

■ If yours is a U.S. company that employs offshore workers brought into the United States on temporary visas, it would be extremely unwise during the recession to lay off employees who are U.S. citizens at higher rates than overseas employees. It is even worse to lobby for or to bring in more overseas employees while laying off U.S. citizens. This has been done by several major companies such as Microsoft and Intel, and it results in severe employee morale loss, to say nothing of very bad publicity. It may also result in possible investigations by state and federal officials.

■ Analyze and prioritize the applications that are under development and in the backlog, and attempt to cut those applications whose ROIs are marginal.

■ Analyze maintenance of existing legacy applications and consider ways of reducing maintenance staff without degrading security or operational performance. It may be that renovation, restructuring, removal of error-prone modules, and other quality improvements can reduce maintenance staffing but not degrade operational performance.

■ Calculate the staffing patterns needed to handle the applications in the backlog and under development after low-ROI applications have been purged.

■ As cuts occur, consider raising the span of control, or the number of technical personnel reporting to one manager. Raising the span of control from an average of about 8 technical employees per manager to 12 technical employees per manager is often feasible. In fact, fairly large spans of control may even improve performance by reducing contention and disputes among the managers of large projects.

■ Do not attempt to skimp on inspections, static analysis, testing, and quality control activities. High quality yields better performance and smaller teams, while low quality results in schedule delays, cost overruns, and other problems that absorb effort with little positive return.

■ Carefully analyze ratios of technical personnel to specialists such as technical writing, quality assurance, configuration control, and other personnel. Eliminating specialists in significantly larger numbers than software engineers will degrade the operational performance of the software engineers.
In a severe recession, some of the departing personnel may be key employees with substantial information on products, inventions, and intellectual property. While most companies have nondisclosure agreements in place for protection, very few attempt to create an inventory of the knowledge that might be departing with key personnel. If layoffs are handled in a courteous and professional manner, most employees would be glad to leave behind information on key topics. This can be done using questionnaires or "knowledge" interviews. But if the layoffs are unprofessional or callous to employees, don't expect employees to leave much useful information behind.
In a few cases where there is a complete closure of a research facility, some corporations allow departing employees to acquire rights to intellectual properties such as copyrights and even to patents filed by the employees. The idea is that some employees may form startup companies and thereby continue to make progress on useful ideas that otherwise would drop from view.
As cuts in employment are being made, consider the typical work patterns of software organizations. For a staff that totals 1000 personnel, usually about half are in technical work such as software engineering, 30 percent are specialists of various kinds, and 20 percent are management and staff personnel. However, more time and effort are usually spent finding and fixing bugs than on any other measurable activity. After downsizing, it could be advantageous to adopt technologies that improve quality, which should allow more productive work from smaller staffs. Therefore topics such as inspections of requirements and design, code inspections, Six Sigma, static analysis, automated testing, and methods that emphasize quality control such as the Team Software Process (TSP) may allow the reduced staff that remains to achieve higher productivity than before.
A study of work patterns by the author in 2005 showed that in the course of a normal 220-day working year, only about 47 days were actually spent on developing the planned features of new applications by software engineering technical personnel. About 70 days were spent on testing and bug repairs. (The rest of the year was spent on meetings, administrative tasks, and dealing with changing requirements.)
Therefore improving quality via a combination of defect prevention and more effective defect removal (i.e., inspections and static analysis before testing, automated testing, etc.) could allow smaller staffs to perform the same work as larger staffs. If it were possible to cut defect removal down to 20 days per year instead of 70 days, that would have the effect of doubling the time available for new development efforts.
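The arithmetic behind that claim can be sketched in a few lines (a minimal illustration; the 220-day year and the 47-, 70-, and 20-day figures come from the study above, and the assumption that every saved defect-removal day becomes a development day is a simplification):

    # Illustrative arithmetic: reallocating defect-removal days to new development.
    WORK_YEAR_DAYS = 220        # working days per year (from the 2005 study)
    dev_days_now = 47           # days spent building planned new features
    defect_days_now = 70        # days spent on testing and bug repairs
    defect_days_target = 20     # hypothetical target after better defect prevention/removal

    # Simplifying assumption: every defect-removal day saved becomes a development day.
    dev_days_after = dev_days_now + (defect_days_now - defect_days_target)
    print(f"Development days per year: {dev_days_now} -> {dev_days_after} "
          f"(about {dev_days_after / dev_days_now:.1f} times the current figure)")
    print(f"Share of the {WORK_YEAR_DAYS}-day year spent on new features: "
          f"{dev_days_after / WORK_YEAR_DAYS:.0%}")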
Usually one of the first big cuts during a recession is to reduce customer support, with severe consequences in terms of customer satisfaction. Here, too, higher quality prior to delivery would allow smaller customer support teams to handle more customers. Since customer support tends to be heavily focused on defect issues, it can be hypothesized that every reduction of 220 defects in a delivered application could reduce the number of customer support personnel by one, but would not degrade response time or time to achieve satisfactory solutions. This is based on the assumption that customer support personnel speak to about 30 customers per day, and each released defect is encountered by 30 customers. Therefore each released defect occupies one day for one customer support staff member, and there are 220 working days per year.
Another possible solution would be to renovate legacy applications rather than build new replacements. Renovation and the removal of error-prone modules, plus running static analysis tools, restructuring highly complex code, and perhaps converting the code to a newer language, might stretch out the useful lives of legacy applications by more than five years and reduce maintenance staffing by about one person for every additional 120 bugs removed prior to deployment. This is based on the assumption that maintenance programmers typically fix about 10 bugs per month (severity 1 and 2 bugs, that is).
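The two staffing rules of thumb above reduce to a short back-of-the-envelope calculation (a hedged sketch; the ratios of one support person per 220 released defects and one maintenance programmer per 120 bugs per year come from the text, while the example defect count is invented):

    # Rough staffing impact of defects removed before release, using the text's
    # rules of thumb: one customer-support person per 220 released defects per
    # year, and one maintenance programmer per 120 bugs per year (10 per month).
    DEFECTS_PER_SUPPORT_PERSON_YEAR = 220
    BUGS_PER_MAINTAINER_YEAR = 10 * 12

    def staff_savings(defects_removed_before_release: int) -> tuple[float, float]:
        """Return (support staff-years saved, maintenance staff-years saved)."""
        support = defects_removed_before_release / DEFECTS_PER_SUPPORT_PERSON_YEAR
        maintenance = defects_removed_before_release / BUGS_PER_MAINTAINER_YEAR
        return support, maintenance

    # Hypothetical example: better pretest defect removal keeps 1,100 defects
    # out of the delivered application.
    support, maintenance = staff_savings(1100)
    print(f"~{support:.1f} support staff and ~{maintenance:.1f} maintenance "
          f"programmers freed per year")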
The bottom line is that if U.S. quality control were better than it is today, smaller staffs could actually accomplish more new development than current staffs. Too many days are being wasted on bug removal for defects that could either be prevented or removed prior to testing.
A combination of defect prevention and effective defect removal via inspections, static analysis, and automated and conventional testing could probably reduce development staffing by 25 percent, maintenance staffing by 50 percent, and customer support staffing by about 20 percent without any reduction in operational efficiency, customer satisfaction, or productivity. Indeed, development schedules would improve, because they usually slip more during testing than at any other time, due to excessive defects. As the economy sinks into recession, it is important to remember not only that "quality is free," as stated by Phil Crosby, but that it also offers significant economic benefits for software.
One problem that has existed for many years is that few solid economic studies have been performed and published that convincingly demonstrate the value of software quality. A key reason for this is that the two most common metrics for quality, lines of code and cost per defect, are flawed and cannot deal with economic topics. Using defect removal costs per function point is a better choice, but these metrics need to be deployed in organizations that actually accumulate effort, cost, and quality data simultaneously. From studies performed by the author, combinations of defect prevention and defect removal methods that lower defect potentials and raise removal efficiency above 95 percent simultaneously benefit development costs, development schedules, maintenance costs, and customer support costs.
Over and above downsizing, many companies are starting to enforce reduced work months or to require unpaid time off on the part of employees in order to keep cash flow positive. Reduced work periods and reduced compensation for all employees are probably less harmful than cutting staff and keeping compensation constant for the remainder. However, caution is needed, because if the number of required days off exceeds certain thresholds, employees may switch from being legally recognized as full-time workers to becoming part-time workers. If this occurs, then their medical benefits, pensions, and other corporate perks might be terminated. Since policies vary from company to company and state to state, there is no general rule, but it is a problem to be reckoned with.
The information technology employees of many state governments, and some municipal governments, have long had benefits that are no longer offered by corporations. These include payment for sick days not used, defined pension programs, accumulating vacation days from year to year, payment for unused vacation days at retirement, and zero-payment health benefits. As state governments face mounting deficits, these extraordinary benefits are likely to disappear in the future.
There are no perfect solutions for downsizing and laying off personnel, but cutting specialists and administrative personnel in large numbers may cause unexpected problems. Also, better quality control and better maintenance or renovation can allow smaller remaining staffs to handle larger workloads without excessive overtime, loss of operational efficiency, or degradation of customer satisfaction.
2. Best Practices for Motivation and Morale of Technical Staff
Many software engineers and other specialists such as quality assurance and technical writers are often high-energy, self-motivated individuals. Psychological studies of software personnel do indicate some interesting phenomena, such as high divorce rates and a tendency toward introversion.

The nature of software development and maintenance work tends to result in long hours and sometimes interruptions even in the middle of the night. That being said, a number of factors are useful in keeping technical staff morale at high levels.
Studies of exit interviews of software engineers at major corporations indicate two distressing problems: (1) the best personnel leave in the largest numbers, and (2) the most common reason stated for voluntary attrition is "I don't like working for bad management."

Thus, some sophisticated companies such as IBM have experimented with reverse appraisals, where employees evaluate management performance, as well as normal appraisals, where employee performance is evaluated.
Following are some topics noted in a number of leading companies where morale is fairly high, such as IBM, Microsoft, Google, and the like:
■ Emphasize doing things right, rather than just working long hours to make artificial and probably impossible schedules.

■ Allow and support some personal projects if the individuals feel that the projects are valuable.

■ Ensure that marginal or poor managers are weeded out, because poor management drives out good software engineers in larger numbers than any other factor.

■ Ensure that appraisals are fair, honest, and can be appealed if employees believe that they were incorrectly downgraded for some reason.

■ Have occasional breakfast or lunch meetings between executives and technical staff members, so that topics of mutual interest can be discussed in an open and nonthreatening fashion.

■ Have a formal appeal or "open door" program so that technical employees who feel that they have not been treated fairly can appeal to higher-level management. An important corollary of such a program is "no reprisals." That is, no punishments will be levied against personnel who file complaints.

■ Have occasional awards for outstanding work. But recall that many small awards such as "dinners for two" or days off are likely to be more effective than a few large awards. And don't reward productivity or schedules achieved at the expense of quality.

■ As business or economic situations change, keep all technical personnel apprised of what is happening. They will know if a company is in financial distress or about to merge, so formal meetings to keep personnel up to date are valuable.

■ Suggestion programs that actually evaluate suggestions and take actions are often useful. But suggestion programs that result in no actions are harmful.

■ Surprisingly, some overtime tends to raise morale for psychological reasons. Overtime makes projects seem to be valuable, or else they would not require overtime. But excessive amounts of overtime (i.e., 60-hour weeks) are harmful for periods longer than a couple of weeks.
One complex issue is that software engineers in most companies are viewed as members of "professional staffs" rather than hourly workers. Unless software engineers and technical workers are members of unions, they normally do not receive any overtime pay regardless of the hours worked. This issue has legal implications that are outside the scope of this book.
Training and educational opportunities pay off in better morale and also in better performance. Therefore setting aside at least ten days a year for education, either internally or at external events, would be beneficial. It is interesting that companies with ten or more days of annual training have higher productivity rates than companies with no training.
Other factors besides these can affect employee morale, but these give the general idea. Fairness, communication, and a chance to do innovative work are all factors that raise the morale of software engineering personnel.

As the global economy slides into a serious recession, job opportunities will become scarce even for top performers. No doubt benefits will erode as well, as companies scramble to stay solvent. The recession and economic crisis may well introduce new factors not yet understood.
3. Best Practices for Motivation and Morale of Managers and Executives
The hundred-year period between 1908 and the financial crisis and recession of 2008 may later be viewed by economic historians as the "golden age" of executive compensation and benefits.

The global financial crisis and the recession, followed by attempts to bail out industries and companies that are in severe jeopardy, have thrown a spotlight on a troubling topic: the extraordinary salaries, bonuses, and retirement packages for top executives.
Not only do top executives in many industries have salaries of several million dollars per year, but they also have bonuses of millions of dollars, stock options worth millions of dollars, pension plans worth millions of dollars, and "golden parachutes" with lifetime benefits and health-care packages worth millions of dollars.

Other benefits include use of corporate jets and limos, use of corporate boxes at major sports stadiums, health-club memberships, golf club memberships, and scores of other "perks."
Theoretically these benefits are paid because top executives are supposed to maximize the value of companies for shareholders, expand business opportunities, and guide corporations to successful business opportunities.

But the combination of the financial meltdown and the global recession, coupled with numerous instances of executive fraud and malpractice (as shown by Enron), will probably put an end to unlimited compensation and benefits packages. In the United States, at least, companies receiving federal "bail out" money will have limits on executive compensation. Other companies are also reconsidering executive compensation packages in light of the global recession, where thousands of companies are losing money and drifting toward bankruptcy.
From 2008 onward, executive compensation packages have been under a public spotlight and probably will be based much more closely on corporate profitability and business success than in the past. Hopefully, in the wake of the financial meltdown, business decisions will be more carefully thought out, and long-range consequences analyzed much more carefully than has been the practice in the past.
Below the levels of the chief executives and the senior vice presidents are thousands of first-, second-, and third-line managers, directors of groups, and other members of corporate management.

At these lower levels of management, compensation packages are similar to those of the software engineers and technical staff. In fact, in some companies the top-ranked software engineers have compensation packages that pay more than first- and some second-line managers receive, which is as it should be.
The skill sets of successful managers in software applications are a combination of management capabilities and technical capabilities. Many software managers started as software engineers but moved into management due to problem-solving and leadership abilities.
A delicate problem should be discussed. If the span of control, or the number of technical workers reporting to a manager, is close to the national average of eight employees per manager, then it is hard to find qualified managers for every available job. In other words, the ordinary span of control puts about 12.5 percent of workers into management positions, but less than 8 percent are going to be really good at it.

Raising the span of control and converting the less-qualified managers into staff or technical workers might have merit. A frequent objection to this policy is: how can managers know the performance of so many employees? However, under current span of control levels, managers actually spend more time in meetings with other managers than they do with their own people.
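A quick calculation makes the span-of-control arithmetic concrete (a minimal sketch; the spans of 8 and 12 come from the discussion above, and the 1,000-person technical staff is simply an example size):

    # Managers needed at different spans of control (direct reports per manager).
    def managers_needed(technical_staff: int, span_of_control: int) -> int:
        """Managers required if each manager has `span_of_control` direct reports."""
        return -(-technical_staff // span_of_control)  # ceiling division

    technical_staff = 1000  # example organization
    for span in (8, 12):
        m = managers_needed(technical_staff, span)
        print(f"Span of {span}: {m} managers, i.e. {m / technical_staff:.1%} "
              f"as many managers as technical workers")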
As of 2009, software project management is one of the toughest kinds of management work. Software project managers are charged with meeting imposed schedules that may be impossible, with containing costs that may have been set too low, and with managing personnel who are often high-energy and innovative.

When software projects fail or run late, the managers receive the bulk of the blame for the situation, even though some of the problems were due to higher-level schedule constraints or to impossible client demands. It is an unfortunate fact that software project managers have more failures and fewer successes than hardware engineering managers, marketing managers, or other managers.
The main issues facing software project management include schedule constraints, cost constraints, creeping requirements, quality control, progress tracking, and personnel issues.
Scheduling software projects with accuracy is notoriously difficult, and indeed a majority of software projects run late, with the magnitude of the delays correlating with application size. Therefore management morale tends to suffer because of constant schedule pressures. One way of minimizing this issue is to examine the schedules of similar projects by using historical data. If in-house historical data is not available, then data can be acquired from external sources such as the International Software Benchmarking Standards Group (ISBSG) in Australia. Careful work breakdown structures are also beneficial. The point is, matching project schedules with reality affects management morale. Since costs and schedules are closely linked, the same is true for matching costs to reality.
One reason costs and schedules for software projects tend to exceed initial estimates and budgets is creeping requirements. Measurements using function points derived from requirements and specifications find the average rate of creep is about 2 percent per calendar month. This fact can be factored into initial estimates once it is understood. In any case, significant changes in requirements need to trigger fresh schedule and cost estimates. Failure to do this leads to severe overruns, damages management credibility, and of course low credibility damages management morale.
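The 2 percent monthly creep rate can be folded directly into an early size estimate, as sketched below (illustrative only; the 2 percent figure comes from the text, compounding it monthly is my own assumption, and the 10,000-function point size and 18-month schedule are invented examples):

    # Project an initial function point count forward under ~2% monthly
    # requirements creep (compounded each calendar month).
    def size_with_creep(initial_fp: float, months: int, monthly_creep: float = 0.02) -> float:
        return initial_fp * (1 + monthly_creep) ** months

    initial_fp = 10_000     # example size at the end of the requirements phase
    schedule_months = 18    # example remaining development schedule
    print(f"Projected size at delivery: {size_with_creep(initial_fp, schedule_months):,.0f} "
          f"function points")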
Most software projects run into schedule problems during testing due to excessive defects. Therefore upstream defect prevention and pretest defect removal activities such as inspections and static analysis are effective therapies against schedule and cost overruns. Unfortunately, not many managers know this, and far too many tend to skimp on quality control. However, if quality control is good, morale is also likely to be good, and the project will have a good chance of staying on target. Therefore excellence in quality control tends to benefit both managerial and professional staff morale.
Tracking software progress and reporting on problems is perhaps the weakest link in software project management. In many lawsuits for breach of contract, depositions reveal that serious problems were known to exist by the technical staff and first-line managers, but were not revealed to higher-level management or to clients until it was too late to fix them. The basic rule of project tracking should be: "no surprises." Problems seldom go away by themselves, so once they are known, report them and try to solve them. This will benefit both employee and management morale much more than sweeping problems under the rug.
Personnel issues are also important for software projects. Since many software engineers are self-motivated, have high energy levels, and are fairly innovative, management by example is better than management by decree. Managers need to be fair and consistent with appraisals and to ensure that personnel are kept informed of all issues arriving from higher up in the company, such as possible layoffs or sales of business units.
Unfortunately, software management morale is closely linked to software project successes, and as of 2009, far too many projects fail.

Basing plans and estimates on historical data and benchmarks rather than on client demands would also improve management morale. Historical data is harder to overturn than estimates.
4. Best Practices for Selection and Hiring of Software Personnel
As the global economy slides into a severe recession, many companies are downsizing or even going out of business. As a result, it is a buyers' market for those companies that are doing well and expanding. At no time in history have so many qualified software personnel been on the job market at the same time as at the end of 2008 and during 2009.

It is still important for companies to do background checks of all applicants, since false résumés are not uncommon and are likely to increase due to the recession. Also, multiple interviews with both management and technical staff are beneficial to see how applicants might fit into teams and handle upcoming projects.
If entry-level personnel are being considered for their first jobs out of school, some form of aptitude testing is often used. Some companies also use psychological interviews with industrial psychologists. However, these methods have ambiguous results.

What seems to give the best results is a combination of multiple interviews and a startup evaluation period of perhaps six months. Successful performance during the evaluation period is a requirement for joining the group on a full-time regular basis.
5. Best Practices for Appraisals and Career Planning for Software Personnel
After about five years on the job, software engineers tend to reach a major decision on their career path: either the software engineer wants to stay in technical work, or he or she wants to move into management.

Technical career paths can be intellectually satisfying and also have good compensation plans in many leading companies. Positions such as "senior software engineer" or "corporate fellow" or "advisory architect" are not uncommon and are well respected. This is especially true for corporations such as IBM that have research divisions where top-gun engineers can do very innovative projects of their own choosing.
While some managers do continue to perform technical work, their increasing responsibilities in the areas of schedule management, cost management, quality management, and personnel management obviously reduce the amount of time available for technical work.
Software engineering has several different career paths, with development programming, maintenance programming, business analysis, systems analysis, quality assurance, architecture, and testing all moving in somewhat different directions.

These various specialist occupations bring up the fact that software engineering is not yet a full profession with specialization that is recognized by state licensing boards. Many kinds of voluntary specialization are available in topics such as testing and quality assurance, but these have no legal standing.
Large corporations can employ as many as 90 different kinds of specialists in their software organizations, including technical writers, software quality assurance specialists, metrics specialists, integration specialists, configuration control specialists, database administrators, program librarians, and many more. However, these specialist occupations vary from company to company and have no standard training or even standard definitions.

Not only are there no standard job titles, but also many companies use a generic title such as "member of the technical staff," which can encompass a dozen specialties or more.
In a study of software specialties in large companies, it was common to find that the human resource groups had no idea of what specialties were employed. It was necessary to go on site and interview managers and technical workers to find out this basic information.
In the past, one aspect of career planning for the best technical personnel and managers included "job hopping" from one company to another. Internal policies within many companies limited pay raises, but switching to another company could bypass those limits. However, as the economy retracts, this method is becoming difficult. Many companies now have hiring freezes and are reducing staffs rather than expanding. Indeed, some may enter bankruptcy.
6. Best Practices for Early Sizing and Scope Control of Software Applications
For many years, predicting the size of software applications was difficult and very inaccurate. Calculating size by using function-point metrics had to be delayed until requirements were known, but by then it was too late for the initial software cost estimates and schedule plans. Size in terms of source code could only be guessed at by considering the sizes of similar applications, if any existed and their sizes were known.
However, in 2008 and 2009, new forms of size analysis became available. Now that the International Software Benchmarking Standards Group (ISBSG) has reached a critical mass with perhaps 5,000 software applications, it is possible to acquire reliable size data for many kinds of software applications from the ISBSG.
Since many applications are quite similar to existing applications, acquiring size data from ISBSG is becoming a standard early-phase activity. This data also includes schedule and cost information, so it is even more valuable than size alone. However, the ISBSG data supports function point metrics rather than lines of code. Since function points are a best practice and the lines of code approach is malpractice, this is not a bad situation, but it will reduce the use of ISBSG benchmarks by companies still locked into LOC metrics.
For novel software or for applications without representation in the ISBSG data, several forms of high-speed sizing are now available. A new method based on pattern matching can provide fairly good approximations of size in terms of function points, source code, and even other items such as pages of specifications. This method also predicts the rate at which requirements are likely to grow during development, which has long been a weak link in software sizing.
Other forms of sizing include new kinds of function point approximations, or "light" function point analysis, which can predict function point size in a matter of a few minutes, as opposed to normal counting speeds of only about 400 function points per day.
Early sizing is a necessary precursor to accurate estimation and also a precursor to risk analysis. Many kinds of risks are directly proportional to application size, so the earlier the size is known, the more complete the risk analysis.
For small applications in the 1000-function point range, all features are usually developed in a single release. However, for major applications in the 10,000 to 100,000-function point range, multiple releases are the norm.

(For small projects using the Agile approach, individual features or functions are developed in short intervals called sprints. These are usually in the 100 to 200-function point range.)
Because schedules and costs are directly proportional to application size, major systems are usually segmented into multiple releases at 12- to 18-month intervals. Knowing the overall size, and then the sizes of individual functions and features, makes it possible to plan an effective release strategy that may span three to four consecutive releases. By knowing the size of each release, accurate schedule and cost estimating becomes easier to perform.
Early sizing using pattern matching can be done before requirements are known, because this method is based on external descriptions of a software application and then on matching the description against the "patterns" of other similar applications.
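A minimal sketch of what such pattern matching might look like in practice follows (this is my own illustration rather than the actual commercial method or any ISBSG tool; it assumes a small local catalog of previously measured applications described by a few taxonomy attributes and averages the sizes of the closest matches):

    # Hedged sketch: approximate the size of a new application by matching its
    # external description (taxonomy attributes) against already-sized applications.
    from statistics import mean

    # Hypothetical local catalog of measured applications.
    catalog = [
        {"type": "information systems", "platform": "web",    "scope": "departmental", "fp": 2_500},
        {"type": "information systems", "platform": "web",    "scope": "enterprise",   "fp": 12_000},
        {"type": "embedded",            "platform": "device", "scope": "product",      "fp": 4_000},
        {"type": "systems software",    "platform": "server", "scope": "product",      "fp": 9_500},
    ]

    def approximate_size(description: dict, top_n: int = 2) -> float:
        """Average the sizes of the catalog entries sharing the most attributes."""
        ranked = sorted(
            catalog,
            key=lambda app: sum(description.get(k) == app[k] for k in ("type", "platform", "scope")),
            reverse=True,
        )
        return mean(app["fp"] for app in ranked[:top_n])

    new_app = {"type": "information systems", "platform": "web", "scope": "enterprise"}
    print(f"Approximate size: {approximate_size(new_app):,.0f} function points")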
The high-speed function point methods are offset in time and need at least partial requirements to operate successfully.
The best practice for early sizing is to use one or more (or all) of the high-speed sizing approaches before committing serious funds to a software application. If the size is large enough so that risks are likely to be severe, then corrective actions can be applied before starting development, when there is adequate time available.
Two innovative methods for software scope control have recently surfaced and seem to be effective. One is called Northern Scope because it originated in Finland. The other is called Southern Scope because it originated in Australia. The two are similar in that they attempt to size applications early and to appoint a formal scope manager to monitor the growth of possible new features. By constantly focusing on scope and growth issues, software projects using these methods have more success with their initial releases because, rather than stuffing too many late features into the first release, several follow-on releases are identified and populated early.
These new methods of scope control have actually led to the creation of a new position called scope manager. This new position joins several other new jobs that have emerged within the past few years, such as webmaster and scrum master.
Sizing has been improving in recent years, and the combination of ISBSG benchmarks plus new high-speed sizing methods shows promise of greater improvements in the future.
7. Best Practices for Outsourcing Software Applications
For the past 20 years, U.S. corporations have been dealing with a major business issue: should software applications be built internally, or turned over to a contractor or outsourcer for development? Indeed, the issue is bigger than individual applications and can encompass all software development operations, all software maintenance operations, all customer support operations, or the entire software organization lock, stock, and barrel.
The need for best practices in outsource agreements is demonstrated by the fact that within about two years, perhaps 25 percent of outsource agreements will have developed some friction between the clients and the outsource vendors. Although results vary from client to client and contractor to contractor, the overall prognosis of outsourcing within the United States approximates the distribution shown in Table 2-1, which is derived from observations among the author's clients.
Software development and maintenance are expensive operations and have become major cost components of corporate budgets. It is not uncommon for software personnel to exceed 5 percent of total corporate employment, and for the software and computing budgets to exceed 10 percent of annual corporate expenditures.
TABLE 2-1  Approximate Distribution of U.S. Outsource Results After 24 Months

    Results                                                 Percent of Outsource Arrangements
    Both parties generally satisfied                        70%
    Some dissatisfaction by client or vendor                15%
    Dissolution of agreement planned                        10%
    Litigation between client and contractor probable        4%
    Litigation between client and contractor in progress     1%
Using the function point metric as the basis of comparison, most large companies now own more than 2.5 million function points as the total volume of software in their mainframe portfolios, and some very large companies such as AT&T and IBM each own well over 10 million function points.

As an example of the more or less unplanned growth of software and software personnel in modern business, some of the larger banks and insurance companies now have software staffs that number in the thousands. In fact, software and computing technical personnel may compose the largest single occupation group within many companies whose core business is far removed from software.
As software operations become larger, more expensive, and more widespread, the executives of many large corporations are asking a fundamental question: Should software be part of our core business or not?

This is not a simple question to answer, and the exploration of some of the possibilities is the purpose of this section. You would probably want to make software a key component of your core business operations under these conditions:
■ You sell products that depend upon your own proprietary software.

■ Your software is currently giving your company significant competitive advantage.

■ Your company's software development and maintenance effectiveness are far better than your competitors'.
You might do well to consider outsourcing of software if its relationship to your core business is along the following lines:

■ Software is primarily used for corporate operations, not as a product.

■ Your software is not particularly advantageous compared with your competitors'.

■ Your development and maintenance effectiveness are marginal.
Once you determine that outsourcing either specific applications or portions of your software operations is a good match to your business plans, some of the topics that need to be included in outsource agreements include:

■ The sizes of software contract deliverables must be determined during negotiations, preferably using function points.

■ Cost and schedule estimation for applications must be formal and complete.

■ Creeping user requirements must be dealt with in the contract in a way that is satisfactory to both parties.

■ Some form of independent assessment of terms and progress should be included.

■ Anticipated quality levels should be included in the contract.

■ Effective software quality control steps must be utilized by the vendor.

■ If the contract requires that productivity and quality improvements be based on an initial baseline, then great care must be utilized in creating a baseline that is accurate and fair to both parties.

■ Tracking of progress and problems during development must be complete and not overlook or deliberately conceal problems.
Fortunately, all eight of these topics are amenable to control once they are understood to be troublesome if left to chance. An interesting sign that an outsource vendor is capable of handling large applications is whether it utilizes state-of-the-art quality control methods.
The state of the art for large software applications includes sophisticated defect prediction methods, measurements of defect removal efficiency, utilization of defect prevention methods, utilization of formal design and code inspections, presence of a Software Quality Assurance (SQA) department, use of testing specialists, and usage of a variety of quality-related tools such as defect tracking tools, complexity analysis tools, debugging tools, and test library control tools.
Another important best practice for software outsource contracts involves dealing with changing requirements, which always occur. For software development contracts, an effective way of dealing with changing user requirements is to include a sliding scale of costs in the contract itself. For example, suppose a hypothetical contract is based on an initial agreement of $1000 per function point to develop an application of 1000 function points in size, so that the total value of the agreement is $1 million.
The contract might contain the following kind of escalating cost scale for new requirements added downstream:

    Initial 1000 function points                                = $1000 per function point
    Features added more than 3 months after contract signing    = $1100 per function point
    Features added more than 6 months after contract signing    = $1250 per function point
    Features added more than 9 months after contract signing    = $1500 per function point
    Features added more than 12 months after contract signing   = $1750 per function point
    Features deleted or delayed at user request                 = $250 per function point
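Applying such a sliding scale is purely mechanical, as the sketch below shows (the rates are taken from the sample scale above, which the chapter itself labels as illustrative; the example change requests are invented):

    # Apply the sample sliding-scale rates to features added after contract signing.
    def rate_per_fp(months_after_signing: float) -> int:
        """Sample dollar rate per function point by the month a feature is added."""
        if months_after_signing <= 3:
            return 1000   # priced as part of the initial agreement
        if months_after_signing <= 6:
            return 1100
        if months_after_signing <= 9:
            return 1250
        if months_after_signing <= 12:
            return 1500
        return 1750

    # Hypothetical change requests: (function points, months after signing).
    changes = [(50, 4), (30, 8), (20, 14)]
    initial_cost = 1000 * 1000    # 1000 function points at $1000 each
    change_cost = sum(fp * rate_per_fp(month) for fp, month in changes)
    print(f"Initial contract: ${initial_cost:,}; added features: ${change_cost:,}")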
Similar clauses can be utilized with maintenance and enhancement outsource agreements, on an annual or specific basis, such as:

    Normal maintenance and defect repairs       = $250 per function point per year
    Mainframe to client-server conversion       = $500 per function point per system
    Special "mass update" search and repair     = $75 per function point per system
(Note that the actual cost per function point for software produced in the United States runs from a low of less than $300 per function point for small end-user projects to a high of more than $5,000 per function point for large military software projects. The data shown here is for illustrative purposes and should not actually be used in contracts as it stands.)
The advantage of the use of function point metrics for development and maintenance contracts is that they are determined from the user requirements and cannot be unilaterally added or subtracted by the contractor.
In summary form, successful software outsourced projects in the 10,000-function point class usually are characterized by these attributes:

■ Less than 1 percent monthly requirements changes after the requirements phase

■ Less than 1 percent total volume of requirements "churn"

■ Fewer than 5.0 defects per function point in total volume

■ More than 65 percent defect removal efficiency before testing begins

■ More than 96 percent defect removal efficiency before delivery
Also in summary form, unsuccessful outsource software projects in the 10,000-function point class usually are characterized by these attributes:

■ More than 2 percent monthly requirements changes after the requirements phase

■ More than 5 percent total volume of requirements churn

■ More than 6.0 defects per function point in total volume

■ Less than 35 percent defect removal efficiency before testing begins

■ Less than 85 percent defect removal efficiency before delivery
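Because several of these thresholds are stated as defect removal efficiency, a short sketch of how that metric is normally computed may be helpful (the formula is the standard ratio of defects removed before delivery to total defects found; the example counts below are invented):

    # Defect removal efficiency (DRE): share of total defects removed before delivery.
    def defect_removal_efficiency(removed_before_delivery: int, found_after_delivery: int) -> float:
        total = removed_before_delivery + found_after_delivery
        return removed_before_delivery / total if total else 1.0

    # Hypothetical project in the 10,000-function point class.
    removed_before = 48_500   # defects removed by inspections, static analysis, and testing
    found_after = 1_500       # defects reported by users after delivery
    dre = defect_removal_efficiency(removed_before, found_after)
    print(f"DRE = {dre:.1%}")   # 97.0%, above the 96 percent mark cited for successful projects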
In performing "autopsies" of cancelled or failed projects, it is fairly easy to isolate the attributes that distinguish disasters from successes. Experienced project managers know that false optimism in estimates, failure to plan for changing requirements, inadequate quality approaches, and deceptive progress tracking lead to failures and disasters. Conversely, accurate estimates, careful change control, truthful progress tracking, and topnotch quality control are stepping-stones to success.
Another complex topic is what happens to the employees whose work is outsourced. The best practice is that they will be reassigned within their own company and will be used to handle software applications and tasks that are not outsourced. However, it may be that the outsourcing company will take over the personnel, which is usually a very good to fair practice based on the specifics of the companies involved. The worst case is that the personnel whose work is outsourced will be laid off.
In addition to outsourcing entire applications or even portfolios, there are also partial outsource agreements for specialized topics such as testing, static analysis, quality assurance, and technical writing. However, these partial assignments may also be done in-house by contractors who work on-site, so it is hard to separate outsourcing from contract work for these special topics.
Whether to outsource is an important business decision. Using best practices for the contract between the outsource vendor and the client can optimize the odds of success and minimize the odds of expensive litigation.
In general, maintenance outsource agreements are less troublesome and less likely to end up in court than development outsource agreements. In fact, if maintenance is outsourced, that often frees up enough personnel so that application backlogs can be reduced and major new applications developed.
As the economy worsens, there is uncertainty about the future of outsourcing. Software will remain an important commodity, so outsourcing will no doubt stay an important industry. However, the economic crisis and the changes in inflation rates and currency values may shift the balance of offshore outsourcing from country to country. In fact, if deflation occurs, even the United States could find itself with expanding capabilities for outsourcing.
8. Best Practices for Using Contractors and Management Consultants
As this book is written in 2009, roughly 10 percent to 12 percent of the U.S. software population are not full-time employees of the companies that they work for. They are contractors or consultants.

On any given business day in any given Fortune 500 company, roughly ten management consultants will be working with executives and managers on topics that include benchmarks, baselines, strategic planning, competitive analysis, and a number of other specialized topics.
Both of these situations can be viewed as being helpful practices, and they are often helpful enough to move into the best-practice category.

All companies have peaks and valleys in their software workloads. If the full-time professional staff is on board for the peaks, then they won't have any work during the valley periods. Conversely, if full-time professional staffing is set up to match the valleys, when important new projects appear, there will be a shortage of available technical staff members.
What works best is to staff closer to the valley or midpoint of average annual workloads. Then when projects occur that need additional resources, bring in contractors either for the new projects themselves, or to take over standard activities such as maintenance and thereby free up the company's own technical staff. In other words, having full-time staffing levels 5 percent to 10 percent below peak demand is a cost-effective strategy.
The primary use of management consultants is to gain access to special skills and knowledge that may not be readily available in-house. Examples of some of the topics where management consultants have skills that are often lacking among full-time staff include:

■ Benchmarks and comparisons to industry norms

■ Baselines prior to starting process improvements

■ Teaching new or useful technologies such as Agile, Six Sigma, and others

■ Measurement and metrics such as function point analysis

■ Selecting international outsource vendors

■ Strategic and market planning for new products

■ Preparing for litigation or defending against litigation

■ Assisting in process improvement startups

■ Attempting to turn around troubled projects

■ Offering advice about IPOs, mergers, acquisitions, and venture financing
Management consultants serve as a useful conduit for special studies and information derived from similar companies. Because management consultants are paid for expertise rather than for hours worked, many successful management consultants are in fact top experts in their selected fields.
Management consultants have both a strategic and a tactical role. Their strategic work deals with long-range topics such as market positions and optimizing software organization structures. Their tactical role is in areas such as Six Sigma, starting measurement programs, and aiding in collecting function point data.
In general, usage both of hourly contractors for software development and maintenance, and of management consultants for special topics, benefits many large corporations and government agencies. If not always best practices, the use of contractors and management consultants is usually at least a good practice.
9. Best Practices for Selecting Software Methods, Tools, and Practices
Unfortunately, careful selection of methods, tools, and practices seldom occurs in the software industry. Either applications are developed using methods already in place, or there is a rush to adopt the latest fad such as CASE, I-CASE, RAD, and today, Agile in several varieties.
A wiser method of selecting software development methods would be to start by examining benchmarks for applications that used various methods, and then to select the method or methods that yield the best results for specific sizes and types of software projects.
As this book is written, thousands of benchmarks are now available from the nonprofit International Software Benchmarking Standards Group (ISBSG), and most common methods are represented. Other benchmark sources are also available, such as Software Productivity Research, the David Consulting Group, and others. However, ISBSG is available on the open market to the public and is therefore easiest to access.
Among the current choices for software development methods can be found (in alphabetical order) Agile development, clean-room development, Crystal development, Dynamic Systems Development Method (DSDM), extreme programming (XP), hybrid development, iterative development, object-oriented development, pattern-based development, Personal Software Process (PSP), rapid application development (RAD), Rational Unified Process (RUP), spiral development, structured development, Team Software Process (TSP), V-model development, and waterfall development.
In addition to the preceding, a number of partial development methods deal with specific phases or activities. Included in the set of partial methods are (in alphabetical order) code inspections, data-state design, design inspections, flow-based programming, joint application design (JAD), Lean Six Sigma, pair programming, quality function deployment (QFD), requirements inspections, and Six Sigma for software. While the partial methods are not full development methods, they do have a measurable impact on quality and productivity.
It would be useful to have a checklist of topics that need to be evaluated when selecting methods and practices. Among these would be:
Suitability by application size: How well the method works for applications ranging from 10 function points to 100,000 function points. The Agile methods seem to work well for smaller applications, while Team Software Process (TSP) seems to work well for large systems, as does the Rational Unified Process (RUP). Hybrid methods also need to be included.

Suitability by application type: How well the method works for embedded software, systems software, web applications, information technology applications, commercial applications, military software, games, and the like.

Suitability by application nature: How well the method works for new development, for enhancements, for warranty repairs, and for renovation of legacy applications. There are dozens of development methodologies, but very few of these also include maintenance and enhancement. As of 2009, the majority of "new" software applications are really replacements for aging legacy applications. Therefore data mining of legacy software for hidden requirements, enhancements, and renovation should be standard features in software methodologies.

Suitability by attribute: How well the method supports important attributes of software applications, including but not limited to defect prevention, defect removal efficiency, minimizing security vulnerabilities, achieving optimum performance, and achieving optimum user interfaces. A development method that does not include both quality control and measurement of quality is really unsuitable for critical software applications.

Suitability by activity: How well the method supports requirements; architecture; design; code development; reusability; pretest inspections; static analysis; testing; configuration control; quality assurance; user information; and postrelease maintenance, enhancement, and customer support.
The bottom line is that methodologies should be deliberately selected to match the needs of specific projects, not used merely because they are a current fad or because no one knows of any other approach.
As this book is written, formal technology selection seems to occur for less than 10 percent of software applications. About 60 percent use whatever methods are the local custom, while about 30 percent adopt the most recent popular method such as Agile, whether or not that method is a good match for the application under development.
Development process refers to a standard set of activities that are performed in order to build a software application. (Development process and development methodology are essentially synonyms.)
For conventional software development projects, about 25 activities and perhaps 150 tasks are usually included in the work breakdown structure (WBS). For Agile projects, about 15 activities and 75 tasks are usually included in the work breakdown structure.
The work breakdown structure of large systems will vary based on whether the application is to be developed from scratch, or whether it involves modifying a package or modifying a legacy application. In today's world circa 2009, projects that are modifications are actually more numerous than complete new development projects.
An effective development process for projects in the nominal 10,000-function point range that include acquisition and modification of commercial software packages would resemble the following:
1.
Requirements gathering
2.
Requirements analysis
3.
Requirements inspections
4.
Data mining of existing
similar applications to extract
business
rules
5.
Architecture
6.
External design
7.
Internal design
8.
Design inspections
9.
Security vulnerability
analysis
10.
Formal risk analysis
11.
Formal value analysis
12.
Commercial-off-the-shelf (COTS) package
analysis
13.
Requirements/package mapping
14.
Contacting package user
association
15.
Package licensing and
acquisition
16.
Training of development team in
selected package
17.
Design of package
modifications
18.
Development of package
modifications
19.
Development of unique
features
20.
Acquisition of certified reusable
materials
21.
Inspection of package
modifications
22.
Documentation of package
modifications
23.
Inspections of documentation and HELP
screens
24.
Static analysis of package
modifications
25.
General testing of package
modifications
26.
Specialized testing of package
modifications (performance,
security)
27.
Quality assurance review of
package modifications
28.
Training of user personnel in
package and
modifications
29.
Training of customer support
and maintenance
personnel
30.
Deployment of package
modifications
These
high-level activities are
usually decomposed into a
full work
breakdown
structure with between 150
and more than 1000
tasks and
lower-level
activities. Doing a full
work breakdown structure is
too dif-
ficult
for manual approaches on
large applications. Therefore,
project
management
tools such as Artemis Views,
Microsoft Project,
Primavera,
or
similar tools are always
used in leading
companies.
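To illustrate the decomposition in a compact way, the following sketch (a hypothetical example in Python, not taken from any actual planning tool or project) represents a handful of the high-level activities as lists of lower-level tasks and simply counts them; a real work breakdown structure maintained in a planning tool would carry hundreds of such tasks plus dates, dependencies, and resources.

```python
# Hypothetical sketch: a tiny work breakdown structure (WBS) in which each
# high-level activity is decomposed into lower-level tasks. Real planning
# tools manage hundreds of tasks; only a few are shown here.
wbs = {
    "Requirements gathering": ["Interview stakeholders", "Document business rules"],
    "Design inspections": ["Schedule inspection", "Log defects", "Verify rework"],
    "Development of package modifications": ["Code changes", "Unit test", "Code review"],
    "Deployment of package modifications": ["Install", "Verify", "Train support staff"],
}

def count_tasks(structure):
    """Return the total number of lower-level tasks across all activities."""
    return sum(len(tasks) for tasks in structure.values())

print(f"Activities: {len(wbs)}, tasks: {count_tasks(wbs)}")
```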
Because
requirements change at about 2
percent per calendar
month,
each
of these activities must be
performed in such a manner
that
changes
are easy to accommodate
during development; that is,
some
form
of iterative development is necessary
for each major
deliverable.
However,
due to fixed delivery
schedules that may be
contractually
set,
it is also mandatory that
large applications be developed with
mul-
tiple
releases in mind. At a certain
point, all features for
the initial
release
must be frozen, and changes
occurring after that point
must
be
added to follow-on releases.
This expands the concept of
iterative
development
to a multiyear, multirelease
philosophy.
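The sketch below shows the multirelease idea in miniature (Python, with invented feature names and dates): change requests that arrive before the freeze date stay in the current release, while later arrivals are routed to the follow-on release.

```python
from datetime import date

# Hypothetical freeze date for release 1.0; later changes roll into release 2.0.
FREEZE_DATE = date(2009, 9, 30)

change_requests = [
    ("Add tax-law update", date(2009, 6, 15)),
    ("New currency support", date(2009, 11, 2)),
    ("Revised audit report", date(2010, 1, 20)),
]

release_plan = {"Release 1.0": [], "Release 2.0": []}
for feature, arrival in change_requests:
    target = "Release 1.0" if arrival <= FREEZE_DATE else "Release 2.0"
    release_plan[target].append(feature)

for release, features in release_plan.items():
    print(release, features)
```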
A
number of sophisticated companies
such as IBM and AT&T
have
long
recognized that change is
continuous with software
applications.
These
companies tend to have fixed
release intervals, and
formal plan-
ning
for releases spreads over at
least the next two
releases after the
current
release.
Formal
risk analysis and value
analysis are also indicators
of software
sophistication.
As noted in litigation, failing
projects don't perform
risk
analyses,
so they tend to be surprised by
factors that delay
schedules
or
cause cost overruns.
Sophisticated
companies always perform
formal risk analysis
for
major
topics such as possible loss
of personnel, changing
requirements,
quality,
and other key topics.
However, one form of risk
analysis is not
done
very well, even by most
sophisticated companies: security
vulner-
abilities.
Security analysis, if it is done at
all, is often an
afterthought.
A
number of approaches have
proven track records for
large software
projects.
Among these are the
capability maturity model
(CMM) by the
Software
Engineering Institute (SEI)
and the newer Team
Software
Process
(TSP) and Personal Software
Process (PSP) created by
Watts
Humphrey
and also supported by the
SEI. The Rational Unified
Process
(RUP)
also has had some successes
on large software projects.
For
smaller
applications, various flavors of
Agile development and
extreme
programming
(XP) have proven track
records of success. Additional
approaches
such as object-oriented development,
pattern matching,
Six
Sigma, formal inspections,
prototypes, and reuse have
also demon-
strated
value for large
applications.
Over
and above "pure" methods
such as the Team Software
Process
(TSP),
hybrid approaches are also
successful. The hybrid
methods use
parts
of several different approaches
and blend them together to
meet
the
needs of specific projects. As of
2009, hybrid or blended
development
approaches
seem to outnumber pure
methods in terms of
usage.
Overall,
hybrid methods that use
features of Six Sigma, the
capabil-
ity
maturity model, Agile, and
other methods have some
significant
advantages.
The reason is that each of
these methods in "pure" form
has
a
rather narrow band of
project sizes and types
for which they are
most
effective.
Combinations and hybrids are
more flexible and can
match the
characteristics
of any size and any
type. However, care and
expertise
are
required in putting together
hybrid methods to be sure
that the best
combinations
are chosen. It is a job for
experts and not for
novices.
There
are many software process
improvement network (SPIN)
chap-
ters
in major cities throughout
the United States. These
organizations
have
frequent meetings and serve
a useful purpose in disseminating
infor-
mation
about the successes and
failures of various methods
and tools.
It
should be obvious that any
method selected should offer
improve-
ments
over former methods. For
example, current U.S.
averages for
software
defects total about 5.00
per function point. Defect
removal
efficiency
averages about 85 percent, so
delivered defects amount
to
about
0.75 per function
point.
Any
new process should lower
defect potentials, raise
defect removal
efficiency,
and reduce delivered
defects. Suggested values
for an improved
process
would be no more than 3.00
defects per function point,
95 per-
cent
removal efficiency, and
delivered defects of no more
than 0.15 defect
per
function point.
Also,
any really effective process
should raise productivity
and
increase
the volume of certified
reusable materials used for
software
construction.
10.
Best Practices for
Certifying Methods,
Tools,
and Practices
The
software industry tends to
move from fad to fad with
each meth-
odology
du jour making unsupported
claims for achieving new
levels
of
productivity and quality.
What would be valuable for
the software
industry
is a nonprofit organization that
can assess the
effectiveness
of
methods, tools, and
practices in an objective
fashion.
What
would also be useful are
standardized measurement
practices
for
collecting productivity and
quality data for all
significant software
projects.
This
is not an easy task. It is
unfeasible for an evaluation
group to
actually
try out or use every
development method, because such
usage
in
real life may last
for several years, and
there are dozens of
them.
What
probably would be effective is
careful analysis of empirical
results
from
projects that used various
methods, tools, and
practices. Data can
be
acquired from benchmark
sources such as the
International Software
Benchmarking
Standards Group (ISBSG), or
from other sources such
as
the
Finnish Software Metrics
Association.
To
do this well requires two
taxonomies: (1) a taxonomy of
software
applications
that will provide a structure
for evaluating methods
by
size
and type of project and
(2) a taxonomy of software
methods and
tools
themselves.
A
third taxonomy, of software
feature sets, would also be valuable, but as of 2009 it does not exist in enough detail to be useful.
The basic idea
of
all three taxonomies is to
support pattern matching. In
other words,
applications,
their feature sets, and
development methods all deal
with
common
issues, and it would be
useful if the patterns
associated with
these
issues could become visible.
That would begin to move
the industry
toward
construction of software from
standard reusable
components.
The
two current taxonomies deal
with what kinds of software
might
use
the method, and what
features the method itself
contains.
It
would not be fair to compare
the results of a large
project of greater
than
10,000 function points with a
small project of less than
1000 func-
tion
points. Nor would it be fair to
compare an embedded military
appli-
cation
against a web project.
Therefore a standard taxonomy
for placing
software
projects is a precursor for
evaluating and selecting
methods.
From
performing assessment and
benchmark studies with my
col-
leagues
over the years, a four-layer
taxonomy seems to provide a
suit-
able
structure for software
applications:
Nature
The
term nature
refers
to whether the project is a
new devel-
opment,
an enhancement, a renovation, or
something else. Examples
of
the
nature parameter include new
development, enhancement of
legacy
software,
defect repairs, and
conversion to a new
platform.
Scope
The
term scope
identifies
the size range of the
project run-
ning
from a module through an
enterprise resource planning
(ERP)
package.
Sizes are expressed in terms
of function point metrics as
well
as
source code. Size ranges
cover a span that runs
from less than 1
function
point to more than 100,000
function points. To simplify
analy-
sis,
sizes can be discrete, that
is, 1, 10, 100, 1000,
10,000, and 100,000
function
points. Examples of the
scope parameter include
prototype,
evolutionary
prototype, module, reusable
module, component,
stand-
alone
program, system, and
enterprise system.
Class
The
term class
identifies
whether the project is for internal use within the developing organization, or whether it is to be released
externally
either on the Web or in some
other form. Examples of
the
class
parameter include internal
applications for a single
location, inter-
nal
applications for multiple
locations, external applications
for the
public
domain (open source),
external applications to be marketed
com-
mercially,
external applications for
the Web, and external
applications
embedded
in hardware devices.
Type
The
term type
refers
to whether the application is
embed-
ded
software, information technology, a
military application, an
expert
system,
a telecommunications application, a
computer game, or
some-
thing
else. Examples of the type
parameter include batch
applications,
interactive
applications, web applications,
expert systems,
robotics
applications,
process-control applications, scientific
software, neural
net
applications, and hybrid
applications that contain
multiple types
concurrently.
This
four-part taxonomy can be
used to define and compare
software
projects
to ensure that similar
applications are being
compared. It is
also
interesting that applications
that share the same
patterns on this
taxonomy
are also about the
same size when measured
using function
point
metrics.
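To make the four-part taxonomy concrete, the following minimal sketch (Python, with illustrative values drawn from the parameter examples above) shows how a project might be tagged so that only projects sharing the same pattern are treated as comparable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProjectTaxonomy:
    """Four-part taxonomy: nature, scope, class, and type."""
    nature: str         # e.g., "new development", "enhancement"
    scope: str          # e.g., "stand-alone program", "system"
    project_class: str  # e.g., "internal, single location", "external, commercial"
    project_type: str   # e.g., "information technology", "embedded"

a = ProjectTaxonomy("new development", "system", "external, commercial", "web application")
b = ProjectTaxonomy("new development", "system", "external, commercial", "web application")

# Only projects with identical taxonomy patterns are considered comparable benchmarks.
print("Comparable:", a == b)
```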
The
second taxonomy would define
the features of the
development
methodology
itself. There are 25 topics
that should be
included:
Proposed
Taxonomy for Software
Methodology Analysis
1.
Team organization
2.
Specialization of team
members
3.
Project management--planning and
estimating
4.
Project management--tracking and
control
5.
Change control
6.
Architecture
7.
Business analysis
8.
Requirements
9.
Design
10.
Reusability
11.
Code development
12.
Configuration control
13.
Quality assurance
14.
Inspections
15.
Static analysis
16.
Testing
17.
Security
18.
Performance
19.
Deployment and customization of
large applications
20.
Documentation and
training
21.
Nationalization
22.
Customer support
23.
Maintenance (defect
repairs)
24.
Enhancement (new
features)
25.
Renovation
These
25 topics are portions of an
approximate 30-year life
cycle that
starts
with initial requirements and
concludes with final
withdrawal
of
the application many years
later. When evaluating
methods, this
checklist
can be used to show which
portions of the timeline and
which
topics
the methodology
supports.
Agile
development, for example,
deals with 8 of these 25
factors:
1.
Team organization
2.
Project management--planning and
estimating
3.
Change control
4.
Requirements
5.
Design
6.
Code development
7.
Configuration control
8.
Testing
In
other words, Agile is
primarily used for new
development of appli-
cations
rather than for maintenance
and enhancements of
legacy
applications.
The
Team Software Process (TSP)
deals with 16 of the 25
factors:
1.
Team organization
2.
Specialization of team
members
3.
Project management--planning and
estimating
4.
Project management--tracking and
control
5.
Change control
6.
Requirements
7.
Design
8.
Reusability
9.
Code development
10.
Configuration control
11.
Quality assurance
12.
Inspections
13.
Static analysis
14.
Testing
15.
Security
16.
Documentation and
training
TSP
is also primarily a development
method, but one that
concen-
trates
on software quality control
and also that includes
project man-
agement
components for planning and
estimating.
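One straightforward way to apply the 25-topic checklist is to score each methodology by the topics it covers, as in the short sketch below (Python; the topic subsets follow the Agile and TSP lists given above, while the scoring itself is only an illustration, not part of any formal certification scheme).

```python
# The 25-topic checklist from the proposed methodology taxonomy.
CHECKLIST = [
    "Team organization", "Specialization of team members",
    "Project management--planning and estimating",
    "Project management--tracking and control", "Change control",
    "Architecture", "Business analysis", "Requirements", "Design",
    "Reusability", "Code development", "Configuration control",
    "Quality assurance", "Inspections", "Static analysis", "Testing",
    "Security", "Performance",
    "Deployment and customization of large applications",
    "Documentation and training", "Nationalization", "Customer support",
    "Maintenance (defect repairs)", "Enhancement (new features)", "Renovation",
]

METHOD_COVERAGE = {
    "Agile": {
        "Team organization", "Project management--planning and estimating",
        "Change control", "Requirements", "Design", "Code development",
        "Configuration control", "Testing",
    },
    "TSP": {
        "Team organization", "Specialization of team members",
        "Project management--planning and estimating",
        "Project management--tracking and control", "Change control",
        "Requirements", "Design", "Reusability", "Code development",
        "Configuration control", "Quality assurance", "Inspections",
        "Static analysis", "Testing", "Security", "Documentation and training",
    },
}

for method, topics in METHOD_COVERAGE.items():
    print(f"{method}: covers {len(topics)} of {len(CHECKLIST)} topics")
```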
Another
critical aspect of evaluating
software methods, tools,
and
practices
is to measure the resulting
productivity and quality
levels.
Measurement
is a weak link for the
software industry. To
evaluate
the
effectiveness of methods and
tools, great care must be
exercised.
Function
point metrics are best
for evaluation and economic
purposes.
Harmful
and erratic metrics such as
lines of code and cost
per defect
should
be avoided.
However,
to ensure apples-to-apples comparison
between projects
using
specific methods, the measures need to be granular, down to the level
of
specific activities. If only
project- or phase-level data is
used, it will
be
too inaccurate to use for
evaluations.
Although
not every project uses
every activity, the author
makes use
of
a generalized activity chart of
accounts for collecting
benchmark data
at
the activity level:
Chart
of Accounts for Activity-Level
Software Benchmarks
1.
Requirements (initial)
2.
Requirements (changed and
added)
3.
Team education
4.
Prototyping
5.
Architecture
6.
Project planning
7.
Initial design
8.
Detail design
9.
Design inspections
10.
Coding
11.
Reusable material
acquisition
12.
Package acquisition
13.
Code inspections
14.
Static analysis
15.
Independent verification and
validation
16.
Configuration control
17.
Integration
18.
User documentation
19.
Unit testing
20.
Function testing
21.
Regression testing
22.
Integration testing
23.
Performance testing
24.
Security testing
25.
System testing
26.
Field testing
27.
Software quality
assurance
28.
Installation
29.
User training
30.
Project management
Unless
specific activities are
identified, it is essentially impossible
to
perform
valid comparisons between
projects.
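The sketch below (Python, with invented effort figures in staff hours) shows why activity-level granularity matters: with a shared chart of accounts, two projects can be compared activity by activity instead of only by a project-level total that hides the differences.

```python
# Hypothetical effort (staff hours) recorded against a few accounts from the
# activity-level chart of accounts. Real benchmarks would cover all 30.
project_a = {"Requirements (initial)": 400, "Coding": 2400, "Function testing": 900}
project_b = {"Requirements (initial)": 300, "Coding": 2600, "Function testing": 400}

for activity in project_a:
    delta = project_b[activity] - project_a[activity]
    print(f"{activity:28s} A={project_a[activity]:5d}  B={project_b[activity]:5d}  delta={delta:+d}")

# Project-level totals obscure the activity-by-activity differences above.
print("Project-level totals:", sum(project_a.values()), "vs", sum(project_b.values()))
```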
Software
quality also needs to be
evaluated. While cost of
quality
(COQ)
would be preferred, two important
supplemental measures
should
always be included. These
are defect potentials and
defect
removal
efficiency.
The
defect
potential of a
software application is the
total number of
bugs
found in requirements, design,
code, user documents, and
bad fixes.
The
defect
removal efficiency is
the percentage of bugs found
prior to
delivery
of software to clients.
As
of 2009, average values for
defect potentials are
Defect Origins                      Defects per Function Point
Requirements bugs                   1.00
Design bugs                         1.25
Coding bugs                         1.75
Documentation bugs                  0.60
Bad fixes (secondary bugs)          0.40
TOTAL                               5.00
Cumulative
defect removal efficiency
before delivery is only
about
85
percent. Therefore methods
should be evaluated in terms of
how
much
they reduce defect
potentials and increase
defect removal effi-
ciency
levels. Methods such as the
Team Software Process (TSP)
that
lower
potentials below 3.0 bugs
per function point and
that raise defect
removal
efficiency levels above 95
percent are generally viewed
as best
practices.
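The arithmetic behind these quality measures is simple enough to sketch (Python; the defect potentials are the average values tabulated above, the 10,000 function point size is just an example, and the removal efficiency levels are the 85 percent and 95 percent figures discussed in the text).

```python
# Average U.S. defect potentials per function point, by origin (from the table above).
DEFECT_POTENTIALS = {
    "requirements": 1.00, "design": 1.25, "coding": 1.75,
    "documents": 0.60, "bad fixes": 0.40,
}

def delivered_defects(function_points, removal_efficiency):
    """Delivered defects = total defect potential * (1 - removal efficiency)."""
    potential = sum(DEFECT_POTENTIALS.values()) * function_points
    return potential * (1.0 - removal_efficiency)

fp = 10_000  # nominal large system, used only as an example
print("Average process (85% removal):", delivered_defects(fp, 0.85))      # ~7,500 bugs
print("Improved removal only (95%):  ", delivered_defects(fp, 0.95))      # ~2,500 bugs
print("Lower potential (3.0/FP) + 95%:", 3.0 * fp * 0.05)                  # ~1,500 bugs
```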
Productivity
also needs to be evaluated.
The method used by
the
author
is to select an average or midpoint
approach such as Level
1
on
the capability maturity
model integration (CMMI) as a
starting
point.
For example, average
productivity for CMMI 1 applications
in
the
10,000function point range is
only about 3 function points
per
staff
month. Alternative methods
that improve on these
results, such
as
Team Software Process (TSP)
or the Rational Unified
Process (RUP),
can
then be compared with the
starting value. Of course
some methods
may
degrade productivity,
too.
The bottom line is that evaluating software methodologies, tools, and practices is scarcely performed at all circa 2009. A combination of
activity-
level
benchmark data from
completed projects, a formal
taxonomy for
pinning
down specific types of
software applications, and a
formal taxon-
omy
for identifying features of
the methodology are all
needed. Accurate
quality
data in terms of defect
potentials and defect
removal efficiency
levels
is also needed.
One
other important topic for
certification would be to show
improve-
ments
versus current U.S.
averages. Because averages
vary by size
and
type of application, a sliding
scale is needed. For
example, current
average
schedules from requirements to
delivery can be
approximated
by
raising the function point
total of the application to
the 0.4 power.
Ideally,
an optimal development process
would reduce the exponent
to
the
0.3 power.
Current
defect removal efficiency
for software applications in
the
United
States is only about 85
percent. An improved process
should
yield
results in excess of 95
percent.
Defect
potentials or total numbers of
bugs likely to be encountered
can
be
approximated by raising the
function point total of the
application to
the
1.2 power, which results in
alarmingly high numbers of
defects for
large
systems. An improved development
process should lower
defect
potentials
below about the 1.1
power.
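Because these rules of thumb are power laws, they are easy to try out, as in the sketch below (Python; the exponents are the approximations stated above and the sizes are arbitrary examples).

```python
def schedule_months(function_points, exponent=0.4):
    """Approximate schedule from requirements to delivery: FP raised to a power."""
    return function_points ** exponent

def defect_potential(function_points, exponent=1.2):
    """Approximate total defects likely to be encountered: FP raised to a power."""
    return function_points ** exponent

for fp in (1_000, 10_000, 100_000):
    print(f"{fp:>7} FP: schedule ~{schedule_months(fp):5.1f} months "
          f"(improved ~{schedule_months(fp, 0.3):5.1f}), "
          f"defects ~{defect_potential(fp):10,.0f} "
          f"(improved ~{defect_potential(fp, 1.1):10,.0f})")
```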
The
volume of certified reusable
material in current software
appli-
cations
runs from close to 0 percent up to
perhaps 50 percent, but
the
average
value is less than 25
percent. The software
industry would
be
in much better shape
economically if the volume of
certified reusable
materials
could top 85 percent on
average, and reach 95
percent for
relatively
common kinds of
applications.
The
bottom line is that
certification needs to look at
quantitative
results
and include information on
benefits from adopting new
methods.
One
additional aspect of certification is to
scan the available reports
and
benchmarks
from the International
Software Benchmarking
Standards
Group
(ISBSG). As their collection of
historical benchmarks rises
above
5,000
projects, more and more
methods are represented in
enough detail
to
carry out multiple-regression
studies and to evaluate
their impacts.
11.
Best Practices for
Requirements
of
Software Applications
As
of 2009, more than 80
percent of software applications
are not new
in
the sense that such
applications are being
developed for the very
first
time.
Most applications today are
replacements for older and
obsolete
applications.
Because
these applications are
obsolete, it usually happens
that their
written
specifications have been
neglected and are out of
date. Yet in
spite
of the lack of current
documents, the older
applications contain
hundreds
or thousands of business rules
and algorithms that need
to
be
transferred to the new
application.
Therefore,
as of 2009, requirements analysis
should not deal only
with
new
requirements but should also
include data mining of the
legacy
code
to extract the hidden
business rules and
algorithms. Some
tools
are
available to do this, and
also many maintenance
workbenches can
display
code and help in the
extraction of latent business
rules.
Although
clear requirements are a
laudable goal, they almost
never
occur
for nominal 10,000-function
point software applications.
The only
projects
the author has observed
where the initial
requirements were
both
clear and unchanging were
for specialized small
applications below
500
function points in
size.
Businesses
are too dynamic for
requirements to be completely
unchanged
for large applications. Many
external events such as
changes
in
tax laws, changes in
corporate structure, business process
reengineer-
ing,
or mergers and acquisitions
can trigger changes in
software applica-
tion
requirements. The situation is
compounded by the fact that
large
applications
take several years to
develop. It is unrealistic to
expect
that
a corporation can freeze all
of its business rules for
several years
merely
to accommodate the needs of a
software project.
The
most typical scenario for
dealing with the requirements of
a
nominal
10,000-function point application
would be to spend
several
months
in gathering and analyzing
the initial requirements.
Then as
design
proceeds, new and changed
requirements will arrive at a rate
of
roughly
2 percent per calendar
month. The total volume of
requirements
surfacing
after the initial
requirements exercise will probably
approach
or
even exceed 50 percent.
These new and changing
requirements will
eventually
need to be stopped for the
first release of the
application,
and
requirements surfacing after
about 9 to 12 months will be
aimed
at
follow-on releases of the
application.
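The cumulative effect of that monthly change rate is easy to approximate, as in the sketch below (Python; the 2 percent rate comes from the text, while the 24-month change window and the 10,000 function point starting size are only illustrative assumptions).

```python
def accumulated_creep(initial_fp, monthly_rate, months):
    """Simple accumulation: each month adds monthly_rate of the initial size."""
    return initial_fp * monthly_rate * months

initial = 10_000  # nominal large application, in function points
added = accumulated_creep(initial, 0.02, 24)   # roughly two years of change
print(f"Requirements creep: {added:,.0f} FP added (~{added / initial:.0%} of the original)")
```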
The
state of the art for
gathering and analyzing the
requirements for
10,000-function point projects includes the following:
■ Utilization of joint application design (JAD) for initial requirements gathering
■ Utilization of quality function deployment (QFD) for quality requirements
■ Utilization of security experts for security analysis and vulnerability prevention
■ Utilization of prototypes for key features of new applications
■ Mining legacy applications for requirements and business rules for new projects
■ Full-time user involvement for Agile projects
■ Ensuring that requirements are clearly expressed and can be understood
■ Utilization of formal requirement inspections with both users and vendors
■ Creation of a joint client/vendor change control board
■ Selection of domain experts for changes to specific features
■ Ensuring that requirements traceability is present
■ Multirelease segmentation of requirements changes
■ Utilization of automated requirements analysis tools
■ Careful analysis of the features of packages that will be part of the application
The
lowest rates of requirements
changes observed on
10,000-function
point
projects are a little below
0.5 percent a month, with an
accumulated
total
of less than 10 percent
compared with the initial
requirements.
However,
the maximum amount of growth
has topped 200
percent.
Average
rates of requirements change
run between 1 percent
and
3
percent per calendar month
during the design and
coding phases, after
which
changes are deferred to
future releases.
The
concurrent use of JAD sessions,
careful analysis of
requirements,
requirements
inspections, and prototypes
can go far to bring the
require-
ments
process under technical and
management control.
Although
the results will not become
visible for many months
or
sometimes
for several years, the
success or failure of a large
software
project
is determined during the
requirements phase. Successful
proj-
ects
will be more complete and
thorough in gathering and
analyzing
requirements
than failures. As a result,
successful projects will
have
fewer
changes and lower volumes of
requirements creep than
failing
projects.
However,
due to the fact that
most new applications are
partial repli-
cas
of existing legacy software,
requirements should include
data mining
to
extract latent business
rules and algorithms.
12.
Best Practices for User
Involvement
in
Software Projects
It
is not possible to design
and build nominal
10,000-function point
business
applications without understanding
the requirements of
the
users.
Further, when the
application is under development,
users nor-
mally
participate in reviews and
also assist in trials of
specific deliv-
erables
such as screens and
documents. Users may also
review or even
participate
in the development of prototypes
for key inputs,
outputs,
and
functions. User participation is a
major feature of the new
Agile
development
methodology, where user
representatives are embedded
in
the
project team. For any
major application, the state
of the art of user
involvement
includes participation
in:
1.
Joint application design
(JAD) sessions
2.
Quality function deployment
(QFD)
3.
Reviewing business rules and
algorithms mined from
legacy
applications
4.
Agile projects on a full-time
basis
5.
Requirements reviews
6.
Change control boards
7.
Reviewing documents produced by
the contractors
8.
Design reviews
9.
Using prototypes and sample
screens produced by the
contractors
10.
Training classes to learn
the new application
11.
Defect reporting from design
through testing
12.
Acceptance testing
User
involvement is time-consuming but
valuable. On average,
user
effort
totals about 20 percent of
the effort put in by the
software tech-
nical
team. The range of user
involvement can top 50
percent at the
high
end and be less than 5
percent at the low end.
However, for large
and
complex projects, if the
user involvement totals to
less than about
10
percent of the effort
expended by the development
team, the project
will
be at some risk of having
poor user satisfaction when
it is finally
finished.
The
Agile methodology includes a
full-time user representative
as
part
of the project team. This
method does work well
for small projects
and
small numbers of users. It
becomes difficult or impossible
when the
number
of users is large, such as
the millions of users of
Microsoft Office
or
Microsoft Vista. For
applications with millions of users, no
one user
can
possibly understand the
entire range of possible
uses.
For
these high-usage applications,
surveys of hundreds of users
or
focus
groups where perhaps a dozen
users offer opinions are
preferred.
Also,
usability labs where users
can try out features and
prototypes
are
helpful.
As
can be seen, there is no
"one size fits all"
method for software
applications
that can possibly be
successful for sizes of 1,
10, 100, 1000,
10,000,
and 100,000 function points.
Each size plateau and
each type of
software
needs its own optimal
methods and
practices.
This
same situation occurs with
medicine. There is no antibiotic
or
therapeutic
agent that is successful
against all diseases
including bacte-
rial
and viral illness. Each
condition needs a unique
prescription. Also
as
with medicine, some conditions
may be incurable.
13.
Best Practices for Executive
Management
Support
of Software Applications
The
topic of executive management
support of new applications
varies
with
the overall size of the
application. For projects
below about 500
function
points, executive involvement
may hover around zero,
because
these
projects are so low in cost
and low in risk as to be
well below the
level
of executive interest.
However,
for large applications in
the 10,000-function point
range,
executive
scrutiny is the norm. It is an
interesting phenomenon
that
the
frequent failure of large
software projects has caused
a great deal
of
distrust of software managers by
corporate executives. In fact,
the
software
organizations of large companies
are uniformly regarded
as
the
most troublesome organizations in
the company, due to high
failure
rates,
frequent overruns, and
mediocre quality
levels.
In
the software industry
overall, the state of the
art of executive man-
agement
support indicates the
following roles:
■ Approving the return on investment (ROI) calculations for software projects
■ Providing funding for software development projects
■ Assigning key executives to oversight, governance, and project director roles
■ Reviewing milestone, cost, and risk status reports
■ Determining if overruns or delays have reduced the ROI below corporate targets
Even
if executives perform all of
the roles that normally
occur, prob-
lems
and failures can still
arise. A key failing of
software projects is
that
executives
cannot reach good business
decisions if they are
provided
with
disinformation rather than
accurate status reports. If
software
project
status reports and risk
assessments gloss over
problems and
technical
issues, then executives
cannot control the project
with the pre-
cision
that they would like.
Thus, inadequate reporting
and less-than-
candid
risk assessments will delay
the eventual and prudent
executive
decision
to try and limit further expenses by
terminating projects
that
are
out of control.
It
is a normal corporate executive
responsibility to ascertain why
proj-
ects
are running out of control.
One of the reasons why
executives at
many
large corporations distrust
software is because software
projects
have
a tendency to run out of
control and often fail to
provide accu-
rate
status reports. As a result,
top executives at the levels
of senior
vice
presidents, chief operating
officers, and chief
executive officers
find
software
to be extremely frustrating and
unprofessional compared with
other
operating units.
As
a class, the corporate
executives that the author
has met are
more
distrustful
of software organizations than
almost any other
corporate
group
under their management
control. Unfortunately, corporate
execu-
tives
appear to have many reasons
for being distrustful of
software
managers
after so many delays and
cost overruns.
All
of us in the software industry
share a joint responsibility
for rais-
ing
the professional competence of
software managers and
software
engineers
to such a level that we
receive (and deserve) the
trust of
corporate
client executives.
14.
Best Practices for
Software
Architecture
and Design
For
small stand-alone applications in
the 1000-function point
range,
both
architecture and design are
often informal activities.
However, as
application
sizes increase to 10,000 and
then 100,000 function
points,
both
architecture and design
become increasingly important.
They also
become
increasingly complicated and
expensive.
Enterprise
architecture is even larger in
scope, and it attempts to
match
total
corporate portfolios against
total business needs including
sales,
marketing,
finance, manufacturing, R&D, and
other operating units.
At
the
largest scale, enterprise
architecture may deal with
more than 5,000
applications
that total more than 10
million function
points.
The
architecture of a large software
application concerns its
overall
structure
and the nature of its
connections to other applications
and
indeed
to the outside world. As of
2009, many alternative
architectures
are
in use, and a specific
architecture needs to be selected
for new appli-
cations.
Some of these include
monolithic applications,
service-oriented
architecture
(SOA), event-driven architecture,
peer-to-peer, pipes
and
filters,
client-server, distributed, and
many others, including some
spe-
cialized
architectures for defense
and government
applications.
A
colleague from IBM, John
Zachman, developed an interesting
and
useful
schema that shows some of
the topics that need to be
included in
the
architectural decisions for
large software applications.
The overall
Zachman
schema is shown in Table
2-2.
In
the Zachman schema, the
columns show the essential
activities,
and
the rows show the
essential personnel involved with
the software.
The
intersections of the columns
and rows detail tasks
and decisions for
each
join of the rows and
columns. A quick review of
Table 2-2 reveals
the
rather daunting number of
variables that need to be
dealt with to
develop
the architecture for a major
software application.
TABLE 2-2   Example of the Zachman Architectural Schema

              What      How       Where     Who       When      Why
Planner
Owner
Designer
Builder
Contractor
Enterprise
The
design of software applications is
related to architecture, but
deals
with
many additional factors. As of
2009, the selection of
design methods
is
unsettled, and there are
more than 40 possibilities.
The unified model-
ing
language (UML) and use-cases
are currently the hottest of
the design
methods,
but scores of others are
also in use. Some of the
other possibilities
include
old-fashioned flowcharts, HIPO diagrams,
Warnier-Orr diagrams,
Jackson
diagrams, Nassi-Schneiderman charts,
entity-relationship dia-
grams,
state-transition diagrams, action
diagrams, decision tables,
data-
flow
diagrams, object-oriented design,
pattern-based design, and
many
others,
including hybrids and
combinations.
The
large number of software
design methods and
diagramming
techniques
is a sign that no single
best practice has yet
emerged. The
fundamental
topics of software design
include descriptions of the
func-
tions
and features available to
users and how users will
access them.
At
the level of internal
design, the documents must
describe how those
functions
and features will be linked
and share information
internally.
Other
key elements of software
design include security
methods and
performance
issues. In addition, what
other applications will
provide
data
to or take data from the
application under development
must be
discussed.
Obviously, the design must
also deal with hardware
plat-
forms
and also with software
platforms such as the
operating systems
under
which the application will
operate.
Because
many software applications
are quite similar, and
have been
for
more than 50 years, it is
possible to record the basic
features, func-
tions,
and structural elements of
common applications into
patterns
that
can
be reused over and over.
Reusable design patterns will become a
best
practice
once a standard method for
describing those patterns
emerges
from
the many competing design
languages and graphic
approaches
that
are in current use.
It
is possible to visualize some of
these architectural patterns
by
examining
the structures of existing
applications using
automated
tools.
In fact, mining existing
software for business rules,
algorithms,
and
architectural information is a good
first step toward
creating
libraries
of reusable components and a
workable taxonomy of
software
features.
Enterprise
architecture also lends
itself to pattern analysis.
Any con-
sultant
who visits large numbers of
companies in the same
industries
cannot
help but notice that
software portfolios are
about 80 percent
similar
for all insurance companies,
banks, manufacturing
companies,
pharmaceuticals,
and so forth. In fact, the
New Zealand government
requires
that all banks use
the same software, in part
to make audits
and
security control easier for
regulators (although perhaps
increasing
the
risk of malware and denial
of service attacks).
What
the industry needs as of
2009 are effective methods
for visu-
alizing
and using these
architectural patterns. A passive
display of
information
will be insufficient. There is a deeper
need to link costs,
value,
numbers of users, strategic
directions, and other kinds
of busi-
ness
information to the architectural
structures. In addition, it is
necessary
to illustrate the data that
the software applications
use,
and
also the flows of
information and data from
one operating unit
to
another and from one
system to another; that is,
dynamic models
rather
than static models would be
the best representation
approach.
Given
the complexity and kinds of
information, what would
prob-
ably
be most effective for
visualization of patterns would be
dynamic
holographic
images.
15.
Best Practices for
Software
Project
Planning
Project
planning for large software
projects in large corporations
often
involves
both planning specialists
and automated planning
tools. The
state
of the art for planning
software projects circa 2009
for large proj-
ects
in the nominal 10,000-function point range involves
■ Development of complete work breakdown structures
■ Collecting and analyzing historical benchmark data from similar projects
■ Planning aid provided by formal project offices
■ Consideration to staff hiring and turnover during the project
■ Usage of automated planning tools such as Artemis Views or Microsoft Project
■ Factoring in time for requirements gathering and analysis
■ Factoring in time for handling changing requirements
■ Consideration given to multiple releases if requirements creep is extreme
■ Consideration given to transferring software if outsourcing is used
■ Consideration given to supply chains if multiple companies are involved
■ Factoring in time for a full suite of quality control activities
■ Factoring in risk analysis of major issues that are likely to occur
Successful
projects do planning very
well indeed. Delayed or
can-
celled
projects, however, almost
always have planning
failures. The
most
common planning failures
include (1) not dealing
effectively with
changing
requirements, (2) not
anticipating staff hiring
and turnover
during
the project, (3) not
allotting time for detailed
requirements
analysis,
(4) not allotting sufficient
time for formal inspections,
test-
ing,
and defect repairs, and
(5) essentially ignoring
risks until they
actually
occur.
Large
projects in sophisticated companies will
usually have planning
support
provided by a project
office. The
project office will typically
be
staffed
by between 6 and 10 personnel
and will be well equipped
with
planning
tools, estimating tools,
benchmark data, tracking
tools, and
other
forms of data analysis tools
such as statistical
processors.
Because
project planning tools and
software cost-estimating tools
are
usually
provided by different vendors,
although they share data,
plan-
ning
and estimating are different
topics. As used by most
managers,
the
term planning
concerns
the network of activities
and the critical
path
required to complete a project.
The term estimating
concerns
cost
and
resource predictions, and
also quality predictions.
The two terms
are
related but not identical.
The two kinds of tools
are similar, but
not
identical.
Planning
and estimating are both
more credible if they are
supported
by
benchmark data collected
from similar projects.
Therefore all major
projects
should include analysis of
benchmarks from public
sources
such
as the International Software
Benchmarking Standards
Group
(ISBSG)
as well as internal benchmarks.
One of the major problems
of
the
software industry, as noted
during litigation, is that
accurate plans
and
estimates are often replaced
by impossible plans and
estimates
based
on business needs rather than on
team capabilities. Usually
these
impossible
demands come from clients or
senior executives, rather
than
from
the project managers.
However, without empirical
data from simi-
lar
projects, it is difficult to defend
plans and estimates no
matter how
accurate
they are. This is a subtle
risk factor that is not
always recog-
nized
during risk analysis
studies.
16.
Best Practices for Software
Project
Cost
Estimating
For
small applications of 1000 or
fewer function points,
manual estimates
and
automated estimates are
about equal in terms of
accuracy. However,
as
application sizes grow to 10,000 or
more function points,
automated esti-
mates
continue to be fairly accurate,
but manual estimates become
danger-
ously
inaccurate by leaving out
key activities, failing to
deal with changing
requirements,
and underestimating test and
quality control. Above
10,000
function
points in size, automated
estimating tools are the
best practice,
while
manual estimation is close to
professional malpractice.
Estimating
software projects in the
nominal 10,000-function
point
range
is a critical activity. The
current state of the art
for estimating
large
systems involves the use
of:
■ Formal sizing approaches for major deliverables based on function points
■ Secondary sizing approaches for code based on lines of code metrics
■ Tertiary sizing approaches using information such as screens, reports, and so on
■ Inclusion of reusable materials in the estimates
■ Inclusion of supply chains in the estimate if multiple companies are involved
■ Inclusion of travel costs if international or distributed teams are involved
■ Comparison of estimates to historical benchmark data from similar projects
■ Trained estimating specialists
■ Software estimating tools (CHECKPOINT, COCOMO, KnowledgePlan, Price-S, SEER, SLIM, SoftCost, etc.)
■ Inclusion of new and changing requirements in the estimate
■ Quality estimation as well as schedule and cost estimation
■ Risk prediction and analysis
■ Estimation of all project management tasks
■ Estimation of plans, specifications, and tracking costs
■ Sufficient historical benchmark data to defend an estimate against arbitrary changes
There
is some debate in the
software literature about
the merits of
estimating
tools versus manual
estimates by experts. However,
above
10,000
function points, there are
hardly any experts in the
United
States,
and most of them work
for the commercial software
estimating
companies.
The
reason for this is that in
an entire career, a project
manager might
deal
only with one or two really
large systems in the
10,000-function
point
range. Estimating companies, on
the other hand, typically
collect
data
from dozens of large
applications.
The
most common failing of
manual estimates for large
applications
is
that they are excessively
optimistic due to lack of
experience. While
coding
effort is usually estimated
fairly well, manual
estimates tend to
understate
paperwork effort, test
effort, and the impacts of
changing
requirements.
Even if manual estimates
were accurate for large
applica-
tions,
which they are not,
the cost of updating manual
estimates every few
weeks
to include changing requirements is
prohibitively expensive.
A
surprising observation from
litigation is that sometimes
accurate
estimates
are overruled and rejected
precisely because they are
accurate!
Clients
or top managers reject the
original and accurate
estimate, and
replace
it with an artificial estimate
made up out of thin air.
This is because
the
original estimate showed
longer schedules and higher
costs than the
clients
wanted, so they rejected it.
When this happens, the
project has
more
than an 80 percent chance of failure,
and about a 99 percent
chance
of
severe cost and schedule
overruns.
A
solution to this problem is to
support the estimate by
historical
benchmarks
from similar applications.
These can be acquired from
the
International
Software Benchmarking Standards
Group (ISBSG) or
from
other sources. Benchmarks
are perceived as being more
real than
estimates,
and therefore supporting
estimates with historical
bench-
marks
is a recommended best practice.
One problem with this
approach
is
that historical benchmarks above
10,000 function points are
rare, and
above
100,000 function points
almost nonexistent.
Failing
projects often understate
the size of the work to be
accom-
plished.
Failing projects often omit
to perform quality estimates
at
all.
Overestimating productivity rates is
another common reason
for
cost
and schedule overruns.
Underestimating paperwork costs is
also
a
common failing.
Surprisingly,
both successful and failing
projects are similar
when
estimating
coding schedules and costs.
But failing projects are
exces-
sively
optimistic in estimating testing
schedules and costs. Failing
proj-
ects
also tend to omit
requirements changes during
development, which
can
increase the size of the
project significantly.
Because
estimating is complex, trained
estimating specialists
are
the
best, although such
specialists are few. These
specialists always
utilize
one or more of the leading
commercial software estimating
tools
or
sometimes use proprietary
estimating tools. About half
of our leading
clients
utilize two commercial
software estimating tools
frequently and
may
own as many as half a dozen.
Manual estimates are never
adequate
for
major systems in the
10,000-function point
range.
Manual
estimates using standard
templates are difficult to
modify
when
assumptions change. As a result,
they often fall behind
the reality
of
ongoing projects with substantial
rates of change. My observations
of
the
overall results of using
manual estimates for
projects of more than
about
1000 function points is that
they tend to be incomplete
and err
on
the side of excessive
optimism.
For
large projects of more than
10,000 function points, manual
estimates
are
optimistic for testing, defect
removal schedules, and costs
more than 95
percent
of the time. Manual
estimating is hazardous for
large projects.
For
many large projects in large
companies, estimating
special-
ists
employed by the project
offices will do the bulk of
the cost esti-
mating
using a variety of automated
estimating tools. Often
project
offices
are equipped with several
estimating tools such as
COCOMO,
KnowledgePlan,
Price-S, SEER, SoftCost, SLIM,
and so on, and will
use
them
all and look for
convergence of results.
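A convergence check of this kind can be sketched in a few lines (Python; the tool names match those mentioned above, but every number is invented for illustration, and the 10 percent spread threshold is an arbitrary choice).

```python
from statistics import mean, pstdev

# Hypothetical cost estimates (in dollars) produced by several estimating tools
# for the same 10,000-function point project.
estimates = {
    "COCOMO": 21_500_000,
    "KnowledgePlan": 23_000_000,
    "SEER": 22_250_000,
    "SLIM": 24_000_000,
}

avg = mean(estimates.values())
spread = pstdev(estimates.values()) / avg
print(f"Mean estimate: ${avg:,.0f}; relative spread: {spread:.1%}")
print("Converged" if spread < 0.10 else "Investigate divergence before committing")
```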
As
previously discussed, even
accurate estimates may be
rejected
unless
they are supported by
historical data from similar
projects. In
fact,
even historical data may
sometimes be rejected and
replaced by
impossible
demands, although historical
data is more credible
than
unsupported
estimates.
For
small projects of fewer than
1000 function points, coding
remains
the
dominant activity. For these
smaller applications, automated
and
manual
cost estimates are roughly
equal in accuracy, although of
course
the
automated estimates are much
quicker and easier to
change.
17.
Best Practices for Software
Project
Risk
Analysis
Make
no mistake about it, large
software projects in the
10,000-function
point
range are among the
most risky business
endeavors in human
history.
The major risks for
large software projects
include
■ Outright cancellation due to excessive cost and schedule overruns
■ Outright termination due to downsizing or bankruptcy due to the poor economy
■ Cost overruns in excess of 50 percent compared with initial estimates
■ Schedule overruns in excess of 12 months compared with initial estimates
■ Quality control so inept that the software does not work effectively
■ Requirements changes in excess of 2 percent per calendar month
■ Executive or client interference that disrupts the project
■ Failure of clients to review requirements and plans effectively
■ Security flaws and vulnerabilities
■ Performance or speed too slow to be effective
■ Loss of key personnel from the project during development
■ The presence of error-prone modules in legacy applications
■ Patent violations or theft of intellectual property
■ External risks (fire, earthquakes, hurricanes, etc.)
■ Sale or acquisition of a business unit with similar software
From
analysis of depositions and
court documents in breach of
con-
tract
litigation, most failing
projects did not even
perform a formal risk
analysis.
In addition, quality control
and change management
were
inadequate.
Worse, project tracking was
so inept that major
problems
were
concealed rather than being
dealt with as they occurred.
Another
ugly
risk is that sometimes
fairly accurate estimates
were rejected and
replaced
by impossible schedule and
cost targets based on
business
needs
rather than team
capabilities.
The
state of the art of software
risk management is
improving.
Traditionally,
formal risk analysis by
trained risk experts
provided the
best
defense. However, risk
estimation tools and
software risk models
were
increasing in numbers and
sophistication circa 2008.
The new
Application
Insight tool from Computer
Aid Inc. and the
Software Risk
Master
prototype of the author are
examples of predictive tools
that can
quantify
the probabilities and
effects of various forms of
risk.
As
of 2009, the best practices
for software risk management
include
■ Early risk assessment even prior to development of full requirements
■ Early prediction of defect potentials and removal efficiency levels
■ Comparison of project risk patterns to similar projects
■ Acquisition of benchmarks from the ISBSG database
■ Early review of contracts and inclusion of quality criteria
■ Early analysis of change control methods
■ Early analysis of the value of the application due to the poor economy
The
importance of formal risk
management rises with
application
size.
Below 1000 function points,
risk management is usually
optional.
Above
10,000 function points, risk
assessments are mandatory.
Above
100,000
function points, failure to
perform careful risk
assessments is
evidence
of professional malpractice.
From
repeated observations during
litigation for breach of
contract,
effective
risk assessment is almost
never practiced on applications
that
later
end up in court. Instead
false optimism and
unrealistic schedules
and
cost estimates get the
project started in a bad
direction from the
first
day.
Unfortunately,
most serious risks involve a
great many variable
factors.
As
a result, combinatorial complexity
increases the difficulty of
thorough
risk
analysis. The unaided human
mind has trouble dealing
with prob-
lems
that have more than
two variables. Even
automated risk models
may
stumble if the number of
variables is too great, such
as more than
ten.
As seen by the failure of
economic risk models to
predict the financial
crisis
of 2008, risk analysis is
not a perfect field and
may miss serious
risks.
There are also false
positives, or risk factors
that do not actually
exist,
although these are fairly
rare.
18.
Best Practices for Software
Project Value Analysis
Software
value analysis is not very
sophisticated as this book is
written
in
2009. The value of software
applications prior to development
may not
even
be quantified, and if it is quantified,
then the value may be
suspect.
Software
applications have both
financial and intangible
value
aspects.
The financial value can be
subdivided into cost
reductions and
revenue
increases. The intangible
value is more difficult to
characterize,
but
deals with topics such as
customer satisfaction, employee
morale,
and
the more important topics of
improving human life and
safety or
improving
national defense.
Some
of the topics that need to
be included in value analysis
studies
include
Tangible Financial Value
■ Cost reductions from new application
■ Direct revenue from new application
■ Indirect revenue from new application due to factors such as hardware sales
■ "Drag along" or revenue increases in companion applications
■ Domestic market share increases from new application
■ International market share increases from new application
■ Competitive market share decreases from new application
■ Increases in numbers of users due to new features
■ User performance increases
■ User error reductions
Intangible Value
■ Potential harm if competitors instead of you build application
■ Potential harm if competitors build similar application
■ Potential gain if your application is first to market
■ Synergy with existing applications already released
■ Benefits to national security
■ Benefits to human health or safety
■ Benefits to corporate prestige
■ Benefits to employee morale
■ Benefits to customer satisfaction
What
the author has proposed is
the possibility of constructing
a
value
point metric
that would resemble function
point metrics in
struc-
ture.
The idea is to have a metric
that can integrate both
financial and
intangible
value topics and therefore
be used for
return-on-investment
calculations.
In
general, one financial value point would be equal to $1000. The
intangible
value points would have to
be mapped to a scale that
pro-
vided
approximate equivalence, such as
each customer added or
lost
would
be worth 10 value points.
Obviously, value associated with
saving
human
lives or national defense
would require a logarithmic
scale since
those
values may be
priceless.
Value
points could be compared with
cost per function point
for eco-
nomic
studies such as return on
investment and total cost of
ownership
(TCO).
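As a rough sketch of how such a metric might behave (purely illustrative of the proposal, with invented weights and a hypothetical project), financial value could convert at $1000 per value point, each user gained could count for 10 value points, and priceless categories such as saving lives could be handled on a logarithmic scale:

```python
import math

def value_points(financial_value_dollars, users_gained, lives_protected=0):
    """Hypothetical value point calculation blending financial and intangible value."""
    points = financial_value_dollars / 1_000               # $1000 per financial value point
    points += users_gained * 10                            # 10 value points per user gained
    if lives_protected:
        points += 1_000 * math.log10(1 + lives_protected)  # logarithmic scale for safety value
    return points

vp = value_points(financial_value_dollars=5_000_000, users_gained=2_000, lives_protected=50)
cost_per_fp = 1_200          # illustrative development cost per function point
size_fp = 10_000             # illustrative application size
print(f"Value points: {vp:,.0f}; estimated cost: ${cost_per_fp * size_fp:,.0f}")
```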
19.
Best Practices for Canceling
or Turning
Around
Troubled Projects
Given
the fact that a majority of
large software projects run
late or are
cancelled
without ever being
completed, it is surprising that
the lit-
erature
on this topic is very
sparse. A few interesting
technical papers
exist,
but no full-scale books. Of
course there are many
books on soft-
ware
disasters and outright
failures, but they are
hardly best practice
discussions
of trying to rescue troubled
projects.
Unfortunately,
only a small percentage of
troubled projects can
be
rescued
and turned into successful
projects. The reasons for
this are
twofold:
First, troubled projects
usually have such bad
tracking of prog-
ress
that it is too late to
rescue the project by the
time the problems
sur-
face
to higher management or to clients.
Second, troubled projects
with
schedule
delays and cost overruns
steadily lose value.
Although such
projects
may have had a positive
value when first initiated,
by the time
of
the second or third cost
overrun, the value has
probably degraded so
much
that it is no longer cost-effective to
complete the application.
An
example
will clarify the
situation.
The
example shows an original
estimate and then three
follow-on
estimates
produced when senior
management was alerted to
the fact
that
the previous estimate was no
longer valid. The
application in ques-
tion
is an order entry system for
a large manufacturing company.
The
initial
planned size was 10,000
function points.
The
original cost estimate was
for $20 million, and
the original value
estimate
was for $50 million.
However, the value was
partly based upon
the
application going into
production in 36 months. Every
month of
delay
would lower the
value.
Estimate 1: January 2009
    Original size (function points)     10,000
    Original budget (dollars)           $20,000,000
    Original schedule (months)          36
    Original value (dollars)            $50,000,000
    Original ROI                        $2.50

Estimate 2: June 2010
    Predicted size (function points)    12,000
    Predicted costs (dollars)           $25,000,000
    Predicted schedule (months)         42
    Predicted value (dollars)           $45,000,000
    Predicted ROI                       $1.80
    Recovery possible

Estimate 3: June 2011
    Predicted size (function points)    15,000
    Predicted costs (dollars)           $30,000,000
    Predicted schedule (months)         48
    Predicted value (dollars)           $40,000,000
    Predicted ROI                       $1.33
    Recovery unlikely

Estimate 4: June 2012
    Predicted size (function points)    17,000
    Predicted costs (dollars)           $35,000,000
    Predicted schedule (months)         54
    Predicted value (dollars)           $35,000,000
    Predicted ROI                       $1.00
    Recovery impossible
As
can be seen, the steady
increase in creeping requirements
trig-
gered
a steady increase in development
costs and a steady
increase
in
development schedules. Since
the original value was
based in part
on
completion in 36 months, the
value eroded so that the
project was
no
longer viable. By the fourth
estimate, recovery was
unfeasible and
termination
was the only
choice.
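The erosion of return on investment is easy to reproduce from the estimates shown above (Python; the cost and value figures are the ones in the four estimates).

```python
# (cost, value) in dollars for each successive estimate of the order entry system.
estimates = {
    "Jan 2009": (20_000_000, 50_000_000),
    "Jun 2010": (25_000_000, 45_000_000),
    "Jun 2011": (30_000_000, 40_000_000),
    "Jun 2012": (35_000_000, 35_000_000),
}

for label, (cost, value) in estimates.items():
    roi = value / cost            # ROI expressed as value returned per dollar spent
    print(f"{label}: ROI = ${roi:.2f}")   # falls from $2.50 to $1.00
```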
The
truly best practice, of
course, would be to avoid
the situation by
means
of a careful risk analysis
and sizing study before
the application
started.
Once the project is under
way, best practices for
turnarounds
include
■ Careful and accurate status tracking
■ Re-estimation of schedules and costs due to requirements changes
■ Re-estimation of value at frequent intervals
■ Considering intangible value as well as internal rate of return and financial value
■ Using internal turnaround specialists (if available)
■ Hiring external turnaround consultants
■ Threatening litigation if the application is under contract
If the application has negative value and trying to turn it around is
unfeasible,
then best practices for
cancellation would
include
■ Mining the application for useful algorithms and business rules
■ Extracting potentially useful reusable code segments
■ Holding a formal "postmortem" to document what went wrong
■ Assembling data for litigation if the application was under contract
Unfortunately,
cancelled projects are
common, but usually don't
gen-
erate
much in the way of useful
data to avoid similar
problems in the
future.
Postmortems should definitely be
viewed as best practices
for
cancelled
projects.
One
difficulty in studying cancelled
projects is that no one
wants
to
spend the money to measure
application size in function
points.
However,
the advent of new
high-speed, low-cost function
point meth-
ods
means that the cost of
counting function points is
declining from
perhaps
$6.00 per function point
counted down to perhaps
$0.01 per
function
point counted. At a cost of a
penny per function point,
even a
100,000-function
point disaster can now be
quantified. Knowing
the
sizes
of cancelled projects will provide
new insights into software
eco-
nomics
and aid in forensic
analysis.
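A back-of-the-envelope calculation makes the difference vivid (Python; the two counting rates are the approximate figures cited above).

```python
size_fp = 100_000                 # a very large cancelled project
manual_rate = 6.00                # approximate dollars per function point counted manually
high_speed_rate = 0.01            # approximate dollars per function point with newer methods

print(f"Manual counting:     ${size_fp * manual_rate:,.0f}")
print(f"High-speed counting: ${size_fp * high_speed_rate:,.0f}")
```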
20.
Best Practices for Software
Project
Organization
Structures
Software project organization structures and software specialization are topics that have more opinions than facts associated with them. Many adherents of the "small team" philosophy claim that software applications developed by teams of six or fewer are superior in terms of quality and productivity. However, such small teams cannot develop really large applications.

As software projects grow in size, the number and kinds of specialists that are normally employed go up rapidly. With increases in personnel, organization structures become more complex, and communication channels increase geometrically. These larger groups eventually become so numerous and diverse that some form of project office is required to keep track of progress, problems, costs, and issues.
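The phrase "increase geometrically" can be made concrete with the usual channel-counting formula: n people have n(n-1)/2 potential person-to-person communication paths. The sketch below applies that formula to the staffing totals from Table 2-3 later in this section; the formula is standard, but treating every pair as an active channel is a simplifying assumption for illustration.

    # Minimal sketch: potential communication channels as team size grows.
    # Team sizes are the totals from Table 2-3; n*(n-1)/2 counts person-to-person pairs.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for size in (7, 83, 887):
        print(f"{size:>4} people -> {channels(size):>7} potential channels")
    # A 7-person team has 21 channels; an 887-person project has almost 393,000.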
A study performed by the author and his colleagues of software occupation groups in large corporations and government agencies identified more than 75 different specialties. Because software engineering is not a licensed profession with formal specialties, these specialists are seldom clearly identified in personnel records. Therefore on-site visits and discussions with local managers are needed to ascertain the occupations that are really used.
The
situation is made more
complex because some companies do
not
identify
specialists by job title or
form of work, but use a
generic title
such
as "member of the technical
staff" to encompass scores of
different
occupations.
Also
adding to the difficulty of
exploring software specialization
is
the
fact that some personnel
who develop embedded
software are not
software
engineers, but rather
electrical engineers, automotive
engi-
neers,
telecommunications engineers, or some
other type of
engineer.
In
many cases, these engineers
refuse to be called "software
engineers"
because
software engineering is lower in
professional status and not
a
recognized
professional engineering
occupation.
Consider
the differences in the
number and kind of personnel
who are
likely
to be used for applications of
1000 function points, 10,000
function
points,
and 100,000 function points.
For small projects of 1000
function
points,
generalists are the norm
and specialists are few. But
as applica-
tions
reach 10,000 and 100,000
function points, specialists
become more
important
and more numerous. Table
2-3 illustrates typical
staffing
patterns
for applications of three
sizes an order of magnitude
apart.
As
can easily be seen from
Table 2-3, the diversity of
occupations rises
rapidly
as application size increases.
For small applications,
generalists
predominate,
but for large systems,
various kinds of specialists
can top
one
third of the total team
size.
TABLE 2-3   Personnel Staffing Patterns for Software Projects

                               1000              10,000            100,000
Occupation Group               Function Points   Function Points   Function Points
Architect                                          1                  5
Configuration control                              2                  8
Database administration                            2                 10
Estimating specialist                              1                  3
Function point counters                            2                  5
Measurement specialist                             1                  5
Planning specialist                                1                  3
Project librarian                                  2                  6
Project manager                  1                 6                 75
Quality assurance                                  2                 12
Scrum master                                       3                  8
Security specialist                                1                  5
Software engineers               5                50                600
Technical writer                 1                 3                 12
Testers                                            5                125
Web designer                                       1                  5
TOTAL STAFF                      7                83                887
Function points per
  staff member                 142.86            120.48             112.74
Table 2-3 also illustrates why some methods such as Agile development do very well for small projects, but may not be a perfect match for large projects. As project sizes grow larger, it is hard to accommodate all of the various specialists into the flexible and cohesive team organizations that are the hallmark of the Agile approach.
For example, large software projects benefit from specialized organizations such as project offices, formal software quality assurance (SQA) organizations, formal testing groups, measurement groups, change management boards, and others as well. Specialized occupations that benefit large projects include architecture, security, database administration, configuration control, testing, and function point analysis.
Melding these diverse occupations into a cohesive and cooperating team for large software projects is not easy. Multiple departments and multiple specialists bring about a geometric increase in communication channels. As a result, a best practice for large software projects above 10,000 function points is a project office whose main role is
coordination of the various skills and activities that are a necessary part of large-system development. The simplistic Agile approach of small self-organizing teams is not effective above about 2,500 function points.
Another issue that needs examination is the span of control, or the number of employees reporting to a manager. For reasons of corporate policy, the average number of software employees who report to a manager in the United States is about eight. However, the observed range of employees per manager runs from 3 to more than 20.
Studies carried out by the author within IBM noted that having eight employees per manager tended to put more people in management than were really qualified to do managerial work well. As a result, planning, estimating, and other managerial functions were sometimes poorly performed. The study concluded that changing the average span of control from 8 to 11 would allow marginal managers to be reassigned to staff or technical positions. Cutting down on the number of departments would also reduce communication channels and allow managers to have more time with their own teams, rather than spending far too much time with other managers.
Even
worse, personality clashes
between managers and
technical
workers
sometimes led to the
voluntary attrition of good
technologists.
In
fact, when exit interviews
are examined, two
distressing problems
tend
to occur: (1) The best
people leave in the largest
numbers; and (2)
The
most common reason cited
for departure was "I don't
want to work
under
bad management."
Later in this book the pros and cons of small teams, large departments, and various spans of control will be discussed at more length, as will special topics such as pair programming.
21. Best Practices for Training Managers of Software Projects
When major delays or cost overruns for projects occur in the nominal 10,000-function point range, project management problems are always present. Conversely, when projects have high productivity and quality levels, good project management is always observed. The state of the art for project management on large projects includes knowledge of:
1. Sizing techniques such as function points
2. Formal estimating tools and techniques
3. Project planning tools and techniques
4. Benchmark techniques and sources of industry benchmarks
5. Risk analysis methods
6. Security issues and security vulnerabilities
7. Value analysis methods
8. Project measurement techniques
9. Milestone and cost tracking techniques
10. Change management techniques
11. All forms of software quality control
12. Personnel management techniques
13. The domain of the applications being developed
For the global software industry, it appears that project management has been a weak link and possibly the weakest link of all. For example, on failing projects, sizing by means of function points is seldom utilized, and formal estimating tools are not utilized. Although project planning tools may be used, projects often run late and over budget anyway. This indicates that the plans were deficient and omitted key assumptions such as the normal rate of requirements change, staff turnover, and delays due to high defect volumes found during testing.
The
roles of management in outsource
projects are more
complex
than
the roles of management for
projects developed internally. It
is
important
to understand the differences
between client
management
and
vendor
project management.
The
active work of managing the
project is that of the
vendor project
managers.
It is their job to create
plans and schedules, to
create cost
estimates,
to track costs, to produce
milestone reports, and to
alert the
client
directors and senior client
executives to the existence of
potential
problems.
The responsibility of the client director or senior client executive centers around facilitation, funding, and approval or rejection of plans and estimates produced by the vendor's project manager.
Facilitation
means
that the client director
will provide access for
the
vendor
to business and technical
personnel for answering
questions
and
gathering requirements. The
client director may also
provide to
the
vendor technical documents,
office space, and sometimes
tools and
computer
time.
Funding
means
that the client director,
after approval by
corporate
executives,
will provide the money to
pay for the
project.
Approval
means
that the client director
will consider proposals,
plans,
and
estimates created by the
vendor and either accept
them, reject
them,
or request that they be
modified and
resubmitted.
The main problems with failing projects seem to center around the approval role. Unfortunately, clients may be presented with a stream of optimistic estimates and schedule commitments by vendor project
management and asked to approve them. This tends to lead to cumulative overruns, and the reason for this deserves comment.
Once
a project is under way, the
money already spent on it will
have
no
value unless the project is
completed. Thus if a project is
supposed to
cost
$1 million, but has a cost
overrun that needs an additional
$100,000
for
completion, the client is
faced with a dilemma. Either
cancel the
project
and risk losing the
accrued cost of a million
dollars, or provide
an
additional 10 percent and
bring the project to
completion so that it
returns
positive value and results
in a working application.
If
this scenario is repeated
several times, the choices
become more
difficult.
If a project has accrued $5
million in costs and seems
to need
another
10 percent, both sides of
the dilemma are more
expensive. This
is
a key problem with projects
that fail. Each time a
revised estimate
is
presented, the vendor
asserts that the project is
nearing completion
and
needs only a small amount of
time and some additional
funds to
bring
it to full completion. This
can happen
repeatedly.
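The escalation pattern described here is easy to model: each time the vendor asks for "just 10 percent more," the sunk cost grows and the apparent cost of cancellation grows with it. The sketch below is an illustrative simulation only; the $1 million starting budget and the fixed 10 percent increments are assumptions chosen to mirror the example in the text, not data from any real project.

    # Minimal sketch: how repeated "only 10 percent more" requests accumulate.
    # The starting budget and the 10 percent increment are illustrative assumptions.
    budget = 1_000_000
    spent = budget
    for request in range(1, 6):
        spent += spent * 0.10            # vendor asks for 10 percent more
        overrun = (spent - budget) / budget
        print(f"request {request}: total spent ${spent:,.0f} "
              f"({overrun:.0%} over the original budget)")
    # After five such requests the project has cost about 61 percent more
    # than originally approved, yet each individual request looked small.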
All
corporations have funding
criteria for major
investments. Projects
are
supposed to return positive
value in order to be funded.
The value
can
consist of either revenue
increases, cost reductions, or
competitive
advantage.
A typical return on investment
(ROI) for a software project
in
the
United States would be about
3 to 1. That is, the project
should return
$3.00
in positive value for every
$1.00 that is spent on the
project.
During
the course of development
the accrued costs are
monitored. If
the
costs begin to exceed
planned budgets, then the
ROI for the project
will
be diminished. Unfortunately for
failing projects, the
ability of client
executives
to predict the ROI can be
damaged by inaccurate
vendor
estimating
methods and cost control
methods.
The root problem, of course, is that poor estimating methods are never realistic: the schedules and costs they produce are always optimistic. Unfortunately, it can take several iterations before the reality of this pattern emerges.
Each time a vendor presents revised estimates and schedules, there may be no disclosure to clients of internal problems and risks that the vendor is aware of. Sometimes this kind of problem does not surface until litigation occurs, when all of the vendor records have to be disclosed and vendor personnel are deposed.
The
bottom line is that training
of software managers needs to
be
improved
in the key tasks of
planning, estimating, status
reporting, cost
tracking,
and problem
reporting.
22. Best Practices for Training Software Technical Personnel
The software development and maintenance domains are characterized by workers who usually have a fairly strong work ethic and reasonable
competence in core activities such as detail design, programming, and unit testing. Many software personnel put in long hours and are fairly good at basic programming tasks. However, to be successful on specific 10,000-function point applications, some additional skill sets are needed--knowledge of the following:
1. Application domains
2. The database packages, forms, tools, and products
3. The skill sets of the subcontract companies
4. Joint application design (JAD) principles
5. Formal design inspections
6. Complexity analysis
7. All programming languages utilized
8. Security issues and security vulnerabilities (a weak area)
9. Performance issues and bottlenecks (a weak area)
10. Formal code inspections
11. Static analysis methods
12. Complexity analysis methods
13. Change control methods and tools
14. Performance measurement and optimization techniques
15. Testing methods and tools
When
software technical problems
occur, they are more
often related
to
the lack of specialized
knowledge about the
application domain or
about
specific technical topics
such as performance optimization
rather
than
to lack of knowledge of basic
software development
approaches.
There
may also be lack of
knowledge of key quality
control activities
such
as inspections, JAD, and
specialized testing approaches. In
general,
common
programming tasks are not
usually problems. The
problems
occur
in areas where special
knowledge may be needed,
which brings
up
the next topic.
23. Best Practices for Use of Software Specialists
In many human activities, specialization is a sign of technological maturity. For example, the practices of medicine, law, and engineering all encompass dozens of specialties. Software is not yet as sophisticated as the more mature professions, but specialization is now occurring in increasing numbers. An analysis of the demographics of large software
production companies, performed in a study commissioned by AT&T, found that from 20 to more than 90 specialized occupations now exist in the software industry.
What
is significant about specialization in
the context of
10,000
function
point projects is that
projects with a full complement of a
dozen
or
more specialists have a
better chance of success than
those relying
upon
generalists.
The state of the art of specialization for nominal 10,000-function point projects would include the following specialist occupation groups:
1. Configuration control specialists
2. Cost estimating specialists
3. Customer liaison specialists
4. Customer support specialists
5. Database administration specialists
6. Data quality specialists
7. Decision support specialists
8. Development specialists
9. Domain knowledge specialists
10. Security specialists
11. Performance specialists
12. Education specialists
13. Function point specialists (certified)
14. Graphical user interface (GUI) specialists
15. Human factors specialists
16. Integration specialists
17. Joint application design (JAD) specialists
18. SCRUM masters (for Agile projects)
19. Measurement specialists
20. Maintenance specialists for postrelease defect repairs
21. Maintenance specialists for small enhancements
22. Outsource evaluation specialists
23. Package evaluation specialists
24. Performance specialists
25. Project cost estimating specialists
26. Project planning specialists
27. Quality assurance specialists
28. Quality measurement specialists
29. Reusability specialists
30. Risk management specialists
31. Standards specialists
32. Systems analysis specialists
33. Systems support specialists
34. Technical writing specialists
35. Testing specialists
36. Tool specialists for development and maintenance workbenches
Senior project managers need to know what specialists are required and should take active and energetic steps to find them. The domains where specialists usually outperform generalists include technical writing, testing, quality assurance, database design, maintenance, and performance optimization. For some tasks such as function point analysis, certification examinations are a prerequisite to doing the work at all. Really large projects also benefit from using planning and estimating specialists.
Both
software development and
software project management
are
now
too large and complex
for generalists to know
everything needed
in
sufficient depth. The
increasing use of specialists is a
sign that the
body
of knowledge of software engineering
and software
management
is
expanding, which is a beneficial
situation.
For the past 30 years, U.S. and European companies have been outsourcing software development, maintenance, and help-desk activities to countries with lower labor costs such as India, China, Russia, and a number of others. In general it is important that outsource vendors utilize the same kinds of methods as in-house development, and in particular that they achieve excellence in quality control.

Interestingly, a recent study of outsource practices by the International Software Benchmarking Standards Group (ISBSG) found that outsource projects tend to use more tools and somewhat more sophisticated planning and estimating methods than similar projects produced in-house. This is congruent with the author's own observations.
24. Best Practices for Certifying Software Engineers, Specialists, and Managers
As this book is written in 2008 and 2009, software engineering itself and its many associated specialties are not fully defined. Of the 90 or so occupations noted in the overall software domain, certification is
possible for only about a dozen topics. For these topics, certification is voluntary and has no legal standing.
What would benefit the industry would be to establish a joint certification board that would include representatives from the major professional associations such as the ASQ, IEEE (Institute of Electrical & Electronics Engineers), IIBA (International Institute of Business Analysis), IFPUG (International Function Point Users Group), PMI (Project Management Institute), SEI (Software Engineering Institute), and several others. The joint certification board would identify the specialist categories and create certification criteria. Among these criteria might be examinations or certification boards, similar to those used for medical specialties.
As this book is written, voluntary certification is possible for these topics:
■ Function point analysis (IFPUG)
■ Function point analysis (COSMIC)
■ Function point analysis (Finnish)
■ Function point analysis (Netherlands)
■ Microsoft certification (various topics)
■ Six Sigma green belt
■ Six Sigma black belt
■ Certified software project manager (CSPM)
■ Certified software quality engineer (CSQE)
■ Certified software test manager (CSTM)
■ Certified software test professional (CSTP)
■ Certified software tester (CSTE)
■ Certified scope manager (CSM)
These forms of certification are offered by different organizations that in general do not recognize certification other than their own. There is no central registry for all forms of certification, nor are there standard examinations.
As a result of the lack of demographic data about those who are registered, there is no solid information as to what percentage of various specialists and managers are actually certified. For technical skills such as function point analysis, probably 80 percent of consultants and employees who count function points are certified. The same or even higher is true for Six Sigma. However, for testing, for project management, and for quality assurance, it would be surprising if the percentage of those certified were higher than about 20 percent to 25 percent.
It is interesting that there is overall certification neither for "software engineering" nor for "software maintenance engineering" as a profession.
Some of the technical topics that might be certified if the industry moves to a central certification board would include
■ Software architecture
■ Software development engineering
■ Software maintenance engineering
■ Software quality assurance
■ Software security assurance
■ Software performance analysis
■ Software testing
■ Software project management
■ Software scope management
Specialized skills would also need certification, including but not limited to:
■ Six Sigma for software
■ Quality function deployment (QFD)
■ Function point analysis (various forms)
■ Software quality measurements
■ Software productivity measurements
■ Software economic analysis
■ Software inspections
■ SEI assessments
■ Vendor certifications (Microsoft, Oracle, SAP, etc.)
The
bottom line as this book is
written is that software
certification
is
voluntary, fragmented, and of
unknown value to either
practitioners
or
to the industry. Observations
indicate that for technical
skills such
as
function point analysis,
certified counters are
superior in accuracy to
self-taught
practitioners. However, more
study is needed on the
benefits
of
software quality and test
certification.
What
is really needed though is
coordination of certification and
the
establishment
of a joint certification board
that would consider all
forms
of
software specialization. The
software engineering field
would do well
to
consider how specialties are
created and governed in
medicine and
law,
and to adopt similar
policies and
practices.
For
many forms of certification, no
quantitative data is
available
that
indicates that certification
improves job performance.
However,
for
some forms of certification,
enough data is available to
show tangible
improvements:
1. Testing performed by certified testers is about 5 percent higher in defect removal efficiency than testing performed by uncertified testers.
2. Function point analysis performed by certified function point counters seldom varies by more than 5 percent when counting trial applications. Counts of the same trials by uncertified counters vary by more than 50 percent.
3. Applications developed where certified Six Sigma black belts are part of the development team tend to have lower defect potentials by about 1 defect per function point, and higher defect removal efficiency levels by about 7 percent. (Compared against U.S. averages of 5.0 defects per function point and 85 percent defect removal efficiency; the arithmetic behind both measures is sketched after this list.)
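Both measures in item 3 reduce to simple arithmetic: defect potential is expressed as defects per function point, and delivered defects are the total potential multiplied by the share of defects that removal activities miss. The sketch below applies the U.S. average figures quoted above (5.0 defects per function point, 85 percent removal efficiency) and an assumed improvement of 1.0 defect per function point and 7 percentage points, purely to illustrate the calculation; it is not a quality model from this book.

    # Minimal sketch: delivered defects from defect potential and removal efficiency.
    # The U.S. averages come from the text; the "black belt" deltas are the
    # improvements quoted in item 3 and are used here only for illustration.
    def delivered_defects(function_points, potential_per_fp, removal_efficiency):
        total_potential = function_points * potential_per_fp
        return total_potential * (1.0 - removal_efficiency)

    size = 10_000  # function points
    baseline = delivered_defects(size, 5.0, 0.85)
    improved = delivered_defects(size, 4.0, 0.92)
    print(f"U.S. average:     {baseline:,.0f} delivered defects")   # 7,500
    print(f"With black belts: {improved:,.0f} delivered defects")   # 3,200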
Unfortunately,
as this book is written,
other forms of
certification
are
ambiguous in terms of quantitative
results. Obviously, those
who
care
enough about their work to
study and successfully pass
written
examinations
tend to be somewhat better
than those who don't,
but this
is
difficult to show using
quantified data due to the
very sparse sets of
data
available.
What would benefit the industry would be for software to follow the pattern of the American Medical Association and have a single organization responsible for identifying and certifying specialists, rather than independent and sometimes competing organizations.
25. Best Practices for Communication During Software Projects
Large software applications in the nominal 10,000-function point domain are always developed by teams that number from at least 50 to more than 100 personnel. In addition, large applications are always built for dozens or even hundreds of users, many of whom will be using the application in specialized ways.
It
is not possible to build any
large and complex product
where
dozens
of personnel and dozens of
users need to share
information
unless
communication channels are
very well planned and
utilized.
Communication
needs are even greater
when projects involve
multiple
subcontractors.
As of 2009, new kinds of virtual environments where participants interact using avatars in a virtual-reality world are starting to enter
the business domain. Although such uses are experimental in 2009, they are rapidly approaching the mainstream. As air travel costs soar and the economy sours, methods such as virtual communication are likely to expand rapidly. Within ten years, such methods might well outnumber live meetings and live conferences.
Also
increasing in use for
interproject communication are "wiki
sites,"
which
are collaborative networks
that allow colleagues to
share ideas,
documents,
and work products.
The state of the art for communication on a nominal 10,000-function point project includes all of the following:
■ Monthly status reports to corporate executives from project management
■ Weekly progress reports to clients by vendor project management
■ Daily communication between clients and the prime contractor
■ Daily communication between the prime contractor and all subcontractors
■ Daily communication between developers and development management
■ Use of virtual reality for communication across geographic boundaries
■ Use of "wiki" sites for communication across geographic boundaries
■ Daily "scrum" sessions among the development team to discuss issues
■ Full e-mail support among all participants
■ Full voice support among all participants
■ Video conference communication among remote locations
■ Automated distribution of documents and source code among developers
■ Automated distribution of change requests to developers
■ Automated distribution of defect reports to developers
■ Emergency or "red flag" communication for problems with a material impact
For failing projects, many of these communication channels were either not fully available or had gaps that tended to interfere with both communication and progress. For example, cross-vendor communications may be inadequate to highlight problems. In addition, the status reports to executives may gloss over problems and conceal them, rather than highlight causes for projected cost and schedule delays.
The fundamental purpose of good communications was encapsulated in a single phrase by Harold Geneen, the former chairman of ITT Corporation: "NO SURPRISES."
From reviewing the depositions and court documents of breach of contract litigation, it is alarming that so many projects drift along with inadequate status tracking and problem reporting. Usually projects that are cancelled or that have massive overruns do not even start to deal with the issues until it is far too late to correct them. By contrast, successful projects have fewer serious issues to deal with, more effective tracking, and much more effective risk abatement programs. When problems are first reported, successful projects immediately start task forces or risk-recovery activities.
26. Best Practices for Software Reusability
At least 15 different software artifacts lend themselves to reusability. Unfortunately, much of the literature on software reuse has concentrated only on reusing source code, with a few sparse and intermittent articles devoted to other topics such as reusable design.

The state of the art of developing nominal 10,000-function point projects includes substantial volumes of reusable materials. Following are the 15 artifacts for software projects that are potentially reusable:
1. Architecture
2. Requirements
3. Source code (zero defects)
4. Designs
5. Help information
6. Data
7. Training materials
8. Cost estimates
9. Screens
10. Project plans
11. Test plans
12. Test cases
13. Test scripts
14. User documents
15. Human interfaces
Not only are there many reusable artifacts, but also for reuse to be both a technical and business success, quite a lot of information needs to be recorded:
■ All customers or users in case of a recall
■ All bugs or defects in reusable artifacts
■ All releases of reusable artifacts
■ Results of certification of reusable materials
■ All updates or changes
Also, buggy materials cannot safely be reused. Therefore extensive quality control measures are needed for successful reuse, including but not limited to:
■ Inspections of reusable text documents
■ Inspections of reusable code segments
■ Static analysis of reusable code segments
■ Testing of reusable code segments
■ Publication of certification certificates for reusable materials
Successful
software reuse involves much
more than simply copying
a
code
segment and plugging it into
a new application.
One
of the common advantages of
using an outsource vendor is
that
these
vendors are often very
sophisticated in reuse and
have many
reusable
artifacts available. However,
reuse is most often
encountered
in
areas where the outsource
vendor is a recognized expert.
For example,
an
outsource vendor that
specializes in insurance applications
and has
worked
with a dozen property and
casualty insurance companies
prob-
ably
has accumulated enough
reusable materials to build
any insurance
application
with at least 50 percent reusable
components.
Software
reuse is a key factor in
reducing costs and schedules
and in
improving
quality. However, reuse is a
two-edged sword. If the
quality
levels
of the reusable materials
are good, then reusability
has one of the
highest
returns on investment of any
known software technology. But
if
the
reused materials are filled
with bugs or errors, the ROI
can become
very
negative. In fact, reuse of
high quality or poor quality
materials
tends
to produce the greatest
extreme in the range of ROI of
any known
technology:
plus or minus 300 percent
ROIs have been
observed.
Software reusability is often cited as a panacea that will remedy software's sluggish schedules and high costs. This may be true theoretically, but reuse will have no practical value unless the quality levels of the reusable materials approach zero defects.
A
newer form of reuse termed
service-oriented
architecture (SOA)
has
appeared
within the past few
years. The SOA approach
deals with reuse
by
attempting to link fairly large
independent functions or
"services"
into
a cohesive application. The
functions themselves can
also operate in
a
stand-alone mode and do not
require modification. SOA is an
intrigu-
ing
concept that shows great
promise, but as of 2009, the
concepts are
more
theoretical than real. In
any event, empirical data on
SOA costs,
quality,
and effectiveness have not
yet become available.
Software reusability to date has not yet truly fulfilled the high expectations and claims made for it. Neither object-oriented class libraries nor other forms of reuse such as commercial enterprise resource planning (ERP) packages have been totally successful.
To
advance reuse to the status
of really favorable economics,
better
quality
for reusable materials and
better security control for
reusable
materials
need to be more widely
achieved. The technologies
for accom-
plishing
this appear to be ready, so
perhaps within a few years,
reuse
will
finally achieve its past
promise of success.
To put software on a sound economic basis, the paradigm for software needs to switch from software development using custom code to software construction using standard reusable components. In 2009, very few applications are constructed from standard reusable components. Part of the reason is that software quality control is not good enough for many components to be used safely. Another part of the reason is the lack of standard architectures for common application types and the lack of standard interfaces for connecting components. The average volume of high-quality reusable material in typical applications today is less than 25 percent. What is needed is a step-by-step plan that will raise the volume of high-quality reusable material to more than 85 percent on average and to more than 95 percent for common application types.
27. Best Practices for Certification of Reusable Materials
Reuse
of code, specifications, and
other material is also a
two-edged
sword.
If the materials approach
zero-defect levels and are
well devel-
oped,
then they offer the
best ROI of any known
technology. But if the
reused
pieces are buggy and
poorly developed, they only
propagate bugs
through
dozens or hundreds of applications. In
this case, software
reuse
has
the worst negative ROI of
any known technology.
Since reusable material is available from hundreds of sources of unknown reliability, it is not yet safe to make software reuse a mainstream development practice. Further, reusable material, or at least source code, may have security flaws or even deliberate "back doors"
inserted by hackers, who then offer the materials as a temptation to the unwary.
This brings up an important point: what must happen for software reuse to become safe, cost-effective, and valuable to the industry?

The first need is a central certification facility or multiple certification facilities that can demonstrate convincingly that candidates for software reuse are substantially bug free and also free from viruses, spyware, and keystroke loggers. Probably an industry-funded nonprofit organization would be a good choice for handling certification. An organization similar to Underwriters Laboratories or Consumer Reports comes to mind.
But more than just certification of source code is needed. Among the other topics that are precursors to successful reuse would be
■ A formal taxonomy of reusable objects and their purposes
■ Standard interface definitions for linking reusable objects
■ User information and HELP text for all reusable objects
■ Test cases and test scripts associated with all reusable objects
■ A repository of all bug reports against reusable objects
■ Identification of the sources of reusable objects
■ Records of all changes made to reusable objects
■ Records of all variations of reusable objects
■ Records of all distributions of reusable objects in case of recalls
■ A charging method for reusable material that is not distributed for free
■ Warranties for reusable material against copyright and patent violations
In
other words, if reuse is
going to become a major
factor for software,
it
needs to be elevated from
informal and uncontrolled
status to formal
and
controlled status. Until this
can occur, reuse will be of
some value,
but
hazardous in the long run.
It would benefit the
industry to have
some
form of nonprofit organization
serve as a central repository
and
source
of reusable material.
Table
2-4 shows the approximate
development economic value of
high-
quality
reusable materials that have
been certified and that
approach
zero-defect
levels. The table assumes
reuse not only of code,
but also of
architecture,
requirements, design, test
materials, and
documentation.
The example in Table 2-4 is a fairly large system of 10,000 function points. This is the size where normally failures top 50 percent, productivity sags, and quality is embarrassingly bad.
TABLE 2-4   Development Value of High-Quality Reusable Materials
Application size (function points) = 10,000

Reuse             Effort     Prod.       Schedule   Defect      Removal      Delivered
Percent   Staff   (months)   (FP/Mon.)   (months)   Potential   Efficiency   Defects
0.00%       67     2,654       3.77       39.81      63,096      80.00%       12,619
10.00%      60     2,290       4.37       38.17      55,602      83.00%        9,452
20.00%      53     1,942       5.15       36.41      48,273      87.00%        6,276
30.00%      47     1,611       6.21       34.52      41,126      90.00%        4,113
40.00%      40     1,298       7.70       32.45      34,181      93.00%        2,393
50.00%      33     1,006       9.94       30.17      27,464      95.00%        1,373
60.00%      27       736      13.59       27.59      21,012      97.00%          630
70.00%      20       492      20.33       24.60      14,878      98.00%          298
80.00%      13       279      35.86       20.91       9,146      98.50%          137
90.00%       7       106      94.64       15.85       3,981      99.00%           40
100.00%      4        48     208.33       12.00         577      99.50%            3
As can be seen, as the percentage of reuse increases, both productivity and quality levels improve rapidly, as do development schedules.
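Two of the relationships inside Table 2-4 can be verified directly from its own columns: productivity is application size divided by effort, and delivered defects are the defect potential multiplied by the share of defects that removal misses. The sketch below recomputes one row as a worked example; it simply replays the table's own numbers and is not a model of how those numbers were originally estimated.

    # Minimal sketch: checking the 50 percent reuse row of Table 2-4.
    size_fp = 10_000
    effort_months = 1_006
    defect_potential = 27_464
    removal_efficiency = 0.95

    productivity = size_fp / effort_months                    # about 9.94 FP per month
    delivered = defect_potential * (1 - removal_efficiency)   # about 1,373 defects
    print(f"productivity = {productivity:.2f} FP/month, "
          f"delivered defects = {delivered:,.0f}")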
No
other known development
technology can achieve such
a profound
change
in software economics as can
high-quality reusable
materials.
This
is the goal of object-oriented
development and service-oriented
archi-
tecture.
So long as software applications
are custom-coded and
unique,
improvement
in productivity and quality will be
limited to gains of
per-
haps
25 percent to 30 percent. For
really major gains of
several hundred
percent,
high-quality reuse appears to be
the only viable
option.
Not
only would high-quality
reusable material benefit
develop-
ment,
but maintenance and
enhancement work would also
improve.
However,
there is a caveat with maintenance.
Once a reusable com-
ponent
is installed in hundreds or thousands of
applications, it is
mandatory
to be able to recall it,
update it, or fix any latent
bugs that
are
reported. Therefore both
certification and sophisticated
usage
records
are needed to achieve maximum
economic value. In this
book
maintenance
refers
to defect repairs. Adding
new features is
called
enhancement.
Table
2-5 illustrates the
maintenance value of reusable
materials.
Both
development staffing and
maintenance staffing have
strong cor-
relations
to delivered defects, and
therefore would be reduced as
the
volume
of certified reusable materials
goes up.
Customer
support is also affected by
delivered defects, but
other
factors
also impact support ratios.
Over and above delivered
defects,
customer
support is affected by numbers of
users and by numbers
of
installations
of the application.
TABLE 2-5   Maintenance Value of High-Quality Reusable Materials
Application size (function points) = 10,000

Reuse             Effort     Prod.       Schedule   Defect      Removal      Latent
Percent   Staff   (months)   (FP/Mon.)   (months)   Potential   Efficiency   Defects
0.00%       13      160       62.50       12.00      12,619      80.00%       2,524
10.00%      12      144       69.44       12.00       9,452      83.00%       1,607
20.00%      11      128       78.13       12.00       6,276      87.00%         816
30.00%       9      112       89.29       12.00       4,113      90.00%         411
40.00%       8       96      104.17       12.00       2,393      93.00%         167
50.00%       7       80      125.00       12.00       1,373      95.00%          69
60.00%       5       64      156.25       12.00         630      97.00%          19
70.00%       4       48      208.33       12.00         298      98.00%           6
80.00%       3       32      312.50       12.00         137      98.50%           2
90.00%       1       16      625.00       12.00          40      99.00%           0
100.00%      1       12      833.33       12.00           3      99.50%           0
In
general, one customer support
person is assigned for about
every
1000
customers. (This is not an
optimum ratio and explains
why it is so
difficult
to reach customer support
without long holds on
telephones.) A
ratio
of one support person for
about every 150 customers
would reduce
wait
time, but of course raise
costs. Because customer
support is usually
outsourced
to countries with low labor costs,
the monthly cost is
assumed
to
be only $4,000 instead of
$10,000.
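The support ratios quoted here translate directly into headcount: divide the user population by the number of users each support person is expected to cover. The sketch below applies that division to the 25,000-user example used for Table 2-6; the 1-per-1,000 and 1-per-150 ratios and the $4,000 monthly cost come from the paragraph above, while rounding up to whole people is an added assumption.

    # Minimal sketch: support staff implied by the ratios in the text,
    # for the 25,000-user example used in Table 2-6.
    import math

    users = 25_000
    for users_per_person in (1_000, 150):
        staff = math.ceil(users / users_per_person)
        monthly_cost = staff * 4_000     # $4,000 per support person per month
        print(f"1 per {users_per_person:>5} users -> {staff:>3} staff, "
              f"${monthly_cost:,} per month")
    # The 1-per-1,000 ratio yields the 25-person support staff shown in the
    # 0 percent reuse row of Table 2-6.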
Small
companies with few customers
tend to be better in
customer
support
than large companies with
thousands of customers, because
the
support
staffs are not saturated
for small companies.
Table
2-6 shows approximate values
for customer support as
reuse
goes
up. Table 2-6 assumes
500 install sites and
25,000 users.
Because most customer support calls deal with quality issues, improving quality would actually have very significant impact on support costs, and would probably improve customer satisfaction and reduce wait times as well.
Enhancements
would also benefit from
certified reusable
materials.
In
general, enhancements average
about 8 percent per year;
that is,
if
an application is 10,000 function
points at delivery, then
about 800
function
points would be added the
next year. This is not a
constant
value,
and enhancements vary, but 8
percent is a useful
approximation.
Table
2-7 shows the effects on
enhancements for various
percentages
of
reusable material.
TABLE 2-6   Customer Support Value of High-Quality Reusable Materials
Application size (function points) = 10,000
Installations = 500
Application users = 25,000

Reuse             Effort     Prod.       Schedule   Defect      Removal      Latent
Percent   Staff   (months)   (FP/Mon.)   (months)   Potential   Efficiency   Defects
0.00%       25      300       33.33       12.00      12,619      80.00%       2,524
10.00%      23      270       37.04       12.00       9,452      83.00%       1,607
20.00%      20      243       41.15       12.00       6,276      87.00%         816
30.00%      18      219       45.72       12.00       4,113      90.00%         411
40.00%      16      197       50.81       12.00       2,393      93.00%         167
50.00%      15      177       56.45       12.00       1,373      95.00%          69
60.00%      13      159       62.72       12.00         630      97.00%          19
70.00%      12      143       69.69       12.00         298      98.00%           6
80.00%      11      129       77.44       12.00         137      98.50%           2
90.00%      10      116       86.04       12.00          40      99.00%           0
100.00%      9      105       95.60       12.00           3      99.50%           0
TABLE 2-7   Enhancement Value of High-Quality Reusable Materials
Application size (function points) = 10,000
Enhancements (function points) = 800
Years of usage = 10
Installations = 1,000
Application users = 50,000

Reuse             Effort     Prod.       Schedule   Defect      Removal      Latent
Percent   Staff   (months)   (FP/Mon.)   (months)   Potential   Efficiency   Defects
0.00%        6       77      130.21       12.00       3,046      80.00%         609
10.00%       5       58      173.61       12.00       2,741      83.00%         466
20.00%       4       51      195.31       12.00       2,437      87.00%         317
30.00%       4       45      223.21       12.00       2,132      90.00%         213
40.00%       3       38      260.42       12.00       1,828      93.00%         128
50.00%       3       32      312.50       12.00       1,523      95.00%          76
60.00%       2       26      390.63       12.00       1,218      97.00%          37
70.00%       2       19      520.83       12.00         914      98.00%          18
80.00%       1       13      781.25       12.00         609      98.50%           9
90.00%       1        6    1,562.50       12.00         305      99.00%           3
100.00%      1        4    2,500.00       12.00           2      99.50%           0
Although
total cost of ownership
(TCO) is largely driven by
defect
removal
and repair costs, there
are other factors, too.
Table 2-8 shows a
hypothetical
result for development plus
ten years of usage for 0
percent
reuse
and 80 percent reuse.
Obviously, Table 2-8
oversimplifies TCO
calculations,
but the intent is to show
the significant economic
value of
certified
high-quality reusable
materials.
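Table 2-8's totals can be approximately rebuilt from the effort figures in Tables 2-4 through 2-7, using the $10,000 monthly staff cost and $4,000 monthly support cost stated with the table: development effort is a one-time cost, while enhancement, maintenance, and support recur for each of the ten years of use. The sketch below replays that arithmetic for the 0 percent reuse case; small differences from the printed table come from rounding in the published effort columns, and this cost model is an approximation inferred from the tables, not a formula stated in the text.

    # Minimal sketch: approximate 0 percent reuse TCO from the earlier tables.
    monthly_cost = 10_000       # development, enhancement, maintenance staff
    support_cost = 4_000        # customer support staff
    years = 10

    development = 2_654 * monthly_cost          # Table 2-4 effort (one time)
    enhancement = 77 * years * monthly_cost     # Table 2-7 effort per year
    maintenance = 160 * years * monthly_cost    # Table 2-5 effort per year
    support     = 300 * years * support_cost    # Table 2-6 effort per year

    total = development + enhancement + maintenance + support
    final_size = 10_000 + 800 * years           # size after ten years of growth
    print(f"approximate TCO: ${total:,}")                   # close to $62.2 million
    print(f"per function point: ${total / final_size:,.2f}")  # close to $3,456.69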
Constructing
applications that are 100
percent reusable is not
likely
to
be a common event. However,
experiments indicate that
almost any
application
could achieve reuse levels
of 85 percent to 90 percent if
certified
reusable components were
available. A study done some
years
ago
at IBM for accounting applications
found that about 85 percent
of
the
code in the applications was
common and generic and
involved the
logistics
of putting accounting onto a
computer. About 15 percent of
the
code
actually dealt with accounting
per se.
Not
only code but also
architecture, requirements, design,
test mate-
rials,
user manuals, and other
items need to be part of the
reusable
package,
which also has to be under
strict configuration control
and of
course
certified to zero-defect levels
for optimum value. Software
reuse
has
been a promising technology for
many years, but has
never achieved
its
true potential, due
primarily to mediocre quality
control. If service-
oriented
architecture (SOA) is to fulfill
its promise, then it must
achieve
excellent
quality levels and thereby
allow development to make
full use
of
certified reusable
components.
In addition to approaching zero-defect quality levels, certified components should also be designed and developed to be much more secure against hacking, viruses, botnets, and other kinds of security attacks.
TABLE 2-8   Total Cost of Ownership of High-Quality Reusable Materials
(0% and 80% reuse volumes)
Application size (function points)    = 10,000
Annual enhancements (function points) = 800
Monthly cost                          = $10,000
Support cost                          = $4,000
Useful life after deployment          = 10 years

                         0% Reuse        80% Reuse        Difference
Development              $26,540,478      $2,788,372      $23,752,106
Enhancement               $7,680,000      $1,280,000       $6,400,000
Maintenance              $16,000,000      $3,200,000      $12,800,000
Customer support         $12,000,000      $5,165,607       $6,834,393
TOTAL COST               $62,220,478     $12,433,979      $49,786,499
TCO Cost per
  Function Point           $3,456.69         $690.78        $2,765.92
In fact, a strong case can be made that developing reusable materials with better boundary controls and more secure programming languages, such as E, would add even more value to certified reusable objects.

As the global economy descends into severe recession, every company will be seeking methods to lower costs. Since software costs historically have been large and difficult to control, it may be that the recession will attract renewed interest in software reuse. To be successful, both quality and security certification will be a critical part of the process.
28. Best Practices for Programming or Coding
Programming or coding remains the central activity of software development, even though it is no longer the most expensive. In spite of the promise of software reuse, object-oriented (OO) development, application generators, service-oriented architecture (SOA), and other methods that attempt to replace manual coding with reusable objects, almost every software project in 2009 depends upon coding to a significant degree.

Because programming is a manual and error-prone activity, the continued reliance upon programming or coding places software among the most expensive of all human-built products.
Other
expensive products whose
cost structures are also
driven by
the
manual effort of skilled
craftspeople include constructing
12-meter
yachts
and constructing racing cars
for Formula One or
Indianapolis.
The
costs of the custom-built
racing yachts are at least
ten times higher
than
normal class-built yachts of
the same displacements. Indy
cars or
Formula
1 cars are close to 100
times more costly than
conventional
sedans
built on assembly lines and
that include reusable
components.
One
unique feature of programming
that is unlike any other
engineer-
ing
discipline is the existence of
more than 700 different
programming
languages.
Not only are there hundreds
of programming languages,
but
also some large software
applications may have as
many as 12 to
15
languages used at the same
time. This is partly due to
the fact that
many
programming languages are
specialized and have a
narrow focus.
Therefore
if an application covers a wide
range of functions, it may
be
necessary
to include several languages.
Examples of typical
combina-
tions
include COBOL and SQL
from the 1970s and
1980s; Visual Basic
and
C from the 1990s; and
Java Beans and HTML from
the current
century.
New
programming languages have
been developed at a rate of
more
than
one a month for the past 30
years or so. The current
table of pro-
gramming
languages maintained by Software
Productivity Research
(SPR)
now contains more than
700 programming languages
and con-
tinues
to grow at a dozen or more
languages per calendar year.
Refer to
www.SPR.com
for additional
information.
Best practices for programming circa 2009 include the following topics:
■ Selection of programming languages to match application needs
■ Utilization of structured programming practices for procedural code
■ Selection of reusable code from certified sources, before starting to code
■ Planning for and including security topics in code, including secure languages such as E
■ Avoidance of "spaghetti bowl" code
■ Minimizing cyclomatic complexity and essential complexity (see the sketch after this list)
■ Including clear and relevant comments in the source code
■ Using automated static analysis tools for Java and dialects of C
■ Creating test cases before or concurrently with the code
■ Formal code inspections of all modules
■ Re-inspection of code after significant changes or updates
■ Renovating legacy code before starting major enhancements
■ Removing error-prone modules from legacy code
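Cyclomatic complexity, mentioned in the list above, is the number of independent paths through a routine; for structured code it equals the number of binary decision points plus one. The sketch below is a deliberately crude illustration that approximates the measure by counting branch keywords in a piece of Python source; real static analysis tools build a control-flow graph rather than counting keywords, so this is only meant to make the idea concrete.

    # Minimal sketch: rough cyclomatic complexity as decision points + 1.
    # Keyword counting is a simplification; real tools analyze the control-flow graph.
    import re

    def rough_cyclomatic_complexity(source: str) -> int:
        decisions = len(re.findall(r"\b(if|elif|for|while|and|or|except|case)\b", source))
        return decisions + 1

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"
    """
    print(rough_cyclomatic_complexity(sample))   # 3: two decisions plus one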
The
U.S. Department of Commerce
does not classify
programming as
a
profession, but rather as a
craft or skilled trade. Good
programming
also
has some aspects of an art
form. As a result, individual
human skill
and
careful training exert a
very strong influence on the
quality and
suitability
of software programming.
Experiments in the industry use pair programming, where two programmers work concurrently on the same code; one does the programming and the other does real-time review and inspection. Anecdotal evidence indicates that this method may achieve somewhat higher quality levels than average. However, the method seems intrinsically inefficient. Normal development by one programmer, followed by static analysis and peer reviews of code, also achieves better than average quality at somewhat lower costs than pair programming.
It is a proven fact that people do not find their own errors with high efficiency, primarily because they believe their work is correct and do not realize where it is wrong. Therefore peer reviews, inspections, and other methods of review by other professionals have demonstrable value.
The
software industry will continue with
high costs and high error
rates
so
long as software applications
are custom-coded. Only the
substitution
of
high-quality reusable objects is
likely to make a fundamental
change
in
overall software costs, schedules,
quality levels, and failure
rates.
It
should be obvious that if
software applications were
assembled
from
reusable materials, then the
costs of each reusable
component
could
be much higher than today,
with the additional costs
going to
developing
very sophisticated security
controls, optimizing
performance
levels,
creating state-of-the-art specifications
and user documents,
and
achieving
zero-defect quality levels.
Even if a reusable object
were to
cost
10 times more than today's
custom-code for the same
function, if
the
reused object were used
100 times, then the
effective economic
costs
would
be only 1/10th of today's
cost.
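The 10-times-cost, 100-uses argument above is just division: the component's one-time cost is spread over every application that reuses it. The figures below restate that example in code form; the $1,000 baseline cost is an arbitrary placeholder chosen only to make the ratio visible, since only the ratios matter.

    # Minimal sketch: amortizing a costlier but certified reusable component.
    # The $1,000 baseline is an illustrative placeholder; only the ratios matter.
    custom_cost = 1_000                 # cost to hand-code the function once
    reusable_cost = 10 * custom_cost    # certified component costs 10x to build
    uses = 100

    cost_per_use = reusable_cost / uses
    print(f"cost per use: ${cost_per_use:.0f} "
          f"({cost_per_use / custom_cost:.0%} of custom coding)")   # $100, 10%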
29. Best Practices for Software Project Governance
Over
the past few years an
alarming number of executives in
major cor-
porations
have been indicted and
convicted for insider
trading, financial
misconduct,
deceiving stockholders with false
claims, and other
crimes
and
misdemeanors. As a result, the
U.S. Congress passed the
Sarbanes-
Oxley
Act of 2002, which took
effect in 2004.
The
Sarbanes-Oxley (SOX) Act
applies to major corporations
with
earnings
above $75 million per
year. The SOX Act
requires a great deal
more
accountability on the part of
corporate executives than
was normal
in
the past. As a result, the
topic of governance has become a
major issue
within
large corporations.
Under
the concept of governance,
senior
executives can no
longer
be
passive observers of corporate
financial matters or of the
software
applications
that contain financial data.
The executives are required
to
exercise
active oversight and
guidance on all major
financial and stock
transactions
and also on the software
used to keep corporate
books.
In
addition, some added reports
and data must be provided in
the
attempt
to expand and reform
corporate financial measurements
to
ensure
absolute honesty, integrity,
and accountability. Failure to
comply
with
SOX criteria can lead to
felony charges for corporate
executives,
with
prison terms of up to 20 years
and fines of up to
$500,000.
Since
the Sarbanes-Oxley measures
apply only to major public
compa-
nies
with revenues in excess of $75 million
per annum, private
companies
and
small companies are not
directly regulated by the
law. However, due
to
past irregularities, executives in
all companies are now
being held to
a
high standard of trust.
Therefore governance is an important
topic.
The first implementation of SOX measures seemed to require teams of 25 to 50 executives and information technology specialists working for a year or more to establish the SOX control framework. Many financial applications required modification, and of course all new applications must be SOX compliant. The continuing effort of administering and adhering to SOX criteria will probably amount to the equivalent of
perhaps 20 personnel full time for the foreseeable future. Because of the legal requirements of SOX and the severe penalties for noncompliance, corporations need to get fully up to speed with SOX criteria. Legal advice is very important.
Because
of the importance of both
governance and Sarbanes-Oxley,
a
variety
of consulting companies now
provide assistance in
governance
and
SOX adherence. There are
also automated tools that
can augment
manual
methods of oversight and
control. Even so, executives
need to
understand
and become more involved
with financial software
applica-
tions
than was common before
Sarbanes-Oxley became
effective.
Improper
governance can lead to fines
or even criminal charges
for
software
developed by large U.S.
corporations. However, good
governance
is
not restricted to very large
public companies. Government
agencies,
smaller
companies, and companies in
other countries not affected
by
Sarbanes-Oxley
would benefit from using
best practices for
software
governance.
30. Best Practices for Software Project Measurements and Metrics
Leading companies always have software measurement programs for capturing productivity and quality historical data. The state of the art of software measurements for projects in the nominal 10,000-function point domain includes measures of:
1. Accumulated effort
2. Accumulated costs
3. Accomplishing selected milestones
4. Development productivity
5. Maintenance and enhancement productivity
6. The volume of requirements changes
7. Defects by origin
8. Defect removal efficiency
9. Earned value (primarily for defense projects)
The measures of effort are often granular and support work breakdown structures (WBS). Cost measures are complete and include development costs, contract costs, and costs associated with purchasing or leasing packages. There is one area of ambiguity even for top companies: the overhead or burden rates associated with software costs vary widely and can distort comparisons between companies, industries, and countries.
Many
military applications use
the earned
value approach
for mea-
suring
progress. A few civilian
projects use the earned
value method,
but
the usage is far more
common in the defense
community.
Development productivity circa 2008 normally uses function points in two fashions: function points per staff month and/or work hours per function point.
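The two forms are reciprocals once a number of work hours per staff month is chosen; that conversion factor is not stated here, so the 132-hour month used below is an assumption for illustration only, as is the 10 FP per staff month productivity figure.

    # Minimal sketch: converting between the two productivity forms.
    # The 132 work hours per staff month is an assumed conversion factor.
    hours_per_month = 132

    fp_per_staff_month = 10.0
    work_hours_per_fp = hours_per_month / fp_per_staff_month
    print(f"{fp_per_staff_month} FP per staff month "
          f"= {work_hours_per_fp:.1f} work hours per function point")   # 13.2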
Measures of quality are powerful indicators of top-ranked software producers. Laggards almost never measure quality, while top software companies always do. Quality measures include data on defect volumes by origin (i.e., requirements, design, code, bad fixes) and severity level.
Really sophisticated companies also measure defect removal efficiency. This requires accumulating all defects found during development and also after release to customers for a predetermined period. For example, if a company finds 900 defects during development and the clients find 100 defects in the first three months of use, then the company achieved a 90 percent defect removal efficiency level. Top companies are usually better than 95 percent in defect removal efficiency, which is about 10 percent better than the U.S. average of 85 percent.
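The defect removal efficiency calculation in this example is a single ratio: defects found before release divided by all defects found before and after release, within the agreed measurement window. The sketch below simply reproduces the 900/100 example from the text.

    # Minimal sketch: defect removal efficiency for the example in the text.
    def removal_efficiency(found_before_release: int, found_after_release: int) -> float:
        total = found_before_release + found_after_release
        return found_before_release / total

    dre = removal_efficiency(900, 100)
    print(f"defect removal efficiency = {dre:.0%}")   # 90%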
One
of the uses of measurement
data is for comparison
against indus-
try
benchmarks. The nonprofit
International Software
Benchmarking
Standards
Group (ISBSG) has become a
major industry resource
for
software
benchmarks and has data on
more than 4000 projects as
of
late
2008. A best practice circa
2009 is to use the ISBSG
data collection
tool
from the start of
requirements through development,
and then to
routinely
submit benchmark data at the
end. Of course classified
or
proprietary
applications may not be able
to do this.
Sophisticated companies know enough to avoid measures and metrics that are not effective or that violate standard economic assumptions. Two common software measures violate economic assumptions and cannot be used for economic analysis:
■ Cost per defect penalizes quality and makes buggy software look better than it is.
■ The lines of code metric penalizes high-level languages and makes assembly language look more productive than any other.
As will be discussed later, both lines of code and cost per defect violate standard economic assumptions and lead to erroneous conclusions about both productivity and quality. Neither metric is suitable for economic study.
Measurement and metrics are embarrassingly bad in the software industry as this book is written. Not only do a majority of companies measure little or nothing, but some of the companies that do try to measure
use invalid metrics such as lines of code or cost per defect. Measurement of software is a professional embarrassment as of 2009 and urgently needs improvement in both the quantity and quality of measures.
Software has perhaps the worst measurement practices of any "engineering" field in human history. The vast majority of software organizations have no idea of how their productivity and quality compare with other organizations. Lack of good historical data also makes estimating difficult, makes process improvement difficult, and is one of the factors associated with the high cancellation rate of large software projects. Poor software measurement should be viewed as professional malpractice.
31. Best Practices for Software Benchmarks and Baselines
A
software
benchmark is a
comparison of a project or
organization
against
similar projects and
organizations from other
companies. A
software
baseline is a
collection of quality and
productivity information
gathered
at a specific time. Baselines
are used to evaluate
progress
during
software process improvement
periods.
Although benchmarks and baselines have different purposes, they collect similar kinds of information. The primary data gathered during both benchmarks and baselines includes, but is not limited to, the following:
1. Industry codes such as the North American Industry Classification (NAIC) codes
2. Development countries and locations
3. Application taxonomy of nature, scope, class, and type
4. Application complexity levels for problems, data, and code
5. Application size in terms of function points
6. Application size in terms of logical source code statements
7. Programming languages used for the application
8. Amount of reusable material utilized for the application
9. Ratio of source code statements to function points
10. Development methodologies used for the application (Agile, RUP, TSP, etc.)
11. Project management and estimating tools used for the application
12. Capability maturity level (CMMI) of the project
13. Chart of accounts for development activities
14. Activity-level productivity data expressed in function points
15. Overall net productivity expressed in function points
16.
Cost data expressed in
function points
17.
Overall staffing of
project
18.
Number and kinds of
specialists employed on the
project
19.
Overall schedule for the
project
20.
Activity schedules for development,
testing, documentation, and so
on
21.
Defect potentials by origin
(requirements, design, code,
documents,
bad
fixes)
22.
Defect removal activities
used (inspections, static
analysis, testing)
23.
Number of test cases created
for application
24.
Defect removal efficiency
levels
25.
Delays and serious problems
noted during
development
These
25 topics are needed not
only to show the results of
specific
projects,
but also to carry out
regression analysis and to
show which
methods
or tools had the greatest
impact on project
results.
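As a rough illustration of how such data might be organized for later regression analysis, the sketch below (Python, with hypothetical field names) defines a record holding a handful of the 25 items; it is not the actual ISBSG or consulting-group questionnaire.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BenchmarkRecord:
    # Illustrative subset of the 25 data items; field names are hypothetical.
    naic_code: str
    country: str
    size_function_points: float
    size_logical_statements: int
    languages: List[str] = field(default_factory=list)
    methodology: str = ""            # e.g., "Agile", "RUP", "TSP"
    cmmi_level: int = 1
    productivity_fp_per_staff_month: float = 0.0
    cost_per_function_point: float = 0.0
    schedule_months: float = 0.0
    defect_removal_efficiency: float = 0.0
    delivered_defects: int = 0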
Consulting
groups often gather the
information for the 25
topics just
shown.
The questionnaires used
would contain about 150
specific ques-
tions.
About two days of effort
are usually needed to
collect all of the
data
for a major application
benchmark in the 10,000-function
point
size
range. The work is normally
carried out on-site by a
benchmark
consulting
group. During the data
collection, both project
managers and
team
members are interviewed to
validate the results. The
interview
sessions
usually last about two
hours and may include
half a dozen
developers
and specialists plus the
project manager.
Collecting
full benchmark and baseline
data with all 25 factors
only
occurs
within a few very
sophisticated companies. More
common would
be
to use partial benchmarks
that can be administered by
web surveys or
remote
means without requiring
on-site interviews and data
collection.
The
ten most common topics
gathered for these partial
benchmarks and
baselines
include, in descending
order:
1.
Application size in terms of
function points
2.
Amount of reusable material
utilized for the
application
3.
Development methodologies used
for the application (Agile,
RUP,
TSP,
etc.)
4.
Capability maturity level
(CMMI) of the project
5.
Overall net productivity
expressed in function
points
6.
Cost data expressed in
function points
7.
Overall staffing of
project
8.
Overall schedule for the
project
9.
Delays and serious problems
noted during
development
10.
Customer-reported bugs or
defects
These
partial benchmarks and
baselines are of course
useful, but lack
the
granularity for a full and
complete statistical analysis of
all factors
that
affect application project
results. However, the data
for these partial
benchmarks
and baselines can be
collected remotely in perhaps
two to
three
hours once the application
is complete.
Full
on-site benchmarks can be
performed by in-house personnel,
but
more
often are carried out by
consulting companies such as
the David
Consulting
Group, Software Productivity
Research, Galorath, and
a
number
of others.
As
this book is written in
2009, the major source of
remote bench-
marks
and baselines is the
International Software
Benchmarking
Standards
Group (ISBSG). ISBSG is a
nonprofit organization with
headquarters
in Australia. They have
accumulated data on
perhaps
5000
software projects and are
adding new data at a rate of
perhaps
500
projects per year.
Although
the data is not as complete
as that gathered by a full
on-site
analysis,
it is still sufficient to show
overall productivity rates. It is
also
sufficient
to show the impact of
various development methods
such as
Agile,
RUP, and the like. However,
quality data is not very
complete as
this
book is written.
The
ISBSG data is expressed in
terms of function point
metrics, which
is
a best practice for both
benchmarks and baselines.
Both IFPUG func-
tion
points and COSMIC function
points are included, as well
as several
other
variants such as Finnish and
Netherlands function
points.
The
ISBSG data is heavily
weighted toward information
technology
and
web applications. Very
little data is available at
present for military
software,
embedded software, systems
software, and specialized
kinds
of
software such as scientific
applications. No data is available at
all for
classified
military applications.
Another
gap in the ISBSG data is
due to the intrinsic
difficulty of
counting
function points. Because
function point analysis is
fairly slow
and
expensive, very few
applications above 10,000
function points have
ever
been counted. As a result,
the ISBSG data lacks
applications such
as
large operating systems and
enterprise resource planning
packages
that
are in the 100,000 to
300,000-function point
range.
As
of 2009, some high-speed,
low-cost function point
methods are
available,
but they are so new
that they have not
yet been utilized
for
benchmark
and baseline studies.
However, by 2010 or 2012
(assuming
the
economy improves), this
situation may change.
Because
the data is submitted to
ISBSG remotely by clients
them-
selves,
there is no formal validation of
results, although obvious
errors
are
corrected. As with all forms of
self-analysis, there may be
errors due
to
misunderstandings or to local variations
in how topics are
measured.
Two
major advantages of the
ISBSG data are the
fact that it is
avail-
able
to the general public and
the fact that the
volume of data is
increas-
ing
rapidly.
Benchmarks
and baselines are both
viewed as best practices
because
they
have great value in heading
off irrational schedule
demands or
attempts to build applications with inadequate management
and develop-
ment
methods.
Every
major project should start
by reviewing available
benchmark
information
from either ISBSG or from
other sources. Every
process
improvement
plan should start by
creating a quantitative
baseline
against
which progress can be
measured. These are both
best practices
that
should become almost
universal.
32.
Best Practices for Software
Project
Milestone
and Cost Tracking
Milestone
tracking refers
to having formal closure on
the development
of
key deliverables. Normally,
the closure milestone is the
direct result
of
some kind of review or
inspection of the deliverable. A
milestone is
not
an arbitrary calendar
date.
Project
management is responsible for
establishing milestones,
moni-
toring
their completion, and
reporting truthfully on whether
the mile-
stones
were successfully completed or
encountered problems.
When
serious
problems are encountered, it is
necessary to correct the
problems
before
reporting that the milestone
has been completed.
A
typical set of project
milestones for software
applications in the
nominal
10,000-function point range
would include completion
of:
1.
Requirements review
2.
Project plan review
3.
Cost and quality estimate
review
4.
External design
reviews
5.
Database design
reviews
6.
Internal design
reviews
7.
Quality plan and test
plan reviews
8.
Documentation plan
review
9.
Deployment plan
review
10.
Training plan review
11.
Code inspections
12.
Each development test
stage
13.
Customer acceptance
test
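To emphasize that a milestone is closed by the review of a deliverable rather than by the arrival of a calendar date, the following sketch (Python, hypothetical names, illustration only) refuses to mark a milestone complete while its review is missing or its reported defects remain open.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Milestone:
    deliverable: str                 # e.g., "External design"
    review_held: bool = False
    defects_found: int = 0
    defects_repaired: int = 0
    closed_on: Optional[date] = None

    def close(self, on: date) -> None:
        # A milestone closes only after its review and after repairs, never
        # because a planned calendar date has arrived.
        if not self.review_held:
            raise ValueError("no review or inspection has been held")
        if self.defects_repaired < self.defects_found:
            raise ValueError("defects from the review remain open")
        self.closed_on = on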
Failing
or delayed projects usually
lack serious milestone
tracking.
Activities
might be reported as finished
while work was still
ongoing.
Milestones
might be simple dates on a
calendar rather than
comple-
tion
and review of actual
deliverables. Some kinds of
reviews may be
so
skimpy as to be ineffective. Other
topics such as training may
be
omitted
by accident.
It
is important to note that the
meaning of milestone
is
ambiguous in the
software
industry. Rather than
milestones being the result
of a formal
review
of a deliverable, the term
milestone often refers to
arbitrary cal-
endar
dates or to the delivery of
unreviewed and untested
materials.
Delivering
documents or code segments
that are incomplete,
contain
errors,
and cannot support
downstream development work is
not the
way
milestones are used by
industry leaders.
Another
aspect of milestone tracking
among industry leaders is
what
happens
when problems are reported
or delays occur. The
reaction is strong
and
immediate: corrective actions
are planned, task forces
assigned, and
correction
begins to occur. Among
laggards, on the other hand,
problem
reports
may be ignored, and
corrective actions very
seldom occur.
In
a dozen legal cases
involving projects that
failed or were never
able
to operate successfully, project
tracking was inadequate in
every
case.
Problems were either ignored
or brushed aside, rather
than being
addressed
and solved.
An
interesting form of project
tracking for object-oriented
projects has
been
developed by the Shoulders
Corporation. This method
uses a 3-D
model
of software objects and
classes using Styrofoam
balls of various
sizes
that are connected by dowels
to create a kind of
mobile.
The
overall structure is kept in a
location visible to as many
team
members
as possible. The mobile
makes the status instantly
visible to
all
viewers. Color-coded ribbons
indicate status of each
component, with
different
colors indicating design
complete, code complete,
documenta-
tion
complete, and testing
complete (gold). There are
also ribbons for
possible
problems or delays.
This
method provides almost
instantaneous visibility of overall
proj-
ect
status. The same method
has been automated using a
3-D modeling
package,
but the physical structures
are easier to see and
have proven
more
useful on actual projects.
The Shoulders Corporation
method con-
denses
a great deal of important
information into a single
visual repre-
sentation
that nontechnical staff can
readily understand.
33.
Best Practices for Software
Change
Control
Before Release
Applications
in the nominal 10,000-function
point range run from 1
percent
to
3 percent per month in new
or changed requirements during
the
analysis
and design phases. The
total accumulated volume of
changing
requirements
can top 50 percent of the
initial requirements when
func-
tion
point totals at the
requirements phase are
compared with function
point
totals at deployment. Therefore
successful software projects in
the
nominal
10,000-function point range
must use state-of-the-art
methods
and
tools to ensure that changes
do not get out of control.
Successful
projects
also use change control
boards to evaluate the need
for specific
changes.
And of course all changes
that have a significant
impact on
costs
and schedules need to
trigger updated development
plans and
new
cost estimates.
The
state of the art of change
control for applications in
the 10,000
function
point range includes the
following:
■ Assigning "owners" of key deliverables the responsibility for approving changes
■ Locked "master copies" of all deliverables that change only via formal methods
■ Planning contents of multiple releases and assigning key features to each release
■ Estimating the number and rate of development changes before starting
■ Using function point metrics to quantify changes
■ A joint client/development change control board or designated domain experts
■ Use of joint application design (JAD) to minimize downstream changes
■ Use of formal requirements inspections to minimize downstream changes
■ Use of formal prototypes to minimize downstream changes
■ Planned usage of iterative development to accommodate changes
■ Planned usage of Agile development to accommodate changes
■ Formal review of all change requests
■ Revised cost and schedule estimates for all changes greater than 10 function points
■ Prioritization of change requests in terms of business impact
■ Formal assignment of change requests to specific releases
■ Use of automated change control tools with cross-reference capabilities
One
of the observed byproducts of
the usage of formal JAD
sessions is
a
reduction in downstream requirements
changes. Rather than
having
unplanned
requirements surface at a rate of 1
percent to 3 percent
per
month,
studies of JAD by IBM and other
companies have indicated
that
unplanned
requirements changes often
drop below 1 percent per
month
due
to the effectiveness of the JAD
technique.
Prototypes
are also helpful in reducing
the rates of
downstream
requirements
changes. Normally, key
screens, inputs, and outputs
are
prototyped
so users have some hands-on
experience with what the
com-
pleted
application will look
like.
However,
changes will always occur
for large systems. It is not
pos-
sible
to freeze the requirements of
any real-world application,
and it is
naïve
to think this can occur.
Therefore leading companies
are ready and
able
to deal with changes, and do
not let them become
impediments to
progress.
Consequently, some form of
iterative development is a
logical
necessity.
The
newer Agile methods embrace
changing requirements.
Their
mode
of operation is to have a permanent
user representative as part
of
the
development team. The Agile
approach is to start by building
basic
features
as rapidly as possible, and
then to gather new
requirements
based
on actual user experiences with
the features already
provided in
the
form of running code.
This
method works well for
small projects with a small
number of
users.
It has not yet been
deployed on applications such as
Microsoft
Vista,
where users number in the
millions and the features
number
in
the thousands. For such
massive projects, one user
or even a
small
team of users cannot
possibly reflect the entire
range of usage
patterns.
Effective
software change management is a
complicated problem with
many
dimensions. Software requirements,
plans, specifications,
source
code,
test plans, test cases, user
manuals, and many other
documents
and
artifacts tend to change
frequently. Changes may
occur due to exter-
nal
factors such as government
regulations, competitive factors,
new
business
needs, new technologies, or the need to fix
bugs.
Furthermore
some changes ripple through
many different
deliver-
ables.
A new requirement, for
example, might cause changes
not only
to
requirements documentation but
also to internal and
external speci-
fications,
source code, user manuals,
test libraries, development
plans,
and
cost estimates.
When
function points are measured
at the end of requirements
and
then
again at delivery, it has
been found that the
rate of growth of
"creeping
requirements" averages between 1
percent and 3 percent
per
calendar
month during the design
and coding phases. Therefore
if an
application
has 1000 function points at
the end of the
requirements
phase,
expect it to grow at a rate of
perhaps 20 function points
per
month
for the next eight to
ten months. Maximum growth
has topped
5
percent per month, while
minimum growth is about 0.5
percent per
month.
Agile projects, of course,
are at the maximum
end.
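A small sketch (Python, illustrative only) shows the arithmetic implied by the 20-function-points-per-month example, treating the monthly percentage as applying to the size measured at the end of requirements.

def size_at_delivery(fp_at_requirements_end, monthly_creep_rate, months):
    # Creep of 1 to 3 percent per month, applied to the size measured at the
    # end of the requirements phase (matching the 20-FP-per-month example).
    return fp_at_requirements_end * (1 + monthly_creep_rate * months)

print(size_at_delivery(1000, 0.02, 9))   # 1180.0 function points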
For
some government and military
software projects, traceability
is
a
requirement,
which means that all
changes to code or other
deliverables
must
be traced back to an explicit
requirement approved by
government
stakeholders
or sponsors.
In
addition, a number of people
may be authorized to change
the
same
objects, such as source code
modules, test cases, and
specifications.
Obviously,
their separate changes need
to be coordinated, so it may be
necessary
to "lock" master copies of key
deliverables and then to
serialize
the
updates from various
sources.
Because
change control is so complex
and so pervasive, many
auto-
mated
tools are available that
can aid in keeping all
changes current
and
in dealing with the interconnections of
change across
multiple
deliverables.
However,
change control cannot be
entirely automated since it is
nec-
essary
for stakeholders, developers,
and other key players to
agree on
significant
changes to application scope, to
new requirements, and
to
items
that may trigger schedule
and cost changes.
At
some point, changes will
find their way into
source code, which
implies
new test cases will be needed as
well. Formal integration
and
new
builds will occur to include
sets of changes. These
builds may occur
as
needed, or they may occur at
fixed intervals such as
daily or weekly.
Change
control is a topic that
often causes trouble if it is
not handled
well
throughout the development
cycle. As will be discussed later,
it
also
needs to be handled well
after deployment during the
maintenance
cycle.
Change control is a superset of
configuration control, since
change
control
also involves decisions of
corporate prestige and
competitive
issues
that are outside the
scope of configuration
control.
34.
Best Practices for
Configuration Control
Configuration
control is a subset of change
control in general.
Formal
configuration
control originated in the
1950s with the U.S.
Department
of
Defense as a method of keeping
track of the parts and
evolution
of
complex weapons systems. In
other words, hardware
configuration
control
is older than software
configuration control. As commonly
prac-
ticed,
configuration control is a mechanical
activity that is
supported
by
many tools and substantial
automation. Configuration control
deals
with
keeping track of thousands of
updates to documents and
source
code.
Ascertaining whether a particular
change is valuable is
outside
the
scope of configuration
control.
Software
configuration control is one of the
key
practice areas
of the
Software
Engineering Institute's capability
maturity model (CMM
and
CMMI).
It is also covered by a number of
standards produced by the
IEEE,
ANSI
(American National Standards
Institute), ISO, and other
organiza-
tions.
For example, ISO standard
10007-2003 and IEEE standard
828-
1998
both cover configuration
control for software
applications.
Although
configuration control is largely
automated, it still
requires
human
intervention to be done well.
Obviously, features need to
be
uniquely
identified, and there needs
to be extensive mapping
among
requirements,
specifications, source code,
test cases, and user
documents
to
ensure that every specific
change that affects more
than one deliver-
able
is correctly linked to other
related deliverables.
In
addition, the master copy of
each deliverable must be
locked to
avoid
accidental changes. Only
formal methods with formal
validation
should
be used to update master
copies of deliverables.
Automated
configuration control is a best
practice for all
applications
that
are intended for actual
release to customers.
35.
Best Practices for Software
Quality
Assurance
(SQA)
Software
quality assurance is the
general name for
organizations
and
specialists who are
responsible for ensuring and
certifying the
quality
levels of software applications
before they are delivered
to
customers.
In
large corporations such as IBM,
the SQA organizations are
inde-
pendent
of software development organizations
and report to a
corpo-
rate
vice president of quality.
The reason for this is to
ensure that no
pressure
can be brought to bear on
SQA personnel by means of
threats
of
poor appraisals or career
damage if they report
problems against
software.
SQA
personnel constitute between 3
percent and 5 percent of
software
engineering
personnel. If the SQA
organization is below 3 percent,
there
usually
are not enough personnel to
staff all projects. It is
not a best
practice
to have "token SQA"
organizations who are so
severely under-
staffed
that they cannot review
deliverables or carry out
their roles.
SQA
organizations are staffed by
personnel who have some
training
in
quality measurements and
quality control methodologies.
Many SQA
personnel
are certified as black belts
in Six Sigma practices. SQA is
not
just
a testing organization and
indeed may not do testing at
all. The
roles
normally played by SQA
groups include
■ Estimating quality levels in terms of defect potentials and defect removal efficiency
■ Measuring defect removal and assigning defect severity levels
■ Applying Six Sigma practices to software applications
■ Applying quality function deployment (QFD) to software applications
■ Moderating and participating in formal inspections
■ Teaching classes in quality topics
■ Monitoring adherence to relevant corporate, ANSI, and ISO quality standards
■ Reviewing all deliverables to ensure adherence to quality standards and practices
■ Reviewing test plans and quality plans to ensure completeness and best practices
■ Measuring the results of testing
■ Performing root-cause analysis on serious defects
■ Reporting on potential quality problems to higher management
■ Approving release of applications to customers by certifying acceptable quality levels
At
IBM and some other
companies, formal approval by
software qual-
ity
assurance is a prerequisite for
actually delivering software to
cus-
tomers.
If the SQA organization
recommends against delivery
due to
quality
issues, that recommendation
can only be overturned by
appeal
to
the division's vice
president or to the president of
the corporation.
Normally,
the quality problems are
fixed.
To
be definitive about quality
issues, the SQA
organizations are the
pri-
mary
units that measure software
quality, including but not
limited to:
Customer
satisfaction. Leaders
perform annual or semiannual
cus-
tomer
satisfaction surveys to find
out what their clients
think about their
products.
Leaders also have
sophisticated defect reporting
and customer
support
information available via
the Web. Many leaders in
the commer-
cial
software world have active
user groups and forums.
These groups often
produce
independent surveys on quality
and satisfaction topics.
There are
also
focus groups, and some large
software companies even have
formal
usability
labs, where
new versions of products are
tried out by
customers
under
controlled conditions. (Note:
customer satisfaction is
sometimes
measured
by marketing organizations rather
than by SQA groups.)
Defect
quantities and origins. Industry
leaders keep accurate
records
of the bugs or defects found
in all major deliverables,
and they
start
early, during requirements or
design. At least five
categories of
defects
are measured: requirements
defects, design defects,
code defects,
documentation
defects, and bad fixes, or
secondary bugs
introduced
accidentally
while fixing another bug.
Accurate defect reporting is
one
of
the keys to improving
quality. In fact, analysis of
defect data to search
for
root causes has led to
some very innovative defect
prevention and
defect
removal operations in many
companies. Overall, careful
measure-
ment
of defects and subsequent
analysis of the data is one of
the most
cost-effective
activities a company can
perform.
Defect
removal efficiency. Industry
leaders know the average
and
maximum
efficiency of every major
kind of review, inspection,
and test,
and
they select optimum series
of removal steps for
projects of various
kinds
and sizes. The use of
pretest reviews and
inspections is normal
among
Baldrige winners and
organizations with ultrahigh quality,
since
testing
alone is not efficient
enough. Leaders remove from
95 percent to
more
than 99 percent of all
defects prior to delivery of
software to cus-
tomers.
Laggards seldom exceed 80
percent in terms of defect
removal
efficiency
and may drop below 50
percent.
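A simple way to see why leaders rely on a series of pretest reviews, inspections, and multiple test stages is to model cumulative removal efficiency, assuming (as a simplification) that each step removes its typical fraction of whatever defects reach it. The step values in the Python sketch below are illustrative, not measured data.

def cumulative_removal_efficiency(step_efficiencies):
    remaining = 1.0
    for efficiency in step_efficiencies:
        remaining *= 1.0 - efficiency    # defects that survive this step
    return 1.0 - remaining

# Illustrative chain: an inspection, static analysis, and three test stages.
print(round(cumulative_removal_efficiency([0.65, 0.60, 0.35, 0.30, 0.30]), 3))   # 0.955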
Delivered
defects by application. Industry
leaders begin to
accu-
mulate
statistics on errors reported by
users as soon as the software
is
delivered.
Monthly reports are prepared
and given to executives,
which
show
the defect trends against
all products. These reports
are also sum-
marized
on an annual basis. Supplemental
statistics such as
defect
reports
by country, state, industry,
client, and so on, are
also included.
Defect
severity levels. All of
the industry leaders,
without excep-
tion,
use some kind of a severity
scale for evaluating
incoming bugs
or
defects reported from the
field. The number of
plateaus varies from
one
to five. In general, "Severity 1"
problems cause the system to
fail
completely,
and the severity scale
then descends in
seriousness.
Complexity
of software. It
has been known for
many years that
complex
code is difficult to maintain
and has higher than
average defect
rates.
A variety of complexity analysis
tools are commercially
available
that
support standard complexity
measures such as cyclomatic
and
essential
complexity. It is interesting that
the systems software
com-
munity
is much more likely to
measure complexity than the
information
technology
(IT) community.
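As a reminder of what such tools compute, cyclomatic complexity for a single routine is essentially the number of binary decision points plus one; the crude keyword-counting sketch below is only an approximation of what a real parser-based tool does.

import re

DECISION_PATTERN = re.compile(r"\b(if|elif|for|while|case|catch|and|or)\b")

def approximate_cyclomatic_complexity(routine_source):
    # Decision points plus one; real tools work on the parsed control-flow
    # graph rather than on keyword counts.
    return len(DECISION_PATTERN.findall(routine_source)) + 1

print(approximate_cyclomatic_complexity("if x > 0:\n    y = 1\nelse:\n    y = 2"))   # 2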
Test
case coverage. Software
testing may or may not
cover every
branch
and pathway through
applications. A variety of commercial
tools
are
available that monitor the
results of software testing
and that help
to
identify portions of applications
where testing is sparse or
nonex-
istent.
Here, too, the systems
software domain is much more
likely to
measure
test coverage than the
information technology (IT)
domain.
Cost
of quality control and
defect repairs. One
significant
aspect
of quality measurement is to keep
accurate records of the
costs
and
resources associated with various
forms of defect prevention
and
defect
removal. For software, these
measures include the costs
of (1)
software
assessments, (2) quality
baseline studies, (3)
reviews, inspec-
tions,
and testing, (4) warranty
repairs and postrelease
maintenance, (5)
quality
tools, (6) quality
education, (7) your software
quality assurance
organization,
(8) user satisfaction
surveys, and (9) any
litigation involv-
ing
poor quality or customer
losses attributed to poor
quality. In gen-
eral,
the principles of Crosby's
"Cost of Quality" topic
apply to software,
but
most companies extend the
basic concept and track
additional fac-
tors
relevant to software projects.
The general topics of Cost
of Quality
include
the costs of prevention,
appraisal, internal failures,
and external
failures.
For software more details
are needed due to special
topics such
as
toxic requirements, security
vulnerabilities, and performance
issues,
which
are not handled via
normal manufacturing cost of
quality.
Economic
value of quality. One
topic that is not well
covered in
the
quality assurance literature is
that of the economic value
of quality.
A
phrase that Phil Crosby,
the former ITT vice
president of quality,
made
famous
is "quality is free." For
software it is better than
free; it more
than
pays for itself. Every
reduction of 120 delivered
defects can reduce
maintenance
staffing by about one person.
Every reduction of about
240
delivered
defects can reduce customer
support staffing by about
one
person.
In today's world, software
engineers spend more days
per year
fixing
bugs than doing actual
development. A combination of
quality-
centered
development methods such as
Team Software Process
(TSP),
joint
application design (JAD),
quality function deployment
(QFD),
static
analysis, inspections, and
testing can reduce costs
throughout the
life
cycle and also shorten
development schedules. Unfortunately,
poor
measurement
practices make these
improvements hard to see
except
among
very sophisticated
companies.
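The staffing rules of thumb quoted above can be expressed directly; the short Python sketch below simply restates them, with no claim beyond the approximations in the text.

def staff_savings_from_quality(delivered_defects_avoided):
    # Roughly one maintenance person per 120 delivered defects avoided and
    # one customer-support person per 240, per the rules of thumb above.
    return delivered_defects_avoided / 120, delivered_defects_avoided / 240

print(staff_savings_from_quality(1200))   # (10.0, 5.0)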
As
previously stated, a key
reason for this is that
the two most
common
metrics for quality, lines
of code and cost per
defect, are flawed
and
cannot deal with economics
topics. Using defect removal
costs per
function
point is a better choice,
but these metrics need to be
deployed
in
organizations that actually
accumulate effort, cost, and
quality data
simultaneously.
From studies performed by
the author, combinations
of
defect
prevention and defect
removal methods that lower
defect poten-
tials
and raise removal efficiency
greater than 95 percent
benefit devel-
opment
costs, development schedules,
maintenance costs, and
customer
support
costs simultaneously.
Overall
quality measures are the
most important of almost any
form of
software
measurement. This is because
poor quality always causes
sched-
ule
delays and cost overruns,
while good quality is
associated with on-
time
completions of software applications
and effective cost
controls.
Formal
SQA organizations occur most
often in companies that
build
large
and complex physical devices
such as airplanes,
mainframe
computers,
telephone switching systems,
military equipment, and
medi-
cal
equipment. Such organizations
have long recognized that
quality
control
is important to success.
By
contrast, organizations such as
banks and insurance
companies
that
build information technology
software may not have
SQA organi-
zations.
If they do, the
organizations are usually
responsible for
testing
and
not for a full range of
quality activities.
Studies
of the delivered quality of
software applications indicate
that
companies
with formal SQA organizations
and formal testing
organiza-
tions
tend to exceed 95 percent in
cumulative defect removal
efficiency
levels.
36.
Best Practices for
Inspections
and
Static Analysis
Formal
design and code inspections
originated more than 35
years ago
in
IBM. They still are among
the top-ranked methodologies in
terms of
defect
removal efficiency. (Michael
Fagan, formerly of IBM
Kingston,
first
published the inspection
method with his colleagues
Lew Priven,
Ron
Radice, and then some
years later, Roger Stewart.)
Further, inspec-
tions
have a synergistic relationship with
other forms of defect
removal
such
as testing and static
analysis and also are
quite successful as
defect
prevention
methods.
Automated
static analysis is a newer
technology that originated
per-
haps
12 years ago. Automated
static analysis examines
source code
for
syntactical errors and also
for errors in boundary
conditions, calls,
links,
and other troublesome and
tricky items. Static
analysis may not
find
embedded requirements errors
such as the notorious Y2K
problem,
but
it is very effective in finding
thousands of bugs associated
with
source
code issues. Inspections and
static analysis are
synergistic defect
removal
methods.
Recent
work on software inspections by Tom
Gilb, one of the more
prominent
authors dealing with inspections,
and his colleagues
contin-
ues
to support the early finding
that the human mind
remains the tool
of
choice for finding and
eliminating complex problems
that originate
in
requirements, design, and
other noncode deliverables.
Indeed, for
finding
the deeper problems in
source code, formal code
inspections still
outrank
testing in defect removal
efficiency levels. However,
both static
analysis
and automated testing are
now fairly efficient in
finding an
increasingly
wide array of
problems.
If
an application is written in a language
where static analysis
is
supported
(Java, C, C++, and other C
dialects), then static
analysis is
a
best practice. Static
analysis may top 87 percent
in finding common
coding
defects. Occasionally there
are false positives,
however. But these
can
be minimized by "tuning" the
static analysis tools to
match the
specifics
of the applications. Code
inspections after static
analysis can
find
some deeper problems such as
embedded requirements
defects,
especially
in key modules and
algorithms.
Because
neither code inspections nor
static analysis are fully
success-
ful
in finding performance problems, it is
also necessary to use
dynamic
analysis
for performance issues.
Either various kinds of
controlled
performance
test suites are run, or
the application is instrumented
to
record
timing and performance
data.
Most
forms of testing are less
than 35 percent efficient in
finding
errors
or bugs. The measured defect
removal efficiency of both
formal
design
inspections and formal code
inspections averages more
than
65
percent efficient, or twice as
efficient as most forms of
testing. Some
inspections
top 85 percent in defect
removal efficiency levels. Tom
Gilb
reports
that some inspection
efficiencies have been
recorded that are
as
high as 88 percent.
A
combination of formal inspections of
requirements and
design,
static
analysis, formal testing by
test specialists, and a
formal (and
active)
software quality assurance
(SQA) group are the
methods most
often
associated with projects achieving a
cumulative defect
removal
efficiency
higher than 99
percent.
Formal
inspections
are manual activities in
which from three to
six
colleagues
go over design specifications page by
page, using a formal
pro-
tocol.
The normal complement is
four, including a moderator, a
recorder,
a
person whose work is being
inspected, and one other.
(Occasionally,
new
hires or specialists such as
testers participate, too.)
Code
inspec-
tions
are the same idea,
but they go over listings or
screens line by line.
To
term this activity an
inspection,
certain
criteria must be met,
includ-
ing
but not limited to the
following:
There
must be a moderator to keep
the session moving.
■
There
must be a recorder to keep
notes.
■
There
must be adequate preparation
time before each
session.
■
Records
must be kept of defects
discovered.
■
Defect
data should not be used
for appraisals or punitive
purposes.
■
The
original concept of inspections
was based on actual meetings
with
live
participants. The advent of
effective online communications
and
tools
for supporting remote
inspections now means that
inspections can
be
performed electronically, which
saves on travel costs for
teams that
are
geographically dispersed.
Any
software deliverable can be
subject to a formal inspection,
and
the
following deliverables have
now developed enough
empirical data
to
indicate that the inspection
process is generally
beneficial:
Architecture
inspections
■
Requirements
inspections
■
Design
inspections
■
Database
design inspections
■
Code
inspections
■
Test
plan inspections
■
Test
case inspections
■
User
documentation inspections
■
For
every software artifact
where formal inspections are
used, the
inspections
range from just under 50
percent to more than 80
percent in
defect
removal efficiency and have
an average efficiency level of
roughly
65
percent. This is overall the
best defect removal
efficiency level of
any
known
form of error
elimination.
Further,
thanks to the flexibility of
the human mind and
its ability
to
handle inductive logic as
well as deductive logic,
inspections are also
the
most versatile form of
defect removal and can be
applied to essen-
tially
any software artifact.
Indeed, inspections have
even been applied
recursively
to themselves, in order to fine-tune
the inspection
process
and
eliminate bottlenecks and
obstacles.
It
is sometimes asked "If inspections
are so good, why doesn't
every-
one
use them?" The answer to
this question reveals a
basic weakness of
the
software industry. Inspections
have been in the public
domain for
more
than 35 years. Therefore no
company except a few
training com-
panies
tries to "sell" inspections,
while there are many
vendors selling
testing
tools. If you want to use
inspections, you have to
seek them out
and
adopt them.
Most
software development organizations
don't actually do
research
or
collect data on effective
tools and technologies. They
make their tech-
nology
decisions to a large degree by
listening to tool and
methodology
vendors
and adopting those where
the sales personnel are
most per-
suasive.
It is even easier if the
sales personnel make the
tool or method
sound
like a silver bullet that
will give miraculous results
immediately
upon
deployment, with little or no training,
preparation, or additional
effort.
Since inspections are not
sold by tool vendors and do
require
training
and effort, they are
not a glamorous technology.
Hence many
software
organizations don't even
know about inspections and
have no
idea
of their versatility and
effectiveness.
It
is a telling point that all
of the top-gun software
quality houses and
even
industries in the United
States tend to utilize
pretest inspections.
For
example, formal inspections
are very common among
computer
manufacturers,
telecommunication manufacturers,
aerospace manu-
facturers,
defense manufacturers, medical
instrument manufacturers,
and
systems software and
operating systems developers. All of
these
need
high-quality software to market
their main products, and
inspec-
tions
top the list of effective
defect removal
methods.
It
is very important not to
allow toxic requirements,
requirements
errors,
and requirements omissions to
flow downstream into code,
because
requirements
problems cannot be found and
removed by testing.
Design
problems should also be
found prior to code
development,
although
testing can find some
design problems.
The
key message is that defects
should be found within no
more than
a
few hours or days from
when they originate. Defects
that originate in
a
specific phase such as
requirements should never be
allowed down-
stream
into design and
code.
Following
are the most effective
known methods for finding
defects
within
a specific phase or within a
short time interval from
when the
defects
originate:
Defect Origins           Optimal Defect Discovery Methods
Requirements defects     Formal requirements inspections
Design defects           Formal design inspections
Coding defects           Static analysis
                         Formal code inspections
                         Testing
Document defects         Editing of documents
                         Formal document inspections
Bad fixes                Re-inspection after defect repairs
                         Rerunning static analysis tools after defect repairs
                         Regression testing
Test case defects        Inspection of test cases
As
can be seen, inspections are
not the only form of
defect removal, but
they
are the only form
that has proven to be
effective against
require-
ments
defects, and they are
also very effective against
other forms of
defects
as well.
A
new nonprofit organization
was created in 2009 that is
intended to
provide
instruction and quantified
data about formal
inspections. The
organization
is being formed as this book
is written.
As
of 2009, inspections are
supported by a number of tools
that can
predict
defects, defect removal
efficiency, costs, and other
relevant fac-
tors.
These tools also collect
data on defects and effort,
and can con-
solidate
the data with similar data
from static analysis and
testing.
Formal
inspections are a best
practice for all
mission-critical software
applications.
37.
Best Practices for Testing
and
Test
Library Control
Software
testing has been the
main form of defect removal
since soft-
ware
began more than 60 years
ago. At least 20 different
forms of testing
exist,
and typically between 3 and
12 forms of testing will be used
on
almost
every software
application.
Note
that testing can also be
used in conjunction with other
forms
of
defect removal such as
static analysis and formal
inspections. In
fact,
such synergistic combinations
are best practices, because
test-
ing
by itself is not sufficient to
achieve high levels of
defect removal
efficiency.
Unfortunately,
when measured, testing is
rather low in defect
removal
efficiency
levels. Many forms of
testing such as unit test
are below 35
percent
in removal efficiency, or find
only about one bug out of
three.
The
cumulative efficiency of all
forms of testing seldom tops
80 percent,
so
additional steps such as
inspections and static
analysis are needed to
raise
defect removal efficiency
levels above 95 percent, which is a
mini-
mum
safe level.
Because
of the many forms of testing
and the existence of
hundreds
or
thousands of test cases, test
libraries are often huge
and cumbersome,
and
require automation for
successful management.
Testing
has many varieties,
including black
box testing (no
knowl-
edge
of application structure), white
box
testing (application
structure
is
known), and gray
box
testing (data structures are
known).
Another
way of dividing testing is to
look at test steps performed
by
developers,
by testing specialists or quality
assurance, and by
customers
themselves.
Testing in all of its forms
can utilize 20 percent to
more than
40
percent of total software
development effort. Given
the low efficiency
of
testing in terms of defect
removal, alternatives that
combine higher
efficiency
levels with lower costs are
worth considering.
There
are also very specialized
forms of testing such as
tests concerned
with
performance issues, security
issues, and usability
issues. Although
not
testing in the normal sense
of the word, applications with
high secu-
rity
criteria may also use
professional hackers who
seek to penetrate the
application's
defenses. Common forms of software
testing include
Testing
by Developers
Subroutine
testing
■
Module
testing
■
Unit
testing
■
Testing
by Test Specialists or Software
Quality Assurance
New
function testing
■
Component
testing
■
Regression
testing
■
Performance
testing
■
Security
testing
■
Virus
and spyware testing
■
Usability
testing
■
Scalability
testing
■
Standards
testing (ensuring ISO and
other standards are
followed)
■
Nationalization
testing (foreign languages
versions)
■
Platform
testing (alternative hardware or
operating system
versions)
■
Independent
testing (military
applications)
■
Component
testing
■
System
testing
■
Testing
by Customers or Users
External
beta testing (commercial
software)
■
Acceptance
testing (information technology;
outsource applications)
■
In-house
customer testing (special
hardware devices)
■
In
recent years automation has
facilitated test case
development,
test
script development, test
execution, and test library
management.
However,
human intelligence is still
very important in developing
test
plans,
test cases, and test
scripts.
Several
issues with testing are
underreported in the literature
and
need
more study. One of these is
the error density in test
cases them-
selves.
Studies of samples of test
libraries at selected IBM
locations
sometimes
found more errors in test
cases than in the software
being
tested.
Another issue is that of
redundant test cases, which
implies that
two
or more test cases are
duplicates or test the same
conditions. This
adds
costs, but not rigor. It
usually occurs when multiple
developers or
multiple
test personnel are engaged
in testing the same
software.
A
topic that has been
studied but which needs
much more study is
that
of testing defect removal
efficiency. Since most forms
of testing
seem
to be less than 35 percent
efficient, or find only
about one bug out
of
three, there is an urgent
need to examine why this
occurs.
A
related topic is the low
coverage of testing when
monitored by vari-
ous
test coverage analysis
tools. Usually, only 75
percent or less of
the
source
code in applications is executed
during the course of
testing.
Some
of this may be dead code
(which is another problem),
some may
be
paths that are seldom
traversed, but some may be
segments that are
missed
by accident.
The
bottom line is that testing
alone is not sufficient to
achieve defect
removal
efficiency levels of 95 percent or
higher. The current best
prac-
tice
would be to use testing in
conjunction with other methods
such as
requirements
and design inspections,
static analysis, and code
inspec-
tions
prior to testing itself.
Both defect prevention and
defect removal
should
be used together in a synergistic
fashion.
Effective
software quality control is
the most important single
factor
that
separates successful projects
from delays and disasters.
The reason
for
this is because finding and
fixing bugs is the most
expensive cost ele-
ment
for large systems and
takes more time than
any other activity.
Successful
quality control involves
defect prevention, defect
removal,
and
defect measurement activities.
The phrase defect
prevention includes
all
activities that minimize the
probability of creating an error or
defect
in
the first place. Examples of
defect prevention activities
include the Six
Sigma
approach, joint application
design (JAD) for gathering
require-
ments,
usage of formal design
methods, use of structured
coding tech-
niques,
and usage of libraries of
proven reusable
material.
The
phrase defect
removal includes
all activities that can
find errors
or
defects in any kind of
deliverable. Examples of defect
removal activi-
ties
include requirements inspections,
design inspections,
document
inspections,
code inspections, and all
kinds of testing. Following
are the
major
forms of defect prevention
and defect removal
activities practiced
as
of 2009:
Defect
Prevention
Joint
application design (JAD) for
gathering requirements
■
Quality
function deployment (QFD)
for quality
requirements
■
Formal
design methods
■
Structured
coding methods
■
Renovation
of legacy code prior to
updating it
■
Complexity
analysis of legacy code
prior to updating it
■
Surgical
removal of error-prone modules
from legacy code
■
Formal
defect and quality
estimation
■
Formal
security plans
■
Formal
test plans
■
Formal
test case
construction
■
Formal
change management
methods
■
Six
Sigma approaches (customized
for software)
■
Utilization of the Software Engineering Institute's capability maturity model (CMM or CMMI)
■
Utilization of the new team and personal software processes (TSP, PSP)
■
Embedded
users with development teams
(as in the Agile
method)
■
Creating
test cases before code
(as with Extreme
programming)
■
Daily
SCRUM sessions
■
Defect Removal
Requirements inspections
■
Design
inspections
■
Document
inspections
■
Formal
security inspections
■
Code
inspections
■
Test
plan and test case
inspection
■
Defect
repair inspection
■
Software
quality assurance
reviews
■
Automated
software static analysis
(for languages such as Java
and
■
C
dialects)
Unit
testing (automated or
manual)
■
Component
testing
■
New
function testing
■
Regression
testing
■
Performance
testing
■
System
testing
■
Security
vulnerability testing
■
Acceptance
testing
■
The
combination of defect prevention
and defect removal
activities
leads
to some very significant
differences when comparing
the overall
numbers
of software defects in successful
versus unsuccessful
projects.
For
projects in the 10,000-function
point range, the successful
ones accu-
mulate
development totals of around
4.0 defects per function
point and
remove
about 95 percent of them
before delivery to customers. In
other
words,
the number of delivered
defects is about 0.2 defect
per function
point,
or 2,000 total latent defects. Of these,
about 10 percent or 200
would
be
fairly serious defects. The
rest would be minor or
cosmetic defects.
By
contrast, the unsuccessful
projects accumulate development
totals of
around
7.0 defects per function
point and remove only
about 80 percent of
them
before delivery. The number of
delivered defects is about
1.4 defects
per
function point, or 14,000 total
latent defects. Of these, about 20
percent
or
2,800 would be fairly serious defects.
This large number of latent
defects
after
delivery is very troubling
for users.
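The contrast can be reproduced with a few lines of arithmetic; the Python sketch below uses only the figures cited above for a nominal 10,000-function point project.

def delivered_defect_profile(size_fp, defects_per_fp, removal_efficiency, serious_share):
    potential = size_fp * defects_per_fp
    delivered = potential * (1.0 - removal_efficiency)
    return round(delivered), round(delivered * serious_share)

print(delivered_defect_profile(10_000, 4.0, 0.95, 0.10))   # successful:   (2000, 200)
print(delivered_defect_profile(10_000, 7.0, 0.80, 0.20))   # unsuccessful: (14000, 2800)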
Unsuccessful
projects typically omit
design and code
inspections,
static
analysis, and depend purely
on testing. The omission of
upfront
inspections
causes three serious
problems: (1) The large
number of
defects
still present when testing
begins slows down the
project to a
standstill;
(2) The "bad fix"
injection rate for projects
without inspec-
tions
is alarmingly high; and (3)
The overall defect removal
efficiency
associated
only with testing is not
sufficient to achieve defect
removal
rates
higher than about 80
percent.
38.
Best Practices for Software
Security
Analysis
and Control
As
this book is written in
2009, software security is
becoming an increas-
ingly
critical topic. Not only are
individual hackers attempting to
break
into
computers and software
applications, but so are organized crime groups, drug cartels, terrorist organizations such as al Qaeda, and even hostile foreign governments.
As
computers and software
become more pervasive in
business and
government
operations, the value of
financial data, military
data, medi-
cal
data, and police data is
high enough so that criminal
elements can
afford
to mount major attacks using
very sophisticated tools and
also
very
sophisticated hackers. Cybersecurity is
becoming a major
battle-
ground
and needs to be taken very
seriously.
Modern
software applications that
contain sensitive data such
as
financial
information, medical records,
personnel data, or military
and
classified
information are at daily
risk from hackers, viruses,
spyware,
and
even from deliberate theft
by disgruntled employees. Security
con-
trol
of software applications is a serious
business, associated with
major
costs
and risks. Poor security
control can lead to serious
damages and
even
to criminal charges against
the software and corporate
executives
who
did not ensure high
security levels.
Modern
security control of critical
software applications requires
a
combination
of specialized skills; sophisticated
software tools;
proper
architecture,
design, and coding
practices; and constant
vigilance.
Supplemental
tools and approaches such as
hardware security
devices,
electronic
surveillance of premises, careful
background checks of all
per-
sonnel,
and employment of hackers
who deliberately seek out
weaknesses
and
vulnerabilities are also
very common and may be
necessary.
However,
software security starts with
careful architecture,
design,
and
coding practices. In addition,
security inspections and the
employ-
ment
of security specialists are
key criteria for successful
security con-
trol.
Both "blacklisting" and
"whitelisting" of applications that
interface
with
applications undergoing security
analysis are needed. Also,
pro-
gramming
languages such as E (a Java
variation) that are aimed
at
security
topics are important and
also a best practice.
Security
leaks or vulnerabilities come
from a variety of sources,
includ-
ing
user inputs, application
interfaces, and of course
leaks due to poor
error-handling
or poor coding practices.
One of the reasons that
special-
ists
are required to reduce
security vulnerabilities is because
ordinary
training
of software engineers is not
thorough in security
topics.
Dozens
of companies are now active
in the security area. The
U.S.
Department
of Homeland Security is planning on
building a new
research
lab specifically for
software security. Nonprofit
organizations
such
as the Center for Internet
Security (CIS) are growing
rapidly in
membership,
and joining such a group
would be a best practice for
both
corporations
and government
agencies.
In
addition, security standards
such as ISO 17799 also
offer guidance
on
software security
topics.
Although
hacking and online theft of
data is the most
widespread
form
of security problem, physical
security of computers and
data cen-
ters
is important, too. Almost
every month, articles appear
about loss
of
confidential credit card and
medical records due to theft
of notebook
computers
or desktop computers.
Because
both physical theft and
hacking attacks are becoming
more
and
more common, encryption of
valuable data is now a best
practice
for
all forms of proprietary and
confidential information.
From
about 2000 forward into
the indefinite future, there
has been an
escalating
contest between hackers and
security experts.
Unfortunately,
the
hackers are becoming
increasingly sophisticated and
numerous.
It
is theoretically possible to build
some form of artificial
intelligence
or
neural network security
analysis tools that could
examine software
applications
and find security flaws with
very high efficiency.
Indeed, a
similar
kind of AI tool applied to
architecture and design
could provide
architects
and designers with optimal
security solutions.
A
general set of best
practices for software
applications under
devel-
opment
includes
■ Improve the undergraduate and professional training of software engineers in security topics.
■ For every application that will connect to the Internet or to other computers, develop a formal security plan.
■ Perform security inspections of requirements and specifications.
■ Develop topnotch physical security for development teams.
■ Develop topnotch security for home offices and portable equipment.
■ Utilize high-security programming languages such as E.
■ Utilize automated static analysis of code to find potential security vulnerabilities.
■ Utilize static analysis on legacy applications that are to be updated.
Automation
will probably become the
overall best practice in
the
future.
However, as of 2009, security
analysis by human experts
remains
the
best practice. While
security experts are common
in military and
classified
areas, they are not
yet used as often as they
should be for
civilian
applications.
39.
Best Practices for
Software
Performance
Analysis
As
any user of Windows XP or
Windows Vista can observe,
performance
of
large and complex software
applications is not as good as
it
should
be. For Windows, as an
example, application load
times slow down
over
time. A combination of increasing
Internet clutter and spyware
can
degrade
execution speed to a small fraction of
optimum values.
While
some utility applications
can restore a measure of
original
performance,
the fact remains that
performance optimization is a
tech-
nology
that needs to be improved.
Microsoft is not alone with
sluggish
performance.
A frequent complaint against
various Symantec tools
such
as
the Norton AntiVirus package
is that of extremely slow
performance.
The
author has personally
observed a Norton AntiVirus
scan that did
not
complete after 24 hours,
although the computer did
not have the
latest
chips.
Since
performance analysis is not
always a part of software
engineer-
ing
or computer science curricula,
many software engineers are
not
qualified
to deal with optimizing performance.
Large companies such
as
IBM
employ performance specialists
who are trained in such
topics. For
companies
that build large
applications in the 100,000-function
point
range,
employment of specialists would be
considered a best
practice.
There
are a number of performance
tools and measurement
devices
such
as profilers
that
collect data on the fly. It
is also possible to
embed
performance
measurement capabilities into
software applications
them-
selves,
which is called instrumentation.
Since
instrumentation and other
forms of performance analysis
may
slow
down application speed, care is
needed to ensure that the
data is
correct.
Several terms derived from
physics and physicists have
moved
into
the performance domain. For
example, a heisenbug
is
named after
Heisenberg's
uncertainty principle. It is a bug
that disappears when
an
attempt is made to study it.
Another physics-based term is
bohrbug
named
after Nils Bohr. A bohrbug
occurs when a well-defined
set of
conditions
occur, and does not
disappear. A third term from
physics
is
that of mandelbug
named
after Benoit Mandelbrot, the father of fractal geometry. This form of bug is
caused by such random and
chaotic
factors
that isolation is difficult. A
fourth and very unusual
form of bug
is
a schrodenbug
named
after Erwin Schrödinger.
This form of bug
does
not
occur until someone notices
that the code should
not have worked
at
all, and as soon as the bug
is discovered, the software
stops working
(reportedly).
Performance
issues also occur based on
business cycles. For
example,
many
financial and accounting
packages tend to slow down
at the end
of
a quarter or the end of a
fiscal year when usage
increases dramati-
cally.
One
topic that is not covered
well in the performance
literature is the
fact
that software performance
drops to zero when a
high-severity bug is
encountered
that stops it from running.
Such problems can be
measured
using
mean-time-to-failure. These problems
tend to be common in
the
first
month or two after a
release, but decline over
time as the software
stabilizes.
Other stoppages can occur
due to denial of service
attacks,
which
are becoming increasingly
common.
This
last point brings up the
fact that performance best
practices
overlap
best practices in quality
control and security
control. A general
set
of best practices includes
usage of performance specialists,
excel-
lence
in quality control, and
excellence in security
control.
As
with security, it would be possible to
build an artificial
intelligence
or
neural net performance
optimization tool that could
find performance
problems
better than testing or
perhaps better than human
perfor-
mance
experts. A similar tool
applied to architecture and
design could
provide
performance optimization rules
and algorithms prior to
code
development.
In
general, AI and neural net
approaches for dealing with
complex
problems
such as security flaws and
performance issues have
much
to
recommend them. These topics
overlap autonomous
computing, or
applications
that tend to monitor and
improve their own
performance
and
quality.
40. Best Practices for International Software Standards
Because
software is not a recognized
engineering field with
certification
and
licensing, usage of international
standards has been
inconsistent.
Further,
when international standards
are used, not very
much empirical
data
is available that demonstrates
whether they were helpful,
neutral,
or
harmful for the applications
being developed. Some of the
international
standards
that apply to software are
established by the
International Organization for Standardization, commonly known as ISO. Examples of standards that affect software applications include
■ ISO/IEC 10181 Security Frameworks
■ ISO 17799 Security
■ Sarbanes-Oxley Act
■ ISO/IEC 25030 Software Product Quality Requirements
■ ISO/IEC 9126-1 Software Engineering Product Quality
■ IEEE 730-1998 Software Quality Assurance Plans
■ IEEE 1061-1992 Software Metrics
■ ISO 9000-9003 Quality Management
■ ISO 9001:2000 Quality Management System
There
are also international
standards for functional
sizing. As of
2008,
data on the effectiveness of
international standards in
actually
generating
improvements is sparse.
Military
and defense applications
also follow military
standards
rather
than ISO standards. Many
other standards will be dealt
with
later
in this book.
41. Best Practices for Protecting Intellectual Property in Software
The
obvious first step and
also a best practice for
protecting intellectual
property
in software is to seek legal
advice from a patent or
intellec-
tual
property law firm. Only an
intellectual property lawyer
can pro-
vide
proper guidance through the
pros and cons of copyrights,
patents,
trademarks,
service marks, trade
secrets, nondisclosure
agreements,
noncompetition
agreements, and other forms
of protection. The
author
is
of course not an attorney,
and nothing in this section
or this book
should
be viewed as legal
advice.
Over
and above legal advice,
technical subjects also need
to be consid-
ered,
such as encryption of sensitive
information, computer firewalls
and
hacking
protection, physical security of
offices, and for classified
mili-
tary
software, perhaps even
isolation of computers and
using protective
screens
that stop microwaves.
Microwaves can be used to
collect and
analyze
what computers are doing
and also to extract
confidential data.
Many
software applications contain
proprietary information
and
algorithms.
Some defense and weapons
software may contain
classified
information
as well. Patent violation
lawsuits and theft of
intellectual
property
lawsuits are increasing in
number, and this trend will
prob-
ably
continue. Overt theft of
software and data by hackers
or bribery of
personnel
are also occurring more
often than in the
past.
Commercial
software vendors are also
concerned about piracy
and
the
creation of unauthorized copies of
software. The solutions to
this
problem
include registration, activation,
and in some cases
actually
monitoring
the software running on
client computers, presumably
with
client
permission. However, these
solutions have been only
partially
successful,
and unlawful copying of
commercial software is
extremely
common
in many developing countries
and even in the
industrialized
nations.
One
obvious solution is to utilize
encryption of all key
specifications
and
code segments. However, this
method raises logistical
issues for the
development
team, since unencrypted
information is needed for
human
understanding.
A
possible future solution may
be associated with cloud
computing,
where
applications reside on network
servers rather than on
individual
computers.
Although such a method might
protect the software
itself,
it
is not trouble free and
may be subject to hacking,
interception from
wireless
networks, and perhaps even
denial of service
attacks.
Since
protection of intellectual property
requires expert legal
advice
and
also specialized advice from
experts in physical security
and online
security,
only a few general
suggestions are given
here.
Be
careful with physical security of
office spaces, notebook
comput-
ers,
home computers that may
contain proprietary information,
and
of
course e-mail communications.
Theft of computers, loss of
notebook
computers
while traveling, and even
seizure of notebook
computers
when
visiting foreign countries
might occur. Several
companies prohibit
employees
from bringing computers to
various overseas
locations.
In
addition to physical security of
computers, it may be necessary
to
limit
usage of thumb drives, DVDs,
writable CD disks, and other
remov-
able
media. Some companies and
government organizations
prohibit
employees
from carrying removable
media in and out of
offices.
If
your company supports home
offices or telecommuting, then
your
proprietary
information is probably at some
risk. While most
employees
are
probably honest, there is no
guarantee that their
household mem-
bers
might not attempt hacking
just for enjoyment. Further,
you may
not
have any control over
employee home wireless
networks, some of
which
may not have any
security features
activated.
For
employees of companies with proprietary
intellectual property,
some
form of employment agreement
and noncompetition
agreement
would
normally be required. This is
sometimes a troublesome
area,
and
a few companies demand
ownership of all employee
inventions,
whether
or not they are job
related. Such a Draconian
agreement often
suppresses
innovation.
Outsource
agreements should also be
considered as part of
protecting
intellectual
property. Obviously, outsource
vendors need to sign
confi-
dentiality
agreements. These may be
easier to enforce in the
United
States
than in some foreign
locations, which is a factor
that needs to be
considered
also.
If
the intellectual property is
embedded in software, it may be
prudent
to
include special patterns of
code that might identify
the code if it is
pirated
or stolen.
If
the company downsizes or
goes out of business,
special legal advice
should
be sought to deal with the
implications of handling
intellectual
property.
For downsizing, obviously
all departing employees will
prob-
ably
need to sign noncompete
agreements. For going out of
business,
intellectual
property will probably be an asset
under bankruptcy
rules,
so
it still needs to be
protected.
While
patents are a key method of
protecting intellectual
property,
they
are hotly debated in the
software industry. One side
sees patents
as
the main protective device
for intellectual property;
the other side
sees
patents as merely devices to
extract enormous fees. Also,
there may
be
some changes in patent laws
that make software patents
difficult to
acquire
in the future. The topic of
software patents is very
complex, and
the
full story is outside the
scope of this book.
One
curious new method of
protecting algorithms and
business rules
in
software is derivative of the
"Bible code" and is based on
equidistant
letter
spacing (ELS).
A
statistical analysis of the
book of Genesis found that
letters that
were
equally spaced sometimes
spelled out words and
even phrases. It
would
be possible for software
owners to use the same
approach either
with
comments or actual instructions
and to embed a few codes
using
the
ELS method that identified
the owner of the software.
Equally
spaced
letters that spelled out
words or phrases such as
"stop thief"
could
be used as evidence of theft. Of
course this might backfire
if
thieves
inserted their own ELS
codes.
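A minimal sketch of how such a marker could be checked is shown below; the marker text, the range of spacings, and the function names are assumptions made for illustration, not part of any actual product:

    def extract_els(text, start, skip):
        """Collect every skip-th alphabetic character beginning at index
        start, which is how an equidistant-letter-spacing code is read."""
        letters = [c.upper() for c in text if c.isalpha()]
        return "".join(letters[start::skip])

    def contains_marker(source_text, marker="STOPTHIEF", max_skip=40):
        """Scan a range of spacings and offsets for an embedded marker."""
        for skip in range(2, max_skip + 1):
            for start in range(skip):
                if marker in extract_els(source_text, start, skip):
                    return True
        return False

The same routine run over a suspect copy of the code or its comments would provide the kind of evidence of origin described above.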
42. Best Practices for Protecting Against Viruses, Spyware, and Hacking
As
of 2009, the value of
information is approaching the
value of gold,
platinum,
oil, and other expensive
commodities. In fact, as the
global
recession
expands, the value of
information is rising faster
than the
value
of natural products such as
metals or oil. As the value
of infor-
mation
goes up, it is attracting
more sophisticated kinds of
thievery. In
the
past, hacking and viruses
were often individual
efforts, sometimes
carried
out by students and even by
high-school students at times
just
for
the thrill of accomplishing
the act.
However,
in today's world, theft of
valuable information has
migrated
to
organized crime, terrorist
groups, and even to hostile
foreign govern-
ments.
Not only that, but denial of
service attacks and "search
bots" that
can
take over computers are
powerful and sophisticated
enough to shut
down
corporate data centers and
interfere with government
operations.
This
situation is going to get
worse as the global economy
declines.
Since
computers are used to store
valuable information such as
finan-
cial
records, medical records,
patents, trade secrets,
classified military
information,
customer lists, addresses
and e-mail addresses,
phone
numbers,
and social security numbers,
the total value of stored
infor-
mation
is in the range of trillions of
dollars. There is no other
commodity
in
the modern world that is
simultaneously so valuable and so
easy to
steal
as information stored in a
computer.
Not
only are there increasing
threats against software and
financial
data,
but it also is technically
within the realm of
possibility to hack
into
voting
and election software as
well. Any computer connected
to the out-
side
world by any means is at
risk. Even computers that
are physically
isolated
may be at some risk due to
their electromagnetic
emissions.
Although
many individual organizations
such as Homeland
Security,
the
Department of Defense, the
FBI, NSA (National Security
Agency),
IBM,
Microsoft, Google, Symantec,
McAfee, Kaspersky,
Computer
Associates,
and scores of others have
fairly competent security
staffs
and
also security tools, the
entire topic needs to have a
central coordi-
nating
organization that would
monitor security threats and
distribute
data
on best practices for
preventing them. The
fragmentation of the
software
security world makes it
difficult to organize defenses
against
all
known threats, and to
monitor the horizon for
future threats.
The
FBI started a partnership organization
with businesses called
InfraGard
that is intended to share
data on software and
computer
security
issues. According to the
InfraGard web site, about
350 of
the
Fortune 500 companies are
members. This organization
has local
branches
affiliated with FBI field offices in
most major cities such
as
Boston,
Chicago, San Francisco, and
the like. However, smaller
compa-
nies
have not been as proactive
as large corporations in dealing
with
security
matters. Membership in InfraGard
would be a good first
step
and
a best practice as
well.
The
Department of Homeland Security
(DHS) also has a
joint
government-business
group for Software Assurance
(SwA). This group
has
published a Software Security
State of the Art Report
(SOAR) that
summarizes
current best practices for
prevention, defense, and
recovery
from
security flaws. Participation in
this group and following
the prin-
ciples
discussed in the SOAR would
be best practices,
too.
As
this book is being written,
Homeland Security is planning to
con-
struct
a major new security
research facility that will
probably serve
as
a central coordination location
for civilian government
agencies and
will
assist businesses as
well.
A
new government security
report chaired by Representative
James
Langevin
of Rhode Island is also
about to be published, and it
deals with
all
of the issues shown here as
well as others, and in
greater detail. It
will
no doubt provide additional
guidance beyond what is
shown here.
Unfortunately,
some of the security
literature tends to deal
with
threats
that occur after development
and deployment. The need
to
address
security as a fundamental principle of
architecture, design,
and
development is poorly covered. A
book related to this one, by
Ken
Hamer-Hodges,
Authorization
Oriented Architecture, will
deal with
more
fundamental subjects. Among
the subjects is automating
computer
security
to move the problem from
the user to the system
itself. The
way
to do this is through detailed
boundary management. That is
why
objects
plus capabilities matter.
Also, security frameworks such
as Google
Caja,
which prevents redirection to
phishing sites, are best
practices.
The
new E programming language is
also a best practice, since
it is
designed
to ensure optimum
security.
The
training of business analysts,
systems analysts, and
architects
in
security topics has not
kept pace with the changes in
malware, and
this
gap needs to be bridged quickly, because
threats are becoming
more
numerous
and more serious.
It
is useful to compare security
infections with medical
infections.
Some
defenses against infections,
such as firewalls, are like
biohazard
suits,
except the software
biohazard suits tend to
leak.
Other
defenses, such as antivirus
and antispyware applications,
are
like
antibiotics that stop some
infections from spreading
and also kill
some
existing infections. However, as with
medical antibiotics,
some
infections
are resistant and are
not killed or stopped. Over
time the
resistant
infections tend to evolve
more rapidly than the
infections that
were
killed, which explains why
polymorphic software viruses
are now
the
virus of choice.
What
might be the best long-term
strategy for software would
be to
change
the DNA of software applications
and to increase their
natu-
ral
immunity to infections via
better architecture, better
design, more
secure
programming languages, and
better boundary
controls.
The
way to solve security
problems is to consider the
very foundations
of
the science and to build
boundary control in physical
terms based on
the
Principle of Least Authority,
where each and every
subroutine call
is
treated as an instance of a protected
class of object. There
should
be
no Global items, no Global
Name Space, no Global path
names like
C:/directory/file
or URL http://123.456.789/file. Every
subroutine should
be
a protected call with boundary
checking, and all program
references
are
dynamically bound from a
local name at run time with
access con-
trol
check included at all times.
Use secure languages and
methods (for
example,
E and Caja today). Some
suggested general best
practices from
the
Hamer-Hodges draft
include
■ Change passwords frequently (outdated by today's technology).
■ Don't click on e-mail links--type the URL in manually.
■ Disable the preview pane in all inboxes.
■ Read e-mail in plain text.
■ Don't open e-mail attachments.
■ Don't enable Java, JS, or particularly ActiveX.
■ Don't display your e-mail address on your web site.
■ Don't follow links without knowing what they link to.
■ Don't let the computer save your passwords.
■ Don't trust the "From" line in e-mail messages.
■ Upgrade to latest security levels, particularly for Internet Explorer.
■ Consider switching to Firefox or Chrome.
■ Never run a program unless it is trusted.
■ Read the User Agreement on downloads (they may sell your personal data).
■ Expect e-mail to carry worms and viruses.
■ Just say no to pop-ups.
■ Say no if an application asks for additional or different authorities.
■ Say no if it asks to read or edit anything more than a Desktop folder.
■ Say no if an application asks for edit authority on other stuff.
■ Say no if it asks for read authority on odd stuff, with a connection to the Web.
■ During an application install, supply a new name, new icon, and a new folder path.
■ Say no when anything asks for web access beyond a specific site.
■ Always say no unless you want to be hit sooner or later.
Internet
security is so hazardous as of 2009
that one emerging
best
practice
is for sophisticated computer
users to have two computers.
One
of
these would be used for
web surfing and Internet
access. The second
computer
would not be connected to
the Internet and would
accept only
trusted
inputs on physical media
that are of course checked
for viruses
and
spyware.
It
is quite alarming that
hackers are now organized
and have jour-
nals,
web sites, and classes
available for teaching
hacking skills. In
fact,
a
review of the literature
indicates that there is more
information avail-
able
about how to hack than on
how to defend against
hacking. As of
2009,
the hacking "industry" seems
to be larger and more
sophisticated
than
the security industry, which
is not surprising, given the
increasing
value
of information and the
fundamental flaws in computer
security
methods.
There is no real census of
either hackers or security
experts,
but
as of 2009, the hacking
community may be growing at a
faster rate
than
the security
community.
Standard
best practices include use
of firewalls, antivirus
packages,
antispyware
packages, and careful
physical security. However, as
the
race
between hackers and security
companies escalates, it is also
nec-
essary
to use constant vigilance.
Virus definitions should be
updated
daily,
for example. More recent
best practices include
biometric defenses
such
as using fingerprints or retina
patterns in order to gain
access to
software
and computers.
Two
topics that have ambiguous
results as of 2009 are those of identity theft insurance and
theft insurance and
certification of web sites by
companies such
as
VeriSign. As to identity theft
insurance, the idea seems
reasonable,
but
what is needed is more
active support than just
reimbursement for
losses
and expenses. What would
perhaps be a best practice
would be
a
company or nonprofit that
had direct connections to
all credit card
companies,
credit bureaus, and police
departments and could offer
rapid
response
and assistance to consumers with
stolen identities.
As
to certification of web sites, an
online search of that
subject reveals
almost
as many problems and
mistakes as benefits. Here,
too, the idea
may
be valid, but the
implementation is not yet
perfect. Whenever
prob-
lem
reports begin to approach
benefit reports in numbers,
the topic is
not
suitable for best practice
status.
Some
examples of the major
threats in today's cyberworld
are dis-
cussed
below in alphabetical
order:
Adware
Because
computer usage is so common,
computers have
become
a primary medium for
advertising. A number of software
compa-
nies
generate income by placing
ads in their software that
are displayed
when
the software executes. In
fact, for shareware and
freeware, the
placing
of ads may be the primary
source of revenue. As an
example,
the
Eudora e-mail client
application has a full-featured
version that is
supported
by advertising revenue. If adware
were nothing but a
pas-
sive
display of information, it would be
annoying but not
hazardous.
However,
adware can also collect
information as well as display
it.
When
this occurs, adware tends to
cross a line and become
spyware.
As
of 2009, ordinary consumers
have trouble distinguishing
between
adware
and spyware, so installation of
antispyware tools is a best
prac-
tice,
even if not totally
effective. In fact, sophisticated
computer users
may
install three or four
different antispyware tools, because
none are
100
percent effective by
themselves.
Authentication,
authorization, and access
Computers
and
software
tend to have a hierarchy of
methods for protection
against
unauthorized
use. Many features are
not accessible to ordinary
users,
but
require some form of
administrative
access. Administrative
access is
assigned
when the computer or
software is first installed.
The adminis-
trator
then grants other users
various permissions and
access rights. To
use
the computer or software,
users need to be authenticated
or
identi-
fied
to the application with the
consent of the administrator. Not
only
human
users but also software
applications may need to be
authenti-
cated
and given access rights.
While authenticating human
users is not
trivial,
it can be done without a
great deal of ambiguity. For
example,
retina
prints or fingerprints provide an
unambiguous identification of a
human
user. However, authenticating
and authorizing software
seems
to
be a weak link in the security
chain. Access control lists
(ACLs) are
the
only available best
practice, but just for
static files, services,
and
networks.
ACL cannot distinguish identities, so a
virus or Trojan has
the
same authorization as the
session owner! If some
authorized soft-
ware
contains worms, viruses, or
other forms of malware, they
may use
access
rights to propagate. As of 2009,
this problem is complex
enough
that
there seems to be no best
practice for day-to-day
authorization.
However,
a special form of authorization
called capability-based
secu-
rity
is at
least in theory a best
practice. Unfortunately,
capability-based
security
is complex and not widely
utilized. Historically, the
Plessey 250
computer
implemented a hardware-based capability
model in order to
prevent
hacking and unauthorized
changes of access lists
circa 1975.
This
approach dropped from use
for many years, but
has resurfaced by
means
of Google's Caja and the E
programming language.
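The contrast between ambient ACL authority and a capability can be sketched in a few lines. This is only a conceptual illustration, since Python does not truly enforce the boundary, and the file path and function names are invented:

    class ReadOnlyFile:
        """A capability: holding a reference to this object conveys the
        right to read one specific file and nothing else."""
        def __init__(self, path):
            self._path = path              # not handed out to callers
        def read(self):
            with open(self._path) as f:
                return f.read()
        # deliberately no write() and no way to reach other files

    def generate_report(source):
        # The report code receives only the capability it needs, so even
        # a compromised version cannot write files or open other paths.
        return source.read().upper()

    # report = generate_report(ReadOnlyFile("/data/sales.txt"))

Under an ACL model, by contrast, the report routine would run with all of the session owner's rights, which is exactly the weakness noted above.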
Back
door Normally,
to use software, some kind
of login process
and
password are needed. The
term back
door refers
to methods for
gaining
access to software while
bypassing the normal entry
points and
avoiding
the use of passwords, user
names, and other protocols.
Error-
handling
routines and buffer overruns
are common backdoor
entry
points.
Some computer worms install
back doors that might be
used
to
send spam or to perform
harmful actions. One
surprising aspect of
back
doors is that occasionally
they are deliberately put
into software
by
the programmers who
developed the applications.
This is why classi-
fied
software and software that
deals with financial data
needs careful
inspection,
static analysis, and of
course background security
checks
of
the software development
team. Alarmingly, back doors
can also be
inserted
by compilers if the compiler
developer put in such a
function.
The
backdoor situation is subtle
and hard to defend against.
Special
artificial
intelligence routines in static
analysis software may become
a
best
practice, but the problem
remains complex and hard to
deal with.
Currently,
several best practice rules
include (1) assume errors
are signs
of
an attack in process; (2)
never let user-coded error
recovery run at
elevated
privileged levels; (3) never
use global (path) addressing
for
URL
or networked files; and (4)
local name space should be
translated
only
by a trusted device.
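A minimal sketch of rules (1) and (2) follows; the request and session objects are hypothetical placeholders rather than any real API:

    import logging

    security_log = logging.getLogger("security")

    def handle_request(request, privileged_session):
        """Treat an unexpected error as a possible attack and fail closed,
        instead of running user-level recovery code at elevated privilege."""
        try:
            return privileged_session.execute(request)
        except Exception as error:
            # rule (1): assume the error may be an attack in progress
            security_log.warning("possible attack: %r", error)
            # rule (2): drop the elevated rights before doing anything else
            privileged_session.close()
            return {"status": "rejected"}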
Botnets
The
term botnet
refers
to a collection of "software
robots"
that
act autonomously and attempt
to seize control of hundreds or
thousands of computers on a network and
turn them into "zombie
computers."
The
bots are under control of a
bot
herder and
can be used for a
number
of
harmful purposes such as
denial of service attacks or
sending spam.
In
fact, this method has
become so pervasive that bot
herders actu-
ally
sell their services to
spammers! Botnets tend to be
sophisticated
and
hard to defend against.
While firewalls and
fingerprinting can be
helpful,
they are not 100
percent successful. Constant
vigilance and
top-gun
security experts are a best
practice. Some security
companies
are
now offering botnet
protection using fairly
sophisticated artificial
intelligence
techniques. It is alarming that
cybercriminals and
cyberde-
fenders
are apparently in a heated
technology race. Lack of
boundary
controls
is what allows botnets to
wander at will. Fundamental
archi-
tectural
changes, use of Caja, and
secure languages such as E
could
stop
botnets.
Browser
hijackers This
annoying and hazardous
security prob-
lem
consists of software that
overrides normal browser
addresses and
redirects
the browser to some other
site. Browser hijackers were
used
for
marketing purposes, and
sometimes to redirect to porn
sites or
other
unsavory locations. A recent
form of browser hijacking is
termed
rogue
security sites. A
pop-up ad will display a message
such as "YOUR
COMPUTER
IS INFECTED" and direct the
user to some kind of
secu-
rity
site that wants money. Of
course, it might also be a
phishing site.
Modern
antispyware tools are now
able to block and remove
browser
hijackers
in most cases. They are a
best practice for this
problem, but
they
must be updated frequently with
new definitions. Some
browsers
such
as Google Chrome and Firefox
maintain lists of rogue web
sites and
caution
users about them. This
keeping of lists is a best
practice.
Cookies
These
are small pieces of data
that are downloaded
from
web
sites onto user computers.
Once downloaded, they then
go back and
forth
between the user and
the vendor. Cookies are
not software but
rather
passive data, although they
do contain information about
the
user.
Benign uses of cookies are
concerned with online shopping
and with
setting
up user preferences on web
sites such as Amazon.
Harmful uses
of
cookies include capturing
user information for unknown
or perhaps
harmful
purposes. For several years,
both the CIA and NSA
downloaded
cookies
into any computer that
accessed their web sites
for any reason,
which
might have allowed the
creation of large lists of
people who did
nothing
more than access web
sites. Also, cookies can be
hijacked or
changed
by a hacker. Unauthorized change of a
cookie is called cookie
poisoning.
It
could be used, for example,
to change the amount of
pur-
chase
at an online store. Cookies
can be enabled or disabled on
web
browsers.
Because cookies can be
either beneficial or harmful,
there is
no
general best practice for
dealing with them. The
author's personal
practice
is to disable cookies unless a
specific web site requires
cookies
for
a business purpose originated by
the author.
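One common defense against cookie poisoning is for the server to sign each cookie value and reject anything that fails verification; a minimal sketch follows, with the secret key and the shopping-cart value invented for the example:

    import hashlib
    import hmac

    SECRET = b"server-side key that is never sent to the browser"

    def sign_cookie(value):
        mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
        return value + "|" + mac

    def verify_cookie(cookie):
        value, _, mac = cookie.rpartition("|")
        expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
        return value if hmac.compare_digest(mac, expected) else None

    cookie = sign_cookie("cart_total=149.95")
    assert verify_cookie(cookie) == "cart_total=149.95"
    assert verify_cookie(cookie.replace("149.95", "1.49")) is None   # poisoned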
Cyberextortion
Once
valuable information such as
bank records,
medical
records, or trade secrets
are stolen, what next?
One alarming
new
form of crime is cyberextortion, or
selling the valuable data
back
to
the original owner under
threat of publishing it or selling it to
com-
petitors.
This new crime is primarily
aimed at companies rather
than
individuals.
The more valuable the
company's data, the more
tempt-
ing
it is as a target. Best practices in
this area involve using
topnotch
security
personnel, constant vigilance,
firewalls and the usual
gamut of
security
software packages, and
alerting authorities such as
the FBI or
the
cybercrime units of large
police forces if extortion is
attempted.
Cyberstalking
The
emergence of social networks
such as YouTube,
MySpace,
and Facebook has allowed
millions of individuals to
commu-
nicate
who never (or seldom)
meet each other face to
face. These same
networks
have also created new
kinds of threats for
individuals such as
cyberbullying
and cyberstalking. Using
search engines and the
Internet,
it
is fairly easy to accumulate
personal information. It is even
easier to
plant
rumors, make false
accusations, and damage the
reputations of
individuals
by broadcasting such information on
the Web or by using
social
networks. Because cyberstalking
can be done anonymously,
it
is
hard to trace, although some
cyberstalkers have been
arrested and
charged.
As this problem becomes more
widespread, states are
passing
new
laws against it, as is the
federal government. Defenses
against
cyberstalking
include contacting police or
other authorities, plus
con-
tacting
the stalker's Internet
service provider if it is known.
While it
might
be possible to slow down or
prevent this crime by using
anony-
mous
avatars for all social
networks, that more or less
defeats the pur-
pose
of social networking.
Denial
of service This
form of cybercrime attempts to
stop specific
computers,
networks, or servers from
carrying out normal
operations
by
saturating them with phony
messages or data. This is a
sophisti-
cated
form of attack that requires
considerable skill and
effort to set
up,
and of course considerable
skill and effort to prevent
or stop. Denial
of
service (DoS) attacks seemed
to start about 2001 with an
attack
against
America Online (AOL) that
took about a week to stop.
Since
then
numerous forms of DoS
attacks have been developed.
A precursor
to
a denial of service attack
may include sending out
worms or search
robots
to infiltrate scores of computers
and turn them into
zombies,
which
will then unknowingly participate in
the attack. This is a
complex
problem,
and the best practice
for dealing with it is to have
topnotch
security
experts available and to
maintain constant
vigilance.
Electromagnetic
pulse (EMP) A
byproduct of nuclear
explo-
sions
is a pulse of electromagnetic radiation
that is strong enough
to
damage
transistors and other
electrical devices. Indeed,
such a pulse
could
shut down almost all
electrical devices within
perhaps 15 miles.
The
damage may be so severe that
repair of many devices--that
is,
computers,
audio equipment, cell
phones, and so on--would be
impos-
sible.
The electromagnetic pulse
effect has led to research
in e-bombs,
or
high-altitude
bombs that explode perhaps
50 miles up and shut
down
electrical
power and damage equipment
for hundreds of square
miles,
but
do not kill people or destroy
buildings. Not only nuclear
explosions
but
other forms of detonation
can trigger such pulses.
While it is possible
to
shield electronic devices
using Faraday cages or
surrounding them in
metallic
layers, this is unfeasible
for most civilians. The
major military
countries
such as the United States
and Russia have been
carrying out
active
research in e-bombs and
probably have them already
available.
It
is also possible that other
countries such as North
Korea may have
such
devices. The presence of
e-bombs is a considerable threat to
the
economies
of every country, and no
doubt the wealthier
terrorist orga-
nizations
would like to gain access to
such devices. There are no
best
practices
to defend against this for
ordinary citizens.
Electromagnetic
radiation Ordinary
consumers using home
computers
probably don't have to worry
about loss of data due to
elec-
tromagnetic
radiation, but this is a
serious issue for military
and clas-
sified
data centers. While
operating, computers radiate
various kinds
of
electromagnetic energy, and
some of these can be picked
up remotely
and
deciphered in order to collect
information about both
applications
and
data. That information could
be extracted from
electromagnetic
radiation
was first discovered in the
1960s. Capturing
electromagnetic
radiation
requires rather specialized
equipment and also
specialized
personnel
and software that would be
outside the range of
day-to-day
hackers.
Some civilian threats do
exist, such as the
possibility of cap-
turing
electromagnetic radiation to crack
"smart cards" when they
are
being
processed. Best practices
include physical isolation of
equipment
behind
copper or steel enclosures,
and of course constant
vigilance and
topnotch
security experts. Another
best practice would be to
install
electromagnetic
generators in data centers
that would be more
pow-
erful
than computer signals and
hence interfere with detection.
This
approach
is similar to jamming to shut
down pirate radio
stations.
Hacking
The
word "hack" is older than
the computer era and
has
meaning
in many fields, such as
golf. However, in this book,
hacking
refers
to deliberate attempts to penetrate a
computer or software
appli-
cation
with the intent to modify
how it operates. While some
hacking is
harmful
and malicious, some may be
beneficial. Indeed, many
security
companies
and software producers
employ hackers who attempt
to pen-
etrate
software and hardware to
find vulnerabilities that
can then be
fixed.
While firewalls, antivirus,
and antispyware programs are
all good
practices,
what is probably the best
practice is to employ ethical
hackers
to
attempt penetration of key
applications and computer
systems.
Identity
theft Stealing
an individual's identity in order to
make
purchases,
set up credit card accounts,
or even to withdraw funds
from
banks
is one of the fastest-growing crimes in
human history. A new
use
of
identity theft is to apply
for medical benefits. In
fact, identity theft
of
physicians'
identities can even be used
to bill Medicare and
insurance
companies
with fraudulent claims. Unfortunately,
this crime is far
too
easy
to commit, since it requires
only moderate computer
skills plus
commonly
available information such as
social security numbers,
birth
dates,
parents' names, and a few
other topics. It is alarming
that many
identity
thefts are carried out by
relatives and "friends" of
the victims.
Also,
identity information is being
sold and traded by hackers.
Almost
every
computer user receives daily
"phishing" e-mails that
attempt to
trick
them into providing their
account numbers and other
identifying
information.
As the global economy
declines into recession,
identity theft
will
accelerate. The author
estimates that at least 15
percent of the
U.S.
population is at risk. Best
practices to avoid identity
theft include
frequent
credit checks, using
antivirus and anti-spyware
software, and
also
physical security of credit
cards, social security
cards, and other
physical
media.
Keystroke
loggers This
alarming technology represents
one of
the
most serious threats to home
computer users since the
industry
began.
Both hardware and software
keystroke logging methods
exist,
but
computer users are more
likely to encounter software
keystroke log-
ging.
Interestingly, keystroke logging
also has benign uses in
studying
user
performance. In today's world,
not only keystrokes but
also mouse
movements
and touch-screen movements
need to be recorded for
the
technology
to work. The most malicious
use of keystroke logging is
to
intercept
passwords and security codes
so that bank accounts,
medical
records,
and other proprietary data
can be stolen. Not only
computers
are
at risk, but also ATM
machines. In fact, this
technology could also
be
used
on voting machines; possibly with
the effect of influencing
elections.
Antispyware
programs are somewhat
useful, as are other methods
such
as
one-time passwords. This is
such a complex problem that
the current
best
practice is to do almost daily
research on the issue and
look for
emerging
solutions.
Malware
This
is a hybrid term that
combines one syllable
from
"malicious"
and one syllable from
"software." The term is a
generic
descriptor
for a variety of troublesome
security problems
including
viruses,
spyware, Trojans, worms, and
so on.
Phishing
This
phrase is derived from
"fishing" and refers to
attempts
to
get computer users to reveal
confidential information such as
account
numbers
by having them respond to
bogus e-mails that appear to
be
from
banks or other legitimate
businesses. Classic examples of phishing are
e-mails that purport to be
from a government executive in
Nigeria
who
is having trouble getting
funds out of the country
and wants to
deposit
them in a U.S. account. The
e-mails ask the readers to
respond
by
sending back their account
information. This early
attempt at phish-
ing
was so obviously bogus that
hardly anyone responded to
it, but sur-
prisingly,
a few people might have.
Unfortunately, modern attempts
at
phishing
are much more sophisticated
and are very difficult to
detect.
The
best practice is never to
respond to requests for
personal or account
information
that you did not
originate. However, newer
forms are more
sophisticated
and can intercept browsers
when they attempt to go
to
popular
web sites such as eBay or
PayPal. The browser can be
redirected
to
a phony web site that
looks just like the
real one. Not only do
phony
web
sites exist, but also
phony telephone sites.
However, as phishing
becomes
more sophisticated, it is harder to
detect. Fortunately
credit
card
companies, banks, and other
institutions at risk have
formed a
nonprofit
Anti-Phishing Working Group.
For software companies,
affili-
ation
with this group would be a
best practice. For
individuals, verifying
by
phone and refusing to
respond to e-mail requests
for personal and
account
data are best practices.
Many browsers such as
Firefox and
Internet
Explorer have anti-phishing
blacklists
of
known phishing sites
and
warn users if they are
routed to them. Boundary
control, Caja, and
languages
such as E are also effective
against phishing.
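A minimal sketch of the kind of check such a blacklist or anti-phishing filter performs is shown below; the blacklist entries and example URLs are invented:

    from urllib.parse import urlparse

    BLACKLIST = {"paypa1-secure-login.example", "ebay-verify.example"}

    def looks_suspicious(url, claimed_domain):
        """Flag a link whose real host is blacklisted or does not belong
        to the domain the message claims to come from."""
        host = (urlparse(url).hostname or "").lower()
        if host in BLACKLIST:
            return True
        return not (host == claimed_domain
                    or host.endswith("." + claimed_domain))

    print(looks_suspicious("https://www.paypal.com/signin", "paypal.com"))            # False
    print(looks_suspicious("http://paypal.com.badhost.example/login", "paypal.com"))  # True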
Physical
security Physical
security of data centers,
notebook com-
puters,
thumb drives, and wireless
transmission remains a best
prac-
tice.
Almost every week, articles
appear in papers and
journals about
loss
or theft of confidential data
when notebook computers are
lost or
stolen.
There are dozens of
effective physical security
systems, and all of
them
should be considered. A modern
form of physical security
involves
using
fingerprints or retina patterns as
passwords for computers
and
applications.
Piracy
Piracy
in several forms is a major
problem in the modern
world.
The piracy of actual ships
has been a problem off
the African
coast.
However, software piracy has
also increased alarmingly.
While
China
and the Asia Pacific
region are well known as
sources of piracy,
the
disputes between Iran and
the USA have led Iran to
allow unlimited
copying
of software and intellectual
property, which means that
the
Middle
East is also a hotbed of
software piracy. In the
United States and
other
countries with strong intellectual
property laws, Microsoft
and
other
large software vendors are
active in bringing legal
charges against
pirates.
The nonprofit Business
Software Alliance even
offers rewards
for
turning in pirates. However,
unauthorized copies of software
remain
a
serious problem. For smaller
software vendors, the usual
precautions
include
registration and activation of
software before it can be
utilized.
It
is interesting that the
open-source and freeware
communities deal
with
the problem in rather
different ways. For example,
open-source
softwares
commonly use static analysis
methods, which can find
some
security
flaws. Also having dozens of
developers looking at the
code
raises
the odds that security
flaws might be
identified.
Rootkits
In
the Unix operating system,
the term root
user refers
to
someone having authorization to
modify the operating system
or
the
kernel. For Windows, having
administrative
rights is
equivalent.
Rootkits
are programs that infiltrate
computers and seize control
of the
operating
system. Once that control is
achieved, then the rootkit
can
be
used to launch denial of
service attacks, steal
information, reformat
disk
drives, or perform many
other kinds of mischief. In
2005, the Sony
Corporation
deliberately issued a rootkit on
music CDs in an attempt
to
prevent
music piracy via
peer-to-peer and computer
copying. However,
an
unintended consequence of this
rootkit was to open up
backdoor
access
to computers that could be
used by hackers, spyware,
and viruses.
Needless
to say, once the Sony
rootkit was revealed to the
press, the
outcry
was sufficient for Sony to
withdraw the rootkit.
Rootkits tend
to
be subtle and not only
slip past some antivirus
software, but indeed
may
attack the antivirus
software itself. There seem
to be no best prac-
tices
as of 2009, although some
security companies such as
Kaspersky
and
Norton have developed
methods for finding some
rootkits and
protecting
themselves as well.
Smart
card hijacking A
very recent threat that
has only just
started
to occur is that of remote-reading of
various "smart cards"
that
contain
personal data. These include
some new credit cards
and also
new
passports with embedded information.
The government is
urging
citizens
to keep such cards in metal
containers or at least metal
foil,
since
the data can be accessed
from at least 10 feet away.
Incidentally,
the
"EZ Pass" devices that
commuters use to go through
tolls without
stopping
are not secure
either.
Spam
Although
the original meaning of
spam
referred
to a meat
product,
the cyberdefinition refers to
unwanted ads, e-mails, or
instant
messages
that contain advertising. Now
that the Internet is the
world's
primary
communication medium and
reaches perhaps one-fifth of
all
humans
on the planet, using the
Internet for ads and
marketing is going
to
continue. The volume of spam
is alarming and is estimated at
topping
85
percent of all e-mail
traffic, which obviously
slows down the
Internet
and
slows down many servers as
well. Spam is hard to combat
because
some
of it comes from zombie
computers that have been
hijacked by
worms
or viruses and then
unknowingly used for
transmitting spam.
Some
localities have made
spamming illegal, but it is
easy for spam-
mers
to outsource to some other
locality where it is not
illegal. Related
to
spamming is a new subindustry
called e-mail
address harvesting.
E-mail
addresses can be found by
search robots, and once
found and cre-
ated,
the lists are sold as
commercial products. Another
form of address
harvesting
is from the fine print of
the service agreements of
social
networks,
which state that a user's
e-mail address may not be
kept pri-
vate
(and will probably be sold as a
profit-making undertaking). A
best
practice
against spam is to use
spyware and spam blockers,
but these
are
not 100 percent effective.
Some spam networks can be
de-peered,
or
cut off from other
networks, but this is
technically challenging
and
may
lead to litigation.
Spear
phishing The
term spear
phishing refers
to a new and very
sophisticated
form of phishing where a
great deal of personal
informa-
tion
is included in the phishing
e-mail to deceive possible
victims. The
main
difference between phishing
and spear phishing is the
inclusion
of
personal information. For
example, an e-mail that
identifies itself
as
coming from a friend or
colleague is more likely to be
trusted than
one
coming from a random source.
Thus, spear phishing is a
great
deal
harder to defend against.
Often hackers break into
corporate
computers
and then send spear
phishing e-mails to all
employees,
with
disinformation indicating that
the e-mail is from
accounting,
human
resources, or some other
legitimate organization. In fact,
the real
name
of the manager might also be
included. The only best
practice
for
spear phishing is to avoid
sending personal or financial
informa-
tion
in response to any e-mail. If
the e-mail seems legitimate,
check
by
phone before responding.
However, spear phishing is
not just a
computer
scam, but also includes
phony telephone messages and
text
messages
as well.
Spyware
Software
that installs itself on a
host computer and
takes
partial
control of the operating
system and web browser is
termed
spyware.
The
purpose of spyware is to display
unwanted ads,
redirect
browsers
to specific sites, and also
to extract personal information
that
might
be used for purposes such as
identity theft. Prior to
version 7
of
Microsoft Internet Explorer,
almost any ActiveX program
could be
downloaded
and start executing. This
was soon discovered by hackers
as
a
way to put ads and
browser hijackers on computers.
Because spyware
often
embedded itself in the
registry, it was difficult to
remove. In today's
world
circa 2009, a combination of
firewalls and modern
antispyware
software
can keep most spyware
from penetrating computers,
and can
eliminate
most spyware as well.
However, in the heated
technology race
between
hackers and protectors,
sometimes the hackers pull
ahead.
Although
Macintosh computers have
less spyware directed their
way
than
computers running Microsoft
Windows do, no computers or
operat-
ing
systems in the modern world
are immune to
spyware.
Trojans
This
term is of course derived
from the famous
Trojan
horse.
In a software context, a Trojan is
something that seems to
be
useful
so that users are deceived
into installing it via
download or by
disk.
Once it's installed, some
kind of malicious software
then begins to
take
control of the computer or
access personal data. One
classic form of
distributing
Trojans involves screensavers.
Some beautiful view such
as
a
waterfall or a lagoon is offered as a
free download. However,
malicious
software
routines that can cause
harm are hidden in the
screensaver.
Trojans
are often involved in denial
of service attacks, in identity
theft,
in
keystroke logging, and in
many other harmful actions.
Modern antivi-
rus
software is usually effective
against Trojans, so installing,
running,
and
updating such software is a
best practice.
Viruses
Computer
viruses originated in the
1970s and started to
become
troublesome in the 1980s. As with
disease viruses,
computer
viruses
attempt to penetrate a host,
reproduce themselves in
large
numbers,
and then leave the
original host and enter
new hosts. Merely
reproducing
and spreading can slow
networks and cause
performance
slowdowns,
but in addition, some
viruses also have functions
that delib-
erately
damage computers, steal
private information, or perform
other
malicious
acts. For example, viruses
can steal address books
and then
send
infected e-mails to every
friend and contact of the
original host.
Macro
viruses
transmitted by documents created
using Microsoft Word
or
Microsoft Excel have been
particularly common and
particularly
troublesome.
Viruses spread by instant
messaging are also
trouble-
some.
Viruses are normally
transmitted by attaching themselves
to
a
document, e-mail, or instant
message. While antivirus
software is
generally
effective and a best
practice, virus developers
tend to be
active,
energetic, and clever. Some
newer viruses morph or
change
themselves
spontaneously to avoid antivirus
software. These
mutat-
ing
viruses are called polymorphic
viruses.
Although viruses
primarily
attack
Microsoft Windows, all
operating systems are at
risk, includ-
ing
Linux, Unix, Mac OS,
Symbian, and all others.
Best practices for
avoiding
viruses are to install
antivirus software and to
keep the virus
definitions
up to date. Taking frequent
checkpoints and restore
points
is
also a best practice.
Whaling
This
is a form of phishing that
targets very
high-level
executives
such as company presidents,
senior vice presidents,
CEOs,
CIOs,
board members, and so forth.
Whaling tends to be very
sophisti-
cated.
An example might be an e-mail
that purports to be from a
well-
known
law firm and that discusses
possible litigation against
the target
or
his or her company. Other
devices would include "who's
who" e-mail
requests,
or requests from famous
business journals. The only
best prac-
tice
is to avoid responding without
checking out the situation
by phone
or
by some other method.
Wireless
security leaks In
the modern world, usage of
wireless
computer
networks is about as popular as
cell phone usage.
Many
homes
have wireless networks as do
public buildings. Indeed
some
towns
and cities offer wireless
coverage throughout. As wireless
com-
munication
becomes a standard method
for business-to-business
and
person-to-person
communication, it has attracted
many hackers, identity
thieves, and other forms of
cybercriminals. Unprotected
wireless
networks
allow cybercriminals to access
and control computers,
redi-
rect
browsers, and steal private
information. Other less
overt activities
are
also harmful. For example,
unprotected wireless networks
can be
used
to access porn sites or to
send malicious e-mails to
third parties
without
the network owner being
aware of it. Because many
consum-
ers
and computer users are
not versed in computer and
wireless net-
work
issues, probably 75 percent of
home computer networks are
not
protected.
Some hackers even drive
through large cities looking
for
unprotected
networks (this is called
war
driving). In
fact, there may
even
be special signs and symbols
chalked on sidewalks and
buildings
to
indicate unprotected networks.
Many networks in coffee
shops and
hotels
are also unprotected. Best
practices for avoiding
wireless secu-
rity
breaches include using the
latest password and
protection tools,
using
encryption, and frequently
changing passwords.
Worms
Small
software applications that
reproduce themselves
and
spread
from computer to computer
over networks are called
worms.
Worms
are similar to viruses, but
tend to be self-propagating
rather
than
spreading by means of e-mails or
documents. While a few
worms
are
benign (Microsoft once tried to
install operating system
patches
using
worms), many are harmful. If
worms are successful in
reproduc-
ing
and moving through a
network, they use bandwidth
and slow down
performance.
Worse, some worms have
payloads
or
subroutines that
perform
harmful and malicious
activities such as erasing
files. Worms
can
also be used to create
zombie computers that might
take part in
denial
of service attacks. Best
practices for avoiding worms
include
installing
the latest security updates
from operating system vendors such
as
Microsoft,
using antivirus software
(with frequent definition
updates),
and
using firewalls.
As
can be seen from the variety
of computer and software
hazards in
the
modern world, protection of
computers and software from
harmful
attacks
requires constant vigilance. It
also requires installation
and
usage
of several kinds of protective
software. Finally, both
physical secu-
rity
and careless usage of
computers by friends and
relatives need to
be
considered. Security problems will
become more pervasive as
the
global
economy sinks into
recession. Information is one commodity
that
will
increase in value no matter
what is happening to the
rest of the
economy.
Moreover, both organized
crime and major terrorist
groups
are
now active players in
hacking, denial of service,
and other forms of
cyberwarfare.
If
you break down the
economics of software security,
the distribu-
tion
of costs is far from optimal
in 2009. From partial data,
it looks
like
about 60 percent of annual
corporate security costs are
spent on
defensive
measures for data centers
and installed software,
about 35
percent
is spent on recovering from
attacks such as denial of
service,
and
only about 5 percent is
spent on preventive measures.
Assuming
an
annual cost of $50 million
for security per Fortune
500 company,
the
breakdown might be $30
million on defense, $17.5
million for
recovery,
and only $2.5 million on
prevention during development
of
applications.
With
more effective prevention in
the form of better
architecture,
design,
secure coding practices,
boundary controls, and
languages
such
as E, a future cost distribution
for security might be
prevention,
60
percent; defense, 35 percent; and
recovery, 5 percent. With
better pre-
vention,
the total security costs
would be lower: perhaps $25
million per
year
instead of $50 million per
year. In this case the
prevention costs
would
be
$15 million; defensive costs
would be $8.75 million; and
recovery costs
would
be only $1.25 million. Table
2-9 shows the two cost
profiles.
So
long as software security
depends largely upon human
beings
acting
wisely by updating virus
definitions and installing
antispyware,
it
cannot be fully successful.
What the software industry
needs is to
design
and develop much better
preventive methods for
building appli-
cations
and operating systems, and
then to fully automate
defensive
approaches
with little or no human intervention
being needed.
TABLE 2-9  Estimated Software Security Costs in 2009 and 2019 (Assumes Fortune 500 Company)

                 2009            2019            Difference
Prevention       $2,500,000      $15,000,000     $12,500,000
Defense          $30,000,000     $8,750,000      $21,250,000
Recovery         $17,500,000     $1,250,000      $16,250,000
TOTAL            $50,000,000     $25,000,000     $25,000,000
43. Best Practices for Software Deployment and Customization
Between
development of software and
the start of maintenance is a
gray
area
that is seldom covered by
the software literature:
deployment and
installation
of software applications. Considering
that the deployment
and
installation of large software
packages such as enterprise
resource
planning
(ERP) tools can take
more than 12 calendar
months, cost more
than
$1 million, and involve more
than 25 consultants and 30
in-house
personnel,
deployment is a topic that
needs much more research
and
much
better coverage in the
literature.
For
most of us who use personal
computers or Macintosh
computers,
installation
and deployment are handled
via the Web or from a CD or
DVD.
While
some software installations
are troublesome (such as
Symantec or
Microsoft
Vista), many can be
accomplished in a few
minutes.
Unfortunately
for large mainframe
applications, they don't
just load
up
and start working. Large
applications such as ERP
packages require
extensive
customization in order to work with
existing applications.
In
addition, new releases are
frequently buggy, so constant
updates
and
repairs are usually part of
the installation
process.
Also,
large applications with hundreds or
even thousands of
users
need
training for different types
of users. While vendors may
provide
some
of the training, vendors
don't know the specific
practices of their
clients.
So it often happens that
companies themselves have to
put
together
more than a dozen custom
courses. Fortunately, there
are tools
and
software packages that can
help in doing in-house
training for large
applications.
Because
of bugs and learning issues,
it is unfeasible just to stop
using
an
old application and to start
using a new commercial
package. Usually,
side-by-side
runs occur for several
months, both to check for
errors in
the
new package and to get
users up to speed as well.
To
make a long (and expensive)
story short, deployment of a
major
new
software package can run
from six months to more
than 12 months
and
involve scores of consultants, educators,
and in-house
personnel
who
need to learn the new
software. Examples of best
practices for
deployment
include
■ Joining user associations for the new application, if any exist
■ Interviewing existing customers for deployment advice and counsel
■ Finding consultants with experience in deployment
■ Acquiring software to create custom courses
■ Acquiring training courses for the new application
■ Customizing the new application to meet local needs
■ Developing interfaces between the new application and legacy applications
■ Recording and reporting bugs or defects encountered during deployment
■ Installing patches and new releases from the vendor
■ Evaluating the success of the new application
Installation
and deployment of large
software packages are
common,
but
very poorly studied and
poorly reported in the
software literature.
Any
activity that can take
more than a calendar year,
cost more than
$1
million, and involve more
than 50 people in full-time
work needs
careful
analysis.
The
costs and hazards of
deployment appear to be directly
related
to
application size and type.
For PC and Macintosh
software, deploy-
ment
is usually fairly straightforward
and performed by the
customers
themselves.
However, some companies such as Symantec make it difficult by requiring that prior versions of their applications be removed, while the normal Windows removal process leaves traces that can interfere with the new installation.
Big
applications such as mainframe
operating systems, ERP
pack-
ages,
and custom software are
very troublesome and
expensive to deploy.
In
addition, such applications
often require extensive
customization for
local
conditions before they can
be utilized. And, of course,
this complex
situation
also requires training
users.
44. Best Practices for Training Clients or Users of Software Applications
It
is an interesting phenomenon of the
software industry that
commer-
cial
vendors do such a mediocre
job of providing training
and tutorial
information
that a major publishing
subindustry has come into
being
providing
books on popular software
packages such as Vista,
Quicken,
Microsoft
Office, and dozens of other
popular applications. Also,
training
companies
offer interactive CD training
for dozens of software
packages.
As
this book is written, the
best practice for learning
to use popular soft-
ware
packages from major vendors
is to use third-party sources
rather
than
the materials provided by
the vendors
themselves.
For
more specialized mainframe
applications such as those
released
by
Oracle and SAP, other
companies also provide
supplemental training
for
both users and maintenance
personnel, and usually do a
better job
than
the vendors
themselves.
After
60 years of software, it might be
thought that standard
user-
training
materials would have common
characteristics, but they do
not.
What
is needed is a sequence of learning
material including but
not
limited
to:
■ Overview of features and functions
■ Installation and startup
■ Basic usage for common tasks
■ Usage for complex tasks
■ HELP information by topic
■ Troubleshooting in case of problems
■ Frequently asked questions (FAQ)
■ Operational information
■ Maintenance information
Some
years ago, IBM performed a
statistical analysis of user
evaluations
for
all software manuals
provided to customers with IBM software.
Then
the
top-ranked manuals were
distributed to all IBM technical
writers with
a
suggestion that they be used
as guidelines for writing
new manuals.
It
would be possible to do a similar
study today of third-party
books
by
performing a statistical analysis of
the user reviews listed in
the
Amazon
catalog of technical books.
Then the best books of
various kinds
could
serve as models for new
books yet to be
written.
Because
hard-copy material is static
and difficult to modify,
tuto-
rial
material will probably migrate to
online copy plus, perhaps,
books
formatted
for e-book readers such as
the Amazon Kindle, Sony
PRS-505,
and
the like.
It
is possible to envision even
more sophisticated online
training by
means
of virtual environments, avatars,
and 3-D simulations,
although
these
are far in the future as of
2009.
The
bottom line is that tutorial
materials provided by software
ven-
dors
are less than adequate
for training clients.
Fortunately, many
com-
mercial
book publishers and
education companies have
noted this and
are
providing better alternatives, at
least for software with high
usage.
Over
and above vendor and
commercial books, user
associations and
various
web sites have volunteers
who often can answer
questions about
software
applications. Future trends
might include providing user
infor-
mation
via e-books such as the
Amazon Kindle or Sony
PRS-505.
45.
Best Practices for Customer
Support
of
Software Applications
Customer
support of software applications is
almost universally
unsatis-
factory.
A few companies such as
Apple, Lenovo, and IBM have
reasonably
good
reviews for customer
support, but hundreds of
others garner
criticism
for
long wait times and
bad information.
Customer
support is also labor-intensive
and very costly. This is
the
main
reason why it is not very good. On
average it takes about
one
customer
support person for every
10,000 function points in a
software
application.
It also takes one customer
support person for about
every
150
customers. However, as usage
goes up, companies cannot
afford
larger
and larger customer support
teams, so the ratio of
support to
customers
eventually tops 1000 to 1,
which of course means long
wait
times.
Thus, large packages in the 100,000-function point range with 100,000 customers need either an enormous support staff or a smaller staff that makes access very difficult for customers.
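A minimal sketch of these staffing rules of thumb, in Python, is shown below. The ratios (one support person per 10,000 function points, one per 150 customers, and a practical ceiling near 1,000 customers per support person) come from the paragraphs above, while the 100,000-function point, 100,000-customer example application is hypothetical:

    # Rough support-staffing sketch based on the rules of thumb above.
    def support_staffing(function_points, customers, ceiling_ratio=1000):
        by_size = function_points / 10_000       # one person per 10,000 function points
        by_customers = customers / 150           # one person per 150 customers
        ideal = max(by_size, by_customers)       # staff needed for responsive service
        affordable = max(by_size, customers / ceiling_ratio)  # staff once the ratio tops 1000:1
        return ideal, affordable

    ideal, affordable = support_staffing(100_000, 100_000)
    print(round(ideal), round(affordable))       # roughly 667 needed vs. about 100 funded

The gap between the two numbers is, in effect, the long wait time that customers of large packages experience.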
Because
of the high costs and
labor intensity of customer
support, it
is
one of the most common
activities outsourced to countries with
low
labor
costs such as India.
Surprisingly,
small companies with only a
few hundred customers
often
have better customer support
than large companies, due to
the
fact
that their support teams
are not overwhelmed.
A
short-range strategy for
improving customer support is to
improve
quality
so that software is delivered with
fewer bugs. However,
not
many
companies are sophisticated
enough to even know how to
do this.
A
combination of inspections, static
analysis, and testing can
raise defect
removal
efficiency levels up to perhaps 97
percent from today's
averages
of
less than 85 percent.
Releasing software with fewer
bugs or defects
would
yield a significant reduction in
the volume of incoming
requests
for
customer support.
The
author estimates that
reducing delivered bugs by
about 220 would
reduce
customer support staffing by one
person. This is based on
the
assumption
that customer support
personnel answer about 30
calls per
day,
and that each bug will be
found by about 30 customers. In
other
words,
one bug can occupy one day
for a customer support staff
member,
and
there are 220 working
days per year.
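The following short sketch, also in Python, works through this arithmetic. The 85 percent and 97 percent defect removal efficiency levels, the 30 calls per day, the 30 customers per bug, and the 220 working days come from the text; the 10,000-defect total is a purely hypothetical figure used to make the comparison concrete:

    def delivered_defects(total_defects, removal_efficiency):
        """Bugs that reach customers, given a defect removal efficiency (DRE)."""
        return round(total_defects * (1 - removal_efficiency))

    total = 10_000                          # hypothetical lifetime defect total
    at_85 = delivered_defects(total, 0.85)  # about 1,500 bugs reach customers
    at_97 = delivered_defects(total, 0.97)  # about 300 bugs reach customers

    # One support person handles ~30 calls per day, each bug is reported by
    # ~30 customers, and there are ~220 working days per year, so one
    # person-year absorbs roughly 220 delivered bugs.
    bugs_per_support_person_year = 220 * 30 // 30      # = 220

    staff_saved = (at_85 - at_97) / bugs_per_support_person_year
    print(at_85, at_97, round(staff_saved, 1))          # 1500 300 5.5

Even under these rough assumptions, moving from 85 percent to 97 percent defect removal efficiency frees several support staff per 10,000 lifetime defects, which is the economic point of the paragraph above.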
A
more comprehensive long-range
strategy would involve many
dif-
ferent
approaches, including some
that are novel and
innovative:
■ Develop artificial-intelligence virtual support personnel who will serve as the first tier of telephone support. Since live humans are expensive and often poorly trained, virtual personnel could do a much better job. Of course, these avatars would need to be fully stocked with the latest information on bug reports, work-arounds, and major issues.
■ Allow easy e-mail contacts between customers and support organizations. For small companies or small applications, these could be screened by live support personnel. For larger applications or those with millions of customers, some form of artificial-intelligence tool would scan the e-mails and either offer solutions or route them to real personnel for analysis.
■ Standardize HELP information and user's guides so that all software applications provide similar data to users. This would speed up learning and allow users to change software packages with minimal disruptions. Doing this would perhaps trigger the development of new standards by the International Standards Organization (ISO), by the IEEE, and by other standards bodies.
■ For reusable functions and features, such as those used in service-oriented architecture, provide reusable HELP screens and tutorial information as well as reusable source code. As software switches from custom development to assembly from standard components, the tutorial materials for those standard components must be part of the package of reusable artifacts shared among many applications.
46.
Best Practices for
Software
Warranties
and Recalls
Almost
every commercial product
comes with a warranty that
offers repairs
or
replacement for a period of
time if the product should
be defective: appli-
ances,
automobiles, cameras, computers,
optics, and so on. Software
is a
major
exception. Most "software
warranties" explicitly disclaim
fitness
for
use, quality, or causing harm to
consumers. Most software
products
explicitly
deny warranty protection
either "express or
implied."
Some
software vendors may offer
periodic updates and bug
repairs,
but
if the software should fail
to operate or should produce
incorrect
results,
the usual guarantee is
merely to provide another
copy, which
may
have the same flaws.
Usually, the software cannot
be returned and
the
vendor will not refund the
purchase price, much less
fix any damage
that
the software might have
caused such as corrupting
files or leaving
unremovable
traces.
What
passes for software warranties
are part of end
user license
agree-
ments
(EULA),
which users are required to
acknowledge or sign
before
installing
software applications. These EULA
agreements are
extremely
one-sided
and designed primarily to
protect the vendors.
The
reason for this is the
poor quality control of
software applica-
tions,
which has been a major
weakness of the industry for
more than
50
years.
As
this book is being written,
the federal government is
attempting
to
draft a Uniform Computer
Information Transaction Act
(UCITA) as
part
of the Uniform Commercial
Code. UCITA has proven to be
very
controversial,
and some claim it is even
weaker in terms of
consumer
protection
than current EULA practices, if
that is possible.
Because
state
governments can make local
changes, the UCITA may not
even
be
very uniform.
If
software developers introduced
the best practices of
achieving
greater
than 95 percent defect
removal efficiency levels
coupled with
building
software from certified
reusable components, then it
would also
be
possible to create the best
practice fair warranties
that benefit both
parties.
Clauses within such
warranties might
include
■ Vendors would make a full refund of purchase price to any dissatisfied customer within a fixed time period such as 30 days.
■ Software vendors would guarantee that the software would operate in conformance to the information provided in user guides.
■ The vendors would offer free updates and bug repairs for at least a 12-month period after purchase.
■ Vendors would guarantee that software delivered on physical media such as CD or DVD disks would be free of viruses and malware.
Over
and above specific warranty
provisions, other helpful
topics
would
include
■ Methods of reporting bugs or defects to the vendor would be included in all user guides and also displayed in HELP screens.
■ Customer support would be promptly available by phone with less than three minutes of wait time.
■ Responses to e-mail requests for help would occur within 24 business hours of receipt (weekends might be excluded in some cases).
As
of 2009, most EULA agreements
and most software warranties
are
professionally
embarrassing.
47.
Best Practices for Software
Change
Management
After Release
In
theory, software change
management after release of a
software
application
should be almost identical to
change management
before
the
release; that is,
specifications would be updated as
needed, configu-
ration
control would continue, and
customer-reported bugs would
be
added
to the overall bug
database.
In
practice, postrelease change
management is often less
rigorous than
change
management prior to the
initial release. While
configuration
control
of code might continue,
specifications are seldom
kept current.
Also,
small bug repairs and
minor enhancements may occur
that lack
permanent
documentation. As a result, after
perhaps five years of
usage,
the
application no longer has a
full and complete set of
specifications.
Also,
code changes may have occurred that created islands of "dead code" that are no longer reached.
Code comments may be out
of
date.
Complexity as measured using
cyclomatic or essential
complexity
will
probably have gone up, so
changes tend to become
progressively
more
difficult. This situation is
common enough so that for
updates,
many
companies depend primarily on
the tenure of long-term
mainte-
nance
employees, whose knowledge of
the structure of aging
legacy code
is
vital for successful
updates.
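For readers unfamiliar with the measure, cyclomatic complexity is essentially the number of independent paths through a routine, which can be approximated as the count of decision points plus one. The sketch below (Python, illustrative only; real complexity tools build an actual control-flow graph and also compute essential complexity) shows the idea:

    import re

    # Decision keywords for a C-like or Python-like language; extend as needed.
    _DECISIONS = re.compile(r"\b(if|elif|for|while|case|catch|and|or)\b")

    def approx_cyclomatic_complexity(source: str) -> int:
        """McCabe cyclomatic complexity approximated as decision points + 1."""
        return len(_DECISIONS.findall(source)) + 1

    sample = "if x > 0 and y > 0:\n    for item in items:\n        total += item\n"
    print(approx_cyclomatic_complexity(sample))   # 3 decision points -> complexity 4

When such counts keep rising from release to release, it is a signal that the renovation or refactoring discussed below is overdue.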
However,
legacy software systems do
have some powerful tools
that
can
help in bringing out new
versions and even in
developing replace-
ments.
Because the source code
does exist in most cases, it is
possible
to
apply automation to the
source code and extract
hidden business
rules
and algorithms that can
then be carried forward to
replacement
applications
or to renovated legacy applications.
Examples of such
tools
include
but are not limited
to:
■ Complexity analysis tools that can illustrate all paths and branches through code
■ Static analysis tools that can find bugs in legacy code, in selected languages
■ Static analysis tools that can identify error-prone modules for surgical removal
■ Static analysis tools that can identify dead code for removal or isolation
■ Data mining tools that can extract algorithms and business rules from code
■ Code conversion tools that can convert legacy languages into Java or modern languages
■ Function point enumeration tools that can calculate the sizes of legacy applications
■ Renovation workbenches that can assist in handling changes to existing software
■ Automated testing tools that can create new test cases after examining code segments
■ Test coverage tools that can show gaps and omissions from current test case libraries
In
addition to automated tools,
formal inspection of source
code, test
libraries,
and other artifacts of
legacy applications can be
helpful, too,
assuming
the artifacts have been
kept current.
As
the global economy continues
to sink into a serious
recession, keep-
ing
legacy applications running
for several more years
may prove to
have
significant economic value.
However, normal maintenance
and
enhancement
of poorly structured legacy
applications with marginal
quality
is not cost-effective. What is
needed is a thorough analysis of
the
structure
and features of legacy
applications. Since manual
methods are
likely
to be ineffective and costly,
automated tools such as
static analysis
and
data mining should prove to
be valuable allies during
the next few
years
of the recession
cycle.
48.
Best Practices for Software
Maintenance
and
Enhancement
Software
maintenance is more difficult
and complex to analyze
than
software
development because the word
"maintenance" includes so
many
different kinds of activities.
Also, estimating maintenance
and
enhancement
work requires evaluation not
only of the changes
them-
selves,
but also detailed and
complete analysis of the
structure and code
of
the legacy application that
is being modified.
As
of 2009, some 23 different
forms of work are subsumed
under the
single
word "maintenance."
Major
Kinds of Work Performed
Under the Generic
Term
"Maintenance"
1.
Major enhancements (new
features of greater than 50
function
points)
2.
Minor enhancements (new
features of less than 5
function points)
3.
Maintenance (repairing defects
for good will)
4.
Warranty repairs (repairing
defects under formal
contract)
5.
Customer support (responding to
client phone calls or
problem
reports)
6.
Error-prone module removal
(eliminating very troublesome
code
segments)
7.
Mandatory changes (required or
statutory changes)
8.
Complexity or structural analysis
(charting control flow plus
com-
plexity
metrics)
9.
Code restructuring (reducing cyclomatic
and essential
complexity)
10.
Optimization (increasing performance or
throughput)
11.
Migration (moving software
from one platform to
another)
12.
Conversion (changing the
interface or file
structure)
13.
Reverse engineering (extracting
latent design information
from
code)
14.
Reengineering/renovation (transforming
legacy application to
modern
forms)
15.
Dead code removal (removing
segments no longer
utilized)
16.
Dormant application elimination
(archiving unused
software)
17.
Nationalization (modifying software
for international
use)
18.
Mass updates such as Euro or
Year 2000 repairs
19.
Refactoring, or reprogramming,
applications to improve
clarity
20.
Retirement (withdrawing an application
from active service)
21.
Field service (sending
maintenance members to client
locations)
22.
Reporting bugs or defects to
software vendors
23.
Installing updates received
from software vendors
Although
the 23 maintenance topics
are different in many
respects,
they
all have one common feature
that makes a group
discussion pos-
sible:
they all involve modifying
an existing application rather
than
starting
from scratch with a new
application.
Each
of the 23 forms of modifying
existing applications has a
differ-
ent
reason for being carried
out. However, it often
happens that several
of
them take place
concurrently. For example,
enhancements and
defect
repairs
are very common in the
same release of an evolving
application.
There
are also common sequences or
patterns to these modification
activi-
ties.
For example, reverse
engineering often precedes reengineering,
and
the
two occur so often together
as to almost constitute a linked
set. For
releases
of large applications and
major systems, the author
has observed
from
six to ten forms of
maintenance all leading up to
the same release!
In
recent years the Information
Technology Infrastructure
Library
(ITIL)
has begun to focus on many
key issues that are
associated with
maintenance,
such as change management,
reliability, availability,
and
other
topics that are significant
for applications in daily
use by many
customers.
Because
aging software applications
increase in complexity over
time,
it
is necessary to perform some
form of renovation or refactoring
from
time
to time. As of 2009, the
overall set of best
practices for aging
legacy
applications
includes the
following:
■ Use maintenance specialists rather than developers.
■ Consider maintenance outsourcing to specialized maintenance companies.
■ Use maintenance renovation workbenches.
■ Use formal change management procedures.
■ Use formal change management tools.
■ Use formal regression test libraries.
■ Perform automated complexity analysis studies of legacy applications.
■ Search out and eliminate all error-prone modules in legacy applications.
■ Identify all dead code in legacy applications.
■ Renovate or refactor applications prior to major enhancements.
■ Use formal design and code inspections on major updates.
■ Track all customer-reported defects.
■ Track response time from submission to repair of defects.
■ Track response time from submission to completion of change requests.
■ Track all maintenance activities and costs.
■ Track warranty costs for commercial software.
■ Track availability of software to customers.
Because
the effort and costs
associated with maintenance
and
enhancement
of aging software are now
the dominant expense of
the
entire
software industry, it is important to
use state-of-the-art
methods
and
tools for dealing with
legacy applications.
Improved
quality before delivery can
cut maintenance costs.
Since
maintenance
programmers typically fix about 10
bugs per calendar
month,
every reduction in delivered
defects of about 120 could
reduce
maintenance
staffing by one person. Therefore
combinations of defect
prevention,
inspections, static analysis,
and better testing can
reduce
maintenance
costs. This is an important
consideration in a world
facing
a
serious recession as we are in
2009.
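Expressed as a sketch (Python; the 1,200-defect improvement is a hypothetical input, while the 10 bug repairs per month comes from the text), the rule of thumb looks like this:

    # One maintenance programmer repairs ~10 bugs per calendar month, so a
    # person-year absorbs roughly 120 delivered defects.
    bugs_fixed_per_month = 10
    bugs_per_maintainer_year = bugs_fixed_per_month * 12     # = 120

    delivered_defect_reduction = 1_200                        # hypothetical improvement
    maintainers_saved = delivered_defect_reduction / bugs_per_maintainer_year
    print(maintainers_saved)     # 10.0 -- ten fewer maintenance programmers needed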
Some
of the newer approaches
circa 2009 include
maintenance or reno-
vation
workbenches, such as the
tools offered by Relativity
Technologies.
This
workbench also has a new
feature that performs
function point
analysis
with high speed and good
precision. Renovation prior to
major
enhancements
should be a routine
activity.
Since
many legacy applications
contain error-prone modules
that
are
high in complexity and
receive a disproportionate share of
defect
reports,
it is necessary to take corrective
actions before proceeding
with
significant
changes. As a rule of thumb,
less than 5 percent of the
mod-
ules
in large systems will receive
more than 50 percent of
defect reports.
It
is usually impossible to fix such
modules, so once they are
identified,
surgical
removal followed by replacement is
the normal therapy.
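A minimal way to apply this rule of thumb is to rank modules by the share of defect reports each one attracts, as in the sketch below (Python; the module names, the counts, and the per-module 50 percent threshold are hypothetical illustrations of the idea, not data from the text):

    from collections import Counter

    # Hypothetical defect-report log: one entry per customer-reported defect,
    # tagged with the module it was traced to.
    defect_log = ["billing.c", "billing.c", "parser.c", "billing.c",
                  "ui.c", "billing.c", "parser.c", "billing.c"]

    counts = Counter(defect_log)
    total_reports = sum(counts.values())

    # Flag any module that alone attracts at least half of all defect reports;
    # such modules are candidates for surgical removal and replacement.
    error_prone = [module for module, n in counts.items()
                   if n / total_reports >= 0.50]
    print(error_prone)    # ['billing.c']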
As
of 2009, maintenance outsourcing has
become one of the most
popular
forms
of software outsourcing. In general,
maintenance outsource agree-
ments
have been more successful
than development outsource
agreements
and
seem to have fewer instances
of failure and litigation.
This is due in
part
to the sophistication of the
maintenance outsource companies
and in
part
to the fact that existing
software is not prone to
many of the forms of
catastrophic
risk that are troublesome
for large development
projects.
Both
maintenance and development
share a need for using
good
project
management practices, effective
estimating methods, and
very
careful
measurement of productivity and
quality. While
development
outsourcing
ends up in litigation in about 5
percent of contracts,
main-
tenance
outsourcing seems to have
fewer issues and to be less
conten-
tious.
As the economy moves into
recession, maintenance
outsourcing
may
offer attractive economic
advantages.
49.
Best Practices for Updates
and Releases
of
Software Applications
Once
software applications are
installed and being used,
three things
will
happen: (1) bugs will be
found that must be fixed;
(2) new features
will
be added in response to business
needs and changes in laws
and
regulations;
and (3) software vendors
will want to make money
either
by
bringing out new versions of
software packages or by adding
new
features
for a fee. This part of
software engineering is not
well covered
by
the literature. Many bad
practices have sprung up
that are harmful
to
customers and users. Some of
these bad practices
include
■ Long wait times for customer support by telephone.
■ Telephone support that can't be used by customers who have hearing problems.
■ No customer support by e-mail, or very limited support (such as Microsoft).
■ Incompetent customer support when finally reached.
■ Charging fees for customer support, even for reporting bugs.
■ Inadequate methods of reporting bugs to vendors (such as Microsoft).
■ Poor response times to bugs that are reported.
■ Inadequate repairs of bugs that are reported.
■ Stopping support of older versions of software prematurely.
■ Forcing customers to buy new versions.
■ Changing file formats of new versions for arbitrary reasons.
■ Refusing to allow customers to continue using old versions.
■ Warranties that cover only replacement of media such as disks.
■ One-sided agreements that favor only the vendor.
■ Quirky new releases that can't be installed over old releases.
■ Quirky new releases that drop useful features of former releases.
■ Quirky new releases that don't work well with competitive software.
These
practices are so common that
it is not easy to even find
com-
panies
that do customer support and
new releases well, although
there
are
a few. Therefore the
following best practices are
more theoretical
than
real as of 2009:
■ Ideally, known bugs and problems for applications should be displayed on a software vendor's web site.
■ Bug reports and requests for assistance should be easily handled by e-mail. Once reported, responses should be returned within 48 hours.
■ Reaching customer support by telephone should not take more than 5 minutes.
■ When customer support is reached by phone, at least 60 percent of problems should be resolved by the first tier of support personnel.
■ Reaching customer support for those with hearing impairments should be possible.
■ Fee-based customer support should exclude bug reports and problems caused by vendors.
■ Bug repairs should be self-installing when delivered to clients.
■ New versions and new features should not require manual uninstalls of prior versions.
■ When file formats are changed, conversion to and from older formats should be provided free of charge by vendors.
■ Support of applications with thousands of users should not be arbitrarily withdrawn.
■ Users should not be forced to buy new versions annually unless they wish to gain access to the new features.
In
general, mainframe vendors of
expensive software
packages
(greater
than $100,000) are better at
customer support than are
the
low-end,
high-volume vendors of personal
computer and
Macintosh
packages.
However, poor customer
support, inept customer
support,
sluggish
bug repairs, and forced
migration to new products or
releases of
questionable
value remain endemic
problems of the software
industry.
50.
Best Practices for
Terminating or
Withdrawing
Legacy Applications
Large
software applications tend to
have surprisingly long life
expectan-
cies.
As of 2009, some large
systems such as the U.S.
air traffic control
system
have been in continuous
usage for more than 30
years. Many
large
internal applications in major
companies have been in use
more
than
20 years.
Commercial
applications tend to have
shorter life expectancies
than
information
systems or systems software,
since vendors bring
out
new
releases and stop supporting
old releases after a period
of years.
Microsoft,
Intuit, and Symantec, for
example, are notorious for
with-
drawing
support for past versions of
software even if they still
have
millions
of users and are more
stable than the newer
versions.
Intuit,
for example, deliberately
stops support for old
versions of
Quicken
after a few years. Microsoft
is about to stop support for
Windows
XP
even though Vista is still
somewhat unstable and
unpopular. Even
worse,
Symantec, Intuit, and
Microsoft tend to change
file formats so
that
records produced on new
versions can't be used on
old versions.
Repeated
customer outrage finally got
the attention of Microsoft,
so
that
they usually provide some
kind of conversion method. Intuit
and
Symantec
are not yet at that
point.
Nonetheless
at some point aging legacy
applications will need
replace-
ment.
Sometimes the hardware on
which they operate will need
replace-
ment,
too.
For
small PC and Macintosh
applications, replacement is a
minor
inconvenience
and a noticeable but not
unbearable expense. However,
for
massive
mainframe software or heavy-duty
systems software in the
10,000
function
point range, replacement can
be troublesome and
expensive.
If
the software is custom-built
and has unique features,
replacement
will
probably require development of a
new application with all of
the
original
features, plus whatever new
features appear to be useful.
The
patient-record
system of the Veterans
Administration is an example of
an
aging legacy system that
has no viable commercial
replacements.
An
additional difficulty with retiring or
replacing legacy systems
is
that
often the programming
languages are "dead" and no
longer have
working
compilers or interpreters, to say
nothing of having very
few
programmers
available.
Best
practices for retiring aging
systems (assuming they still
are in
use)
include the
following:
■ Mine the application to extract business rules and algorithms needed for a new version.
■ Survey all users to determine the importance of the application to business operations.
■ Do extensive searches for similar applications via the Web or with consultants.
■ Attempt to stabilize the legacy application so that it stays useful as the new one is being built.
■ Consider whether service-oriented architecture (SOA) may be suitable.
■ Look for certified sources of reusable material.
■ Consider the possibility of automated language conversion.
■ Utilize static analysis tools if the language(s) are suitable.
Make
no mistake, unless an application
has zero users,
replacement
and
withdrawal are likely to
cause trouble.
Although
outside the scope of this
book, it is significant that
the life
expectancies
of all forms of storage are
finite. Neither magnetic
disks
nor
solid-state devices are
likely to remain in fully
operational mode for
more
than about 25 years.
Summary
and Conclusions
The
most obvious conclusions are the following six:
First,
software is not a "one size
fits all" occupation.
Multiple practices
and
methods are needed.
Second,
poor measurement practices
and a lack of solid
quantified
data
have made evaluating
practices difficult. Fortunately,
this situa-
tion
is improving now that
benchmark data is readily
available.
Third,
given the failure rates
and number of cost and
schedule over-
runs,
normal development of software is
not economically
sustainable.
Switching
from custom development to
construction using
certified
reusable
components is needed to improve
software economics.
Fourth,
effective quality control is a
necessary precursor that
must
be
accomplished before software
reuse can be effective.
Combinations
of
defect prevention method,
inspections, static analysis,
testing, and
quality
assurance are needed.
Fifth,
as security threats against
software increase in numbers
and
severity,
fundamental changes are
needed in software
architecture,
design,
coding practices, and
defensive methods.
Sixth,
large software applications
last for 25 years or more.
Methods
and
practices must support not
only development, but also
deployment
and
many years of maintenance
and enhancements.
Readings
and References
Chapter
2 is an overview of many different
topics. Rather than provide
a
conventional
reference list, it seems
more useful to show some of
the key
books
and articles available that
deal with the major topics
discussed
in
the chapter.
Project
Management, Planning,
Estimating,
Risk,
and Value
Analysis
Boehm,
Barry Dr. Software
Engineering Economics. Englewood
Cliffs, NJ: Prentice
Hall,
1981.
Booch
Grady. Object
Solutions: Managing the
Object-Oriented Project. Reading,
MA:
Addison
Wesley, 1995.
Brooks,
Fred. The
Mythical Man-Month. Reading,
MA: Addison Wesley, 1974,
rev. 1995.
Charette,
Bob. Software
Engineering Risk Analysis
and Management. New
York:
McGraw-Hill,
1989.
Charette,
Bob. Application
Strategies for Risk
Management. New
York: McGraw-Hill,
1990.
Chrissis, Mary Beth; Konrad, Mike; Shrum, Sandy; CMMI: Guidelines for Process Integration and Product Improvement;
Second Edition; Addison
Wesley, Reading,
MA;
2006; 704 pages.
Cohn,
Mike. Agile
Estimating and Planning. Englewood
Cliffs, NJ: Prentice Hall
PTR,
2005.
DeMarco,
Tom. Controlling
Software Projects. New
York: Yourdon Press,
1982.
Ewusi-Mensah,
Kweku. Software
Development Failures Cambridge,
MA: MIT Press,
2003.
Galorath,
Dan. Software
Sizing, Estimating, and Risk
Management: When
Performance
Is
Measured Performance Improves.
Philadelphia:
Auerbach Publishing,
2006.
Glass,
R.L. Software
Runaways: Lessons Learned
from Massive Software
Project
Failures.
Englewood
Cliffs, NJ: Prentice Hall,
1998.
Harris,
Michael, David Herron, and
Stasia Iwanicki. The
Business Value of
IT:
Managing
Risks, Optimizing Performance,
and Measuring Results. Boca
Raton, FL:
CRC
Press (Auerbach),
2008.
Humphrey,
Watts. Managing
the Software Process. Reading,
MA: Addison Wesley,
1989.
Johnson,
James, et al. The
Chaos Report. West
Yarmouth, MA: The Standish
Group,
2000.
Jones,
Capers. Assessment
and Control of Software
Risks.:
Prentice Hall, 1994.
Jones,
Capers. Estimating
Software Costs. New
York: McGraw-Hill,
2007.
Jones,
Capers. "Estimating and
Measuring Object-Oriented Software."
American
Programmer,
1994.
Jones,
Capers. Patterns
of Software System Failure
and Success. Boston:
International
Thomson
Computer Press, December
1995.
Jones,
Capers. Program
Quality and Programmer
Productivity. IBM
Technical Report
TR
02.764. San Jose, CA:
January 1977.
Jones,
Capers. Programming
Productivity. New
York: McGraw-Hill,
1986.
Jones,
Capers. "Why Flawed Software
Projects are not Cancelled
in Time." Cutter
IT
Journal,
Vol.
10, No. 12 (December 2003):
12–17.
Jones,
Capers. Software
Assessments, Benchmarks, and Best
Practices. Boston:
Addison
Wesley
Longman, 2000.
Jones,
Capers. "Software Project
Management Practices: Failure
Versus Success."
Crosstalk,
Vol.
19, No. 6 (June 2006):
4–8.
Laird,
Linda M. and Carol M.
Brennan. Software
Measurement and Estimation:
A
Practical
Approach. Hoboken,
NJ: John Wiley & Sons,
2006.
McConnell,
Steve. Software
Estimating: Demystifying the
Black Art. Redmond,
WA:
Microsoft
Press, 2006.
Park,
Robert E., et al. Software
Cost and Schedule Estimating
- A Process Improvement
Initiative.
Technical
Report CMU/SEI 94-SR-03. Pittsburgh,
PA: Software
Engineering
Institute,
May 1994.
Park,
Robert E., et al. Checklists
and Criteria for Evaluating
the Costs and
Schedule
Estimating
Capabilities of Software Organizations
Technical
Report CMU/SEI
95-SR-005.
Pittsburgh, PA: Software
Engineering Institute, Carnegie-Mellon
Univ.,
January
1995.
Roetzheim,
William H. and Reyna A.
Beasley. Best
Practices in Software Cost
and
Schedule
Estimation. Saddle
River, NJ: Prentice Hall PTR,
1998.
Strassmann,
Paul. Governance
of Information Management: The
Concept of an Information
Constitution,
Second
Edition. (eBook) Stamford, CT:
Information Economics Press,
2004.
Strassmann,
Paul. Information
Productivity. Stamford,
CT: Information Economics
Press,
1999.
Strassmann,
Paul. Information
Payoff. Stamford,
CT: Information Economics Press,
1985.
Strassmann,
Paul. The
Squandered Computer. Stamford,
CT: Information Economics
Press,
1997.
Stukes,
Sherry, Jason Deshoretz,
Henry Apgar, and Ilona
Macias. Air
Force Cost
Analysis
Agency Software Estimating
Model Analysis. TR-9545/008-2
Contract
F04701-95-D-0003,
Task 008. Management
Consulting & Research, Inc.,
Thousand
Oaks,
CA 91362. September 30,
1996.
Symons,
Charles R. Software
Sizing and Estimating--Mk II
FPA (Function
Point
Analysis).
Chichester,
UK: John Wiley & Sons,
1991.
Wellman,
Frank. Software
Costing: An Objective Approach to
Estimating and
Controlling
the
Cost of Computer Software.
Englewood
Cliffs, NJ: Prentice Hall,
1992.
Whitehead,
Richard. Leading
a Development Team. Boston:
Addison Wesley, 2001.
Yourdon,
Ed. Death
March - The Complete
Software Developer's Guide to
Surviving
"Mission
Impossible" Projects. Upper
Saddle River, NJ: Prentice Hall
PTR, 1997.
Yourdon,
Ed. Outsource:
Competing in the Global
Productivity Race. Upper
Saddle
River,
NJ: Prentice Hall PTR,
2005.
Measurements
and Metrics
Abran,
Alain and Reiner R. Dumke.
Innovations
in Software Measurement. Aachen,
Germany:
Shaker-Verlag, 2005.
Abran,
Alain, Manfred Bundschuh,
Reiner Dumke, Christof
Ebert, and Horst
Zuse.
["article
title"?]Software
Measurement News, Vol.
13, No. 2 (Oct. 2008).
(periodical).
Bundschuh,
Manfred and Carol Dekkers.
The
IT Measurement Compendium. Berlin:
Springer-Verlag,
2008.
Chidamber,
S. R. and C. F. Kemerer. "A Metrics
Suite for Object-Oriented
Design," IEEE
Trans.
On Software Engineering, Vol.
SE20, No. 6 (June 1994):
476–493.
Dumke,
Reiner, Rene Braungarten,
Günter Büren, Alain Abran,
Juan J. Cuadrado-Gallego,
(editors).
Software
Process and Product
Measurement. Berlin:
Springer-Verlag, 2008.
Ebert,
Christof and Reiner Dumke.
Software
Measurement: Establish,
Extract,
Evaluate,
Execute. Berlin:
Springer-Verlag, 2007.
Garmus,
David & David Herron.
Measuring
the Software Process: A Practical
Guide to
Functional
Measurement. Englewood
Cliffs, NJ: Prentice Hall,
1995.
Garmus,
David and David Herron.
Function
Point Analysis Measurement
Practices for
Successful
Software Projects. Boston:
Addison Wesley Longman,
2001.
International
Function Point Users Group.
IFPUG
Counting Practices Manual,
Release
4.
Westerville, OH: April
1995.
International
Function Point Users Group
(IFPUG). IT
Measurement Practical
Advice
from
the Experts. Boston:
Addison Wesley Longman,
2002.
Jones,
Capers. Applied
Software Measurement, Third
Edition.
New
York: McGraw-Hill,
2008.
Jones,
Capers. "Sizing Up Software."
Scientific
American Magazine, Vol.
279, No. 6
(December
1998): 104–111.
Jones
Capers. A
Short History of the Lines
of Code Metric, Version
4.0. (monograph)
Narragansett,
RI: Capers Jones & Associates
LLC, May 2008.
Kemerer,
C. F. "Reliability of Function Point
Measurement A Field
Experiment."
Communications
of the ACM, Vol.
36, 1993: 85–97.
Parthasarathy,
M. A. Practical
Software Estimation Function
Point Metrics for
Insourced
and Outsourced Projects. Upper
Saddle River, NJ: Infosys
Press, Addison
Wesley,
2007.
Putnam,
Lawrence H. Measures
for Excellence -- Reliable
Software On Time,
Within
Budget.
Englewood
Cliffs, NJ: Yourdon Press
Prentice Hall, 1992.
Putnam,
Lawrence H. and Ware Myers.
Industrial
Strength Software
Effective
Management
Using Measurement. Los
Alamitos, CA: IEEE Press,
1997.
Stein,
Timothy R. The
Computer System Risk
Management Book and
Validation Life
Cycle.
Chico,
CA: Paton Press,
2006.
Stutzke,
Richard D. Estimating
Software-Intensive Systems. Upper
Saddle River, NJ:
Addison
Wesley, 2005.
Architecture,
Requirements, and Design
Ambler,
S. Process
Patterns Building Large-Scale
Systems Using Object
Technology.
Cambridge
University Press, SIGS
Books, 1998.
Artow,
J. and I. Neustadt. UML
and the Unified Process. Boston:
Addison Wesley, 2000.
Bass,
Len, Paul Clements, and
Rick Kazman. Software
Architecture in Practice. Boston:
Addison
Wesley, 1997.
Berger,
Arnold S. Embedded
Systems Design: An Introduction to
Processes, Tools,
and
Techniques.: CMP
Books, 2001.
Booch,
Grady, Ivar Jacobsen, and
James Rumbaugh. The
Unified Modeling
Language
User
Guide, Second
Edition. Boston: Addison
Wesley, 2005.
Cohn,
Mike. User
Stories Applied: For Agile
Software Development. Boston:
Addison
Wesley,
2004.
Fernandini,
Patricia L. A
Requirements Pattern Succeeding in
the Internet Economy.
Boston:
Addison Wesley, 2002.
Gamma,
Erich, Richard Helm, Ralph
Johnson, and John Vlissides.
Design
Patterns:
Elements
of Reusable Object Oriented
Design. Boston:
Addison Wesley, 1995.
Inmon
William H., John Zachman,
and Jonathan G. Geiger.
Data
Stores, Data
Warehousing,
and the Zachman Framework.
New
York: McGraw-Hill,
1997.
Marks,
Eric and Michael Bell.
Service-Oriented
Architecture (SOA): A Planning
and
Implementation
Guide for Business and
Technology. New
York: John Wiley & Sons,
2006.
Martin,
James & Carma McClure.
Diagramming
Techniques for Analysts
and
Programmers.
Englewood
Cliffs, NJ: Prentice Hall,
1985.
Orr,
Ken. Structured
Requirements Definition. Topeka,
KS: Ken Orr and
Associates, Inc,
1981.
Robertson,
Suzanne and James Robertson.
Mastering
the Requirements Process, Second
Edition.
Boston: Addison Wesley,
2006.
Warnier,
Jean-Dominique. Logical
Construction of Systems. London:
Van Nostrand
Reinhold.
Wiegers,
Karl E. Software
Requirements, Second
Edition. Bellevue, WA:
Microsoft Press,
2003.
Software
Quality Control
Beck,
Kent. Test-Driven
Development. Boston:
Addison Wesley, 2002.
Chelf,
Ben and Raoul Jetley.
"Diagnosing Medical Device
Software Defects Using
Static
Analysis."
Coverity Technical Report.
San Francisco: 2008.
Chess,
Brian and Jacob West.
Secure
Programming with Static
Analysis. Boston:
Addison
Wesley, 2007.
Cohen,
Lou. Quality
Function Deployment How to Make
QFD Work for You.
Upper
Saddle
River, NJ: Prentice Hall,
1995.
Crosby,
Philip B. Quality
is Free. New
York: New American Library,
Mentor Books, 1979.
Everett,
Gerald D. and Raymond
McLeod. Software
Testing. Hoboken,
NJ: John Wiley &
Sons,
2007.
Gack,
Gary. Applying
Six Sigma to Software
Implementation Projects. http://software
.isixsigma.com/library/content/c040915b.asp.
Gilb,
Tom and Dorothy Graham.
Software
Inspections. Reading,
MA: Addison Wesley,
1993.
Hallowell,
David L. Six
Sigma Software Metrics, Part
1.
http://software.isixsigma.com/
library/content/03910a.asp.
International
Organization for Standards.
"ISO 9000 / ISO 14000."
http://www.iso.org/
iso/en/iso9000-14000/index.html.
Jones,
Capers. Software
Quality Analysis and
Guidelines for Success. Boston:
International
Thomson Computer Press,
1997.
Kan,
Stephen H. Metrics
and Models in Software
Quality Engineering, Second
Edition.
Boston:
Addison Wesley Longman,
2003.
Land,
Susan K., Douglas B. Smith,
John Z. Walz. Practical
Support for Lean Six
Sigma
Software
Process Definition: Using IEEE Software
Engineering Standards.:
Wiley-
Blackwell,
2008.
Mosley,
Daniel J. The
Handbook of MIS Application Software
Testing. Englewood
Cliffs,
NJ:
Yourdon Press, Prentice
Hall, 1993.
Myers,
Glenford. The
Art of Software Testing. New
York: John Wiley & Sons,
1979.
Nandyal
Raghav. Making
Sense of Software Quality Assurance.
New
Delhi: Tata
McGraw-Hill
Publishing, 2007.
Radice,
Ronald A. High
Quality Low Cost Software
Inspections. Andover,
MA:
Paradoxicon
Publishing, 2002.
Wiegers,
Karl E. Peer
Reviews in Software A Practical
Guide. Boston:
Addison Wesley
Longman,
2002.
Software
Security, Hacking,
and
Malware Prevention
Acohido,
Byron and John Swartz.
Zero
Day Threat: The Shocking
Truth of How Banks
and
Credit Bureaus Help Cyber
Crooks Steal Your Money
and Identity.:
Union
Square
Press, 2008.
Allen,
Julia, Sean Barnum, Robert
Ellison, Gary McGraw, and
Nancy Mead. Software
Security:
A Guide for Project
Managers. (An
SEI book sponsored by the
Department
of
Homeland Security) Boston:
Addison Wesley Professional,
2008.
Anley,
Chris, John Heasman, Felix
Lindner, and Gerardo
Richarte. The
Shellcoders
Handbook:
Discovering and Exploiting
Security Holes. New
York: Wiley, 2007.
Chess,
Brian. Secure
Programming with Static
Analysis. Boston:
Addison Wesley
Professional,
2007.
Dowd,
Mark, John McDonald, and
Justin Schuh. The
Art of Software
Security
Assessment:
Identifying and Preventing
Software Vulnerabilities. Boston:
Addison
Wesley
Professional, 2006.
Ericson,
John. Hacking:
The Art of Exploitation, Second
Edition.: No Starch Press,
2008.
Gallager,
Tom, Lawrence Landauer, and
Brian Jeffries. Hunting
Security Bugs.
Redmond,
WA: Microsoft Press,
2006.
Hamer-Hodges,
Ken. Authorization
Oriented Architecture Open
Application
Networking
and Security in the 21st
Century. Philadelphia:
Auerbach Publications,
to
be published in December
2009.
Hogland,
Greg and Gary McGraw.
Exploiting
Software: How to Break Code.
Boston:
Addison
Wesley Professional,
2004.
Hogland,
Greg and Jamie Butler.
Rootkits:
Exploiting the Windows
Kernal. Boston:
Addison
Wesley Professional,
2005.
Howard,
Michael and Steve Lippner.
The
Security Development Lifecycle.
Redmond,
WA:
Microsoft Press,
2006.
Howard,
Michael and David LeBlanc.
Writing
Secure Code. Redmond,
WA: Microsoft
Press,
2003.
Jones,
Andy and Debi Ashenden.
Risk
Management for Computer
Security: Protecting
Your
Network and Information
Assets.:
Butterworth-Heinemann, 2005.
Landoll,
Douglas J. The
Security Risk Assessment
Handbook: A Complete Guide
for
Performing
Security Risk Assessments. Boca
Raton, FL: CRC Press (Auerbach),
2005.
McGraw,
Gary. Software
Security Building Security
In. Boston:
Addison Wesley
Professional,
2006.
Rice,
David: Geekonomics:
The Real Cost of Insecure
Software.
Boston: Addison
Wesley
Professional,
2007.
Scambray,
Joel. Hacking
Exposed Windows: Microsoft
Windows Security Secrets
and
Solutions,
Third
Edition. New York:
McGraw-Hill, 2007.
------
Hacking
Exposed Web Applications,
Second
Edition. New York:
McGraw-Hill, 2006.
Sherwood,
John, Andrew Clark, and
David Lynas. Enterprise
Security Architecture: A
Business-Driven
Approach.: CMP,
2005.
Shostack,
Adam and Andrews Stewart.
The
New School of Information
Security. Boston:
Addison
Wesley Professional,
2008.
Skudis,
Edward and Tom Liston.
Counter
Hack Reloaded: A Step-by-Step
Guide to
Computer
Attacks and Effective
Defenses. Englewood
Cliffs, NJ: Prentice Hall PTR,
2006.
Skudis,
Edward and Lenny Zeltzer.
Malware:
Fighting Malicious Code. Englewood
Cliffs,
NJ: Prentice Hall PTR,
2003.
Stuttard,
Dafydd and Marcus Pinto.
The
Web Application Hackers
Handbook:
Discovering
and Exploiting Security
Flaws.,
New York: Wiley,
2007.
Szor,
Peter. The
Art of Computer Virus
Research and Defense. Boston:
Addison Wesley
Professional,
2005.
Thompson,
Herbert and Scott Chase.
The
Software Vulnerability Guide.
Boston:
Charles
River Media, 2005.
Viega,
John and Gary McGraw.
Building
Secure Software: How to Avoid
Security
Problems
the Right Way. Boston:
Addison Wesley Professional,
2001.
Whittaker,
James A. and Herbert H.
Thompson. How
to Break Software
Security.
Boston:
Addison Wesley Professional,
2003.
Wysopal,
Chris, Lucas Nelson, Dino
Dai Zovi, and Elfriede
Dustin. The
Art of Software
Security
Testing: Identifying Software
Security Flaws. Boston:
Addison Wesley
Professional,
2006.
Software
Engineering and Programming
Barr,
Michael and Anthony Massa.
Programming
Embedded Systems: With C and
GNU
Development
Tools.:
O'Reilly Media, 2006.
Beck,
K. Extreme
Programming Explained: Embrace
Change. Boston:
Addison Wesley, 1999.
Bott,
Frank, A. Coleman, J. Eaton,
and D. Roland. Professional
Issues in Software
Engineering.:
Taylor & Francis,
2000.
Glass,
Robert L. Facts
and Fallacies of Software
Engineering (Agile
Software
Development).
Boston:
Addison Wesley, 2002.
Hans,
Professor van Vliet.
Software
Engineering Principles and
Practices, Third
Edition.
London, New York: John
Wiley & Sons,
2008.
Hunt,
Andrew and David Thomas.
The
Pragmatic Programmer. Boston:
Addison Wesley,
1999.
Jeffries,
R., et al. Extreme
Programming Installed. Boston:
Addison Wesley, 2001.
Marciniak,
John J. (Editor). Encyclopedia
of Software Engineering (two
volumes). New
York:
John Wiley & Sons,
1994.
McConnell,
Steve. Code
Complete. Redmond,
WA: Microsoft Press,
1993.
Morrison,
J. Paul. Flow-Based
Programming: A New Approach to
Application
Development.
New
York: Van Nostrand Reinhold,
1994.
Pressman,
Roger. Software
Engineering A Practitioner's
Approach, Sixth
Edition. New
York:
McGraw-Hill, 2005.
Sommerville,
Ian. Software
Engineering, Seventh
Edition. Boston: Addison
Wesley, 2004.
Stephens
M. and D. Rosenberg. Extreme
Programming Refactored: The Case
Against
XP.
Berkeley,
CA: Apress L.P.,
2003.
Software
Development Methods
Boehm,
Barry. "A Spiral Model of
Software Development and
Enhancement."
Proceedings
of the Int. Workshop on
Software Process and
Software Environments.
ACM
Software Engineering Notes
(Aug.
1986): 22–42.
Cockburn,
Alistair. Agile
Software Development. Boston:
Addison Wesley, 2001.
Cohen,
D., M. Lindvall, and P.
Costa. "An Introduction to
agile methods." Advances
in
Computers,
New
York: Elsevier Science,
2004.
Highsmith,
Jim. Agile
Software Development Ecosystems.
Boston:
Addison Wesley,
2002.
Humphrey,
Watts. TSP
Leading a Development Team.
Boston:
Addison Wesley, 2006.
Humphrey,
Watts. PSP:
A Self-Improvement Process for Software
Engineers. Upper
Saddle
River, NJ: Addison Wesley,
2005.
Krutchen,
Phillippe. The
Rational Unified Process An
Introduction. Boston:
Addison
Wesley,
2003.
Larman,
Craig and Victor Basili.
"Iterative and Incremental
Development A Brief
History."
IEEE
Computer Society (June
2003): 47–55.
Love,
Tom. Object
Lessons. New
York: SIGS Books,
1993.
Martin,
Robert. Agile
Software Development: Principles,
Patterns, and Practices. Upper
Saddle
River, NJ: Prentice Hall,
2002.
Mills,
H., M. Dyer, and R. Linger.
"Cleanroom Software Engineering."
IEEE
Software,
4,
5
(Sept. 1987):
19–25.
Paulk
Mark, et al. The
Capability Maturity Model
Guidelines for Improving
the
Software
Process. Reading,
MA: Addison Wesley,
1995.
Rapid
Application Development.
http://en.wikipedia.org/wiki/Rapid_
application_development.
Stapleton,
J. DSDM
Dynamic System Development
Method in Practice. Boston:
Addison
Wesley, 1997.
Software
Deployment, Customer
Support,
and Maintenance
Arnold,
Robert S. Software
Reengineering. Los
Alamitos, CA: IEEE Computer
Society
Press,
1993.
Arthur,
Lowell Jay. Software
Evolution The Software
Maintenance Challenge. New
York:
John Wiley & Sons,
1988.
Gallagher,
R. S. Effective
Customer Support. Boston:
International Thomson
Computer
Press,
1997.
Parikh,
Girish. Handbook
of Software Maintenance. New
York: John Wiley &
Sons,
1986.
Pigoski,
Thomas M. Practical
Software Maintenance Best
Practices for Managing
Your
Software
Investment. Los
Alamitos, CA: IEEE Computer
Society Press, 1997.
Sharon,
David. Managing
Systems in Transition A Pragmatic
View of Reengineering
Methods.
Boston:
International Thomson Computer
Press, 1996.
Takang,
Armstrong and Penny Grubh.
Software
Maintenance Concepts and
Practice.
Boston:
International Thomson Computer
Press, 1997.
Ulrich,
William M. Legacy
Systems: Transformation Strategies.
Upper
Saddle River, NJ:
Prentice
Hall, 2002.
Social
Issues in Software Engineering
Brooks,
Fred. The
Mythical Manmonth, Second
Edition. Boston: Addison
Wesley, 1995.
DeMarco,
Tom. Peopleware:
Productive Projects and
Teams. New
York: Dorset House,
1999.
Glass,
Robert L. Software
Creativity, Second
Edition. Atlanta, GA:
developer.*books, 2006.
Humphrey,
Watts. Winning
with Software: An Executive
Strategy. Boston:
Addison
Wesley,
2002.
Johnson,
James, et al. The
Chaos Report. West
Yarmouth, MA: The Standish
Group,
2007.
Jones,
Capers. "How Software
Personnel Learn New Skills,"
Sixth Edition
(monograph).
Narragansett,
RI: Capers Jones & Associates
LLC, July 2008.
Jones,
Capers. "Conflict and
Litigation Between Software
Clients and
Developers"
(monograph).
Narragansett, RI: Software Productivity
Research, Inc., 2008.
Jones,
Capers. "Preventing Software
Failure: Problems Noted in
Breach of Contract
Litigation."
Narragansett, RI: Capers Jones &
Associates LLC, 2008.
Krasner,
Herb. "Accumulating the Body
of Evidence for the Payoff
of Software Process
Improvement
1997" Austin, TX:
Krasner Consulting.
Kuhn,
Thomas. The
Structure of Scientific Revolutions.
University
of Chicago Press,
1996.
Starr,
Paul. The
Social Transformation of American
Medicine.:
Basic Books Perseus
Group,
1982.
Weinberg,
Gerald M. The
Psychology of Computer Programming.
New
York: Van
Nostrand
Reinhold, 1971.
Weinberg,
Gerald M. Becoming
a Technical Leader. New
York: Dorset House,
1986.
Yourdon,
Ed. Death
March The Complete
Software Developer's Guide to
Surviving
"Mission
Impossible" Projects. Upper
Saddle River, NJ: Prentice Hall
PTR, 1997.
Zoellick,
Bill. CyberRegs
A Business Guide to Web
Property, Privacy, and
Patents.
Boston:
Addison Wesley, 2002.
Web
Sites
There
are hundreds of software
industry and professional
associations.
Most
have a narrow focus. Most
are more or less isolated
and have no
contact
with similar associations. Exceptions to
this rule include
the
various
software process improvement
network (SPIN) groups and
the
various
software metrics
associations.
This
partial listing of software
organizations and web sites
is to facili-
tate
communication and sharing of
data across both
organization and
national
boundaries. Software is a global
industry. Problems occur
from
the
first day of requirements to
the last day of usage,
and every day in
between.
Therefore mutual cooperation
across industry and
technical
boundaries
would benefit software and
help it toward becoming a
true
profession
rather than a craft of
marginal competence.
What
might be useful for the
software industry would be
reciprocal
memberships
among the major professional
associations along the
lines
of
the American Medical
Association. There is a need
for an umbrella
organization
that deals with all aspects
of software as a profession, as
does
the AMA for medical
practice.
American
Electronics Association (AEA)
www.aeanet.org
(may merge with
ITAA)
American
Society for Quality www.ASQ.org
Anti-Phishing
Working Group www.antiphishing.org
Association
for Software Testing www.associationforsoftwaretesting.org
Association
of Computing Machinery www.ACM.org
Association
of Competitive Technologies (ACT)
www.actonline.org
Association
of Information Technology Professionals
www.aitp.org
Brazilian
Function Point Users Group
www.BFPUG.org
Business
Application Software Developers
Association www.basda.org
Business
Software Alliance (BSA) www.bsa.org
Center
for Internet Security www.cisecurity.org
Center
for Hybrid and Embedded
Software Systems (CHESS) http://chess.eecs
.berkeley.edu
China
Software Industry Association
www.CSIA.org
Chinese
Software Professional Association
www.CSPA.com
Computing
Technology Industry Association
(CTIA) www.comptia.org
Embedded
Software Association (ESA)
www.esofta.com
European
Design and Automation
Association (EDAA) www.edaa.com
Finnish
Software Measurement Association
www.fisma.fi
IEEE
Computer Society www.computer.org
Independent
Computer Consultants Association
(ICCA) www.icca.org
Information
Technology Association of America
(ITAA) www.itaa.org
(may merge
with
AEA)
Information
Technology Metrics and
Productivity Institute
(ITMPI)
www.ITMPI.org
InfraGuard
www.InfraGuard.net
Institute
for International Research (IIR)
eee.irusa.com
Institute
of Electrical and Electronics
Engineers (IEEE) www.IEEE.org
International
Association of Software Architects
www.IASAHOME.org
International
Function Point Users Group
(IFPUG) www.IFPUG.org
International
Institute of Business Analysis
www.IIBA.org
International
Software Benchmarking Standards
Group (ISBSG) www.ISBSG.org
Japan
Function Point Users Group
www.jfpug.org
Linux
Professional Institute www.lpi.org
National
Association of Software and
Service Companies
(India)
www.NASCOM.in
Netherlands
Software Metrics Association
www.NESMA.org
Process
Fusion www.process-fusion.com
Programmers'
Guild www.programmersguild.org
Project
Management Institute www.PMI.org
Russian
Software Development Organization
(RUSSOFT) www.russoft.org
Society
of Information Management (SIM)
www.simnet.org
Software
and Information Industry
Association www.siia.net
Software
Engineering Body of Knowledge
www.swebok.org
Software
Engineering Institute (SEI)
www.SEI.org
Software
Productivity Research (SPR)
www.SPR.com
Software
Publishers Association (SPA)
www.spa.org
United
Kingdom Software Metrics
Association www.UKSMA.org
U.S.
Internet Industry Association
(USIIA) www.usiia.org
Women
in Technology International www.witi.com