Chapter 7

Requirements, Business Analysis, Architecture, Enterprise Architecture, and Design
Introduction
Before any code can be created for a software application, it is necessary to define the features, scope, structure, and user interfaces that will be developed. It is also necessary to define the methods of delivery of those features, and the platforms on which the application will operate. In addition, targets and goals for the application must be defined in terms of performance, security, reliability, and a number of other topics.

These various issues are spread among a number of documents and plans that include requirements, business analysis, architecture, and design. Each of these can be subset into several topical segments and subdocuments.
Although a number of templates and models exist for each kind of document, no methods have proven to be totally successful. Even after more than 60 years of software, a number of common problems still occur for almost all major software applications:
1. Requirements grow and change at rates in excess of 1 percent per calendar month.
2. Few applications include more than 80 percent of user requirements in the first release.
3. Some requirements are dangerous or "toxic" and should not be included.
4. Some applications are overstuffed with extraneous features no one asked for.
5. Most software applications are riddled with security vulnerabilities.
6. Errors in requirements and design cause many high-severity bugs.
7. Effective methods such as requirement and design inspections are seldom used.
8. Standard, reusable requirements and designs are not widely available.
9. Mining legacy applications for "lost" business requirements seldom occurs.
10. The volume of paper documents may be too large for human understanding.
These ten problems are endemic to the software industry. Unlike the design of physical structures such as aircraft, boats, buildings, or medical equipment, software does not utilize effective and proven design methods and standard document formats. In other words, if a reader picks up the requirements or specifications for two different software applications, the contents and format are likely to be very different.

These differences make validation difficult because without standard and common structures, there are far too many variations to allow easy editing or error identification. Automated verification of requirements and design is theoretically possible, but beyond the state of the art as of 2009. Formal inspections of requirements and other documents are effective, but of course manual inspections are slower than automated verification.
There are also numerous "languages" for representing requirement and design features. These include use-cases, user stories, decision tables, fishbone diagrams, state-change diagrams, entity-relationship diagrams, executable English, normal English, the Unified Modeling Language (UML), and perhaps 30 other flavors of graphical representation (flowcharts, Nassi-Shneiderman charts, data-flow diagrams, HIPO diagrams, etc.). For quality requirements, there are also special diagrams associated with quality function deployment (QFD).
The existence of so many representation techniques indicates that no perfect representation method has yet been developed. If any one of these methods were clearly superior to the others, then no doubt it would become a de facto standard used for all software projects. So far as can be determined, no representation method is used by more than perhaps 10 percent of software applications. In fact, most software applications utilize multiple representation methods because none is fully adequate for all business and technical purposes. Therefore, combinations of text and graphical representations in the form of use-cases, flowcharts, and other diagrams are the most common approach.
In this chapter, we will be dealing with some of the many variations in methods for handling software requirements, business analysis, architecture, and design.
Software Requirements
If software engineering is to become a true profession rather than an art form, software engineers have a responsibility to help customers define requirements in a thorough and effective manner.

It is the job of a professional software engineer to insist on effective requirements methods such as joint application design (JAD), quality function deployment (QFD), and requirements inspections. It is also the responsibility of software engineers to alert clients to any potentially harmful requirements.
Far too often the literature on software requirements is passive and makes the incorrect assumption that users will be 100 percent effective in identifying requirements. This is a dangerous assumption. User requirements are never complete, and they are often wrong. For a software project to succeed, requirements need to be gathered and analyzed in a professional manner, and software engineering is the profession that should know how to do this well.
It should be the responsibility of the software engineers to insist that proper requirements methods be used. These include data mining of legacy applications, joint application design (JAD), quality function deployment (QFD), prototypes, and requirements inspections. Another method that benefits requirements is the use of embedded users (as with Agile development). Use-cases might also be recommended.
The users of software applications are not software engineers and cannot be expected to know optimal ways of expressing and analyzing requirements. Ensuring that requirements collection and analysis are at state-of-the-art levels devolves to the software engineering team.
Today in 2009, almost half of all major applications are replacements for aging legacy applications, some of which have been in use for more than 25 years. Unfortunately, legacy applications seldom have current specifications or requirements documents available.
Due to the lack of available information about the features and functions of the prior legacy application, a new form of requirements analysis is coming into being. This new form starts with data mining of the legacy application in order to extract business rules and algorithms. As it happens, data mining can also be used to gather data for sizing, in terms of both function points and code statements.
Structure and Contents of Software Requirements
Software requirements obviously describe the key features and functions that a software application will contain. But requirements specifications also serve other business purposes. For example, the requirements should also discuss any limits or constraints on the software, such as performance criteria, reliability criteria, security criteria, and the like.

The costs and schedules of building software applications are strongly influenced by the size of the application in terms of the total requirements set that will be implemented. Therefore, requirements are the primary basis of ascertaining software size.
By fortunate coincidence, the structure of the function point metric is a good match to the fundamental issues that should be included in software requirements. In chronological order, these seven fundamental topics should be explored as part of the requirements gathering process:
1. The outputs that should be produced by the application
2. The inputs that will enter the software application
3. The logical files that must be maintained by the application
4. The entities and relationships that will be in the logical files of the application
5. The inquiry types that can be used with the application
6. The interfaces between the application and other systems
7. Key algorithms that must be present in the application
Five of these seven topics are the basic elements of the International Function Point Users Group (IFPUG) function point metric. The fourth topic, "entities and relationships," is part of the British Mark II function point metric and the newer COSMIC function point. The seventh topic, "algorithms," is a standard factor of the feature point metric, which added a count of algorithms to the five basic function point elements used by IFPUG.
The similarity between the topics that need to be examined when gathering requirements and those used by the functional metrics makes the derivation of function point totals during requirements a fairly straightforward task. In fact, automated creation of function point size from requirements has been accomplished experimentally, although this is not yet commonplace.
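To illustrate the derivation, here is a minimal sketch assuming a simple inventory of the five IFPUG elements and the standard "average complexity" weights (4, 5, 4, 10, and 7); a real count would rate each input, output, inquiry, file, and interface individually as low, average, or high. The counts shown are hypothetical.

```python
# Sketch: unadjusted IFPUG function points from the five basic elements.
# Weights are the standard IFPUG average-complexity weights.
IFPUG_AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each element count times its average-complexity weight."""
    return sum(weight * counts.get(element, 0)
               for element, weight in IFPUG_AVERAGE_WEIGHTS.items())

# Hypothetical requirements inventory for a small application:
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 4,
}
print(unadjusted_function_points(counts))  # 80 + 75 + 40 + 80 + 28 = 303
```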
However, 30 additional topics also need to be explored and decided during the requirements phase. Some of these are nonfunctional requirements, and some are business requirements needed to determine whether funding should be provided for the application. These additional topics include:
1. The size of the application in function points and source code
2. The schedule of the application from requirements to delivery
3. The staffing of the development team, including key specialists
4. The cost of the application by activity and also in terms of cost per function point
5. The business value of the application and return on investment (ROI)
6. The nonfinancial value, such as competitive advantages and customer loyalty
7. The major risks facing the application, that is, termination, delays, overruns, and so on
8. The features of competitive applications by business rivals
9. The method of delivery, such as SOA, SaaS, disks, downloads, and so on
10. The supply chain of the application, or related applications upstream or downstream
11. The legacy requirements derived from older applications being replaced
12. The laws and regulations that impact the application (i.e., tax laws, privacy, etc.)
13. The quality levels in terms of defects, reliability, and ease-of-use criteria
14. The error-handling features in case of user errors or power outages, and so on
15. The warranty terms of the application and responses to warranty claims
16. The hardware platform(s) on which the application will operate
17. The software platform(s), such as operating systems and databases
18. The nationalization criteria, or the number of foreign language versions
19. The security criteria for the application and its companion databases
20. The performance criteria, if any, for the application
21. The training requirements or form of tutorial materials that may be needed
22. The installation procedures for starting and initializing the application
23. The reuse criteria for the application in terms of both reused materials going into the application and also whether features of the application may be aimed at subsequent reuse by downstream applications
24. The use cases or major tasks users are expected to be able to perform via the application
25. The control flow or sequence of information moving through the application
26. Possible future requirements for follow-on releases
27. The hazard levels of any requirements that might be potentially "toxic"
28. The life expectancy of the application in terms of service life once deployed
29. The projected total cost of ownership (TCO) of the application
30. The release frequency for new features and repairs (annually, monthly, etc.)
The seven primary topics and the 30 supplemental topics are not the only items that need to be examined during requirements, but none of these should be omitted, since they can all significantly affect software projects.

Most of these 37 topics are needed for many different kinds of applications: commercial packages, in-house applications, outsource applications, defense projects, systems software, and embedded applications.
Statistical Analysis of Software Requirements
From analyzing thousands of software applications in hundreds of companies, the author has noted some basic facts about software requirements.

As software applications grow larger, the volume of software requirements also grows larger. However, the growth in requirements cannot keep pace with the growth of the software itself. As a result, the larger the application, the less complete the requirements are.
The fact that software requirements are incomplete for large software applications leads to the phenomenon of continuous requirements change at rates between 1 percent and 3 percent per calendar month.

Requirements may contain hundreds of bugs or defects. These are difficult to remove via testing, but can be found by means of formal requirement inspections.
Requirements are translated into designs, and designs are translated into code. A study by the author at IBM found that at each translation point, 10 percent to 15 percent of the requirements do not make it downstream into the next stage, at least initially.
In addition to creeping requirements instituted by users, which presumably have some business value, a surprising number of changes are added by developers, without any formal requirements or even any apparent need on the part of users. For some applications, more than 7 percent of the delivered functions were added by the developers, sometimes without the users even being aware of them. The topic of spontaneous and unsolicited change is seldom discussed in the requirements literature. (When developers were asked why they did this, the most common response was "I thought it might be useful.")
In aggregate, about 15 percent of initial user requirements are missing from the formal requirements documents and show up as creeping requirements later on. At each translation point from requirements to some other deliverable such as design or code, about 10 percent of the requirements accidentally drop out and have to be added back in later or in subsequent releases. As mentioned, developers spontaneously add features without any user requirements asking for them, and sometimes even without the knowledge of the users. Perhaps 7 percent of delivered features are in the form of unsolicited developer-added features that lack any customer requirements, although some of these may turn out to be useful. In addition to unplanned growth and unplanned loss of requirements, some requirements are toxic or harmful, while many may contain errors ranging from high severity to low severity.
In theory, some kinds of requirements such as executable English could use static analysis or some form of automated validation, but to date this approach is experimental.
Some software requirements may be toxic or cause serious harm if they are not removed. A prime example of a toxic requirement is the famous Y2K problem. Another example of a toxic requirement is the file-handling protocol of the Quicken financial application: if backup files are opened instead of being restored, then data integrity can be lost. A very common toxic requirement in many applications is the failure to accommodate people with three names. Yet another toxic requirement is the poor error-handling routines in many software applications, which have become the preferred route for virus and spyware infections. The bottom line is that the traditional definition of quality as "conformance to requirements" is not safe because of the presence of so many serious toxic requirements.
At this point it is interesting to look at information about the size of software requirements, and also about the numbers of bugs or defects that might be in software requirements.
TABLE 7-1  Requirements Pages per Function Point

Function    English   Exec.     Use-     UML        User
Points      Text      English   Cases    Diagrams   Stories   Average
10          0.40      0.35      0.50     1.00       0.35      0.52
100         0.50      0.45      0.60     1.10       0.40      0.61
1,000       0.55      0.50      0.70     1.15       0.45      0.67
10,000      0.40      0.45      0.60     0.80       0.00      0.56
100,000     0.30      0.40      0.50     0.75       0.00      0.49
Average     0.43      0.43      0.58     0.96       0.40      0.56
Table 7-1 shows the approximate size of software requirements in terms of pages per function point. The metric used is that of the International Function Point Users Group (IFPUG), counting rules version 4.2. Five different requirement "languages" are shown in Table 7-1.
Note that for Table 7-1 and the other tables in this chapter, no data is available for "user stories" for applications in the 10,000 to 100,000 function point range. This is because the Agile methods are not used for such large applications, or at least have not reported any results to benchmark organizations.
The most important fact that Table 7-1 reveals is that the size of requirements peaks at about 1000 function points. For large applications, the volume of paper documents would grow too large to read if 100 percent of requirements were documented.
Table 7-2 extends the results from Table 7-1 and shows the approximate total quantity of pages in the requirements for each of the five methods.

TABLE 7-2  Requirement Pages Produced by Application Size

Function    English   Exec.     Use-     UML        User
Points      Text      English   Cases    Diagrams   Stories   Average
10          4         4         5        10         4         5
100         50        45        60       110        40        61
1,000       550       500       700      1,150      450       670
10,000      4,000     4,500     6,000    8,000      0         4,500
100,000     30,000    40,000    50,000   75,000     0         48,750
Average     6,921     9,010     11,353   16,854     165       8,860

As can be seen, large systems have an enormous volume of pages for requirements, and yet they are not complete. In fact, if requirements were 100 percent complete for a large application in the 100,000–function point size range, it would take more than 2500 days, or almost seven years, to read them! It is obvious that such a mass of paper is unmanageable.
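The arithmetic behind the seven-year figure can be reproduced with a small sketch; the 0.50 pages per function point value is in the range of Table 7-1, while the careful-reading rate of about 20 pages per day is an assumption chosen here to match the text's estimate.

```python
# Sketch: total requirement pages and the effort to read them.
def requirement_pages(function_points: float, pages_per_fp: float) -> float:
    return function_points * pages_per_fp

def reading_days(pages: float, pages_per_day: float = 20.0) -> float:
    # pages_per_day is an assumed careful-reading rate, not a measured value.
    return pages / pages_per_day

pages = requirement_pages(100_000, 0.50)  # ~50,000 pages of requirements
print(pages, reading_days(pages))         # 50000.0 pages, 2500.0 days (~7 years)
```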
Table 7-3 extends the logic derived from Table 7-2 and shows the approximate completeness of software requirements.
TABLE 7-3  Requirements Completeness by Software Size

Function    English   Exec.     Use-     UML        User
Points      Text      English   Cases    Diagrams   Stories   Average
10          98.00%    99.00%    96.00%   99.00%     93.00%    97.00%
100         95.00%    96.00%    95.00%   97.00%     90.00%    94.60%
1,000       90.00%    93.00%    90.00%   95.00%     87.00%    91.00%
10,000      77.00%    90.00%    82.00%   90.00%     0.00%     84.75%
100,000     62.00%    83.00%    74.00%   80.00%     0.00%     74.75%
Average     84.40%    92.20%    87.40%   92.20%     90.00%    88.42%
As can be seen from Table 7-3, completeness of requirements declines as software size goes up. This explains why creeping requirements are endemic within the software industry. It is doubtful if any requirement method or language could really reach 100 percent for large applications.
Table 7-4 shows the approximate numbers of requirements defects per function point observed in applications of various sizes, using various languages.

TABLE 7-4  Requirements Defects per Function Point

Function    English   Exec.     Use-     UML        User
Points      Text      English   Cases    Diagrams   Stories   Average
10          0.52      0.46      0.65     1.30       0.48      0.68
100         0.57      0.50      0.80     1.46       0.53      0.77
1,000       0.60      0.55      0.98     1.61       0.63      0.87
10,000      0.70      0.60      1.20     1.60       0.00      1.03
100,000     0.72      0.65      1.10     1.65       0.00      1.03
Average     0.62      0.55      0.95     1.52       0.55      0.88
While the size of software requirement specifications goes down as application size goes up, the same is not true for requirements bugs or defects. The larger the application, the more requirement bugs there are likely to be.
However, note that these tables show only approximate average results. Many defect prevention methods such as joint application design (JAD), prototypes, and participation in formal inspections can lower these typical results by more than 60 percent.
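A minimal sketch of that arithmetic follows, using the "English text" column of Table 7-4 as the defect density and taking the "more than 60 percent" prevention figure at face value; both the column choice and the exact reduction factor are assumptions for illustration.

```python
# Sketch: expected requirements defects by application size.
# Densities are Table 7-4's English-text column (defects per function point).
DEFECTS_PER_FP = {10: 0.52, 100: 0.57, 1_000: 0.60, 10_000: 0.70, 100_000: 0.72}

def expected_defects(size_fp: int, prevention: bool = False) -> float:
    defects = DEFECTS_PER_FP[size_fp] * size_fp
    # Assume prevention methods (JAD, prototypes, inspections) remove ~60%.
    return defects * 0.40 if prevention else defects

print(expected_defects(10_000))                   # 7000.0 defects
print(expected_defects(10_000, prevention=True))  # 2800.0 defects
```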
Table 7-5 extends the results of Table 7-4 and shows the approximate numbers of requirements defects that are likely to occur by application size. For large applications, the numbers are alarming and cry out for using state-of-the-art defect prevention and removal methods.

TABLE 7-5  Requirements Defects by Application Size

Function    English   Exec.     Use-      UML        User
Points      Text      English   Cases     Diagrams   Stories   Average
10          5         5         7         13         5         7
100         57        50        80        146        53        77
1,000       600       550       980       1,610      630       874
10,000      7,000     6,000     12,000    16,000     0         10,250
100,000     72,000    65,000    110,000   165,000    0         103,000
Average     15,932    14,321    24,613    36,554     229       22,842
Note that these defects are of all severity levels. Only a small fraction would generate serious problems. But with thousands of latent defects in requirements, it is obvious that formal inspections and other methods of requirement defect removal should be standard practices for all applications larger than 1000 function points.
Because the numbers in Table 7-5 are so large and alarming, Table 7-6 shows only the most serious or "toxic" defects that are likely to occur.

TABLE 7-6  Toxic Requirements that Cause Serious Harm

Function    English   Exec.     Use-     UML        User
Points      Text      English   Cases    Diagrams   Stories   Average
10          0         0         0        0          0         0
100         0         0         0        0          0         0
1,000       1         1         2        4          1         2
10,000      15        14        25       40         0         19
100,000     175       150       300      400        0         205
Average     38        33        65       89         0         45
The defects shown in Table 7-6 are harmful problems such as the Y2K problem that cause problems for users and that trigger expensive repairs when they finally surface and are identified.
The bottom line is that requirements cannot be complete for large applications above 10,000 function points. At least, they never have been complete. In addition, there will be requirements defects, and a fraction of requirements defects will cause serious harm. Much more study is needed of requirements defects, defect prevention, and defect removal.
One topic requiring additional study is how many people are involved in the requirements process. Customers have "assignment scopes" of about 5000 function points. That reflects the normal quantity of software features that one user knows well enough to define what is needed. The range of user knowledge runs from about 1000 function points up to perhaps 10,000 function points.

The assignment scope of systems or business analysts is larger, and runs up to about 50,000 function points, although average amounts are perhaps 15,000 function points.
These typical assignment scopes mean that for a large system in the 50,000–function point range, about ten customers will need to be interviewed by one systems analyst. In other words, the ratio of business analysts to customers is about 1-to-10.
These ratios have implications for the Agile approach of embedding users in development teams. Since most Agile projects are small (fewer than 1500 function points), a single user can suffice to express most of the requirements. However, for large applications, more users are necessary.
Another topic that needs more work is the rate at which requirements can be gathered and analyzed. If you assume a typical joint application design (JAD) session contains four user representatives and two business analysts, they can usually discuss and document requirements at a rate of perhaps 1000 function points per day. It should be noted that requirements specifications average perhaps 0.5 page per function point using English text, and perhaps 0.75 page using the UML.
A single user embedded within an Agile development team can explain requirements at a rate of perhaps 200 function points per day. User stories are compact and average about 0.3 page per function point. However, they are not complete, so verbal interchange between the user and the development team is an integral part of Agile requirements.
Creating Taxonomies of Reusable Software Requirements
For purposes of benchmarks, feature analysis, and statistical analysis of productivity and quality, it is useful to record basic information about software applications. Surprisingly, the software industry does not have a standard taxonomy that allows applications to be uniquely identified. To fill this gap, the author has developed a taxonomy that allows software applications to be analyzed statistically with little ambiguity.
For identifying software for statistical purposes and for studying software requirements by industry, it is useful to know certain basic facts such as the country of origin and the industry. To record these facts, standard codes can be used:
Country code          =  1           (United States)
Region code           =  06          (California)
City code             =  408         (San Jose)
Industry code         =  1569        (Telecommunications)
CMMI level            =  3           (Controlled and repeatable)
Starting date         =  04/20/2009
Plan completion date  =  05/10/2011
True completion date  =  09/25/2011
These codes are from telephone area codes, ISO codes, and the North American Industry Classification (NAIC) codes of the Department of Commerce. They do not affect sizing algorithms, but provide valuable information for benchmarks and international economic studies. This is because software costs vary widely by country, geographic region, and industry. For historical data to be meaningful, it is desirable to record all of the factors that influence costs, schedules, requirements, and other factors.
The entry for "CMMI level" refers to the famous Capability Maturity Model Integration developed by the Software Engineering Institute (SEI).
After location and industry identification, the taxonomy consists of seven topics:

1. Project nature
2. Project scope
3. Project class
4. Project type
5. Problem complexity
6. Code complexity
7. Data complexity
In comparing one software project against another, it is important to know exactly what kinds of software applications are being compared. This is not as easy as it sounds. The industry has long lacked a standard taxonomy of software projects that can be used to identify projects in a clear and unambiguous fashion.
By means of multiple-choice questions, the taxonomy shown here condenses more than 35 million variations down to a small number of numeric data items that can easily be used for statistical analysis. The main purpose of a taxonomy is to provide fundamental structures that improve the ability to do research and analysis.
The taxonomy shown here has been in continuous use since 1984. The taxonomy is explained in several of the author's prior books, including Estimating Software Costs (McGraw-Hill, 2007) and Applied Software Measurement (McGraw-Hill, 2008), as well as in older editions of the same books and also in monographs. The taxonomy is also embedded in software estimating tools designed by the author. The elements of the taxonomy follow:
PROJECT NATURE: __
1. New program development
2. Enhancement (new functions added to existing software)
3. Maintenance (defect repair to existing software)
4. Conversion or adaptation (migration to new platform)
5. Reengineering (re-implementing a legacy application)
6. Package modification (revising purchased software)
PROJECT SCOPE: __
1. Algorithm
2. Subroutine
3. Module
4. Reusable module
5. Disposable prototype
6. Evolutionary prototype
7. Subprogram
8. Stand-alone program
9. Component of a system
10. Release of a system (other than the initial release)
11. New departmental system (initial release)
12. New corporate system (initial release)
13. New enterprise system (initial release)
14. New national system (initial release)
15. New global system (initial release)
PROJECT CLASS: __
1. Personal program, for private use
2. Personal program, to be used by others
3. Academic program, developed in an academic environment
4. Internal program, for use at a single location
5. Internal program, for use at multiple locations
6. Internal program, for use on an intranet
7. Internal program, developed by external contractor
8. Internal program, with functions used via time sharing
9. Internal program, using military specifications
10. External program, to be put in public domain
11. External program, to be placed on the Internet
12. External program, leased to users
13. External program, bundled with hardware
14. External program, unbundled and marketed commercially
15. External program, developed under commercial contract
16. External program, developed under government contract
17. External program, developed under military contract
PROJECT TYPE: __
1. Nonprocedural (generated, query, spreadsheet)
2. Batch application
3. Web application
4. Interactive application
5. Interactive GUI applications program
6. Batch database applications program
7. Interactive database applications program
8. Client/server applications program
9. Computer game
10. Scientific or mathematical program
11. Expert system
12. Systems or support program, including "middleware"
13. Service-oriented architecture (SOA)
14. Communications or telecommunications program
15. Process-control program
16. Trusted system
17. Embedded or real-time program
18. Graphics, animation, or image-processing program
19. Multimedia program
20. Robotics, or mechanical automation program
21. Artificial intelligence program
22. Neural net program
23. Hybrid project (multiple types)
PROBLEM COMPLEXITY: ________
1. No calculations or only simple algorithms
2. Majority of simple algorithms and simple calculations
3. Majority of simple algorithms plus a few of average complexity
4. Algorithms and calculations of both simple and average complexity
5. Algorithms and calculations of average complexity
6. A few difficult algorithms mixed with average and simple
7. More difficult algorithms than average or simple
8. A large majority of difficult and complex algorithms
9. Difficult algorithms and some that are extremely complex
10. All algorithms and calculations extremely complex
CODE COMPLEXITY: _________
1. Most "programming" done with buttons or pull-down controls
2. Simple nonprocedural code (generated, database, spreadsheet)
3. Simple plus average nonprocedural code
4. Built with program skeletons and reusable modules
5. Average structure with small modules and simple paths
6. Well structured, but some complex paths or modules
7. Some complex modules, paths, and links between segments
8. Above average complexity, paths, and links between segments
9. Majority of paths and modules are large and complex
10. Extremely complex structure with difficult links and large modules
DATA COMPLEXITY: _________
1. No permanent data or files required by application
2. Only one simple file required, with few data interactions
3. One or two files, simple data, and little complexity
4. Several data elements, but simple data relationships
5. Multiple files and data interactions of normal complexity
6. Multiple files with some complex data elements and interactions
7. Multiple files, complex data elements and data interactions
8. Multiple files, majority of complex data elements and interactions
9. Multiple files, complex data elements, many data interactions
10. Numerous complex files, data elements, and complex interactions
As most commonly used for either measurement or sizing, users will provide a series of integer values to the factors of the taxonomy, as follows:

PROJECT NATURE        1
PROJECT SCOPE         8
PROJECT CLASS         11
PROJECT TYPE          15
PROBLEM COMPLEXITY    5
DATA COMPLEXITY       6
CODE COMPLEXITY       2
Although integer values are used for nature, scope, class, and type, up to two decimal places can be used for the three complexity factors. Thus, permissible values might also be

PROJECT NATURE        1
PROJECT SCOPE         8
PROJECT CLASS         11
PROJECT TYPE          15
PROBLEM COMPLEXITY    5.25
DATA COMPLEXITY       6.50
CODE COMPLEXITY       2.45
The combination of numeric responses to the taxonomy provides a unique "pattern" that facilitates sizing, estimating, measurement, benchmarks, and statistical analysis of features and requirements. The taxonomy makes it easy to predict the outcome of a future project by examining the results of older projects that have identical or similar patterns using the taxonomy. As it happens, applications with identical patterns are usually of the same size in terms of function points (but not source code) and often have similar results.
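A minimal sketch of how the taxonomy can serve as a lookup key follows; the record fields mirror the seven taxonomy topics above, while the history store, project name, and size are hypothetical.

```python
# Sketch: the taxonomy as a hashable record whose "pattern" keys a
# benchmark history, so a future project can be matched against
# completed projects with identical taxonomy values.
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class Taxonomy:
    nature: int                # 1 = new development, 2 = enhancement, ...
    scope: int                 # 1 = algorithm ... 15 = new global system
    project_class: int         # class codes 1-17
    project_type: int          # type codes 1-23
    problem_complexity: float  # 1.00-10.00, two decimals permitted
    code_complexity: float
    data_complexity: float

history: dict = {}

def record(tax: Taxonomy, project: str, size_fp: int) -> None:
    history.setdefault(astuple(tax), []).append((project, size_fp))

def similar_projects(tax: Taxonomy) -> list:
    """Completed projects sharing the exact taxonomy pattern."""
    return history.get(astuple(tax), [])

record(Taxonomy(1, 8, 11, 15, 5.25, 2.45, 6.50), "legacy billing", 1480)
print(similar_projects(Taxonomy(1, 8, 11, 15, 5.25, 2.45, 6.50)))
```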
Not only are applications that share common patterns close to the same size, but they also tend to have very similar feature sets and to have implemented very similar requirements. Therefore, placing an application on a taxonomy such as the one described here could be a step toward creating families of reusable requirements that can serve dozens or even hundreds of applications. The same taxonomy can assist in assembling the feature sets for systems using the service-oriented architecture (SOA).
When demographic information is included, all the factors in the taxonomy are as follows:
COUNTRY CODE                   1        (United States)
REGION CODE                    06       (California)
CITY CODE                      408      (San Jose)
INDUSTRY CODE                  1569     (Telecommunications)
CMMI LEVEL                     3        (Controlled and repeatable)
STARTING DATE                  04/20/2009
PLAN COMPLETION DATE           05/10/2011
TRUE COMPLETION DATE           09/25/2011
SCHEDULE SLIP                  4.25     (Calendar months)
INITIAL SIZE                   1000     (Function points)
REUSED SIZE                    200      (Function points)
UNPLANNED GROWTH               300      (Function points)
DELIVERED SIZE                 1500     (Function points)
INITIAL SIZE (SOURCE CODE)     52,000   (Logical statements)
REUSED SIZE                    10,400   (Logical statements)
UNPLANNED GROWTH               15,600   (Logical statements)
DELIVERED SIZE (SOURCE CODE)   62,400   (Logical statements)
PROGRAMMING LANGUAGE(S)        65       (Java)
REUSED CODE                    65       (Java)
PROJECT NATURE                 1        (New application)
PROJECT SCOPE                  8        (Stand-alone application)
PROJECT CLASS                  11       (Expert system)
PROJECT TYPE                   15       (External, unbundled)
PROBLEM COMPLEXITY             5.25     (Mixed, but high complexity)
DATA COMPLEXITY                6.50     (Mixed, but high complexity)
CODE COMPLEXITY                2.45     (Low complexity)
The taxonomy provides an unambiguous pattern that can be used both for classifying historical data and for sizing and estimating software projects. This is because software applications that share the same pattern also tend to be of the same size when measured using IFPUG function point metrics.
When applications that share the same pattern have differences in productivity or quality, that indicates differences in the effectiveness of methods or differences in the abilities of the development team. In any case, the taxonomy makes statistical analysis more reliable because it prevents "apples to oranges" comparisons.
Software applications will not be of the same size using lines of code (LOC) metrics due to the fact that there are more than 700 programming languages in existence. Also, a majority of software applications are coded in more than one programming language.
Software applications of the same size may vary widely in costs and schedules for development due to the varying skills of the development teams, the programming languages used, the development tools and methods utilized, and also the industry and geographic location of the developing organization. Although size is a required starting point for estimating software applications, it is not the only information needed.
The taxonomy can be used well before an application has started its requirements. Since the taxonomy contains information that should be among the very first topics known about a future application, it is possible to use the taxonomy months before requirements are finished and even some time before they begin.
It is also possible to use the taxonomy on legacy applications that have been in existence for many years. It is often useful to know the function point totals of such applications, but normal counting of function points may not be feasible since the requirements and specifications are seldom updated and may not be available.
The taxonomy can also be used with commercial software, and indeed with any form of software, including classified military applications where there is sufficient public or private knowledge of the application to assign values to the taxonomy tables.
In theory, the taxonomy could be extended to include other interesting topics such as development methods, programming languages, tools, defect removal, and many others. However, two problems make this extension difficult:

1. New languages, tools, and methods occur every month, so there is no stability.
2. A majority of applications use multiple languages, methods, and tools.
However, to show what an extended taxonomy might look like, following is an example of the basic taxonomy extended to include development methods:
COUNTRY CODE                   1        (United States)
REGION CODE                    06       (California)
CITY CODE                      408      (San Jose)
INDUSTRY CODE                  1569     (Telecommunications)
CMMI LEVEL                     3        (Controlled and repeatable)
STARTING DATE                  04/20/2009
PLAN COMPLETION DATE           05/10/2011
TRUE COMPLETION DATE           09/25/2011
SCHEDULE SLIP                  4.25     (Calendar months)
INITIAL SIZE                   1000     (Function points)
REUSED SIZE                    200      (Function points)
UNPLANNED GROWTH               300      (Function points)
DELIVERED SIZE                 1500     (Function points)
INITIAL SIZE (SOURCE CODE)     52,000   (Logical statements)
REUSED SIZE                    10,400   (Logical statements)
UNPLANNED GROWTH               15,600   (Logical statements)
DELIVERED SIZE (SOURCE CODE)   62,400   (Logical statements)
PROGRAMMING LANGUAGE(S)        65       (Java)
REUSED CODE                    65       (Java)
PROJECT NATURE                 1        (New application)
PROJECT SCOPE                  8        (Stand-alone application)
PROJECT CLASS                  11       (Expert system)
PROJECT TYPE                   15       (External; unbundled)
PROBLEM COMPLEXITY             5.25     (Mixed but high complexity)
DATA COMPLEXITY                6.50     (Mixed but high complexity)
CODE COMPLEXITY                2.45     (Low complexity)
SIZING METHOD                  1        (IFPUG function points)
ESTIMATING METHODS             3        (KnowledgePlan)
MANAGEMENT REPORTING           2        (Automated insight)
RISK ANALYSIS                  0        (Not used)
FINANCIAL VALUE ANALYSIS       1        (Used)
INTANGIBLE VALUE ANALYSIS      0        (Not used)
REQUIREMENTS GATHERING         1        (Joint application design)
REQUIREMENTS LANGUAGE(S)       5        (Hybrid: use-cases, English)
QUALITY REQUIREMENTS           1        (QFD)
SOFTWARE QUALITY ASSURANCE     1        (Formal SQA involvement)
DEVELOPMENT METHOD             3        (Team Software Process)

PRETEST DEFECT REMOVAL
REQUIREMENTS INSPECTION        1        (Used)
DESIGN INSPECTION              1        (Used)
CODE INSPECTION                0        (Not used)
STATIC ANALYSIS                1        (Used)
SIX SIGMA                      0        (Not used)
IV & V                         0        (Not used)
AUTOMATED TESTING              0        (Not used)

TEST STAGES
UNIT TEST                      1        (Used)
NEW FUNCTION TEST              1        (Used)
REGRESSION TEST                1        (Used)
COMPONENT TEST                 1        (Used)
PERFORMANCE TEST               1        (Used)
SECURITY TEST                  0        (Not used)
INDEPENDENT TEST               0        (Not used)
SYSTEM TEST                    1        (Used)
ACCEPTANCE TEST                1        (Used)
Although the basic taxonomy has been in continuous use since 1984, the extended taxonomy that shows tools, languages, and methods is hypothetical. It is included because such an extended taxonomy would facilitate estimates, benchmark analysis, statistical studies, and multiple regression analysis to show the effectiveness of various methods and practices.
By converting millions of alternatives into numeric data by means of multiple-choice questions, taxonomies facilitate statistical analysis. Also, various "patterns" among the alternatives can easily be evaluated in terms of improving or degrading productivity and quality, or exploring reusable requirements. The software industry should invest more energy into development of useful taxonomies along the lines used by other sciences such as biology, linguistics, physics, and chemistry.
Software Requirements Methods and Practices
There are numerous variations in how software requirements are collected, analyzed, and converted into software. Following are descriptions and some results noted for a number of common variations. They are discussed in alphabetical order.
Agile requirements with embedded users   An interesting idea that has emerged from the Agile methods is that of a full-time user representative as part of the development team. The role of these embedded users is to provide the requirements for new applications in fairly small doses that can immediately be implemented and put to use. Typically, segments between 5 percent and 10 percent of the total requirements are defined and built during each "sprint." This is equivalent to 40 to 200 function points per sprint.
This method of full-time users has proven to be effective for small applications where one person can actually express the needs of all users. It is not effective for applications such as Microsoft Office with millions of users, because no one can speak for the needs of all users. Neither is this method effective for certain kinds of embedded applications such as fuel-injection controls.
Including users with development teams is an innovative approach that works well once the limits are understood. See also "Focus Groups," "Data Mining for Legacy Requirements," and "Joint Application Design (JAD)."
Creeping requirements   Changes taking place in requirements after a formal requirements phase are a normal occurrence. Surprisingly, many applications are not effective in dealing with requirements changes. Creeping requirements are calculated by measuring the function point total for an application at the end of the requirements phase, and then doing another function point count when the application is delivered, including all requirements that surfaced after the requirements phase. This form of measurement indicates that creeping requirements grow at about 2 percent per calendar month during the subsequent design phase and perhaps 1 percent per calendar month during much of the coding phase. After the midpoint of the coding phase, requirements changes are redirected into future releases.
Typical growth patterns for a "normal" application of 1500 function points would be in the range of 30 function points of creeping requirements per month during design and 15 function points of growth per month during coding. Since design should last two months and coding eight months, total growth in terms of creeping requirements would be 60 function points during design and 120 function points during coding, or 180 function points in all. Thus, the application with 1500 function points defined at the end of the requirements phase would be delivered as an application of 1680 function points.
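A minimal sketch of this worked example follows; the growth rates and phase lengths (2 percent per month for two months of design, 1 percent per month for eight months of coding) come straight from the text, while the function signature itself is just illustrative.

```python
# Sketch: delivered size after linear requirements creep, applying the
# text's rates to the size at the end of the requirements phase.
def delivered_size(requirements_fp: float,
                   design_months: int = 2, coding_months: int = 8,
                   design_rate: float = 0.02, coding_rate: float = 0.01) -> float:
    creep = requirements_fp * (design_rate * design_months +
                               coding_rate * coding_months)
    return requirements_fp + creep

print(delivered_size(1500))  # 1500 + 60 + 120 = 1680.0 function points
```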
Note that larger applications with longer schedules obviously have much larger totals of requirements creep.
Considering the same application in an Agile context, each sprint might include 150 to 250 function points. The total size at delivery would still be about 1680 function points, but the application is developed in stages.
The most effective way to deal with requirements creep is to use methods that reduce unplanned creep and also to use methods that validate changes. Joint application design (JAD), executable English, and prototypes slow down creep. Requirements inspections and change control boards can validate changes. The Agile method of embedding users with developers increases creep up to 10 percent per month, but this is benign because the Agile teams are geared up for such growth.
There are several problems associated with creeping requirements outside of the Agile domain: (1) they have higher defect potentials than original requirements; (2) they cause schedule delays and cost overruns; (3) they are frequent causes of litigation for applications developed under contract or for outsourced applications.
Data mining for legacy requirements   As of 2009, more than half of "new" applications are replacements for aging legacy software applications. Some of these legacy applications may have been in continuous use for more than 25 years. Unfortunately, the software industry is lax in keeping requirements and design documents up to date, so for a majority of legacy applications, there is no easy way to find out what requirements need to be transferred to the new replacement.
However, some automated tools can examine the source code of legacy applications and extract latent requirements embedded in the code. These hidden requirements can be assembled for use in the replacement application. They can also be used to calculate the size of the legacy application in terms of function points, and thereby can assist in estimating the new replacement application. Latent requirements can also be extracted manually using formal code inspections, but this is much slower than automated data mining.
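The following toy sketch illustrates the general idea only; it is not one of the commercial mining tools the text refers to. It scans legacy source files for conditional statements as candidate business rules for analyst review; the file extension, directory layout, and regular expression are all assumptions.

```python
# Toy sketch: collect conditional statements from legacy source files
# as candidate business rules for later analyst review.
import re
from pathlib import Path

CONDITIONAL = re.compile(r"\bIF\b", re.IGNORECASE)  # crude marker of a decision

def candidate_rules(source_dir: str, pattern: str = "*.cbl") -> list:
    rules = []
    for path in Path(source_dir).rglob(pattern):  # e.g., COBOL legacy sources
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            if CONDITIONAL.search(line):
                rules.append((str(path), lineno, line.strip()))
    return rules

# Hypothetical usage: print(len(candidate_rules("legacy/billing")))
```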
Executable English   Since many business rules can be expressed in terms of English (or other natural languages), it makes sense to attempt to automate a formal dialect of English that facilitates requirements analysis. This is not a new idea, since COBOL was intended to have similar capabilities. An organization called Internet Business Logic, headed by Dr. Adrian Walker, has such a dialect available and automation to support it. Examples and downloads are available to try out the method. The information on executable English occurs on several web sites, but the Microsoft Development Network is perhaps the best known. The URL is http://msdn.microsoft.com/en-us/library/cc169602.aspx.
However, additional study and data would be useful. Some unanswered questions exist about using executable English for "toxic" requirements such as the Y2K problem. There are no intrinsic barriers to expressing harmful requirements in executable English. Also, there are no side-by-side comparisons in terms of requirements costs, requirements defects, or requirement productivity rates between executable English and other methods. Finally, hybrid approaches that use a combination of executable English with other methods have not yet been fully examined.
In theory, it would be possible to run static analysis tools against requirements specifications written in executable English, assuming that the static analysis tools had parsers available. If so, finding logical problems and omissions in executable English might add value to static analysis tools such as Coverity, KlocWorks, XTRAN, and the like.
Automatic error detection in requirements and design created from executable English would help to eliminate serious classes of error that have long been difficult to deal with: incomplete and toxic requirements. A future merger of static analysis and executable English holds many interesting prospects for improving the quality of requirements analysis.
Focus groups   A focus group is an assembly of customers who are asked to participate in group discussions about the features and functions of new products. Focus groups usually range from perhaps 5 to more than 25 participants based on the demographic needs of the potential product. Focus groups may offer suggestions or even use working models and prototypes.
Focus groups have proven to be effective for products that are aimed at a mixture of diverse interests and many possible kinds of use. Focus groups are older than software and are frequently used for electronic devices, appliances, and other manufactured objects.
In a software context, focus groups are most effective for commercial software applications aimed at hundreds or thousands of users, where diversity is part of the application goals.
Functional and nonfunctional requirements   Software requirements come in two flavors: functional requirements and nonfunctional requirements. The term functional requirement is defined as a specific feature that a user wants to have included in a software application. Functional requirements add bulk to software applications, and in general every functional requirement can be measured in terms of function point metrics.

Nonfunctional requirements are defined as constraints or limits users care about with software applications, such as performance or reliability. Nonfunctional requirements may require work to achieve, but usually don't add size to the application. For example, "calculate sales tax on every invoice" is a functional requirement, while "respond to queries within two seconds" is a nonfunctional one.
The concept of joint application design   Joint application design (JAD) originated in IBM Toronto as a method for gathering the requirements for financial applications. The normal method of carrying out JAD is for a group of stakeholders or users to meet face-to-face with a group of software architects and designers in a formal setting with a moderator. The JAD sessions use standard requirement checklists to ensure that all relevant topics are covered. Often JAD meetings take place in off-site facilities. Between three and ten users meet with a group of between three and ten software architects and designers in a typical JAD event. The meetings usually run from 2 days to more than 15 days, based on the size of the application under discussion.
JAD sessions have more than 35 years of empirical data and rank as one of the most effective methods for gathering requirements for large applications. Use of JAD can lower creeping requirements levels down to perhaps one-half percent per month.
Pattern matching   As noted previously in the section of this chapter dealing with taxonomies, many applications are quite similar in terms of functional requirements. For example, consultants who work with many companies within industries such as finance, insurance, health care, and manufacturing quickly realize that every company within specific industries has the same kinds of software applications. Indeed, the similarity of applications within industries is what caused the creation of the enterprise resource planning (ERP) tools such as those marketed by SAP, Oracle, BAAN, and others.
However, as of 2009, the software industry lacks effective methods for identifying and reusing specific functional requirements between applications. To identify patterns and similarities, it would be desirable to have all functions expressed in standard fashions, and also to have a full taxonomy of major software features.
It would be possible for various kinds of static analysis tools to identify common patterns among multiple applications, and this would facilitate reuse of common features and functions. But so long as requirements are expressed using more than 30 flavors of graphical representation coupled with free-style English, automated pattern matching is difficult or impossible.
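If features were ever expressed in a standard form, scoring overlap between applications would be straightforward. The sketch below is a toy illustration of that premise, not an existing tool: features are plain strings, similarity is the Jaccard ratio of the two sets, and the feature names are invented.

```python
# Toy sketch: overlap between two applications' feature sets, assuming
# features have already been normalized into a standard textual form.
def jaccard(features_a: set, features_b: set) -> float:
    if not (features_a or features_b):
        return 0.0
    return len(features_a & features_b) / len(features_a | features_b)

claims  = {"validate policy", "compute payout", "print check", "audit trail"}
billing = {"validate account", "compute invoice", "print check", "audit trail"}
print(jaccard(claims, billing))  # 0.33 -> candidate shared, reusable features
```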
Prototypes   By definition, a software prototype is a partial model of a possible software application, but stripped down to a few key functions and algorithms. As a general rule, prototypes are about 10 percent of the size of completed applications. The reason for the small size of prototypes is that they are intended to be developed quickly. For example, a 10 percent prototype of a 10,000–function point application would amount to 1000 function points, which is fairly difficult to develop quickly.
The optimal size of applications where prototypes give the best results is around 1000 function points. A 10 percent prototype would be only 100 function points, which can be developed quickly.
Prototypes come in two flavors, disposable and evolutionary. As the name implies, a disposable prototype can be discarded once it has served its purpose. On the other hand, an evolutionary prototype will add more features and gradually evolve into a finished product.
Of the two flavors, disposable prototypes are safer. The shortcuts and poor quality control associated with evolutionary prototypes may lead to downstream security flaws, quality problems, and performance problems.
Prototypes of both flavors are very successful in reducing creeping requirements. As a rule of thumb, requirements creep for applications that use prototypes is less than one-half percent per calendar month, or less than half the creep of similar applications without prototypes.
Quality function deployment (QFD)   Like many effective quality control approaches, QFD originated in Japan. QFD was apparently first used circa 1972 by Mitsubishi for the quality requirements of a large ocean-going tanker. QFD is sometimes called "house of quality" because the QFD diagrams resemble a house with a peaked roof.
Although QFD originated for manufactured products, it has been used with software. Primarily QFD is used for embedded and systems software, such as aircraft and medical instruments. It is also used by computer companies such as Hewlett-Packard and IBM for both software and hardware products.
There are a number of books and reports on QFD. Since learning to use QFD and deploying it successfully takes more than a week, additional information is needed before starting a QFD program. A nonprofit QFD institute exists and is one source of additional data. As with the Six Sigma approach, QFD borrows some topics from martial arts and uses a "belt" system to indicate training levels. As with Six Sigma and many martial arts, a black belt is the highest level of achievement. (Of course, true martial arts practitioners object to this approach on the grounds that earning a black belt in a martial art takes years of training and practice. Earning a black belt in Six Sigma or QFD takes only a few months of training and requires very little in the way of hands-on experience.)
Requirements engineering   The topic of requirements engineering is a fairly new subset of software engineering. Requirements engineering attempts to add rigor to requirements gathering and analysis by using formal methods of elicitation and analysis, and also by creating models of the application and validating the requirements. That being said, requirements engineering is still evolving and is not yet a fully formed discipline.
Requirements engineering is most likely to be used for systems and embedded software that operates fairly complex physical devices. The reason is that systems and embedded software needs much more rigor and better quality to operate successfully than any other kinds of software.
While empirical data on requirements engineering is sparse in 2009, anecdotal evidence suggests that applications using requirements engineering methods tend to have somewhat lower levels of requirements defects and somewhat higher levels of requirements defect removal efficiency than similar applications with more casual requirements methods. However, organizations using requirements engineering also tend to be at or above level 3 on the CMMI, which by itself could explain the improvements.
Requirements engineering is synergistic with formal methods such as the Rational Unified Process (RUP) and the UML approach. It is also synergistic with the Team Software Process (TSP). Requirements engineering is not normally used with Agile projects because the rigor is antithetical to the Agile approach. It would not be easy to perform formal requirements engineering analysis on short user stories.
Requirement inspections   Formal inspections of software deliverables such as requirements originated within IBM in the early 1970s. Inspections are approaching 40 years of continued usage and remain one of the most effective defect removal methods, with the highest levels of defect removal efficiency. Formal inspections can top 85 percent in defect removal efficiency and seldom drop below 65 percent. By contrast, most forms of testing are below 35 percent in defect removal efficiency and seldom top 50 percent. Inspections are also good for defect prevention, since participants spontaneously avoid the same kinds of defects that the inspections find.
Inspections are team activities with well-defined roles for the moderator, the recorder, the inspectors, and the person whose work is being inspected. Substantial data and books exist on the topic of inspections. A new nonsoftware inspection organization was created in 2009, in place of the former Software Inspection and Review Organization (SIRO) group from the 1980s.
Requirements traceability   Once a specific requirement is defined, it must be included in design documents and source code as well. Test cases must also be created to ensure that the requirement has been correctly implemented. Training materials and user reference materials will probably have to be created to explain how to use the requirement.
"Requirements
traceability" refers to methods
that allow
requirements
to
be backtracked from other
deliverables such as code
and test cases.
In
theory, traceability in both
forward and backward
directions is pos-
sible
if each explicit requirement is assigned
a unique identifier or
serial
number.
Once assigned, the same
number is used in specifications,
code,
test
cases, and other
deliverables where the same
requirement is used.
Traceability is often performed via a matrix where every requirement is listed on one axis, and every document or code segment that contains the requirement is listed on the other axis. The intersection of the two axes indicates whether the requirement is present or not.
In theory, traceability is straightforward, but in practice, requirements traceability is complex and difficult, although a number of automated tools exist that can ease the problems.
Traceability
is most often used for
defense applications, systems
soft-
ware,
and embedded software, because
these applications often
have
serious
legal and liability issues
associated with them. Traceability
is
also
important for information
technology applications in the
wake of
the
Sarbanes-Oxley Act, which
enforces penalties for poor
governance
of
financial software constructed by
Fortune 500
companies.
However,
traceability is seldom used
for web applications,
entertainment software, applets for devices such as the iPhone, and
for software that
is
developed for internal use
within a single
company.
Much
of the literature on requirements
traceability deals with
trace-
ability
problems, which are numerous
and severe. In spite of more
than
100
tools that assert that
they can help in performing
requirements
traceability,
effective traceability remains
troublesome and
imperfect.
If
reusable requirements will be used in
multiple applications, it
is
obvious that traceability will
need to encompass
cross-application
traces
as well as single-application traces.
This implies a need for
3-D
traceability
matrixes.
Reusable requirements
Many software applications perform very similar functions within an industry. For
example, insurance
claims
processing
is very similar from company
to company. Order
process-
ing
and invoicing are very
similar within hundreds of companies
and
thousands
of applications. Almost all
applications need functions
for
error
handling.
In
theory, at least 60 percent to 75
percent of any business
application
could
probably be created from
standard reusable parts,
assuming those
parts
are certified to high levels
of reliability and are
readily available.
Unfortunately,
what is lacking is an effective
catalog of reusable
mate-
rials
that include reusable
requirements, design, code,
interfaces, and
test
cases. Obviously, common
features also need to be
traceable back
to
their origins, in
case of errors or
recalls.
Some
catalogs of reusable functions exist within specific domains
such
as
defense and avionics
software, and these are
samples of what is
needed.
However, there is no overall
industrywide catalog
available
circa
2009.
As
it happens, the taxonomy
discussed earlier in this
chapter could be
extended
downwards to describe individual or
specific reusable
require-
ments
or features. This is because almost
every function or feature
pro-
vided
by software applications needs to
supply similar services and
to
perform
similar actions. The topics that would compose a taxonomy of reusable functions would probably include the following (a sketch of a catalog record based on them follows the list):
1. The origin of the function
2. The creation date of the function
3. The version number of the function
4. The certification level of the function
5. The business purpose of the function
6. The name of the feature
7. The traceability serial number of the function
8. The programming language of the function
9. The links to the function's reusable test cases
10. The links to the function's reusable documentation
11. The links to related functions
12. The inputs to the function
13. The outputs from the function
14. The messages passed by the function
15. The messages received by the function
16. The entities and relationships within the function
17. The logical files used by the function
18. The inquiry types that can be made of the function
19. The interfaces with other functions if other than messages
20. The error-handling methods of the function
21. The security methods of the function
22. The algorithms that the function performs
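Such a catalog record might be sketched as follows; the field names are assumptions made for illustration, not an industry standard:

    # Sketch of a catalog record for a reusable function, following the
    # taxonomy topics above. Field names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReusableFunction:
        origin: str                   # topic 1
        creation_date: str            # topic 2
        version: str                  # topic 3
        certification_level: str      # topic 4
        business_purpose: str         # topic 5
        feature_name: str             # topic 6
        trace_serial: str             # topic 7, used for cross-application traces
        language: str                 # topic 8
        test_case_links: List[str] = field(default_factory=list)       # topic 9
        documentation_links: List[str] = field(default_factory=list)   # topic 10
        related_functions: List[str] = field(default_factory=list)     # topic 11
        # Topics 12 through 22 (inputs, outputs, messages, entities,
        # logical files, inquiry types, interfaces, error handling,
        # security methods, and algorithms) would be modeled similarly.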
Reusable
requirements would obviously
extend requirements
trace-
ability
into another dimension. Not
only would requirements have
to
be
traced backwards from the
code in a specific application,
but if many
applications
contain the same reusable
function, then
cross-application
traceability
would also be needed. This
would necessitate using
3-D
matrixes.
Security requirements deployment (SRD)
As the global recession intensifies,
attacks on software applications in
the form of worms,
viruses,
spyware,
keystroke loggers, and
denial of service attacks
are increasing
daily.
Most software engineers and
most quality assurance
personnel
are
not adequately trained in
security control techniques to be
fully
effective.
Most software application
customers and users are
almost
helpless.
The
idea of security requirements
deployment (SRD), which is
being
introduced
in this book, is to apply
the same rigor to security
require-
ments
as quality function deployment
(QFD) applies to quality
require-
ments.
However, there is an additional
factor that must be
addressed
for
SRD to be effective. It is necessary to
bring in at least one
top-gun
security
expert to meet with the
development team and the
user repre-
sentatives
during the SRD planning
sessions.
The
topics that are to be
addressed during SRD
planning sessions
include
conventional protection methods
such as physical
security
and
avoiding the most common
security vulnerabilities. However,
the
urgency
of the situation calls for
more advanced methods that
can actu-
ally
improve the resistance of
source code to outside
attack. This implies
getting up to speed with capability logic, restricting permissions, using languages such as E that create attack-resistant code, and adopting methods such as those used by the Google Caja approach. The
word
Caja
is
Spanish for "box" and
refers to methods developed by
Google
for
encapsulating JavaScript and HTML to
prevent outside agents
from
attacking
or modifying them.
In
addition, SRD sessions
should discuss security
inspections, using
static-analysis
tools that are optimized to
find security flaws, and
intro-
ducing
special security test
stages. It may also be
relevant to consider
the
employment of "ethical hackers" to
attempt to penetrate or
gain
access
to confidential information or seize
control of software.
The
seriousness of software security
flaws in today's world
requires
immediate
and urgent solutions. A
firewall combined with
antivirus
software
and antispyware software is no
longer sufficient to
provide
real
protection. In the modern
world, the attacks no longer
come from
malicious
amateurs, but some come
from well-funded and
well-trained
foreign
governments and from very
well-funded organized crime
syn-
dicates.
Unified modeling language (UML)
The UML is an integral
part of the Rational Unified
Process (RUP) that is now
owned
by
IBM. The history of the UML as a
merger of the concepts of
Grady
Booch,
James Rumbaugh, and Ivar
Jacobson is well known among
the
software
community. The UML and its
predecessors were
originally
aimed
at supporting object-oriented
requirements and design, but
can
actually
support almost any form of
software.
The
UML is a rich and complex
set of graphic notations
that encom-
pass
not only requirements but
also architecture, database
design, and
other
software artifacts. In fact, UML
2.0 includes 13 different
kinds of
diagram.
As a result of the richness of
the UML constructs, there is
a
very
lengthy learning curve
associated with the UML.
As
of 2009, scores of commercial tools
can facilitate UML
diagram
construction
and management. UML diagrams
can easily be
inspected
using
standard protocols for
requirements and design
inspections.
However,
it would also be useful to
have some form of automated
con-
sistency
and validity checking tools.
What comes to mind would be
a
kind
of superset of static analysis
capabilities.
For
reusable requirements and
reusable features that are
likely to be
utilized
by multiple applications, it would be
useful to have some
kind
of
pattern-matching intelligent agent
that could scour UML
diagrams
and
extract similar
patterns.
UML
is not a panacea, but the
Object Management Group
(OMG) is
continuously
working to add useful
features and eliminate
troublesome
elements.
Therefore, UML is likely to expand in
usefulness in the
future.
UML
diagrams are normal inputs
to standard function point
analysis.
In
theory, it is possible to develop a
tool that would
automatically create
function
point totals from parsing
various UML diagrams. In fact,
such
experimental
tools have been
constructed.
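A toy sketch of the idea follows; it assumes a simplified XMI export and purely illustrative weights, whereas real XMI layouts vary by tool and version and real function point counting rules are far more detailed:

    # Toy sketch only: count UML elements in a simplified XMI export and
    # apply illustrative weights to approximate a function point total.
    import xml.etree.ElementTree as ET

    WEIGHTS = {"uml:UseCase": 4.0, "uml:Class": 3.0, "uml:Interface": 2.0}
    XMI_TYPE = "{http://www.omg.org/XMI}type"   # namespace varies by XMI version

    def rough_function_points(xmi_path):
        root = ET.parse(xmi_path).getroot()
        return sum(WEIGHTS.get(elem.get(XMI_TYPE, ""), 0.0)
                   for elem in root.iter())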
The
meta-language underneath UML is amenable
to static analysis
and
other forms of automatic
verification. Test suites
might also be con-
structed
from the UML meta-language.
Finally, size in terms of
function
points
might be calculated using
the meta-language.
Use-cases
The concept of use-cases originated with Ivar Jacobson, who is also one of the pioneers working on the UML. Although use-cases
are
associated
with the UML, they are also
popular as a stand-alone
method
of
gathering requirements. Use-cases
are aimed squarely at
functional
requirements
and provide an interesting
visual representation of
how
users
invoke, modify, control, and
eventually terminate actions by
soft-
ware
applications. The application
itself is treated as a black
box, and
use-cases
concentrate on how users
interact with it to accomplish
busi-
ness
functions.
Use-cases
have introduced some
interesting abstractions into
soft-
ware
requirements analysis, such as
"actors" and "roles." These
focus
attention
on essential topics and tend
to lead analysts and
customers
in
fruitful directions.
A
number of templates provide
assistance in thinking through
a
sequence
of user interactions with software.
These templates
usually
include
topics such as "goals,"
"actors," "preconditions," and
"triggers,"
among
others.
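A sketch of the kind of structure such templates capture is shown below, with assumed field names and the ATM example used elsewhere in this chapter:

    # Sketch of a use-case template (field names assumed for illustration).
    use_case = {
        "name": "Withdraw cash",
        "goal": "Account holder obtains cash from an ATM",
        "actors": ["Account holder", "Bank network"],
        "preconditions": ["Card is valid", "Account is in good standing"],
        "triggers": ["Card inserted and PIN accepted"],
        "main_flow": [
            "User selects withdrawal and an amount",
            "System verifies funds and dispenses cash",
            "System records the transaction",
        ],
        "extensions": ["Insufficient funds: display message, return card"],
    }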
As
with other features of the UML,
many commercial tools
are
available
for drawing and managing
use-cases. Use-cases are
also
amenable
to formal requirements and
design inspections, and
can
be
used to predict application
size via function point
analysis. In
general,
use-cases are among the
easiest requirements artifacts
for
inspection,
because the visual
representation makes it easy to
exam-
ine
assumptions.
Use-cases
are also used in the
context of joint application
design
(JAD)
and are sometimes created
on-the-fly during JAD
sessions.
User stories
The Agile methods aim at creating running code as fast as possible, and the Agile community feels that the massive paper document
sets associated with the UML
and sometimes with use-cases
are
barriers
to progress rather than
effective solutions. As a result,
the
Agile
community has developed a
flexible and fast method of
gathering
requirements
termed user
stories. One
unique feature of user
stories is
that
they are closely coupled
with test cases; in fact, the
test cases and
the
user stories are developed
concurrently.
To
keep the user stories
concise and in keeping with
the Agile philoso-
phy
of minimizing paper documents,
the stories are usually
written on
3"
× 5" cards rather than
standard office paper. Many
user stories are
only
a single sentence, or perhaps a
few sentences. An example of
such
a
short user story might
be, "I want to withdraw cash
from an ATM."
However,
this means that complicated
transactions may take dozens
of
cards,
with each card defining only
a single step in the entire
process.
While use-cases can be inputs to function point analysis, the conciseness and lack of detail of user stories is one of the reasons why function point analysis is not used very often for Agile applications. In
fact, an alternative to
user
stories would be to base function
point analysis on the
associated
test
cases, which of necessity
must be more
complete.
It
is a good thing that test
cases and user stories
are created concur-
rently,
because formal inspections of user
stories would not find
many
defects,
since the stories are so
abbreviated. However, inspections of
the
test
cases created with the user
stories are of potential
value.
Another
issue with user stories is
their longevity. Once the
initial
release
of an application goes to customers,
development of the second
and
future releases may pass to
other development teams or be
out-
sourced.
How do these follow-on groups
know what requirements are
in
the
first release? In other words,
are user stories a practical
way of trans-
mitting
knowledge about requirements
over a 10- to 20-year
period?
Some
Agile organizations use a
metric called story
points for
estima-
tion.
However, there are no large
benchmark collections that
use story
points.
In addition, it is not possible to
compare projects whose
require-
ments
are derived from story
points against similar
projects that used
other
methods such as UML or use-cases.
It
is theoretically possible to convert
story points into function
points,
but
a better method would be for
Agile projects to use one of
the high-
speed
function point sizing
methods. Having function
points available
would
allow side-by-side comparisons with
other projects and
would
permit
Agile projects to submit
data to standard benchmark
collections
such
as that of the International
Software Benchmarking
Standards
Group
(ISBSG).
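A conversion sketch is shown below; the ratio used is hypothetical and would have to be calibrated against local historical data before the results were submitted to any benchmark collection:

    # Hypothetical conversion only: the ratio below is not an industry
    # constant and would need local calibration before use.
    FP_PER_STORY_POINT = 2.0

    def story_points_to_function_points(story_points):
        return story_points * FP_PER_STORY_POINT

    print(story_points_to_function_points(120))   # 240.0 function points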
Summary
of Software Requirements
Circa
2009
Even
after 60 years of software
development, methods for
gathering
and
analyzing user requirements
continue to be troublesome.
Creeping
requirements
still occur, as do requirements errors
and also toxic
require-
ments.
Requirement inspections are an
effective antidote to these
prob-
lems,
but occur for less
than 5 percent of U.S.
software projects and
even
fewer
on a global basis.
Research
into an extended taxonomy
for specific features and
spe-
cific
requirements would be valuable to
the industry because such
a
taxonomy
would allow similar
requirements to be compared and
evalu-
ated
from multiple applications.
This is because applications that
share
the
same "pattern" on the
taxonomy usually have
similar features and
similar
requirements.
Also
valuable would be elevating
the methods of static
analysis so
that
they operated on requirements.
Additional research on data
mining
to
extract hidden requirements
from source code would be an
adjunct
to
using static analysis on
requirements, as would automatic
derivation
of
function point
totals.
The
eventual goal of requirements
engineering should be to
create
catalogs
of standard reusable requirements
and associated test
and
tutorial
materials. In theory, more
than 50 percent and perhaps
more
than
75 percent of the features in
software applications could
eventually
come
from certified reusable
materials.
Business
Analysis
The
phrase "business analysis" is
very similar to the older
phrase "sys-
tems
analysis." Many corporations
employ business analysis
specialists
who
serve as a liaison between
the software engineering
community and
the
operating units of the
company.
Because
of their role as liaison
between the technical and
business
communities,
business analysts are
involved very early and
are key
participants
even before requirements
elicitation starts.
Business
analysts continue to be involved
during the design and
early
part
of the coding phases, due to
having to analyze and deal
with creep-
ing
requirements that do not
taper off until well
into the coding
phase.
After
that, additional requirements
are shunted into future
releases.
The
roles of the business
analysts are to aid in
requirements elicita-
tion,
and to ensure that both
the information technology
side and the
customer
or stakeholder side communicate
clearly and
effectively.
The
background and training for
business analysis specialists
is
somewhat
ambiguous as of 2009. Many
are former systems
analysts,
software
engineers, or quality assurance
specialists who wanted
broader
responsibilities.
There
is a nonprofit International Institute of
Business Analysis
(IIBA)
that maintains a Business
Analysis Body of Knowledge
(BABOK)
library
with substantial volumes of
information.
Because
business analysts have
backgrounds in both software
and
business
topics, they are in a good
position to facilitate
requirements
elicitation
and requirements analysis.
For example, business
analysts
are
often moderators at joint
application design (JAD)
sessions.
Business
analysts can also
participate in requirement
inspections,
quality
function deployment (QFD),
and other activities that
either col-
lect
requirements or analyze them
and explain their meaning to
the
software
community.
Some
visible gaps in the roles of
business analysts often
require other
kinds
of specialists. To illustrate a few of
these gaps:
1.
Sizing and estimating
software projects
2.
Scope management of software
projects
3.
Risk analysis of software
projects
4.
Tracking and monitoring the
progress of software
projects
5.
Quality control of software
projects
6.
Security analysis and
protection of software
projects
The
reason for the assertion
that these areas represent
"gaps" is
because
problems are very common in
all six areas regardless of
whether
business
analysis is part of the
requirements process.
Business
analysts should know a great
deal about corporate and
enter-
prise
software issues. In fact,
the roles of business
analysts and the
roles
of
enterprise architects, to be discussed
later in this chapter,
overlap.
In
the future it would be
useful to have a full and
complete description
of
the roles played by business
analysts, architects, enterprise
archi-
tects,
scope managers, and project
office managers, because
they all
have
some common
responsibilities.
One
useful service that business
analysts could provide for
their
employers
is to collect and summarize
benchmark data from a
variety
of
sources. In fact, for 30 kinds of software benchmarks, early knowledge during the requirements phase would be useful. The
30 forms of
benchmark
include
1.
Portfolio benchmarks
2.
Industry benchmarks (banks,
insurance, defense,
etc.)
3.
International benchmarks (U.S., UK,
Japan, China, etc.)
4.
Application class benchmarks
(embedded, systems, IT,
etc.)
5.
Application size benchmarks
(1, 10, 100, 1000,
function points, etc.)
6.
Requirements creep benchmarks
(monthly rates of
change)
7.
Data center and operations
benchmarks (availability, MTTF,
etc.)
8.
Data quality
benchmarks
9.
Database volume
benchmarks
10.
Staffing and specialization
benchmarks
11.
Staff turnover and attrition
benchmarks
12.
Staff compensation
benchmarks
13.
Organization structure benchmarks
(matrix, small team, Agile,
etc.)
14.
Development productivity
benchmarks
15.
Software quality
benchmarks
16.
Software security benchmarks
(cost of prevention, recovery,
etc.)
17.
Maintenance and support
benchmarks
18.
Legacy renovation
benchmarks
19.
Total cost of ownership
(TCO) benchmarks
20.
Cost of quality (COQ)
benchmarks
21.
Customer satisfaction
benchmarks
22.
Methodology benchmarks (Agile,
RUP, TSP, etc.)
23.
Tool usage benchmarks
(project management, static
analysis, etc.)
24.
Reusability benchmarks (volumes of
various reusable
deliverables)
25.
Software usage benchmarks
(by occupation, by
function)
26.
Outsource benchmarks
27.
Schedule slip
benchmarks
28.
Cost overrun
benchmarks
29.
Project failure benchmarks
(from litigation
records)
30.
Litigation cost
benchmarks
Business
analysts are not the
only personnel who should be
familiar
with
such benchmark data, but
due to their central and
important role
early
in application development, business
analysts are in a key
position: the more they know, the more valuable their work becomes.
The
assignment scope of business
analysts runs between 1500
and
50,000
function points. That means
that an approximate ratio of
busi-
ness
analysts to ordinary software
engineers would range from
about 1
to
10 up to perhaps 1 to 25. The
ratio of business analysts to
customers
runs
from about 1 to 10 up to perhaps 1 to
50.
Software
Architecture
In
essence, software architecture is
concerned with seven
topics:
1.
The overall structure of a
software application
2.
The structure of the data
used by the software
application
3.
The interfaces between a
software application and the
world outside
4.
The decomposition of the
application into functional
components
5.
The linkage or transmission of
information among the
functional
components
6.
The performance attributes
associated with the
structure
7.
The security attributes
associated with the
structure
There
are other associated topics,
but these seven seem to be
the
fundamental
topics of concern.
The
roles of both software
architects and enterprise
architects have
been
evolving in recent years and
will continue to evolve as new
topics
such
as cloud computing, service-oriented
architecture (SOA), and
vir-
tualization
become more
widespread.
The
importance of software architecture
resembles the importance
of
the
architecture of houses and
buildings: the larger the
structure, the
more
important good architecture
becomes.
By
coincidence, the size of a
physical building measured in
terms of
"square
feet" and the size of a
software application measured in
terms
of
"function points" share
identical patterns when it
comes to the impor-
tance
or value of good architecture.
Table 7-7 illustrates how
architec-
ture
goes up in value with physical
size.
Using
the information shown in
Table 7-7, a small iPhone
applet with
a
size of perhaps 5 function
points, or 250 Java
statements, can be
suc-
cessfully
implemented without any
formal architecture at all,
other than
the
developer's private knowledge of
the value of structured
code.
However,
a very large system in the
size range of Vista, Oracle,
SAP,
and
the like will probably not
even be possible without
very good archi-
tecture
and a number of architectural
specialists. These massive
applica-
tions
top 100,000 function points,
or more than 5 million
statements in a
language
such as Java (probably more
than 15 million in
actuality).
Both
software architecture and
the architecture of buildings
are con-
cerned
largely with structural issues.
However, software
architecture
is
even more complicated than
building architecture because
software
TABLE 7-7   Value of Architecture Increases with Structural Size

Size in Square Feet or
Size in Function Points      Importance of Architecture
1                            Not possible and not needed
10                           Not needed
100                          Minimal need for architecture
1,000                        Architecture useful
10,000                       Architecture important
100,000                      Architecture critical
applications
are not static once they
are constructed. They grow
con-
tinuously
at about 8 percent per year
as new features are added.
This
is
much faster than buildings
grow once complete. Also,
software appli-
cations
have no value unless they
are operating. When they
operate,
software
applications have a very
dynamic structure that can
change
rapidly
due to calls and features
that open up and are
modified during
execution.
Therefore, software architects
have to deal with
dynamic
and
performance-related issues that
building architects only
encounter
occasionally
for structures such as
drawbridges and transit
systems.
Another
significant difference between
building architecture and
soft-
ware
architecture is in the area of
security. Of course, for
some buildings
such
as the Pentagon and CIA
headquarters, security is a top
architec-
tural
concern, but security is not
usually a major architectural
issue for
ordinary
homes and small office
buildings.
For software applications, security is becoming increasingly important at all size levels. As the
recession continues, security will
become even
more
important because threats are
becoming much more
sophisticated.
The
recent success of the Conficker worm, which affected more
than
1.9
million computers, including
some in "secure" government
agencies
in
early 2009, provides an
urgent wakeup call to the
increasing impor-
tance
of security as an architectural issue
for software.
As
software engineering gradually
evolves from a craft to an
engineer-
ing
field, the importance of
architecture will continue to grow.
One reason
for
this is because software architectural
styles are rapidly
evolving.
Returning
to the analogy of the
architecture of buildings,
various
chronological
periods are sometimes
characterized by the
dominant
form
of architecture employed. There
are also regional
differences.
Thus,
many histories of architecture in
the United States include
dis-
cussions
of the "Queen Anne" style,
the "General Grant Gothic"
style,
the
"Southern Antebellum" style,
the "English Tudor" style,
the "Frank
Lloyd
Wright" style, and dozens
more.
Software
engineering is not yet old
enough to have formal
histories
of
the evolution of architectural
styles, but they are
changing at least
as
rapidly as the architecture of
homes and buildings.
One
useful but missing piece of
information from software
bench-
marks
would be a description or taxonomy of
the architecture that
was
used
for applications. This would
facilitate analysis of topics
such as
quality
levels and security
vulnerabilities associated with various
soft-
ware
architectures.
When
applications were small and
averaged less than 1000
function
points
in size, as they did until the late 1960s,
software architecture
was
not a topic of interest.
Edsger Dijkstra and David
Parnas first dis-
cussed
software architecture as a topic of
importance circa 1968.
Later
pioneers
such as Mary Shaw and David
Garlan continued to
stress
that
software architecture was a
critical factor for the
success of large
systems.
The
reason for the increasing
importance of architecture was
because
of
four factors:
1.
Software applications were
growing rapidly and
exceeding 10,000
function
points or 1 million source
code statements. Today, in 2009, some applications are more than ten times larger still.
2.
The volume of data used by
software applications has
been growing
even
faster than software itself.
The number of automated
records
increased
from thousands to millions to
billions and continues
to
increase.
No doubt trillions of records
are just over the
horizon.
3.
Database and data
organization schemas have been
evolving as fast
or
faster than software
architectural schemas.
4.
Software applications were no
longer operating all by
themselves
on
one computer.
When
large software applications
began to be divided into
compo-
nents
that could operate in
parallel, or operate on separate
computers
at
the same time, architecture
became a very important
topic.
Software
applications that ran alone
on a single computer were
con-
sidered
to have a "monolithic" architecture.
One of the significant
depar-
tures
from this model was to
have some of the functions
executing on a
host
computer (often a mainframe)
while other functions
operated on
personal
computers. This method of
decomposition was called
client-
server
architecture.
In
the 1980s and even
more in the 1990s, many
other architectural
approaches
emerged. They included but
were not limited to
event-driven
architecture,
three-tier architecture (presentation
layer, business logic
layer,
and database layer), N-tier
architecture with even more
layers,
peer-to-peer
architecture, model-driven architecture,
and of course the
more
recent pattern-based architecture,
service-oriented architecture
(SOA),
soon followed by cloud
computing.
At
the same time that
software architectures were
expanding and
evolving,
data structures and data
volumes were expanding and
evolv-
ing.
Hierarchical data structures
were joined by relational
data struc-
tures
and also row-oriented data,
column-oriented data,
object-oriented
data,
and a number of
others.
Obviously,
software architects need to
consider the join between
the
structure
of software itself and
optimal data organizations to
accom-
plish
the purpose of the
application. These are not
trivial choices, and
both
experience and special
knowledge are
required.
Successfully
choosing and designing
applications using any of
these
more
recent forms of software and
data architecture became a
job that
required
special training and
considerable experience. As a result,
many
large
companies such as IBM and
Microsoft created new job
descriptions
and
new job titles such as
"architect" and "senior
architect."
As
the positions of architect
began to appear in large
companies, sev-
eral
associations emerged so that
architects could share
information
and
gain access to the latest
thinking. One of these is
the International
Association
of Software Architects (IASA),
and another is the
World
Wide
Institute of Software Architects
(WWISA). There are also
special-
ized
journals such as the
Microsoft
Architecture Journal dealing
with
architectural
topics.
As
of 2009, the weight of
evidence supports the
hypothesis that large
companies
that build large software
applications should employ
profes-
sional
software architects. That
can be considered a best
practice.
An
interesting question is how
many architects does a
company need?
The
normal assignment scope for
an architect ranges between
5000 and
about
100,000 function points.
This means that an
application of 10,000
function
points will need at least
one architect. However, a
massive
application
of 150,000 function points
might need at least two
archi-
tects.
Total employment of architects
even in large companies such
as
IBM
or Microsoft is probably less
than 100 architects out of
perhaps
50,000
total software
engineers.
However,
the evolution of specific
architectural styles is far
too rapid,
and
the criteria for evaluating
architectures is far too
hazy to state that
using
a specific form of architecture
for a specific application is a
good
choice,
a questionable choice, or a potentially
disastrous choice.
It
should be recalled that
hundreds of companies jumped
onto the
client-server
bandwagon in the 1980s, only
to discover that
complexity
levels
were so high and
implementation so difficult that
quality and
reliability
sometimes dropped to unusable
levels.
As
of 2009, service-oriented
architecture
is attracting a huge
amount
of
coverage in the literature
and many early converts. But
will SOA
prove
to be a truly successful architectural
advance, or only a
quantum
leap
in complexity without too
many benefits? Unfortunately,
there are
not
yet enough completed SOA
applications to be sure that
this theo-
retically
useful architecture will live up to
the promises that are
being
made
on its behalf. (Recall that
SOA applications are not
downloaded
into
individual computers, but
operate remotely from web
hosts. This of
course
requires high bandwidths and
transmission speed to be
effective.
No
one has considered whether
there is enough bandwidth
available if
there
are thousands of SOA
applications attempting to serve
millions
of
clients at the same
time.)
Another
form of advanced architecture with
huge claims is that of
cloud
computing.
With
this architecture, applications
are segmented so that
they
can run concurrently on
literally hundreds of remote
computers.
This
raises questions of safety
and security given the
rather poor security
protocols
that might be found in a
cloud computing
environment.
The
bottom line for architecture
as of 2009 is that it is evolving
so
rapidly
that it is worthwhile to employ
professional software
archi-
tects
who can stay current with
the evolution of software
architectural
styles.
But
selecting a specific architecture
for a specific application is
not
a
clear-cut choice with only
one correct answer. The
choice needs to
be
made by the architects
assigned to the application,
based on their
knowledge
of both architectural principles
and also on their
knowledge
of
the purpose and features of
the application in
question.
Enterprise
Architecture
The
need for enterprise
architecture has grown
progressively more
important
over the past 30 years,
due in large part to the
way comput-
ers
and software became embedded
in corporate operations.
In
the late 1960s, when
mainframe computers first
began to be applied
to
business problems, their
capabilities were somewhat
primitive and
limited.
As a result, early business
applications tended to be very
local,
to
operate on a specific computer in a
specific data center, and to
serve
only
a limited number of users in a
single business unit.
Corporations
have multiple operating
units, including
manufacturing,
marketing,
sales, finance, human
resources, and a number of
others.
Large
corporations also have
multiple business and
manufacturing sites
scattered
through multiple cities and
states.
When
computers and software first
became business tools, it
was a
common
practice for each operating
unit to have its own
data center and
to
develop its own software.
Often there was little or no
communication
between
operating units as to the
features, interfaces, or data
that the
applications
were automating.
By
the 1980s, large
corporations had developed
hundreds or even
thousands
of software applications, the
majority of which served
only
narrow
and local purposes. When
corporate officers such as
the CEO
needed
consolidated information from
across all business units,
time-
consuming
and expensive work was
necessary to extract data
from vari-
ous
applications and produce
consolidated reports.
This
awkward situation triggered
the emergence of enterprise
architec-
ture
as a key discipline to bring
data processing consistency across
mul-
tiple
operating units. The same
situation also triggered the
emergence of
an
important commercial software
market: enterprise resource
planning
(ERP).
The basic concept of ERP
applications is that individual
applica-
tions
are so hard to link together
that it would be cheaper to
replace all
of
them with a single large
system that could serve
all operating units
at
the same time, and to
store data in a consistent
format that served
corporate
and unit needs
simultaneously.
From
about 2000 onward, numerous
instances of corporate fraud
and
severe
accounting errors such as
demonstrated by Enron have
added
another
dimension to enterprise architecture.
Enterprise architects
are
also
key players in software
governance, or
ensuring that financial
data
is
accurate and that corporate
officers take responsibility
for its accu-
racy
under threat of severe
penalties.
The
main difference between
architecture as discussed in the
previ-
ous
section and enterprise
architecture is the scope of
responsibility.
Normally,
architects work on individual
applications, which might
range
from
10,000 to more than 100,000
function points. Enterprise
architects
work
on corporate portfolios, which
may range from about 2
million
function
points to more than 20
million function points in
aggregate
size.
Corporate portfolios for
large companies such as
Microsoft, IBM,
or
Lockheed contain thousands of
applications.
Yet
another aspect of enterprise
architecture is the fact
that large
corporations
create and use many
different kinds of software:
conven-
tional
information technology applications,
web applications,
embedded
applications,
and systems software. Some
of these applications are
built
by
in-house personnel; some are
outsourced; some come from
commer-
cial
vendors; some are
open-source applications; and
some come from
mergers
and acquisitions with other
companies. In addition, any
large
corporation
today in 2009 must also
interface with the computer
sys-
tems
of other corporations and
also with government agencies
such as
taxation
and workers
compensation.
The
most difficult part of
enterprise architecture is probably
that
of
dealing with joining two
software portfolios as a result of a
merger
or
acquisition. Usually, at least 80
percent of the applications in
both
companies
perform similar functions,
but they may use
different data
structures,
have different interface
methods, and have different
internal
architectures.
Combining
portfolios from two
different companies in the
wake of a
merger
is one of the most difficult
tasks faced by enterprise
architects,
by
architects, by business analysts,
and by all other software
engineer-
ing
personnel.
Yet
another set of concerns
studied by enterprise architects
are the
communication
methods among disparate
business units and also
the
databases
and repositories they
develop and maintain.
In
addition, enterprise architects
are also concerned with a
host of
technology
issues including but not
limited to hardware platforms,
soft-
ware
operating systems, open-source
software, COTS packages
from
external
vendors, and emerging topics
such as cloud computing
and
service-oriented
architecture that are not
yet fully deployed.
Enterprise
architecture has the same
relationship to architecture
that
urban planning has to
building architecture. With
building archi-
tecture,
an architect is concerned primarily with
a single building. But
urban
planners need to be concerned
about thousands of buildings
at
the
same time. Urban planners
need to think about what
kinds of infra-
structure
will be needed to support various
sectors such as
residential,
commercial,
industrial, and so
forth.
Table
7-8 shows the importance of
enterprise architecture with
increasing
numbers of applications owned by
the enterprise.
Table
7-8 brings up an interesting
question: How many
enterprise
architects
are needed in a large
company? Because this is a
fairly new
occupation,
there is no definitive answer.
However, given the
complexity
of
the situation, a corporation
probably needs one enterprise
architect
for
about every 1000 significant
applications in their portfolio.
Thus, if
a
company has 5000
applications in their portfolio,
they may need
five
enterprise
architects.
Expressed
another way, the assignment
scope of an enterprise
archi-
tect
runs from 500,000 up to more
than 2 million function
points.
For
a large corporation such as IBM,
Microsoft, or Unisys, a full
port-
folio
might include
3,000    in-house information technology applications
1,500    web-based applications
1,000    tools (project management, testing, etc.)
3,500    commercial applications from other companies (ERP, HR, etc.)
2,000    commercial applications sold to other companies
2,500    systems-software applications
500      embedded applications (security, AC, etc.)
250      open-source applications
14,250   total applications
Assuming
this total quantity of
applications, then about 15
enterprise
architects
are likely to be
employed.
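The staffing arithmetic can be sketched as follows, using the rule of thumb quoted above and the hypothetical portfolio just listed:

    # Staffing sketch: roughly one enterprise architect per 1,000
    # significant applications (rule of thumb from the text).
    import math

    portfolio = {
        "in-house IT": 3000, "web": 1500, "tools": 1000,
        "commercial, acquired": 3500, "commercial, sold": 2000,
        "systems software": 2500, "embedded": 500, "open source": 250,
    }

    APPS_PER_ARCHITECT = 1000
    total = sum(portfolio.values())               # 14,250 applications
    print(math.ceil(total / APPS_PER_ARCHITECT))  # 15 enterprise architects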
TABLE 7-8   Value of Enterprise Architecture Increases with Applications

Number of Applications
Owned by Enterprise      Importance of Enterprise Architecture
10                       Enterprise architecture not needed
100                      Enterprise architecture useful
1,000                    Enterprise architecture important
10,000                   Enterprise architecture very important
100,000                  Enterprise architecture critical
1,000,000                Enterprise architecture critical but very difficult to achieve
These
disparate applications will probably
operate on more than
a
dozen
hardware platforms and
encompass at least half a
dozen operat-
ing
systems. In other words, the
software world of a large
corporation
is
a smorgasbord of diverse applications,
platforms, data file
structures,
communication
channels, and other problem
areas.
As
of 2009, the roles of
enterprise architects are
evolving under the
impact
of service-oriented architecture (SOA),
cloud computing, the
explosion
of open-source applications, and
also under the emerging
cri-
teria
for more accurate financial
reports mandated by
Sarbanes-Oxley
legislation.
The
global recession will also
have a significant but
unpredictable
impact
on enterprise architecture. There
are no models or
guidelines
for
what happens to enterprise
architecture during periods of
massive
layoffs,
closures of business units,
abandonment of unfinished
applica-
tions,
and reduced numbers of
development and maintenance
personnel.
In
fact, there is some risk
that enterprise architects
themselves may be
among
those who are laid
off, because their work is
not always perceived
as
having a direct impact on
corporate bottom
lines.
Several
nonprofit associations support
the enterprise
architecture
domain.
One of these is the
Association of Enterprise Architects
(AEA),
whose
web site is
aeaasociation.org.
Another
is the awkwardly named
Association of Open Group
Enterprise
Architects
(AOGEA), whose web site is
AOGEA.org. This
organization
and
its awkward name are
due to a merger between the
Open Group
organization
and the Global Enterprise
Architecture Organization
(GEAO).
The merged group asserts
that it has become the
largest asso-
ciation
of architects in the
world.
There
is also a journal for
enterprise architects, The
Journal of
Enterprise
Architecture (JEA),
published by the Association of
Enterprise
Architects.
It
is difficult to find information
about the specific plans of
enterprise
architects
for corporations, because their
work is usually
proprietary
and
not made available to the
public. However, many units
of the federal
government
and most state governments
do publish or make
available
information
about their enterprise
architectures. The Department
of
Defense
is the world's largest user
of computer software and is
attempt-
ing
to develop a new and
improved enterprise
architecture.
The
huge increases in hacking,
worms, viruses, and denial
of service
attacks
are obviously topics of
great concern to enterprise
architects.
However,
security requires special
skills, which are rare
today, so exter-
nal
consultants on security are needed to
buttress the work of
enterprise
architects
until they can catch
up.
In
terms of best practices,
organizations that own more
than about 500
software
applications should employ at
least one enterprise
architect.
Large
corporations with more than
5000 software applications
may need
five,
as noted before using ratios
of applications to enterprise
architects.
The
roles played by enterprise
architects in specific companies
vary
widely,
and it is hard to pin down
best practices. Obviously,
increasing
data
sharing among operating
units would be a best
practice. Eliminating
redundant
applications and pruning
portfolios of aging and
unwieldy
legacy
applications would be another
best practice. Other roles,
which
may
or may not be viewed as best
practices, might include
changing the
ratios
of COTS applications to in-house
software, and perhaps
partici-
pating
in the selection and
deployment of enterprise resource
planning
(ERP)
applications. No doubt the
work of enterprise architecture
will
continue
to evolve with technical and
business changes.
Software
Design
Suppose
you were asked by the
CEO of your company to
examine the
most
recent 250 applications
developed internally and to
identify can-
didate
features for creating a
library of reusable designs,
code, and test
cases.
How could this assignment be
carried out?
This
would not be an easy
assignment given the state
of the art of
software
design circa 2009. About 75
of the smaller applications
below
1000
function points would
probably have used Agile
development and
expressed
their designs via user
stories, perhaps augmented by
other
representation
methods. User stories are
useful enough for
individual
applications,
but not necessarily useful
for identifying common
patterns
across
multiple applications.
About
50 of the larger business
applications above 5000
function
points
would have used more
formal design methods;
probably the UML
with
the requirements being
elicited via joint
application design
(JAD).
While
the UML does capture
individual patterns, the
large volume of
UML
diagrams and their many
flavors means that scanning
through
UML
for a sample of 50 applications,
trying to identify common
features,
would
not be easy or rapid.
An
automated tool such as a
static analysis tool might
parse the meta-
language
underlying UML and identify
common patterns, but this
is
not
readily done circa
2009.
About
25 of the scientific or engineering
applications would have
used
state-change
diagrams, modeling languages
such as LePus3,
Express,
and
probably quality function
deployment (QFD) with "house of
quality"
diagrams
and various architectural
meta-language models.
The
remaining 100 of the
applications might have
utilized a wide
variety
of methods including but not
limited to use-cases, the UML,
Nassi-Schneiderman
charts, Jackson design,
flowcharts, decision
tables,
data-flow
diagrams, HIPO diagrams, and
probably more as well. Some
of
these
define patterns, but they
are not easy to scan for a
sample of 100
projects.
In
summary, the 250 most
recent applications might
have used more
than
50 different design languages
and methodologies, which,
for the most
part,
are not easily translatable
from one to another. Neither
are they
amenable
to automatic verification and
error-checking.
As
a result of the large
variety of fairly incompatible
design repre-
sentations
used on the sample of 250
applications in the same
company,
there
is no easy way to pick out
features or patterns that
are common
among
several applications using
design documents. This makes
it dif-
ficult
to identify candidate features
for a library of reusable
materials.
Since
all of the applications are
complete and operating, it
might be
possible
to identify the patterns by
means of static-analysis tools on
the
source
code itself, assuming that
all of these applications
are written
in
C, C++, Java, or any of the
approximately 25 languages where
static
analysis
operates.
Since
some of the design methods
have underlying
meta-languages,
static
analysis is theoretically possible,
but most static analysis
tools
support
programming languages and
not meta-languages as of
2009.
It would also be possible to look for patterns using one or more of the legacy renovation tools that parse source code and display it in a fashion that makes maintenance and modification easy. Yet another possibility would be to use some of the more sophisticated complexity analysis tools that examine source code to calculate cyclomatic and essential complexity and also to identify code patterns.
The
bottom line is that as of
2009, it is easier to find
and identify
patterns
in code than it is to identify
patterns in design. This is
not the
way
it should be. Design methods
should be amenable to
automated
analysis
in order to detect defects
and also to look for
patterns of reus-
able
elements.
Another
issue with software design is
that software design errors
are
the
second most numerous form of
software error. Design
errors aver-
age
about 1.25 bugs or mistakes
per function point, while
code averages
about
1.75 bugs per function
point.
Since
design documentation runs
between one page and two pages
per
function
point, the implication is
that essentially every page
of a design
specification
has at least one bug or
error. This is why design
inspections
are
so powerful and effective in
reducing software design
problems.
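The arithmetic behind that implication can be checked directly:

    # Defects per page of design, given about 1.25 design defects per
    # function point and one to two pages of design per function point.
    DESIGN_DEFECTS_PER_FP = 1.25

    for pages_per_fp in (1.0, 2.0):
        print(pages_per_fp, DESIGN_DEFECTS_PER_FP / pages_per_fp)
    # 1.0 page/FP  -> 1.25 defects per page
    # 2.0 pages/FP -> 0.625 defects per page, roughly one bug on every
    #                 page or two of a design specification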
Given
that the typical error
density in software design
remains high
whether
the representation method
consists of use-cases, the UML,
flowcharts,
or any of the other 50 or so
representation methods, there
is
insufficient
data to select any current
design methods as a best
practice.
What
is more useful, perhaps, is to
consider the fundamental
topics that
need
to be part of software
designs.
Software
Design Views
When
considered objectively, software
design is a subset of the
more
general
topic of knowledge
representation. That
brings up important
questions
as to what kinds of knowledge
need to be represented
when
designing
a software application. It also
brings up questions as to
what
languages
or forms of representation are
best for the various
topics that
are
part of software
designs.
Because
software is not readily
visible and also has
dynamic attri-
butes,
it is somewhat more difficult to
enumerate the topics of
software
that
need to be represented than it
might be for a static
physical object
such
as a building. Eight general
topics are needed to represent
software
applications:
1.
The external
view of
software features visible to
users and derived
from
explicit user requirements.
The external view includes
screen
images,
report formats, and responses to
user actions as might
occur
with
embedded software. This view
might identify features that
are
shared
with other applications and
hence potentially reusable.
This
view
also should deal with
error-handling for user
errors. This view
will
also discuss the various
hardware and software
platforms on
which
the application will operate,
and also the various
countries
and
national languages that will be
supported. This view is
fairly
concise
and seems to average between
0.5 and 1.0 page
per function
point.
2.
The algorithm
view of
the mathematical formulas or
algorithms
contained
in the application. These
might be straightforward
calcu-
lations
such as currency conversions or
very complex formulas
such
as
those associated with quantum
mechanics. In any case, the
major
algorithms
need to be represented and
explained prior to
encoding
them.
This view is very concise
and averages below 0.25
page per
function
point.
3.
The structural
view of
software applications includes
components
and
modules and how they
are joined together to form
a complete
application.
This view includes the
sequence or concurrency with
which
these modules will execute.
Calls or interfaces to
external
applications
are also part of the
structural view. This view
might
also
show modules or features
that are reused from
external sources
or
custom-built for a specific
application. Classes and
inheritance
using
object-oriented methods would
also be shown in the
struc-
tural
view. This is the most
verbose view and runs
between 1.0 and
2.0
pages per function
point.
4.
The data
view includes
the kinds of information
created, used, or
manipulated
by the software application.
This view includes
facts
about
the data such as whether it
consists of business
information,
symbols,
sensor-based information, images,
sounds, or something
else.
For example, the embedded
software inside a cochlear
implant
converts
external audio information
into electrical signals.
Because
as
of 2009, there is no "data
point" metric or any other
metric for
expressing
the size of databases,
repositories, and data
warehouses,
there
is no effective way to express
the size or volume of data
used
by
software.
5.
The attribute
view or
nonfunctional goals and
targets for the
appli-
cation
once it is deployed. These
attributes can include
performance
in
terms of execution speed, reliability in
terms of mean time to
failure
(MTTF), quality in terms of
delivered defects, and a
number
of
other attributes as well.
This view is also very
concise and usually
requires
less than three pages no
matter how large the
application
itself
is.
6.
The security
view or
how the application will
defend itself against
viruses,
worms, search bots, denial
of service attacks, and
other
attempts
to either interfere with the
operation of the software
or
steal
information used by the
software. This view is new
circa 2009,
but
quickly needs to become a
standard feature of software
applica-
tion
design and especially so for
financial applications,
health-care
applications,
and any application that
deals with valuable or
classi-
fied
information. This view is
too new to have any
size information
available
as of 2009. However, it will probably
turn out to be fairly
concise.
7.
The pattern
view, or
the combinations of the
other views that
are
likely
to occur in multiple software
applications, and hence
are
candidates
for reuse. Typical patterns
with reuse potential will
contain
similar external features,
similar algorithms, and
similar
data
structures. Class libraries
and inheritance of
object-oriented
software
may also be part of software
patterns. This view seems
to
require
about 0.1 to 0.4 page
per function point to
describe specific
patterns,
with the size being based on
the reusable feature
being
described.
8.
The logistical
view records
certain historical facts
about software
applications
that are often lost or
difficult to find. These
logistical
topics
include the date the
application was first
started, the loca-
tions
and companies involved in
construction, and information
on
the
methods, tools, and
practices used in construction.
Application
size
in terms of both function
points and logical code
statements
would
be included in the logistical
view, along with the various
lan-
guages
utilized. Since applications
continue to grow, the
logistical
view
should identify creeping
requirements and then later
growth
over
multiple releases. The logistical
view also includes the
sources of
reusable
materials for the
application. The logistical
view is intended
to
aid in benchmarking. The
logistical view would also
be useful
for
multiple regression analysis to
demonstrate the effectiveness
of
methods
such as Agile or TSP. Part
of the logistical view would
be
the
placement of the application on a
formal taxonomy, such as
the
one
discussed earlier in this
chapter. This view is
usually less than
ten
pages, regardless of the size of
the application
itself.
When
all of the eight views
are summed together, the
average size is
about
3.0 pages per function
point, and the range
runs from less
than
1.5
pages per function point to
more than 6.0 pages
per function point.
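The volume arithmetic can be sketched as follows; the per-view figures are midpoints of the ranges given above, and the views that are roughly fixed-size or unmeasured are omitted:

    # Pages of design implied by the per-function-point view sizes above.
    PAGES_PER_FP = {
        "external": 0.75,     # 0.5 to 1.0 page per function point
        "algorithm": 0.25,    # below 0.25 page per function point
        "structural": 1.5,    # 1.0 to 2.0 pages per function point
        "pattern": 0.25,      # 0.1 to 0.4 page per function point
    }

    def design_pages(function_points):
        return function_points * sum(PAGES_PER_FP.values())

    print(design_pages(1000))   # 2750.0 pages, consistent with the
                                # roughly 3.0 pages per function point average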
From
the fairly large sizes
associated with software design, it is
easy
to
understand why the creation of
paper documents can cost
more than
the
source code for large
applications. It is also easy to
understand why
some
of the Agile concepts are in
reaction to the large
volumes and high
costs
of normal software design
practices.
Given
the multiple views that
need to be captured during
software
design,
it is obvious that no single
language or representation
method
can
deal with all eight kinds of
view. Therefore, software
design must
utilize
multiple methods of representing
knowledge:
■ Natural language text can be used for defining the attribute view, the logistics view, and for some of the external views. Special forms of natural language such as "executable English" may also be used.
■ Images may be needed for some aspects of the external view, such as typical screens or samples of outputs.
■ Mathematical formulas or other forms of scientific notation are needed for the algorithm view.
■ Symbols and diagrams are needed for the structural view. Because of the dynamic nature of software, some form of animation would be preferable to static views. With animation, performance can be modeled during design.
Since
automation for verification
purposes would be somewhat
dif-
ficult
across multiple representation
methods, it would be desirable
and
useful
if the major views could be
mapped into a single
meta-language.
Obviously,
most of the views eventually
get mapped into source
code,
but
by the time the code is
complete, it is too late to
verify and validate
the
design.
Whether
a generalized design meta-language is
based on some form of
Backus-Naur
notation, a definite clause
grammar (DCG), or
something
else,
it should have the property
of being analyzed automatically
for
verification
and validation purposes.
Taking verification and
validation
one
step further, it might also
be possible to generate a suite of
test cases
from
the analysis of the
meta-language.
The
bottom line on software
design circa 2009 is that
some of the 50 or
so
representation methods are
effective for individual
applications. But
none
are effective for pattern
analysis and identification of
candidates
for
reusable features.
Summary
and Conclusions
The
creation of various paper
representations of software
applica-
tions
before the code itself is
created has long been
troublesome for the
software
engineering domain. Errors
and mistakes are found in
every
form
of paper description of software.
Translation from requirements
to
design
and from design to code
always manages to leave some
features
behind,
and often manages to add
features that no one asked
for.
The
cost of producing paper
documents is often greater
than the cost
of
the source code itself.
While paper documents can be
inspected for
errors,
and inspections are quite
effective, it is very difficult to
carry
out
automated verification and
validation of either text
documents or
graphic
design documents.
In
total, software requirements, analysis, architecture, and design contribute about 60 percent of all
software bugs or defects and
accu-
mulate
between 30 percent and 40
percent of software costs. Indeed,
the
three
top cost elements of large
software applications
are
1.
Finding and fixing bugs
(many of which originate in
paper docu-
ments)
2.
Producing paper documents
including requirements,
architecture,
and
design
3.
Creating the source code
itself
Because paper documents are simultaneously more defective and more expensive than the source code itself, there is a continuing need for software engineering researchers to pay more attention both to the error content of paper documents and to the economic costs of paperwork.
Hopefully, future studies will enable software patterns to be more easily found and will also permit more effective validation of requirements and design by automated means.
As of 2009, formal inspection of requirements, architecture, and design is the most effective known way of eliminating defects in these important documents. Inspections are somewhat slow and costly, but neither static analysis nor testing is fully capable of finding and removing requirements and design errors, so manual inspections remain critical activities.
Readings and References
Note: Software requirements, business analysis, architecture, enterprise architecture, and design collectively have more than 500 book titles and thousands of journal articles in print. Yet in spite of the huge volume of published information, these areas of software engineering continue to be troublesome and erratic. The titles shown here represent only a small sample of the available literature.
The Cost and Quality Associated with Software Paperwork
Beck, Kent. Test-Driven Development. Boston, MA: Addison Wesley, 2002.
Cohen, Lou. Quality Function Deployment--How to Make QFD Work for You. Upper Saddle River, NJ: Prentice Hall, 1995.
Cohn, Mike. Agile Estimating and Planning. Englewood Cliffs, NJ: Prentice Hall PTR, 2005.
Garmus, David and David Herron. Function Point Analysis--Measurement Practices for Successful Software Projects. Boston, MA: Addison Wesley Longman, 2001.
Garmus, David and David Herron. Measuring the Software Process: A Practical Guide to Functional Measurement. Englewood Cliffs, NJ: Prentice Hall, 1995.
Gilb, Tom and Dorothy Graham. Software Inspections. Reading, MA: Addison Wesley, 1993.
Glass, R.L. Software Runaways: Lessons Learned from Massive Software Project Failures. Englewood Cliffs, NJ: Prentice Hall, 1998.
Harris, Michael, David Herron, and Stacia Iwanicki. The Business Value of IT: Managing Risks, Optimizing Performance, and Measuring Results. Boca Raton, FL: CRC Press (Auerbach), 2008.
Humphrey, Watts. Managing the Software Process. Reading, MA: Addison Wesley, 1989.
Jones, Capers. Assessment and Control of Software Risks. Englewood Cliffs, NJ: Prentice Hall, 1994.
Jones, Capers. Estimating Software Costs. New York, NY: McGraw-Hill, 2007.
Jones, Capers. Patterns of Software System Failure and Success. Boston, MA: International Thomson Computer Press, 1995.
Jones, Capers. Software Assessments, Benchmarks, and Best Practices. Boston, MA: Addison Wesley Longman, 2000.
Jones, Capers. "Software Project Management Practices: Failure Versus Success." CrossTalk, Vol. 19, No. 6 (June 2006): 48.
Jones, Capers. "Why Flawed Software Projects are not Cancelled in Time." Cutter IT Journal, Vol. 10, No. 12 (December 2003): 1217.
Kan, Stephen H. Metrics and Models in Software Quality Engineering, Second Edition. Boston, MA: Addison Wesley Longman, 2003.
McConnell, Steve. Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press, 2006.
Radice, Ronald A. High Quality Low Cost Software Inspections. Andover, MA: Paradoxicon Publishing, 2002.
Roetzheim, William H. and Reyna A. Beasley. Best Practices in Software Cost and Schedule Estimation. Upper Saddle River, NJ: Prentice Hall PTR, 1998.
Strassmann, Paul. Governance of Information Management: The Concept of an Information Constitution, Second Edition (eBook). Stamford, CT: Information Economics Press, 2004.
Strassmann, Paul. Information Payoff. Stamford, CT: Information Economics Press, 1985.
Strassmann, Paul. Information Productivity. Stamford, CT: Information Economics Press, 1999.
Strassmann, Paul. The Squandered Computer. Stamford, CT: Information Economics Press, 1997.
Wiegers, Karl E. Peer Reviews in Software--A Practical Guide. Boston, MA: Addison Wesley Longman, 2002.
Yourdon, Ed. Death March--The Complete Software Developer's Guide to Surviving "Mission Impossible" Projects. Upper Saddle River, NJ: Prentice Hall PTR, 1997.
Software Requirements
Arlow, J. and I. Neustadt. UML and the Unified Process. Boston, MA: Addison Wesley, 2000.
Booch, Grady, Ivar Jacobson, and James Rumbaugh. The Unified Modeling Language User Guide, Second Edition. Boston, MA: Addison Wesley, 2005.
Cockburn, Alistair. Writing Effective Use Cases. Boston, MA: Addison Wesley, 2000.
Cohn, Mike. User Stories Applied: For Agile Software Development. Boston, MA: Addison Wesley, 2004.
Ferdinandi, Patricia L. A Requirements Pattern: Succeeding in the Internet Economy. Boston, MA: Addison Wesley, 2002.
Gottesdiener, Ellen. The Software Requirements Memory Jogger. Salem, NH: Goal QPC Inc., 2005.
Inmon, William H., John Zachman, and Jonathan G. Geiger. Data Stores, Data Warehousing, and the Zachman Framework. New York, NY: McGraw-Hill, 1997.
Orr, Ken. Structured Requirements Definition. Topeka, KS: Ken Orr and Associates, Inc., 1981.
Robertson, Suzanne and James Robertson. Mastering the Requirements Process, Second Edition. Boston, MA: Addison Wesley, 2006.
Wiegers, Karl E. Software Requirements, Second Edition. Bellevue, WA: Microsoft Press, 2003.
Wiegers, Karl E. More About Software Requirements: Thorny Issues and Practical Advice. Bellevue, WA: Microsoft Press, 2006.
Software Business Analysis
Carkenord, Barbara A. Seven Steps to Mastering Business Analysis. Ft. Lauderdale, FL: J. Ross Publishing, 2008.
Haas, Kathleen B. Getting it Right: Business Requirements Analysis Tools and Techniques. Vienna, VA: Management Concepts, 2007.
Software Architecture
Bass, Len, Paul Clements, and Rick Kazman. Software Architecture in Practice. Boston, MA: Addison Wesley, 1997.
Marks, Eric and Michael Bell. Service-Oriented Architecture (SOA): A Planning and Implementation Guide for Business and Technology. New York, NY: John Wiley & Sons, 2006.
Reekie, John and Rohan McAdam. A Software Architecture Primer. Angophora Press, 2006.
Shaw, Mary and David Garlan. Software Architecture: Perspectives on an Emerging Discipline. Englewood Cliffs, NJ: Prentice Hall, 1996.
Taylor, R.N., N. Medvidovic, and E.M. Dashofy. Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: Wiley, 2009.
Warnier, Jean-Dominique. Logical Construction of Systems. London: Van Nostrand Reinhold, 1978.
Enterprise Architecture
Bernard, Scott. An Introduction to Enterprise Architecture, Second Edition. Philadelphia, PA: Auerbach Publications, 2008.
Fowler, Martin. Patterns of Enterprise Application Architecture. Boston, MA: Addison Wesley, 2007.
Lankhorst, Marc. Enterprise Architecture at Work: Modeling, Communication, and Analysis. Cologne, DE: Springer, 2005.
Spewak, Steven H. Enterprise Architecture Planning: Developing a Blueprint for Data, Applications, and Technology. Hoboken, NJ: Wiley, 1993.
Software Design
Ambler, S. Process Patterns--Building Large-Scale Systems Using Object Technology. Cambridge University Press, SIGS Books, 1998.
Berger, Arnold S. Embedded Systems Design: An Introduction to Processes, Tools, and Techniques. Burlington, MA: CMP Books, 2001.
Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Boston, MA: Addison Wesley, 1995.
Martin, James and Carma McClure. Diagramming Techniques for Analysts and Programmers. Englewood Cliffs, NJ: Prentice Hall, 1985.
Shalloway, Alan and James Trott. Design Patterns Explained: A New Perspective on Object-Oriented Design, Second Edition. Boston, MA: Addison Wesley Professional, 2004.