Citation
Software process to support schedules, productivity, and maintainability within a software project

Material Information

Title:
Software process to support schedules, productivity, and maintainability within a software project
Creator:
Kelnosky, Jennifer
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
ix, 92 leaves : ; 28 cm

Subjects

Subjects / Keywords:
Computer software -- Development ( lcsh )
Software engineering -- Evaluation ( lcsh )
Capability maturity model (Computer software) -- Evaluation ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 88-92).
Thesis:
There are two pages numbered 27 in the thesis. Computer science
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Jennifer Kelnosky.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
41470571 ( OCLC )
ocm41470571
Classification:
LD1190.E52 1998m .K45 ( lcc )

Full Text
SOFTWARE PROCESS TO SUPPORT SCHEDULES, PRODUCTIVITY, AND
MAINTAINABILITY WITHIN A SOFTWARE PROJECT.
by
Jennifer Kelnosky
B.A., University of Tennessee, 1992
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
1998


The thesis for the Master of Science
degree by
Jennifer Kelnosky
has been approved
by
Gita Alaghband


Kelnosky, Jennifer (M.S., Computer Science Engineering)
Software Process to Support Schedules, Productivity, and Maintainability
within a Software Project
Thesis directed by Professor Jody Paul
ABSTRACT
The objectives of this thesis are to provide information regarding the
use of a formal software process and to describe how using a software
process will increase productivity, help meet schedule deadlines, and
improve maintainability. The phases of a general software process are
presented in some detail, including requirements engineering, assessment
and problem elicitation, design, implementation, and monitoring and data
collection. Two common formal processes. Capability Maturity Model and
Cleanroom Software Engineering, are described and used to demonstrate
the benefits that are achieved. Barriers to instituting a software process are
presented, with examples from these two process models. Analysis of these
processes acknowledges the barriers and offers solutions within the models to
prevent or overcome them. This thesis describes how use of a process can
increase productivity, improve maintainability, and help meet milestones
within each stage or level of the software project.
This abstract accurately represents the content of the candidate's thesis. I
recommend its publication.
Signed:
Jody Paul


CONTENTS
1. Chapter 1: Introduction.....................................1
1.1 Introduction into the World of Software Process.............1
1.2 Software Process Defined.................................. 2
1.3 Software Metrics For Software Process.......................3
1.4 Following Sections..........................................4
1.5 Band-Aids For Failing Project...............................4
1.6 What The Experts Say About Software Process.................6
1.7 Approaching Software Process in an Existent Project.........8
2. Chapter 2: A Formal Process................................11
2.1 Requirements Engineering...................................11
2.1.1 Ambiguities Within Language................................15
2.1.2 Types Of Questions For Requirements........................16
2.1.3 Formal Languages Used To Describe Requirements.............17
2.1.4 The People Involved In Requirements Engineering............18
2.1.5 The Communication Throughout The Process...................18


Testing Requirements.......................................20
Black Box Method...........................................20
Open Implementation Method.................................22
Requirements Engineering Summary...........................22
Assessment and Problem Elicitation.........................23
Risk Management Defined....................................24
Risk Assessment............................................25
Risk Control...............................................27
The People Involved In Problem Elicitation And Assessment..27
Summary Of Assessment And Problem Elicitation..............28
Design.....................................................28
Object-Oriented Technology In Design.......................31
Trade-offs Within Design...................................35
Tools To Support The Design Process........................37
Design Languages...........................................37
Summary Of Design..........................................38


Implementation.............................................39
Methods For Developing Increments: Top-down Design.........39
Tools To Support Implementation Procedures.................41
Strategies For Implementing Successful Applications........42
Frameworks Used For Reuse Measurement......................44
The Implementation Team....................................44
Summary Of The Implementation Phase........................45
Monitoring And Data Collection.............................45
Methods To Support The Collection Of Evaluative Data.......46
Types Of Findings Using Monitoring And Data Collection.....49
Tools To Monitor Data......................................50
The Whole Team Is Affected By The Software Progress........51
The Summary Of Monitoring And Data Collection..............51
Chapter 3: Cleanroom Software Engineering..................53
Cleanroom Software Engineering.............................53
Myths About Cleanroom Software Engineering.................54


Principles Used In Cleanroom Software Engineering..........55
Practices Involved In Cleanroom Software Process...........56
Barriers Presented When Using Cleanroom....................64
Benefits When Using Cleanroom..............................65
The Summary Of Cleanroom Software Engineering..............66
Chapter 4: Capability Maturity Model.......................67
Capability Maturity Model..................................67
Immature Vs. Mature Processes..............................68
Levels Within The CMM......................................69
Level 1: Initial Level.....................................69
Level 2: Repeatable Level..................................70
Level 3: Defined Level.....................................72
Level 4: Managed Level.....................................73
Level 5: Optimized Level...................................74
Barriers Of The CMM........................................75
Benefits Of The CMM........................................76


4.1.2 The Summary Of Capability Maturity Model.....................77
5. Chapter 5: Summary...........................................79
5.1 Summary of Software Process................................ 79
6. Conclusion...................................................82
Appendix: Annotated Bibliography............................84
References..................................................88


FIGURES
Figure
2.1.1: Initial requirements for elevators................................13
2.1.6.1: Black Box Example...............................................21
2.2.2.1: Risk Exposure Table.............................................27
2.3.1.1: Use Case Example................................................33
2.3.1.1: CRC Card Example................................................34
2.5.1.1: Defects vs. Process Improvements................................48


1. Chapter 1
1.1 Introduction into the World of Software Process
Traditionally, software projects are immediately put into development
without analysis, design, or effective communication about the users'
needs and wants. Without analysis or design, new features or implementation
work that should occur in a first release become challenging tasks. As a
result, periods of crisis, along with frustrated personnel and customers, are
prevalent in the workplace. Productivity declines as developers spend their
time maintaining rushed products that have already been released. Then
come the legal repercussions. Software organizations that have failed to
incorporate a software process can be more susceptible to a lawsuit (Krasner 1998).
Software companies face legal battles when their contracts are not met to
complete satisfaction. The growing prevalence of the Internet, and the
turnover of software engineers, suggests another reason for putting internal
procedures in place (Barsh 1998). Frederick P. Brooks, in The Mythical Man-Month,
discusses how little software engineering has changed in 20 years. He
addresses the importance of organization and appropriate skills for the
management team, and predicts that software engineering will remain in
what he calls "the tar pit" for a long time (Brooks 1995). Higher productivity,
less maintenance, and met timelines within a software project can be achieved
through the use of a software process model rather than the recurring
"get the job done fast" method. Instituting a software process offers a solution
to these serious problems within the software industry.
In recent times, some software projects have become subject to a formal
software process model that instills discipline upon creation of the product.
Software architectures are put in place to provide organization to the
software components (Kogut & Clements 1998). The idea of a software
process can be effective, except that organizations as a whole do not always
agree on the process, or not all people participate in it.
1.2 Software Process Defined
Many different people define the stages of development in many different
ways, but most often they are discussing the same phases of the software
development process. Ambiguity often causes difficulty in understanding, so
this paper will focus on certain phases (Zave & Jackson 1997). The phases
that will be discussed in section 3 were given in the article "Assessing
Process-Centered Software Engineering Environments". They are listed as:
Requirements Specification, Assessment and Problem Elicitation, (Re)Design,
Implementation, and Monitoring and Data Collection. These phases are
what can be defined as a software engineering process (Ambriola, Conradi,
& Fuggetta 1997).
1.3 Software Metrics For Software Process
How do we know that software processes actually work? Software
production is considered practically immeasurable, as it is challenging to
measure something with so many components and little or no proof of a
process. However, researchers continue to address this issue (Kemerer 1998).
A survey of software development organizations asked how they measure
their software development. Key features in software metrics and their
percentage of use within software industries were: lines of code (48%),
scheduled tasks completed on time (48%), test coverage (38%), resource
adequacy, fault density (28%), system operational availability (28%), time to
market (28%), and schedule, quality, and cost trade-offs (28%). Overall
project risk is another measurement used by some best-practices
organizations (Anonymous 1998). This is considered a valuable piece of
evidence in situations where litigation might be brought against a
company (Krasner 1998). Time to market and scheduled tasks completed on
time will be used to analyze a software engineering process in this paper.
Schedules tend to be a major component of the development process
(Anonymous 1998). In a reengineering case study, participants were asked to
rate each stage of the reengineering project by analyzing the effort with
which the team attempted to complete the stages. They measured overall
success, cost reduction, increase in customer satisfaction, work productivity,
and reduction in defects (Teng, Jeong, & Grover 1998). In another case study,
which will be discussed in more detail, the focus was on maintainability
metrics (Brown, Carney, & Clements 1995).
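The metrics above are simple ratios, and a brief sketch may make them concrete. The following Python fragment is illustrative only; the function names and sample figures are assumptions for demonstration, not data from the survey.

```python
# Illustrative computation of two surveyed metrics: fault density
# (defects per thousand lines of code) and on-time task completion.

def fault_density(defects_found, lines_of_code):
    """Defects per KLOC: defects divided by thousands of lines of code."""
    return defects_found / (lines_of_code / 1000.0)

def on_time_rate(tasks_completed_on_time, tasks_scheduled):
    """Fraction of scheduled tasks that met their deadlines."""
    return tasks_completed_on_time / tasks_scheduled

print(fault_density(45, 30000))   # 1.5 defects per KLOC
print(on_time_rate(19, 25))       # 0.76
```

Tracking such ratios from release to release is what gives a process its evidence trail; a single number in isolation says little.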
1.4 Following Sections
In the following sections I will discuss a few of the many topics within the
software engineering process. In the first section, after describing the sort of
Band-Aids organizations use to recover failing projects, I will let you know
what the experts say about software process. The next section will treat what
the experts say in further detail by describing a general analysis of the
software process. Following the general description, I will discuss and analyze
two approaches to the software process that show that less maintenance,
met timelines, and higher productivity can be achieved. These two detailed
processes will be the Capability Maturity Model and the Cleanroom software
engineering process. The fourth section will summarize the analysis of the
software engineering process, followed by a conclusion section that
highlights its benefits.
1.5 Band-Aids For Failing Project
A research study was done to get software development organizations to
describe what they do to recover a failing project. The importance of the
schedule in a software development process was again acknowledged. The
number one attempt to recover a project was to extend the schedule (Glass
1998). This same attempt is considered a negative outcome within a project
because of the possibility that it will cause legal issues (i.e., not meeting
contracted timelines) (Krasner 1998). A second attempt, instilling better
management procedures, may be effective, but why not instill those at the
start? In contrast to the number one attempt, some organizations will reduce
the project scope to meet the deadlines (Anonymous 1998). Once again,
legal contracts bind organizations to what they promised to include in the
project (Krasner 1998). Some will try to change the current technology that
they are using, even though it has generally been accepted. This poses the same
problem as attempting to reduce the project scope (Krasner 1998). A more
common practice, putting more people on the job, is considered by
53% of the organizations (Glass 1998). However, this also has a negative
effect. More people only means more learning curves, which means more
time, and therefore defeats the whole purpose of the attempt. Increasing
manpower on a project with an approaching deadline can extend that
deadline more than simply pushing it back enough for the currently assigned
engineers to complete the work (Brooks 1995). Requesting more funds is a
somewhat less common approach. However, if attempted, this approach can
only lead to more difficulties if the project is a failure (Krasner 1998). Only 25% of
organizations thought to instill better development methodologies. However,
this benefit might be better realized at the start of a project. If that is not
possible, it introduces negative effects as well. Methodologies need to be
defined, and applying one in the middle of a crisis brings more challenges as
developers and organizations attempt to adhere to the one chosen. Another
attempt to recover a project was to apply pressure on suppliers through the
threat of litigation (Glass 1998). Litigation threats can in return be made
against development organizations for several reasons; criminal acts, liability,
and breach of contract are only a few of the grounds (Krasner 1998). Not many
organizations opted to abandon their projects, which may suggest that
the Band-Aids can be beneficial (Glass 1998).
Litigation appears to be becoming more of an issue when attempting to
Band-Aid a failing project. It is suggested that using software metrics and a
"mature" software process will ward off charges of negligence as well as
enhance production (Krasner 1998).
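Brooks's point about added manpower can be quantified: a team of n people has n(n-1)/2 pairwise communication channels, so communication overhead grows quadratically while headcount grows only linearly. The short Python sketch below is an illustration of that arithmetic, not a model taken from the sources cited here.

```python
# Pairwise communication channels among n team members (Brooks 1995):
# n * (n - 1) / 2, which grows much faster than n itself.

def communication_paths(n):
    """Number of distinct pairs among n people."""
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(n, communication_paths(n))   # 3, 10, 45, 190 channels
```

Doubling a five-person team to ten more than quadruples the channels (10 to 45), which is one reason adding people late in a project rarely shortens its schedule.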
1.6 What The Experts Say About Software Process
Experts, software engineers and managers of development, have not
only approached the topic of a formal software engineering process, but
also followed through by implementing one within the workplace. One of the
more influential writers was Frederick P. Brooks, author of The Mythical Man-Month.
He often referred to the software engineering process as a "tar pit".
He discussed the many flaws in the scope of a software project as well as
methods to address these flaws (Brooks 1995). Dr. Paul C. Clements from
the Software Engineering Institute believes that formal methods are
becoming an essential guide for the software industry. He describes a
method in his article "Formal Methods in Describing Architectures" that was
presented at the Monterey Workshop in 1995 (Clements 1995). This method,
called Modechart, will be discussed later in this paper as a component within
a phase of the general software process. In another article, Clements and his
co-author, Paul Kogut, discuss the rising influence of software
architectures in the software industry. The software community is moving
towards higher levels of design and methods (Kogut & Clements 1998).
Process-centered Software Engineering Environments (PSEEs) are gaining
interest as a way to support the development process. They are defined as a
process model, which describes the development process, the resources and
their interaction, as well as the tools and systems that will be used. The authors
of "Assessing Process-Centered Software Engineering Environments" also
agree that PSEEs are becoming a reality in the software world. They are a means
of communication and organization (Ambriola, Conradi, & Fuggetta 1997).
Other experts go so far as to focus on Knowledge-Based Software
Engineering (KBSE) systems to assist the software process in areas
such as requirements, analysis, and management. Again, this is a formal
method for tracking systems (Devanbu & Jones 1997). Software process has
made considerable progress within the software industry. Many organizations
have changed their ideas of software development. Instead of the old way
of thinking, "code" first and then look at the whole picture, organizations are
beginning to look at the whole picture first and then at the processes that will
maintain their views throughout the scope of the project (Kemerer 1998).
Many experts suggest that a formal software process is important to a
successful project. But still, many organizations fail to use one (Krasner
1998). Some evaluations of organizations suggest that they are too immature
to understand the significance of the software process. It is such a large
transition that some organizations cannot afford the full-time focus that is
necessary, or the attendant costs (Ambriola, Conradi, & Fuggetta 1997).
1.7 Approaching Software Process in an Existent Project
Earlier it was suggested that instilling a process in an existent project
would be considered a Band-Aid (i.e., changing methodology midstream
through a project) (Glass 1998). This was discussed as having a negative
outcome, but for some projects it may be necessary as a means to gain
control (Krasner 1998). Suggesting a complete overhaul may cause more
hassles; a more manageable approach with incremental changes can be
more successful (Teng, Jeong, & Grover 1998). A case study was done on
the maintainability of large software systems to describe certain
procedures and their effectiveness. One of the questions proposed within this
case study was "How well does the software lend itself to change?"
Developers were asked to participate in a "change exercise", in which they
were to find all the components that were subject to change. These were
compiled together into an "active design review". The expectation of this
active design review was to gain insight into the whole system design rather
than just the changes that might be made. Another goal was to find flaws
within documentation (Brown, Carney, & Clements 1995). This article asserts
that the process for the maintainability of a large system should consist of the
following four pieces: a code inspection process, test procedures, build and
release practices, and change request procedures. Code inspection
should be done frequently for bug fixes and enhancements. Test procedures
should consist of the same procedures used during the development process
(Brown, Carney, & Clements 1995). However, though not mentioned by the
authors, test procedures should also include new procedures for the
enhancements. Build and release practices were noted as a process that
should be "carefully defined". It is important to track the releases, the fixes,
and the resources that go with them. Change request procedures were also
considered an important process (Brown, Carney, & Clements 1995). Keeping
track of, and documenting, all change requests is important, not only for the
sanity of the developers, but for legal reasons as well (Krasner 1998). Several
challenges lie
ahead in the maintainability environment. For instance, consideration must
be made that the system will have to interface with new systems. Also,
environments might change for the developers and new methods may arise.
Another difficulty is the use of third party vendors. Maintaining the
environment means maintaining what other vendors are doing with their
environment (Brown, Carney, & Clements 1995). All these challenges reflect
the necessity of a formal software process even in the event of an existent
software project.


2. Chapter 2
2.1 Requirements Engineering
The purpose of requirements engineering is to gather information about
the project's objectives so that clients and developers completely
understand what is to be created. The requirements process, if successful,
reaps the benefits of clear goals, which, in turn, offer on-time delivery of
the product, future maintainability, and increased production. This phase
begins "bridging the gap" between clients' and developers' speech to
eliminate some of the misunderstandings that might occur. Different forms of
documentation are produced as a result of specifically defining the
requirements (Ambriola, Conradi, & Fuggetta 1997). This phase is comprised
of many challenges. The growing interest in requirements engineering
promotes many different forms of expressing it.
As stated, the purpose of requirements is to know what to create. The
second purpose is to decrease the language barrier between clients and
developers. If this cannot be accomplished, the software product may be
useless to the client. The product will not be what they want or will not
conform to their vision of the way they thought it would work. As the most
important phase, it is the most detrimental if done wrong (Brooks 1995).
So, if this is such a critical phase, how should it be done? There are a few
different approaches to requirements engineering. It is well known by
developers that once requirements definition begins, it can seem like a never-
ending fiasco. Additions to the expectations continue to increase. When do
they stop? The first approach is to "freeze" the requirements phase so that
the development phase can commence (Gause & Weinberg 1989). In
contrast to this approach, requirements can continue iteratively
throughout the software process. This is described by Frederick Brooks as a
more effective approach: any additions made to the requirements after the
next phase has begun can be analyzed in the next cycle (Brooks 1995).
Rapid prototyping supports the requirements by giving a conceptual
understanding of what the product will do. This is another method of
diminishing misunderstandings between clients and developers (Gause &
Weinberg 1989). During the initial review of the requirement specifications,
details need to be understood and abstractions defined; implementation
should not be examined at this point (Gause & Weinberg 1989).
Headington and Riley note "... the specification describes the behavior of the
data type without the reference to its implementation." In other words,
understand what the product is supposed to do, not how it is going to be
done (Headington & Riley 1994). And, in addition, understand the client's
requirements (Teng, Jeong, & Grover 1998). Gause and Weinberg give some
examples of elevator requirements.
Requirements For An Elevator
1. There will be only one control/display panel per elevator car.
2. The elevators will travel vertically only.
3. All New York State elevator codes must be observed.
Figure 2.1.1: Initial requirements for elevators.
These are not complete. Assumptions can still be made; therefore,
questions must be asked. For example, the requirement that states "The
elevators will travel vertically only" could have been assumed, yet since it is
written, questions can be derived from it. Some questions that may arise are
"do elevators need to travel vertically for legal purposes?" or "do elevators
need to travel vertically because that is all that is known?". The last question
presents information to the developer about what might occur in the future
(Gause & Weinberg 1989). A requirement should be descriptive and specific.
Requirements should evolve with the project, and every possible hypothetical
situation should be discussed as requirements are created. Envision situations
more than once to make sure that they are appropriate for consideration
(Kazman et al. 1998). The documentation of requirements should continue
to evolve as additions are made. Tracking additions and changes is also
helpful as a memory tool for why things changed or additions were made. In
the software process, this document will be the first created and the last
completed (Brooks 1995). A defined language may assist developers when
discussing requirements amongst each other; however, as will be discussed,
several conflicting situations may arise between the client's description of a
requirement and the developer's analysis of that requirement.
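Headington and Riley's distinction, specifying behavior without reference to implementation, can be sketched in modern terms as an abstract interface with a separate concrete class. The Python below is a hypothetical illustration; the class and method names are invented to echo the elevator example, not drawn from any of the cited works.

```python
# Specification vs. implementation: the abstract class states WHAT a
# panel does; the concrete class decides HOW it is done.
from abc import ABC, abstractmethod

class ElevatorPanel(ABC):
    """Specification: the behavior of a control/display panel."""

    @abstractmethod
    def request_floor(self, floor: int) -> None:
        """Register a passenger's floor request."""

    @abstractmethod
    def current_floor(self) -> int:
        """Report the floor currently displayed on the panel."""

class SimplePanel(ElevatorPanel):
    """One possible implementation; clients depend only on the spec."""

    def __init__(self) -> None:
        self._floor = 1
        self._requests = []

    def request_floor(self, floor: int) -> None:
        self._requests.append(floor)

    def current_floor(self) -> int:
        return self._floor

panel = SimplePanel()
panel.request_floor(5)
print(panel.current_floor())   # prints 1
```

Because the requirements talk only about the `ElevatorPanel` interface, the implementation behind it can change without invalidating the specification.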
With the growing interest in requirements engineering, many different
terms are used to express the same method or idea. Zave and Jackson
discuss four items for consideration when looking at the process of
requirements engineering. Requirements should be discussed within the
guidelines of the environment, where environment is used in the context of the
system's machines that are to be built (Zave & Jackson 1997). Requirements
should not describe how the project will be built, but what it should do, in the
most descriptive detail. This descriptive detail should focus on the actions
that are related to the environment, the machine, and the interactions
between the two (Zave & Jackson 1997). And finally, requirements
engineering should use its domain knowledge as a way to refine
requirements so that they may be described as capable of being built (Zave
& Jackson 1997). It is important to know and understand the structure and
organization of a requirement for the application (Buschmann 1993). There is
much more than these four considerations that makes up the process of
requirements engineering.
Authors Gause and Weinberg give a realistic and humorous example
of some of the problems encountered in the creation of a product called the
Cockroach Killer. This product was developed by a man in New York and
advertised in the classifieds. He charged five dollars, and after he received
the money, he would send a kit with two blocks as the pieces. Two
instructions were given in the package: "1) Place cockroach on block A. 2)
Hit the cockroach with block B." Evidently, not all the methods needed to kill the
roach are listed (Gause & Weinberg 1989). How do you get it on the block?
How do you clean up the mess once the actual roach murder has taken place?
Does this really satisfy what the buyer believed to be a cockroach killer? Is it
"user-friendly"?
2.1.1 Ambiguities Within Language
Ambiguities can be considered the most common and most difficult
problem to tackle within requirements engineering. They remain
prominent throughout the software engineering process. It is very difficult
to attach a real-world meaning to a requirement term. Even in describing the
software process itself, it is difficult to keep track of the terms and their
appropriate meanings for the context in which they are written (Zave &
Jackson 1997). Gause and Weinberg, again, give an interesting example of
the difficulties with ambiguity. When asking three different individuals to build
a shelter to protect humans, these three individuals might build an
igloo, a castle, or a space station, depending on their idea of what a
protective shelter might entail (Gause & Weinberg 1989). Ambiguity cannot
be removed completely from the requirements phase. Human language
proposes many definitions for the same word, so language itself is plagued
with ambiguity (Brooks 1995). Some possible solutions to the problem might
be to draw maps, ask many questions, and record any misinterpretations so that
they may later be refined (Gause & Weinberg 1989).
2.1.2 Types Of Questions For Requirements
Ask questions when addressing requirements to remove any
misunderstandings or vagueness. Direct questions are necessary to find out what
the client wants. Ask questions using a decision tree: following the order of
the tree, as the client responds to a question, draw out the consequences so that
the client has a very descriptive view and it can be determined whether or not
that is what they meant by their response (Gause & Weinberg 1989). Showing
an implementation of a requirement has its advantages and disadvantages.
At the very least, it provides quick responses to anything that may raise a question
about a requirement, and it helps to eliminate ambiguity. However, it may over-
prescribe the externals; the drawing may extend above and beyond what
the client meant or wanted (Brooks 1995). Questions regarding future
changes should also be thought through and asked. For instance, questions
about what changes will be made and how they will be handled.
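The decision-tree style of questioning can be sketched as a small data structure in which each client answer selects the next question. The questions and the dictionary layout below are illustrative assumptions, not material from Gause and Weinberg.

```python
# A tiny decision tree for requirements questioning: each answer to a
# question maps to the follow-up question it implies.

QUESTION_TREE = {
    "Must elevators travel vertically only?": {
        "yes, for legal reasons": "Which elevator codes apply?",
        "yes, that is all we know": "Could non-vertical travel be needed later?",
    },
}

def next_question(question, answer):
    """Return the follow-up question implied by the client's answer."""
    return QUESTION_TREE.get(question, {}).get(answer, "No follow-up recorded.")

q = "Must elevators travel vertically only?"
print(next_question(q, "yes, for legal reasons"))   # Which elevator codes apply?
```

Walking the tree draws out the consequences of each response, so both client and developer can see what an answer commits them to.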
2.1.3 Formal Languages Used To Describe Requirements
A suggested approach to understanding requirements is to use a Process
Specification Language, or PSL. This language provides support to the
requirements phase. A language is used to describe "concepts of role,
organization/group, business rules, flows of information, and time constraints"
(Ambriola, Conradi, & Fuggetta 1997). This component of the requirements
phase should be maintained and kept consistent throughout the software process.
The language can be used not only for defining specifications, but also for
active prototyping (Ambriola, Conradi, & Fuggetta 1997). This approach can
be an effective means of reducing ambiguity. Because a formal definition's
main goal is precision, it is easier to identify different stages in the application
and any missing specifications that need to be addressed. The use of this
language assists in turning requirements into an implementation (Brooks 1995).
Another specification language tool that assists in the formalization of
thought patterns regarding a project is Modechart. It is a
verification tool for making sure that specifications are consistent with their
meanings. It provides information about the specifications of the system and
supports model-checking verification, providing an analysis environment
helpful to the developer. This tool can also be used in the testing requirements
stage of the process. As part of the specification, it provides modes and their
relationships (Clements 1995).
2.1.4 The People Involved In Requirements Engineering
Who are the people involved in this stage of the process? First and
foremost, the typical user should be involved (Gause & Weinberg 1989).
Communication with the users during this stage is crucial. Clients, in this case
the people sponsoring the production of the project, should be included in
this phase so that they are aware of all the constraints on the software
production (Krasner 1998). And finally, an architect, someone whose main
purpose is to clarify, recognize, and understand the users' wants, should also
be involved (Brooks 1995).
2.1.5 The Communication Throughout The Process
A suggested approach, after defining a process, is to develop a project plan to follow. According to this project plan, incorporate meetings for continued discussion of the project as a whole. This can be difficult, as meetings become complicated when the attendees strive for understanding of the project, its problems, and its solutions. Meetings should be outlined to propose the topics of discussion. Several rules govern successful meetings. For instance, meetings should consist of a small group of appropriate people (Brooks 1995). An agenda should be developed beforehand and not deviated from during the meeting. "Emergencies" should be withheld and addressed at a different time.
Otherwise, the meeting can be filled with random tangents about what needs to be done now (Gause & Weinberg 1989). Meetings should address one issue and involve the appropriate individuals to address that issue. With the growing development of intranet sites in organizations, an intranet would be an appropriate place for meeting minutes to be published for those interested in the contents of the meeting. It is also a place for working documents and charts, which, as an end result, can simplify the final delivery documents (Gause & Weinberg 1989). Another suggested approach for communication via meetings is to label the meeting. For instance, label a meeting as a status-update meeting or a problem-action meeting. This is a good way to distinguish what type of discussion is going to occur before attendance (Brooks 1995).
Another method to improve communication with respect to clients is to understand the clients. This has been a theme throughout this paper for many reasons. It is difficult for developers to understand the client's terminology and vice versa. However, it is the developer's job to learn what it is the client wants. Many times, developers will attempt to address the client using their own terminology. Clients may become frustrated or may even feel belittled. Also, developers who do not understand the client's terminology may believe this means the clients do not know what they want. This may be true, because clients are not used to describing what they want in terms of software. It is the developer's job to assist them with their descriptions (Gause & Weinberg 1989). Some of the methods above (i.e., questioning schemes, drawing pictures) can be necessary to heighten the communication between clients and developers (Gause & Weinberg 1989).
2.1.6 Testing Requirements
One method to ensure the validity of requirements is to provide test cases for them. Test cases do not ensure that a program is "rid of bugs." Intensive testing only shows that a program meets its specification (Brooks 1995). Two approaches to develop test cases and move them into the design phase are the Black Box Abstraction Method and the Open Implementation Method.
2.1.6.1 Black Box Method
The Black Box Method has been a primary method used to assist in software design. Its main attraction is the reusability of components. However, in this section, it will be shown how its use can be valuable within the requirements phase.
[Figure: a black box whose input is "PUSH 1 ON CONTROL PANEL" and whose output is "DOORS OPEN ON FLOOR 1".]
Figure 2.1.6.1 Black Box method for development of an Elevator Requirement.
The Black Box Method is the approach by which a function's
implementation is hidden. The client is able to see what is going into the
black box and what is coming out, but has no idea what transpired when it
was in the black box (Kiczales 1996). Figure 2.1.6.1 shows what a black box
looks like in regards to development of an elevator (Gause & Weinberg 1989).
The implementation in the black box can process the request a number of
ways. The part that is important here is that the result is correct. For instance,
the user ended up on floor "1" rather than floor "3". Use this approach to
show what the function is going to do and display the results of that function
while hiding the implementation (Gause & Weinberg 1989). Begin using this
approach while asking "what if" questions. For instance, ask questions about
what the results would be if certain data goes into the black box. Clients will
be able to view and address the events and results without knowing how
they got to those results (Kiczales 1996). This can be advantageous to the
testing of requirements because if the event and the results are correct then
the functionality behind the scenes must also be correct. The ability to
establish a test case shows that the requirement is useful and can be tested.
This is also a good method to keep the flow of the requirements phase
(Gause & Weinberg 1989).
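The "what if" questioning above translates directly into test cases that touch only inputs and outputs. Below is a minimal sketch in Python; `request_floor` is a hypothetical stand-in for the hidden elevator implementation, not a function from the cited sources.

```python
# Black-box style test for the elevator requirement of Figure 2.1.6.1.
# The test knows only what goes into the box and what comes out, never
# what transpires inside it.
def request_floor(button_pressed):
    # Hidden implementation; could be a table lookup, scheduling logic, etc.
    return 'doors open on floor "%d"' % button_pressed

# "What if" questions become assertions on inputs and outputs alone:
assert request_floor(1) == 'doors open on floor "1"'
assert request_floor(3) == 'doors open on floor "3"'
```

If the event and its result are correct, the test passes regardless of how the implementation inside the box is later changed.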
2.1.6.2 Open Implementation Method
In contrast to the traditional Black Box Method, the Open Implementation
Method allows the client to view the functionality. This method focuses on software design and software reuse. It suggests that allowing clients to participate in the implementation of a function would allow software to be more reusable and less difficult to change (Kiczales et al. 1997). In other
words, once it is established how a client views the functionality, the same
approach can be reused in developing other functions. If clients better
understand the functionality and its constraints, it is believed that future
changes will not be so difficult.
2.1.7 Requirements Engineering Summary
The requirements engineering phase comprises many activities. It is not only the initial phase, but also continues throughout the software process as additions and changes are made at each cycle. Defining requirements and knowing what the project is going to do can assist in lowering future maintenance, increasing productivity, and meeting the appropriate timelines, as clients and developers will know what needs to be done and when it is to be completed.
2.2 Assessment and Problem Elicitation
The goal of this phase is to examine the progress of the process and
identify any problems that might occur as a result of the defined
requirements (Ambriola, Conradi, & Fuggetta 1997). A study of European
software organizations revealed that approximately 74 percent of them
engage in some process of assessing a project's risks, benefits, and viability (Dutta, Wassenhove, & Kulandaiswamy 1998). Identifying potential risks, called risk management, has made significant progress in the software industry (Kemerer 1998). No project is free from risk. Without identifying risks, most projects never meet their timelines (Lister 1993). So, what would be considered a risk? In the software industry, a risk can be anything in a software project that presents a loss of control or may prevent or inhibit a project's overall success (Lister 1997). The way to avoid the potential lawsuits of risks gone bad is to analyze and prepare for potential hazards that may occur (Krasner 1998). Without doing this, organizations tend to promise something without knowing the implications, or start to code something quickly in hopes that it will evolve as the client decides what they want (Boehm 1991). This can cause a project to encounter several unexpected problems. Successful projects are those that incorporate risk management into their process (Lister 1997). Those organizations that do not incorporate risk management often find themselves developing what author Marvin J. Carr calls a "risk-aversion culture."
Organizations will commend crisis management and penalize anyone who might have mentioned the potential risk in the first place. Another common pitfall of a process without risk management is the "risk-management season." Again, if a crisis occurs or a client is unhappy, managers notice the problems and compile a list of the top 10 problems, calling them risks. They will attempt to regain control of the project by focusing on these problems without looking into the risks still ahead, awaiting them in the next "risk-management season" (Carr 1997).
2.2.1 Risk Management Defined
Risk management is a process of identifying and being continually
cognizant of the potential risks within the software project. One of the
common practices, also used with requirements engineering, is the use of the
decision tree. Another relationship this phase shares with that of requirements
engineering is that it is also incremental throughout the process.
Understanding the evolution of risks is an effective way to think about risk
management (Higuera & Holmes 1996). Risks can be identified as the process moves through development (Boehm 1991). Answering questions such as "if we do this, the risk will be..." helps determine what the end result might be
and what will need to be done. The main goal of risk management is to complete an early evaluation of the risks associated with four stages of software: acquisition, development, integration, and deployment (Higuera & Holmes 1996). Two main stages within risk management are risk assessment
and risk control (Boehm 1991).
2.2.2 Risk Assessment
Risk assessment includes developing lists of potential risks that may
compromise the project's goals. Decision trees are used, and the history of risks in past projects is analyzed. This is considered risk identification. Risk analysis evaluates each risk as to how large a loss it would cause and what the probability is that it will occur. Interaction occurring between risks is also analyzed. Different models, such as cost models or performance models, are used to support the analysis. The next piece is to take all the risks and prioritize them using techniques such as cost-benefit analysis or the group-consensus technique (Boehm 1991).
Identifying risks using checklists is a valuable management technique. Not only should all the potential risks be identified, but next to each risk, a technique to avoid or control that risk should be identified as well. All sections of the system should be included, such as technology, hardware, software, people, schedule, and cost (Higuera & Holmes 1996). List items tend to contain risks in areas that have had little research and little understanding (Boehm 1991). Some areas that are often not thought of are types of contracts, subcontractors, personnel management, quality of attitude, cooperation, politics, customer, morale, and overall project organization
(Higuera & Holmes 1996). The checklist can be used later in the process to
track the status of each item in relation to the project's progress (Boehm
1991).
After using the checklist and evaluating each risk's exposure level, risk prioritization can then be invoked. One challenging aspect of determining exposure is accurately estimating the numbers for risk probability and its effects. Risk analysis involves several steps, such as prototyping, benchmarking, and simulation, to give a more accurate prediction of the risk exposure. Figure 2.2.2.1 shows a risk exposure table based on the probability and loss caused by unsatisfactory outcomes (Boehm 1991). The probability of the unsatisfactory outcome multiplied by the loss caused by it gives the amount of risk exposure the outcome contains.
Unsatisfactory outcome                                      Probability (0-10)   Loss (0-10)   Risk exposure
A. Software error kills experiment                          3-5                  10            30-50
B. Software error loses key data                            3-5                  8             24-40
C. Fault-tolerant features cause unacceptable performance   4-8                  7             28-56
D. Monitoring software reports unsafe condition as safe     5                    9             45
E. Monitoring software reports safe condition as unsafe     5                    3             15
F. Hardware delay causes schedule overrun                   6                    4             24
G. Data-reduction software errors cause extra work          8                    1             8
H. Poor user interface causes inefficient operation         6                    5             30
I. Processor memory insufficient                            1                    7             7
J. Database-management software loses derived data          2                    2             4
Figure 2.2.2.1 Risk Exposure Table for Unsatisfactory Outcomes.
This table is based on a Satellite Experiment project discussed by Barry W. Boehm (Boehm 1991).
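The exposure arithmetic behind Figure 2.2.2.1 can be sketched directly. In the sketch below, the outcome names and 0-10 scores are taken from the table (midpoints stand in where the table gives ranges), and the function and variable names are illustrative rather than from Boehm's paper.

```python
# Risk exposure (Boehm 1991): the probability of the unsatisfactory
# outcome multiplied by the loss it would cause, both on a 0-10 scale.
def risk_exposure(probability, loss):
    return probability * loss

outcomes = [
    ("Software error kills experiment", 4, 10),
    ("Monitoring software reports unsafe condition as safe", 5, 9),
    ("Hardware delay causes schedule overrun", 6, 4),
    ("Data-reduction software errors cause extra work", 8, 1),
    ("Processor memory insufficient", 1, 7),
]

# Risk prioritization: highest exposure first, feeding risk-control planning.
prioritized = sorted(outcomes, key=lambda o: risk_exposure(o[1], o[2]),
                     reverse=True)
for name, p, loss in prioritized:
    print("%3d  %s" % (risk_exposure(p, loss), name))
```

Sorting by exposure reproduces the prioritization step described above: the unsafe-condition outcome (5 × 9 = 45) outranks even the experiment-killing error once the midpoint probability is applied.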
2.2.3 Risk Control
Risk control involves several aspects such as risk-management planning,
risk resolution and risk monitoring. After all the risks have been identified,
analyzed, and prioritized it is important to get them under control. Each risk
should have a risk management plan (Boehm 1991). For instance, one risk, after evaluation, might be given a risk management plan such as prototyping in order to bring it under control. Prototyping would provide a way to understand the risk and its components. Schedules should incorporate a time period for the risk management planning. After development of the risk management plans for each risk, it is then time to decide what the resolution will be for each risk and again incorporate time in the schedule to implement these plans. Monitoring
each risk throughout the process through review meetings is another
approach to risk control. This avoids any surprises that might occur (Boehm
1991).
2.2.4 The People Involved In Problem Elicitation And Assessment
The people involved in risk management should consist of a team. One person cannot identify all risks alone. The team should consist of two parts: supplier and customer. Project managers should not be the only individuals involved. Team leads and other individuals important to the project should also be included. Customers as well as stakeholders should be aware of and included in the process of risk management (Higuera & Holmes
1996). Managers should work actively with customers informing them of the
potential risks. They should be experts when it comes to the risks and
successes that are involved in the development of a product (Krasner 1998).
2.2.5 Summary Of Assessment And Problem Elicitation
Continually meeting the needs of risk management in the software process enables schedules to be more accurate. Therefore,
schedules will be met more often as they can accommodate the risk factors
involved and allow time for analysis of these risks. Managers have more
control over a project when the element of surprise is removed because all
risks have been identified and are under control.
2.3 Design
The main objective of the design phase is to derive abstractions of the
requirements into understandable structures. These structures show the
components of the project and their interactions with each other. One of the
more challenging efforts in this phase is the decision on what type of
methodology will be used to design the project. It is difficult to cover all aspects of the design phase, so the focus here will be on: object-oriented technology, design languages, trade-offs in architecture, and tools.
The latest recommendation for design is to use an object-oriented
methodology. A study was conducted with several software companies to
determine the best methodologies to use with object-oriented technology. The most important conclusions noted were that "a methodology is more likely to be used when it is simple, clearly effective, and small in terms of required work products" and "the acceptance of a methodology is limited by the software group's ability to change its work habits and by its tolerance for seemingly bureaucratic content" (Cockburn 1994). The most effective design methodologies are sometimes the simplest. By implementing clear design methods, which have obvious and direct connections to the final project, developers are more likely to utilize the entire method, instead of choosing only the simplest parts. Responsibility-driven design and use cases are two examples of powerful, yet simple, design techniques (Cockburn 1994). Another important aspect within the design phase is incremental development. Building in pieces rather than all at once has a tremendous effect on meeting timelines. Clients can expect to receive something now, knowing that their other "wants" are in the works (Brooks 1995). Incremental development is "used to manage the scheduling and staging of the system" (Cockburn 1994). It is an essential technique to assure the success of a project (Cockburn 1994).
Why is design so important? Authors Fry and Lieberman share a well-fitting example to answer this question. They discuss the importance of a well-derived debugging environment for programs already developed, enhanced
with unwanted features (a.k.a. "bugs"). They related programming to
automobiles such as Corvairs and Pintos that were not designed with the
possibility that they might be involved in car crashes. For instance, they lacked safety features as simple as a rearview mirror. Their demonstration presents a tool that supports the debugging environment to enhance the capability of finding problems more quickly (Fry & Lieberman 1995). Many tools are incorporated in this phase of the software process. There are many reasons for a well-defined design. Programmers need to think about the future when
designing a software application. Change is going to occur and software
developers have to prepare and accept this change. Often change will
occur and the initial developers of the project are no longer the developers
assigned to maintaining it. Therefore, they need to design a software project
that is understandable and can be easily modified. It is suggested that
developers design for portability, encapsulating as much as possible and making sure that when changes occur, their impact is localized (Meyers 1996). The main goal of this phase is to provide solutions to problems without going into low-level implementation. Structured design, or top-down design using object-oriented technology, is currently the most accepted design methodology. Actions are defined and algorithms are used to support the structures (Headington & Riley 1994).
2.3.1 Object-Oriented Technology In Design
In Frederick P. Brooks' book, The Mythical Man-Month, he discusses some solutions to the "accidental difficulties" behind the failure of software projects (Brooks 1995). One of his solutions was object-oriented programming. Object-oriented programming does many things for software design. Encapsulation, modularity, inheritance, and abstract data typing are considered to be a few of them. Encapsulation supports maintainability because making enhancements or fixing issues within the system will affect one object and not the entire system. Also, the ability to use the same interface to call several functions assists with some of the challenging components of program maintenance (Haythorn 1994). Object-oriented programming addresses software difficulty by allowing expression within design without the extra syntactic content. Although sometimes complex, it offers a design of fitted structures and sharp interfaces (Brooks 1995). There are several reasons that object-oriented techniques have become so prominent in the software industry. Some of the most common reasons are reusability, real-world modeling, maintainability, and its use as a unified software method (Haythorn 1994). Three
object-oriented techniques that are among the most competent and
understandable are use cases, responsibilities, and incremental development
(Cockburn 1994). Object-Oriented Design helps developers understand how
to develop the requirements stated by the client (Advanced Concepts
Center 1997).
Use cases tell the developer how the system will be used. A use case is essentially a view of the client's wants. Use cases can assist managers in project planning because they are developed iteratively (Fowler & Scott 1997). A use case is a behavior or group of behaviors initiated by events from users, other systems, a timer, or hardware components (Advanced Concepts Center 1997). For scheduling purposes, developers should prioritize the use cases and give the best possible estimate for their completion in development. Time should be scheduled accordingly and with sensitivity to risk. Use cases can be built upon and, therefore, can be iterative throughout the development process (Fowler & Scott 1997). An example of use cases for a library system shows how different objects of the requirements interact with each other.
[Figure: Library System Use Cases diagram.]
Figure 2.3.1.1 Use Case example.
Software design using the object-oriented techniques involves distributed
responsibilities throughout the software system. Not only are responsibilities
distributed in this phase, they are also defined (Cockburn 1994). A
responsibility describes the reason for having a particular class. A useful
method to create responsibilities for classes is to use CRC (Class Responsibility Collaboration) cards. The CRC cards display the class name, its responsibility
or purpose, and its collaboration with other classes, hence the name (Fowler
& Scott 1997). The challenge is to determine whether or not an object should
have a particular responsibility. Once it is determined that one object is to
handle a responsibility, the next step is to display the transactions between
this object and the object to which it will pass on its responsibility (Cockburn 1994).
Using the CRC cards can initiate connectivity between classes and present a
clearer way to view them. One suggestion is to keep the responsibilities at a high level and to keep a maximum of three responsibilities on each card (Fowler & Scott 1997). A CRC card for the Maintain Book Inventory use case above is found in Figure 2.3.1.2.
Class: Book Inventory
Responsibility:                              Collaborators:
Knowledge about the number of each           Schedules of book orders
book in the library
Figure 2.3.1.2: CRC Card for Inventory Component of Library System.
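A CRC card can also be captured as a lightweight record during design sessions. The sketch below is illustrative rather than a prescribed tool; the class name and collaborator mirror the library example above, and the three-responsibility limit follows the suggestion cited from Fowler and Scott.

```python
from dataclasses import dataclass, field

# A CRC (Class Responsibility Collaboration) card as a small record:
# the class name, its responsibilities, and its collaborators.
@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

    def add_responsibility(self, text):
        # Keep cards high-level: at most three responsibilities per card.
        if len(self.responsibilities) >= 3:
            raise ValueError("a card should hold at most three responsibilities")
        self.responsibilities.append(text)

# The Book Inventory card from the library example above.
card = CRCCard("Book Inventory",
               collaborators=["Schedules of book orders"])
card.add_responsibility(
    "Knowledge about the number of each book in the library")
```

Keeping the cards in a structure like this makes the collaboration links between classes easy to inspect and publish alongside meeting minutes.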
As stated earlier, the technique known as incremental development is the process of growing a software project. This method can be challenging to managers who do not understand what exactly developing incrementally means. However, it is essential to the software development process (Cockburn 1994). Incremental development gives the software organization better control over the development of the project, as well as bringing to light any risks that may be encountered (Fowler & Scott 1997). First, the application should be developed just enough to see it run. Then each piece or pieces should be developed more thoroughly until the software application is complete. Development in this manner gives developers the capability of prototyping. Clients can view the project early on and verify that it is going in the direction that they want. Two benefits are
reaped with the use of incremental development. Developing function upon
function, refining the system as you go, can initiate early testing. Also,
schedules can be created more accurately and time can be allotted
accordingly (Brooks 1995). A quality design has many different avenues that
can be taken. Trade-offs within the architecture are analyzed and agreed
upon.
2.3.2 Trade-offs Within Design
Several trade-offs between certain aspects of a system are examined in
order to produce an accepted design. Trade-off analysis during design and
throughout the iterations of the software process produces an effective
design. Several aspects need to be considered. Modifiability, security, performance, size, and availability are the most recognized and affected attributes of a product. In the past, developers have analyzed trade-offs
without using a principled method. For instance, they would design one
pattern for easy modification purposes while designing another pattern for
portability without analyzing each pattern for every trade-off (i.e., security,
size, etc.). The Software Engineering Institute developed a principled method
that incorporates trade-off analysis. It is called the ATAM or Architecture
Tradeoff Analysis Method (Kazman et al. 1998). This method analyzes each
trade-off in the design with respect to corresponding requirements (Kazman et al. 1998). Trade-off analysis supports both developers and clients. It is
important for managers to communicate with their clients to understand the
type of environment they will be using to run their application. Size of the
application and performance are two aspects of an application that matter most to clients. Typically, this is because these are what the client can see to
establish some form of measurement. Clients can see the size of the
application and can compare it with other applications on their system. They
can also time the functions within the application to detect the rate it takes
complete a task. They may want certain functionality with the understanding
that the application will grow in size (Brooks 1995). For example, adding
security to a system, which is currently a necessary feature in most
applications, not only slows the application, but it also increases it in size.
Flexibility and safety are also common attributes that require trade-off
analysis. These are the types of considerations that need to be analyzed
before a full-blown design is created (Sebesta 1996). However, trade-off
analysis is an iterative method in which new requirements or changes to the system spark the analysis of each trade-off again (Kazman et al. 1998).
One suggestion, taking advantage of iteration, is to optimize a system to
increase performance later in the design phase (Fowler & Scott 1997).
However, optimizing for each aspect is not possible, hence the reason for
trade-off analysis (Meyers 1996). Documenting reasons for choosing each
trade-off in the design should occur at each cycle. This is done in order to
avoid rehashing of analysis only to come to the same conclusion (Kazman et al. 1998).
2.3.3 Tools To Support The Design Process
A tool that could provide substantial support to the design phase of the
software project is a knowledge-based software engineering system (KBSE).
There are four requirements for a successful KBSE. It should provide whatever information is necessary for its completeness. It should remain accurate as it applies to the real world. The information it contains should be consistent. And finally, the KBSE should provide all the necessary algorithms sought to establish the software project (Devanbu & Jones 1997). Another helpful tool for both analysis and design is the CASE tool. CASE tools support
developers by allowing them to use graphical representations of
requirements (Jarzabek & Huang 1998). However, although tools may be
helpful to developers, it is still necessary for developers to understand and
create quality designs (Fowler & Scott 1996). Along with tools, and much like
a specification language in the requirements phase, a language should be
chosen to work in the design phase.
2.3.4 Design Languages
Design languages are useful for consistency within the design phase of
the software process. A PDL or Process Design Language is used to
communicate ideas about the overall software architecture. A PDL contains
information on constructs that provide detail on modules, interactive display
of conditions that are processed concurrently, and associations or
interactions between each module (Ambriola, Conradi, & Fuggetta 1997). In
other words, choosing a design language is important in order to provide
descriptions that are understood by all members involved in this stage of the
process. It is just another way to avoid ambiguities within human language.
2.3.5 Summary Of Design
As seen with previous phases, the design phase also supports maintainability, productivity, and meeting schedules. By analyzing and creating a design using use cases and responsibilities, developers can be asked questions regarding the completion time of each use case. Managers can then schedule accordingly. Another aspect of this phase that supports schedules is the trade-off analysis portion. Again, analyzing different attributes and deciding on the trade-offs gives managers a better idea of what the time constraints will be. Also, through trade-off analysis and through object-oriented technology, a project is more maintainable. Encapsulation and polymorphism are the features capable of this support. Tools support productivity. Using something like CASE tools can assist with generated code, which speeds up the process of implementation and deliveries.
2.4 Implementation
The goal of this phase is to revisit the design and make any necessary
changes to the existing process or models. This is where the creation begins
and continues as the models are put into execution (Ambriola, Conradi, &
Fuggetta 1997). The implementation phase can have several methods to
support the software process and several different languages to express it.
One of the methods is the top-down method. This method is discussed as a design method, but continues through the stage of implementation. The UML (Unified Modeling Language) is another approach used to further implement the project, using models to express the design (Fowler & Scott 1997). Further information on UML can be found in Martin
Fowler's and Kendall Scott's book, UML Distilled: Applying the Standard
Object Modeling Language. As with design, there are several tools used to
support developers during this stage.
2.4.1 Methods For Developing Increments: Top-down Design
The top-down method takes a problem and divides it into smaller problems, refining as it goes and keeping the high-level problem as an abstraction. Using a top-down method is one of the most favorable types of process within the implementation and design phases. Top-down implementation also provides ease when developing in increments (Brooks 1995). Major actions are implemented initially and then additional
layers of abstraction are created in the next increment. Top-down implementing, or what others refer to as top-down testing, takes what is
considered pseudocode in the design model and begins using programming
languages to solve the problem. After each problem or subproblem is solved
using the programming language, testing of each can commence. For the
layers of the problem that are not yet implemented using code, dummy
procedures can be called while the problem is tested (Headington & Riley
1994). The implementation phase uses the logical design already created
and begins examining compile-time and link-time dependencies between
physical constructs. This stage will dominate the outcome of the design
model. In other words, as implementation finds some of the design areas to
be difficult or cyclic dependencies are identified, the implementation phase
will then make appropriate modifications to the design model (Lakos 1996).
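The dummy-procedure technique described above can be sketched briefly. The payroll functions below are hypothetical examples chosen for illustration; they are not drawn from the thesis or its sources.

```python
# Top-down implementation: the high-level routine is coded and tested
# first; layers not yet implemented are dummy procedures (stubs) so the
# program runs now and is refined in later increments.
def compute_net_pay(employee):
    # Top-level problem: solved in this increment.
    return gross_pay(employee) - deductions(employee)

def gross_pay(employee):
    # Also implemented in this increment.
    return employee["hours"] * employee["rate"]

def deductions(employee):
    # Dummy procedure: a placeholder until a later increment fills it in.
    return 0.0

# The top level can already be exercised while lower layers are stubs.
print(compute_net_pay({"hours": 10, "rate": 20.0}))  # 200.0
```

Because each layer can be tested as soon as it is written, problems surface in the increment that introduced them rather than at final integration.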
One method, called generalization, provides procedures such as subclassing, or implementation-inheritance. Subclassing is a procedure by which a subclass contains all the interfaces represented by the superclass (Fowler & Scott 1997). These interfaces and functions are shown in the implementation model as private or protected, based on whether their use will be read-only or hidden to other classes. In implementation models, associations between classes represent the use of a similar data construct or array (Fowler & Scott 1997). There are some tools available that support these
types of representations.
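Subclassing as described above can be shown in a short sketch; the account classes and their operations are illustrative names, not taken from the cited sources.

```python
# Subclassing (implementation-inheritance): the subclass contains all the
# interfaces represented by the superclass and adds behavior of its own.
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance   # protected by convention: internal use

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):
    # Inherits deposit() and balance(); adds one operation of its own.
    def add_interest(self, rate):
        self._balance += self._balance * rate

acct = SavingsAccount(100.0)
acct.deposit(50.0)     # superclass interface, available on the subclass
acct.add_interest(0.10)
print(acct.balance())  # 165.0
```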
2.4.2 Tools To Support Implementation Procedures
As discussed briefly earlier, CASE Tools support developers as they design
(Jarzabek & Huang 1998). What exactly is a CASE Tool? A CASE (Computer
Aided Software Engineering) tool is an application that assists developers with
some sort of method, whether it be object-oriented or structured. Two types of CASE tools exist. Some support the logical design and analysis phases, while others support the implementation phase. They support models of
requirements, structure, data, and behavior. They provide information
according to the method used on what rules to apply and further support the
implementation by providing code generation (Jarzabek & Huang 1998).
CASE tools enhance the productivity and efficiency of a project by
supporting the above and in addition, allowing developers to work
simultaneously. Code generation usually contains parent classes with
implementations. Implementation consists of attributes and methods. They
are declared as private, protected, or public. A private method or attribute
can only be viewed within the scope of its component, whereas protected
and public can be accessed outside the component. However, although
protected methods or attributes can be accessed, there are certain
limitations on what and how they are accessed. Code is generated
according to the method implied and the specification from which the class
definition evolved. Some precautions should be taken when using code
generation from CASE tools. For instance, how associations are mapped using
by-reference containment, multiplicity, and optional participation, and how
components are bound, are some of the concerns that should be understood
when using a CASE tool to generate code (Advanced Concepts Center
1997). Tool use should also be consistent, not only for initial
development but for maintenance as well (Brown, Carney, &
Clements 1995). In other words, the same tool should be used throughout the
entire scope of the project to avoid the extra hassle of converting
from one tool to the next once production is a concern.
2.4.3 Strategies For Implementing Successful Applications
During implementation, certain strategies are recommended. Several
revolve around object-oriented techniques such as
encapsulation. A suggested rule is to keep data hidden by making all data
members within a class private. This rule keeps other classes from
manipulating or changing the data (Lakos 1996). A module should be "a
software entity with its own data model and its own set of operations" (Brooks
1995). Modules should only be accessed through certain accessible
functions. This method allows developers to work on different pieces of the
same project at the same time; they only need to know which functions are
accessible from each of the created modules. Productivity increases with
multiple developers being able to function simultaneously (Brooks 1995).
Sometimes it is necessary to create global information to pass to external files.
If at all possible, this should be avoided. If it is absolutely
necessary, however, put the information into a structure, make the data
members private, and use static functions to access them (Lakos 1996). Most
importantly, to relieve some of the frustration a developer might experience
when taking on another developer's part, document the module and any
behaviors within the module that are undefined (Lakos 1996). The
recommended forms of documentation for each module are: 1) create a flow
chart, 2) give descriptions of all algorithms used, 3) explicitly state what files
are being used, 4) explain the steps of passing from tape or disks, and 5)
express possible future modifications that might be made and how they might
be confronted (Brooks 1995). For readability, developers should be consistent
when creating variable names. For instance, if local variables are named
using a designated prefix, then this convention should remain consistent
throughout the program. Also, the names should say something about what
they are or what type of data they store. These methods assist the developers
in the future, making the project more understandable, usable, and
maintainable (Lakos 1996). The process of implementation, again, should be
incremental. Testing pieces of code in between fixes and/or enhancements
is a way to increase flexibility within the module and the project itself (Fowler
& Scott 1997).
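A minimal C++ sketch of these rules follows; the names are invented for illustration. Data members are private, the module is used only through its accessible functions, and would-be global information is wrapped in a structure with private static data and static access functions, as Lakos recommends.

```cpp
// A module whose data model is fully hidden: other classes can use it
// only through its accessible functions.
class Counter {
public:
    Counter() : d_count(0) {}
    void increment() { ++d_count; }
    int value() const { return d_count; }
private:
    int d_count;   // private: no other class can manipulate it directly
};

// Instead of a raw global variable, shared information lives in a
// structure with private static data and static accessor functions.
class ProjectGlobals {
public:
    static void setBuildNumber(int n) { s_buildNumber = n; }
    static int buildNumber() { return s_buildNumber; }
private:
    static int s_buildNumber;   // private "global" state
};
int ProjectGlobals::s_buildNumber = 0;
```

Because the only access paths are the named functions, two developers can work on the users and the internals of such a module simultaneously without stepping on each other's data.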
2.4.4 Frameworks Used For Reuse Measurement
Frameworks, although not necessarily considered part of this
software process, are proposed to offer many advantages to the software
process as a whole. They therefore deserve an honorable mention, but not in
an exceptional amount of detail. Enterprise frameworks, although initially
costly, may support not only the implementation phase but all of the phases
combined. Frameworks have been applied in domains such as
telecommunications, avionics, financial services, and manufacturing. They
provide the ability to produce quality software quickly. They also have the
capability of tracking reuse metrics; for instance, a framework can measure
how much code is reused within the project's components (Hamu & Fayad 1998).
2.4.5 The Implementation Team
The people at this stage of the project should consist of the developers
and the architects who designed the modules. Communication between
these two groups is absolutely necessary. Any questions
developers have should be directed to the architects. Both should listen
to each suggestion, even when implementation might require a change to
the initial design because of cost savings or faster procedures found by the
developer (Brooks 1995). However, it is suggested that design changes
should not be made in instances where the design provides a standard for
the project. Others necessary to the implementation phase are the quality
assurance individuals, the people who test the increments as the developers
complete them. It is important to include them in the implementation stage
as soon as possible to ensure the quality of each of the
implemented pieces (Lakos 1996).
2.4.6 Summary Of The Implementation Phase
In this section, I have discussed the implementation phase, again
pointing out that incremental development is important. Using the top-down
method supports incremental development, as it takes a large problem and
breaks it into smaller pieces. Each piece can be fully tested as it is
developed. Some of the tools useful at this stage are CASE tools and
enterprise frameworks. Both are beneficial and not limited to this
stage of the software process. Frameworks can act as a measuring tool
for code, allowing the developer to see how much code is reused or not
used. In the next section, I will cover the last stage of the software process,
which focuses on measurement.
2.5 Monitoring And Data Collection
This stage is the evaluative phase of the software process. Data about
the process is gathered to enhance the future progress of the software
process. Feedback is provided for the above phases (Ambriola, Conradi, &
Fuggetta 1997). Market pressures have made an impact on the incorporation
of this phase. People have grown more interested in benchmarking and
evaluating tools, as well as the impact of the software process on the
development of the software product (Kemerer 1998). The Quality
Improvement Story is one of the methods that will be discussed in regard to
the evaluation phase. Techniques from the Capability Maturity Model have
been combined with the Quality Improvement Story; further detail
about the Capability Maturity Model will be given later in its own section.
Certain tools support the measurement and collection of data about the
process. Also, quantitative numbers from studies can show how defining
measurement in the process can further assist the software process in the future.
2.5.1 Methods To Support The Collection Of Evaluative Data
The Quality Improvement Story is a technique used to solve problems within
the software process. It consists of seven steps: 1) Reason for improvement, 2)
Current situation, 3) Analysis, 4) Countermeasures, 5) Results, 6)
Standardization, 7) Future plans (Hollenbach 1997). Evaluation is done for
several reasons. It is important to assess the quality of the overall project, and
of the process used to complete it, by evaluating reliability, testability,
performance, functionality, usability, and maintainability. Testability and
reliability go hand in hand as a measure of quality. In order to produce
something reliable, it must be designed so that it is testable. Testing in pieces
and removing as many errors as possible will make a product more reliable.
A product should be functional to the expectations of the client. That
functionality should also be easy to use and timely. Furthermore,
maintenance is important when anticipating future changes or added
functionality (Lakos 1996). The Quality Improvement Story provides
techniques for each step in the evaluative process. In the improvement step,
it is recommended to use graphs, flowcharts, and control charts. In
evaluation of the current situation, the use of Pareto charts, checklists,
histograms, and control charts is suggested (Hollenbach, et al. 1997). In
one article, "Evaluating the Cost of Software Quality", the authors used a
Pareto chart to analyze the possible defect categories (i.e., program logic,
support documentation, database, etc.) and their percentage within the
software project (Slaughter, Harter, & Krishnan 1998). Figure 2.5.1.1 shows a
chart of defects per 1,000 lines of code over a ten-year period in an IT
company. The process improvements consist of the following: Process 1,
creation of life-cycle standards for development and introduction of CASE
tools; Process 2, increased hiring standards, status reviews, and guides for
documentation; Process 3, CASE tool integration with documentation,
schedule and performance metrics, cost analysis, software configuration, and
Pareto analysis; and Process 4, cycle time analysis and an automated cost
estimation method. This analysis shows that incorporating formal process
decreases the number of defects.
[Figure: a scatter chart of defect density (defects per thousand lines of
code) from Jan-84 through Jan-95, annotated with Process Improvements #1
through #4; the annotation notes that process improvements reduce the
defect density at a decreasing rate.]
Figure 2.5.1.1: Defects vs. Process Improvements.
The analysis step uses Ishikawa (fishbone) diagrams, scatter diagrams, and,
again, the Pareto chart. This step analyzes the causes and effects
of procedures used within the software process. The countermeasures step,
using similar tools, analyzes cost estimations and action plans. The results
step evaluates the conclusions made from the software process.
Standardization uses techniques such as procedures training. Lastly, the
future plans step evaluates future goals and enhancements using an
action plan (Hollenbach, et al. 1997). Evaluations using these steps
have shown marked improvements in process areas, such as reduction in
defects and reduction in time-to-market. A case study evaluating 120 software
processes demonstrated that development time for a software process in
organizations has decreased due to reuse (Hollenbach, et al. 1997).
2.5.2 Types Of Findings Using Monitoring And Data Collection
The whole purpose of instilling improvement processes is to reduce missed
schedules, high costs, and less-than-acceptable quality. Even though the
above case study shows that time has decreased with the use of a software
process, managers still believe that quality improvement processes will
make them incapable of meeting time-to-market schedules. This is not the
case. A study done at BDM International, an information technology
company, shows the results of quality improvement processes put in place.
Through the use of a Pareto chart, defect analysis is done by categorizing all
the types of defects, such as: JCL, Program Logic, Support Documentation,
Database, Specification, CICS, Migration, Test Plan, and Requirements
Change. These categories are then analyzed by the percentage of defects
occurring within each category in the software project. Project costs
decreased by about 10 percent due to the reduction in defects. Process
improvements such as incorporation of CASE tools and development
standards, status reviews, increased hiring standards, detailed style
guidelines, schedule and performance metrics, cost estimation, software
configuration, Pareto analysis, cycle time analysis, and an automated cost
estimation methodology demonstrated a significant decrease in defects per
thousand lines of code over a ten-year period (Slaughter, Harter, & Krishnan
1998). Another study, regarding the reengineering process, revealed that the
process evaluation stage had the strongest effect on the software process,
with a perceived success of about fifty percent. Business Process
Reengineering had overall success when continuous monitoring and
evaluating were incorporated with the software process (Teng, Jeong, &
Grover 1998). Another study, evaluating other countries, showed that about
56% of those countries use records of resources and schedules versus cost for
measuring purposes. Seventy-five percent of the European countries
documented post-implementation problems and the resolutions used to fix
them. Less significance was placed on evaluating statistics of the errors in the
projects, their causes, and how they were avoided. Even less significance
was placed on the efficiency of testing. However, a good portion of these
countries (approximately 68%) use software tools to support their projects
through planning, estimating, and evaluating critical paths (Dutta,
Wassenhove, & Kulandaiswamy 1998).
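The Pareto-style defect analysis described above can be sketched as follows. The category names echo the study, but the defect counts and the code itself are illustrative assumptions, not the study's data.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Compute each defect category's percentage of all defects and order the
// categories from largest share to smallest, as a Pareto chart does.
std::vector<std::pair<std::string, double>>
paretoPercentages(const std::vector<std::string>& defects)
{
    std::map<std::string, int> counts;
    for (const std::string& d : defects) ++counts[d];

    std::vector<std::pair<std::string, double>> shares;
    for (const auto& kv : counts)
        shares.push_back({kv.first, 100.0 * kv.second / defects.size()});

    // Pareto charts present the most frequent categories first.
    std::sort(shares.begin(), shares.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    return shares;
}
```

Reading the sorted shares from the front shows which few categories account for most of the defects, which is exactly the question the Pareto analysis in the study answers.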
2.5.3 Tools To Monitor Data
Aside from the already discussed CASE Tools, software visualization tools
are another way to evaluate the performance of the software project.
Software visualization supports developers through animation of the software
project as it executes. Algorithms are displayed showing their processing and
their states that change as the program executes. Source code representing
sorting algorithms such as quicksort, shell sort, or insertion sort can be
animated for the developer by showing how the data is being sorted while
the program runs. This allows the developer to see the performance of the
different sorting algorithms in order to choose the best method. This type of
tool is also effective for debugging; it helps to localize problems in the code
(Baecker, DiGiano, & Marcus 1997).
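A toy illustration of what such a visualization tool displays: insertion sort instrumented to snapshot the array after each pass, so the sorting of the data can be watched as the program runs. This is a sketch, not any particular tool's implementation.

```cpp
#include <vector>

// Insertion sort that records the array's state after every pass; a
// visualization tool would render each snapshot as an animation frame.
std::vector<std::vector<int>> insertionSortTrace(std::vector<int> a)
{
    std::vector<std::vector<int>> frames;
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
        frames.push_back(a);   // one animation frame per pass
    }
    return frames;
}
```

Comparing the number and shape of frames produced by different algorithms on the same input is one simple way a developer could judge their relative behavior.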
2.5.4 The Whole Team Is Affected By The Software Progress
Almost everyone is affected by the evaluation phase. However, since it is
mainly a monitoring phase, it is the managers who need to make sure this
phase is accomplished and continually updated. Others will be informed of
the successes or failures through the managers' documentation of this phase
(Slaughter, Harter, & Krishnan 1998).
2.5.5 The Summary Of Monitoring And Data Collection
This stage's major benefit is the evaluation it makes of the software
process and the software project. Through continuous monitoring and
collection of data for reuse, this stage lends credibility to the effects of the
process as a whole. The Quality Improvement Story supports evaluation as it
tracks progress through the different steps. Both CASE tools and data
monitoring tools, such as software visualization tools, make this stage easier to
manage and track.
3. Chapter 3
3.1 Cleanroom Software Engineering
Cleanroom Software Engineering is an approach to the software process with
a focus on managers and technical developers. The approach pursues
quality of software design, verified through the testing phase (Becker, Deck,
& Janzon 1996). Historically, Cleanroom Software Engineering was given its
name through its close relationship to the clean rooms of hardware
fabrication. Dr. Harlan Mills of IBM's Federal Systems Division (FSD) found that
quality can be achieved in software as well as hardware, the relationship
being the shared goal of developing a quality product through thorough
design and testing (Deck 1994). This approach has been in effect since the
early 1980s. It began to be recognized and used by several organizations in
1987, among them IBM, of course, NASA Goddard Space Flight Center, and
Martin Marietta (Deck 1994). Cleanroom has gained popularity through the
growing use of the Internet and software quality conferences. According to
the Software Engineering Institute, Cleanroom Software Engineering has some
similarities to the Capability Maturity Model, another type of process
(Deck 1996). Cleanroom Software Engineering can be applied to both new
systems and existing systems. Its focus is on defect prevention, and the goal is
to provide an error-free product to testing (Deck 1995). Cleanroom shares
similarities with the general process structure defined earlier. Its objective is to
provide a quality product while increasing productivity and decreasing
maintenance time. It reduces risk to the project, which, in turn, supports
meeting timelines (Hines & Deck 1997).
3.1.1 Myths About Cleanroom Software Engineering
Several myths about Cleanroom Software Engineering have challenged
its worth over the years, along with the validity of its principles and practices.
Michael Deck, of Cleanroom Software Engineering, Inc., has spent several
years attempting to dispel these criticisms. In addition to the reduced
criticisms, Cleanroom Software Engineering has experienced some changes
or refinements since its introduction to the software engineering world
(Deck 1997). It was once believed that Cleanroom represents an "orthodox"
view of software engineering. This idea came from a story about a small
project gone bad. A manager insisted on the use of Cleanroom practices by
a team that did not want to use them. In doing so, he denied the team
access to a compiler in order to eliminate individual debugging conducted
without communication to all team members. There was also a limitation on
the use of arrays and pointers. Obviously, this made the work difficult to
perform for anyone without large amounts of formal mathematical training.
Hence, criticisms were to follow. However, unlike the above rigor, Cleanroom
Software Engineering has gained advocacy through the developers who used
it. They found their code to be more reliable, and it was found that Cleanroom
Software Engineering could achieve the same reliability without
the unrealistic approaches the manager used in the above story (Deck 1997).
Throughout its years of usage, Cleanroom Software Engineering has been
modified to be more successful for the engineers who use it. It has added
new ideas and refined some of the old ones in order to continue its goal of
real software quality (Whittaker & Deck 1997). The following sections will
describe these ideas in detail.
3.1.2 Principles Used In Cleanroom Software Engineering
Cleanroom Software Engineering has two main principles that it has used
traditionally over the years. The first is the design principle, which states
that before any project enters testing, it should already have been reviewed
in teams to the point that it is error free (Deck 1997). Design
should aid in the prevention of defects (Deck 1994). The second main
principle is the testing principle. Testing is really just a measurement of the
quality of the software project (Deck 1997). Statistical measurement has been
important to engineers for purposes of reliability, not to mention the ability
to view the success of error-free (or close to error-free) projects (Deck
1994). Several practices associated with the Cleanroom Software
Engineering process can be tailored according to the project's needs and
levels of process (Deck 1996). Management is another principal force
behind the use of Cleanroom. It is important for management to drive the
software process by focusing on the two above principles and making sure
the following practices are in place and followed. The focus of managers
should be on incremental development using a team. Managers should
provide feedback to continue improvement of the software project as well
as to decrease the failures caused by humans (Deck 1994).
3.1.3 Practices Involved In Cleanroom Software Process
At the basic level, the initial practices are incremental development and
team ownership used by management, black box specification used in
development along with clear box design, team review used during the
review portion, and finally behavioral testing used in the testing stage.
Incremental development is used with the integration of the top-down
approach related to the general approach discussed above. Deck
describes it as a series of waterfall cycles that produce increments as the
project works toward the final product, rather than one cycle of the waterfall
method that never produces anything (Deck 1996). Incremental
development could be considered a spiral approach if it allows risk
assessment and problem elicitation to guide its planning. It gives managers
the ability to manage and incorporate assessed time constraints into the
increments (Hines & Deck 1997). The goal of using the incremental approach
is to achieve "end to end executability", which guarantees that there is no
time lost in building. Another advantage of the incremental practice is the
capability of providing feedback to improve the next cycle or increment.
Metrics can be applied to each segment (Hines & Deck 1997). The
measurement chosen for use after each increment is the Mean Time To
Failure (MTTF) (Whittaker & Deck 1997). This becomes part of the Statistical
Quality Control discussed later. The team ownership used by
management is based on the approach that every project is worked on by a
team that does the reviewing for it. Requirements, test plans, and
specifications are all things that should be reviewed by the team. Teams
should consist of a few people, with work balanced by a leader to
make sure every aspect is covered. However, Cleanroom Software
Engineering suggests a "team-of-teams" approach for larger projects: for
instance, a couple of development teams and a couple of testing
teams. Each team would have a lead, and the leads would meet with each
other to create a project team. It is not necessary to have these teams in
place at the project's commencement; instead, they can be established
throughout the stages of the software process (Deck 1996). Even though
teams do not have to be formed in the beginning, the organizational
structure should be. Team structure is therefore dependent on the project.
Teams can consist of three programmers up to dozens, and possibly
hundreds if the project calls for it
(Deck 1997).
After the team structure is established, Cleanroom Software Engineering
supports black box structures for the development of specifications leading to
design. Black box specification is the foundation of the Cleanroom
development practice. This is the same black box method discussed earlier
as part of the requirements engineering section within the general process
structure. This type of structure has a direct relation to object-oriented
programming (Deck 1996). The three types of structures integral to
this method of specification are component, object, and part. These
structures represent an entity's behavior (Hines & Deck 1997). Cleanroom
specifications use ordinary language to describe those specifications that do
not need precision. Cleanroom focuses on an input/output style of
specification without the unnecessary details in between. The overall
objective is to have a specification for every event visible to the user
(Deck 1996). This process supports the methods of stepwise refinement and
verification (IBM 1998).
The next stage at the basic level of design is the development of the clear
box design. Clear box design is the method in which the specifications are
defined as algorithms. Clear boxes display a top-down view of the
specifications and their breakdown into sub-specifications (Deck 1996). The
sub-specifications then contain a black box, state box, and clear box design,
respectively (Hines & Deck 1997). Clear box design resembles pseudocode;
some examples of clear box constructs are statements such as if-then-else or
do-while (IBM 1998).
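As a small illustration of such a refinement (the withdrawal example is my own, not from the Cleanroom literature), a black box specification — "if funds suffice, withdraw and report success; otherwise leave the state unchanged and report failure" — can be refined into an explicit if-then-else procedure:

```cpp
// Clear box refinement of the black box specification above.
bool withdraw(double& balance, double amount)
{
    if (amount >= 0.0 && amount <= balance) {  // then-part: funds suffice
        balance -= amount;
        return true;
    } else {                                   // else-part: state is unchanged
        return false;
    }
}
```

The if-then-else structure makes the correspondence between the specification's cases and the procedure's branches easy to verify by inspection, which is the point of the clear box form.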
After the completion of the clear box design, the team begins its review
of all the specifications and design. Three rules for the review process
are to keep reviews iterative, use the same team each iteration, and
concentrate on the quality of the information that is reviewed (Deck 1996).
Initially, the first iterations of the team review do not examine the design for a
high level of correctness. Instead, the intent is to get the specifications as
solid as possible before the design stage begins. The same rules apply to the
review of the design, which tends to take place after several iterations (Hines
& Deck 1997). Some iterations of specifications may look like the following:
1. Review of initial specifications.
2. Review of the add-ons or more detailed specifications.
3. Review of the final specifications.
Iterations for the design phase follow the same scenario (Deck 1996).
Team members may include testers for the review of specifications. However,
they are not included in the design review. This is contradictory to the rule
that the team should consist of the same members, but one may be inclined
to believe that this rule is to be applied at each level. For instance, the team
for specification remains the same at each iteration, and the team is the
same for each iteration of the design review, but the specification and
design review teams differ due to level of expertise and ability to give valid
contributions (Hines & Deck 1997). An important aspect, as well as a rule, of
the team reviews is not to focus just on the quest for bugs, but on the overall
quality of the portion of the project being reviewed (Deck 1996).
The testing stage focuses on the behavioral aspects of the product rather
than its structures. Testers practicing behavioral testing do not need to
know about the project's internal structures. Since the developers' focus is
usually on the internals, behavioral testing tests the project from a different
view, which heightens the quality of the project (Deck 1996).
All three levels, basic, intermediate, and advanced, incorporate the use
of incremental development and team ownership on the part of
management. The intermediate level works off the basic level of process
with added enhancements. For the specification stage, the intermediate
level uses the conditional-rule process for specifications and abstract data
models. Basically, a conditional-rule process for a specification is just a way
to express a black box specification. For instance, the following example
shows a conditional-rule process for an address list.
array address-list is sorted in order by last-name, and an item whose
last-name field equals lookup is in the array
    -> set address to the address data field that corresponds to any such
       instance and set found to TRUE
array address-list is sorted in order by last-name
    -> set address to anything and set found to FALSE
TRUE -> undefined
This particular method is used for its simplicity. The notation follows that of
natural language; the arrows express mappings of inputs to outputs within
the requirement (Deck 1996). Another addition to the black box
specification is that of abstract data models. These are used to give a better
understanding of the system as a whole and to verify each of the
requirements. The idea is to relate more to the user by incorporating a view
the user might understand, rather than using a low-level approach that would
only be understandable to engineers (Hines & Deck 1997). Abstract data
models reflect the object-oriented aspects of programming. Each object is
encapsulated and defined in a way users can understand. The user can see
the different objects and how they relate to one another. This, like the
conditional-rule process for specification, is also useful for validating
requirements (Deck 1996).
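The conditional-rule address-list specification above might be implemented as the following C++ sketch. A linear search is used for clarity even though the specification's list is sorted, and all names are illustrative.

```cpp
#include <string>
#include <vector>

struct Entry {
    std::string lastName;   // the last-name field of the specification
    std::string address;    // the address data field
};

// First rule: an item with a matching last-name field exists
//   -> set address to its address field and set found to TRUE.
// Second rule: no match in the list -> address arbitrary, found = FALSE.
void lookupAddress(const std::vector<Entry>& addressList,
                   const std::string& lookup,
                   std::string& address, bool& found)
{
    found = false;
    for (const Entry& e : addressList) {
        if (e.lastName == lookup) {
            address = e.address;
            found = true;
            return;
        }
    }
}
```

Each branch of the code corresponds to one rule of the specification, which is what makes conditional-rule specifications straightforward to verify against an implementation.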
At an intermediate level, there is the addition of state boxes before the
implementation of clear box verification. State boxes are derived from the
black box specifications. They are essentially implementations of the black
box specifications, although, this time, the functionality is hidden through
abstraction. This is where object-oriented design is encountered, as the
design abstraction uses encapsulation to hide functionality (Deck 1997). The
state box is then added to the abstract data models and takes the form of
determining the type of implementation for a specification; for instance, it
can be expressed through arrays or hash tables (Deck 1996). The state box
verification stage, also part of the applied intermediate level of the
Cleanroom Software Engineering process, is the method of testing the
correctness of the state boxes.
It is up to the team to decide what level of correctness they will use to
verify the state box and clear box designs (Deck 1996). The intent of the
verification process is to check the design against the specifications to see if it
correctly represents what the specification describes (Deck 1996).
Since the general theme of Cleanroom Software Engineering is to focus
on the users, usage modeling is practiced. Its main purpose is to analyze how
the users will use the system. Each function is analyzed from the perspective
of both beginner and expert users of the system (Deck 1996). Test case plans
are then developed reflecting the type and frequency of usage of a
particular function within the system (Hines & Deck 1997).
The last phase of the intermediate level of Cleanroom Software
Engineering is the statistical testing phase. This phase takes the information
learned from usage modeling and begins testing the components for
reliability. Every aspect of the system's resources is analyzed, such as file
access, resources, and the overall environment. This analysis becomes the
focus of a random key, which will be used to test (Hines & Deck 1997). The
testing is measured using the Mean Time To Failure (MTTF) algorithm. There is
some skepticism about the value and accuracy of this phase due to the
inability to develop a perfect level of reliability based on assumptions about
the project's usage. However, the main purpose is to understand what the
user will be doing with the product and to make it as reliable as possible
within the realm in which they are using it. Its focus is not to find bugs in
general, but to find what bugs may appear in the field (Deck 1996).
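As a simple sketch of the MTTF measurement, the mean can be taken over the observed times between successive failures during statistical testing. The units and data here are illustrative; Cleanroom practice typically feeds such inter-failure times into a reliability growth model rather than a plain average.

```cpp
#include <vector>

// Mean Time To Failure over one increment's statistical-testing run:
// the average of the observed times between successive failures.
double meanTimeToFailure(const std::vector<double>& interfailureTimes)
{
    if (interfailureTimes.empty()) return 0.0;   // no failures observed
    double total = 0.0;
    for (double t : interfailureTimes) total += t;
    return total / interfailureTimes.size();
}
```

A rising MTTF from one increment to the next is the feedback signal that reliability is improving, which is how the measure feeds Statistical Quality Control.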
The final level of Cleanroom Software Engineering incorporates the
use of formal methods in the process. These formal methods were discussed
earlier in the general process. The advanced level basically adds
formal methods to the stages within the intermediate level. Stimulus History
Models are a type of abstract data model that focuses more on the state
of the system (Deck 1996). Their goal is to filter out requirements that have not
yet been established. Again, formal modeling languages are used to
represent clear and state box design in the Formal Models of Real Time and
Concurrency stage. Correctness proof incorporates the use of tools to verify
the correctness of the system. Advanced Usage Models are another level
of low-level detail to express the usage modeling of the intermediate level.
Lastly, Product Warranties are the additional method by which guarantees are
placed on the project. Warranties are made with time constraints based on
the cost effectiveness of supporting the claims made about the product (Deck
1996). Warranties should be analyzed closely, as the software industry
changes rapidly.
3.1.4 Barriers Presented When Using Cleanroom
The Cleanroom Software Engineering process offers flexibility in
approaching a software process. However, there are barriers that prevent
Cleanroom from being a smooth process. If the process is instilled in a system
already developed, existing code may be quite troublesome to the process.
The process will need to understand the existing structure in order to
formulate changes to it, and changes must not produce a negative effect on
the current users (Deck 1995). Three types of methods can be applied to the
Cleanroom approach when working with old code. First, specification
recovery can be used. This method builds a trace table of the existing code
to recover requirements. As cases are found, they are analyzed to see how
things are working and whether or not they are of any value. Another
method, called Specification Interposition, uses a "middle-out" approach
(Deck 1995). It makes an assumption as to what behavior is happening. It
then takes those
assumptions and determines which parts are being used and why. Lastly, the
Relative Specification method can be used. This method describes the old
code and the new code relative to it in a single specification. This may hamper
the process by confusing the requirements, but it is up to the team to decide
how they want to approach it (Deck 1995).
Another barrier to the installation of the Cleanroom Software Engineering
process is the change in organizational structure (assuming one currently
exists). First, everyone in the organizational structure must adhere to the
guidelines representing the software process. Too often, only portions of the
software organization's infrastructure participate in the software process,
causing the process to fail; eventually the organization returns to its
chaotic state (Becker, Deck, & Janzon 1996). Also, incentives given to
members of the software organization are often based on negative rather than
positive approaches. For instance, testers are given an incentive based
on the number of bugs they find rather than on the development of quality test
plans (Becker, Deck, & Janzon 1996).
3.1.5 Benefits When Using Cleanroom
If it is possible for a software organization to get past the above
barriers and others that may appear, the Cleanroom Software Engineering
process can reap several benefits for its users. Cleanroom focuses on
defect prevention, careful documentation, statistical quality control, and
teamwork (Deck 1996). Use of all of these methods allows an organization to
produce quality software. With less maintenance on the software product
due to the quality of testing, productivity increases. Due to this increase in
productivity, the software organization can then meet its time-to-market
demands (Becker, Deck, & Janzon 1996).
3.1.6 The Summary Of Cleanroom Software Engineering
Cleanroom Software Engineering has three levels of process: basic,
intermediate, and advanced. Each level incorporates methods that build on
one another. The four layers of these methods are management,
development, review, and testing. Some of the methods used by the
Cleanroom Software Engineering process were discussed in the general
process above. Incorporating Cleanroom Software Engineering has
barriers, such as the effects of introducing new code relative to existing code
and making changes to the existing organizational structure. However, if a
software company can get past the barriers, it will reap such benefits as
fewer defects, higher productivity, and the ability to meet time-to-market
demands.
4. Chapter 4
4.1 Capability Maturity Model
A more common software process is the Capability Maturity Model
developed by the Software Engineering Institute (SEI) in 1986 (Paulk 1996). Its
main purpose was to develop a mature process that would benefit software
organizations in the development of their projects (Paulk 1996). The Software
Engineering Institute and Mitre Corporation came up with two methods
initially called software process assessment and software capability
evaluation. Along with these methods they used a maturity questionnaire in
order to evaluate the software process and its level of maturity. After four
years, this evaluation became the Capability Maturity Model (CMM) (Curtis,
et.al. 1993). This software process has become extremely prevalent in the
software world. Major organizations as well as the military have gained much
success using this software process model (Herbsleb, et.al. 1997). Studies
have taken organizations with high maturity levels and used metrics to see
which areas have worked for them. This was done in order to gain a better
understanding of a software process that works, rather than taking an
organization that has little or no process, putting a maturity process in place,
and then attempting to evaluate what benefits will be brought to
them in the future. Metrics chosen for the high-maturity process involved
details of the collected data. The size of the programs, the dates of completion
for both development and testing, and the number of defects were chosen as
the parts of the process to be measured (Burke 1997).
4.1.1 Immature Vs. Mature Processes
There are several distinctions between what would be called an
immature process and a mature process. An immature process within an
organization tends to react to situations that are considered a crisis (Curtis,
et.al. 1993). For instance, pulling as many resources as possible to get a
quick fix in for a company that is complaining or threatening lawsuits may
cause an existing project to fail in meeting its deadline. Immature
organizations develop unrealistic schedules by choosing dates without
evaluating risk or possible delays (Curtis, et.al. 1993). By doing this, other
pieces of the project suffer such as quality and maintainability in the future.
Immature organizations tend to have no way to measure the quality of the
product and when things go wrong, whatever testing procedures that may
be in place are cut short in order to deal with the current crisis (Fayad 1997).
On the other hand, a mature process involves the type of evaluation
necessary to allow for risk so that the teams do not have to go into "crisis"
mode. Everyone participates and is aware of the procedures. The
procedures are "user-friendly" and remain consistent in order to get the work
done. Improvements are made as they are evaluated and deemed
beneficial to the current process. Overall the roles and responsibilities of the
members within the organization are defined and understandable.
Schedules are evaluated by past experiences and future expectations.
Quality is measured and controlled by managers (Curtis, et.al. 1993). The
Capability Maturity Model takes an organization through levels of maturity
and the key processes associated with each level.
4.1.1.1 Levels Within The CMM
The levels defined by the Capability Maturity Model were developed on
the belief that software organizations take small steps towards improving the
procedures that they plan to instill. All of the levels within the Capability
Maturity Model have incorporated key processes with the exception of the
first (Paulk 1996). All levels' objectives and key processes will be discussed in
order to show an organization's evolution into a high-maturity process.
4.1.1.2 Level 1: Initial Level
Level one is defined with many similarities to the immature process
described above. The initial level operates under a somewhat chaotic,
often thrown-together process (Herbsleb, et.al. 1997). There is little formality.
Methods might be spoken of, but are not monitored or controlled. There is no
tracking of changes and no review of changes once they occur, so that
changes cause a crisis (Toth 1997). If a crisis is to occur, the policies or
procedures that might have been in place are often ignored or tossed aside,
while the organization focuses on coding to get things done quickly.
Managers who might believe in the process may lose focus when a crisis
occurs and take all of their team members with them. An organization operating
under the level one approach may actually produce a product. However,
the product may be delayed in getting released and may be over budget
(Curtis, et.al. 1993). The Capability Maturity Model states that in
order to achieve the success of getting a product "out the door", it is
necessary to have "very competent people" as well as "heroics" (Herbsleb,
et.al. 1997). In other words, individuals who will work until all hours of the night
to put the crisis to rest. So, the basis of the initial level is the characteristics of
the employees who are involved in the current project (Curtis, et.al. 1993). In
order to advance to the next level, there must be qualified management
that is willing to uphold the software process. Quality assurance must be in
place in order to test the project. Tracking of schedules, errors, fixes, and
changes must all be in place. Lastly, as seen before in above processes,
there must be an overall acceptance of the procedures and policies that are
to become the process (Toth 1997).
4.1.1.3 Level 2: Repeatable Level
There is more stability in the level two portion of the Capability Maturity
Model. Management is put in place in order to monitor costs, schedules, and
functionality within the project. Procedures that were successful in the past are
maintained and put in place in the current process. The process is
documented and measured so that improvements can be made (Curtis,
et.al. 1993). Several key processes take place during the repeatable level.
They consist of: requirements management, software project planning,
software project tracking and oversight, software subcontract management,
software quality assurance, and software configuration management
(Herbsleb, et.al. 1997).
The key processes have several purposes. Requirements management is
essentially the inclusion of the customers when developing the requirements;
a common level of understanding between the developers and the customers is
established. Software project planning is a method of developing
realistic schedules and plans for both the engineers and management.
Software project tracking and oversight gives teams visibility into what is
going on with the software project and allows managers to
keep the project on track if it begins to slip. Software subcontract
management is the process of hiring and managing quality subcontractors.
Software quality assurance gives management the ability to see how the
project is working and to view the quality of the project. Lastly, the
purpose of software configuration management is to maintain the project
through the establishment of certain procedures to do so (Paulk 1996).
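Software project planning's call for realistic schedules can be sketched concretely: historical project sizes and durations yield a productivity rate that is applied to a new project's estimated size. The numbers and record layout below are hypothetical, purely for illustration of the idea:

```python
# Hypothetical history of completed projects: (size in KLOC, months).
past_projects = [
    (20.0, 8),
    (35.0, 13),
    (12.0, 5),
]

# Average calendar months needed per KLOC across the historical projects.
months_per_kloc = sum(m / k for k, m in past_projects) / len(past_projects)

# Apply the historical rate to a new project's estimated size.
new_size_kloc = 25.0
estimate = new_size_kloc * months_per_kloc
print(f"estimated duration: {estimate:.1f} months")
```

Even a crude historical rate like this is the difference between a repeatable-level schedule and the date-picking of the initial level, because the estimate is grounded in what the organization has actually achieved before.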
The repeatable level is still missing key procedures that would make it a
mature process. It does not incorporate quality training within this phase.
Testing is not yet solidified. Documentation, although the need for it may be
identified, does not have individuals assigned to it (Toth 1997). Level three of
the
Capability Maturity Model begins to enhance some of the missing pieces
within the repeatable level.
4.1.1.4 Level 3: Defined Level
The defined level addresses the lack of training issue by instilling quality
training for all of the individuals within the organization. The process is
documented and software engineering practices are chosen. This level of
the Capability Maturity Model is flexible. It allows the organization to select its
own defined software process, provided the basis contains standards and
procedures for verification, development, and completion. It suggests the
development of a Software Engineering Process Group (SEPG) to define the
software practices to be used in the process. The SEPG should be developed
early. The coordinated efforts will focus on improvement and appraisals of
the process activities (Paulk 1996). Management has control of the project as
a whole and the entire organization understands the process. The parts of
this process are well defined and stable. Because of this, they can be
repeated (Curtis, et.al. 1993). The key processes of this level consist of:
organization process focus, organization process definition, training program,
integrated software management, software product engineering,
intergroup coordination, and peer reviews (Herbsleb, et.al. 1997).
Each key process has its purpose. The organization process focus defines
the responsibilities of the members within the organization. The organization
process definition is a collection of assets used to further improve the process.
Training programs are used to teach individuals the importance and
responsibilities of their roles within the organization so that their skills will
enhance the process. Integrated software management is a
process that allows flexibility in determining which software practices will be
instilled. Software product engineering is the integration of all the
practices in order to create a quality product. Intergroup coordination is
the communication effort between groups, so that there is no guessing about
what is to happen next (Paulk 1996); everyone has an idea of what is going
on. Peer reviews are essentially the defect prevention discussed earlier;
the product will be of better quality before it goes into the field (Paulk 1996).
All of these key processes allow the organization to produce a better
understanding of the roles, responsibilities, and effectiveness of each
individual involved in the creation of the product (Curtis, et.al. 1993).
4.1.1.5 Level 4: Managed Level
Quality and measurement become key components at the managed
level. The software process and the products developed using it are
controlled. The measurements give this level the ability to be predictable in
determining the outcome of the project (Curtis, et.al. 1993). The items that are
measured include the time spent in each cycle and the number of errors
found and the number fixed per cycle. The measurements provide information
on how reliable the project is and how well it performs. Maintenance is also a
focus of measurement; the purpose is to measure how long it will take to
maintain the project and how the maintenance will be done (Toth 1997).
Risks are carefully evaluated as the process is put in place
for an upcoming project (Curtis, et.al. 1993). This constitutes developing with
the use of an incremental approach, evaluating the risks through each
increment (Paulk 1996).
The managed level has two key processes to be included in the
software process: quantitative process management and software
quality management. The purpose of quantitative process management is
to evaluate and control the performance of the software process. The
purpose of software quality management is to analyze the measurements
conducted on the project in order to produce a quality product (Paulk 1996).
In other words, the goal is to understand what the project is doing so realistic
goals of quality can be applied (Ginsberg & Quinn 1994).
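Quantitative process management of this kind is often implemented with simple control charts over process measures. As an illustrative sketch (the per-cycle defect counts and the three-sigma limits below are hypothetical, not taken from the studies cited in this chapter):

```python
import statistics

# Hypothetical defect counts per completed cycle (the baseline) and a
# newly finished cycle. A simple control chart: flag the new cycle if it
# falls outside the baseline mean +/- 3 standard deviations.
baseline = [12, 15, 11, 14, 13, 12]
new_cycle = 30

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lower, upper = max(0.0, mean - 3 * sigma), mean + 3 * sigma

in_control = lower <= new_cycle <= upper
print(f"control limits: [{lower:.1f}, {upper:.1f}]")
print("new cycle in control:", in_control)  # False -> investigate
```

A cycle outside the limits does not by itself say what went wrong; it says the process behaved unlike its own history, which is exactly the prompt for the analysis that software quality management performs.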
4.1.1.6 Level 5: Optimized Level
The optimized level engages the attention of the whole
organization. Everyone is involved in the process and in making it better.
Instead of reacting to situations, which occurred at earlier levels, the
organization is capable of dealing with situations proactively. The
measurements and data collected from the previous levels can be used to
further improve the process in areas such as cost and schedule estimation.
Errors are not just fixed. They are analyzed to determine how they happened
(Curtis, et.al. 1993).
The key processes in this area are defect prevention, technology change
management, and process change management. Defect prevention, as
seen in earlier processes like Cleanroom Software Engineering, is the practice
of determining the cause of errors so that they are prevented from occurring in
the future. The technology change management process is the incorporation of
tools, based on an investigation of their ability to support the process. Process
change management is essentially the practice of making the overall
process better. In doing so, cycle time is decreased and both quality and
productivity are increased (Paulk 1996).
4.1.1.7 Barriers Of The CMM
The Capability Maturity Model experiences the same types of barriers as
the Cleanroom Software Engineering process or any other process that is
about to be incorporated into an organization. However, there are more
barriers. Some organizations attempting to incorporate the Capability
Maturity Model do so for the wrong reasons. For example, the intent is to get
as much process as possible without evaluating the effects. The goal of the
organization is to develop process instead of develop software. Everything
becomes process (Fayad 1997). Another barrier to the success of the
Capability Maturity Model is the misconception that an organization can get
to the managed or optimized level without experiencing the levels in
between. For instance, an organization can start collecting data in the
manner used in the managed level (level 4), but will be unable to apply it
because an earlier level was skipped and no comparisons or improvements
from an earlier level can therefore be obtained (Curtis, et.al. 1993). Other
barriers that might be imposed upon an organization that is truly attempting
to put a process in place are the many excuses that come with it. For
instance, some will view it as more bureaucratic and resist its invocation.
Others will say that they are too busy to put a process in place. Some may
believe that developing software is a creative process and putting in an
approach that has defined rules and procedures stifles that creativity (Fayad
1997).
4.1.1.8 Benefits Of The CMM
If the barriers can be broken, then the Capability Maturity Model can be
beneficial to a software organization in many ways. Studies that have been
conducted show many successes and improvements in productivity, cycle
time, and maintenance. It is important to note that initially putting a process
in place is difficult and takes a long time. For instance, it takes about 1.5 to
2.5 years to move from level 1 to level 2, according to a study of 13
organizations. However, once past this, the benefits are worth it. Productivity
gains ranged between 9-67%. Time to market was reduced by about 15-23%.
Post-release defects were reduced by 10-94% (Herbsleb, et.al. 1997). A survey
of organizations that had noted good or excellent performance showed
improvements, in moving from the initial level to the defined level, in the
following areas: product quality, customer satisfaction, productivity, ability to
meet schedules, ability to meet budgets, and staff morale.
Only one respondent showed a decrease in customer
satisfaction between the initial level and the repeatable level, but again
showed an increase in customer satisfaction going from the repeatable level
to the defined level (Herbsleb, et.al. 1997). Another study revealed that
organizations that achieved a higher level of maturity were more apt to take
substantial risks. This may show that the confidence level of the organization
as a whole increased in addition to the many other major benefits (Herbsleb,
et.al. 1997).
4.1.2 The Summary Of Capability Maturity Model
The Capability Maturity Model consists of several levels. Each level, with
the exception of the initial level, incorporates several key processes in order to
improve the software process and the projects developed using it. Some
barriers may prevent the success of the Capability Maturity Model, such as the
organization's inability to understand that the goal is to support the
development of the product, not to develop a process just to have one.
Removing the barriers and instilling the process without skipping
maturity levels can bring much success to the organization. Some of these
successes are increased productivity, decreased cycle time, and the
ability to maintain the project.
5. Chapter 5
5.1 Summary of Software Process
In this paper I discussed the importance of a software process, what it is
and how it is done. First, software process is important because of the
difficulties encountered when one is not in place, such as: failing to meet
schedules, maintenance nightmares, chaotic work scenarios, and possibilities
of lawsuits. Several software companies use different types of metrics to
evaluate their software process. Some of the metrics considered are: lines of
code, scheduled tasks completed on time, test coverage, resource
adequacy, fault density, system operational availability, time to market,
schedule, quality and cost trade-offs.
A general software process was defined as having the following
components: Requirements Specification, Assessment and Problem
Elicitation, (Re)Design, Implementation, and Monitoring and Data Collection.
These components were discussed in detail. Each phase is cycled through at
each increment. Requirements Specification was essentially the phase in
which requirements were gathered from the client and put into a formal type
of notation to be understandable by both developers and clients.
Assessment and Problem Elicitation was the process in which potential risks
were evaluated. Schedules and costs were estimated based on these
evaluated risks. (Re)Design was discussed at great length. There are many
methods that can be applied within this phase. Object-oriented designs
were among the ones discussed. Certain tools such as CASE were
mentioned as supportive to this stage of development. Top down methods
as well as strategies were discussed in the implementation phase. And finally,
the monitoring and data collection phase examined the process of
evaluating the process and making improvements. Methods of
standardization were discussed like the Quality Improvement Story. Tools that
present visualization to the organization about the performance of the
project were also considered during this phase.
Two of the more formal processes discussed were Cleanroom
Software Engineering and the Capability Maturity Model. Each shared some of
the same components as the general process, representing them in different
ways.
The Cleanroom Software Engineering process has three levels: basic,
intermediate and advanced. Each level builds upon the previous level. The
methods incorporated in this process include the use of box structures during
specification and design. The box structures are then verified for correctness.
Levels of correctness are determined at each level. This model expresses the
importance of peer reviews in increasing effective communication and
understanding of the project and in decreasing errors.
The Capability Maturity Model has five levels: initial, repeatable, defined,
managed, and optimized. These levels also build on each other. It is
suggested that an organization should not skip levels, so as to ensure the
success of the process. The process focuses on key processes at each level.
Essentially, the first level is based on competent employees. The second
focuses on the installation of project management. The next level focuses on
the engineering practices and the development of the SEPG to support
the organization. The managed level, or level 4, focuses on quality and
quantitative measurements of the project. The last level looks at the history
and continues to make marked improvements on the software process.
6. Conclusion
This investigation has revealed that following a software process can
increase productivity, make a product more maintainable, and help meet
the set schedules. Essential components were discussed along with two
important software processes: Cleanroom Software Engineering and
Capability Maturity Model. All of the case studies showed marked
improvements in software production by using a software process. The
benefits discussed were found to be substantial. Note that there are
additional benefits not addressed by this study.
This work also identified some of the difficulties and barriers encountered
when attempting to incorporate a software engineering process. It should be
noted that not all types of models have been included. There are several
that use many of the same features discussed here and for which these
findings apply. Similarly, not all aspects have been addressed, such as how to
pick a software process and perform the cost/benefit analysis. Additional
investigations are needed to address the full spectrum of process models and
characteristics.
The information presented concerning the two processes, Cleanroom
Software Engineering and Capability Maturity Model, offers suggestions for the
best ways to incorporate these models into a software development
organization. Many of the references included in the annotated bibliography
(Appendix A) reflect the importance and the difficulties of software process
models that may be both useful and necessary for anyone attempting to use
a software process model.
APPENDIX
Annotated Bibliography
Advanced Concepts Center. Advanced Object-Oriented Analysis and Design Using
UML. Advanced Concepts Center of Lockheed Martin Corp., 1997.
This book was a tutorial in presentation form to describe the software process and the
use of UML. Several figures are used from this book that describe the phases of
software process and types of practices used.
Ambriola, Vincenzo, Reidar Conradi and Alfonso Fuggetta. "Assessing Process-
Centered Software Engineering Environments", ACM Transactions on Software
Engineering and Methodology. v6n3(July 1997): 283-329.
Evaluation of three process models and their architectures.
Anonymous. "Software Metrics Best Practices", Methods & Tools. v6n3(March 1998):
13-14.
Survey in software metrics practices.
Baecker, Ron, Chris DiGiano and Aaron Marcus. "Software Visualization for
Debugging", Communications of the ACM. v40n4(April 1997): 44-54.
Discussion of making programming a multimedia experience. Human factors of the
programmer.
Barsh, Gregory S. "If Not This, What? The Internet as Cause to Refine Your Internal
Procedures", Cutter IT Journal. v11n4(April 1998): 28-32.
Another discussion on the legal implications that can be caused by not revising
internal procedures.
Brooks, Frederick P. The Mythical Man-Month. Reading, MA: Addison Wesley
Longman, Inc., 1995.
This book provided information on all aspects of software engineering, the
challenges, problems and solutions. Every chapter is utilized in one way or another.
The goal of the book is to provide tips on managing large software projects.
Brown, Alan W., David J. Carney and Paul C. Clements. "A Case Study in Assessing
the Maintainability of Large, Software-Intensive Systems", Proceedings of International
Symposium and Workshop on Systems Engineering of Computer Based Systems,
Tucson, AZ, (March 1995).
Assessment of the maintainability of software systems. Weaknesses and strengths are
discussed about techniques described.
Clements, Paul C. "Formal Methods in Describing Architectures", Proceedings of
Monterey Workshop on Formal Methods and Architecture, Software Engineering
Institute, Carnegie Mellon University, Pittsburgh, PA, 1995.
Architectural Description Languages (ADLs) are discussed with a focus on
Modechart.
Devanbu, Premkumar and Mark A. Jones. "The Use of Description Logics in KBSE
Systems", ACM Transactions on Software Engineering and Methodology. v6n2(April
1997): 141-172.
Focus on knowledge-based software engineering and the use of description logics.
Dutta, Soumitra, Luk N. Van Wassenhove and Selvan Kulandaiswamy. "Benchmarking
European Software Management Practices", Communications of the ACM.
v41n6(June 1998): 77-86.
This article displayed the progress of companies in Europe that used software
management practices.
Fowler, Martin and Kendall Scott. UML Distilled: Applying the Standard Object
Modeling Language. Reading, MA: Addison Wesley Longman, Inc., 1997.
The second chapter of this book discussed the use of UML along with an outline of a
development process.
Fry, Christopher, and Henry Lieberman. "Programming as Driving: Unsafe at Any
Speed?", Proceedings of CHI '95 Mosaic of Creativity, May 7-11, 1995. CHI
Companion '95, Denver, Colorado.
Introduction of the Zstep 94 tool to assist developers with programming.
Gause, Donald C. and Gerald M. Weinberg. Exploring Requirements: Quality Before
Design. New York, NY: Dorset House Publishing, 1989.
This book was used primarily to discuss the requirements phase of the software
process. It gives insight into the challenges of developing requirements for a product,
but also discusses the benefits.
Glass, Robert L. "Short-Term and Long-Term Remedies for Runaway Projects",
Communications of the ACM. v41n7(July 1998): 13-15.
Survey of management opinions on runaway projects. Some remedies suggested
without detail.
Headington, Mark R. and David D. Riley. Data Abstraction and Structures Using C++.
Lexington, MA: D.C. Heath and Company, 1994.
This book provided certain definitions for the words used to describe the software
engineering process. It also provided certain techniques used in the design phase.
Lakos, John. Large-Scale C++ Software Design. Reading, MA: Addison Wesley
Longman, Inc., 1996.
This book gives several tips to designing better software products using C++.
However, the first chapter is the one that was focused on for this paper. It discussed
general design and quality information for software design such as designing for
reuse and involving quality assurance in the design phase.
Kemerer, Chris F. "Progress, Obstacles, and Opportunities in Software Engineering
Economics", Communications of the ACM. v41n8(August 1998): 63-66.
This article was a good summary of the recent advances of software engineering.
Kiczales, Gregor. Beyond the Black Box: Open Implementation
. January 1996.
Discussion of the Open Implementation Design method. Comparison of the Open
Implementation method with the Black-Box method.
Kiczales, Gregor, John Lamping, Christina Videira Lopes, Chris Maeda, Anurag
Mendhekar and Gail Murphy. "Open Implementation Design Guidelines", Association
for Computing Machinery. 1997.
Discussion of the Open Implementation Design method. Comparison of the Open
Implementation method with the Black-Box method.
Kogut, Paul and Paul Clements. The Software Architecture Renaissance",
. September
1998.
Discussion on software architectures. The advantages/disadvantages and the
definition.
Krasner, Herb. "Looking Over the Legal Edge at Unsuccessful Software Projects",
Cutter IT Journal. v11n4(April 1998): 33-40.
Discusses negative project outcomes and the legal implication that follow. Gives
some helpful hints to avoid this problem.
Teng, James T. C., Seung Ryul Jeong and Varun Grover. "Profiling Successful
Reengineering Projects", Communications of the ACM. v41n6(June 1998): 77-86.
This article gave an example of a software engineering process for reengineering
projects. Measurements were taken to evaluate the process the article discussed.
Ungar, David, Henry Lieberman and Christopher Fry. "Debugging and the Experience
of Immediacy", Communications of the ACM. v40n4(April 1997): 38-44.
Human factors of the programmer. Making the experience easier.
Zave, Pamela and Michael Jackson. "Four Dark Corners of Requirements
Engineering", ACM Transactions on Software Engineering and Methodology.
v6n1 (January 1997): 1-30.
Discussion of what requirements engineering entails. This article focuses on the
problems and some of the solutions to requirements engineering.
REFERENCES
Advanced Concepts Center. Advanced Object-Oriented Analysis and
Design Using UML. Advanced Concepts Center of Lockheed Martin Corp., 1997.
Ambriola, Vincenzo, Reidar Conradi and Alfonso Fuggetta. "Assessing
Process-Centered Software Engineering Environments", ACM Transactions on
Software Engineering and Methodology. v6n3 (July 1997): 283-329.
Anonymous. "Software Metrics Best Practices", Methods & Tools.
v6n3(March 1998): 13-14.
Baecker, Ron, Chris DiGiano and Aaron Marcus. "Software Visualization for
Debugging", Communications of the ACM. v40n4 (April 1997): 44-54.
Barsh, Gregory S. "If Not This, What? The Internet as Cause to Refine Your
Internal Procedures", Cutter IT Journal. v11n4 (April 1998): 28-32.
Becker, Shirley A., Michael Deck and Tove Janzon. "Cleanroom and
Organizational Change", Proceedings of Pacific Northwest Software Quality
Conference, 1996.
Boehm, Barry. "Software Risk Management: Principles and Practices",
DATAPRO publication AS20-600-201, Delran, NJ: McGraw-Hill, (September
1991).
Brown, Alan W., David J. Carney and Paul C. Clements. "A Case Study in
Assessing the Maintainability of Large, Software-Intensive Systems", Proceedings
of International Symposium and Workshop on Systems Engineering of Computer
Based Systems, Tucson, AZ, (March 1995).
Burke, Steven. "Radical Improvements Require Radical Actions: Simulating
a High Maturity Software Organization", CMU/SEI-96-TR-024, June 1997.
Buschmann, Frank. "Rational architectures for object-oriented software
systems". Journal of Object Oriented Programming, (September 1993): 30-41.
Carnegie Mellon Software Engineering Institute. "The Capability Maturity
Model for Software", .
June 1998.
Carr, Marvin J. "Risk Management May Not Be for Everyone", IEEE
Software. v14n3 (May/June 1997): 21, 24.


Clements, Paul C. "Formal Methods in Describing Architectures",
Proceedings of Monterey Workshop on Formal Methods and Architecture,
Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1995.
Cockburn, Alistair A.R. "In Search of a Methodology", Object Magazine.
(July-August 1994): 53-56, 76.
Curtis, Bill, Mark C. Paulk, Mary Beth Chrissis and Charles V. Weber.
"Capability Maturity Model, Version 1.1", IEEE Software. v10n4 (July 1993): 18-27.
Deck, Michael. "Cleanroom Software Engineering and "Old Code" -
Overcoming Process Improvement Barriers", Proceedings of Pacific Northwest
Software Quality Conference, 1995.
Deck, Michael. "Cleanroom Software Engineering: Quality Improvement
and Cost Reduction", Proceedings of Pacific Northwest Software Quality
Conference, 1994.
Deck, Michael. "Cleanroom Software Engineering Myths and Realities".
Proceedings of Quality Week '97, San Francisco, CA, May 27-30, 1997.
Deck, Michael. "Cleanroom Practice: A Theme and Variations". 9th
International Software Quality Week (QW '96), San Francisco, CA, May 21-24,
1996.
Devanbu, Premkumar and Mark A. Jones. "The Use of Description Logics in
KBSE Systems", ACM Transactions on Software Engineering and Methodology.
v6n2 (April 1997): 141-172.
Dutta, Soumitra, Luk N. Van Wassenhove and Selvan Kulandaiswamy.
"Benchmarking European Software Management Practices". Communications of
the ACM. v41n6 (June 1998): 77-86.
Fayad, Mohamed. "Software Development Process: A Necessary Evil",
Communications of the ACM. v40n9 (September 1997): 101-103.
Fowler, Martin and Kendall Scott. UML Distilled: Applying the Standard
Object Modeling Language. Reading, MA: Addison Wesley Longman, Inc., 1997.
Fry, Christopher, and Henry Lieberman. "Programming as Driving: Unsafe
at Any Speed?", Proceedings of CHI'95 Mosaic of Creativity, May 7-11, 1995. CHI'
Companion 95, Denver, Colorado.
Gause, Donald C. and Gerald M. Weinberg. Exploring Requirements:
Quality Before Design. New York, NY: Dorset House Publishing, 1989.


Ginsberg, Mark P. and Lauren H. Quinn. "Process Tailoring and the
Software Capability Maturity Model", CMU/SEI-94-TR-24, November 1995.
Glass, Robert L. "Short-Term and Long-Term Remedies for Runaway
Projects", Communications of the ACM. v41n7 (July 1998): 13-15.
Hamu, David S. and Mohamed E. Fayad. "Achieving Bottom-Line
Improvements with Enterprise Frameworks", Communications of the ACM. v41n8
(August 1998): 110-113.
Haythorn, Wayne. "What is object-oriented design?". Journal of Object
Oriented Programming. (March-April 1994): 68-78.
Headington, Mark R. and David D. Riley. Data Abstraction and Structures
Using C++. Lexington, MA: D.C. Heath and Company, 1994.
Herbsleb, James, David Zubrow, Dennis Goldenson, Will Hayes, and Mark
C. Paulk. "Software Quality and the Capability Maturity Model", Communications
of the ACM. v40n6 (June 1997): 30-40.
Higuera, Ronald P. and Yacov Y. Haimes. "Software Risk Management",
CMU/SEI-96-TR-012, June 1996.
Hines, Braden E. and Michael Deck. "Cleanroom Software Engineering for
Flight Systems: A Preliminary Report", Proceedings of IEEE Aerospace Conference,
Snowmass, Colorado, Feb. 1-7, 1997.
Hollenbach, Craig, Ralph Young, Al Pflugrad, and Doug Smith.
"Combining Quality and Software Improvement", Communications of the ACM.
v40n6 (June 1997): 41-45.
IBM, Inc. "Cleanroom Software Engineering".
. June 1998.
Jarzabek, Stan and Riri Huang. "The Case for User-Centered CASE Tools",
Communications of the ACM. v41n8 (August 1998): 93-99.
Kazman, Rick, Mark Klein, Mario Barbacci, Tom Longstaff, Howard Lipson
and Jeromy Carriere. "The Architecture Tradeoff Analysis Method", Software
Engineering Institute, Pittsburgh, PA, August 1998.
Kemerer, Chris F. "Progress, Obstacles, and Opportunities in Software
Engineering Economics", Communications of the ACM. v41n8 (August 1998): 63-66.
Kiczales, Gregor. "Beyond the Black Box: Open Implementation"
. January 1996.


Full Text

PAGE 1

SOFTWARE PROCESS TO SUPPORT SCHEDULES, PRODUCTIVITY, AND MAINTAINABILITY WITHIN A SOFTWARE PROJECT

by
Jennifer Kelnosky
B.A., University of Tennessee, 1992

A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Computer Science
1998

PAGE 2

The thesis for the Master of Science degree by Jennifer Kelnosky has been approved.

Date

PAGE 3

Kelnosky, Jennifer (M.S., Computer Science Engineering)
Software Process to Support Schedules, Productivity, and Maintainability within a Software Project
Thesis directed by Professor Jody Paul

ABSTRACT

The objectives of this thesis are to provide information regarding the use of a formal software process and to describe how using a software process will increase productivity, help meet schedule deadlines, and improve maintainability. The phases of a general software process are presented in some detail, including requirements engineering, assessment and problem elicitation, design, implementation, and monitoring and data collection. Two common formal processes, Capability Maturity Model and Cleanroom Software Engineering, are described and used to demonstrate the benefits that are achieved. Barriers to instituting a software process are presented, with examples from these two process models. Analysis of these processes acknowledges the barriers and offers solutions within the models to prevent or overcome them. This thesis describes how use of a process can increase productivity, improve maintainability, and help meet milestones within each stage or level of the software project.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Signed: Jody Paul

PAGE 4

CONTENTS

1. Chapter 1: Introduction 1
1.1 Introduction into the World of Software Process 1
1.2 Software Process Defined 2
1.3 Software Metrics For Software Process 3
1.4 Following Sections 4
1.5 Band-Aids For Failing Project 4
1.6 What The Experts Say About Software Process 6
1.7 Approaching Software Process in an Existent Project 8
2. Chapter 2: A Formal Process 11
2.1 Requirements Engineering 11
2.1.1 Ambiguities Within Language 15
2.1.2 Types Of Questions For Requirements 16
2.1.3 Formal Languages Used To Describe Requirements 17
2.1.4 The People Involved In Requirements Engineering 18
2.1.5 The Communication Throughout The Process 18

IV

PAGE 5

2.1.6 Testing Requirements 20
2.1.6.1 Black Box Method 20
2.1.6.2 Open Implementation Method 22
2.1.7 Requirements Engineering Summary 22
2.2 Assessment and Problem Elicitation 23
2.2.1 Risk Management Defined 24
2.2.2 Risk Assessment 25
2.2.3 Risk Control 27
2.2.4 The People Involved In Problem Elicitation And Assessment 27
2.2.5 Summary Of Assessment And Problem Elicitation 28
2.3 Design 28
2.3.1 Object-Oriented Technology In Design 31
2.3.2 Trade-offs Within Design 35
2.3.3 Tools To Support The Design Process 37
2.3.4 Design Languages 37
2.3.5 Summary Of Design 38

V

PAGE 6

2.4 Implementation 39
2.4.1 Methods For Developing Increments: Top-down Design 39
2.4.2 Tools To Support Implementation Procedures 41
2.4.3 Strategies For Implementing Successful Applications 42
2.4.4 Frameworks Used For Reuse Measurement 44
2.4.5 The Implementation Team 44
2.4.6 Summary Of The Implementation Phase 45
2.5 Monitoring And Data Collection 45
2.5.1 Methods To Support The Collection Of Evaluative Data 46
2.5.2 Types Of Findings Using Monitoring And Data Collection 49
2.5.3 Tools To Monitor Data 50
2.5.4 The Whole Team Is Affected By The Software Progress 51
2.5.5 The Summary Of Monitoring And Data Collection 51
3. Chapter 3: Cleanroom Software Engineering 53
3.1 Cleanroom Software Engineering 53
3.1.1 Myths About Cleanroom Software Engineering 54

VI

PAGE 7

3.1.2 Principles Used In Cleanroom Software Engineering 55
3.1.3 Practices Involved In Cleanroom Software Process 56
3.1.4 Barriers Presented When Using Cleanroom 64
3.1.5 Benefits When Using Cleanroom 65
3.1.6 The Summary Of Cleanroom Software Engineering 66
4. Chapter 4: Capability Maturity Model 67
4.1 Capability Maturity Model 67
4.1.1 Immature Vs. Mature Processes 68
4.1.1.1 Levels Within The CMM 69
4.1.1.2 Level 1: Initial Level 69
4.1.1.3 Level 2: Repeatable Level 70
4.1.1.4 Level 3: Defined Level 72
4.1.1.5 Level 4: Managed Level 73
4.1.1.6 Level 5: Optimized Level 74
4.1.1.7 Barriers Of The CMM 75
4.1.1.8 Benefits Of The CMM 76

VII

PAGE 8

4.1.2 The Summary Of Capability Maturity Model 77
5. Chapter 5: Summary 79
5.1 Summary of Software Process 79
6. Conclusion 82
Appendix: Annotated Bibliography 84
References 88

VIII

PAGE 9

FIGURES

Figure
2.1.1: Initial requirements for elevators 13
2.1.6.1: Black Box Example 21
2.2.2.1: Risk Exposure Table 27
2.3.1.1: Use Case Example 33
2.3.1.1: CRC Card Example 34
2.5.1.1: Defects vs. Process Improvements 48

IX

PAGE 10

1. Chapter 1

1.1 Introduction into the World of Software Process

Traditionally, software projects are immediately put into development without analysis, design, and effective communication about the users' needs or wants. Without analysis or design, new features or implementation that should occur in a first release become challenging tasks. As a result, periods of crisis, along with frustrated personnel and customers, are prevalent in the workplace. Productivity declines as the developers spend their time maintaining rushed products that are already released. Then come the legal repercussions. Software organizations that have failed to incorporate a software process can be more susceptible to a lawsuit (Krasner 1998). Software companies face legal battles as their contracts are not met to complete satisfaction. The growing prevalence of the Internet, and the coming and going of software engineers, suggests another reason for putting internal procedures in place (Barsh 1998).

Frederick P. Brooks, in Mythical Man Month, discusses how little software engineering has changed in 20 years. He continues to address the importance of organization and appropriate skills for the management team. He predicts that software engineering will remain in what he calls "the tar pit" for a long time (Brooks 1995). Higher productivity,

PAGE 11

less maintenance, and met timelines of a software project can be achieved through the use of a software process model rather than the ever-recurring method of "get the job done fast". Instituting a software process offers a solution to these serious problems within the software industry. In recent times, some software projects have become subject to a formal software process model that instills discipline upon creation of the product. Software architectures are put in place to provide organization to the software components (Kogut & Clements 1998). The idea of a software process can be effective, except that organizations as a whole do not always agree on the process, or not all people participate in the process.

1.2 Software Process Defined

Many different people define several stages in many different ways, but most often they are discussing the same phases in the software development process. Often ambiguity causes difficulty in understanding, so this paper will focus on certain phases (Zave & Jackson 1997). The phases, which will be discussed in section 3, were given in the article "Assessing Process-Centered Software Engineering Environments". They are listed as: Requirements Specification, Assessment and Problem Elicitation, (Re)Design, Implementation, and Monitoring and Data Collection. These phases are what can be defined as a software engineering process (Ambriola, Conradi,

PAGE 12

& Fuggetta 1997).

1.3 Software Metrics For Software Process

How do we know that software processes actually work? Software production is considered practically immeasurable, as it is challenging to measure something with so many components and little or no proof of a process. However, researchers continue to address this issue (Kemerer 1998). A survey was done with software development organizations to answer how they measure their software development. Key features in software metrics and their percent of use within software industries were: lines of code (48%), scheduled tasks completed on time (48%), test coverage (38%), resource adequacy, fault density (28%), system operational availability (28%), time to market (28%), and schedule, quality, and cost trade-offs (28%). Overall project risk is another measurement used by some of the best-practices organizations (Anonymous 1998). This is considered a valuable piece of evidence in situations where there might possibly be some sort of litigation brought upon a company (Krasner 1998). Time to market and scheduled tasks completed on time will be used to analyze a software engineering process in this paper. Schedules tend to be a major component in the development process (Anonymous 1998).

In a reengineering case study, participants were asked to rate each stage of the reengineering project by analyzing the effort to which the team has attempted to complete stages. They measured overall

PAGE 13

success, cost reduction, customer satisfaction increase, work productivity, and defects reduction (Teng, Jeong, & Grover 1998). In another case study, which will be discussed in more detail, the focus was on maintainability metrics (Brown, Carney, & Clements 1995).

1.4 Following Sections

In the following sections I will discuss a few of the many topics within the software engineering process. In the first section, after describing the sort of Band-Aids organizations use to recover failing projects, I will let you know what the experts say about software process. The next section will take what the experts say into further detail by describing a general analysis of the software process. Following the general description, I will discuss and analyze two approaches to the software process that show that less maintenance, met timelines, and higher productivity can be achieved. These two detailed processes will be the Capability Maturity Model and the Cleanroom software engineering process. The fourth section will summarize the analysis of the software engineering process, followed by the conclusion section that highlights its benefits.

1.5 Band-Aids For Failing Project

A research study was done to get software development organizations to describe what they do to recover a failing project. The importance of the

PAGE 14

schedule in a software development process was again acknowledged. The number one attempt to recover a project was to extend the schedule (Glass 1998). This same attempt is considered a negative outcome within a project because of the possibility that it will cause legality issues (i.e., not meeting contracted timelines) (Krasner 1998). A second attempt, instilling better management procedures, may be effective, but why not instill those at the start? In contrast to the number one attempt, some organizations will reduce the project scope to meet the deadlines (Anonymous 1998). Once again, legal contracts bind organizations to what they promised to include in the project (Krasner 1998). Some will try to change the current technology that they are using, even though it has generally been accepted; this poses the same problem as attempting to reduce the project scope (Krasner 1998). A more common practice, such as putting more people on the job, is considered by 53% of the organizations (Glass 1998). However, this also has a negative effect. More people only means more learning curves, which means more time, and therefore defeats the whole purpose of the attempt. Increasing manpower on a project with an approaching deadline can extend that deadline more than simply pushing it back enough for the currently assigned engineers to complete the work (Brooks 1995). Requesting more funds is a little less common approach. However, if attempted, this approach can only lead to more difficulties if the project is a failure (Krasner 1998). Only 25% of

PAGE 15

organizations thought to instill better development methodologies. However, this benefit might be better suited to the start of a project; if that is not possible, it introduces negative effects as well. Methodologies need to be defined, and applying one in the middle of a crisis brings more challenges as developers and organizations attempt to adhere to the one chosen. Another attempt to recover a project was to apply pressure on suppliers by threat of litigation (Glass 1998). Litigation threats can in return be made on development organizations for several reasons; Criminal Act, Liability, and Breach of Contract are only a few of these litigation threats (Krasner 1998). Not many organizations opted to abandon their project, which may presuppose that the Band-Aids may be beneficial (Glass 1998). Litigation appears to be becoming more of an issue when attempting to Band-Aid a failing project. It is suggested that using software metrics and a "mature" software process will ward off the chances of negligence as well as enhance production (Krasner 1998).

1.6 What The Experts Say About Software Process

Experts, software engineers and managers of development, have not only approached the topic of a formal software engineering process, but also followed through by implementing one within the workplace. One of the more influential writers was Frederick P. Brooks, author of Mythical Man Month. He often referred to the software engineering process as a "tar pit".

PAGE 16

He discussed the many flaws with the scope of a software project as well as the methods to improve these flaws (Brooks 1995). Dr. Paul C. Clements from the Software Engineering Institute believes that formal methods are becoming an essential guide to the software industry. He describes a method in his article "Formal Methods in Describing Architectures" that was presented at the Monterey Workshop in 1995 (Clements 1995). This method, called Modechart, will be discussed later in this paper as a component within a phase of the general software process. In another article, Clements and his co-author, Paul Kogut, discuss the rising influence of software architectures in the software industry. The software community is moving towards higher levels of design and methods (Kogut & Clements 1998).

Process-centered Software Engineering Environments (PSEEs) are gaining interest as a way to support the development process. They are defined as a process model, which describes the development process, the resources and their interaction, as well as the tools and system that will be used. The authors of "Assessing Process-Centered Software Engineering Environments" also agree that PSEEs are becoming a reality in the software world. It is a means of communication and organization (Ambriola, Conradi, & Fuggetta 1997). Other experts go so far as to focus their importance on Knowledge-Based Software Engineering (KBSE) systems to assist the software processes in areas such as requirements, analysis, and management. Again, this is a formal

PAGE 17

method for tracking systems (Devanbu & Jones 1997). Software process has made the most progress within the software industry. Many organizations have changed their ideas of software development. Instead of the old way of thinking, "code" and then look at the whole picture, organizations are beginning to look at the whole picture first and then the processes that will maintain their views throughout the scope of the project (Kemerer 1998). So many experts suggest that a formal software process is important to a successful project. But still, so many organizations fail to use one (Krasner 1998). Some evaluations of organizations suggest that they are too immature to understand the significance of the software process. It is such a large transition that some organizations cannot afford the full-time focus that is necessary or the extenuating costs (Ambriola, Conradi, & Fuggetta 1997).

1.7 Approaching Software Process in an Existent Project

Earlier it was suggested that instilling a process in an existent project would be considered a Band-Aid (i.e., changing methodology midstream through a project) (Glass 1998). This was discussed as having a negative outcome, but for some projects, it may be necessary as a means to gain control (Krasner 1998). Suggesting a complete overhaul of changes may cause more hassles, but a more manageable approach with incremental changes can be more successful (Teng, Jeong, & Grover 1998). A case study was done on the maintainability of large software systems to describe certain

PAGE 18

procedures and their effectiveness. One of the proposed questions within this case study was "How well does the software lend itself to change?" Developers were asked to participate in a "change exercise", in which they were to find all the components that were subject to change. These were compiled together into an "active design review". The expectation of this active design review was to gain insight into the whole system design rather than just the changes that might be made. Another goal was to find flaws within documentation (Brown, Carney, & Clements 1995).

This article asserts that the process for the maintainability of a large system should consist of the following four pieces: code inspection process, test procedures, build and release practices, and change request procedures. Code inspection should be done frequently for bug fixes and enhancements. Test procedures should consist of the same procedures used during the development process (Brown, Carney, & Clements 1995). However, though not mentioned by the authors, test procedures should also include new procedures for the enhancements. Build and release practices were noted as a process that should be "carefully defined"; it is important to track the releases, the fixes, and the resources that go with them. Change request procedures were also considered an important process (Brown, Carney, & Clements 1995). Keeping track of, and documenting, all change requests is important, not only for the sanity of the developers, but for legal reasons as well (Krasner 1998). Several challenges lie

PAGE 19

ahead in the maintainability environment. For instance, consideration must be made that the system will have to interface with new systems. Also, environments might change for the developers and new methods may arise. Another difficulty is the use of third-party vendors; maintaining the environment means maintaining what other vendors are doing with their environment (Brown, Carney, & Clements 1995). All these challenges reflect the necessity of a formal software process even in the event of an existent software project.

PAGE 20

2. Chapter 2

2.1 Requirements Engineering

The purpose of requirements engineering is to gather information about the project's objectives so that clients and developers completely understand what is to be created. The requirements process, if successful, can reap the benefits of clear goals, which, in turn, offer on-time delivery of the product, future maintainability, and increased production. This phase begins "bridging the gap" between clients' and developers' speech to eliminate some of the misunderstandings that might occur. Different forms of documentation are produced as a result of specifically defining the requirements (Ambriola, Conradi, & Fuggetta 1997).

This phase is comprised of many challenges. The growing interest in requirements engineering promotes many different forms of expressing it. As stated, the purpose of requirements is to know what to create. The second purpose is to decrease the language barrier between clients and developers. If this cannot be accomplished, the software product may be useless to the client. The product will not be what they want or will not conform to their vision of the way they thought it would work. As the most important phase, it is the most detrimental if done wrong (Brooks 1995).

PAGE 21

So, if this is such a critical phase, how should it be done? There are a few different approaches to requirements engineering. It is well known by developers that once one begins to define requirements, it seems like a never-ending fiasco. Additions to the expectations continue to increase. When do they stop? The first approach is to "freeze" the requirements phase so that the development phase can commence (Gause & Weinberg 1989). In contrast to this approach, requirements can continue iteratively throughout the software process. This is described by Frederick Brooks as a more effective approach: any additions made to requirements after the next phase has begun can be analyzed in the next cycle (Brooks 1995). Rapid prototyping supports the requirements by giving a conceptual understanding of what the product will do. This is another method of diminishing misunderstandings between clients and developers (Gause & Weinberg 1989).

During the initial review of the requirement specifications, details need to be understood and abstractions defined; implementation should not be examined at this point (Gause & Weinberg 1989). Headington and Riley note "... the specification describes the behavior of the data type without reference to its implementation." In other words, understand what the product is supposed to do, not how it is going to be done (Headington & Riley 1994). And, in addition, understand the client's requirements (Teng, Jeong, & Grover 1998). Gause and Weinberg give some

PAGE 22

examples for elevator requirements.

Requirements For An Elevator
1. There will be only one control/display panel per elevator car.
2. The elevators will travel vertically only.
3. All New York State elevator codes must be observed.

Figure 2.1.1: Initial requirements for elevators.

These are not complete. Assumptions can still be made, therefore questions must be asked. For example, the requirement that states "The elevators will travel vertically only" could have been assumed, yet since it is written, questions can be derived from it. Some questions that may arise are "do elevators need to travel vertically for legal purposes?" or "do elevators need to travel vertically because that is all that is known?". The last question presents information to the developer about what might occur in the future (Gause & Weinberg 1989). A requirement should be descriptive and specific. Requirements should evolve with the project, and every possible hypothetical situation should be discussed as requirements are created. Envision situations more than once to make sure that they are appropriate for consideration (Kazman, et al. 1998). The documentation of requirements should continue to evolve as additions are made. Tracking additions and changes is also

PAGE 23

helpful as a memory tool for why things changed or additions were made. In the software process, this document will be the first created and the last completed (Brooks 1995). A defined language may assist developers when discussing requirements amongst each other; however, as will be discussed, several conflicting situations may arise between the client's description of a requirement and the developer's analysis of that requirement. With the growing interest in requirements engineering, many different terms are used to express the same method or idea.

Zave and Jackson discuss four items for consideration when looking at the process of requirements engineering. Requirements should be discussed within the guidelines of the environment. Environment is used in the context of the system's machines that are to be built (Zave & Jackson 1997). Requirements should not describe how the project will be built, but what it should do in the most descriptive detail. This descriptive detail should focus on the actions that are related to the environment, the machine, and the interactions between the two (Zave & Jackson 1997). And finally, requirements engineering should use its domain knowledge as a way to refine requirements so that they may be described as capable of being built (Zave & Jackson 1997). It is important to know and understand the structure and organization of a requirement for the application (Buschmann 1993). There is much more than these four considerations that makes up the process of
requirements engineering. Authors Gause and Weinberg give a realistic and humorous example of some of the problems encountered in the creation of a product called the Cockroach Killer. This product was developed by a man in New York and advertised in the classifieds. He charged five dollars, and after he received the money, he would send a kit with two blocks as the pieces. Two instructions were given in the package: "1) Place cockroach on block A. 2) Hit the cockroach with block B." Evidently, not all the methods to kill the roach are listed (Gause & Weinberg 1989). How do you get it on the block? How do you clean the mess once the roach has actually been killed? Does this really satisfy what the buyer believed to be a cockroach killer? Is it "user friendly"?

2.1.1 Ambiguities Within Language

Ambiguities can be considered the most common and the most difficult problem to tackle within requirements engineering. They remain prominent throughout the software engineering process. It is very difficult to grasp the real-world meaning of a requirement term. Even in describing the software process itself, it is difficult to keep track of the terms and their appropriate meanings for the context in which they are written (Zave & Jackson 1997). Gause and Weinberg, again, give an interesting example of the difficulties with ambiguity. When asking three different individuals to build
a shelter to protect humans, these three individuals might build an igloo, a castle, or a space station, depending on their idea of what a protective shelter might entail (Gause & Weinberg 1989). Ambiguity cannot be removed completely from the requirements phase. Human language proposes many definitions for the same word, so language itself is plagued with ambiguity (Brooks 1995). Some possible solutions to the problem are to draw maps, ask many questions, and record any misinterpretations so that they may later be refined (Gause & Weinberg 1989).

2.1.2 Types Of Questions For Requirements

Ask questions when addressing requirements to remove any misunderstandings or vagueness. Direct questions are necessary to find out what the client wants. Ask questions using a decision tree. Following the order of the tree, as the client responds to each question, draw the consequences so that the client has a very descriptive view and it can be determined whether or not that is what they meant by their response (Gause & Weinberg 1989). Showing an implementation of a requirement has its advantages and disadvantages. At the very least, it provides quick responses to anything that may be in question in a requirement. It helps to eliminate ambiguity. However, it may "over prescribe the externals": the drawing may extend above and beyond what the client meant or wanted (Brooks 1995). Questions regarding future changes should also be thought through and asked. For instance, questions
about what changes will be made and how they will be handled.

2.1.3 Formal Languages Used To Describe Requirements

A suggested approach to understanding requirements is to use a Process Specification Language, or PSL. This language provides support to the requirements phase. The language is used to describe "concepts of role, organization/group, business rules, flows of information, and time constraints" (Ambriola, Conradi, & Fuggetta 1997). This component of the requirements phase should be maintained and kept consistent throughout the software process. The language can be used not only for defining specifications, but also for active prototyping (Ambriola, Conradi, & Fuggetta 1997). This approach can be an effective means of reducing ambiguity. Because the main goal of formal definitions is precision, it is easier to identify different stages in the application and any missing specifications that need to be addressed. The use of this language assists in turning requirements into an implementation (Brooks 1995). Another specification language tool to assist in the formalization of thought patterns regarding a project is Modechart. It is a verification tool for making sure that specifications are consistent with their meanings. It provides information about the specifications of the system and supports model-checking verification. It provides an analysis environment helpful to the developer. This tool can also be used in the requirements testing stage of the process. As part of the specification, it provides modes and their
relationships (Clements 1995).

2.1.4 The People Involved In Requirements Engineering

Who are the people involved in this stage of the process? First and foremost, the typical user should be involved (Gause & Weinberg 1989). Communication with the users during this stage is crucial. Clients, in this case the people sponsoring the production of the project, should be included in this phase so that they are aware of all the constraints on the software production (Krasner 1998). And finally, an architect, someone whose main purpose is to clarify, recognize, and understand the user's wants, should also be involved (Brooks 1995).

2.1.5 The Communication Throughout The Process

A suggested approach, after defining a process, is to develop a project plan to follow. According to this project plan, incorporate meetings for continued discussions of the project as a whole. This approach can become difficult, as meetings grow complicated while the attendees strive for an understanding of the project, its problems, and its solutions. Meetings should be outlined to propose the topics of discussion. Several rules govern successful meetings. For instance, meetings should consist of a small group of appropriate people (Brooks 1995). An agenda should be developed beforehand and not deviated from during the meeting. "Emergencies" should be withheld and addressed at a different time.
Otherwise, the meeting can be filled with random tangents about what needs to be done now (Gause & Weinberg 1989). Meetings should address one issue and involve the appropriate individuals to address that issue. With the growing development of intranet sites in organizations, an intranet would be an appropriate place for meeting minutes to be published for those interested in the contents of the meeting. It is also a place for working documents and charts, which, as an end result, can simplify the final delivery documents (Gause & Weinberg 1989). Another suggested approach for communication via meetings is to label the meeting. For instance, label a meeting as a status-update meeting or a problem-action meeting. This is a good way to distinguish what type of discussion is going to take place before attendance (Brooks 1995). Another method to approach communication, with respect to clients, is to understand the clients. This has been a theme throughout this paper for many reasons. It is difficult for developers to understand the client's terminology and vice versa. However, it is the developer's job to learn what it is the client wants. Many times, developers will attempt to address the client using their own terminology. Clients may become frustrated or may even feel belittled. Also, when developers do not understand the client's terminology, they may believe the client does not know what they want. This may be true, because clients are not used to describing what they want in terms of
software. It is the developer's job to assist them with their descriptions (Gause & Weinberg 1989). Some of the above methods (i.e., questioning schemes, drawing pictures) can be necessary to heighten the communication between clients and developers (Gause & Weinberg 1989).

2.1.6 Testing Requirements

One method to ensure the validity of requirements is to provide test cases for them. Test cases do not ensure that a program is rid of "bugs". Intensive testing only shows that a program meets its specification (Brooks 1995). Two approaches to develop test cases and move them into the design phase are the Black Box Abstraction Method and the Open Implementation Method.

2.1.6.1 Black Box Method

The Black Box Method has been the primary method used to assist in software design. Its main attraction is the reusability of components. However, in this section, it will be shown how its use can also be valuable within the requirements phase.
PUSH "1" ON CONTROL PANEL  -->  [ BLACK BOX ]  -->  DOORS OPEN ON FLOOR "1"

Figure 2.1.6.1: Black Box method for development of an elevator requirement.

The Black Box Method is the approach by which a function's implementation is hidden. The client is able to see what is going into the black box and what is coming out, but has no idea what transpired while it was in the black box (Kiczales 1996). Figure 2.1.6.1 shows what a black box looks like in regard to development of an elevator (Gause & Weinberg 1989). The implementation in the black box can process the request any number of ways. The important part here is that the result is correct; for instance, that the user ended up on floor "1" rather than floor "3". Use this approach to show what the function is going to do and display the results of that function while hiding the implementation (Gause & Weinberg 1989). Begin using this approach while asking "what if" questions. For instance, ask questions about what the results would be if certain data goes into the black box. Clients will be able to view and address the events and results without knowing how they got to those results (Kiczales 1996). This can be advantageous to the testing of requirements because if the event and the results are correct, then the functionality behind the scenes must also be correct. The ability to establish a test case shows that the requirement is useful and can be tested.
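The black-box test of Figure 2.1.6.1 can be sketched in code. This is a minimal, hypothetical illustration, not part of the thesis: the function name `request_floor` and its behavior are assumed here purely to show that correctness is judged on inputs and outputs alone, never on the hidden implementation.

```python
# Black-box testing sketch (hypothetical elevator example): the test sees
# only what goes into the box and what comes out of it.

def request_floor(pressed_button):
    """Implementation hidden from the client; it could dispatch any number of ways."""
    # ... internal scheduling, motor control, etc., all invisible to the tester ...
    return f'DOORS OPEN ON FLOOR "{pressed_button}"'

def black_box_test(inputs_to_expected):
    """Check each input/output pair; correctness is judged on results alone."""
    for pressed, expected in inputs_to_expected.items():
        assert request_floor(pressed) == expected, f"wrong result for {pressed}"
    return "all observable behavior correct"

print(black_box_test({
    "1": 'DOORS OPEN ON FLOOR "1"',
    "3": 'DOORS OPEN ON FLOOR "3"',
}))
```

Because the test names only events and results, the implementation inside the box can change freely without invalidating the requirement's test case.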
This is also a good method to keep the flow of the requirements phase (Gause & Weinberg 1989).

2.1.6.2 Open Implementation Method

In contrast to the traditional Black Box Method, the Open Implementation Method allows the client to view the functionality. This method focuses on software design and software reuse. It suggests that allowing clients to participate in the implementation of a function would allow software to be more reusable and less difficult to change (Kiczales, et al. 1997). In other words, once it is established how a client views the functionality, the same approach can be reused in developing other functions. If clients better understand the functionality and its constraints, it is believed that future changes will not be so difficult.

2.1.7 Requirements Engineering Summary

The requirements engineering phase is fraught with many activities. It is not only the initial phase, but also continues throughout the software process as additions and changes are made at each cycle. Defining requirements and knowing what the project is going to do can assist in lowering future maintenance, increasing productivity, and meeting the appropriate timelines, as clients and developers will know what needs to be done and when it is to be completed.
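The contrast between the Black Box Method and the Open Implementation Method described above can be sketched as follows. This is a hypothetical illustration, not drawn from the cited papers: the class names and the "storage strategy" example are assumptions made here to show the idea of a visible, client-influenced implementation choice alongside an unchanged functional interface.

```python
# Black box: the implementation strategy is fixed and hidden.
# Open implementation: the client may view and influence the strategy
# through a secondary interface, without changing what put/get mean.

class BlackBoxStore:
    def __init__(self):
        self._data = {}               # strategy hidden; client sees only put/get

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

class OpenImplementationStore(BlackBoxStore):
    def __init__(self, strategy="hash-table"):
        super().__init__()
        self.strategy = strategy      # visible, client-selectable implementation policy

    def implementation_report(self):
        # Secondary interface: an open view into how the work is being done.
        return f"storing {len(self._data)} entries using a {self.strategy}"

store = OpenImplementationStore(strategy="hash-table")
store.put("floor", 1)
print(store.get("floor"))             # same functional interface as the black box
print(store.implementation_report())  # the part a black box would never expose
```

The functional interface (`put`/`get`) is identical in both classes; only the open variant lets the client see and discuss the implementation choice, which is the reuse and changeability benefit the method claims.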
2.2 Assessment and Problem Elicitation

The goal of this phase is to examine the progress of the process and identify any problems that might occur as a result of the defined requirements (Ambriola, Conradi, & Fuggetta 1997). A study of European software organizations revealed that approximately 74 percent of them engage in some process of assessing a project's risks, benefits, and viability (Dutta, Wassenhove, & Kulandaiswamy 1998). Identifying potential risks, called risk management, has made significant progress in the software industry (Kemerer 1998). No project is free from risk. Without identifying risks, most projects never meet their timelines (Lister 1993). So, what would be considered a risk? In the software industry, a risk can be anything in a software project that presents a loss of control or may prevent or inhibit a project's overall success (Lister 1997). The way to avoid the potential lawsuits from risks gone bad is to analyze and prepare for the potential hazards that may occur (Krasner 1998). Without doing this, organizations tend to promise something without knowing the implications, or start to code something quickly in hopes that it will evolve as the client decides everything they want (Boehm 1991). This can cause a project to encounter several unexpected problems. Successful projects are those that incorporate risk management into their process (Lister 1997). Those organizations that do not incorporate risk management often find themselves developing what author Marvin J. Carr calls a risk-aversion culture.
Organizations will commend crisis management and penalize anyone who might mention the potential risk in the first place. Another common pitfall of a process without risk management is "risk-management season". Again, if a crisis occurs or a client is unhappy, managers notice the problems and compile a list of the top 10 problems, calling them risks. They will attempt to regain control of the project by focusing on these problems without looking into the risks still ahead, awaiting them for the next "risk-management season" (Carr 1997).

2.2.1 Risk Management Defined

Risk management is a process of identifying and being continually cognizant of the potential risks within the software project. One of the common practices, also used within requirements engineering, is the use of the decision tree. Another relationship this phase shares with requirements engineering is that it is also incremental throughout the process. Understanding the evolution of risks is an effective way to think about risk management (Higuera & Haimes 1996). Risks can be identified as the process moves through development (Boehm 1991). Answering questions such as "if we do this, the risk will be ..." helps to find what the end result might be and what will need to be done. The main goal of risk management is to complete an early evaluation of the risks associated with four stages of software: acquisition, development, integration, and deployment (Higuera &
Haimes 1996). Two main stages within risk management are risk assessment and risk control (Boehm 1991).

2.2.2 Risk Assessment

Risk assessment includes developing lists of potential risks that may compromise the project's goals. Decision trees are used, and the history of risks in past projects is analyzed. This is considered risk identification. Risk analysis examines all risks as to how large a loss each would be and what the possibility is that it will occur. Interaction occurring between risks is also analyzed. Different models, such as cost models or performance models, are used to support the analysis. The next step is to take all risks and prioritize them using techniques such as cost-benefit analysis or the group consensus technique (Boehm 1991). Identifying risks using checklists is a valuable management tool. Not only should all the potential risks be identified, but next to each risk, a technique to avoid or control that risk should be identified as well. All sections of the system should be included, such as technology, hardware, software, people, schedule, and cost (Higuera & Haimes 1996). List items tend to contain risks in areas that have had little research and are little understood (Boehm 1991). Some areas that are often not thought of are types of contracts, subcontractors, personnel management, quality of attitude, cooperation, politics, customer, morale, and overall project organization
(Higuera & Haimes 1996). The checklist can be used later in the process to track the status of each item in relation to the project's progress (Boehm 1991). After using and evaluating the checklist for each risk's exposure level, risk prioritization can then be invoked. One challenging aspect of determining exposure is accurately putting in the numbers for risk probability and its effects. Risk analysis involves several steps, such as prototyping, benchmarking, and simulation, to give a more accurate prediction of the risk exposure. Figure 2.2.2.1 shows a risk exposure table based on the probability and loss caused by unsatisfactory outcomes (Boehm 1991). The probability of the unsatisfactory outcome multiplied by the loss caused by it gives the amount of risk exposure the outcome contains.
Unsatisfactory outcome                                       Probability  Loss  Risk exposure
A. Software error kills experiment                           3-5          10    30-50
B. Software error loses key data                             3-5          8     24-40
C. Fault-tolerant features cause unacceptable performance    4-8          7     28-56
D. Monitoring software reports unsafe condition as safe      5            9     45
E. Monitoring software reports safe condition as unsafe      5            3     15
F. Hardware delay causes schedule overrun                    6            4     24
G. Data-reduction software errors cause extra work           8            1     8
H. Poor user interface causes inefficient operation          6            5     30
I. Processor memory insufficient                             1            7     7
J. Database-management software loses derived data           2            2     4

Figure 2.2.2.1: Risk Exposure Table for Unsatisfactory Outcomes. This table is based on a Satellite Experiment project discussed by Barry W. Boehm (Boehm 1991).
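The exposure arithmetic behind the table can be sketched directly: risk exposure is the probability of the unsatisfactory outcome multiplied by the loss it causes, and the products are then sorted to prioritize. The sketch below reuses the single-valued rows of Figure 2.2.2.1 as illustrative inputs.

```python
# Risk exposure sketch: RE = probability of unsatisfactory outcome x loss caused.
# Probability/loss pairs below are taken from single-valued rows of the table.

risks = {
    "Monitoring software reports unsafe condition as safe": (5, 9),
    "Hardware delay causes schedule overrun": (6, 4),
    "Poor user interface causes inefficient operation": (6, 5),
    "Processor memory insufficient": (1, 7),
}

# Compute exposure for each risk.
exposure = {name: prob * loss for name, (prob, loss) in risks.items()}

# Prioritize: highest exposure first, as risk prioritization prescribes.
for name, re_value in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{re_value:3d}  {name}")
```

Run against the table's values, the monitoring-software risk (5 x 9 = 45) correctly ranks above the schedule-overrun risk (6 x 4 = 24), even though the latter is more probable.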
2.2.3 Risk Control

Risk control involves several aspects, such as risk-management planning, risk resolution, and risk monitoring. After all the risks have been identified, analyzed, and prioritized, it is important to get them under control. Each risk should have a risk management plan (Boehm 1991). For instance, one risk, after evaluation, may have a risk management plan such as prototyping in order to gain control of it. Prototyping would provide a way to understand the risk and its components. Schedules should incorporate a time period for the risk management planning. After development of the risk management plans for each risk, it is then time to decide what the resolution will be for each risk, and again incorporate time in the schedule to implement these plans. Monitoring each risk throughout the process through review meetings is another approach to risk control. This avoids any surprises that might occur (Boehm 1991).

2.2.4 The People Involved In Problem Elicitation And Assessment

The people involved in risk management should consist of a team. One person cannot identify all risks alone. The team should consist of two parts: supplier and customer. Project managers should not be the only individuals involved. Team leads and other individuals important to the project should also be included. Customers as well as stakeholders should be aware of and included in the process of risk management (Higuera & Haimes
1996). Managers should work actively with customers, informing them of the potential risks. They should be experts when it comes to the risks and successes that are involved in the development of a product (Krasner 1998).

2.2.5 Summary Of Assessment And Problem Elicitation

Continually meeting the needs for risk management in the software process makes it possible for schedules to be more accurate. Therefore, schedules will be met more often, as they can accommodate the risk factors involved and allow time for analysis of these risks. Managers have more control over a project when the element of surprise is removed, because all risks have been identified and are under control.

2.3 Design

The main objective of the design phase is to derive abstractions of the requirements into understandable structures. These structures show the components of the project and their interactions with each other. One of the more challenging efforts in this phase is the decision on what type of methodology will be used to design the project. It would be difficult to cover all aspects of the design phase, so the focus will be on such parts as object-oriented technology, design languages, trade-offs in architecture, and tools. The latest recommendation for design is to use an object-oriented methodology. A study was conducted with several software companies to
determine the best methodologies to use with object-oriented technology. The most important conclusions noted were "a methodology is more likely to be used when it is simple, clearly effective, and small in terms of required work products" and "the acceptance of a methodology is limited by the software group's ability to change its work habits and by its tolerance for seemingly bureaucratic content" (Cockburn 1994). The most effective design methodologies are sometimes the simplest. By implementing clear design methods, which have obvious and direct connections to the final project, developers are more likely to utilize the entire method, instead of choosing only the simplest parts. Responsibility-driven design and use cases are two examples of powerful, yet simple, design techniques (Cockburn 1994). Another important aspect within the design phase is incremental development. Building in pieces rather than all at once has a tremendous effect on meeting timelines. Clients can expect to receive something now, knowing that their other "wants" are in the works (Brooks 1995). Incremental development is "used to manage the scheduling and staging of the system" (Cockburn 1994). It is an essential technique to assure the success of a project (Cockburn 1994). Why is design so important? Authors Fry and Lieberman share a well-fitting example to answer this question. They discuss the importance of a well-designed debugging environment for programs already developed enhanced
with unwanted features (a.k.a. "bugs"). They related programming to automobiles such as Corvairs and Pintos that were not designed with the possibility that they might be involved in car crashes. For instance, they lacked safety features as simple as a rearview mirror. The demonstration represents a tool to support the debugging environment to enhance the capability of finding problems quicker (Fry & Lieberman 1995). Many tools are incorporated in this phase of the software process. There are many reasons for a well-defined design. Programmers need to think about the future when designing a software application. Change is going to occur, and software developers have to prepare for and accept this change. Often change will occur when the initial developers of the project are no longer the developers assigned to maintaining it. Therefore, they need to design a software project that is understandable and can be easily modified. It is suggested that developers design for portability, encapsulating as much as possible, and make sure that when changes occur, they are localized (Meyers 1996). The main goal of this phase is to provide solutions to problems without going into low-level implementation. Structured design, or top-down design using object-oriented technology, is currently the most accepted design methodology. Actions are defined and algorithms are used to support the structures (Headington & Riley 1994).
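The advice above about encapsulating as much as possible so that change stays localized can be made concrete with a small sketch. The class and method names are hypothetical (continuing the thesis's elevator example, not taken from any cited source): the rest of the system depends only on one public operation, so changing the hidden representation later touches only this class.

```python
# Encapsulation sketch (hypothetical elevator example): callers use request()
# only; how floors are validated and queued is hidden, so a future change to
# the representation is localized to this one class.

class ElevatorPanel:
    def __init__(self, floors):
        self._floors = set(floors)    # hidden representation; free to change later
        self._pending = []            # hidden queue of requests

    def request(self, floor):
        """The only public operation: record a valid floor request."""
        if floor not in self._floors:
            raise ValueError(f"no such floor: {floor}")
        self._pending.append(floor)
        return list(self._pending)    # callers receive a copy, never the internals

panel = ElevatorPanel(floors=[1, 2, 3])
print(panel.request(3))
```

Swapping the list for a priority queue, or the set for a range check, would later require no edits outside `ElevatorPanel`, which is exactly the localized change the design advice aims for.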
2.3.1 Object-Oriented Technology In Design

In Frederick P. Brooks's book, The Mythical Man-Month, he discusses some solutions to the "accidental difficulties" that contribute to the failure of software projects (Brooks 1995). One of his solutions was object-oriented programming. Object-oriented programming does many things for software design. Encapsulation, modularity, inheritance, and abstract data typing are considered to be a few of them. Encapsulation supports maintainability because making enhancements or fixing issues within the system will affect one object and not the entire system. Also, the ability to use the same interface to call several functions assists with some of the challenging components of program maintenance (Haythorn 1994). Object-oriented programming addresses software difficulty by allowing expression within design without the extra syntactic content. Although sometimes complex, it offers a design of fitted structures and sharp interfaces (Brooks 1995). There are several reasons that object-oriented techniques have become so prominent in the software industry. Some of the most common reasons are reusability, real-world modeling, maintainability, and that it is a unified software method (Haythorn 1994). Three object-oriented techniques that are among the most competent and understandable are use cases, responsibilities, and incremental development (Cockburn 1994). Object-oriented design helps developers understand how to develop the requirements stated by the client (Advanced Concepts
Center 1997). Use cases tell the developer how the system will be used. A use case is essentially a view of the client's wants. Use cases can assist managers in project planning because they are developed iteratively (Fowler & Scott 1997). A use case is a behavior or group of behaviors that are initiated by events from users, other systems, a timer, or hardware components (Advanced Concepts Center 1997). For scheduling purposes, developers should prioritize the use cases and give the best possible estimate for the completion of each in development. Time should be scheduled accordingly and with sensitivity to risk. Use cases can be built upon; therefore, they can be iterated throughout the development process (Fowler & Scott 1997). An example of a use case for a library shows how different objects of the requirements interact with each other.
Figure 2.3.1.1: Use case example, showing actors such as a library clerk and their interactions with the system's use cases.

Software design using the object-oriented techniques involves distributed responsibilities throughout the software system. Not only are responsibilities distributed in this phase, they are also defined (Cockburn 1994). A responsibility describes the reason for having a particular class. A useful method to create responsibilities of classes is to use CRC (Class Responsibility Collaboration) cards. The CRC cards display the class name, its responsibility or purpose, and its collaboration with other classes, hence the name (Fowler & Scott 1997). The challenge is to determine whether or not an object should have a particular responsibility. Once it is determined that one object is to handle a responsibility, the next step is to display the transactions between
this object and the object to which it will pass on its responsibility (Cockburn 1994). Using the CRC cards can initiate connectivity between classes and present a clearer way to view them. One suggestion is to keep the responsibilities at a high level and try to keep a maximum of three responsibilities on each card (Fowler & Scott 1997). A CRC card for the above Maintain Book Inventory is found in Figure 2.3.1.1.

Responsibility: Knowledge about the number of each book that is in the library
Collaborators: Schedules of book orders

Figure 2.3.1.1: CRC Card for Inventory Component of Library System.

As stated earlier, the technique known as incremental development is the process of growing a software project. This method can be challenging to managers who do not understand what exactly developing incrementally means. However, it is essential to the software development process (Cockburn 1994). Incremental development gives the software organization better control over the development of the project as well as bringing to light any risks that may be encountered (Fowler & Scott 1997). First the application should be developed just enough to see it run. Then each piece or set of pieces should be developed more thoroughly until the software application is complete. Development in this manner gives developers the
capability of prototyping. Clients can view the project early on and verify that it is going in the direction that they want. Two benefits are reaped with the use of incremental development. Developing function upon function, refining the system as you go, can initiate early testing. Also, schedules can be created more accurately and time can be allotted accordingly (Brooks 1995). A quality design has many different avenues that can be taken. Trade-offs within the architecture are analyzed and agreed upon.

2.3.2 Trade-offs Within Design

Several trade-offs between certain aspects of a system are examined in order to produce an accepted design. Trade-off analysis during design, and throughout the iterations of the software process, produces an effective design. Several aspects need to be considered. Modifiability, security, performance, size, and availability are the most recognized and affected attributes of a product. In the past, developers have analyzed trade-offs without using a principled method. For instance, they would design one pattern for easy modification purposes while designing another pattern for portability, without analyzing each pattern for every trade-off (i.e., security, size, etc.). The Software Engineering Institute developed a principled method that incorporates trade-off analysis. It is called the ATAM, or Architecture Tradeoff Analysis Method (Kazman, et al. 1998). This method analyzes each
trade-off in the design with respect to corresponding requirements (Kazman, et al. 1998). Trade-off analysis supports both developers and clients. It is important for managers to communicate with their clients to understand the type of environment they will be using to run their application. Size of the application and performance are the two aspects of an application that matter most to clients. Typically, this is because these are what the client can see in order to establish some form of measurement. Clients can see the size of the application and can compare it with other applications on their system. They can also time the functions within the application to detect the time it takes to complete a task. They may want certain functionality with the understanding that the application will grow in size (Brooks 1995). For example, adding security to a system, which is currently a necessary feature in most applications, not only slows the application, but also increases it in size. Flexibility and safety are also common attributes that require trade-off analysis. These are the types of considerations that need to be analyzed before a full-blown design is created (Sebesta 1996). However, trade-off analysis is an iterative method in which new requirements or changes to the system spark the analysis of each trade-off again (Kazman, et al. 1998). One suggestion, taking advantage of iteration, is to optimize a system to increase performance later in the design phase (Fowler & Scott 1997). However, optimizing for each aspect is not possible, hence the reason for
trade-off analysis (Meyers 1996). Documenting reasons for choosing each trade-off in the design should occur at each cycle. This is done in order to avoid rehashing the analysis only to come to the same conclusion (Kazman, et al. 1998).

2.3.3 Tools To Support The Design Process

A tool that could provide substantial support to the design phase of the software project is a knowledge-based software engineering system (KBSE). There are four requirements for a successful KBSE. It should provide whatever information is necessary for its completeness. It should remain accurate as it applies to the real world. The information it contains should be consistent. And finally, the KBSE should provide all the necessary algorithms sought to establish the software project (Devanbu & Jones 1997). Other helpful tools for both analysis and design are CASE tools. CASE tools support developers by allowing them to use graphical representations of requirements (Jarzabek & Huang 1998). However, although tools may be helpful to developers, it is still necessary for developers to understand and create quality designs (Fowler & Scott 1996). Along with tools, and much like a specification language in the requirements phase, a language should be chosen to work in the design phase.

2.3.4 Design Languages

Design languages are useful for consistency within the design phase of
the software process. A PDL, or Process Design Language, is used to communicate ideas about the overall software architecture. A PDL contains information on constructs that provide detail on modules, interactive display of conditions that are processed concurrently, and associations or interactions between each module (Ambriola, Conradi, & Fuggetta 1997). In other words, choosing a design language is important in order to provide descriptions that are understood by all members involved in this stage of the process. It is another way to avoid the ambiguities within human language.

2.3.5 Summary Of Design

As seen with previous phases, the design phase also supports maintainability, productivity, and met schedules. By analyzing and creating a design using use cases and responsibilities, developers can be asked questions regarding the time to complete each use case. Managers can then schedule accordingly. Another aspect of this phase that supports schedules is the trade-off analysis portion. Again, analyzing different attributes and deciding on the trade-offs gives managers a better idea of what the time constraints would be. Also, through trade-off analysis and through object-oriented technology, a project is more maintainable. Encapsulation and polymorphism are the features capable of this support. Tools are supportive of productivity. Using something like CASE tools can assist with generated code, which speeds up the process of implementation and deliveries.
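The CRC-card technique summarized above (and described in section 2.3.1) can itself be sketched as a small data structure. This is a hypothetical illustration: the class `CRCCard` and the three-responsibility limit come from the suggestion in section 2.3.1, and the library inventory names repeat that section's example rather than any cited tool.

```python
# CRC (Class-Responsibility-Collaboration) card sketch: each card records a
# class name, high-level responsibilities, and collaborating classes.

from dataclasses import dataclass, field

@dataclass
class CRCCard:
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

    def add_responsibility(self, text):
        # Section 2.3.1's suggestion: at most three responsibilities per card,
        # which keeps each responsibility at a high level.
        if len(self.responsibilities) >= 3:
            raise ValueError("keep a maximum of three responsibilities per card")
        self.responsibilities.append(text)

inventory = CRCCard("Inventory", collaborators=["Schedules of book orders"])
inventory.add_responsibility("Knows the number of each book in the library")
print(inventory.class_name, inventory.responsibilities, inventory.collaborators)
```

Encoding the three-responsibility limit as a check makes the design heuristic enforceable: a card that refuses a fourth responsibility is a prompt to split the class.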


2.4 Implementation The goal of this phase is to revisit the design and make any necessary changes to the existing process or models. This is where the creation begins and continues as the models are put into execution (Ambriola, Conradi, & Fuggetta 1997). The implementation phase can have several methods to support the software process and several different languages to express it. One such method is the top-down method. This method is discussed as a design method, but continues through the stage of implementation. The UML (Unified Modeling Language) is another method used to further implement the project by using models to express the design (Fowler & Scott 1997). Further information on the UML method can be found in Martin Fowler's and Kendall Scott's book, UML Distilled: Applying the Standard Object Modeling Language. As with design, there are several tools used to support developers during this stage. 2.4.1 Methods For Developing Increments: Top-down Design The top-down method takes a problem and further divides it into smaller problems, refining as it goes and keeping the high-level problem as an abstraction. Using a top-down method with implementation is one of the most favorable types of process within the implementation and design phases. Top-down implementation also provides ease when developing in increments (Brooks 1995). Major actions are implemented initially and then additional


layers of abstraction are created in the next increment. Top-down implementing, or what others refer to as top-down testing, takes what is considered pseudocode in the design model and begins using programming languages to solve the problem. After each problem or subproblem is solved using the programming language, testing of each can commence. For the layers of the problem that are not yet implemented using code, dummy procedures can be called while the problem is tested (Headington & Riley 1994). The implementation phase uses the logical design already created and begins examining compile-time and link-time dependencies between physical constructs. This stage will dominate the outcome of the design model. In other words, as implementation finds some of the design areas to be difficult, or cyclic dependencies are identified, the implementation phase will then make appropriate modifications to the design model (Lakos 1996). One method, called generalization, provides a procedure known as subclassing, or implementation-inheritance. Subclassing is a procedure by which a subclass contains all the interfaces represented by the superclass (Fowler & Scott 1997). These interfaces and functions are shown in the implementation model as private or protected based on whether their use will be read-only or hidden to other classes. In implementation models, associations between classes represent the use of a similar data construct or array (Fowler & Scott 1997). There are some tools available that support these


types of representations. 2.4.2 Tools To Support Implementation Procedures As discussed briefly earlier, CASE tools support developers as they design (Jarzabek & Huang 1998). What exactly is a CASE tool? A CASE (Computer Aided Software Engineering) tool is an application that assists developers with some sort of method, whether it be object-oriented or structured. Two types of CASE tools exist. Some support the logical design and analysis phase, while the others support the implementation phase. They support models of requirements, structure, data, and behavior. They provide information, according to the method used, on what rules to apply, and further support the implementation by providing code generation (Jarzabek & Huang 1998). CASE tools enhance the productivity and efficiency of a project by supporting the above and, in addition, allowing developers to work simultaneously. Generated code usually contains parent classes with implementations. Implementation consists of attributes and methods. They are declared as private, protected, or public. A private method or attribute can only be viewed within the scope of its component, whereas protected and public can be accessed outside the component. However, although protected methods or attributes can be accessed, there are certain limitations on what is accessed and how. Code is generated according to the method implied and the specification from which the class


definition evolved. Some precautions should be taken when using code generation from CASE tools. For instance, mapping associations using by-reference containment, multiplicity, optional participation, and how components are bound are some of the concerns that should be known when using a CASE tool to generate code (Advanced Concepts Center 1997). Tools that are used should also be consistent, not only for initial development but for the maintenance aspect as well (Brown, Carney, & Clements 1995). In other words, the same tool should be used throughout the entire scope of the project in order to prevent the extra hassle of conversion from one tool to the next when production is a concern. 2.4.3 Strategies For Implementing Successful Applications During implementation, certain strategies are recommended. Several revolve around the methods of object-oriented techniques such as encapsulation. A suggested rule is to keep data hidden by making all data members within a class private. This rule keeps other classes from manipulating or changing data (Lakos 1996). A module should be "a software entity with its own data model and its own set of operations" (Brooks 1995). Modules should only be accessed through certain accessible functions. This method allows developers to work on different pieces of the same project at the same time. They only need to know what functions are accessible from each of the created modules. Productivity increases with


multiple developers being able to function simultaneously (Brooks 1995). Sometimes it is necessary to create global information to pass to external files. If at all possible, it is recommended that this be avoided. However, if it is absolutely necessary, put the information into a structure and make the members private. Then use static functions to access them (Lakos 1996). And most importantly, to relieve some of the frustration a developer might experience when taking on another developer's part, document the module and the behaviors within the module that are undefined (Lakos 1996). The recommended forms of documentation for each module are: 1) create a flow chart, 2) give descriptions of all algorithms used, 3) explicitly state what files are being used, 4) explain the steps of passing from tape or disks, and 5) express possible future modifications that might be made and how they might be confronted (Brooks 1995). For readability purposes, developers should be consistent when creating variable names. For instance, if local variables are named using an "l" before the name, then this should remain consistent throughout the program. Also, the names should say something about what they are or what type of data they store. These methods assist the developers in the future, making the project more understandable, usable, and maintainable (Lakos 1996). The process of implementation, again, should be incremental. Testing pieces of code in between fixes and/or enhancements is a way to increase flexibility within the module and the project itself (Fowler


& Scott 1997). 2.4.4 Frameworks Used For Reuse Measurement Frameworks, although not necessarily considered part of this software process, are proposed to offer many advantages to the software process as a whole. Therefore, the topic deserves an honorable mention, but not an exceptional amount of detail. Enterprise frameworks, although initially costly, may support not only the implementation phase, but all of the phases combined. Frameworks provide a way to incorporate "telecommunications, avionics, financial services, and manufacturing". They provide the ability to produce quality software quickly and on time. They also have the capability of tracking reuse metrics. For instance, a framework can measure how much code is reused within the project's components (Hamu & Fayad 1998). 2.4.5 The Implementation Team The people at this stage in the project should consist of the developers and the architects that designed the modules. Communication between these two groups of people is absolutely necessary. Any questions developers have should be directed to the architects. Both groups should listen to each suggestion, even when the implementation might make a change to the initial design because of cost or timing issues found by the developer (Brooks 1995). However, it is suggested that design changes should not be made in the instance that the design might be providing a standard for


the project. Others necessary in the implementation phase are the quality assurance individuals, the people who test the increments upon completion by the developers. It is important to include them in the stage of implementation as soon as possible to ensure the quality of each of the implemented pieces (Lakos 1996). 2.4.6 Summary Of The Implementation Phase In this section, I have discussed the implementation phase, again pointing out that incremental development is important. Using the top-down method supports incremental development as it takes a large problem and breaks it into smaller pieces. Each piece can be fully tested as it is developed. Some of the tools that are useful at this stage are CASE tools and an enterprise framework. Both tools are beneficial and not limited to this stage within the software process. Frameworks can act as a measuring tool for code. They allow the developer to see how much code is reused or not used. In the next section, I will cover the last stage of the software process, which also focuses on measurement. 2.5 Monitoring And Data Collection This stage is the evaluative phase of the software process. Data about the process is gathered to enhance the future progress of the software process. Feedback is provided for the above phases (Ambriola, Conradi, &


Fuggetta 1997). Market pressures have made an impact on the incorporation of this phase. People have grown more interested in benchmarking and evaluating tools, as well as the impact of the software process on the development of the software product (Kemerer 1998). The Quality Improvement Story is one of the methods that will be discussed in regard to the evaluation phase. Techniques within the Capability Maturity Model have been combined with the Quality Improvement Story. However, further detail about the Capability Maturity Model will be discussed later in its own section. Certain tools support the measurement or data collection about the process. Also, quantitative numbers from studies can show how defining measurement in the process can further assist the software process in the future. 2.5.1 Methods To Support The Collection Of Evaluative Data The Quality Improvement Story is a technique used to solve problems within the software process. It consists of seven steps: 1) Reason for improvement, 2) Current situation, 3) Analysis, 4) Countermeasures, 5) Results, 6) Standardization, 7) Future Plans (Hollenbach 1997). Evaluating is done for several reasons. It is important to assess the quality of the overall project and of the process used to complete it by evaluating its reliability, testability, performance, functionality, usability, and maintainability. Testability and reliability go hand in hand as a measure of quality. In order to produce something reliable, it must be designed so that it is testable. Testing in pieces


and removing as many errors as possible will make a product more reliable. A product should be functional to the expectations of the client. It should also be easy to use that functionality, and in a timely fashion. Furthermore, maintenance is important when anticipating future changes or added functionality (Lakos 1996). The Quality Improvement Story has provided techniques for each step in the evaluative process. In the improvement step, it is recommended to use graphs, flowcharts, and control charts. In evaluation of the current situation, the use of Pareto charts, checklists, histograms, and control charts is suggested (Hollenbach et al. 1997). In one article, "Evaluating the Cost of Software Quality", the authors used a Pareto chart to analyze the possible defect categories (i.e., program logic, support documentation, database, etc.) and their percentage within the software project (Slaughter, Harter, & Krishnan 1998). Figure 2.5.1.1 shows a chart of defects per 1,000 lines of code over a ten-year period in an IT company. The process improvements consist of the following: Process 1, creation of life-cycle standards for development and introduction of CASE tools; Process 2, increased hiring standards, status reviews, and guides for documentation; Process 3, CASE tool integration with documentation, schedule and performance metrics, cost analysis, software configuration, and Pareto analysis; and Process 4, cycle time analysis and an automated cost estimation method. This analysis shows that incorporating formal process


decreases the number of defects. [Figure 2.5.1.1: Defects vs. Process Improvements — a chart of defect density over time, not reproduced here.] The analysis step uses Ishikawa/fishbone diagrams, scatter diagrams, and, again, the Pareto chart. This step analyzes the causes and effects of procedures used within the software process. The countermeasures step, using similar tools, analyzes the cost estimations and action plans. The results step evaluates the conclusions made from the software process. Standardization uses techniques such as procedures training. And lastly, the future plans step evaluates the future goals and enhancements using an action plan (Hollenbach et al. 1997). Evaluations using these steps have shown marked improvements in process areas, such as reduction in defects and reduction in time-to-market. A case study evaluating 120 software


processes demonstrated that development time for a software process in organizations has decreased due to reuse (Hollenbach et al. 1997). 2.5.2 Types Of Findings Using Monitoring And Data Collection The whole purpose of instilling improvement processes is to reduce missed schedules, high costs, and less-than-acceptable quality. Even though the above case study shows that time has decreased with the use of a software process, managers still believe that performing quality improvement processes will make them incapable of meeting time-to-market schedules. This is not the case. A study done at BDM International, an information technology company, shows the results of quality improvement processes put in place. Through the use of a Pareto chart, defect analysis is done by categorizing all the types of defects, such as: JCL, Program Logic, Support Documentation, Database, Specification, CICS, Migration, Test Plan, and Requirements Change. These categories are then analyzed by the percentage of defects occurring within each category in the software project. Costs of projects decreased by about 10 percent due to the reduction in defects. Process improvements such as incorporation of CASE tools and development standards, status reviews, increased hiring standards, detailed style guidelines, schedule and performance metrics, cost estimation, software configuration, Pareto analysis, cycle time analysis, and automated cost estimation methodology demonstrated a significant decrease in defects per


thousand lines of code over a ten-year period (Slaughter, Harter, & Krishnan 1998). Another study regarding the reengineering process revealed that the process evaluation stage had the strongest effect on the software process, with a perceived success of about fifty percent. Business Process Reengineering had an overall success when incorporating continuous monitoring and evaluating into the software process (Teng, Jeong, & Grover 1998). Another study evaluating foreign countries showed that about 56% of these countries use records of resources and schedules vs. cost for measuring purposes. Seventy-five percent of the European countries documented post-implementation problems and the resolutions that were used to fix them. Less significance was placed on evaluating statistics of the errors in the projects, their causes, and how they are avoided. Even less significance was placed on the efficiency of testing. However, a good portion of these countries (approximately 68%) use software tools to support their projects through planning, estimating, and evaluating critical paths (Dutta, Wassenhove, & Kulandaiswamy 1998). 2.5.3 Tools To Monitor Data Aside from the already discussed CASE tools, software visualization tools are another way to evaluate the performance of the software project. Software visualization supports developers through animation of the software project as it executes. Algorithms are displayed showing their processing and


their states that change as the program executes. Source code representing sorting algorithms such as quicksort, shell sort, or insertion sort can be animated for the developer by showing how the sorting of data is occurring while the program runs. This allows the developer to see the performance of the different sorting algorithms in order to choose which method would be the best. This type of tool would also be effective in a debugging environment. It would help to localize problems with the code (Baecker, DiGiano, & Marcus 1997). 2.5.4 The Whole Team Is Affected By The Software Process Almost everyone is affected within the evaluation phase. However, being that it is mainly a monitoring phase, it is the managers that need to make sure this phase is accomplished and continually updated. Others will be informed of the successes or failures through the manager's documentation of this phase (Slaughter, Harter, & Krishnan 1998). 2.5.5 The Summary Of Monitoring And Data Collection This stage's major benefit is the evaluation it makes about the software process and the software project. Through continuous monitoring and collection of data for reuse, this stage offers credence to the effects of the process as a whole. The Quality Improvement Story supports evaluation as it tracks progress through the different steps. Both CASE tools as well as data monitoring tools, such as software visualization tools, make this stage easier to


manage and track.


3. Chapter 3 3.1 Cleanroom Software Engineering Cleanroom Software Engineering is an approach to software process with a focus on managers and technical developers. The approach investigates the quality of software design achieved through the testing phase (Becker, Deck, & Janzen 1996). Historically, Cleanroom Software Engineering was given its name through its close relationship to the clean rooms of hardware fabrication. Dr. Harlan Mills of IBM's Federal Systems Division (FSD) found that quality can be achieved in software as well as hardware. The relationship lies in the shared goal of developing a quality product through thorough design and testing (Deck 1994). This approach has been in effect since the early 1980s. It began to be recognized and used by several organizations in 1987. Such organizations were IBM, of course, NASA Goddard Space Flight Center, and Martin Marietta (Deck 1994). Cleanroom has gained popularity through the growing use of the Internet and software quality conferences. According to the Software Engineering Institute, Cleanroom Software Engineering has some similarities to the Capability Maturity Model, which is another type of process (Deck 1996). Cleanroom Software Engineering can be applied to both new systems and existing systems. Its focus is on defect prevention, and the goal is


to provide an error-free product to testing (Deck 1995). Cleanroom shares similarities with the general process structure defined earlier. Its objective is to provide a quality product while increasing productivity and decreasing maintenance time. It reduces risk to the project, which, in turn, supports met timelines (Hines & Deck 1997). 3.1.1 Myths About Cleanroom Software Engineering Several myths about Cleanroom Software Engineering have challenged its worth over the years and the validity of its principles and practices. Michael Deck, of Cleanroom Software Engineering, Inc., has spent several years attempting to dispel these criticisms. In addition to the reduced criticisms, Cleanroom Software Engineering has experienced some changes or refinements since its first introduction into the software engineering world (Deck 1997). It was once believed that Cleanroom represents an "orthodox" view of software engineering. This idea came from a story about a small project gone bad. A manager insisted on the use of Cleanroom practice with a team that did not want to do it. In doing so, he denied them access to a compiler in order to eliminate debugging practices among individuals without communication to all team members. There was also a limitation on the use of arrays and pointers. Obviously, this made it difficult to perform unless they had formal and large amounts of mathematical training. Hence, criticisms were to follow. However, unlike the above rigor, Cleanroom


Software Engineering has gained advocacy through the developers who used it. They found their code to be more reliable. It was found that Cleanroom Software Engineering could achieve the same amount of reliability without the unrealistic approaches the manager used in the above story (Deck 1997). Throughout the years of usage, Cleanroom Software Engineering has been modified to be more successful for the engineers who use it. It has added new ideas and refined some of the old ideas in order to continue its goal of real software quality (Whittaker & Deck 1997). The following sections will describe these ideas in detail. 3.1.2 Principles Used In Cleanroom Software Engineering Cleanroom Software Engineering has two main principles that it has used traditionally over the years. The first is the design principle. The design principle basically states that before any project enters testing, it should already be tested in teams to the point that it is error-free (Deck 1997). Design should aid in the prevention of defects (Deck 1994). The second main principle is the testing principle. Testing is really just a measurement of the quality of the software project (Deck 1997). Statistical measurement has been important to engineers for the purposes of reliability, not to mention the ability to view the success of error-free (or close to error-free) projects (Deck 1994). Several practices associated with the Cleanroom Software Engineering process can be tailored according to the project's needs and


levels of process (Deck 1996). Management is another principal force behind the use of Cleanroom. It is important for management to drive the software process by focusing on the two above principles and making sure the following practices are in place and followed. The focus of managers should be on incremental development using a team. Managers should provide feedback to continue improvement of the software project, as well as to decrease the failures caused by humans (Deck 1994). 3.1.3 Practices Involved In Cleanroom Software Process At the basic level, the initial practices are incremental development and team ownership used by management, black box specification used in development as well as clear box design, team review used during the review portion, and finally behavioral testing used in the testing stage. Incremental development is used with the integration of the top-down approach related to the general approach discussed above. Deck describes it as a series of waterfall methods that produce increments as it works towards the final product, rather than one cycle of the waterfall method that never produces anything (Deck 1996). Incremental development could be considered a spiral approach if it allows risk assessment and problem elicitation to guide its planning. It gives managers the ability to manage and incorporate the assessed time constraints into the increments (Hines & Deck 1997). The goal of using the incremental approach


is to achieve "end to end executability" which guarantees that there is no time lost in building. Another advantage of the incremental practice is the ... capability of providing feedback to improve the next cycle or increment. Metrics can be applied to each segment (Hines & Deck 1997). The measurement chosen to be used after each increment is the Mean Time To Failure (MTIF) (Whittaker & Deck 1997). This becomes part of the Statistical Quality Control that is later discussed. The team ownership used by management is based on the approach that every project is worked on by a team that does the reviewing for it. Requirements, test plans, and specifications are all things that should be reviewed by the team. The teams should consist of a few people and work should be balanced with a leader to make sure every aspect is covered. However, Cleanroom Software Engineering suggests a "team-of-teams" approach to larger projects. For instance, having a couple of development teams and a couple of testing teams. Each team would have a lead that would meet with each other to create a project team. It is not necessary to have these teams in place at the project's commencement, instead it can be established throughout stages of the software process (Deck 1996). Even though teams do not have to be formed in the beginning, the organizational structure should be. Team structure is therefore dependent on the project. They can consist of three programmers up to dozens and possibly hundreds if the project calls for it 57


(Deck 1997). After the team structure is established, Cleanroom Software Engineering supports Black Box structures for the development of specifications leading to design. Black Box specification is the foundation of the Cleanroom development practice. This is the same Black Box method discussed earlier as part of the requirements engineering section within the general process structure. This type of structure has a direct relation to that of object-oriented programming (Deck 1996). The three types of structures that are integral to this method of specification are component, object, and part. These structures represent an entity's behavior (Hines & Deck 1997). Cleanroom specifications use ordinary language to describe specifications that do not need precision. Cleanroom focuses on an Input/Output style of specification without the unnecessary details in between. The overall objective is to have a specification for every event visible to the user (Deck 1996). This process supports the methods of stepwise refinement and verification (IBM 1998). The next stage at the basic level of design is the development of the Clear Box design. Clear box design is the method in which the specifications are defined as algorithms. They display a top-down view of the specifications and their breakdown into sub-specifications (Deck 1996). The sub-specifications then contain a black box, state box, and clear box design


respectively (Hines & Deck 1997). It seems to resemble pseudocode. Some examples of clear box design are statements such as if-then-else or do-while (IBM 1998). After the completion of the clear box design, the team begins its review of all the specifications and design. Three rules for the process of review are to keep the reviews iterative, use the same team each iteration, and concentrate on the quality of the information that is reviewed (Deck 1996). Initially, the first iterations of the team review do not examine a high level of correctness in the design. Instead, the intent is to get the specifications as solid as possible before the design stage begins. The same rules apply to the review of the design, which tends to take place after several iterations (Hines & Deck 1997). Some iterations of specifications may look like the following: 1. Review of initial specifications. 2. Review of the add-ons or more detailed specifications. 3. Review of the final specifications. Iterations for the design phase follow the same scenario (Deck 1996). Team members may include testers for the review of specifications. However, they are not included in the design review. This is contradictory to the rule that the team should consist of the same members, but one may be inclined to believe that this rule is to be applied at each level. For instance, the team


for specification remains the same at each iteration, and the team is the same for each iteration of the design review, but the specification and the design review teams differ due to the level of expertise and ability to give valid contributions (Hines & Deck 1997). An important aspect, as well as a rule, in the team reviews is not to focus just on the quest for bugs, but on the overall quality of the portion of the project being reviewed (Deck 1996). The testing stage focuses on the behavioral aspects of the product rather than the structures. Testers practicing behavioral testing do not need to know about the project's internal structures. Since the developer's focus is usually on the internals, behavioral testing tests the project from a different view, which heightens the quality of the project (Deck 1996). All three levels, basic, intermediate, and advanced, incorporate the use of incremental development and team ownership on the part of management. The intermediate stage works off the basic level of process with added enhancements. For the specification stage, the intermediate level uses the conditional-rule process for specifications and abstract data models. Basically, a conditional-rule process for a specification is just a way to express a black box specification. For instance, the following example shows a conditional-rule process for an address list.


(array address-list is sorted in order by last-name) and
(an item whose last-name field equals lookup is in the array)
    -> set address to the address data field that corresponds to any such instance
(array address-list is sorted in order by last-name)
    -> set address to anything and set found to FALSE
TRUE
    -> undefined

This particular method is used for its simplicity. The notation follows that of natural language. The arrows express mappings of inputs and outputs within the requirement (Deck 1996). Another addition to the black box specification is that of abstract data models. These are used to give a better understanding of the system as a whole and to verify each of the requirements. The idea is to relate more to the user by incorporating a view that they might understand, rather than using a low-level approach that would only be understandable to engineers (Hines & Deck 1997). Abstract data models reflect the object-oriented aspects of programming. Each object is encapsulated and defined in a way users can understand. The user can see the different objects and how they relate to one another. This, like the conditional-rule process for specification, is also useful for validating requirements (Deck 1996). At the intermediate level, there is the addition of state boxes before the implementation of clear box verification. State boxes are derived from the


black box specifications. They are essentially implementations of the black box specifications although, this time, the functionality is hidden through abstraction. This is where object-oriented design is encountered, as the design abstraction uses encapsulation to hide functionality (Deck 1997). The state box is then added to the abstract data models and takes the form of determining types of implementation for a specification. For instance, it can be expressed through arrays or hash tables (Deck 1996). The state box verification stage, also part of the applied intermediate level of the Cleanroom Software Engineering process, is the method of testing the correctness of the state boxes. It is up to the team to decide what level of correctness they will use to verify the state box and clear box designs (Deck 1996). The intent of the verification process is to check the design against the specifications to see if it correctly represents what the specification describes (Deck 1996). Since the general theme of Cleanroom Software Engineering is to focus on the users, usage modeling is practiced. Its main purpose is to analyze how the users will use the system. Each function is analyzed from the perspective of both beginner and expert users of the system (Deck 1996). Test case plans are then developed reflecting the type and frequency of usage of a particular function within the system (Hines & Deck 1997). The last phase of the intermediate level of Cleanroom Software

PAGE 73

Engineering is the statistical testing phase. This takes the information learned from usage modeling and begins testing of the components for reliability. Every aspect of the system is analyzed, such as file access, resources, and the overall environment. This analysis becomes the focus of a random key, which will be used to test (Hines & Deck 1997). The testing is measured by using the Mean Time To Failure (MTTF) algorithm. There is some skepticism about the value and accuracy of this phase due to the inability to develop a perfect level of reliability based on assumptions of the project's usage. However, the main purpose is to understand what the user will be doing with the product and to make it as reliable as possible within the realm in which they are using it. Its focus is not to find bugs in general, but to find what bugs may appear in the field (Deck 1996).

The final level of Cleanroom Software Engineering incorporates the use of formal methods in the process. These formal methods were discussed earlier in the general process. The advanced level basically incorporates formal methods for the stages within the intermediate level. Stimulus History Models are a type of abstract data model that focus more on the "state" of the system (Deck 1996). Their goal is to filter out requirements that have not yet been established. Again, formal modeling languages are used to represent clear and state box design in the Formal Models of Real Time and Concurrency stage. Correctness proof incorporates the use of tools to verify
the correctness of the system. Advanced Usage Models add another level of low-level detail to express the usage modeling of the intermediate level. Lastly, Product Warranties are the additional method by which guarantees are placed on the project. Warranties are made with time constraints based on the cost effectiveness of supporting the claims made on the product (Deck 1996). Warranties should be analyzed closely as the software industry changes rapidly.

3.1.4 Barriers Presented When Using Cleanroom

The Cleanroom Software Engineering process offers flexibility in approaching a software process. However, there are barriers that prevent Cleanroom from being a smooth process. If the process is instilled in a system already developed, existing code may be quite troublesome to the process. The process will need to understand the existing structure in order to formulate changes to it. Changes must not produce a negative effect on the current users (Deck 1995).

Three types of methods can be applied to the Cleanroom approach in working with old code. First, specification recovery is used. This method defines a trace table of the existing code for requirements. As cases are found, they will be analyzed to see how things are working and whether or not they are of any value. Another method, called Specification Interposition, uses a "middle-out" approach (Deck 1995). It makes an assumption as to what behavior is happening. It then takes those
assumptions and determines which parts are being used and why. Lastly, the Relative Specification method could be used. This method describes the old code and its newly related code in a specification. This could hamper the process by confusing the requirement, but it is up to the team to decide how they want to approach it (Deck 1995).

Another barrier to the installation of the Cleanroom Software Engineering process is the change in organizational structure (granted there is one currently). First, everyone in the organizational structure must adhere to the guidelines representing the software process. Too often, only portions of the software organization's infrastructure participate in the software process, causing the process to fail; eventually the organization returns to its chaotic state (Becker, Deck, & Janzon 1996). Also, incentives given to members of the software organization are based on more negative approaches than positive. For instance, testers are given an incentive based on the number of bugs they find rather than the development of quality test plans (Becker, Deck, & Janzon 1996).

3.1.5 Benefits When Using Cleanroom

If it is possible for a software organization to get past the above barriers and others that may appear, the Cleanroom Software Engineering process can reap several benefits for those who use it. Cleanroom focuses on defect prevention, careful documentation, statistical quality control, and
teamwork (Deck 1996). Use of all of these methods allows an organization to produce quality software. With less maintenance on the software product, due to the quality of testing, productivity increases. Due to the increase in productivity, the software organization can then meet their time-to-market demands (Becker, Deck, & Janzon 1996).

3.1.6 The Summary Of Cleanroom Software Engineering

Cleanroom Software Engineering has three levels of process: basic, intermediate, and advanced. Each level incorporates methods that build on one another. The four layers of these methods are management, development, review, and testing. Some of the methods used by the Cleanroom Software Engineering process have been discussed in the general process above. Incorporating Cleanroom Software Engineering has barriers such as the effects of instilling new code relative to existing code and making changes to the existing organizational structure. However, if a software company can get past the barriers, they will reap such benefits as fewer defects, higher productivity, and met time-to-market demands.
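As a purely illustrative companion to the usage modeling and statistical testing stages discussed in this chapter, the sketch below draws test-case steps in proportion to expected usage and computes a simple MTTF from observed failure times. The usage model, function names, and all numbers here are invented for the example; they are not taken from the thesis or from the Cleanroom literature.

```python
import random

# Hypothetical usage model: each system function is weighted by how
# often the analyzed class of user is expected to invoke it.
usage_model = {
    "open_record": 0.50,
    "search": 0.30,
    "update_record": 0.15,
    "delete_record": 0.05,  # rare, but still exercised in proportion
}

def generate_test_cases(model, n, seed=1):
    """Draw n test steps with frequency proportional to expected usage."""
    rng = random.Random(seed)
    names = list(model)
    weights = [model[f] for f in names]
    return rng.choices(names, weights=weights, k=n)

def mean_time_to_failure(failure_times):
    """Simple MTTF estimate: average interval between successive failures.

    failure_times are cumulative hours of statistically driven testing at
    which each failure was observed, in increasing order; the intervals
    telescope, so their mean is the last observation time over the count.
    """
    if not failure_times:
        raise ValueError("no failures observed; MTTF is undefined here")
    return failure_times[-1] / len(failure_times)

cases = generate_test_cases(usage_model, 1000)
print(cases[:5])                                # a usage-weighted test plan
print(mean_time_to_failure([4.0, 10.0, 22.0]))  # 22/3 hours between failures
```

Because rare operations are drawn rarely, the resulting MTTF estimate speaks to reliability as the modeled users would experience it, which matches the emphasis of the statistical testing phase on field behavior rather than bug counts.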


4. Chapter 4

4.1 Capability Maturity Model

A more common software process is the Capability Maturity Model developed by the Software Engineering Institute (SEI) in 1986 (Paulk 1996). Its main purpose was to develop a mature process that would benefit software organizations in the development of their projects (Paulk 1996). The Software Engineering Institute and Mitre Corporation came up with two methods initially called software process assessment and software capability evaluation. Along with these methods they used a maturity questionnaire in order to evaluate the software process and its level of maturity. After four years, this evaluation became the Capability Maturity Model (CMM) (Curtis et al. 1993).

This software process has become extremely prevalent in the software world. Major organizations as well as the military have gained much success using this software process model (Herbsleb et al. 1997). Studies have been done taking an organization with high maturity levels and using metrics to see what areas have worked for it. This was done in order to get a better understanding of a software process that works rather than taking an organization that has little or no process, putting a maturity process in place, and then attempting to evaluate what benefits will be brought to
them in the future. Metrics chosen for the high maturity process involved details of collected data. Size of the programs, dates of completion for both development and testing, and number of defects were chosen as parts of the process to be measured (Burke 1997).

4.1.1 Immature Vs. Mature Processes

There are several distinctions between what would be called an immature process and a mature process. An immature process within an organization tends to react to situations that are considered a crisis (Curtis et al. 1993). For instance, pulling as many resources as possible to get a quick fix in for a company that is complaining or threatening lawsuits may cause an existing project to fail in meeting its deadline. Immature organizations develop unrealistic schedules by choosing dates without evaluating risk or possible delays (Curtis et al. 1993). By doing this, other pieces of the project suffer, such as quality and maintainability in the future. Immature organizations tend to have no way to measure the quality of the product, and when things go wrong, whatever testing procedures may be in place are cut short in order to deal with the current crisis (Fayad 1997).

On the other hand, a mature process involves the type of evaluation necessary to allow for risk so that the teams do not have to go into "crisis" mode. Everyone participates and is aware of the procedures. The procedures are "user-friendly" and remain consistent in order to get the work
done. Improvements are made as they are evaluated and deemed beneficial to the current process. Overall, the roles and responsibilities of the members within the organization are defined and understandable. Schedules are evaluated by past experiences and future expectations. Quality is measured and controlled by managers (Curtis et al. 1993). The Capability Maturity Model takes an organization through levels of maturity and the key processes associated with each level.

4.1.1.1 Levels Within The CMM

The levels defined by the Capability Maturity Model were developed on the belief that software organizations take small steps towards improving the procedures that they plan to instill. All of the levels within the Capability Maturity Model have incorporated key processes with the exception of the first (Paulk 1996). All levels' objectives and key processes will be discussed in order to show an organization's evolution into a high maturity process.

4.1.1.2 Level 1: Initial Level

With the description of the immature process above, level one is defined with many similarities. The initial level operates under a somewhat chaotic, often thrown-together process (Herbsleb et al. 1997). There is little formality. Methods might be spoken of, but are not monitored or controlled. There is no tracking of changes and no review of changes once they occur, so that changes cause a crisis (Toth 1997). If a crisis does occur, the policies or
procedures that might have been in place are often ignored or tossed aside, while the organization focuses on coding to get things done quickly. Managers who might believe in the process may lose focus when a crisis occurs and take all of their team members with them. An organization operating under the level one approach may actually produce a product. However, the product may be delayed in getting released and may be over the planned budget (Curtis et al. 1993). The Capability Maturity Model states that in order to achieve the success of getting a product "out the door", it is necessary to have very "competent people" as well as "heroics" (Herbsleb et al. 1997). In other words, individuals who will work until all hours of the night to put the crisis to rest. So the basis of the initial level is the characteristics of the employees involved in the current project (Curtis et al. 1993).

In order to advance to the next level, there must be qualified management that is willing to uphold the software process. Quality assurance must be in place in order to test the project. Tracking of schedules, errors, fixes, and changes must all be in place. Lastly, as seen in the processes above, there must be an overall acceptance of the procedures and policies that are to become the process (Toth 1997).

4.1.1.3 Level 2: Repeatable Level

There is more stability in the level two portion of the Capability Maturity Model. Management is put in place in order to monitor costs, schedules, and
functionality within the project. Procedures that were successful in the past are maintained and put in place in the current process. The process is documented and measured so that improvements can be made (Curtis et al. 1993).

Several key processes take place during the repeatable level. They consist of: requirements management, software project planning, software project tracking and oversight, software subcontract management, software quality assurance, and software configuration management (Herbsleb et al. 1997). The key processes have several purposes. Requirements management is essentially the inclusion of the customers when developing the requirements. A level of understanding between the developers and the customers is reached. Software project planning is a method of coming up with realistic schedules and plans for both the engineers and management. Software project tracking and oversight is used to give teams the capability of viewing what is going on with the software project. It allows managers to keep the project on track if it begins to slip. Software subcontract management is the process of managing and hiring quality subcontractors. Software quality assurance gives management the ability to see how the project is working. It allows them to view the quality of the project. Lastly, the purpose of software configuration management is to maintain the project through the establishment of certain procedures to do so (Paulk 1996).
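The software project tracking and oversight practice described above amounts to comparing planned dates against actual ones and flagging slip early enough for managers to react. A minimal sketch follows; the task names, dates, and record layout are invented for illustration and do not come from the thesis.

```python
from datetime import date

# Hypothetical task records: planned completion date versus actual
# completion date (None while the task is still open).
tasks = [
    {"task": "requirements sign-off", "planned": date(1998, 3, 1), "actual": date(1998, 3, 5)},
    {"task": "design review", "planned": date(1998, 4, 15), "actual": date(1998, 4, 10)},
    {"task": "code complete", "planned": date(1998, 6, 1), "actual": None},
]

def slipped(task, today):
    """A task has slipped if it finished late, or is open past its plan."""
    if task["actual"] is not None:
        return task["actual"] > task["planned"]
    return today > task["planned"]

today = date(1998, 6, 10)
for t in tasks:
    status = "SLIPPED" if slipped(t, today) else "on track"
    print(f'{t["task"]}: {status}')
```

Even a report this small gives a team the visibility the repeatable level calls for: it distinguishes tasks that finished late from tasks that are quietly running past their planned dates.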


The repeatable level is still missing key procedures that would make it a mature process. It does not incorporate quality training within this phase. Testing is not yet solidified. Documentation, although the need for it may be identified, does not have individuals assigned to it (Toth 1997). Level three of the Capability Maturity Model begins to supply some of the missing pieces within the repeatable level.

4.1.1.4 Level 3: Defined Level

The defined level addresses the lack-of-training issue by instilling quality training for all of the individuals within the organization. The process is documented and software engineering practices are chosen. This level of the Capability Maturity Model is flexible. It allows the organization to select its own defined software process, provided the basis contains standards and procedures for verification, development, and completion. It suggests the development of a Software Engineering Process Group (SEPG) to define the software practices to be used in the process. The SEPG should be developed early. The coordinated efforts will focus on improvement and appraisals of the process activities (Paulk 1996). Management has control of the project as a whole and the entire organization understands the process. The parts of this process are well defined and stable. Because of this, they can be repeated (Curtis et al. 1993). The key processes of this level consist of: organization process focus, organization process definition, training program,
integrated software management, software product engineering, intergroup coordination, and peer reviews (Herbsleb et al. 1997). Each key process has its purpose. The organization process focus defines the responsibilities of the members within the organization. The organization process definition is a collection of assets used to further improve the process. Training programs are used to teach individuals the importance and responsibilities of their roles within the organization so that their skills will enhance the process. Integrated software management is basically a process to allow flexibility in determining which software practices will be instilled. Software product engineering is the integration of all the practices in order to create a quality product. Intergroup coordination is the communication effort between groups so that there is no guessing about what is to happen next (Paulk 1996). Everyone has an idea of what is going on. Peer reviews are essentially the defect prevention discussed earlier. The product will be of better quality before it goes into the field (Paulk 1996). All of these key processes allow the organization to produce a better understanding of the roles, responsibilities, and effectiveness of each individual involved in the creation of the product (Curtis et al. 1993).

4.1.1.5 Level 4: Managed Level

Quality and measurement become key components at the managed level. The software process and the products developed using it are
controlled. The measurements give this level the ability to be predictable in determining the outcome of the project (Curtis et al. 1993). The types of items that are measured are the time in each cycle and the number of errors that have been found and fixed per cycle. The measurements provide information on how reliable the project is and how well it performs. Maintenance is also a focus of measurement. The purpose is to measure how long it will take to maintain the project and how it will be done (Toth 1997). Risks are carefully evaluated as the process is put in place for an upcoming project (Curtis et al. 1993). This constitutes developing with the use of an incremental approach, evaluating the risks through each increment (Paulk 1996).

The managed level has a couple of key processes to be included in the software process. They are quantitative process management and software quality management. The purpose of quantitative process management is to evaluate and control the performance of the software process. The purpose of software quality management is to analyze the measurements conducted on the project in order to produce a quality product (Paulk 1996). In other words, the goal is to understand what the project is doing so realistic goals of quality can be applied (Ginsberg & Quinn 1994).

4.1.1.6 Level 5: Optimized Level

The optimized level has now gotten the attention of the whole
organization. Everyone is involved in the process and making it better. Instead of reacting to situations, which occurred at earlier levels, the organization is capable of dealing with situations proactively. The measurements and data collected from the previous levels can be used to further improve the process in areas such as cost and schedule estimation. Errors are not just fixed. They are analyzed to determine how they happened (Curtis et al. 1993).

The key processes in this area are defect prevention, technology change management, and process change management. Defect prevention, as seen in earlier processes like Cleanroom Software Engineering, is a factor that determines the cause of errors so that they are prevented from occurring in the future. The technology change management process is the incorporation of tools based on the investigation of their ability to support the process. The process change management is basically the process of making the overall process better. In doing so, cycle time is decreased and both quality and productivity are increased (Paulk 1996).

4.1.1.7 Barriers Of The CMM

The Capability Maturity Model experiences the same types of barriers as the Cleanroom Software Engineering process or any other process that is about to be incorporated into an organization. However, there are more barriers. Some organizations attempting to incorporate the Capability
Maturity Model do so for the wrong reasons. For example, the intent is to get as much process as possible without evaluating the effects. The goal of the organization is to develop process instead of developing software. Everything becomes process (Fayad 1997). Another barrier to the success of the Capability Maturity Model is the misconception that an organization can get to the managed or optimized level without experiencing the levels in between. For instance, an organization can start collecting data in the manner used in the managed level (level 4), but will be unable to apply it because an earlier level was skipped and no comparisons or improvements from an earlier level can therefore be obtained (Curtis et al. 1993).

Other barriers that might be imposed upon an organization that is truly attempting to put a process in place are the many excuses that come with it. For instance, some will view it as more bureaucratic and resist its invocation. Others will say that they are too busy to put a process in place. Some may believe that developing software is a creative process and that putting in an approach that has defined rules and procedures stifles that creativity (Fayad 1997).

4.1.1.8 Benefits Of The CMM

If the barriers can be broken, then the Capability Maturity Model can be beneficial to a software organization in many ways. Studies that have been conducted show many successes and improvements in productivity, cycle
time, and maintenance. It is important to note that initially putting a process in place is difficult and takes a long time. For instance, it takes about 1.5 to 2.5 years to move from level 1 to level 2, according to a study of 13 organizations. However, once past this, the benefits are worth it. Productivity gains ranged between 9-67%. Time to market was reduced about 15-23%. Post-release defects were reduced 10-94% (Herbsleb et al. 1997). A survey of organizations that had noted good or excellent percentages in their organizations' performance showed improvements moving from the initial level to the defined level in the following areas: product quality, customer satisfaction, productivity, ability to meet schedules, ability to meet budgets, and staff morale. Only one respondent showed a decrease in customer satisfaction between the initial level and the repeatable level, but again showed an increase in customer satisfaction going from the repeatable level to the defined level (Herbsleb et al. 1997). Another study revealed that organizations that achieved a higher level of maturity were more apt to take substantial risks. This may show that the confidence level of the organization as a whole increased in addition to the many other major benefits (Herbsleb et al. 1997).

4.1.2 The Summary Of Capability Maturity Model

The Capability Maturity Model consists of several levels. Each level, with the exception of the initial level, incorporates several key processes in order to
improve the software process and the projects that are developed using it. Some barriers may prevent the success of the Capability Maturity Model, such as the organization's inability to understand that the goal is to support the development of the product, not to develop a process just to have one. Removing the barriers and instilling the process without skipping through the maturity levels can bring much success to the organization. Some of these successes are increased productivity, decreased cycle time, and the ability to maintain the project.
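To make the managed level's per-cycle measurements concrete (cycle time, errors found, errors fixed), here is a small hedged sketch; the increment names and counts are fabricated for the example and are not data from any study cited in this thesis.

```python
# Hypothetical per-cycle measurements of the kind the managed level collects.
cycles = [
    {"name": "increment 1", "weeks": 6, "found": 40, "fixed": 34},
    {"name": "increment 2", "weeks": 5, "found": 28, "fixed": 26},
    {"name": "increment 3", "weeks": 4, "found": 15, "fixed": 15},
]

def fix_rate(cycle):
    """Fraction of the errors found in a cycle that were also fixed in it."""
    return cycle["fixed"] / cycle["found"] if cycle["found"] else 1.0

open_errors = sum(c["found"] - c["fixed"] for c in cycles)
for c in cycles:
    print(f'{c["name"]}: {c["weeks"]} weeks, fix rate {fix_rate(c):.0%}')
print("errors still open:", open_errors)
```

Trends in numbers like these, such as shrinking cycle time and a rising fix rate, are what give the managed level its claimed ability to predict the outcome of a project rather than merely react to it.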


5. Chapter 5

5.1 Summary of Software Process

In this paper I discussed the importance of a software process, what it is, and how it is done. First, software process is important because of the difficulties encountered when one is not in place, such as: failing to meet schedules, maintenance nightmares, chaotic work scenarios, and possibilities of lawsuits. Several software companies use different types of metrics to evaluate their software process. Some of the metrics considered are: lines of code, scheduled tasks completed on time, test coverage, resource adequacy, fault density, system operational availability, time to market, schedule, quality, and cost trade-offs.

A general software process was defined as having the following components: Requirements Specification, Assessment and Problem Elicitation, (Re)Design, Implementation, and Monitoring and Data Collection. These components were discussed in detail. Each phase is cycled through at each increment. Requirements Specification was essentially the phase in which requirements were gathered from the client and put into a formal type of notation to be understandable by both developers and clients. Assessment and Problem Elicitation was the process in which potential risks
were evaluated. Schedules and costs were estimated based on these evaluated risks. (Re)Design was discussed at great length. There are many methods that can be applied within this phase. Object-oriented designs were among the ones discussed. Certain tools such as CASE were mentioned as supportive to this stage of development. Top-down methods as well as strategies were discussed in the implementation phase. And finally, the monitoring and data collection phase examined the process of evaluating the process and making improvements. Methods of standardization were discussed, like the Quality Improvement Story. Tools that present visualization to the organization about the performance of the project were also considered during this phase.

Two of the more formal processes discussed were Cleanroom Software Engineering and the Capability Maturity Model. Each shared some of the same components that the general process contained, using different ways to represent them. The Cleanroom Software Engineering process has three levels: basic, intermediate, and advanced. Each level builds upon the previous level. The methods incorporated in this process include the use of box structures during specification and design. The box structures are then verified for correctness. Levels of correctness are determined at each level. This model expresses the importance of peer reviews to increase effective communication,
understanding of the project, and decreasing errors. The Capability Maturity Model has five levels: initial, repeatable, defined, managed, and optimized. These levels also build off of each other. It is suggested that an organization should not skip levels, so as to ensure the success of the process. The process focuses on key processes at each level. Essentially the first level is based on competent employees. The second focuses on the installation of project management. The next level focuses on the engineering practices and the development of the SEPG to support the organization. The managed level, or level 4, focuses on quality and quantitative measurements of the project. The last level looks at the history and continues to make marked improvements on the software process.

6. Conclusion

This investigation has revealed that following a software process can increase productivity, make a product more maintainable, and help meet the set schedules. Essential components were discussed along with two important software processes: Cleanroom Software Engineering and the Capability Maturity Model. All of the case studies showed marked improvements in software production by using a software process. The benefits discussed were found to be substantial. Note that there are additional benefits not addressed by this study.

This work also identified some of the difficulties and barriers encountered when attempting to incorporate a software engineering process. It should be noted that not all types of models have been included. There are several that use many of the same features discussed here and for which these findings apply. Similarly, not all aspects have been addressed, such as how to pick a software process and perform the cost/benefit analysis. Additional investigations are needed to address the full spectrum of process models and characteristics.

The information presented concerning the two processes, Cleanroom Software Engineering and Capability Maturity Model, offers suggestions for the
best ways to incorporate these models into a software development organization. Many of the references included in the annotated bibliography (Appendix A) reflect the importance and the difficulties of software process models and may be both useful and necessary for anyone attempting to use a software process model.

APPENDIX

Annotated Bibliography

Advance Concepts Center. Advanced Object-Oriented Analysis and Design Using UML. Advanced Concepts Center of Lockheed Martin Corp., 1997. This book was a tutorial in presentation form to describe the software process and the use of UML. Several figures are used from this book that describe the phases of software process and types of practices used.

Ambriola, Vincenzo, Reidar Conradi and Alfonso Fuggetta. "Assessing Process-Centered Software Engineering Environments", ACM Transactions on Software Engineering and Methodology, v6n3 (July 1997): 283-329. Evaluation of three process models and their architectures.

Anonymous. "Software Metrics Best Practices", Methods & Tools, v6n3 (March 1998): 13-14. Survey of software metrics practices.

Baecker, Ron, Chris DiGiano and Aaron Marcus. "Software Visualization for Debugging", Communications of the ACM, v40n4 (April 1997): 44-54. Discussion of making programming a multimedia experience. Human factors of the programmer.

Barsh, Gregory S. "If Not This, What? The Internet as Cause to Refine Your Internal Procedures", Cutter IT Journal, v11n4 (April 1998): 28-32. Another discussion of the legal implications that can be caused by not revising internal procedures.

Brooks, Frederick P. The Mythical Man-Month. Reading, MA: Addison Wesley Longman, Inc., 1995. This book provided information on all aspects of software engineering, the challenges, problems and solutions. Every chapter is utilized in one way or another. The goal of the book is to provide tips on managing large software projects.
Brown, Alan W., David J. Carney and Paul C. Clements. "A Case Study in Assessing the Maintainability of Large, Software-Intensive Systems", Proceedings of International Symposium and Workshop on Systems Engineering of Computer Based Systems, Tucson, AZ, (March 1995). Assessment of the maintainability of software systems. Weaknesses and strengths of the techniques described are discussed.

Clements, Paul C. "Formal Methods in Describing Architectures", Proceedings of Monterey Workshop on Formal Methods and Architecture, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1995. Architectural Description Languages (ADLs) are discussed with a focus on Modechart.

Devanbu, Premkumar and Mark A. Jones. "The Use of Description Logics in KBSE Systems", ACM Transactions on Software Engineering and Methodology, v6n2 (April 1997): 141-172. Focus on knowledge-based software engineering and the use of description logics.

Dutta, Soumitra, Luk N. Van Wassenhove and Selvan Kulandaiswamy. "Benchmarking European Software Management Practices", Communications of the ACM, v41n6 (June 1998): 77-86. This article displayed the progress of companies in Europe that used software management practices.

Fowler, Martin and Kendall Scott. UML Distilled: Applying the Standard Object Modeling Language. Reading, MA: Addison Wesley Longman, Inc., 1997. The second chapter in this book discussed the use of UML with an outline development process.

Fry, Christopher, and Henry Lieberman. "Programming as Driving: Unsafe at Any Speed?", Proceedings of CHI'95 Mosaic of Creativity, May 7-11, 1995. CHI Companion 95, Denver, Colorado. Introduction of Zstep 94, a tool to assist developers with programming.

Gause, Donald C. and Gerald M. Weinberg. Exploring Requirements: Quality Before Design. New York, NY: Dorset House Publishing, 1989. This book was used primarily to discuss the requirements phase of the software process.
It gives insight into the challenges of developing requirements for a product, but also discusses the benefits.

Glass, Robert L. "Short-Term and Long-Term Remedies for Runaway Projects", Communications of the ACM, v41n7 (July 1998): 13-15. Survey of management opinions on runaway projects. Some remedies suggested without detail.

Headington, Mark R. and David D. Riley. Data Abstraction and Structures Using C++. Lexington, MA: D.C. Heath and Company, 1994. This book provided certain definitions for the words used to describe the software engineering process. It also provided certain techniques used in the design phase.

Lakos, John. Large-Scale C++ Software Design. Reading, MA: Addison Wesley Longman, Inc., 1996. This book gives several tips for designing better software products using C++. However, the first chapter is the one that was focused on for this paper. It discussed general design and quality information for software design, such as designing for reuse and involving quality assurance in the design phase.

Kemerer, Chris F. "Progress, Obstacles, and Opportunities in Software Engineering Economics", Communications of the ACM, v41n8 (August 1998): 63-66. This article was a good summary of the recent advances of software engineering.

Kiczales, Gregor. "Beyond the Black Box: Open Implementation", January 1996. Discussion of the Open Implementation Design method. Comparison of the Open Implementation method with the Black-Box method.

Kiczales, Gregor, John Lamping, Christina Videira Lopes, Chris Maeda, Anurag Mendhekar and Gail Murphy. "Open Implementation Design Guidelines", Association for Computing Machinery, 1997. Discussion of the Open Implementation Design method. Comparison of the Open Implementation method with the Black-Box method.

Kogut, Paul and Paul Clements. "The Software Architecture Renaissance", September 1998. Discussion of software architectures: the advantages and disadvantages, and the definition.

Krasner, Herb. "Looking Over the Legal Edge at Unsuccessful Software Projects", Cutter IT Journal, v11n4 (April 1998): 33-40. Discusses negative project outcomes and the legal implications that follow. Gives some helpful hints to avoid this problem.

Teng, James T. C., Seung Ryul Jeong and Varun Grover.
"Profiling Successful Reengineering Projects", Communications of the ACM, v41n6(June 1998): 77-86. This article gave an example of a software engineering process for reengineering projects. Measurements were taken to evaluate the process the article discussed. Ungar, David, Henry Lieberman and Christopher Fry. "Debugging and the Experience of Immediacy", Communications of the ACM, v40n4(April 1997): 38-44. Human factors of the programmer. Making the experience easier. 86


Zave, Pamela and Michael Jackson. "Four Dark Corners of Requirements Engineering", ACM Transactions on Software Engineering and Methodology, v6n1 (January 1997): 1-30. Discussion on what requirements engineering entails. This article focuses on the problems and some of the solutions to requirements engineering.


REFERENCES

Advanced Concepts Center. Advanced Object-Oriented Analysis and Design Using UML. Advanced Concepts Center of Lockheed Martin Corp., 1997.

Ambriola, Vincenzo, Reidar Conradi and Alfonso Fuggetta. "Assessing Process-Centered Software Engineering Environments", ACM Transactions on Software Engineering and Methodology, v6n3 (July 1997): 283-329.

Anonymous. "Software Metrics Best Practices", Methods & Tools, v6n3 (March 1998): 13-14.

Baecker, Ron, Chris DiGiano and Aaron Marcus. "Software Visualization for Debugging", Communications of the ACM, v40n4 (April 1997): 44-54.

Barsh, Gregory S. "If Not This, What? The Internet as Cause to Refine Your Internal Procedures", Cutter IT Journal, v11n4 (April 1998): 28-32.

Becker, Shirley A., Michael Deck and Tove Janzon. "Cleanroom and Organizational Change", Proceedings of Pacific Northwest Software Quality Conference, 1996.

Boehm, Barry. "Software Risk Management: Principles and Practices", DATAPRO publication AS20-600-201. Delran, NJ: McGraw-Hill, (September 1991).

Brown, Alan W., David J. Carney and Paul C. Clements. "A Case Study in Assessing the Maintainability of Large, Software-Intensive Systems", Proceedings of International Symposium and Workshop on Systems Engineering of Computer Based Systems, Tucson, AZ, (March 1995).

Burke, Steven. "Radical Improvements Require Radical Actions: Simulating a High Maturity Software Organization", CMU/SEI-96-TR-024, June 1997.

Buschmann, Frank. "Rational architectures for object-oriented software systems", Journal of Object Oriented Programming, (September 1993): 30-41.

Carnegie Mellon Software Engineering Institute. "The Capability Maturity Model for Software", June 1998.

Carr, Marvin J. "Risk Management May Not Be for Everyone", IEEE Software, v14n3 (May/June 1997): 21, 24.


Clements, Paul C. "Formal Methods in Describing Architectures", Proceedings of Monterey Workshop on Formal Methods and Architecture, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1995.

Cockburn, Alistair A. R. "In Search of a Methodology", Object Magazine, (July-August 1994): 53-56, 76.

Curtis, Bill, Mark C. Paulk, Mary Beth Chrissis and Charles V. Weber. "Capability Maturity Model, Version 1.1", IEEE Software, v10n4 (July 1993): 18-27.

Deck, Michael. "Cleanroom Software Engineering and "Old Code": Overcoming Process Improvement Barriers", Proceedings of Pacific Northwest Software Quality Conference, 1995.

Deck, Michael. "Cleanroom Software Engineering: Quality Improvement and Cost Reduction", Proceedings of Pacific Northwest Software Quality Conference, 1994.

Deck, Michael. "Cleanroom Software Engineering Myths and Realities", Proceedings of Quality Week '97, San Francisco, CA, May 27-30, 1997.

Deck, Michael. "Cleanroom Practice: A Theme and Variations", 9th International Software Quality Week (QW '96), San Francisco, CA, May 21-24, 1996.

Devanbu, Premkumar and Mark A. Jones. "The Use of Description Logics in KBSE Systems", ACM Transactions on Software Engineering and Methodology, v6n2 (April 1997): 141-172.

Dutta, Soumitra, Luk N. Van Wassenhove and Selvan Kulandaiswamy. "Benchmarking European Software Management Practices", Communications of the ACM, v41n6 (June 1998): 77-86.

Fayad, Mohamed. "Software Development Process: A Necessary Evil", Communications of the ACM, v40n9 (September 1997): 101-103.

Fowler, Martin and Kendall Scott. UML Distilled: Applying the Standard Object Modeling Language. Reading, MA: Addison Wesley Longman, Inc., 1997.

Fry, Christopher, and Henry Lieberman. "Programming as Driving: Unsafe at Any Speed?", Proceedings of CHI '95 Mosaic of Creativity, May 7-11, 1995, CHI Companion 95, Denver, Colorado.

Gause, Donald C. and Gerald M. Weinberg. Exploring Requirements: Quality Before Design. New York, NY: Dorset House Publishing, 1989.


Ginsberg, Mark P. and Lauren H. Quinn. "Process Tailoring and the Software Capability Maturity Model", CMU/SEI-94-TR-24, November 1995.

Glass, Robert L. "Short-Term and Long-Term Remedies for Runaway Projects", Communications of the ACM, v41n7 (July 1998): 13-15.

Hamu, David S. and Mohamed E. Fayad. "Achieving Bottom-Line Improvements with Enterprise Frameworks", Communications of the ACM, v41n8 (August 1998): 110-113.

Haythorn, Wayne. "What is object-oriented design?", Journal of Object Oriented Programming, (March-April 1994): 68-78.

Headington, Mark R. and David D. Riley. Data Abstraction and Structures Using C++. Lexington, MA: D.C. Heath and Company, 1994.

Herbsleb, James, David Zubrow, Dennis Goldenson, Will Hayes, and Mark C. Paulk. "Software Quality and the Capability Maturity Model", Communications of the ACM, v40n6 (June 1997): 30-40.

Higuera, Ronald P. and Yacov Y. Haimes. "Software Risk Management", CMU/SEI-96-TR-012, June 1996.

Hines, Braden E. and Michael Deck. "Cleanroom Software Engineering for Flight Systems: A Preliminary Report", Proceedings of IEEE Aerospace Conference, Snowmass, Colorado, Feb. 1-7, 1997.

Hollenbach, Craig, Ralph Young, Al Pflugrad, and Doug Smith. "Combining Quality and Software Improvement", Communications of the ACM, v40n6 (June 1997): 41-45.

IBM, Inc. "Cleanroom Software Engineering", June 1998.

Jarzabeck, Stan and Riri Huang. "The Case for User-Centered CASE Tools", Communications of the ACM, v41n8 (August 1998): 93-99.

Kazman, Rick, Mark Klein, Mario Barbacci, Tom Longstaff, Howard Lipson and Jeremy Carriere. "The Architecture Tradeoff Analysis Method", Software Engineering Institute, Pittsburgh, PA, August 1998.

Kemerer, Chris F. "Progress, Obstacles, and Opportunities in Software Engineering Economics", Communications of the ACM, v41n8 (August 1998): 63-66.

Kiczales, Gregor. "Beyond the Black Box: Open Implementation", January 1996.


Kiczales, Gregor, John Lamping, Christina Videira Lopes, Chris Maeda, Anurag Mendhekar and Gail Murphy. "Open Implementation Design Guidelines", Association for Computing Machinery, 1997.

Kogut, Paul and Paul Clements. "The Software Architecture Renaissance", September 1998.

Krasner, Herb. "Looking Over the Legal Edge at Unsuccessful Software Projects", Cutter IT Journal, v11n4 (April 1998): 33-40.

Lakos, John. Large-Scale C++ Software Design. Reading, MA: Addison Wesley Longman, Inc., 1996.

Lister, Tim. "Risk Management Is Project Management for Adults", IEEE Software, v14n3 (May/June 1997): 20, 22.

Meyers, Scott. More Effective C++: 35 New Ways to Improve Your Programs and Designs. Reading, MA: Addison-Wesley Publishing Company, 1996.

Orfali, Robert, Dan Harkey and Jeri Edwards. The Essential Client/Server Survival Guide, Second Edition. New York: John Wiley & Sons, Inc., 1996.

Paulk, Mark C. "Effective CMM-Based Process Improvement", Proceedings of the 6th International Conference on Software Quality, Ottawa, Canada, 28-31 October 1996: 226-237.

Paulk, Mark C. "Process Improvement and Organizational Capability: Generalizing the CMM", Proceedings of the ASQC's 50th Annual Quality Congress and Exposition, Chicago, IL, May 1996: 92-97.

Sebesta, Robert W. Concepts of Programming Languages. Reading, MA: Addison-Wesley Publishing Company, 1996.

Slaughter, Sandra A., Donald E. Harter and Mayuram S. Krishnan. "Evaluating the Cost of Software Quality", Communications of the ACM, v41n8 (August 1998): 67-73.

Teng, James T. C., Seung Ryul Jeong and Varun Grover. "Profiling Successful Reengineering Projects", Communications of the ACM, v41n6 (June 1998): 77-86.

Toth, Kal. "Software Engineering Institute's Capability Maturity Model", September 1997.


Ungar, David, Henry Lieberman and Christopher Fry. "Debugging and the Experience of Immediacy", Communications of the ACM, v40n4 (April 1997): 38-44.

Whittaker, James A. and Michael Deck. "Lessons Learned from Fifteen Years of Cleanroom Testing", Proceedings of Software Testing, Analysis, and Review (STAR) '97, May 5-9, 1997.

Zave, Pamela and Michael Jackson. "Four Dark Corners of Requirements Engineering", ACM Transactions on Software Engineering and Methodology, v6n1 (January 1997): 1-30.