Citation
Tales of adoption

Material Information

Title:
Tales of adoption: a case study in the adoption of a computer-based technology of instruction
Portion of title:
Case study in the adoption of a computer-based technology of instruction
Creator:
Lowry, May
Publication Date:
1996
Language:
English
Physical Description:
xii, 256 leaves : illustrations, forms ; 29 cm

Subjects

Subjects / Keywords:
Computer-assisted instruction ( lcsh )
Educational innovations ( lcsh )
Educational technology ( lcsh )
Learning, Psychology of ( lcsh )
Computer-assisted instruction ( fast )
Educational innovations ( fast )
Educational technology ( fast )
Learning, Psychology of ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 247-256).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Doctor of Philosophy, Educational Leadership and Innovation.
General Note:
School of Education and Human Development
Statement of Responsibility:
by May Lowry.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
37160032 ( OCLC )
ocm37160032
Classification:
LD1190.E3 1996d .L69 ( lcc )

Full Text
TALES OF ADOPTION:
A CASE STUDY IN THE ADOPTION OF A
COMPUTER-BASED TECHNOLOGY OF INSTRUCTION
B.A., Wheeling Jesuit College, 1971
M.A.T., Northwestern University, 1972
M.S.W., University of Maryland, 1978
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Educational Leadership and Innovation
by
May Lowry
1996
i


© 1996 by May Lowry
All rights reserved.
ii


This thesis for the Doctor of Philosophy
degree by
May Lowry
has been approved

Date


DEDICATION
I dedicate this work to the many people who
have supported me without question or fail:
My partner Kathleen June, who has always
believed in me;
My sister Phyllis Gubanc, who encouraged me
to speak in my own voice;
My friend Regina Kilkenny, who told me you
don't have to be brilliant, just persistent;
My grandmother Ellen O'Connell, who inspired
our family to learn;
And especially to my loving husband, Larry Fisher,
who kept the home fires burning.
IV


Lowry, May (Ph.D., Educational Leadership and Innovation)
Tales of Adoption: A Case Study in the Adoption of a Computer-Based
Technology of Instruction
Thesis directed by Associate Professor Brent G. Wilson and Associate
Professor W. Alan Davis
ABSTRACT
Instructional technology as a field has excelled in the design and
development of a variety of creative and sophisticated instructional tools. An
emerging issue for IT is the disappointing rate of adoption of these
technologies by the users for whom they were designed. Models and
concepts about change from the literature on organizational development and
related disciplines shed light on the process of the successful adoption of
technologies of instruction. This study pursued the research question: What
factors support the successful adoption of technologies of instruction? The site of
the study was COMET, an organization which provides distance learning to
meteorologists. One of COMET's products is a series of computer-based,
multimedia training modules specifically for weather forecasters. Although
the modules are well-designed using state-of-the-art instructional
technologies, their rate of use among the forecasters is quite low. Based on a
national evaluation of module use, the study examined the factors supporting


and inhibiting the adoption of these computer-based modules. The study
concluded that five factors identified in the literature were important in
COMET's efforts to encourage the adoption of the modules: fit with the
culture of the end users, adequate support, channels of communication,
significant participation of the end users, and clear direction or vision
regarding the innovation. In addition, the study concluded that, in support of
these five important factors, COMET would likely need to embark on
significant second order change of its operating paradigm, mission,
organizational culture, and core business processes in order to successfully
promote the adoption of its technologies of instruction.
This abstract accurately represents the content of the
candidate's thesis. I recommend its publication.
Signed
Brent G. Wilson
Signed
W. Alan Davis


ACKNOWLEDGMENTS
I would like to acknowledge the help and cooperation of
the COMET Program,
whose talented staff and creative leadership invited us into a partnership
to examine the mysteries of adoption and change.
COMET is a program of the University Corporation for Atmospheric
Research, and is funded in part by the National Oceanic and
Atmospheric Administration.
I would also like to acknowledge the support and guidance of the
community of learning at the
School of Education at the University of Colorado at Denver,
especially the Dean and Faculty of
Instructional Technology.
vii


CONTENTS
CHAPTERS
1. Introduction......................................... 1
The Problem........................................... 2
This Study........................................... 5
COMET.......................................... 6
A Question of Adoption.......................... 8
COMET's Response............................... 9
A Case Study of COMET........................... 9
Methods........................................ 10
Significance................................... 11
2. Introduction..........................................13
What is Instructional Technology?.............. 14
Hard and Soft Technologies......................14
What is Adoption?.....................................16
Definition..................................... 16
Adoption is a Human Process.................... 17
Change Agents................................ 19
What is Successful Adoption?......................... 19
When is Adoption a Concern?.......................... 20
Adoption of Technology in IT......................... 23
Social Models of Adoption...................... 24
If It's a Good Idea, Why Aren't We Doing
More To Promote Adoption?............................ 26
Historical Reasons............................. 26
Cultural Reasons................................28
Economic Reasons............................... 29
How Can We Improve the Rate of Adoption?............. 31
viii


Context is Everything............................ 31
Change........................................... 32
Comparison of the Models......................... 35
How Do Change Models Apply to IT?...................... 36
Successful Change................................ 37
Summary...........................................40
3. Introduction.............................................42
Qualitative Inquiry.....................................43
The Case Study Method............................ 44
This Study............................................ 45
Selection of COMET as a Site..................... 46
Scope of the COMET Evaluation.....................47
Scope of This Study.............................. 48
Research Questions............................... 49
Data Collection........................................ 50
Six Sources of Data.............................. 51
Five Data Collection Strategies.................. 53
Focus Groups and Interviews...................... 54
Questionnaire.................................... 55
Site Visits...................................... 58
Extant Data...................................... 61
Data Analysis.................................... 62
Assurances and Confidentiality................... 64
Limitations of the Study............................. 65
My Biases.............................................. 66
4. Introduction............................................ 69
Westcoast Forecast Office........................ 70
North Island Air Force Base...................... 72
South Bay Naval Air Station...................... 74
IX


Forecasting............................................ 76
The Forecast Revolution.......................... 78
COMET.................................................. 79
COMET Programs................................... 83
The Distance Learning Program.......................... 84
What Are We Going to Teach and
How Are We Going to Teach It?.................... 84
The Modules............................................ 85
Early Successes................................ 86
Evolution of the Design Process ....................... 88
COMET's Concern........................................ 88
The Evaluation......................................... 91
The Manager...................................... 93
The SOO.......................................... 93
The Lieutenant.................................. 94
Findings of the Evaluation............................ 95
COMET's Response to the Evaluation.................... 102
New Design Guidelines........................... 103
Re-thinking the Modules as Curriculum........... 106
New Training Approaches......................... 106
COMET Educational Resource Center............... 108
Decentralizing the Role of the
Subject Matter Expert..................... 108
In-House Forecaster............................. 109
What Does a New SOO Need?...................... 109
Transition to CD-ROM Technology................. 109
Changing the Role of the Heads of
the Customer Groups...................... 110
Summary......................................... 110
x


5. Introduction........................................... 111
Factors Contributing to Successful Adoption........... 113
Factor 1: Fit with the Culture of the End Users. 114
Factor 2: Adequate Support...................... 121
Factor 3: Channels of Communication............. 122
Factor 4: Participation of the End Users........ 124
Factor 5: Clear Vision or Direction............. 127
Relative Importance of Factors........................ 129
First Order and Second Order Change................... 131
Second Order Change for Kids.................... 132
COMET and Second Order Change..........................137
A Change in Assumptions......................... 140
Level of Organizational Paradigm................ 140
Level of Organizational Mission and Purpose..... 141
Level of Organizational Culture..................143
Level of Organizational Core Processes.......... 145
Summary......................................... 146
Lessons Learned....................................... 147
Systems..........................................147
The Five Factors................................ 148
The Role of the Instructional Designer.......... 150
Technology...................................... 152
Orders of Change................................ 153
Future Research....................................... 154
APPENDIX..................................................... 156
COMET Final Evaluation................................ 157
Site Visit Interview Protocol......................... 225
Focus Group Protocol.................................. 227
Questionnaire Cover Letter.............................229
xi


Sample Questionnaire.................................230
Questions for COMET Software Administrator.......... 239
Questions for the Training Coordinator...............241
Questionnaire Distribution.......................... 243
Evaluation Timeline................................. 244
Letters of Agreement from COMET..................... 245
REFERENCES ............................................... 247
xii


CHAPTER 1
INTRODUCTION
You want to see an instructional designer wince?
Just ask to be shown a situation in which one of
his or her design products is working as planned
within an organization. Such a request almost
always results in a blank stare or an evasive response.
Standard evasive responses include giving a list of
the constraints that prevented superior efforts from
being implemented, gnashing of teeth over the
incompetence of the people on the firing line that
sabotaged some really noble project, or passing along
a dusty example of a 10-year-old product along with
a tale about how well it worked before being abandoned.
(Burkman, 1987, p. 429)
The purpose of this study was to lend insight into what factors
contribute to the successful adoption of technologies of instruction. As a field
of study, instructional technology (IT) is young. Born in the 1920's with the
formalization of instructional planning (Saettler, 1968), IT is young compared
to education, the profession of its origin, and compared to other ancient
professions like law, medicine, architecture, and engineering. As Glaser
(1976, p. 3) comments, the traditional role of the professions is to take
information from the sciences, and focus on how to design and make
interventions "... aimed at changing existing situations into preferred ones."
1


And so it is with instructional technology, whose role is to draw on theory
about how people learn, and use that theory to design and make
interventions to support their learning.
As professions mature, they seem to follow a typical pattern of
growth (Brint, 1994; Elliott, 1972). Professions are born as they start to do
work that fills a need. Through doing that work, they develop a specific
knowledge base, theoretical constructs, a unique identity, and
acknowledgment of their relationship to other professions (AECT, 1979). As a
result of becoming recognizable, professions also typically begin to accept an
expanded role in their community, and hold themselves more consistently
responsible for the effects of their work (Brint, 1994; Elliott, 1972).
Apparently, this evolution of identity and responsibility is an on-going
feature of the life cycle of professions: witness modern medicine's titanic
struggles over its responsibility in the face of contemporary issues like
assisted suicide and genetic engineering.
We can see this pattern of growth currently in IT. As the field debates
and redefines who it is and what it does (Seels & Richey, 1994), it is an
inevitable benchmark of the growth of the profession to grapple with
questions about our appropriate role in the communities we serve.
The Problem
For example, a serious question has lately arisen in IT about an
imbalance between our unparalleled advances in designing creative and
technologically advanced interventions, and our limited success in being
2


consistently helpful to the people whose learning we want to support. The
problem doesn't seem to lie in the design and development of instructional
products and processes, nor in our technical expertise or our ability to come
up with creative uses for existing technology, but in having the instructional
products and processes put to their intended use on a regular basis.
In defining instructional technology trends for the 1990s, Ely (1991, p.
52) identified a growing concern about the lack of instructional designs that
are "... used directly in day-to-day classroom activities." Clancey (1993, p. 7),
referring to his internationally respected work in expert tutoring systems,
admitted his frustration that, "After more than a decade ... not a single
program I worked on was in routine use." Surry and Gustafson (1994), in
their discussion of the problem of instructional designers' chronic lack of
success in getting programs adopted, summarize by quoting Tyler (1980, p.
11): "Many developers of technology accept the view that as time passes,
there will be increasing use of the innovation until it has become a common
element in school practice." As instructional designers, we persist in
believing that, if instructional design is effective, it will "... automatically be
attractive to potential adopters" (Surry & Gustafson, 1994, p. 1). Keen (1976,
p. 3) describes the dilemma this way: "... [We] have concentrated on design
independent of implementation, assuming that the power of a good idea is
enough to assure its adaptation. Reality is painfully different."
The painful reality is that, although instructional products and
programs are designed to be used by someone somewhere to support
learning, the designers too often appear to live in one world, and the someone
3


and the somewhere in another. We tend to live in the world of our
laboratory or design studio, and our clients live in the dynamic, complex, and
individualistic world of their every day work and school. It has been difficult
for us to bridge that distance.
Consider the example of the computer, one of the most widely-
adopted technologies of the late 20th century. The contemporary school
described by the congressional Office of Technology Assessment (1995) is one
that does not have an overhead projector that works, let alone
computers in regular use. The OTA report concludes that, "Although three-
quarters of schools report having sufficient number of computers and
televisions, they do not have the system or building infrastructure to use
them" (p. 12). As usual, poor schools fare even worse. Even when computers
in schools abound, being available or accessible is a long way from being used
consistently and effectively (Harvey, Kell, & Drexler, 1990). Ely (1991, p. 43)
concurs that, while many schools have computers, "The potential for
computers in teaching and learning has not been advancing."
One of the widest gaps between the potential and the use of
technology for instruction is in higher and adult education (Koontz, 1989;
Schieman & Fiordo, 1990; Newman, 1989). Frank Newman, President of the
Education Commission of the States and former President of the University of
Rhode Island, analyzed the use of information technology in the life of the
university (Newman, 1989). He argues that higher education is
extraordinarily effective in its use of information technology in three of its
main functions (research, especially in the hard sciences; administration;
4


and library services) and extraordinarily ineffective in "the university's
most fundamental task, namely how it teaches students" (Newman, 1989, p.
2). He concludes that
In the teaching and learning function of the
university, we are still in the category (of computer
use) that we in the academic world euphemistically
call "potential." That is to say we have made
practically no progress. (Newman, 1989, p. 2)
From young learners to adults, new technologies of instruction have
not made the kind of impact intended and hoped for by the field of
instructional technology. The generally dismal opinion among educational
researchers is summed up by LaFrenz and Friedman (1989, p. 223):
there is little doubt that the vision of the computer's
power to transform education has not been fulfilled.
The hard fact is that the impact of the computer on
the teaching and learning process has not yet been
significant.
The adoption of learning technology is a problem for IT.
This Study
This study posed the question, What factors contribute to the successful
adoption of technologies of instruction? In order to pursue this question, I
explored a classic case of instructional designers attempting to discover why
5


their technology, in this case computer-based interactive learning programs,
is not in routine use.
COMET
In 1989, the University Corporation for Atmospheric Research (UCAR) in
Boulder established a project called COMET (Cooperative Program for
Operational Meteorology, Education, and Training). COMET's1 purpose was
to advance the science of weather forecasting by developing three programs:
one program designed to stimulate research in meteorology and hydrology, a
second designed to develop and teach new meteorological techniques, and a
third designed to provide distance learning to weather forecasters in offices
throughout the nation. The focus of this study is the last of these three,
COMET's distance learning program.
The heart of this program is the Operational Forecaster's Multimedia
Library, a series of state-of-the-art interactive multimedia computer modules.
Figure 1.1 illustrates a screen from a COMET module on forecasting methods.
The modules are designed to teach scientific principles, basic skills, and
emerging issues in weather forecasting. The modules are built around
engaging scenarios depicting problems faced by forecasters. Learners are
encouraged to describe how they would approach the problem, and are
provided with help in the form of information, advice, quizzes, animation,
1 COMET's registered name is the COMET Program. For simplicity in this study, I use
COMET as a proper noun and the name of the organization.
6


[Module screen text: a 350 hPa map of heights and temperatures marked with seven
gold rings, three associated with a temperature gradient beneath a jet streak and
the rest with an upper-level trough. The learner is asked which Q-vector patterns
correspond to the temperature gradients in each ring, and uses the mouse to drag
sets of yellow Q-vectors onto the matching gold rings.]
Figure 1.1. This is a sample screen from a COMET computer-based instructional module.
(Used with permission, COMET, 1996).


and QuickTime video clips of experts discussing the particular weather topic
and forecasting techniques.
A Question of Adoption
By mid-1995, COMET had released nine modules on seven forecasting
topics, such as Doppler radar, marine weather, flash flooding, and the use of
satellites in forecasting. Seven other modules are in the design and
development stages. COMET's modules are exemplary instructional
products. They are well-designed, interactive, visually appealing, and feature
up-to-date topics in forecasting. COMET's formal evaluative data indicate
that people who use the modules enjoy them (Surry & Okey, 1993).
Nevertheless, COMET has been concerned and disappointed over their
current rate of use.
At the start of the study, COMET estimated that the modules were
currently being used in 550 forecast offices of the National Weather Service,
the Air Force Weather Service, and the Naval Weather Service.
They believed that the modules were being used by 50% of the forecasters in
50% of the weather offices nation-wide. Their hope was that they would be
used by 75% of the forecasters in 75% of the weather offices nation-wide.
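Taken at face value, and assuming purely for illustration that module use is
spread evenly across offices, those two figures imply very different levels of
overall reach. The minimal sketch below simply multiplies the two shares; the
even-spread assumption and the calculation are illustrative and are not part of
COMET's own estimate.

```python
# Illustrative only: assumes module use is spread evenly across offices, so
# overall forecaster coverage is the product of the two quoted shares.
def overall_coverage(share_of_offices: float, share_of_forecasters: float) -> float:
    """Fraction of all forecasters reached, under the even-spread assumption."""
    return share_of_offices * share_of_forecasters

estimate = overall_coverage(0.50, 0.50)  # COMET's estimate at the start of the study
goal = overall_coverage(0.75, 0.75)      # COMET's hoped-for rate of use

print(f"Estimated coverage: {estimate:.0%}")  # 25%
print(f"Hoped-for coverage: {goal:.0%}")      # 56%
```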
Lately, COMET had also become concerned with changing conditions
in the world of their end users, the forecasters. Their three main forecasting
customers are the National Weather Service, the Air Force Weather Service,
and the Naval Weather Service. All of these government agencies are facing
massive budget and staffing cuts, including the potential dismantling of the
8


Commerce Department (home of the National Weather Service), and the
current downsizing of the Air Force and Navy.
On another front, COMET is facing new competition. There are a
growing number of instructional designers who are beginning to provide
instruction in weather forecasting. These competitive programs are available
as the result of new authoring software and popular new communication
channels, like the Internet.
COMET's Response
COMET remains committed to serving its customers in the face of
these new needs and trends. In response, they commissioned an evaluation
of the design and use of their current modules. At COMET's request, Brent
Wilson and I led a team charged with conducting this evaluation of the
modules.
Through this evaluation, COMET was seeking information on the
value and effectiveness of its computer modules so that they could use the
information to assess and improve their technology and service to forecasters.
They wanted data on the effectiveness of their modules, and suggestions on
their process of design and implementation. That was COMET's desired
outcome of the evaluation.
A Case Study of COMET
COMET's dilemma over lack of full adoption of their technology is an
excellent example of the same thorny problem described in the current IT
9


literature. I used this evaluation as the data vehicle for a case study of the
adoption process. My intention was to gather data from several sites across
multiple agencies to illuminate COMET's adoption process, and to document
what they did to encourage adoption of the modules, including what worked
and did not work about the process.
The first outcome of this study was the evaluation commissioned by
COMET (found in the Appendix to this study). The evaluation document
was meant to serve COMET in assessing the design and delivery of its
distance learning program. The second outcome was this case study of
COMET's adoption process, which is meant to serve the community of
designers and scholars who are seeking information about the process of
adoption of instructional technology.
Methods
Although there is an extensive body of research on adoption of
innovations generally (Rogers, 1995), the study of the adoption of instructional
technology is nascent. Calling adoption "the overlooked literature," Sachs
(1994) and others have begun to explore the specific application of this
information to instructional technology (Surry & Gustafson, 1994; Ely, 1991).
The qualitative method of inquiry is ideal when research is in the
exploratory stage (Mellon, 1990). In particular, the case study method of
qualitative inquiry is especially suited to this research. Yin and others (Yin,
1994; Krathwohl, 1993; Mellon, 1990) argue that the case study is the
methodology of choice when researching contemporary processes to discover
10


how and why certain decisions are made and how they are implemented, and
what happens as a result of those actions. This was a good fit for what I
wanted to accomplish. Rogers (1983, p. 358) adds that, "Given our present
rather limited understanding of innovation in organizations, the in-depth
approach of process research is more appropriate." A detailed description of
the study methodology is provided in chapter three.
Significance
The mission of IT is to benefit learners through instructional products
and processes. Because the IT community recognizes adoption of
instructional technologies as a problem, seeking information about the
problem is significant at this stage in our development as a profession.
This study adds to an empirical basis for reflecting on the issue of the
boundaries that define the job of the instructional technologist: Does the job
of the instructional designer include adoption? If it does, what is the
appropriate role in that process? If it does not, then whose job is it?
Academic disciplines do not typically enjoy the benefit of each other's
knowledge domains and constructs; the result is the proverbial hardening of the
categories. The literature bases from psychology, sociology, and
organizational development offer us models and concepts regarding the
adoption process. This study is one more contribution to the effort at cross-
disciplinary insight.
In changing times, even publicly funded agencies like COMET are no
longer able to count on a predictable base of customers. Many instructional
11


designers are newly presented with the need to remain competitive. This
study adds to the literature of how one such agency approached the adoption
of technology in support of its mission.
12


CHAPTER 2
INTRODUCTION
If one wishes to understand the term
holy water one should not study the
properties of the water but rather
the assumptions and beliefs of the
people who use it. That is, holy water
derives its meaning from those who
attribute a special essence to it.
Szasz (1974, p. 243)
The adoption of technology is a problem for IT. How can we begin to
conceptualize and understand the problem? In constructing a conceptual
scaffold, this chapter reviews fundamental ideas and models from the
disciplines of sociology, psychology, and organizational development. It
defines terms critical to the study, including instructional technology, hard
technologies and soft technologies. Next, it explores the concept of adoption
of technologies, and finally reviews research from the literature on systems,
change and organizational development as it relates to the adoption of
technologies.
13


What is Instructional Technology?
Definitions of instructional technology fill volumes. This study uses
the current definition approved by the Association of Educational
Communication and Technology in 1994:
Instructional Technology is the theory and practice
of the design, development, utilization, management,
and evaluation of products and processes for learning
(Seels & Richey, 1994, p. 36).
Debated and refined by IT professionals over the course of several years, this
definition is the final product of a collaborative effort (Seels & Richey, 1994).
The definition is particularly suited to this study because it highlights the job
of instructional technologists not only in designing and developing
instructional products and processes, but also in playing a role in how the
products and processes are used, managed, and evaluated. The important
point is that the responsibility for use of the technology does not stop at the
stage of design and development.
Hard and Soft Technologies
Another important element to note is that the technology in this
definition of IT can be both hard products, like computers, videos, and
satellites; and soft processes consisting of information and instruction, for
instance a word processing program, a formative evaluation plan, or a design
for a geography lesson (Gentry, 1991; Reiser & Salisbury, 1991). Gagne (1987)
14


sums up by describing hard technology as media-based, and soft technology
as media-independent.
Drawing from its Greek root tekhnologia, meaning a systematic
treatment of art, Saettler (1968, p. 6) defines technology as "any practical art
using scientific knowledge." Although the word technology often brings to
mind hard products, the soft technologies are also developed by instructional
designers, and are "... at least as important as media and equipment in
improving instructional practice" (Wilson, 1987, p. 7). Reiser and Salisbury
(1991) consider instructional technology to be whatever solves an
instructional problem, both the plan and/or the media that supports the plan.
They contend that this systems approach to technology has become
"___the standard view of many professionals who claim instructional
technology as their field of endeavor" (p. 228).
These conceptualizations of hard and soft technology are so closely
connected that, although we can imagine soft technology without hard, it is
difficult to imagine hard technology without the accompanying soft
technology process or instructions for how to use it. In any case, both kinds
of technology are used in instruction as aids in extending our learning
capacities, and both kinds can be considered and tracked in questions of
adoption (Rogers, 1995).
15


What is Adoption?
Definition
What do we mean when we say that technology has been adopted?
Adoption is a concept that has been developed over the past seventy years in
a body of research known as diffusion of innovations. Diffusion of
innovations is a well-established research tradition synthesizing work from
the disciplines of anthropology, psychology, sociology, business, and
education. Its purpose is to shed light on the nature of new objects and ideas,
how they are spread through a system, and why they are eventually adopted
or rejected. The classic model of adoption is one developed by Everett Rogers
in his seminal work Diffusion of Innovations (1983, 1995). In one form or
another, Rogers' model is what Keeves (1989, p. 584) calls "... still the most
popular for the study of diffusion or for an understanding and use of the
diffusion process."
Rogers (1983, p. 21) defines adoption as "... a decision to make full
use of an innovation as the best course of action available." This decision can
be made by an individual, by a group using consensus, or by a few
individuals who have the power to decide for the group. At first, reaching
this decision sounds like a simple, straightforward, rational process. In some
cases it is, but life experience and the research data tell us that it's often not
simple at all.
16


Adoption is a Human Process
The central idea and most important consideration in understanding
adoption is that it is a human process, carried out by people and through
people (Rogers, 1983). While this might sound self-evident, it becomes
important because of the nature of modern technologies of instruction. The
past decade has seen an explosion of technology features and design choices.
Sound, pictures, videos, color, animation, real-time interactive features,
individualization down to including the learner's name: these options
attract us to the power of the technology, and make it easy to assume that the
key to whether the technology is used is the technology itself. However, the
key determinant of adoption is not the technology, but the potential adopters'
perception of the technology (Kearsley & Lynch, 1994; Cuban, 1986). As Szasz
suggests in the quote at the beginning of this chapter, the key to
understanding holy water lies not in the properties of the water, but in the
devotee's perception of the water (Szasz, 1974).
The adoption process is a human process, and as such is embedded in
any number of social, psychological, historical, economic, and logistical
variables, both explicit and implicit. To add more complexity, this process
depends on perceptions not only of the individual end user but also
perceptions of others who are a part of his or her universe (Bateson, 1972).
Rogers (1995) provides an example of how these perceptions affect adoption
by suggesting five perceived characteristics of any innovation. He argues that
the more positive the perceptions the potential adopters have of the
17


innovation, the more likely they will adopt it. The perceptions typically
prompt a potential adopter system to ask questions, spoken or unspoken,
conscious or pre-conscious, about the innovation:
Compatibility. Does this fit with our values, our past experiences, our
self-image? Does it fit with our history of using innovations?
Relative Advantage. Is this innovation better than what we already
have? Is it more economical, more socially prestigious, more convenient,
more satisfying? What is the risk involved in adopting it? Is it worth the
change?
Observability. Can we see it in action first? Can we watch to see how
other people like us use it?
Complexity. How simple or complex is this? Is it easy enough to
understand and use? How about on-going use and maintenance? Can we
easily explain it to others?
Trialability. Can we try it out on a limited basis first? Can we change
our mind later? Can we adapt it to work in our situation?
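One way to picture how a design team might work with these perceived
attributes is as a simple screening checklist filled out from the potential
adopters' point of view. The sketch below is illustrative only: the five
attribute names follow Rogers (1995), but the 1-to-5 scale, the scoring rule,
and the example ratings are assumptions added here, not part of his model.

```python
# A minimal sketch: rate a candidate innovation on Rogers' five perceived
# attributes as the potential adopters see it. The 1-5 scale, the scoring
# rule, and the example ratings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PerceivedAttributes:
    compatibility: int       # fit with values, past experience, self-image
    relative_advantage: int  # better than what adopters already have?
    observability: int       # can adopters see it in action first?
    complexity: int          # higher = harder to understand and use
    trialability: int        # can it be tried on a limited basis?

    def favorability(self) -> float:
        """Rough overall favorability on a 1-5 scale; complexity counts against."""
        positives = (self.compatibility + self.relative_advantage +
                     self.observability + self.trialability)
        return (positives + (6 - self.complexity)) / 5

# Hypothetical ratings gathered from potential adopters (1 = low, 5 = high).
module = PerceivedAttributes(compatibility=2, relative_advantage=4,
                             observability=3, complexity=4, trialability=3)
print(f"Mean favorability: {module.favorability():.1f} out of 5")
```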
Adoption of a technology is a process. It can be planned or
spontaneous, quick or slow, complete or incomplete, but it is most
importantly a process, not a single, unitary event (Rogers, 1983). In other
words, it is not at all a given that a technology will be adopted just because it
seems to be a good idea.
18


Change Agents
One more key adoption concept is that of change agent. Individuals
tend to play different roles in the adoption process, depending on their place
in the system and on the innovation itself. Change agents are individuals
who are especially interested in supporting the change, and who "...
influence clients' innovation decisions" by encouraging and supporting them
to adopt the innovation (Rogers, 1983, p. 28). They "... try deliberately to
bring about a change or innovation in a social organization" (Havelock, 1995,
p. 21). Change agents play a pivotal role in the adoption process, either
directly or by working through others in and out of the system (Havelock &
Zlotolow, 1995).
In IT, designers and developers of instructional products are change
agents. Change agents can also be those who support the instructional design
team, the end users, or some other linking group in the middle who helps
move the process along (Havelock & Zlotolow, 1995; Gross, 1971).
What Is Successful Adoption?
Successful adoption (Rogers' "full use") is somewhat in the eye of the
beholder (Rogers, 1995; Havelock & Zlotolow, 1995). Is the adoption
considered successful if the end users tried it for a week, a month, or a year?
If one person in the group adopts, is it successful? If the technology is
adopted but tinkered with and reinvented until it does not resemble the
original, is it really adopted at all? For purposes of this study, successful
19


adoption is said to have occurred when the instructional technology is in use
by the system for which it was designed, for at least one year, in the general
manner in which it was intended, and at the rate of use at which it was
intended. That is, COMET modules are said to be adopted when they are
used as training aids by about half the forecaster customers over the course of
a year. The adoption is considered successful if both the designers and the
end users agree that this criterion has been met.
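Read operationally, this working definition amounts to a handful of checks
applied together. The minimal sketch below is only one illustrative reading of
it; the thresholds restate the definition above, while the function name,
parameters, and example values are hypothetical.

```python
# Illustrative check of the study's working definition of successful adoption:
# in use by roughly half the intended users, in the intended manner, for at
# least a year, with designers and end users agreeing the criterion is met.
def adoption_successful(share_of_intended_users: float,
                        months_in_use: int,
                        used_as_intended: bool,
                        designers_agree: bool,
                        end_users_agree: bool) -> bool:
    return (share_of_intended_users >= 0.5
            and months_in_use >= 12
            and used_as_intended
            and designers_agree
            and end_users_agree)

# Example: a module reaching a quarter of forecasters for 18 months falls short.
print(adoption_successful(0.25, 18, True, True, True))  # False
```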
When is Adoption a Concern?
In some circumstances, adoption is not a concern of the instructional
designer. Adoption is more or less a concern depending on the type of
instructional product or process, and the relationship of the designer to the
end user for whom the product or process is designed.
Teaching and training are common activities in our everyday world,
and there are many kinds of situations in which a teacher or designer invents
or adapts some tool to encourage learning. These instructional products and
processes may be thought of as falling along a spectrum based on the
relationship of the instructional designer to the customer (Figure 2.1). Different
relationships suggest different kinds of adoption strategies.
20


Textbook-Type:
Adoption strategy is
mass distribution;
designer does not know
the customer.
Client-Type:
Adoption is a concern
of the designer;
adoption strategy
depends on interaction
between the
designer and the
customer.
Classroom-Type:
Adoption strategy is not a
concern because the designer
is the customer.
Figure 2.1. When is adoption a concern? Adoption of the instructional technology is a
concern depending on the relationship of the instructional designer and the customer for
whom the product is designed. Technologies of instruction can be understood as occurring
along a continuum from textbook-type to client-type to classroom-type, each suggesting a
different adoption strategy (categories adapted from Gustafson and Powell, 1991).
On one end of the spectrum is the textbook-type of product or process.
In this instance, a standard tool (for example, a fifth grade geography
textbook, a commercial piece of accounting school software, or a national
Hispanic Studies curriculum) is designed and developed based on a broad
needs assessment of its potential audience of teachers and trainers. The
product is designed for a large class of end users, and is mass distributed
through wholesale and retail outlets. The instructional designers have a
generic group of customers in mind, and neither intend nor imply that the
technology is custom made for any individual users. In fact, the designers
usually never come in contact with the customers, except if the users seek
them out to comment on the product. Adoption strategies are not the
21


purview of the designer, but are business decisions regarding distribution
and marketing.
The other end of the spectrum is the classroom-type product or process.
In this instance, the designers create the instructional tool for their own use
with their own learners, not intending it for use by others. This design is the
ultimate in personalization, and adoption by others is not a concern. An
example of this would be a high school teacher who invents a way to explain
math to her class using the baseball statistics of the local heroes, or a human
resources manager who develops a computer simulation to teach his staff
personnel hiring practices. Classroom-type tools do not involve designers
beyond the initial individual or group. Similar to the textbook-type, adoption
strategies for these classroom-type products are not the concern of
instructional designers.
In this study, my interest was the client-type product or process in the
middle of the spectrum. It is the case in which the tool is designed by one
group (for example in a company, consultancy, or university) and meant to be
used by another identifiable group (for example another department in the
organization, a particular school or school district, a government agency, or a
company). Much of our public work in IT falls into this category.
One instance of this client-type tool would be a computer-based
program designed by a university team to teach diagnostic skills to the
university's new doctors, or a HyperCard program to teach a class of aspiring
psychotherapists how to think through ethical decisions with their clients. In
the case of COMET, designers are charged with helping approximately 4,000
22


forecasters from the National Weather Service, Air Force, and Navy by
designing products and services just for them.
For client-type products, unlike textbook-type and classroom-type,
adoption of the technology is a major concern of the designer. These designs
can be technically perfect according to the requested specifications, but unless
they are adopted, they ultimately become an example of orphaned
technology, left alone to gather the proverbial dust on the shelf (Tessmer &
Harris, 1992). Because the designers and customers of the design are known
to each other, the stakes are higher for the reputation of the designers, and the
collective reputations of individual designers ultimately affect the field of
instructional technology.
Adoption of Technology in IT
If adoption of technology is at least sometimes a major concern, and if
it is likely that adoption is not an automatic outcome, then what does this
imply for the role of the designer of the technology? On this point, the field of
IT is ambivalent. Some design models address the issue of adoption directly,
most notably the systems-based models of Rossett, and of Mager and Pipe
(Gustafson & Powell, 1991). Although the adoption literature conceptualizes
implementation as only one stage of the whole adoption process (Rogers,
1983), these IT models use the terms implementation, utilization and adoption
interchangeably.
The most current description of instructional technology by the
Association for Educational Communications and Technology, one of the
23


field's premier professional organizations, recognizes utilization (along with
design, development, management, and evaluation) as a critical domain
(Seels & Richey, 1994). The field's generic design model is the acronym
"ADDIE," which stands for analysis, design, development, implementation,
and evaluation. Gagne, in his 1987 Instructional Technology: Foundations,
includes three articles on adoption topics (Gagne, 1987). Anglin's (1991)
Instructional Technology: Past, Present and Future discusses adoption from a
variety of points of view (Garland, 1991; Ely, 1991; Heinich, 1991).
Other recent design texts and anthologies, however, have generally not
included references to adoption. The popular handbook, Exercises In
Instructional Design by Seels and Glasgow (1990), briefly mentions
adoption/implementation in the book's epilogue, but not as a major
consideration. One of the most respected, authoritative, and widely used
texts in instructional design, The Systematic Design of Instruction by Dick and
Carey (1990), does not directly mention the concept of adoption or
implementation; they do emphasize formative evaluation, and advise that a
field test be conducted in a situation that "closely resembles" the environment
in which the technology will be used. The overall message from these sources
seems to be that, while it may be advisable to add on adoption activities, it is
not necessarily an integral part of instructional design.
Social Models of Adoption
Some contemporary researchers take the opposite position in their
assessment of the importance of adoption. In fact, some analyses of the
24


effectiveness of technology in education suggest that the adoption process
may be more important to its eventual use than any characteristic of the
technology itself (Kearsley & Lynch, 1994; Cuban, 1986; Office of Technology
Assessment, 1988).
For example, the Concerns-Based Adoption Model (CBAM), developed
at the University of Texas, includes a strong adoption component (Hall &
Hord, 1987). The model advises change agents to thoroughly "probe" the
potential users for their concerns about the innovation. Based on data about
these concerns, the change agent then plans an intervention which will
encourage adoption of the innovation (Mink, Esterhuysen, Mink, & Owen,
1993; Farquhar & Surry, 1994).
In a revision of his 1973 classic, The Change Agent's Guide, Havelock
presents a similar model in which he advises change agents to
Re-C-R-E-A-T-E: Renew, Care, Relate, Examine, Acquire, Try, and Extend
(Havelock & Zlotolow, 1995). The hallmark of each of these stages is an
awareness of the adopter's culture, and a strategy of actively building and
sustaining relationships with the potential adopters of the innovation.
The emerging constructivist commentaries on the IT process are likely
fertile ground for discussion of the adoption process. Constructivism
advocates recognizing that more and more of the responsibility and power for
learning resides with the learner (Wilson, Teslow, & Osman-Jouchoux, 1993).
The fit between the technology and the learner's world will naturally become
a focal point. The literature on constructivist design processes is relatively new,
25


and thus far has not directly addressed the adoption process (Duffy, Lowyck,
& Jonassen, 1993).
These examples of socially oriented models of adoption of technology
indicate a trend toward understanding the process of instructional design as
occurring in a system which includes the designers as well as the end users,
and emphasizing the responsibility of the designers to tend to issues of
adoption. This is a developmental awareness we would expect as part of the
growth of IT as a profession (Brint, 1994; Elliott, 1972). Praxis suggests that,
in the IT field, adoption is more frequently but not always considered a part
of the design process. In general, although adoption is seen as worthwhile
and a devoutly-to-be-hoped-for state, it is nevertheless a topic that is either
neglected, or has not yet become accepted as an integral part of the
instructional design process.
If It's A Good Idea, Why Aren't We Doing More to Promote Adoption?
The adoption equation has not added up for IT. If adoption is a good
idea and at least sometimes recognized as a specified part of the design
process, and if we consider lack of adoption as a problem, why aren't we
doing more to promote it? Some of the explanation lies in reasons of history,
culture, and economics.
Historical Reasons
Instructional technology was incubated in two major settings,
academic and military. In the academic world, many universities and their
26


extension services were originally founded as land grant institutions (Louis,
1994). Their express mission was to research and develop (R&D) new
technologies for the benefit of the community, often the farming community.
This R&D model was of the textbook-type, with an adoption strategy of mass
distribution, and with the responsibility to adopt or reject the technologies
remaining squarely in the hands of the end users. This R&D paradigm
continues to prevail in educational research (Scott, et al., 1987; Rogers, 1995).
Translating research into practice is a classic tension in education, and is not
limited to instructional technology (Burkman, 1987; Berman & McLaughlin,
1981; Ely, 1991; Fox & Saunders, 1989; Kaestle, 1993).
Instructional technology's other home has been the military. In his
history of the field, Saettler (1968, p. 47) comments that, "Instructional
technology came of age during World War II." The war caused a massive
logistical crisis for military training. The services suddenly needed to train
hundreds of thousands of personnel to perform thousands of tasks in a
relatively uniform manner under emergency conditions. This need fueled the
development of classic, systematic IT models based on the R&D tradition
(Shrock, 1991). There was literally no time or inclination to work with
individual user systems to encourage adoption of the instructional
technology; on the contrary, the adoption strategy was to order the uniform
use of the instructional technology through the military chain of command.
Given the desperate context of war, this strategy was successful.
Both of these traditions, academic and military, are based on the R&D
model. While this model is more compatible with the textbook-type and
27


classroom-type adoption process, the client-type adoption process depends
on interaction between the designer and the end users. When applied to the
client-type process, R&D has inherent elements that tend to inhibit success.
Chief among these inhibiting elements is the fact that the process is top down
and does not encourage interaction with or inclusion of the end user
(Havelock & Zlotolow, 1995; McLaughlin, 1987). As a result, researchers and
designers are often isolated from practitioners (Tikunoff, 1979). The basic
assumption of R&D is that technology can be directly transplanted from the
lab to the culture of the user (McCollum, 1994; Schein, 1992). In short, our
historical roots have handed down an R&D adoption tradition that does not
always serve us or our clients well.
Cultural Reasons
Heinich (1991) argues that cultural obstacles also contribute to IT's
limited success in adoption of instructional technology. He says that by its
nature, IT tends to be learner-centered, and therefore challenges traditional
assumptions about teaching as teacher-centered. Instructional designers, who
are products of the same teacher-centered academic system, are ambivalent
about promoting challenges to the foundations of our intellectual home.
Adoption of instructional technology implies a shift in the teacher's
traditional role, responsibility and power, and evokes what Heinich calls "a
disturbing sense of disloyalty to our colleagues in education" (1991, p. 76).
Related to the teacher-centered cultural obstacle is the tradition that
the SMEs (Subject Matter Experts), who are experts in content, are seen as the
28


center of the instructional design process. This theory-to-practice approach to
design starts with a body of knowledge, and designs instruction to help
learners absorb that identified body of knowledge (Wilson et al., 1996). It is
contrasted with an expertise-based model which starts with an analysis of what
constitutes expertise in the learners' community, then designs instruction that
helps the learner develop toward expertise. This approach turns the design
process on its head, and represents a departure for many of us whose
educational roots are in the academic and scientific worlds.
Newman (1989) gives an example of a specific cultural inhibitor in
higher education. He points out that the coin of the realm in academia is
primarily success in research and publication, and only lastly in the skill of
teaching. Given that we get what we reward, he asks, what is sufficient
motivation for a professor to spend time and energy to try more complicated
alternative teaching technologies, especially when the old ones have worked
to their satisfaction? Newman concludes that, unless universities change the
system of incentives (for instance, weighing teaching innovations as heavily
as publications during tenure evaluations), adoption of new technologies
of instruction may be limited.
Economic Reasons
Instructional technologists, ever practical, may simply say, "We don't
have the time or money to carry the designs through to adoption. Our
agreement is to design, develop, and deliver the instructional technology to
the client's doorstep, on time and on budget." As a result, instructional
29


designs can meet the agreed upon specifications but may in practice be
incomplete or even fatally flawed, lacking the fine tuning they need to be
used in the client's environment.
Anyone who has been in the position of managing a project will
recognize that issues of time and money are familiar and legitimate pressures
of running a successful venture. The dilemma is real. At the same time, we
realize that we ultimately neglect the adoption process at our professional
peril.
One of the great lessons of contemporary business is the importance of
forming on-going partnerships with clients in order to see work through to
successful completion (Senge, 1990; Kanter et al., 1992). All of us can recall
an experience as a client or customer, and how we feel about a business that
stands behind its work once we get it home (or send it off to the IRS!).
Contrast that feeling of partnership with the sinking feeling of working with a
business that does not offer that assurance of continued support.
Historical, cultural, and economic trends have led the field of IT to
concentrate on design and development at the expense of adoption. A variety
of factors in our evolution as a field have brought us to this point; in
Boulding's famous maxim, things got that way because they did (Boulding,
1970). But our history does not consign us to continue to underestimate the
importance of adoption, or limit our ability to improve our effectiveness.
30


How Can We Improve The Rate Of Adoption?
Although the adoption equation is complicated and dynamic, it is not
insoluble. There is a relevant body of knowledge on the process of adoption
and change that is available to be borrowed for application to IT.
Context is Everything
As Rogers (1995) reminds us, adoption of an innovation such as
technology is a complicated human process, carried out by people and
through people. Drawing on earlier studies of anthropology and systems,
Bateson (1972) recognized that all human processes are part of a context.
Clarke (1996 a, p. 7) defines context as "... the full range of information that
individuals use to make choices." The term context encompasses the entire
milieu, the physical, mental, social, psychological, and cultural
environment, both implicit and explicit, of which everyone is a part. At any
particular time, for any particular individual or group, context includes
everything,"... from the cultural and personal histories they bring with them
to the details of the moment..." (Clarke, 1996 a, p. 7). Although we often
speak of being in a context, we are not so much in it as we are an integral part
of the context, participating in its on-going construction (Clarke, 1996 b). And
because we are an inextricable part of our context, it is impossible to
understand events without understanding the context. Consequently,
understanding context becomes an indispensable part of understanding an
adoption effort.
31


Moreover, each of us is a part of multiple contexts (Clarke, 1996 a). We
are each a part of the context as individuals, as a member of our intimate
social group, as a part of an extended group of acquaintances, and as a part of
a larger societal group. Each context is its own unique physical, mental,
social, and cultural environment. In this study, for example, forecasters are
seen as individual employees, as a part of their office work group, as a part of
their larger national organization, and as a part of the international forecast
profession. Each of these levels of context can be profitably examined to
analyze the fit, or compatibility (Rogers, 1995) between the context and the
modules.
Contexts are rich and unique, and we can expect that technologies like
modules will be compatible in some ways and not in others. A good fit with
the context is an element that will support successful adoption, and a misfit is
an element that will inhibit successful adoption. A successful adoption
would be characterized by an overall better fit than misfit, as perceived by
those who are a part of the context (Rogers, 1995; Havelock & Zlotolow, 1995).
Change
Learning is change (Bateson, 1972; Mink et al., 1993). When we invite
people to adopt an instructional product or process, we are, in essence, asking
them to change. In using the technology, individuals and their collective
systems change (to one degree or another) their behaviors, their knowledge
and skill level, and their attitudes or beliefs. We are asking people to examine
or stop what they were doing, start something new or amend what they are
doing, and perhaps in the process look at themselves and their learning in a
different way. In essence, the very nature of our work in IT makes us agents of
change. This suggests that in order to do our job well, it would serve us to
become familiar with research and theory about the process of change.
Change is so pervasive in human experience that it is a theme
discussed in all fields from philosophy and theology, to art, science, and
politics. In order to examine different concepts of how change occurs, I
consider five models of change from five divergent fields (Table 2.2). These
contemporary authors stand out as being particularly relevant for
understanding the process of change.
Prigogine. This Nobel Prize-winning physical chemist formulated the Theory
of Dissipative Structures for complex adaptive systems, a term which includes
humans and human systems. The theory, closely associated with chaos theory,
describes the cyclical nature of order and chaos in these systems (Prigogine,
1984; Gleick, 1987; Wheatley, 1992).
Kuhn. Kuhn's The Structure of Scientific Revolutions (1972) outlined the process
by which the world of science changes its fundamental thinking over time from
one paradigm to another.
Lewin. Kurt Lewin is recognized as one of the founders of the
discipline of social psychology. His description of the process of change which
groups experience (unfreeze, change, refreeze) is considered a classic model
(Lewin, 1951; Kanter et al., 1992).


Table 2.2. Comparison of Models of the Change Process (Adapted from Levy and Merry, 1986).
Stages (O'Connor, 1992) | Dissipative structures (Prigogine, 1984) | Scientific revolutions (Kuhn, 1972) | Social change (Lewin, 1951) | Innovation in organizations (Rogers, 1983) | Individual change (Kübler-Ross, 1969)
Summer | Fluctuations within defined boundaries | Normal science | Steady state | Pre-innovation | Pre-change
Fall | Fluctuations past a threshold | Growth of anomalies | Decline and procrastination | Agenda setting/matching | Denial
Winter | Crisis | Revolution | Chaos | Redefining and Restructuring | Cycle of Anger/Bargaining/Depression
Spring | Jump to a higher order | Normalization | Back to basics | Clarifying | Acceptance
Summer | Equilibrium | New paradigm | Revitalization | Routinizing | Incorporation


Rogers. As mentioned earlier, Rogers' work in diffusion of innovations
is still the foundational work for current research on the adoption of
innovations for individuals as well as organizations (Rogers, 1983; 1995).
Kübler-Ross. Kübler-Ross (1969) formulated the stages of grief and
loss in relation to the ultimate change, death. Her model is credited with
fomenting a revolution in American health care in which death and dying
began to be widely and openly discussed for the first time.
Comparison of the Models
A comparison of these cross-disciplinary change models reveals
striking similarities. The first similarity is the fundamental
assumption that people live and work in the context of open systems
(Prigogine, 1984; Wheatley, 1992; Lewin, 1951). Open systems are susceptible
to influences from outside and can, in turn, influence systems outside of their
own.
A second similarity in the models is the belief that change is an
inevitable part of the life of a system (Kübler-Ross, 1969; Kuhn, 1972; Gleick,
1987). Permanent stasis is not an option. Because change is inevitable, the
dynamics of change can be recognized and planned for, but not controlled, in
the life of the system (Kanter et al., 1992).
These models also portray change as precipitated by an event or a
series of events, which can range from brief and abrupt to extended and
evolutionary (Kuhn, 1972; Rogers, 1995; Prigogine, 1984). As change occurs,


the system is called upon to respond to the event or events, and its response
directly affects the future of the system (Wheatley, 1992; Gleick, 1987).
One of the most dramatic aspects of the change process according to
these diverse models is the period of crisis, chaos, or revolution (Prigogine,
1984; Kuhn, 1972). This predictable stage is often characterized by ambiguity
and re-organization of the system. This is naturally an unsettling time, but
these researchers contend that, for human systems, this stage of crisis is
literally an incubation period during which new adaptive behaviors and
attitudes can be developed. Consequently, this period is a critical one during
which the system can potentially become revitalized and more suited to its
changing conditions. The temptation is to hurry through this period to
resume "normality." But short-circuiting this stage tends to either prolong the
change cycle or compromise the system's ability to adapt to the change in the
long run (Kübler-Ross, 1969; Gleick, 1987). In any case, it is not inevitable that
the system complete the cycle; a system can remain at any of the stages.
How Do Change Models Apply to IT?
If learning is change, then instructional technologies are invitations to
change. When instructional designers engage with clients in designing
instructional systems (see Figure 2), then the designers become a part of the
system. Further, if instructional designers are inviting this change, and
systems in change go through an identifiable cyclical process, then it becomes
part of the work of instructional designers/ change agents to be aware of the


cycle and to encourage the system (which includes them) to move through
successfully to the stage of incorporation.
For an illustration, we can return to the example of COMET. By
offering its computer-based instructional modules to forecasters, COMET has
essentially asked the forecasters, and the systems of which they are a part, to
change and adapt to the introduction of the technology. The forecasters are
being invited by COMET to use the soft technology of the meteorology lesson
and the hard technology required to run it; to adapt their training and
education activities and daily schedules to make time for the modules; to
adjust their thinking to conceptualize how the modules are going to help
them; to accept the ambiguity and confusion inherent in trying any new and
complex technology; and to do this all in the context of their already rapidly
changing systems. What seems like a simple invitation to use technology can
be massively complex and daunting when seen from the point of view of the
end users of the technology. I believe that the more we understand how
change works, the more cognizant we can become about what it is we are
asking, and the more helpful we can be to the people whom we are inviting to
change.
Successful Change
Organizational development writers have used systems theory and
models of change as a lens through which to view organizations (Wheatley,
1992; Rogers, 1995; Havelock & Zlotolow, 1995; Senge, 1990; Reigeluth, 1994;
Schein, 1992). From this sharpened perspective, certain elements emerge


which tend to be present in successful change efforts in organizations.
Interestingly, though these theorists and researchers represent divergent
fields, they seem to repeatedly emphasize five specific factors associated with
successful change:
Fit with the Culture of the End Users. Successful change fits with the
culture in which it is being promoted. Every human system has a culture,
made up of shared factors such as norms, values, beliefs, customs,
assumptions, language, artifacts, and self-image (Robbins, 1993). Schein
(1992), a leading scholar of organizational culture, describes culture as
A pattern of shared basic assumptions that the
group learned as it solved its problems of
external adaptation and internal integration,
that has worked well enough to be considered
valid and, therefore to be taught to new
members as the correct way to perceive, think,
and feel in relation to those problems. (p. 12)
The group may or may not have fully articulated these shared assumptions,
but they are powerful determinants of behavior nevertheless. A change that
is incompatible with the cultural norms is usually not adopted.
Adequate Support. Successful change requires a reasonable amount of
support. Systems tend to change more readily when there is enough time,
money, space, and other resources so that the change can be adopted successfully
(Havelock & Zlotolow, 1995). What is considered adequate depends on the
perceptions of the adopters. What may seem more than adequate to the
change agent may not seem like nearly enough for the adopter, and what may


seem to be woefully inadequate from the outside may be plenty for motivated
adopters.
Clear Vision or Direction. Successful change is promoted by a clear
vision of what change is sought and why the change is beneficial or
necessary. Systems that adapt well to change seem to be those that have
arrived at a clear sense of purpose and core values (Wheatley, 1992). Focus
on this clear purpose, and articulation of how the change supports the
purpose, allows adopters to more quickly understand and accept a proposed
innovation.
End User Participation. Successful change includes significant
participation of the people being asked to change. In recent years, classic
prescriptions for change have been criticized (Donohew & Springer, 1980).
Born in the R&D tradition, they are generally top-down efforts, in which the
change agents either conceive of or favor an innovation, and spend their
effort trying to persuade the end users to adopt it. More contemporary
models of the adoption process recognize the effectiveness of starting with
the end users, discovering what they need and want, and spending effort helping
them achieve their goals (Reigeluth, 1994; Havelock & Zlotolow, 1995).
Strong Channels of Communication. Successful change is promoted
through strong channels of communication. These channels, especially
interpersonal networks, provide information to the members of the system.
Depending on what information is provided, and the reaction of the members
of the system to that information, the channels can help them to become more
familiar and comfortable with the change, or encourage them to reject the
change (Rogers, 1995). This may be especially important during the chaotic
stage of re-organization inherent in the change process (Wheatley, 1992). The
channels can be internal within the system, or external between systems.
Summary
Models from disciplines as divergent as physics, sociology, psychology,
philosophy, and organizational development have identified similar
characteristics of the process of change (Table 2.2). Change is seen as a
universal phenomenon that occurs as an inevitable feature of open, living
systems. In human systems, certain conditions (fit with the culture of the end
users, adequate support, clear vision or direction about the change, strong
channels of communication, participation of the end users) are thought to be
helpful in the process of successful change. Do these factors support the
adoption of technologies of instruction? In what way? What is their
relative importance in an adoption effort? Chapter three outlines the study
which pursued these questions.


CHAPTER 3
INTRODUCTION
[A case study] is intended to provide
at least one anchor that steadies the ship
of generalization until more anchors
can be fixed for eventual boarding.
Walton (1992, p. 122)
Chapter one provided background on the professional growth of IT as
a field, and on IT's current dilemma with the adoption of technologies, a
dilemma exemplified by COMET's problems with the adoption of their
computer-based distance learning program. Chapter two reviewed the
relevant literature on adoption of the technologies of instruction. It provided
definitions of some critical adoption terms, established a link between
adoption and change, and compared models of change from a variety of
disciplines in order to distill elements that tend to be present in successful
change efforts.
This chapter describes the methodology for this study. It begins with
a rationale for qualitative inquiry and the case study method. Next, it
outlines details of the research design, including rationale for the site


selection, the scope of the study, research questions, sources of data and data
collection strategies, and the method of data analysis. Chapter three ends
with a discussion of my biases relevant to this study.
Qualitative Inquiry
Qualitative inquiry is a style of research based on the premise that
reality is complex, socially constructed, and heavily dependent on the context
in which it occurs (Krathwohl, 1993; Yin, 1994; Tesch, 1990). Qualitative
studies focus on "viewing experiences from the perspective of those involved"
(Mellon, 1990, p. 3). Unlike quantitative inquiry that seeks to isolate and
control some of the variables in the situation under study, qualitative inquiry
concentrates on describing and understanding the situation as it is. Reality is
seen as complex, and qualitative inquiry "... seeks to portray the complexity"
(Krathwohl, 1993, p. 538). The entire point of the research is to discover and
describe patterns inherent in the situation (Tesch, 1990).
Because adoption is a critical problem for IT, this study was designed
to examine the adoption process undertaken by COMET. Although the study
is grounded in theory about how change models can help explain and direct
the process, it was intended to be an open exploration of a complex system
"... in living color" (Krathwohl, 1993, p. 348), making it an ideal candidate for
qualitative inquiry (Mellon, 1990; Guba & Lincoln, 1981).


The Case Study Method
Within the qualitative tradition, the case study method was
particularly suited to this proposal. Yin's (1994) analysis of when to use a
case study provides a good description of the circumstances of this study. He
argues that the case study methodology is most advantageous when
a how or why question is being asked in a
contemporary set of events, over which the
investigator has little or no control. [The researcher is]
trying to illuminate a decision or set of decisions; why
they were taken, how they were implemented, and
with what result. (Yin, 1994, p. 3)
One of the strengths of the case study is that it acknowledges the
context of the process under study, which is appropriate to the study of
adoption. In his analysis of diffusion research from the 1940's to the 1980's,
Rogers (1983) comments that correlation studies of the past have yielded
weak results because they isolate factors too much, and ignore the
complicated cultural factors in the adoption process. He strongly advocates a
qualitative approach
in which interview, archival, and other data are
gathered about the innovation-decision progress in an
organization. Such an in-depth approach means that
only a much smaller sample of organizations can be
studied with the same research resources. But in return
such an in-depth approach provides more reliable
data and permits greater insight in tracing the nature
of the innovative process in each organization. This
type of research design follows a process approach


rather than a variance approach. The researcher
learns more about less, rather than less about more.
Given our present rather limited understanding of
innovation in organizations, the in-depth approach of
process research is more appropriate. (Rogers, 1983,
p. 358)
The intent of this study is to provide a detailed and close-up look at an
adoption process as it occurred. I concur with Heinich (1991, p. 76), who
suggests that "... we spend too much time telling practitioners what they
should be doing, and not enough in finding out what the conditions are that
shape their decision."
This Study
This is a case study of a process of adoption. Rather than an in-depth
analysis of an individual forecast office or of COMET, I tried to provide a rich
description of the process itself. The research focus is on the constraints faced
by COMET and the forecast community, the actions and decisions that were
made in reaction to those constraints, and the current results of the actions
and decisions.
Specifically, this is a case study of the adoption process of COMET's
distance learning technology, the Operational Forecaster's Multimedia Library.
This library consists of nine computer-based, interactive video disk modules.
Each module usually takes between five and twenty hours to complete, and
addresses a broad topic in weather forecasting (marine meteorology, heavy
precipitation and flash flooding, cyclones, etc.).


Selection of COMET as a Site
COMET was a particularly appropriate site for this study because it
met several important criteria:
COMET was involved in client-type adoption. COMET is a classic
example of client-type adoption concerns (Figure 2.1). The organization is
charged with developing instructional technologies and technological
applications for an identified and finite client group, the weather forecasters
of the National Weather Service, the Air Force, and the Navy.
COMET had a proven instructional product. COMET employs a team
of highly competent and nationally regarded designers, technologists, and
subject matter experts. They have a variety of first-class technologies
available, as well as the budget and logistical ability to deliver the technology.
Non-use of the technology is not due to the concern that it is an inferior
product.
COMET had a current adoption concern. The COMET staff had
identified an adoption concern, prompted by changing conditions among
their users and reported field data about the level of use of the modules. This
situation was current.
COMET had a strong interest in research. COMET commissioned an
evaluation of the modules, and committed considerable money and staff time
to the evaluation effort. Their system was accessible and encouraging of this
research study.


COMET was accommodating. Because of their commitment to the
evaluation, COMET was also committed to providing relevant data and
access to the people needed in order to complete the study.
Scope of the COMET Evaluation
Besides distance education, COMET offers several other programs for
meteorologists, including research projects, scholarships, and residential
training courses. The evaluation was limited to the distance education
program. For COMET's purposes, the evaluation examined the computer
modules on four levels. These levels are described in this excerpt from the
proposal submitted to COMET prior to the evaluation (Wilson & Lowry, 1995,
p. 2):
1. Module design. What aspects of the
modules' design are most and least effective? Module
design includes issues of user friendliness, content,
instructional strategies, motivation to use, level of
challenge, user interface, and so on.
2. Implementation. In what ways is the
technology being successfully implemented? The
design of a module can be excellent, while other
factors may get in the way of its use in the office. The
technology must be physically accessible; use of the
modules must fit into forecasters' activities and job
expectations; and users need to have the time,
resources, and encouragement to use the technology
effectively. Because local decision makers (e.g.,
managers, SOOs, and CBL Administrators) are the
ones who provide the time, resources and
encouragement, they must see the modules as worth


the effort of the staff. The technology must support
the larger goals, direction, budget and staffing of the
office, and be seen as a good use of staff time and
energy.
3. Integration with the job. In order to be
used on a regular basis, the technology needs to be
perceived by forecasters as helping them do a better
job. We will look for evidence that forecasters are
incorporating module-based concepts in their day-to-
day forecasting.
4. Other distance-learning supports. What
other products and services would be helpful to
forecasters? At this level of analysis, we will probe
forecasters and ask them to brainstorm about the
kinds of services and supports they would appreciate
from a distance-learning agency. Are there products
or services that might complement the current library
of modules in meeting forecasters' on-the-job
performance needs?
Scope of This Study
While the overall evaluation consisted of an analysis of all four levels,
this study focused only on item number two, implementation, which refers to
the process of the adoption of the modules. Because I had access to the entire
evaluation, I had the benefit of drawing on data from the other levels that
may also have contributed to an understanding of the adoption process. However,
the study was limited to the module adoption process itself.


Research Questions
As discussed in chapter two, the literature on adoption and change
suggests several elements that tend to be present in successful change efforts:
fit with the culture of the end users, adequate support, clear vision or
direction about the change, strong channels of communication, significant
participation of the end users (Kanter et al., 1992; Kuhn, 1972; Prigogine,
1984; Rogers, 1995; Reiser & Salisbury, 1991; Senge, 1990).
My research questions were based on those elements from the
literature. Yin (1994, p. 13) argues that case study inquiry "... benefits from
the prior development of theoretical propositions to guide data collection and
analysis." But Rogers (1983) cautions against intellectual blinders that screen
out data. Heeding this warning, I also looked for discrepancies with the
model and other surprising factors.
The study's primary research question was, What factors contribute to the
successful adoption of technologies of instruction? Based on this question, I
explored the specific questions that follow:
1. How is adoption of the technology affected by its fit with the culture
of the end users? What, in general, is the culture of the forecaster and the
forecast community? What, in general, is the culture of COMET? In what
ways is there a fit between the office cultures and the module technology? In
what ways is there a misfit? What is the impact of a fit, or of a misfit, between
the office cultures and the technology?


2. What kinds of support are important in adoption? What is the role
of adequate time to use the modules? What is the role of logistical
accessibility of the modules? What other kinds of support are important to
successful adoption of the modules?
3. What role do channels of communication play in the adoption
process? What communication channels are important to COMET? What
communication channels are established between COMET and the technology
end user/forecasters? What communication channels are established within
end user offices?
4. What is the impact of participation of the end users? What if any
role is played by the local decision makers (SOOs, Office Managers, CBL
Administrators, etc.) in the design, development and adoption of the
technology? What if any role is played by the forecasters in the design,
development and adoption of the technology?
5. What is the role of vision or direction in the adoption process?
What is the vision of the forecast community in regard to this work? What is
the vision of COMET in regard to the forecast community? Are these visions
or directions in alignment? What is their impact?
Data Collection
Even though qualitative inquiry operates on a different paradigm than
does quantitative inquiry, it still insists that rigor and usefulness be evaluated
on some agreed-upon criteria. Guba and Lincoln (1981) use the overall


concept of trustworthiness to describe the traditional research concerns of
validity and reliability.
Data sources and data collection strategies are the chief sources of
trustworthiness in qualitative research. In order to address the issue of
trustworthiness, this study provided triangulation of both data sources and
data collection strategies. Together, these multiple data points were intended
to form a web that supported the goal of conducting trustworthy and useful
research. Sources of data and data collection strategies described below,
designed to safeguard trustworthiness, were based on suggestions from Yin
(1994), Krathwohl (1993), and Mellon (1990).
Six Sources of Data
The evaluation took place from July, 1995 through February, 1996 (the
evaluation timeline is summarized in Table 3.1). Data were collected from six
different groups of individuals with six different relationships toward the
adoption process under study (adapted from Wilson & Lowry, 1995, p. 3):
COMET instructional designers and
meteorologists. Designers conduct interviews with
subject matter experts, develop objectives, select
learning strategies, and develop the modules.
Meteorologists serve as subject matter experts.
COMET administrators. These are the people
who, together with outside advisors, set policy and
direction for the agency.


Table 3.1. Evaluation Time Line (July, 1995 to February, 1996)
July '95 | Planning; Data Collection | Approval of draft evaluation plan; development of interview instruments; final selection of sites to visit; completion of final version of evaluation plan; interviews and focus group session at Managers' Courses in Boulder
August '95 | Data Collection | Development of written instrument and plan; progress reports as needed
September '95 | Data Collection | Final draft of written instrument and plan; site visits; progress updates as needed
October '95 | Data Collection; Progress Report | Site visits; questionnaires distributed; preliminary data collection; presentation of Progress Report to COMET; compilation of data
November/December '95 | Provisional Report Development | Compilation and analysis of data; development of Provisional Report
January '96 | Provisional Report | Presentation of Provisional Report to COMET; compilation and analysis of data from additional AWS questionnaires
February '96 | Final Report | Presentation of Final Report to COMET


Managers of forecasting offices. Every weather
forecasting office has someone generally in charge of
operations. These managers may or may not be
meteorologists, but may be responsible for making
decisions about the use of the modules.
Science and Operations Officers (SOOs). The
SOO is responsible for providing scientific expertise,
training and support for weather forecasters. COMET
modules are intended to be a resource for the SOO.
CBL (Computer-Based Learning)
Administrators. The CBL administrator is the
individual with the specific responsibility of
overseeing the use of COMET modules within the
office. In some cases, the CBL administrator may be
the office manager or SOO; in other cases, a separate
individual may be assigned this role.
Forecasters. The forecaster is commonly
thought of as the end user of the COMET modules.
Forecasters in the office complete the modules and
apply them to their daily work.
Five Data Collection Strategies
To obtain reliable findings, we triangulated our data collection
through the use of five strategies. The data collection design is illustrated in
Figure 3.2:


Figure 3.2. Five Data Collection Strategies
Focus Groups and Interviews
During the months of July and August, 1995, we started by conducting
three focus groups of forecast office managers visiting the COMET facility in
Boulder for resident training courses, and one focus group at an AWS
training in Colorado Springs. A member of our team and at least one COMET
staff member attended each of these sessions to facilitate discussion on the
modules and record the content of the discussions. The format of these
sessions was a brief introduction of our purpose, then a group discussion on
several questions regarding modules and their use, especially in regard to the
five factors associated with successful change efforts. In this setting, we tried


out some of the items which we intended to include in the written
questionnaire. Some of the questions were, "What would you say is the most
helpful thing about the modules? What is the least helpful thing about the
modules?" and "How do forecasters get trained in your office? What has been
the best training you've had?" and "Putting the modules aside, think about
your office staff. What other kinds of products or services can you think of
that would help them do their job?" (the focus group protocol is in the
Appendix, p. 227).
Interviews, both formal interviews and spontaneous conversations,
with any of the sources enumerated above, both at COMET and in the
forecasting offices, enabled us to probe for rich detail on the adoption process.
I collected written field notes and minutes of meetings.
Questionnaire
As we were piloting items with the forecasters in focus groups and
interviews, we began to develop the written questionnaire, another major
source of data. The instrument was developed in a collaborative effort with a
committee of COMET staff and our evaluation team (sample copy of the
instrument is in the Appendix, p. 230). In addition, the instruments were
reviewed and approved by the NWS, AWS, and NMOC. These reviews
provided us with helpful clarifying language and changes in format, which
we included in the final draft. We then developed three final editions of the
instrument, each with slightly different terms appropriate to NWS, AWS, and


NMOC. We also included several questions specific to hardware/software
administrators and training managers (see Appendix pp. 229-242 for a sample
copy of the cover letter and questionnaires).
During September and October, 1995, we distributed approximately
1,000 questionnaires from the COMET office to a selection of National
Weather Service (NWS), Air Force (AWS), and Navy (NMOC) forecast offices
(Table 3.3). The NWS questionnaires were sent to offices selected from a list
provided by the COMET Program of NWS offices which have access to the
COMET modules. We first stratified the list by region, alphabetized the
offices within each region, including an equal number of WSFO's (larger
offices) and WSO's (smaller offices) in the list. We then selected a percentage
of offices at random from each regional list. The NMOC locations, including
international locations, were selected with the same method. From a list
provided by the Navy, we alphabetized the locations (the Navy list was not
categorized by region), and selected a percentage of offices at random.
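To make the selection procedure concrete, a minimal sketch in Python follows. It assumes a mapping from region to office names and a single sampling fraction; the region names, office identifiers, and fraction shown are illustrative placeholders rather than the actual lists and percentages used in the evaluation.

    import random

    def select_offices(offices_by_region, fraction=0.25, seed=None):
        """Stratify offices by region, alphabetize within each region,
        and draw a random percentage of offices from each stratum."""
        rng = random.Random(seed)
        selected = []
        for region, offices in offices_by_region.items():
            stratum = sorted(offices)                   # alphabetize within the region
            k = max(1, round(len(stratum) * fraction))  # same percentage per stratum
            selected.extend(rng.sample(stratum, k))
        return selected

    # Hypothetical example: the regions and office names are placeholders.
    nws_offices = {
        "Eastern": ["Sterling WSFO", "Wakefield WSFO", "Albany WSO"],
        "Western": ["Seattle WSFO", "Portland WSO", "Medford WSO"],
    }
    print(select_offices(nws_offices, fraction=0.5, seed=1))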
The packets of questionnaires, cover letters, and return envelopes were
then sent to forecast offices. Because we were not on hand to administer the
instruments, we have no definitive information about how they were
distributed to the forecasters in each individual office. The questionnaires
were sent directly to the selected offices, and returned directly from the
forecasters to the University of Colorado.
We received 394 returned questionnaires, a 39% overall response rate.
Of the 394 returned questionnaires, 258 were returned from NWS (a 70%


Table 3.3. Questionnaire Distribution and Return
 | NATIONAL WEATHER SERVICE | AIR FORCE | NAVY
Selected sites | 37 | Approx. 60 (at their request, AWS sent the surveys out through their distribution and collection system) | 31
Packet contents | 1 Cover Letter, 8 Forecaster Surveys, 1 Training Coordinator Survey, 1 PDW Administrator Survey, 10 business-size envelopes, 1 large return envelope | 1 Cover Letter, 5 Forecaster Surveys, 1 Training Coordinator Survey, 1 IVD Administrator Survey | 1 Cover Letter, 4-8 Forecaster Surveys, 1 Training Coordinator/CANOE Administrator Survey, 4-8 business-size envelopes, 1 large return envelope
Packets returned to | University of Colorado at Denver | Chief, Requirements Division, Scott AFB | University of Colorado at Denver
Surveys sent | 37 packets containing 370 surveys, sent 9/95 | Approximately 420 surveys, sent 10/95 | 31 packets containing 217 surveys, sent 9/95
Contact person | SOO in each office | Chief, Requirements Division, Scott AFB | NMOC
Surveys returned | 258 surveys returned / 70% return rate | 92 surveys returned / 22% return rate | 44 surveys returned / 20% return rate


response rate), 92 were returned from AWS (a 22% response rate), and 44
were returned from NMOC (a 20% response rate). Because of this relatively
low return rate from DOD offices, we were cautious about drawing
conclusions from these data.
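For reference, the return rates quoted above follow directly from the survey counts in Table 3.3. A minimal arithmetic check is sketched below, using the approximate Air Force total of 420 surveys.

    # Surveys sent and returned, taken from Table 3.3 (the AWS total is approximate).
    sent     = {"NWS": 370, "AWS": 420, "NMOC": 217}
    returned = {"NWS": 258, "AWS": 92,  "NMOC": 44}

    for agency in sent:
        rate = returned[agency] / sent[agency]
        print(f"{agency}: {returned[agency]}/{sent[agency]} = {rate:.0%}")

    overall = sum(returned.values()) / sum(sent.values())
    print(f"Overall: {sum(returned.values())}/{sum(sent.values())} = {overall:.0%}")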
The questionnaires were designed to provide both quantitative data
from questions with numerical answers, and qualitative data from open-
ended questions and invitations to comment. Many of the respondents took
advantage of this invitation, and provided rich and plentiful comments on the
modules and their use.
The quantitative questions asked for numerical ratings of modules and
features, average time spent on modules, rating of training strategies, and so
forth; we analyzed the responses using descriptive statistics. For the
qualitative data, we performed a content analysis of the open-ended
questions, looking for an indication of the issues and ideas which the
forecasters chose to bring up, how they chose to articulate those issues and
ideas, and the frequency with which they mentioned them.
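As a rough illustration of this two-track analysis, the sketch below computes simple descriptive statistics for a numeric rating item and tallies keyword frequencies in open-ended comments. The ratings, comments, and keyword list are invented for illustration; they are not items or data from the actual instrument.

    from statistics import mean, median, stdev
    from collections import Counter
    import re

    # Hypothetical responses to a 1-5 rating item (not actual evaluation data).
    ratings = [4, 5, 3, 4, 2, 5, 4, 3]
    print(f"n={len(ratings)}, mean={mean(ratings):.2f}, "
          f"median={median(ratings)}, sd={stdev(ratings):.2f}")

    # Hypothetical open-ended comments and a keyword list chosen for illustration.
    comments = [
        "No time to finish a module during a shift.",
        "The case studies were excellent, but the hardware is in the break room.",
        "More time and encouragement from the SOO would help.",
    ]
    keywords = ["time", "hardware", "soo", "module"]
    counts = Counter()
    for text in comments:
        words = re.findall(r"[a-z']+", text.lower())
        for kw in keywords:
            counts[kw] += words.count(kw)
    print(counts.most_common())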
Site Visits
After sending out the questionnaires, and before we received any
returns, our evaluation team visited 17 forecast offices nationwide in
September of 1995 (detailed in Table 3.4). The goal of these visits was to
observe the environments in which the COMET modules were being
implemented, and to interview forecasters and administrators concerning


Table 3.4. Evaluation site visits, conducted from September 11, 1995 to September 30, 1995
LOCATION | AIR FORCE | NAVY | NATIONAL WEATHER SERVICE
Washington, D.C. | Langley AFB, Andrews AFB | Norfolk Center | Sterling WSFO, Wakefield WSFO, NMC
Washington State | (none) | Whidbey Island Detachment | Seattle WSFO
Alaska | Elmendorf AFB | (none) | Anchorage WSFO, Alaskan Regional Office
Florida/Georgia | Moody AFB, Patrick AFB | Jacksonville Naval Facility | Jacksonville WSFO, Melbourne WSFO
Colorado | (none) | (none) | Denver WSFO


their use and perceptions of the modules. In the interest of obtaining the
most blunt and unvarnished responses from the forecasters during these
visits, the COMET Program and our team jointly decided that we would
make these visits alone, with each evaluation team member assigned a
selection of site visits.
The sites we visited were selected from a list provided by the COMET
Program of 29 NWS, AWS, and NMOC forecast offices. The final decision
about which offices to visit was reached collaboratively with the COMET
Program, and was based on considerations of regional and agency
representativeness, as well as time, office availability, and team members'
availability. We visited offices in the east, west, midwest, and southern
regions of the country; for efficiency, we selected locations that had forecast
offices from different agencies clustered in close proximity. Other than these
efforts at representativeness, we did not attempt to randomize the selection of
the site visit locations.
Interviews at the site visits followed a topic outline (the interview
protocol is in the Appendix p. 225). The questions we asked were parallel to
the items on the questionnaire: did they use the modules, what did they
think of them, what kind of training did they find most useful? This outline
provided a structure for the interviews with considerable room to improvise
questions and follow leads that came up in conversations. Some of these
confidential interviews were of a group in the forecast office, some were of
individuals. Most of our visits were conducted during the day shifts in the


forecast offices, and lasted approximately three to four hours each. Following
these visits, we reconvened in Denver, debriefed as a team, and compiled
written notes for our records.
In addition to the site visits conducted by our evaluation team, the
COMET staff continued to perform their customary visits to forecast offices
during the period from July, 1995 through October, 1995. They provided us
with copies of their written travel reports, which we used to augment the data
from our site visits.
Extant Data
Extant data about the distance learning program, including reports of
module use, previous evaluation reports, and site visit notes by COMET staff
also provided data about module adoption. During the course of the
evaluation, the COMET Program provided files of background data which
proved very helpful in formulating the context and format of the evaluation.
This extant data included an evaluation survey done by the University of
Georgia in 1993 (Surrey & Okey, 1993), a draft of the COMET Program's five
year plan, a summary of past activities, proposals for future activities, and the
aforementioned staff travel reports.


Data Analysis
A critical part of any research is analysis, the process of "making sense
out of one's data" (Merriam, 1988). Wolcott (1990) calls the process a dialectic,
and quotes a description by Agar (1980, p. 9):
You learn something ("collect some data"), then you
try to make some sense out of it ("analysis"), then
you go back and see if the interpretation makes sense
in light of new experience ("collect more data"), then
you refine your interpretation ("more analysis"), and
so on.
In the spirit of introducing some structure, but not too much, I
followed a process cited by several qualitative researchers (Mellon, 1990;
Krathwohl, 1993; Yin, 1994):
Step 1. I reviewed data as it came in from all sources, generating
written notes or summaries as soon as possible, usually within 48 hours. This
data came from the variety of sources cited above. I also kept a reflective
notebook to capture my impressions about the process and its emerging
trends.
Step 2. Each day, I reviewed notes and summaries for common words
and phrases, "... as well as for surprising, counterintuitive, and unexpected
material" (Krathwohl, 1993, p. 338). I used two technologies for recording
these notes: I entered site visit notes into Folio VIEWS, organized by
location and customer group; meeting and interview notes I retained in


folders filed by the source; other extant data I retained in folders by the type
of document (travel report, planning document, etc.).
Step 3. At intervals, I looked at the data for patterns and main themes,
and organized the patterns into codes. Foster (1969), author of some of the
earlier qualitative studies of the change process, suggests beginning by
identifying three categories broad enough to encompass all the data: first, the
people in the group targeted for change (in this case, COMET's customers in
forecasting offices); second, the innovating organization that wants to
produce a change (COMET); and third, the arenas in which the groups
interact (in this case, forecast offices and meetings of forecasters at which
COMET is a presenter or convener).
Inside those categories, I looked for references to any of the five
identified factors from the literature (fit with the culture, adequate support,
channels of communication, end user participation, and direction or vision),
and highlighted those references. I also highlighted references that didn't fit
into the factors.
For example, during one COMET meeting at which we were
discussing preliminary results from the questionnaires, it was clear that
module use was very low. One of the staff accused the respondents of lying
about their lack of module completion. Later, I decided to note this reaction
both in the category of end user participation and stages of loss and change
(Table 2.2). Over time, we did hear more angry reactions from the COMET
staff which I believe were also typical of stages in the change process.
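One way to picture this coding scheme is as a nested tally of excerpts by category and factor. The sketch below follows the three categories and five factors described above (plus an "other" bin); the sample excerpts and the data structure itself are illustrative assumptions, not the tools actually used (notes were kept in Folio VIEWS and paper folders).

    from collections import defaultdict

    CATEGORIES = ["target group (forecast offices)",
                  "innovating organization (COMET)",
                  "arenas of interaction"]
    FACTORS = ["fit with the culture", "adequate support",
               "channels of communication", "end user participation",
               "direction or vision", "other/unexpected"]

    # codebook[category][factor] -> list of coded excerpts (examples are invented).
    codebook = defaultdict(lambda: defaultdict(list))

    def code(category, factor, excerpt):
        assert category in CATEGORIES and factor in FACTORS
        codebook[category][factor].append(excerpt)

    code("target group (forecast offices)", "adequate support",
         "Forecaster reports no quiet time on shift to finish a module.")
    code("innovating organization (COMET)", "end user participation",
         "Staff reaction to low reported module completion.")

    for category, factors in codebook.items():
        for factor, excerpts in factors.items():
            print(f"{category} / {factor}: {len(excerpts)} excerpt(s)")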


Step 4. From the beginning, and up until the end, I developed working
hypotheses which changed and evolved. I discussed these hypotheses
frequently with the COMET staff and the evaluation team, especially Brent
Wilson. In addition, we continually evaluated the research design for
possible revision in light of emerging information and needs.
Step 5. The final evaluation report was submitted to COMET in
February, 1996. The present study was written from February through
August, 1996. The findings, reported in chapter four, basically consist of
stories of forecasting offices and how they interact with COMET and the
COMET modules, told through vignettes and quotes. These stories are
interspersed with data from all other sources for comparison, support, or
contradiction, and are followed by my observations of the emerging themes
and working hypotheses.
Assurances and Confidentiality
In order to secure and clarify COMET's permission to use the
summative evaluation data in this study, I received letters of agreement from
the Director of COMET and the Director of the Distance Education Program
(Appendix pp. 245-246). The UCD Human Research Committee, which also
has jurisdiction over this research, approved the study as exempt.


Limitations of the Study
It is the nature of qualitative research to learn "more about less, rather
than less about more" (Rogers, 1983, p. 358). Although this is a case study
rich in data, it will become more valuable when other case studies add to the
aggregated data about successful adoption of technologies of instruction.
The data collection itself was limited in several ways. Resources and
practicality limited the amount of time available to spend in forecast offices
and at COMET. With additional time, we could have added the dimension of
interviewing the heads of the customer groups (NWS, AWS, and Navy) who
have also played a key role in the adoption of the modules; as it was, COMET
provided the data on these interviews.
Besides the site visits and interviews, the other main source of data
was the written questionnaire. A low return rate, especially from the military
customers, made us cautious about its usefulness. At the request of the Air
Force, their questionnaires were returned to their central office for review
before being sent back to us; this most likely compromised the anonymity we
tried to build into the system.
Finally, the study was limited by the arbitrary nature of its stopping
point. A longer period of observation and data collecting would have offered
more confirmation of the process over time.


My Biases
Articulating researcher biases is an important part of the tradition of
qualitative inquiry. As Guba and Lincoln (1981) point out, researchers in the
qualitative tradition are recognized as part of the design, mutually affecting
the situation under study, and subjectively deciding what to attend to and what
to ignore. Recognizing this role, Rogers (1983) also warns against intellectual
blinders worn when a researcher becomes familiar with a well developed
tradition like that of change and adoption. The danger is in unstated
assumptions and not seeing rival explanations or more subtle and
contradictory details. In the spirit of trustworthy research, I would like to
enumerate some of my stronger biases, and describe a traditional structure for
helping those biases to become an asset as well as a limitation.
I came to this study with biases regarding instructional technology, the
process of change, and COMET. After a dozen years of designing instruction
for adults, I have come to believe in the potential power of both hard and soft
technologies that are well designed, suited to the learners, and developed in
support of their learning goals. Therefore, I think that learning more about
successful adoption of technologies is a worthwhile end in itself.
I also came with a bias toward the importance of the study of the
change process. I am intrigued with the inextricable links between learning
and change. I believe that it is impossible to understand how people learn
without also understanding how they change. When we ask people to learn,
we implicitly ask them to change in the bargain: change what they know,


change their skills and abilities, their behaviors, their beliefs and attitudes.
Sometimes, like COMET, we also ask them to incorporate new technologies of
learning into their world. I believe that it is critical for educators to
understand the effects of these changes.
Finally, I had a bias in regard to COMET. Although I have never been
directly involved with COMET, and have no particular stake in their success,
I am nevertheless in support of their mission to serve the meteorological
community. Several of the COMET staff members were friends prior to this
study, and several more became friends during the course of the study.
"Acceptance of the critical tradition"
One method of balancing out researcher bias in the context of
qualitative inquiry is what Phillips (1990, p. 30) refers to as "... acceptance of
the critical tradition." According to this tradition, quality research is based on
a point of view that has been
opened up to scrutiny, to examination, to challenge. It is
(based on) a view that has been teased out, analyzed,
criticized, debated; in general, it is a view that has been
forced to face the demands of reason and evidence. It is a
view that has respectable warrant. (Phillips, 1990, p.
30)
In order to partake in the critical tradition, Phillips suggests
conducting the research as a part of a community of inquiry. In this sense, I
am fortunate to be part of such a community. At COMET, the summative


evaluation on which this study is based was directed by a collaborative team.
This team consisted of Brent Wilson as co-investigator, the director of
COMET's distance education program, the manager of COMET's
international projects, two staff instructional designers, and one meteorologist
on permanent assignment from the National Weather Service. The team's job
was to review the evaluation design and data collection process, and discuss
all findings and conclusions on an on-going basis. We did have many hours
of discussion with everyone involved, which was a crucible for facing
"... the demands of reason and evidence" (Phillips, 1990, p. 30).
In addition, I had regular meetings with my dissertation co-chairs,
Brent Wilson and Alan Davis, who provided frequent "scrutiny, examination,
and challenge" as well as encouragement on both design and content of the
study. Finally, my dissertation committee was composed of researchers who
provided another opportunity for thoughtful review and feedback.
While this critical tradition does not claim to ensure truth or objectivity
in the quantitative sense, it provides a structure to support the research
becoming useful, reasonable, ethical, creative and thorough. Because of the
safeguards provided by this community of inquiry, I feel confident that my
biases have served to focus and energize the study, and were a strength of the
study as well as a limitation.


CHAPTER 4
INTRODUCTION
We tell the story to put us in touch with strengths we
may have forgotten, wisdom that has faded or
disappeared, and hopes that have fallen into
darkness. (Mellon, 1992, p.2)
This is the story of the forecast community, the story of COMET, and
of how the two systems have interacted around the issue of forecast training.
These stories are told to describe the constraints and motivations that led to
the creation of the COMET computer modules and the efforts to promote
their adoption. It is also the story of the evaluation of the modules and what
the evaluation found. Eventually, chapter five will "try to make some sense
out of it" (Agar, 1980, p. 9) by returning to the research questions of this study
and the analysis of the results, but chapter four begins with the story.


Westcoast Forecast Office
When you walk into the Westcoast Forecast Office2, the first thing you
notice is that it is not an office at all in the usual sense of the word. On the
perimeter of the low glass and brick building are cubicles, small work rooms,
a supply room and break room, but most of them are empty. Your attention
is drawn instead to the large central forecast hub. This room, the center of
daily activity, is essentially a circular bank of computers with a scattering of
mobile chairs on wheels. Interspersed among the computers are a variety of
other high tech screens and instruments, mostly with moving images on the
screens.
The staff seems constantly engaged and moving from screen to
instrument to phone and, like a beehive, there is a steady, low buzz of
conversation. One wall of the room, overlooking gray coastal waters, is floor-
to-ceiling glass, through which you can observe the weather "the old
fashioned way," as one seasoned veteran put it.
Forecasting is watching, discussing, comparing, calculating data,
reading signs, understanding patterns. Here at Westcoast, Frank, the marine
forecaster, is seated at his computer. He has been with the National Weather
Service for six years, and stationed in this office for four. For the last ten
minutes, he has been intently studying the computer screen in front of him.
From the radar images, he recognizes the beginning of a major weather
movement.
2 Pseudonyms are used throughout this study for all people and forecast office locations.


Across the circle, Larry, a twelve-year forecast veteran has been
watching a different screen focused on inland weather. Larry has been
explaining the inland weather pattern to Regina, a young meteorology intern
from State University. After a few minutes, without taking his eyes off the
screen, Frank calls over to Larry and Regina, who join him. Together the two
experienced forecasters and the new intern watch, discuss, and watch some
more.
Bill, the lead forecaster, is just arriving for a shift change. He sits down
in one of the chairs to retrieve a fax with incoming data from another office.
Without getting up from his chair, he playfully sends himself rolling across
the linoleum floor to see what the group is absorbed in. After a quick
greeting and briefing, the whole team moves a few steps to the table in the
middle of the room, and huddles over a tattered notebook from a collection of
books and manuals. Larry and Bill recall a similar pattern from two seasons
ago, and flip through the binder until they find the weather map and notes
they took at the time. Meanwhile, Frank and Regina settle in at an adjacent
computer to pull up the relevant mathematical computer models designed to
aid in the forecast process; the numbers are checked, and checked again. As
the data come together, the team begins to form a prognostication. The
marine forecast is declared to be compatible with the inland forecast. Both
forecasts are further refined by Bill and Larry's observation of the skies and
intimate knowledge of the local terrain of coast and mountain. After the
watching and comparing have been done, the signs read and the calculations


completed, the group is ready to issue what is called a weather product, a
collaborative forecast which represents the office's best estimate of what
the weather is, and is likely to be.
North Island Air Force Base
North Island Base is a town in itself, with houses, stores, schools,
churches, movies, and a recreation park. It is one of the largest and busiest
Air Force bases in the country, and often serves as the site of arrivals and
departures for foreign destinations and VIP travelers. Because of its critical
location, it also has one of the busiest Air Weather Service forecast offices in
the country.
The forecast offices are located in a long, low corrugated metal hangar
at the edge of the base. Though it looks like it might be a temporary building,
it has housed the forecast office for years. Just as in the Westcoast office, the
heart of activity here is the forecast room. Because of the shape of the hangar,
the forecast room is long and narrow, with a row of small windows
overlooking a parking lot outside and the base beyond. The room is arranged
for maximum viewing of all the computer screens, with two banks of
computers and phones, and a collection of filing cabinets and reference books
clustered at the end of the row.
The night before, there had been a drenching rain storm, and the
parking lot and air strips are pools of water. Kathleen, the lieutenant on duty,
has been a forecaster for one year. Like all Air Weather Service forecasters,


she had gone through Observers School for ten months, and successfully
served as an observer for two years. Her next step was to attend Air Weather
Forecast School for nine months; after graduation, she was assigned to North
Island, which provides an intensive practice site and is therefore considered a
plum location.
Her first activity of the morning is to attend a station meeting. Last
night, the crew "blew a forecast," underestimating the strength and duration
of the rainstorm. Kathleen and the rest of the crew gather in the training
room. Jack, the officer in charge of training, spends the next hour with both
the day and night crews, using handouts and overheads to review last night's
forecast, and to encourage them to discuss what they did correctly and how
they could have made a more accurate forecast. The day's weather is
expected to be clear and bright, providing the office staff with calmer times to
discuss the errant forecast, catch up on the latest National Weather Service
briefing paper, and tackle administrative duties.
Jack, who led the station meeting, has a degree in meteorology, and is a
firm believer in continuous training for forecasters. He often initiates detailed
discussions about forecasts, and uses any opportunity to provoke a debate
about forecast methods. His teaching formula is that time plus experience
plus coaching results in success.
Jack describes the office as "operationally intense"; that is, he feels
strongly that the prime duty of forecasters is to perform the operations
necessary to produce accurate weather products. Consequently, training for


his staff emphasizes forecast skills over background scientific knowledge
about the weather. He encourages forecasters to learn just enough scientific
background to enable them to make the argument about how and why they
made a forecast, and is fond of saying that you don't have to be a mechanic to
be a good driver, and you don't have to know how to publish a paper to be a
good journalist.
South Bay Naval Air Station
"Support the fleet. Ready today. Safe for Tomorrow. Able to fight
smarter." The Navy's mission, framed in gold, hangs on the wall in the
forecast room in South Bay Naval Air Station. The atmosphere in the forecast
office is busy and efficient, as crisp as the mission that guides it. Fifteen
forecasters, about half of the thirty-five person forecast office crew, are on
duty on today's shift. Two forecasters are on the phone, relaying information
to the office's mobile environmental team. Several are hunched over log
books, recording data from the last hour's forecast; later they will use that
sheet to compare the forecast they made with the weather as it actually
occurred. One forecaster comes shivering through the door from his watch
on the station's crow's nest where he was taking a turn with the outdoor
observation instruments.
The rest of the forecasters are clumped in twos and threes, observing
computer screens and the variety of weather instruments scattered
throughout the large room. There is a sense of movement with no wasted


effort as the weather data is tracked and noted. Mark, the deputy station
chief, is leaning on a long waist-high Formica counter which opens up onto
the public hallway adjacent to the forecast room. The counter is where pilots,
dressed in full flight gear, stop any time of the day or night to receive up-to-
the-minute forecasts for the base, for their destination, for the weather en
route, and for the times of sunrise and sunset. The counter is the pilots' last
stop before taking off.
Mark's first job of the afternoon is to train Kevin, the base's newest
forecaster, in the counter procedure. It is at the counter that you get a sense of
why the office has a serious tone. The intense faces around the room can see
and hear the pilots as they quiz the forecaster on the details of the weather,
with a joke and a thanks before they leave. No one can forget that the
forecasts produced from this office are, literally, a matter of life and death.
Later that day, Kevin studies the regulation book, to learn how
forecasts are coded and transmitted. Although he has graduated from Naval
forecast school, he will need to learn the procedures specific to this office. He
plans to stay in this base office during his three year tour of duty, and is
considering Naval weather forecasting as a career. During a lull in the day,
Mark reviews the report from the PBT (Planning Board for Training) that
supervises his training efforts in the office. After the office staff was cut last
year, the PBT advised him to cross-train the remaining staff, as well as bring
everyone up to speed on new satellite technology. He begins to make notes
on how he might accomplish these goals, and what materials he will need.
His reading and planning are interrupted by Robert, a second year forecaster,
who leans into Mark's office to ask his help in assessing a fog bank which is
getting thicker by the minute.
Forecasting
These scenes from Westcoast, North Island, and South Bay are
repeated hourly in forecast offices worldwide. For the most part, forecasting
the weather is a collaborative, often intense activity. From its first official
weather reporting functions in 1870, the U.S. government now trains and
employs approximately 5,000 forecasters in the National Weather Service, Air
Force and Navy. They are in small offices in remote locations, and in large
operations in major cities. They are located on military installations and ships
at sea, and report from locations all over the world. The forecasters range in
experience from student forecast interns to seasoned veterans with 30 or more
years of service.
Forecast offices are staffed 24 hours a day, every day of the year. The
pace and activity in a forecaster's shift depend mostly on the vagaries of the
weather, which, like all open systems, is difficult to predict. Like the
weather itself, a shift has calm, more stable periods and hectic, more turbulent
ones.
media, or answering phone queries on the weather; it may be issuing
bulletins to airports, highway patrols, or fisheries, or developing VIP
briefings for the Pentagon or White House. Some offices function solely to
collect and compile information from other forecast offices.
A work shift typically begins and ends with a briefing between those
coming on duty and those going off. In addition to weather duties,
forecasters share administrative tasks as well. Called focal point duties, these
tasks can be any administrative or maintenance task that keeps the operation
running smoothly, from paperwork to procuring supplies to overseeing the
operation of the office technology. Forecast offices tend to be structured
around teams, both in producing weather products and in training and
reviewing prior forecasts. It is a world based on changing technology and
refined techniques, with an emphasis on continual learning for novices and
experts alike.
Because of the rapidly changing environment, one standard function in
all forecast offices is training. Most offices have a wide variety of training
modalities at their disposal. Sometimes training plans are mandated by the
organization's central management, sometimes only suggested, and other
times they are totally at the discretion of the local training officer. Typical
forecast training modalities are on-the-job training, coaching, outside classes,
group training sessions, computerized training, brown bag discussion groups,
written articles and memos, and professional conferences. Training is
considered one of the most critical functions in a forecast office,
particularly since the mesoscale revolution of the 1980s.
The Forecast Revolution
In the 1980s, the world of weather forecasting experienced a dramatic
modernization. For the first time, new radar and interpretive technology
enabled the forecast community to greatly refine its ability to see and
understand very localized weather conditions, known as the mesoscale
environment. Heretofore, weather forecasts were by necessity rather broad
and unspecific, covering large geographical areas in which there could be a
variety of weather conditions at any given time. With mesoscale technology,
forecasters could now pinpoint a smaller area (for example, a single airport
or a section of town) and provide specific and very localized descriptions of
current and expected weather. Mesoscale forecasting provided more
complete, more refined information critical to pilots, mariners, farmers,
travelers, those in outdoor activities, and all whose safety and effectiveness
depend on awareness of the weather. For meteorologists, mesoscale
forecasting was a revolutionary breakthrough.
Like many contemporary changes, new information and new
technology were twin driving forces behind the mesoscale advances. But
these changes were complex, and necessitated that both novice and seasoned
forecasters be able to understand and rapidly integrate the latest theories,
applications, and equipment. The organizations in which most U.S.
forecasters work (the National Weather Service, the Air Force, and the
Navy) were compelled to find a way to train new forecasters as well as re-
train working forecasters who are stationed all over the world.
Simultaneous with the development of mesoscale forecasting, the
world of instructional design and technology was exploding with innovations
and improvements in interactive computer-based tools. Computer-based
tools were good candidates for teaching learners at a distance. These
programs could be used wherever the right hardware could be found. They
were designed to enable learners to be more in control of the timing, pace,
and sequence of their learning, and gave them ultimate freedom to repeat a
lesson as often as they liked. The best programs were noted for the
motivational advantage of their colorful and engaging graphics and video,
and features that got the learners interacting with the material and testing
their proficiency.
Given that forecasters are very familiar with computer technology, and
that computer-based instruction was much more cost-effective than moving
large groups of learners to a training site, it seemed a logical
solution for the new training needs of the forecast community. It was out of
this coming together of an urgent need for updated training for weather
forecasters and the new technologies of learning that COMET (The
Cooperative Program for Operational Meteorology, Education and Training)
was created in September 1989.
COMET
When you first walk into the modern, airy, brick-and-glass building
near the foothills in Boulder, Colorado, it is apparent that there is
concentrated work going on. The long hallways that outline the perimeter
of the building are quiet. Unlike weather forecast offices, the centers of
activity in this building are the individual offices of COMET's creative
talent: the designers, programmers, support staff, and administrators
responsible for carrying out COMET programs, including the design and
development of the COMET modules.
You could mistake Dave's office at COMET for the headquarters of an
architect (he works at a drafting table), a linguist (his chalkboard is full of
Chinese characters), or a graphic artist (surrounded by toys and cartoons,
with a computer screen as palette). He brings a rich background, as well as an
advanced degree in instructional technology, to his job as an instructional
designer for the Distance Learning Program.
On this day, as on many days, Dave is focused on the looming
publication deadline for the latest COMET module. Fifteen months ago, Dave
was assigned to be the designer of a module on an important innovation in
meteorology, the use of satellites in weather forecasting. As the instructional
designer on this particular module, his job is to oversee the entire project from
start to finish. Soon after he received the assignment, he held a series of
meetings at COMET with Donald and Mimi (COMET's in-house
meteorologists), and two national experts on the topic. They began the long
process of helping Dave define the scope of the module content and its
learning goals and objectives. Over time, although the task was to refine the
objectives and narrow the scope, Dave's dreaded "content creep" began to
set in. The farther they got into the project, the more the experts wanted to
cover, arguing that the many complex and interrelated scientific factors
needed to be thoroughly explained in order to be understood. The more the
content expanded, the more case studies and examples the experts inserted.
In fact, most of Dave's time lately has been devoted to finding the specific
materials required by the SMEs (subject matter experts).
As the content was finalized, Dave developed a written
project plan. Like a storyboard, the plan laid out the overall flow
of the module and what they intended to cover in each section. Next, he
began to dig in to develop the screen-by-screen details of the module. What
content will be covered in this section? What strategies can I use to help
reinforce the learning: text? case studies? games and quizzes? panel
discussions by experts? How will each point be illustrated: graphics?
animation? video? Eight months ago, Dave began to take his work-in-
progress sketches and notes on each section to the programming, graphics
and production staffs to involve them in giving form to the module content
and design ideas.
His task today is to review the latest video clip that has come back
from the camera crew. As Dave watches the screen, he feels sure that this
discussion with the expert meteorologist is too long, too dry, and takes on too
much complex subject matter. The SMEs on this project had been chosen
because they are leaders in satellite meteorology, and are credible names that
will be recognized by many of the forecasters. But because they are well
known and in demand, their time is also at a premium. It has been almost
impossible to schedule them to come to Boulder to work with Dave on
developing the module content. Every delay in scheduling the SMEs creates
one more problem in meeting the production deadline.
Deadlines are a chronic hazard of module development. Years ago, the
DL Program entered into an agreement on module topics, production
schedules, and intended production deadlines. From a management point of
view, these deadlines are commitments to the customers and funders. From a
design point of view, the deadlines often seem artificial, with formative
evaluation with the forecasters as a casualty of the process. From time to
time, there is an opportunity to try out the module with some of the
forecasters who have come to participate in a residence workshop at COMET,
but this kind of formative evaluation is no longer a regular or formal part of
the design process.
In another four months, the satellite module will need to be ready for the
quality assurance process. Dave's task will be to work with Donald and Mimi
on developing the user's guide, bibliography, and glossary. At last, after 18
months, the module will be farmed out for the final publication, printing,
and shipping to the forecast offices. Before this module is ready for
distribution, it will have required the work of a dozen specialists and nearly
2,000 staff hours.
COMET Programs
COMET is an arm of the University Corporation for Atmospheric Research
(UCAR), a consortium of universities that teach and conduct research in the
atmospheric and oceanic sciences. It is funded through a partnership of the
National Oceanic and Atmospheric Administration (NOAA, a federal agency),
and the forecasting arms of the National Weather Service, the Air Force and
the Navy. COMET was created in order to
serve as a premier resource that supports, enhances,
and stimulates the communication and application of
scientific knowledge of the atmospheric and related
sciences for the operational and educational
communities. (COMET, 1996, p. 2)
The strategy for embarking on this mission of promoting scientific
knowledge has been the development over time of four programs: 1) the
Residence Program that brings teachers and learners together in the Boulder
headquarters for in-person group seminars; 2) the Outreach Program, which
organizes funding and other activities to stimulate collaborative weather
research; 3) the New Products and International Program, which is developing
new educational products, an educational resource center, and international
markets for COMET; and 4) the Distance Learning Program, which specializes
in developing educational products to be used by forecasters on-site in their
far-flung weather forecast offices. It is the Distance Learning (DL) Program
that is the focus of this study.
The Distance Learning Program
What Are We Going to Teach, and How Are We Going to Teach It?
The first staff of the COMET DL Program was a creative and energetic
team. It was composed of two meteorologist/instructional designers and a
variety of part-time personnel and consultants: scientists, professors of
meteorology, and consulting professors of instructional design who were
highly knowledgeable about computer applications and cognitive science. In
order to fulfill the charge of inaugurating a national program providing
education at a distance, the team was faced with a myriad of strategic
decisions, two of which were central to the mission: What are we going to
teach? And how are we going to teach it?
To answer the first question of what to teach, the team turned to
experts in meteorology. From their experience in the scientific and academic
worlds, the meteorologists devised a structure of five very broad content
tracks: Marine Meteorology Track, Convective Track, Aviation Forecast
Track, Foundation Track, and Special Topics Track. Their guiding metaphor
was that the tracks formed a condensed university curriculum in meteorology,
with topics representing a range of subjects that meteorologists might cover
in such a curriculum, and that the curriculum would be divided into courses
presenting sub-topics of each track.
To answer the second question, how to teach it, COMET chose to place
its efforts into the design and production of one technology: multimedia
computer-based instruction. These multimedia materials were
called modules, with each module approximating a course from one of the
tracks in the designated curriculum. They were to be designed so that the
individual forecasters could complete the module at their own pace, in their
own time, in their own forecast office. The modules were to be self-contained
so that they could be completed in five to twenty hours with no outside guidance
needed. This mode of computer-based interactive training, one of many
strategies for facilitating distance learning, seemed best suited to COMET's
needs and has remained the primary DL strategy from its inception in 1989 to
the present.
The Modules
Thus the computer-based modules, designed around a university
curriculum, became the cornerstone of the COMET plan for distance
education for forecasters. Collectively, the modules are known as the
Operational Forecaster's Multimedia Library. The modules represent state-of-
the-art computer-based instructional products (for a sample module screen,
see Figure 1.1).
Three years after start-up, COMET's first module was distributed in
the Spring of 1992. The topic of the first module was the radar technology
known as Doppler, an innovation which was to become a standard in weather
forecasting. The National Weather Service (NWS), working in partnership
with COMET, mandated that the Doppler module be completed as a
prerequisite for NWS employees to attend a required training in Doppler
technology at the agency training center in Kansas City. The Workshop on
Doppler Radar Interpretation was a success. Of the NWS employees who were
surveyed, 90% reported that they had completed the Doppler module;
even Air Force and Navy forecasters, who were not required to use the
module, reported about a 50% to 60% rate of completion (Appendix, p. 174).
Although it is now somewhat out of date due to changing radar technology
over the past four years, the module is still highly valued for both its content
and its effective interactive design.
Early Successes
It is instructive to look back on the creation and adoption of the
Doppler module. The topic was selected in response to a strong request from
COMET'S customers. So critical was the perceived need for the distance
training, in fact, that the NWS considered it a mandatory foundation for its
own internal training strategy.
With a small staff, and a need to maximize resources, the COMET team
used forecasters in local offices to try out the design of the module as it was
being developed. Forecasters were asked to think through the content as well
as help assess its effectiveness as a teaching tool. One veteran forecaster tells
the story of Roger, one of the founders of the DL Program, one of the
designers of the Doppler module, and a former forecast colleague:

He (Roger) comes in here all excited, telling me about
this new computer module they're developing on
Doppler. He had me try it out, and he kept asking me
questions about whether it made sense, and how we
could change it, and whether I thought it would teach
the guys. We just spent hours poring over it.
Motivated and engaged learners, the right content, an effective design,
adequate time to use the modules, technology support, formative
evaluation with the forecasters, and a clear alignment with the needs of the
customer groups: this turned out to be a formula for creating a module with
a high rate of completion, a result that has never been repeated.
The DL Program grew rapidly. COMET planned an ambitious number
of modules, and negotiated an intense production schedule with its customer
groups. The staff of two grew to a staff of fifteen. From the inaugural
Doppler module distributed in 1992, the DL Program has distributed eight
additional modules through the end of 1995: Convection Initiation &
Boundary Detection (also distributed in 1992); Heavy Precipitation and Flash
Flooding (distributed in 1993); Forecast Process (distributed in 1993);
Numerical Weather Prediction (distributed in 1994); Marine Meteorology
Volumes 1 and 2 (distributed in 1994); and Extratropical Cyclones Volumes 1
and 2 (distributed in 1994). As COMET developed new and more
sophisticated modules, its process of instructional design evolved as well.
Evolution of the Design Process
In describing the multimedia design process, COMET draws an
analogy with Hollywood film production (Lamos & Parrish, 1994). In this
process, the subject matter experts are the stars of the production
and have a central role in deciding what cases
will be presented in the module, and which data will be used to illustrate
which points. The instructional designer, as producer and director, develops
the overall theme, sequences the information into instructional events that explain
the topic, and designs the interactive tasks and games that will help the
forecasters to practice and learn the forecasting skills. A team of media and
programming specialists gives form to the designers' ideas, and the final
product is checked by a staff quality assurance specialist. Unlike with the earliest
modules, the forecasters now have virtually no role in the development or
evaluation of the modules.
This laborious and time-consuming design process is what made it
especially disheartening for the DL team to realize that, in spite of their
award-winning design and in spite of meeting the pressures of distribution
deadlines, the modules were not being used very often in the field.
COMET's Concern
Almost from the beginning, after distributing the second module, the
COMET staff had begun to collect anecdotal information that the modules
were not in regular use, at least not as much as they had hoped. As part of
their process of formative evaluation, the staff and leadership of COMET
often traveled to forecast offices, attended professional weather conferences,
and met on a regular basis with the leadership of all three customer groups to
assess their satisfaction with the modules. By mid-1995, they were estimating
that 50% of the forecasters were currently using 50% of the modules; they had
hoped that 75% of the forecasters would be using 75% of the modules.
COMET's three customer groups, the forecast agencies, have in the
past been somewhat immune to fluctuations in the national budget and
international political situation. However, in the uncertain budget climate of
the nineties and with the end of the Cold War, the NWS, Air Force, and Navy are
among the organizations experiencing a move to downsize similar to that of other
American corporations. Cutbacks in personnel and budgets had several
effects on forecast offices. First, there were fewer people to carry out duties,
and less time available for training. Second, fewer dollars meant greater
scrutiny of how to spend dwindling resources on the most effective training
possible.
In addition, there was a trend to decentralize the forecast agencies,
another similarity to modern corporate life. These government organizations,
especially the Air Force and Navy, which traditionally ran on a strict structure of
decisions made at the top, were beginning to allow more decision-making to
occur at the local level.
plans, regional offices and individual forecast offices began to assume more
power to decide what training they needed and how to conduct it.