Can theories of technological change be tested?

Material Information

Title:
Can theories of technological change be tested?
Creator:
Wilson, Roy Woodrow
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
1990
Language:
English
Physical Description:
vi, 113 leaves ; 29 cm

Thesis/Dissertation Information

Degree:
Master's (Master of Humanities)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
Department of Humanities and Social Sciences, CU Denver
Degree Disciplines:
Humanities

Subjects

Subjects / Keywords:
Technological innovations -- Social aspects (lcsh)
Technological innovations -- Social aspects (fast)
Genre:
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Bibliography:
Includes bibliographical references (leaves 108-113).
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Master of Humanities, Humanities Program.
General Note:
Department of Humanities and Social Sciences
Statement of Responsibility:
by Roy Woodrow Wilson.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
22698275 (OCLC)
ocm22698275
Classification:
LD1190.L58 1990m .W54 (lcc)


Full Text
CAN THEORIES OF TECHNOLOGICAL CHANGE BE TESTED?
by
Roy Woodrow Wilson
B.A., Metropolitan State College, 1976
M.A., University of Denver, 1978
M.S., University of Denver, 1983
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Master of Humanities
Humanities Program
1990


This thesis for the Master of Humanities
degree by
Roy Woodrow Wilson
has been approved for the
Humanities Program
by
Glenn A. Webster
Date


Wilson, Roy Woodrow (M.H.)
Can Theories of Technological Change Be Tested?
Thesis directed by Associate Professor Glenn A. Webster
This inter-disciplinary thesis considers the possibility of testing "theories"
of technological change. Public and business policies often presuppose that
technological change can be successfully managed. Since this presupposition may
rest on unacknowledged or untested theories of technological change, the
testability of such theories is a significant issue.
The thesis is structured by the tension between the disciplines of
philosophy and history in relation to the phenomenon of technological change.
Based on an analogy to the history and philosophy of science, the thesis considers
the possibility of testing theories of technological change by evaluating those
theories with respect to the history of technological change. As an experiment in
this kind of evaluation, two theories of technological change are evaluated with
respect to the history of management information systems (MIS) from 1945 to
1960. The history of MIS during this period is interpreted in terms of the
technological co-evolution of the components of an idealized information
processing system.
It is argued on anti-realist epistemological grounds that the discipline of
history produces evidence which is both objective and variable with respect to
meta-historical claims. The thesis concludes that meta-historical claims are testable
in the sense that the objectivity of the judgements of the discipline of history


allows us to determine the warrantability of judgements concerning the likelihood
of a meta-historical claim.
Finally, it is suggested that, insofar as technological change brings
unmanageable technological and social consequences, policy managers and
analysts would be well served by attempting to identify and test the theories of
technological change presupposed by broader policy initiatives.
The form and content of this abstract are approved. I recommend its publication.
Signed
Glenn A. Webster
iv


CONTENTS
CHAPTER
1. INTRODUCTION................................................. 1
Purpose of the Study ..................................... 1
Scope of the Study........................................ 3
Arrangement of the Thesis ................................ 5
2. REVIEW OF THE LITERATURE ..................................... 7
The Philosophy of Science and Technology................. 7
The History of Technology................................ 16
3. TWO META-HISTORICAL CLAIMS................................... 22
Introduction............................................. 22
Technological Disequilibrium ............................ 23
Technological Co-evolution............................... 30
Conclusion .............................................. 33
4. MANAGEMENT INFORMATION SYSTEMS AND TOTAL WAR ........... 35
Introduction............................................. 35
Von Neumann and the Bomb................................. 38
A Passion for Planning................................... 39
The Search for Optimum War............................... 42
At the Dawn of Electronic Computing ..................... 47
New Applications of Computing: A Watershed .............. 49
Computer Design and National Defense .................... 53


Von Neumann and the Weather ............................. 59
The Big Picture.......................................... 66
Conclusion .............................................. 71
5. RELATING EACH CLAIM TO THE HISTORICAL RECORD ............... 79
Introduction............................................. 79
Technological Disequilibrium ............................ 81
Technological Co-evolution............................... 83
Conclusion .............................................. 88
6. ARE META-HISTORICAL CLAIMS TESTABLE? ....................... 90
Introduction............................................. 90
Kinds of Historical Evidence............................. 94
On the Uses of History................................... 96
History and Truth........................................ 97
Conclusion ............................................. 100
7. CONCLUSION.................................................. 101
Von Neumann as Change Agent ........................... 101
Technological Change and the Arrow of Time............. 103
A Counsel of Prudence................................... 105
APPENDIX
WARRANTABILITY, JUDGEMENT, AND PROBABILITY ............ 106
SELECTED BIBLIOGRAPHY ............................................ 108
vi


CHAPTER 1
INTRODUCTION
Purpose of the Study
Successfully implementing policy decisions in both the private and public
sectors increasingly hinges on the successful management of technological
change.1 Technological change is manageable only if the dynamics of
technological change are understood, at least in an "engineering" sense. Regardless
of how such understanding is obtained, it may be articulated in the form of a
model or theory of technological change.
Some accounts of technological change are based on economic theory,
others on the sociology of invention, others still on the concepts of general
systems theory.2 To the extent that successful policy implementation hinges on
the successful development and/or deployment of a technological system, that
policy also presupposes a theory of technological change. With its program for the
1 Educational programs for the middle-manager in government and industry
have been created that focus on the problems of managing technological change:
the Sloan School of Management at MIT offers a concentration in the
Management of Technological Innovation.
2 Rachel Laudan, Introduction to The Nature of Technological Knowledge:
Are Theories of Scientific Change Relevant?, ed. Rachel Laudan (Boston, 1984),
pp. 1-26.


development of advanced computing technology, the Strategic Defense Initiative is
a case in point.3
The decision to implement higher-level policies may turn on the apparent
plausibility of a lower-level theory of technological change which implies that
through a definite series of steps, the desired technological change can in fact be
produced. Obviously, it is important to be able to assess such theories. One
measure of merit might be the "testability" of a theory.
This thesis examines the claim that theories of technological change ought
to be "empirically testable" by reference to the history of technological change. If
theories of technological change cannot be "empirically tested" in this sense, then
the policies which presuppose them lack grounding. Discovering and evaluating
theories of technological change presupposed by higher-level policies has the
function, if not of identifying "better" theories, then at least of helping to identify
"worse" ones. And, as recently observed, if this represents only a marginal
improvement of policy-making, it may be worthwhile.4
This thesis is an inter-disciplinary study of theories of technological
change in the sense that it is equally a work of history and philosophy that
3 Defense Advanced Research Projects Agency, Strategic Computing: New-
Generation Computing Technology: A Strategic Plan for Its Development and
Application to Critical Problems in Defense (28 October 1983), p. 69, employs the
"push-pull" ianguage of technological change used by economists.
4 Richard E. Neustadt and Ernest R. May, Thinking in Time: The Uses of
History for Decision-Makers (New York, 1986).
2


highlights the interplay between the two disciplines.5 The inter-disciplinary nature
of the study structures the problem posed, the way in which it is treated, and the
way the thesis is organized. In particular, we are required to balance differences in
locution between the two disciplines.
The differences, however, between philosophers and historians are not
merely verbal. While not all philosophers are abstract and ahistorical, philosophy
as a discipline has tended to be more concerned with relatively abstract issues and
to employ relatively ahistorical methods.6 While not all historians are concrete
and indifferent to theoretical insights, history as a discipline has tended to concern
relatively concrete issues and to use relatively atheoretical methods. These
differences add to the complexity of the thesis.
Scope of the Study
Hegel is undoubtedly the best/worst example of a philosopher arrogating,
not only the role of historian, but history itself to the purposes of a philosophical
system. Historians have been understandably quick to resist such encroachment,
5 This is a stated requirement of the Master of Humanities program at the
University of Colorado at Denver.
6 Philosophers used to try to determine the nature of history or historical
discourse. Now, through the works of Rorty, Gadamer, Habermas, and Arendt,
philosophy is being subjected to the "relativizing" effects of history. For an
account of this transformation, see Richard J. Bernstein, Beyond Objectivism and
Relativism: Science, Hermeneutics, and Praxis (Philadelphia, 1983).
3


establishing disciplinary mores which make the appearance of philosophical
concerns a cause of suspicion.7
History is not a neutral stuff to be arbitrarily interpreted. However
important it may be to establish a line of demarcation between history and
philosophy, this is outside the scope of the thesis. Some useful distinctions can be
made, however, by briefly examining the terms historical, meta-historical, and
philosophical. A claim would be naively regarded as historical if it pertains to
the past and does not involve any obvious generalizations. For example, "The
United States was attacked at Pearl Harbor on December 7, 1941" seems a patently uninteresting historical claim. On the other hand, "Nations which are
overly militarized become second rate powers" is also about the past and so is
historical, but seems also to be a generalization. A normative rather than factual
claim is being made, undoubtedly on the basis of implicit factual claims. The
second claim seems better classified as meta-historical: it asserts something of
all relevant historical periods.8
7 David Hackett Fischer, Introduction to Historians' Fallacies (New York,
1970); Arthur M. Schlesinger, "The Inscrutability of History," in The Vital Past:
Writings on the Uses of History, ed. Stephen Vaughn (Athens, Georgia, 1985).
8 A similar relation between fact and theory arises in every discipline that
attempts to be theoretical. For a discussion of these difficulties, and the role
played by history, see David Kaplan and Robert Manners, Culture Theory
(Englewood Cliffs, 1972), pp. 67-75.
4


Are the claims of Thomas Kuhn concerning scientific (rather than
technological) change, philosophical or meta-historical or even historical?9
Certainly Kuhn makes some claims which are historical, while others are more
difficult to categorize. Here the three-fold distinction breaks down, as it is unable
to successfully classify a well-known case.
Despite this weakness, I want to deliberately over-simplify matters by
calling a claim: historical, which concerns the past and involves no obvious
generalizations; meta-historical, which also involves obvious generalization; and
philosophical, which states the manner in which historical claims provide
evidence for meta-historical claims.
Arrangement of the Thesis
Employing a strategy whereby we progressively narrow our focus, Chapter
2 begins by surveying the literature of the history and philosophy of technology,
in the broad, ordinary sense of these terms. We briefly touch upon the subject
which will become a focal point of the thesis, the evolution of management
information systems (MIS).
Chapter 3 presents two relatively well-known theories of technological
change. The originator of each asserts that the theory in question has wider
application than its original context: hence, these theories can be regarded as
9 Thomas Kuhn, The Structure of Scientific Revolutions (Chicago, 1962).
5


meta-historical claims applicable, at least in rough terms, to the evolution of
MIS.
Chapter 4 is "a" history of MIS in the post-war period. It is intended to be
a "standalone" work of history which can be evaluated independently of the uses
it serves in this thesis. Chapter 4 serves as the evidential basis against which the
meta-historical claims of Chapter 3 are assessed in Chapter 5. In effect, Chapters
3, 4 and 5 represent an "experiment" in which two meta-historical claims
concerning technological change are "tested" against a particular episode in the
history of technological change. One of the meta-historical claims may be
judged, if not false, then less "adequate" to the historical claims of Chapter 4.
Chapter 6 reconsiders the philosophical claim that theories of
technological change should be testable with respect to the history of
technological change. We assert that "testability" is possible, but only in a
restricted sense. Chapter 7 considers the nature of technological change as
exemplified in Chapter 4 and its implications for those who would manage
technological change.
6


CHAPTER 2
REVIEW OF THE LITERATURE
The Philosophy of Science and Technology
This chapter selectively surveys the literature in the history and philosophy
of science and technology, in preparation for the next chapter where two specific
theories of technological change are considered.
Nearly twenty years ago, T.S. Kuhn noted several variations in the
historical interplay between science and technology. Prior to the twentieth century,
scientific and technological activities were more easily distinguished: in the 18th
century, science drove technology; in the 19th, technology drove science; and
now, science and technology interpenetrate to such an extent that it is difficult to
separate one from the other except in an abstract, ideal sense.1
The development of the atomic bomb provides a good example of this
interpenetration. The theory indicating the possibility of achieving a chain reaction
came from mathematical physics, but the realization of this possibility was
decisively shaped by technological possibilities. The successful approach was to
enclose plutonium in a spherical casing of conventional explosive materials: upon
1 Thomas Kuhn, "The Relations between History and the History of Science,"
Daedalus 100 (1971): 271-304.


detonation of the conventional materials an implosion would occur, initiating a
chain reaction.
A number of other workers in the history and philosophy of science have
tried to account for the interplay as well as the differences between science and
technology. Henryk Skolimowski, for example, locates the essential difference
between science and technology in their respective ideals of progress:
... technological progress is the key to the understanding of technology ... in
science, we are concerned with reality in its basic meaning; our investigations
are recorded in treatises "on what there is." .... Technological progress ... could
be described as the pursuit of effectiveness in producing objects of a given
kind.2
This distinction is a useful way to characterize two easily recognized and
different kinds of experience. While primarily concerned with "reality in its basic
meaning," the concern of a scientist may be partially expressed via the
construction of a fifty-mile tunnel called a super-collider. The engineer building a
super-collider will be engaged in "the pursuit of effectiveness in producing objects
of a given kind," nevertheless furthering the elaboration of the scientific theory
which motivated the construction of the super-collider.
Science and technology interpenetrate in a complex and, historically
speaking, progressively more systematic means-ends relationship. In the second
half of this century, the belief that this means-end relationship can be managed
2 Henryk Skolimowski, "The Structure of Thinking in Technology," in
Philosophy and Technology: Readings in the Philosophical Problems of
Technology, ed. C. Mitcham and R. Mackey (New York, 1973), pp. 43-45.
8


has become commonplace. The Strategic Computing Plan assumes that
investigation can be structured to maximize the mutual reinforcement of scientific
(computer science) and technological (computer engineering) change.
Rachel Laudan has recently suggested that philosophies of scientific
change and the history of science might form the basis from which philosophies
of technological change could be developed. In considering the history and
philosophy of science as an entry point into the history and philosophy of
technology, several problems arise, most notably the different senses which
"science" holds for the philosopher and the historian.
Ernan McMullin distinguishes two inter-related senses of "science" by employing a symbolic notation. S1 concerns the end product of research, while S2 addresses the processes, intellectual and otherwise, which culminate in S1. For
example, the famous equation of Einstein has two stories: the first, which
concerns the meaning of the equation itself (S1) and the second, which describes
the processes leading up to the equation (S2).3 Despite this clarification, the
philosophy of science (PS) presents additional difficulties because of ambiguities
in the meaning of "philosophy."
McMullin resolves this ambiguity by noting that "external" philosophy of
science (PSE) explains SI in terms of broader theories, typically those involving
3 Ernan McMullin, "Philosophy of Science: An Overview," in Scientific
Knowledge: Basic Issues in the Philosophy of Science, ed. Janet Kourany
(Belmont, California, 1987), pp. 3-19.
9


the phenomenon of knowing or the logical structure of demonstration, while
"internal" philosophy of science (PSI) is "based on what scientists do rather than
upon what they say they are doing." Both PSE and PSI leave the scientific content
of S1 to the scientist: the philosophy of science is concerned with S2 and its relation to S1. One can choose between explanation based either in symbolic logic
(PSE) or in historical exposition (PSI). At least in a naive sense, these are very
different kinds of explanation.4
For McMullin, the history of science (HS) can serve diverse purposes in
PS. HS supplies PSE with examples illustrating the philosophical claims made by
PSE, while HS plays an evidential role in PSI. According to McMullin, the claims
of PSI receive their warrant from HS. Let us consider this a little more closely.
The focus of PSE is on the logical relationship between experiment and
theory. A historical account which reveals aberrant scientific behavior having little
relationship to logically articulated accounts will have little (apparent) value in
PSE. Since the focus of PSI is on historical accounts of scientific behavior, these
accounts are important because they may confirm or falsify theories concerning
the actions of the scientific community.
Given the strategy of using the history and philosophy of science as an
entry point into the history and philosophy of technology, the above discussion of
4 For an account of scientific explanation, see Janet Kourany, Scientific
Knowledge: Basic Issues in the Philosophy of Science (Belmont, California,
1987), pp. 20-110. For an account of historical explanation, see Patrick Gardiner,
Theories of History (Glencoe, Illinois, 1959), pp. 344-443.
10


PS needs to be focused on scientific change. It has been suggested that "internal"
philosophers of science subject their own claims regarding scientific change to
"empirical test" by using HS to evaluate competing theses.5 This comparison is
possible by virtue of the different disciplinary goals of history and philosophy.
As McMullin notes, HS aims at the particular, the singular, hoping to
establish what actually happened. The interpretation of particular events in terms
of universal patterns is central to PS, while of secondary importance to HS. In the
language of Chapter 1, PSI is inherently meta-historical. The role of history in
what McMullin terms "the new genre" of the history and philosophy of science
(HPS) will be "prior and in a sense basic, for on the establishing of the analysis as
history depends its warrant as philosophy." Thus, HS both provides material for
PSI and assesses the adequacy of how that material is treated by PSI.6
Since history aims at the singular, it entertains no "universal" which could
be under-, over-, or otherwise determined by historical fact. McMullin does not
address problems concerning the appropriation of the "raw data" of HS by the
HPS, nor does he spell out how the singularity of HS and the universality of PSI
are reconciled in HPS. He does, however, point to a paradigmatic instance of
work in the genre: Thomas Kuhn's Structure of Scientific Revolutions.
5 Larry Laudan et al., "Scientific Change: Philosophical Models and Historical Research," Synthese 69 (1986): 141-223.
6 McMullin (n. 3 above). In the language of Chapter 1, McMullin makes the
philosophical claim that historical claims warrant meta-historical ones.
11


If Structure is an exemplary work in the history and philosophy of science,
then perhaps, following the suggestion of Rachel Laudan, it is an especially good
starting point for the development of the history and philosophy of technology.
Quite likely for different reasons, historian Edward Constant II contrasts his own conception of technological change with the Kuhnian notion of paradigm change in science.
Constant claims that science fuels certain species of technological
revolutions by identifying the operating conditions under which technology failure
can be presumed. Constant argues that it was not actual failure of earlier engines
that led to the development of the turbojet: engines were nowhere near operating
conditions of hypothesized failure. The internal dynamic of overcoming technical
barriers by the engineering profession coupled with the revolutionary activities of
certain members led to a paradigm change by the "community," a change
evidenced by the displacement of the propeller plane by the jet.7
The notion of an "internal dynamic" has led some thinkers such as Marx to
speak about a "technological imperative" generating inexorable technological
expansion. Langdon Winner scrutinizes the claims of such "technological
determinists" and clears away some conceptual underbrush by distinguishing
between apparatus, technique, and organization. The development of an atomic
bomb (apparatus) required the novel social arrangement of Los Alamos
7 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore,
1980).
12


(organization) to utilize the skills, methods and procedures (techniques) required
to construct an atomic bomb.8
Typically, technological determinists attempt to parlay mere illustrations
from the history of technology (HT) into evidence for their claims. Consider the
following claim: "Major technological changes have been brought about by the
actions of government." Although supported by the atomic bomb and other
examples, the claim becomes less convincing as one considers additional examples
such as the effect of the government on (1) the creation of the incandescent
electric light and (2) the oil shale industry in the 1970s and 1980s.9 The claim
that major technology changes have been effected by government suffers from an
abundance of conflicting evidence.
Historian David Hackett Fischer has excoriated historians who resolve
conflicting evidence by the logical solecism of shifting one's ground. This has led
some to suspect that Fischer is pushing a new variant of "scientific history," one
in which the "hypothesis of universal form" becomes the methodological rule for
the practicing historian. According to the philosopher of science Carl Hempel,
the explanation of the occurrence of an event of some specific kind E at a
certain place and time consists, as it is usually expressed, in indicating the
causes or determining factors of E.... the scientific [emphasis added]
explanation of the event in question consists of
8 Langdon Winner, Autonomous Technology: Technics-out-of-Control as a
Theme in Political Thought (Cambridge, 1977).
9 Thomas P. Hughes, Networks of Power: Electrification in Western Society,
1880-1930 (Baltimore, 1983), pp. 58-61.
13


(1) a set of statements asserting the occurrences of certain events C1, ...
Cn at certain times and places.
(2) a set of universal hypotheses, such that
(a) the statements of both groups [that is, a set of statements and a set of
universal hypotheses] are reasonably well-confirmed by empirical evidence,
(b) from the two groups of statements the sentence asserting the
occurrence of event E can be logically deduced.10
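Schematically, and with symbols supplied here only for compactness, the covering-law model treats explanation as a deduction of the explanandum from particular conditions together with universal laws:

\[
\begin{array}{ll}
C_1, C_2, \ldots, C_n & \text{(statements of particular antecedent events)} \\
L_1, L_2, \ldots, L_k & \text{(universal hypotheses)} \\
\hline
E & \text{(sentence asserting the occurrence of the event to be explained)}
\end{array}
\]

subject to Hempel's proviso that both groups of premises be reasonably well confirmed by empirical evidence.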
What does Fischer have to say about the above "covering law"? First, this
kind of explanation cannot be had in historical writing although one might attempt
it by adopting the form of universals: "All revolutions occurring in France in the
late 18th century were preceded by a period in which those in power attempted to
mollify the underclass." Secondly, historians do not use universal laws in their
work, perhaps due to "an inherited antipathy to questions and hypotheses and
models, which is apt to run below the surface of a historian's thought."11
Historians do not use universal laws because, as noted by McMullin, they
are not interested in the kind of generalization afforded by universal law. The
discipline of history often plays a decisive role in the evaluation of meta-
historical claims formulated by economists, sociologists, or even historians,
occasionally rejecting them as unwarranted by the facts of a particular historical
episode.12 Although the discipline of history has its own norms and autonomy, it
10 David Hackett Fischer, Historians' Fallacies: Toward a Logic of Historical
Thought (New York, 1970), p. 128.
11 Fischer (n. 10 above), p. 7.
12 For an example of the use of history in the "verification" of anthropological
theories, see David Kaplan and Robert Manners, Culture Theory (Englewood
Cliffs, 1972), pp. 67-79.
14


may nevertheless provide "data" to complement the techniques and theories of
disciplines interested in explanation via subsumption under universals.13
For Sir Karl Popper, philosopher of science and history, scientific
disciplines advance by formulating theories which are capable of refutation, which
most often comes through experimental observation. Although Popper is primarily
concerned with refutability in science, we see no reason not to appropriate the
concept of refutability.14 Just as proof in the formal disciplines has a strong
social, informal component, so too can refutation in empirical disciplines occur
without formal contradiction.
One key ingredient in the factual refutation of a claim is the objectivity of
the fact. From a phenomenological point of view, the objectivity of science is an
attribution rather than a given. With the adoption of a different view toward the
experience of the historian, historical research can yield "facts" having an
objectivity analogous to that of scientific research, a possibility considered further
in Chapter 6. Let us now move from the philosophy of science and technology to
the history of science and technology.
13 During the 1930s, historians saw themselves as providing data to, and at the same time testing the hypotheses of, social scientists. Oscar Handlin, Truth in
History (Cambridge, 1979), p. 7.
14 Karl Popper, The Poverty of Historicism (Boston, 1957).
15


The History of Technology
17th and 18th century science was the recreation of the gentry. The rise of
the 19th century English mercantile class coincided with the growth of industrial
cities propelled by scientific and technological power into the arena of economic
and political power. As the self-proclaimed "City of Science," Manchester was
imitated throughout Europe as towns created technical universities in order to fuel
continued technological and economic development.15
The impetus to create technological universities was carried to late 19th
century America in the hearts and minds of doctoral students trained at German
scientific universities. The best and brightest of the West foresaw the perfection of
Man via scientific rationality.16 Specialization and the lack of an institutionalized
base led American scientists to act as entrepreneurs, relying on patronage to fund
and secure recognition for their research:
... the early gifts for science were particularly significant. They established
precedents and projected lines of development. By a process of institutional
aggrandizement the first scientific schools, observatories and laboratories
created their own need and perpetuated their own kind.17
15 Arnold Thackray, "Natural Knowledge in Cultural Context: The Manchester Model," American Historical Review 74 (1974): 672-709.
16 Charles Rosenberg, No Other Gods: On Science and American Social
Thought (Baltimore, 1976).
17 Howard S. Miller, Dollars for Research: Science and Its Patrons in
Nineteenth Century America (Seattle, 1970), p. ix.
16


The link between industry, education, and science in the United States was
strengthened by the business practice and philanthropy of Andrew Carnegie. In the
early 1870s, Carnegie drew the short-lived ridicule of his competitors by hiring a
metallurgical chemist to supervise his blast furnace operations. At the turn of the
century, the Carnegie Institution of Washington heralded Big Science while
marking the ebb of individualism:
[t]he most effective way to find and develop the exceptional man is to put
promising men at work at research, under proper ... supervision. The men who
can not fulfill their promise will soon drop out, and by the survival of the
fittest the capable, exceptional man will appear.18
With the spread of industrialization, science bifurcated into pure and
applied science. Technology was viewed as applied science: research scientists
discovered universal truths concerning nature, and applied scientists exploited
these truths, perhaps even embodying them in apparatus.19 The conceptual
subordination of technology as applied science to pure science was reflected in the
sociology of science and technology. American technology became the province of
the working- or middle-class engineer, while science was worked in universities
and government research centers by a higher social class. By the third quarter of
the 19th century, engineers were doers and scientists knowers.20
18 Miller (n. 17 above), p. 177.
19 Thomas Kuhn, The Essential Tension (Chicago, 1977), pp. 142-46.
20 Edwin T. Layton Jr., "Mirror-Image Twins: The Communities of Science
and Technology in 19th-Century America," Technology and Culture 12 (October
1971): 562-80.
17


As the "genius of Menlo Park" who seemingly invented the incandescent
electric light without academic science, Thomas Alva Edison personified the
difference between (European) knowing and (American) doing. Edison initially
saw the incandescent light as a relatively simple adaptation of arc-light
technology. Experiment proved the contrary and Edison hired trained scientists
and began to consult advanced scientific journals.
Edison the engineer seemed to yield to Edison the scientist. In the design
of the central power generator needed to establish his system of electric lighting,
an optimum armature winding scheme was needed to make the system
economically competitive with existing arc-lighting systems. Scientific theory
could not determine this scheme. The repeated success of Edison's well-funded laboratory research, mixed with science, suggested the promise of industrial
research.21
It was not until technologically superior European products eroded the
market share of the (former Edison) General Electric Company that GE
determined to meet the threat through research. At the Bell Telephone Company,
the potential threat to telephone from wireless radio-based systems drove the
company to establish control over wireless through a series of patents. Although
strategies for doing so differed, both companies bent science to the development
of apparatus.
21 Robert D. Friedel, Paul Israel, and Bernard Finn, Edison's Electric Light:
Biography of an Invention (New Brunswick, 1986).
18


The wide range of problems faced by research workers forced GE to
depart from the disciplinary strategies acquired at educational institutions such as
the Massachusetts Institute of Technology. While certainly not discarding
scientific theory, GE workers relied on the power of experiment to discover the
behavior of specific pieces of electrical apparatus.
Bell system experience, resources, and relations with the government all
served to restrict potential market areas for the company, encouraging Bell
researchers to formulate
technological theories, conceptual and mathematical constructs that described
the behavior of particular types of technology. Technological theories could be
used directly or, with experience, codified for further development and
design.22
The development of such theories was hastened by both World Wars, leading to
the establishment of a research tradition in which even current Bell System
workers speak of technological theories.
The United States entered World War II technologically ill-prepared.
United States fighter aircraft never reached technological parity with German
fighter planes. Luckily, after 1943, attrition eroded the effect of German battle
superiority. As the war progressed, technological theories were needed for the
construction and control of advanced weapon systems. The delivery and
construction of conventional warheads made it necessary to solve ballistics and
22 Leonard Reich, The Making of American Industrial Research: Science and
Business at GE and Bell, 1876-1926 (Cambridge, 1985), pp. 205-8, 250.
19


other mathematical problems faster than any army of human computers. This led
the US Army to develop the first electronic digital computer, the ENIAC.23
Often regarded as the father of the modern electronic digital computer, the
renowned mathematician, physicist, economist, weaponeer, and computer
designer John von Neumann foresaw the applicability of the computer to a host of
mathematical, scientific and technological problems. Indeed, as we shall see in
more detail in Chapter 4, von Neumann played a decisive role in the diffusion as
well as the development of advanced computing technology in the post-war world,
principally in the context of research and development of thermonuclear weapons.
It has been claimed that, because of their focus on scientific problems, von
Neumann and other computer pioneers did not foresee the widespread demand for
computing generated by business and industry.24 Von Neumann understood,
however, that the construction of an intercontinental ballistic missile system
depended less on the level of missile or thermonuclear device technology than the
ability to manage the concurrent development of a myriad of technological
subsystems. We shall argue in Chapter 4 that von Neumann played a central role
in the development and diffusion of management information systems (MIS).
23 Computers played a critical role in the development of both atomic and
hydrogen bombs. See Joel Shurkin, Engines of the Mind (New York, 1985) for a
general history of the modern digital computer.
24 Paul Ceruzzi, "An Unforeseen Revolution: Computers and Expectations, 1935-1985," in Imagining Tomorrow: History, Technology, and the American Future, ed. Joseph Corn (Cambridge, 1986).
20


Why is the development of MIS during the post-war period a noteworthy
instance of technological change? As MIS changed in response to the demands of
concurrent development, manual processing gave way to processing by electronic
digital computers. This change perhaps made the additional complexity associated with concurrency appear less daunting, so that national security managers no longer felt constrained by it. It is possible that concurrency, which began as a possibility pursued only because of the exigencies of the Cold War, was then transformed into a standard management approach for projects lacking the urgency of the ICBM programs. If so, this change
in MIS occasioned and perhaps encouraged the creation of programs dwarfing
even the Manhattan Project.
To the extent that such programs have led to an alteration of American
social experience, favoring the survival of some species of experience over others,
this change in MIS may have led to broader changes. Thus, with the interpretation
of MIS as a technology (an organization, in Langdon Winner's terms), we have moved
from the study of nature to the creation of artifacts and on to the use of these
artifacts in the guidance and control of evolving complexes of nature, artifact, and
man.
21


CHAPTER 3
TWO META-HISTORICAL CLAIMS
Introduction
This chapter presents two theories of technological change, what I am
calling meta-historical claims. After considering the more fully developed views
which form the background for these theories, we focus on the portion of this
background that allows us to compare the two theories.
Each theory is articulated in terms of a very different historical framework.
The theory of technological disequilibrium of Thomas P. Hughes is elaborated in
the context of the growth of electrification from 1880 to 1930, while the theory of
technological co-evolution of Edward Constant has as its context the development
and use of supersonic aircraft. As a result, one would expect some difficulty in
comparing the two theories.1
Edward Constant provides one point of comparison when discussing one
aspect of the theory of technological co-evolution, which
implies more than either technological disequilibrium or "reverse salients in an
advancing technological front," the image Thomas P. Hughes has used to
1 See W. David Lewis, review of The Origins of the Turbojet Revolution, by
Edward Constant II, in Technology and Culture 23 (1982): 512-16; Terry S.
Reynolds, review of Networks of Power, by Thomas P. Hughes, in Technology
and Culture 25 (1984): 644-47.


portray severe problem areas that hold up the rapid advance of an entire
technology.2
This threefold contrast between technological co-evolution, technological
disequilibrium and reverse salients in an advancing technological front is the
vehicle for comparing these two broad theories of technological change.
First, we examine the concepts of technological disequilibrium, reverse
salient, and technological co-evolution. Second, we consider the applicability of
the concepts of disequilibrium and co-evolution to the emergence of MIS in the
post-war era.
Technological Disequilibrium
In 1971, historian of technology Thomas P. Hughes published his study of
Elmer Sperry and the various Sperry companies which formed around that prolific
inventor/engineer of the late 19th and early 20th centuries. Hughes began to
articulate the metaphor of "reverse salients in an expanding technological front,"
arguing that
Sperry was an interesting man aside from his profession, but to write of him
the biographer must also write of machines, processes, and systems.3
As Hughes was an engineering student before being professionally trained in history, it is natural that the "systems concept" figures prominently in the thinking and
2 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore,
1980), p. 14.
3 Thomas P. Hughes, Elmer Sperry: Inventor and Engineer (Baltimore, 1971),
p. xv.
23


writing of Hughes. The importance of the "systems concept" is that it enables us
to establish a vocabulary relatively common to both Hughes and Constant.
The development of the two related technologies of gun-fire control and
gyro-compasses in the years before Sarajevo engaged the energies of the Sperry
company and its namesake. The war itself brought, first, increased demand by
European governments for Sperry gyrocompasses, and then the time and money
needed to bring research and development efforts to fruition.
As the conflict progressed, the
character of the war changed after the battle of the Marne stopped the German
offensive in 1914 and the Battle of Jutland brought a stand-off in the naval
war. Both the Germans and the Allies then looked to technological invention,
as well as battles of attrition, to break the deadlock.4
The relevant Sperry companies were well-informed about the need for improved
gunfire-control and navigation, well-positioned to be "the brain mill" for the
military when the United States entered the First World War.
The phrase "reverse salient" was widely used by military strategists. A
reverse salient is produced when one portion of an advancing line of battle lags much further behind than the rest of the line, leaving a pocket into which the opposing force protrudes. During the Second World War, a reverse salient was created in the Allied line by the German offensive at the Battle of the Bulge.
A reverse salient is an inherently unstable configuration. First, it invites
attack at the two points where the line protrudes, in order to enclose and
neutralize the enemy forces operating within the reverse salient. Second, since the
4 Hughes (n. 3 above), p. 202.
24


combatant operating within the reverse salient hopes to avoid becoming enclosed,
it must either retreat or rapidly advance the front in the neighborhood of the
reverse salient. In either case, tremendous energies must be expended to eliminate
a military reverse salient.
WWI provided Sperry with a matrix of guidance and control problems
within which to work and which illustrate the idea of a technological reverse
salient: that is, "a reverse salient in an advancing technological front." Sperry
improved both search-light defense and anti-aircraft gun-fire control, forcing German
planes to fly higher in order to evade defensive attack and thus rendering
bombsights less effective. Hence, a technological reverse salient was created in
which one component of aerial warfare (search-light defense) was greatly
advanced relative to another component (bombsights).
Of course, these improvements were quickly replicated by the Germans so that for a time Allied planes were also required to fly higher:
Improvements in search-light defense also affected other components of the
aerial warfare system, necessitating, for example, an improvement in
bombsights so that airplanes could fly higher and yet bomb accurately.5
Prior work in navigational and gun-fire controls made it possible for Sperry to
develop improved bomb-sights, allowing Allied planes to fly higher by
neutralizing improved German search-light defense. Thus, the reverse salient created
by improved search-light defenses was removed.
5 Hughes (n. 3 above), pp. 219-20.
25


Gun-fire control provides another example of a technological reverse
salient. Between 1912 and 1916, Sperry greatly increased the accuracy and
responsiveness of naval turret guns. As the range of such guns increased from
under 4000 yards before 1900 to about 10,000 yards in 1910 to about 20,000
yards by the end of the war, greater accuracy and responsiveness were required.
Unfortunately, these improvements also increased the effect of aiming errors.
If a gun must deliver a shell 4000 yards and has an angular error of 0.1
degrees, trigonometry reveals that the shell will be wide of its target by less than
7 yards. If, however, the range of the gun is increased to 10,000 yards, the error
increases to 18 yards. Greater range demands greater accuracy, and greater
accuracy invites the development of gunnery with greater range. Eventually,
additional advances in either range or accuracy became much more difficult to
achieve: it was not until WWII that a new system was found capable of providing
further advances.
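The arithmetic behind these figures is elementary trigonometry. For a range \(r\) and an angular error \(\theta\), the lateral miss distance is approximately

\[
d = r \tan\theta \approx r\theta \quad (\theta \text{ in radians}),
\]

so that with \(\theta = 0.1^\circ \approx 0.00175\) radians, \(d \approx 4000 \times 0.00175 \approx 7\) yards at a range of 4000 yards, and \(d \approx 10{,}000 \times 0.00175 \approx 17.5\) yards, roughly the 18 yards cited, at 10,000 yards.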
The superiority of sea-based air power was demonstrated repeatedly on the Pacific front. A carrier equipped with planes brings the gun to the target,
minimizing aiming error due to extreme range. As the explosives carried by
planes became more powerful, the battleship was reduced to playing a support
role in sea battle and amphibious operations.
This rather extended example illustrates how, because of coupling between
sub-systems, change in subsystem A (concerned with range) can generate change
in subsystem B (concerned with accuracy), generating further change in subsystem
26


A. Oscillation of the change point between the two subsystems eventually
dampens as each subsystem approaches its inherent operating limits. Major
breakthroughs can only come through a new system, possibly one in which some
of the old subsystems are completely replaced, perhaps radically altering the
coupling between the subsystems.
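The damped oscillation just described can be made concrete with a toy calculation. In the following sketch the starting performance levels, the operating limits, and the gain parameter are hypothetical values chosen for illustration, not figures drawn from Hughes; the point is only that each corrective step shrinks as a subsystem nears its inherent limit.

def coevolve(limit_a=20000.0, limit_b=20000.0, gain=0.5, rounds=8):
    """Alternate the locus of change between coupled subsystems A and B.

    Whichever subsystem lags (the reverse salient) closes a fraction
    `gain` of the gap to its inherent operating limit; because that gap
    shrinks with every correction, the oscillation dampens.
    """
    a, b = 6000.0, 4000.0  # hypothetical starting performance levels; A leads
    history = [(a, b)]
    for _ in range(rounds):
        if b <= a:  # B lags: the advance in A has exposed a salient in B
            b += gain * (limit_b - b)
        else:       # now A lags in its turn
            a += gain * (limit_a - a)
        history.append((a, b))
    return history

for step, (a, b) in enumerate(coevolve()):
    print(f"round {step}: A = {a:8.1f}   B = {b:8.1f}")

On this picture a major breakthrough corresponds not to another round of catch-up but to replacing a subsystem, that is, to raising limit_a or limit_b themselves, much as naval aviation displaced the long-range gun.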
In a historiographical essay published in 1980, Hughes brought the theme
of systemic change to the forefront in endeavoring to
suggest how ... writers [of major historical works] have used the concept of
the system to organize, analyze and draw conclusions about the history of
technology from disparate materials.6
Hughes provides a definition of "system" aimed at the historian of technology:
... a system is constituted by related parts or components. Because the parts
are related, the state, or activity, of one influences the state, or activity of
others in the system .... A system has characteristics different from those of its
components in isolation .... Control of a system is often centralized and man
often closes the loop. The managers in the Chandler systems are examples.7
Hughes provides a single example of a system, but given so abstract a definition, a variety of instances exemplify the structure. The
system concept is fruitful for the history of technology because it provides a
6 Thomas P. Hughes, "The Order of the Technological World," in History of
Technology, ed. A.R. Hall and N. Smith (London, 1980), p. 3.
7 Hughes (n. 6 above), p. 2.
27


conceptual structure that can be applied to a variety of technologies and to the
phenomenon of technological change.8
Hughes asks which technological systems merit the attention of historians
of technology:
Having had success in planning and constructing small systems like the
machines, then larger ones like the factory, man in the twentieth century has
drawn upon his empirical knowledge of these to try such monstrous systematic
endeavors as the Manhattan Project.9
Hence, the Manhattan Project is of interest not simply for the apparatus created
and used, but also for novel forms of social organization which spawned novel
mathematical and computing techniques.
Chapter 4 concerns the application of techniques and apparatus to the
management of large organizations, a line of investigation which can be traced to
The Visible Hand by business historian Alfred Chandler. There Chandler describes
the rise of the American managerial class and the change in business organization
and methods that accompanied it. Hughes observes with Alfred Chandler that
The integration of all the processes of textile production stimulated innovation
in each of the specific processes. The rapid and complex flow of materials
presented challenging problems of coordination and monitoring, problems that
8 In this Hughes is not unique, although the system perspective is less well
enunciated in Hugh G. J. Aitken, Taylorism at Watertown Arsenal: Scientific
Management in Action 1908-1915 (Cambridge, 1960).
9 Hughes (n. 6 above), p. 2. The language of Hughes recalls the apparatus-
technique-organization distinction of Langdon Winner.
28


would also bring the visible hand into other industries but several decades
later.10
If the problems of coordination and monitoring inside large organizations are
within the purview of the historian of technology, so too is the post-war
emergence of management information systems.
In 1983, Hughes published Networks of Power, which provides an explicit
model of systemic change in the evolution of electric power systems:
although the electric power systems described herein were introduced in
different places and reached their plateaus of development at different times,
they are related to one another in the overall model of system evolution that
structures this study at the most general level.11
Only the third of the five phases of the model is of interest to us:
The essential characteristic of the third phase of the model is system growth. The method of growth analysis used in this study involves reverse salients and critical problems. Because the study unit is a system, the historian finds reverse salients arising in the dynamics of the system during the uneven growth of its components and hence of the overall network. ... Having identified the reverse salients, the system tenders can then analyze them as a series of critical problems. ... An inventor or applier of science transforms an amorphous challenge, the backwardness of a system, into a set of problems that are believed to be solvable. ... When engineers correct reverse salients by solving critical problems, the system usually grows if there is adequate demand for its product. On occasion, however, a critical problem cannot be solved. ... this study offers an explanation not only of the evolution of systems as reverse salients are identified and solved, but also of the occasional emergence of new systems out of the failure to solve critical problems in the context of the old.12
10 Hughes (n. 6 above), p. 8.
11 Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore, 1983), p. 14.
12 Hughes (n. 11 above), pp. 14-15.
29


These are the key notions for comparing the theories of Hughes and Constant and
also for determining the applicability of each theory to the emergence of MIS in the
post-war world.
Technological Co-evolution
While still a doctoral student at Northwestern University, Edward Constant
II put forward a theory of technological change and claimed its applicability
beyond the turbojet.13 Because it claims that a technological revolution occurred with the development of the turbojet, Constant's work recalls for many readers Structure of Scientific Revolutions.
Because Constant recognizes the limits which Kuhn puts on Structure when he explicitly limits its scope to science, Structure is available to Constant
only as a basis for analogy. Recognizing that parallels between science and
technology may carry over to scientific and technological change, Constant
employs the Kuhnian metaphor when he defines
... a technological paradigm as an accepted mode of technical operation, the
usual means of accomplishing a technical task. ... Normal technology, like
normal science, is not static. ... It is "puzzle solving." When a technological
revolution occurs, however, the community paradigm changes. Technological
13 Edward Constant II, "A Model for Technological Change Applied to the
Turbojet Revolution," Technology and Culture 14 (1973): 553-72. This article
summarizes his dissertation work and was an early look at what became The
Origins of the Turbojet Revolution.
30


revolution is defined here only in terms of a relevant community of
practitioners and has no connotation of social or economic magnitude.14
While the central notion of Constant's theory is that of presumptive
anomaly, this thesis is more concerned with the concepts of technological
disequilibrium and co-evolution which are presupposed by the discussion of
presumptive anomaly. The concept of technological co-evolution shows the influences of philosopher Karl Popper and biologist Donald Campbell, proponents of "evolutionary epistemology," which suggests that advances in knowledge arise from the variation of behavior and the selective retention of these variations.
The scientific or technological community explores its "world," making
multiple claims about its nature. In one fashion or another these claims are tested,
so that claims which are currently "best" survive and continue to guide the
practice of the community.
Technological variation-retention is similar [to that of science], but does differ
at important points.
.... technological selection differs from scientific selection. Technological
systems directly, not vicariously, explore the environment: planes crash,
engines explode. Technological systems thus face direct elimination by an
environment unmediated by background scientific theory in a sense that
scientific conjectures do not.15
Evolutionary epistemology is a useful starting point, but is too general:
14 Constant (n. 13 above), p. 554.
15 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore,
1980), p. 7.
31


a more highly articulated, middle-level model for technological change, a
model less general than evolutionary epistemology but in no way contradictory
to it, is necessary.16
Because the community of technological practitioners is proposed as the unit of
analysis, technological paradigms, and therefore revolutions, are defined by the
behavior of a community of technological practitioners. A technological
revolution, then, is primarily a behavioral and only secondarily a cognitive
phenomenon: a revolution has occurred when "the usual means of accomplishing a
technical task changes."
What events generally precipitate such changes? Often, it is the crash of a
plane or the explosion of an engine that eliminates a form of technological
practice or at least identifies its operating limitations. In other cases,
technological disequilibrium, of either the intersystem variety (improved looms
demanded improved spinning) or intrasystem variety (improvements in one
machine component demand improvements in other components) can result in
a pressing need for change.17
Neither type of disequilibrium will be seen as a potentially soluble problem,
according to Constant, until the relevant community of technological practitioners
associates the problem with some candidate solution, conventional or radical.
Now consider technological co-evolution, key to contrasting the theories of
Hughes and Constant:
16 Constant (n. 15 above), p. 8.
17 Constant (n. 15 above), p. 13.
32


Transposed to technology [from evolutionary biology], the concept of co-
evolution implies that the development of one set of devices may be linked
intimately to the development of other devices within the same macrosystem,
and that the two sets of devices may exert powerful, mutually selective
pressure on each other. For example, the direction of both water turbine
and steam turbine development was highly responsive to the demands of
electrical power generation; the development of dynamos, in turn, was
dependent upon the characteristics of water and steam turbines.18
Technological co-evolution is a special case of technological
disequilibrium because it spells out the nature of the disequilibria:
Technological co-evolution implies, first, specificity: the direction of
development of a given technology (steam turbines) is linked to some other
specific co-evolving technology (power generation and transmission)....
Second, technological co-evolution implies a hierarchy of retentive or selective
processes. The fate of a given invention, its developmental direction, not only depends on its competition with alternative devices performing the same
or similar functions and on its co-evolution with a specific other technology,
but also depends on the evolutionary success or failure of the higher-level
macrosystems of which it is a part.19
Conclusion
If the theory of technological disequilibrium is applicable to the emergence
of MIS, then the theory of co-evolution is also applicable. By virtue of the
generality of "the systems concept" presupposed by both theories, both theories
appear to be applicable to the emergence of management information systems
18 Constant (n. 15 above), p. 14.
19 Constant (n. 15 above), p. 14.


(MIS) in the post-war years.20 Nevertheless, what are the system boundaries: are
humans part of the system? If an MIS consists of technique and social
organization as well as apparatus, then humans are an element of an MIS.21
Given the applicability of both theories and the greater generality of the
disequilibrium theory, it should be possible to account for the emergence of MIS
in terms of the theory of technological disequilibrium. If not, then neither theory
has the generality required of bona fide theories. If so, then we can go on to
consider the theory of technological co-evolution.
Since co-evolution is a special kind of disequilibrium, the theory of
technological co-evolution can be affirmed only if we can explain the evolution of
MIS in terms of (1) the mutual, reciprocal influence of (at least) two subsystems,
and (2) the presence of a hierarchy of selective-retentive processes. Absent either
factor, the theory of technological co-evolution is less adequate than the theory of
technological disequilibrium with respect to the emergence of MIS.
20 For a discussion of the dangers of using the "systems concept," see A. D.
Hall and R. E. Fagen, "Definition of System," in Modern Systems Research for
the Behavioral Scientist, ed. Walter Buckley (Chicago, 1968).
21 A current textbook regards people as a distinct and most important element
of an information processing system. Jerome S. Burstein and Edward G. Martin,
Computer Information Systems with BASIC (Chicago, 1989), pp. 36-67.


CHAPTER 4
MANAGEMENT INFORMATION SYSTEMS AND TOTAL WAR
Introduction
The United States was not prepared for the Second World War. As
General Henry "Hap" Arnold put it:
The margin of winning was narrow .... many times we were near losing, and
... our enemies' mistakes often pulled us through.1
Allied victory in Europe, for example, while certainly aided by the strategic errors
of Hitler, was equally the result of other, less glamorous factors.
After the Battle of the Atlantic, the steadily increasing economic isolation
of the Rome-Berlin axis made it difficult to obtain raw materials for
manufacturing. The German decision to withhold resources from computer and
atomic scientists helped ensure Allied first use of the atomic bomb. Superior
Allied industrial capacity resulted in an increasingly heavy aerial attack that
severely reduced both the volume and consistency of Axis industrial output.2
In the aftermath of World War II, the issue of preparedness was central to
both civilian and military defense planners. While successful and continued use
1 Third Report to the Secretary of War by the Commanding General of the
Army Air Forces (12 November 1945), in The Impact of Air Power: National
Security and World Politics, ed. Eugene M. Emme (Princeton, 1959).
2 James L. Stokesbury, A Short History of World War II (New York, 1980).


of a weapon such as a Stuka dive bomber presupposes research, development, training,
deployment, and maintenance, the development of a breakthrough weapon such as
the atomic bomb requires the allocation of resources on a significantly greater
scale. In wartime, the will exists to allocate resources on a large scale, but in
peacetime, the situation is less certain. The terrible, swift success of atomic
weapons helped sustain such massive resource allocation.
The atomic bomb was the logical conclusion of the strategic bombing
doctrine. Disagreement about the effectiveness of conventional bombing began
during World War II and continued into the Vietnam era, but the hastening of V-J
Day by atomic weapons seemed indisputable. The unforeseen side-effect was that
an era of total war had now begun: after Dresden and Hiroshima, it seemed clear
that in the next war, some populations might be sacrificed for others.
Military-industrial relations were deliberately restructured so that, at least
from the standpoint of nuclear weapons development, the United States would
never again have to depend on its enemies for survival. It is against this backdrop
that this paper considers the evolution of management information systems in the
period from 1945 to 1960.
Management information systems (MIS) collect data concerning
organizational performance and transform it into information to support
management decision-making. Typically, computers are at the heart of such
systems and permit project performance to be compared against established time
and cost schedules. The roots of management information systems can be found in


World War I, when the Gantt chart was developed for the Army by a co-worker
of Frederick W. Taylor, often dubbed the father of scientific management.
The development of MIS in the aftermath of World War II is sketched in
terms of the career of John von Neumann, a mathematician, physicist, economist,
weaponeer, and computer designer who played key advisory roles as the U.S.
prepared for total thermonuclear war. Although the major primary source for this
research is the von Neumann Manuscript Collection at the Library of Congress,
the objective of this paper is to develop a new framework for both new and old
material.
Von Neumann was a major contributor to the mathematics of solving large
systems of equations. The solution of such large systems figures equally in the
development of atomic weaponry, economic planning, and games of military
strategy. Being at the intellectual center and near the major funding sources
allowed von Neumann to play a major role in the design of several computing
machines.
One von Neumann machine was eventually used by the Navy to
computerize the Program Evaluation and Review Technique (PERT). The PERT
system is the prototypical management information system: by the early 1960s,
the use of this system was required throughout the Defense Department and its
use soon spread throughout corporate America. This chapter suggests the
intellectual influence of von Neumann in the development of management
information systems.


Von Neumann and the Bomb
As Harry Truman noted in his radio address on the eve of Hiroshima,
the battle of the laboratories held fateful risks for us as well as the battles of
the air, land and sea, and we have now won the battle of the laboratories as
we have won the other battles.3
In May of 1940, the 37-year-old, Hungarian-born von Neumann was recruited by
the Army Chief of Ordnance along with physicist I. I. Rabi, aerodynamicist
Theodor von Karman and others to form a scientific advisory committee.
During the third meeting of the scientific advisory committee on October
3rd and 4th of 1941, von Neumann met with future Nobel Laureate in chemistry
Harold Urey and staff member Robert H. Kent of the Army Ballistics Research
Laboratory (BRL) to discuss the theory of detonation. He also discussed
instrumentation for bomb ballistics with Major Leslie Simon.4
Major civilian contributors to the war effort received the Medal for Merit.
The 262 recipients included Dean Acheson, Irving Berlin, J. Edgar Hoover, Bob
Hope, and von Neumann. Harry Truman said that von Neumann:
.... by his outstanding devotion to duty, technical leadership, untiring
cooperativeness, and sustained enthusiasm, was primarily responsible for
fundamental research by the United States Navy on the effective use of high
explosives, which has resulted in the discovery of a new ordnance principle
3 Emme (n. 1 above), p. 84.
4 Von Neumann Manuscripts, Manuscripts Division, Library of Congress
(hereafter cited as von Neumann MSS).


for offensive action, and which has already been proved to increase the
efficiency of air power in the atomic bomb attacks upon Japan.5
The new ordnance principle was the "air burst principle," expressed as follows:
detonating an atomic bomb at a given height above ground produces a greater
blast effect than detonating the same bomb at ground level. Because less
fissionable material is required to achieve a given blast effect, one can achieve
"more bang for the buck."
The early work of von Neumann had consequences for research at Los
Alamos as well as for testing at Alamogordo and for Pacific deployment. On the
strength of this work, the recognition it brought, and the contacts it established,
von Neumann became an effective agent in nuclear weapons policy-making
after World War II. As military and civilian planners considered the implications
of the last war for a possible next war, the importance of pre-war work by von
Neumann began to emerge.
A Passion for Planning
As a war-weary American public began to return to isolationism, its
military, economic, and political leaders began to appreciate what such isolation
would mean. The American organization for war had not arisen spontaneously: it
5 Von Neumann to Ralph E. Duncan, 18 December 1957, von Neumann MSS.


had, in FDR's words, been "carefully thought out ... created from the top down,
not the bottom up."6
Given the demonstrated importance of a planned war economy to the
research, development, production, and deployment of advanced weaponry, it is
no wonder that American military leaders feared a return to pre-war "normalcy."
America's wartime leaders were captive of a "profound fear, mounting to almost
an obsession, of ... a revived isolationism after the war."7
Perhaps it was in this spirit that Charles E. Wilson of General Electric
(later director of the Office of Defense Mobilization) declared in 1944 that
the nation needed a
"permanent war economy."8 In the spring of 1946, the Army circulated a
memorandum treating scientific and technological resources as specifically
military assets, enunciating a view of science and technology that had emerged
6 Kenneth S. Davis, ed., Arms, Industry and America (New York, 1971), p. 4.
7 Richard M. Freeland, The Truman Doctrine and the Origins of McCarthyism
(New York, 1972), p. 25.
8 Sidney Lens, "The Military-Industrial Complex," in Arms. Industry and
America, ed. K. Davis (New York, 1971), p. 61. Some critics of the military-
industrial complex take it as axiomatic that such a statement originates solely in
economic self-interest. Such critics should be reminded that war work was not
always profitable. National Cash Register, after important wartime cryptographic
work, opted out of Government work after the war because of the restrictions
associated with it and limited opportunities for profit.


just prior to World War I and continued through the next war via the Office of
Scientific Research and Development.9
Recalling the Manhattan Project, the so-called Eisenhower Memorandum
stated that
(1) The army must have civilian assistance in military planning [emphasis
added] as well as for the production of weapons ....
(4) Within the army we must separate responsibility for research and
development [emphasis added] from the functions of procurement, purchase,
storage, and distribution.
By developing the general policies outlined above under the leadership of the
Director of Research and Development the army will demonstrate the value it
places upon science and technology and further the integration of civilian and
military resources [emphasis added].10
Budgeting was one aspect of military planning which was of particular
interest after the war. In 1948, government procedures required the Air Force to
estimate budgets about two and a half years in advance, and to respond quickly but
systematically to Congressional changes. Although military and civilian planning
organizations such as the War Production Board grew to enormous size during the
war, budgeting was still a very time-consuming process. During the war, the
Army Air Staff created a program monitoring function which began with a war
9 Concerning the emergence of institutional forms of cooperation between
science, technology, industry, and the military, see the discussion of the Naval
Consulting Board and the National Research Council in Thomas P. Hughes, Elmer
Sperry: Inventor and Engineer (Baltimore, 1971), pp. 244-50.
10 Davis (n. 6 above), pp. 73-76.


plan and derived from it requirements for training, supply, etc. Even with this
improvement, it took approximately seven months to complete the process.
After the war, staff members of the Air Force Comptroller foresaw that
efficiently coordinating the energies of whole nations in the event of a total
war would require scientific programming [that is, planning] techniques.
Undoubtedly this need had occurred many times in the past, but this time there
were two concurrent developments that had a profound influence: (a) the
development of large scale electronic computers, and (b) the development of
the inter-industry model.11
The inter-industry model permits a quantitative assessment of the ability of the
U.S. economy to support expenditure levels needed for preparedness.
Although electronic computers and the inter-industry model developed to
some degree independently of one another, there was significant interaction. The
inter-industry model was "pulled"'by the development of large computing
machines, while the development of these machines was partially "pushed" by the
computational requirements of the inter-industry model. We consider first the
development of the inter-industry model before turning to the development of
large-scale computing machines.
The Search for Optimum War
As chief of the Air Force's combat analysis activity, Dr. George Dantzig hoped to apply
"scientific" techniques to the planning process. In 1947
11 George B. Dantzig, Linear Programming and Extensions (Princeton, 1963),
pp. 14-15.


... [the mathematicians von Neumann and Dantzig] conceived of... developing
a set of linear inequalities that would reflect the relationship between various
Air Force activities and the items that were consumed in the military
environment... presentations were made to General Rawlings and much of the
Air Staff, and they sold the concept of what they called Project SCOOP,
which was the Scientific Computation of Optimum Programs.12
Linear programming is the basis for the "scientific computation of
optimum programs," where programming is the allocation of scarce resources to
competing activities. Associated with each allocation pattern is a "cost": linear
programming (LP) seeks programs that minimize "cost" or maximize "benefit."
The following example describes the use of LP.
Suppose prototypes of weapon systems A and B have been developed and
the quantities of each system to be manufactured must now be decided. Let the
variable x (y) represent the number of units of system A (B) to be manufactured.
Given information about the costs associated with each unit of each system, the
problem is to determine the appropriate values of x and y.
Suppose that system A requires 1 hour of training and 3 hours of
maintenance per week, while system B requires 2 hours of training and 4 hours of
maintenance. Suppose also that system A has an effectiveness rating of 50,
whereas system B has a better rating of 80. Suppose that during any week, 32
hours are available for training and 84 hours are available to perform
maintenance.
12 Edward Dunaway, interview by Cadet James R. Luntzel, 22 June 1973,
Call Number K239.0512-935, transcript, United States Air Force Oral History
Program, p. 33.


Determining how many units of each system to manufacture, even for this
simple yet realistic example, is difficult since there are numerous variables to
consider: training cost, maintenance cost, mission effectiveness, and available time
for training and maintenance. The problem can be stated in the customary form of
linear programming as follows: maximize total effectiveness E = 50x + 80y,
subject to the constraints x + 2y <= 32, 3x + 4y <= 84, and x, y >= 0.
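The arithmetic can be checked mechanically. The following sketch, in a modern
programming language (Python) using the linprog routine of the SciPy library, a
tool that of course postdates Project SCOOP, solves exactly the problem stated
above; it is offered only as an illustration of what "scientific computation of
optimum programs" amounts to:

    # Maximize E = 50x + 80y subject to
    #   x + 2y <= 32   (training hours per week)
    #   3x + 4y <= 84  (maintenance hours per week)
    #   x, y >= 0
    from scipy.optimize import linprog

    # linprog minimizes, so the objective coefficients are negated.
    result = linprog(
        c=[-50, -80],
        A_ub=[[1, 2], [3, 4]],
        b_ub=[32, 84],
        bounds=[(0, None), (0, None)],
    )
    x, y = result.x
    print(round(x), round(y))   # 20 units of A, 6 units of B
    print(-result.fun)          # total effectiveness E = 1480

The optimum manufactures 20 units of system A and 6 units of system B, for a
total effectiveness of 1480; both the training and the maintenance constraints
are exactly exhausted.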
Problems of personnel assignment, the blending of materials, product
distribution and transportation, and investment portfolio management can all be
solved via linear programming (LP). LP was most fully articulated by the
mathematician Dr. George Dantzig: nevertheless, "credit for laying the
mathematical foundations of this field goes to John von Neumann more than to
any other man."13
Interest in the interaction between Air Force projects and the civilian
economy was not limited to budgetary matters. Motivated by work begun in 1936
at the Bureau of Labor Statistics, Dantzig and von Neumann sought mathematical
generalizations of the input-output (I-O) models of the economist Wassily
Leontieff. The simplest example of an input-output model involves an electric
"industry" and a water "industry."
The electric "industry" produces electricity; the water "industry" produces
(provides) water as output. The electric "industry" uses both electricity and water
13 Dantzig (n. 11 above), p. 24.


as input in order to produce more electricity; the water "industry" uses both
electricity and water to produce (provide) water as its output. Given the demands
of each "industry" for its own output and for the output of the other "industry,"
input-output analysis determines the minimum output required of each "industry"
in order to satisfy total demand. Total demand is the sum of (a) internal demand
(by the two "industries") and (b) external demand (by any other industries).14
The mathematical generalizations of von Neumann and Dantzig made it
possible to formulate input-output (I-O) models for problems involving hundreds
of industries.15 The Air Force went on to support more applied work on
Leontieff-type inter-industry models, thereby reflecting progress on both the
mathematical and computational fronts. For example, in 1951, Leontieff used the
I-O model to study the interactions of 500 sectors of the American economy.
Leontieff won the 1973 Nobel prize in economics largely due to the impact of
input-output analysis on economic planning in industrialized countries.
LP techniques and inter-industry models made it possible to plan for total
war, hot or cold. For the newly formed Air Force, with the mission to deliver
atomic weapons to their targets, LP was an essential tool:
... [there was] a lot of work being done in looking at the deployment of targets
that we would want to attack; backing off from the targets that we needed to
14 This brief description is based on Raymond A. Barnett, College
Mathematics for Management, Life, and Social Sciences (San Francisco, 1981),
pp. 160-65.
15 Dantzig (n. 11 above), p. 18.


attack, through a complete war plan back to the requirements on the civilian
economy ... and you then back off from that to your training structure and
your logistics structure. There is a continuous string that you could put
together mathematically to show that, in order to fight this kind of war you've
got to have these kinds of resources from the civilian economy.16
But war is a matter of strategy as well as linear programming and input-output
analysis: here too, von Neumann was a principal player.
In 1944, von Neumann and Oskar Morgenstern published The Theory of
Games and Economic Behavior in which decision-making is studied via a formal
calculus. Using game theory, one can determine the "optimum" choice of action
given a description of the options available to "rational" players. This work
generated considerable interest among strategists and, more surprisingly, among
planners. Interest among planners is partly explained by noting that certain linear
programming problems are formally equivalent to von Neumann-Morgenstern
games of strategy. This implies that certain linear programming problems can be
solved as games, and that certain games can be solved via linear programming.
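The reduction can be made concrete. For a zero-sum game whose payoff matrix M
(to the row player) has all positive entries, a standard construction minimizes
the sum of auxiliary variables u subject to M^T u >= 1; the value of the game
is then 1 divided by that sum, and the optimal mixed strategy is u rescaled by
the value. A sketch follows, with a payoff matrix invented for illustration:

    import numpy as np
    from scipy.optimize import linprog

    # Payoff matrix to the row player; a constant can always be added
    # to make every entry positive without changing the strategies.
    M = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Minimize sum(u) subject to M^T u >= 1; u >= 0 is linprog's default.
    res = linprog(c=np.ones(M.shape[0]), A_ub=-M.T, b_ub=-np.ones(M.shape[1]))

    value = 1.0 / res.x.sum()    # value of the game (here 2.5)
    strategy = res.x * value     # optimal mixed strategy (here 1/4, 3/4)
    print(value, strategy)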
There are at least two threads tying together nuclear weapons research,
economic planning, and military strategy. One thread is the common formalism
necessary (but not sufficient) to efficiently solve large numbers of simultaneous
linear equations. The second thread is the digital computer, which provides the
means for transforming the mathematical possibility of solution into a practically
16 Dunaway (n. 12 above), pp. 6-7.


achievable solution. At the dawn of the "computer age," von Neumann wove these
threads together.
At the Dawn of Electronic Computing
Computing machines using both electrical and mechanical computing
devices were developed and used during the war, but the Electronic Numerical
Integrator and Calculator (ENIAC) was the world's first general-purpose
electronic digital computer.17
(BRL) commissioned the construction of the ENIAC by the Moore School of
Engineering at the University of Pennsylvania. Von Neumann first became
associated with the Moore School group in the second half of 1944, when he was
appointed consultant on the EDVAC, the successor machine to the ENIAC.
Von Neumann's involvement in the Manhattan Project beginning in late
1943 alerted him to the need for improved methods of calculation to support "high
explosive" research at Los Alamos. Previous experience with shock wave
phenomenon made him a natural contributor to solving the problem of achieving
an atomic explosion: to produce the desired "chain reaction," conventional
material would be detonated, producing a blast pushing inward simultaneously on
all points of an encased sphere of plutonium.
17 The Mark series of electro-mechanical computing devices was
commissioned by the Office of Naval Research and developed during the war by
Harvard University and IBM.


According to Herman Goldstine, friend, admirer and collaborator of von
Neumann on the ENIAC, and subsequent employee of IBM:
[Although James L. Tuck and] von Neumann invented an ingenious type of
explosive lens that could be used to make a spherical wave ... von Neumann's
main contribution ... was ... in showing the theoretical people how to model
their phenomenon mathematically and then to solve the resulting equations
numerically.18
Mathematical models of physical phenomena were created and solved via the
digital computer, allowing the likely effects of various experimental configurations
to be considered without the time and cost needed to build and test each
configuration.19
His status as "expert" followed from his role on the Manhattan and
ENIAC/EDVAC projects and made von Neumann valuable to those in the
military/governmental bureaucracy interested in the combination of new weaponry
and computational power. By April 1945, von Neumann was consulting to both
the theoretical group in high explosives and the Applied Mathematics Panel of the
Navy Bureau of Ordnance. The high explosives group drew on both the
18 Herman H. Goldstine, The Computer from Pascal to von Neumann
(Princeton, 1972), p. 181. "A punched card laboratory [consisting largely of IBM
punched card equipment] was set up to handle the implosion problem, and this
later grew into one of the world's most advanced and largest installations."
19 The experimental configurations that were built made it possible to: (1) test
the mathematical models of physical phenomena; (2) calibrate the technological
theories describing the behavior of specific experimental configurations. These
were necessary intermediate steps toward the development of a deployable
weapon. On the notion of technological theories, see Robert Freidel, The Making
of American Industrial Research: Science and Business at GE and Bell, 1876-1926
(New York, 1985), pp. 205-8, 250.


computing capacity and the courtesy of several external agencies including the
Applied Mathematics Panel, Harvard University, Bureau of Ships, and the
Tabulation Division of the Bureau of Ordnance.
Von Neumann was engaged by the Applied Mathematics Panel to perform
a "survey of computing machines and services," and was requested to investigate
and recommend computing services that would be adequate if the Bureau of
Ordnance were to establish machines and equipment for computing.20 In
wartime, cost was not the issue, but speed and volume were, and newer computing
equipment promised an order of magnitude increase in the number of gunnery
firing and bombing tables which could be generated. In noting that the high
explosives group "has need of computing services totalling a large number of man
[emphasis added] hours," the chief of the Panel implied the greater efficiency of
machine rather than human computers.21
New Applications of Computing: A Watershed
The March 1947 enunciation of the Truman Doctrine effected what former
Secretary of State Dean Acheson termed "a complete revolution in American
foreign policy."
Prior to March 1947 the prevailing attitudes of the country toward
international affairs were controlled by opinions formed during the war -
20 J, A. E. Hindman to von Neumann, 24 April 1945, von Neumann MSS.
21 Goldstine (n. 18 above), p. 138,


optimism about the post-war period, belief that great-powers cooperation and
particularly U.S.-Soviet cooperation could be preserved ....
In March 1947 public reaction against war and armaments had placed the
Defense Department on the defensive, and [the Republican] Congress was
considering the Defense budget only to reduce it.22
One aim of the Truman Doctrine was to remove domestic political
impediments to the establishment of the Marshall Plan. The Plan was to create the
international market needed to prevent the collapse of the now vital U.S.
economy. The detonation of a Soviet atomic bomb in September 1949 seemed to
confirm the international Communist threat and reinforce the view that economic
strength would be needed to meet it.
The end of American monopoly on atomic weapons closed the
Oppenheimer-Teller debate over thermonuclear weapons and opened a whole new
vista for computing. From the necessity for thermonuclear weapons followed a
requirement for computational power on a much larger scale than previously
needed even for atomic weapons. Von Neumann was to play a central role in the
securing of additional computational and nuclear power.
As "father of the digital computer," von Neumann coaxed the Institute of
Advanced Studies (IAS) at Princeton into funding, with the Army Ordnance
Bureau, what became known as the IAS computer. This machine exercised
enormous influence over the first through third generations of computers. Only in
22 Freeland (n. 7 above), p. 8.


the 1980s have radically different computer architectures been offered
commercially.23
November 1949 found von Neumann the computer designer along with 82
others in Endicott, New York, at an IBM sponsored "Seminar on Scientific
Computation." A watershed in the history of computing, this seminar marked the
beginning stages of large scale diffusion of computing into American life. The
services, national laboratories, industrial contractors, and academics were all
represented at the Seminar.
The Department of the Army, funding source for both the atomic bomb
and the ENIAC, was represented by the Ordnance Department of the BRL and by
a physicist in the Operations Research Office at Johns Hopkins University.24 The
Navy, which sponsored von Neumann's early work on "high explosives," sent Dr.
Mina Rees, Director of the Mathematical Sciences Division, Office of Naval
Research.25 Although the Air Force was not in attendance, its creation, the
RAND (for Research and Development) Corporation was represented by Herman
23 R. W. Hockney and C. R. Jesshope, Parallel Computers 2: Architecture,
Programming and Algorithms (Bristol, 1988), pp. 2-53.
24 "Operations Research" is an umbrella term which covers game theory, linear
programming, and a host of other techniques initially developed in a military or
governmental setting.
25 The Office of Naval Research became a substantial source of support to
university researchers in the post-war era, especially in the areas of game theory,
linear programming, and the solution of systems of linear equations.


Kahn, who eventually became famous for his book On Thermonuclear War and as
a subject for parody in the 1964 movie Dr. Strangelove.
RAND is an offshoot of the Douglas Aircraft Corporation, so it is not
surprising that other aircraft companies were also represented. RAND was
unusual, however, in the intensity of its regard for von Neumann:
... members of the Project with problems in your line (i.e., the wide world)
could discuss them with you ... We would send you all working papers and
reports of RAND which we think would interest you, expecting you to react
(with frown, hint, or suggestion) when you had a reaction. In this phase, the
only part of your thinking time we'd like to bid for systematically is that
which you spend shaving: we'd like you to pass on to us any ideas that come
to you while so engaged.26
The National Laboratories, heavily involved in the development of
thermonuclear weapons, were strongly represented. Dr. Alston Householder had
done much to further the analysis of large systems of equations and attended the
seminar as Chief of the Mathematics Panel at the Oak Ridge National Laboratory
(ORNL) in Tennessee.
Carbide and Carbon Chemicals Corporation (to become Union Carbide) could
count two current and one former employee in attendance. Union Carbide
was under contract to the Atomic Energy Commission to manage ORNL, which
used some of the electrical energy from the Tennessee Valley Authority to
perform work connected with nuclear energy. Dr. Cuthbert Hurd, by then an IBM
employee, had previously received a letter from von Neumann expressing the
26 John D. Williams to von Neumann, 16 December 1947, von Neumann
MSS.


willingness of the latter to consult for Oak Ridge. The letter also described a
process of generating sequences of so-called random numbers, a process important
in the simulation of nuclear reactions.27
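Whether the process described in the letter was von Neumann's "middle-square"
method cannot be determined from the correspondence alone, but that method,
which he advocated in this period, illustrates the kind of procedure at issue:
square the current value and take its middle digits as the next value. A
minimal sketch of the four-digit version:

    def middle_square(seed, count):
        # Von Neumann's middle-square method: square the current
        # four-digit value, pad the square to eight digits, and keep
        # the middle four. The sequences eventually cycle or collapse
        # to zero, one reason the method was soon superseded.
        x = seed
        values = []
        for _ in range(count):
            x = (x * x) // 100 % 10000   # middle four of eight digits
            values.append(x / 10000.0)   # scale into [0, 1)
        return values

    print(middle_square(1234, 5))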
The majority of the other seminar participants were physicists, chemists,
and mathematicians with academic affiliations. Some governmental workers were
present, including Dr. John H. Curtiss, Chief, National Applied Labs, National
Bureau of Standards (NBS). Curtiss later ran the NBS Applied Mathematics
Laboratory which investigated the mathematics of solving large systems of linear
equations, both for physical and linear programming applications.28
Computer Design and National Defense
In the months after the IBM Seminar, von Neumann became increasingly
concerned with computational power. As director of the IAS Computer project,
von Neumann also evaluated several other computers. In January 1950, he
received a letter requesting an evaluation of the MADDIDA, a small machine
devised by Northrop Aircraft in connection with its development of guidance and
control for the SNARK missile system. In the opinion of Northrop:
27 Hurd to von Neumann, 10 December 1948, von Neumann MSS. Simulation
requires sequences of statistically random numbers termed random variates. Valid
statistical inference requires increasing the length of the sequence, while fast
simulation requires reducing the length of the sequence or the generation time.
28 S. W. Dunwell of the IBM Future Demands Department was present for
obvious reasons. List of Attendees, IBM Corporation, date unavailable, von
Neumann MSS.


computers of such [small] sizes and capacities [the MADDIDA was claimed to
be equivalent to a Univac] would be invaluable not only to the Air Forces but
to the field of mathematics and research in general....Recognizing your
unusually high standing in your profession and in the esteem of the Air
Forces, we should like you to visit our plant on a consulting basis, view this
equipment and review the principles involved.29
When von Neumann did not go to California to review the MADDIDA,
Northrop engineers boxed up the small machine and flew to Princeton. After
several days of examining the machine, discussing its principles, and even
programming it in a Princeton hotel room, von Neumann gave the machine high
praise:
... your magnetic differential analyzer is a most remarkable and promising
instrument.... you have established the principles of a whole family of very
new and most useful instruments.
Your equipment ... seems to me more interesting as a basis for a family of
special purpose machines to deal with matrix problems.... [Among these are]
Performing linear transformations on n variables, solving n linear equations in
n variables,... solving the problems of game-strategy and of linear
programming of order n, all of this for values of n of the order of 100 and
even higher.
Solution of problems of this last class [linear programming] will be of great
importance, and may well be decisive in certain phases, for enterprises like
Project SCOOP ... and Project RAND.30
The week before, von Neumann was asked by the Naval Ordnance Bureau
to evaluate a proposal for a computing machine to be built by IBM called the
29 Northrop to von Neumann, 26 January 1950, von Neumann MSS.
30 John von Neumann, "The von Neumann Letter," Annals of the History of
Computing 9 (1988): 357-68.


Naval Ordnance Research Calculator (NORC). Von Neumann offers the following
caveat:
At our conference I emphasized the circumstances which may limit the
validity of my judgement on the matter in question. I understand that you are
entirely aware and appreciative of these things, and that you want me to give
you my views nevertheless. I am doing this in what follows and leave the
evaluation to you.
Having worked for the Bureau for a decade on problems involving weapons
systems and the computations needed to develop them:
I shall enumerate some particularly outstanding subjects which are integral
parts of the Bureau of Ordnance activities, and indicate how high speed
calculation will contribute essentially to the Bureau work in these subjects.31
Von Neumann enumerates the six subjects of aerodynamics, hydrodynamics,
elasticity and plasticity, high explosives, missile design, and finally, atomic
weapons and motors. The effects of blast waves on solid structures, whether
surrounded by air or water, fall under the heading of Elasticity and Plasticity, but
are not of interest here. The other subjects show the linkage between
computational and destructive power that was being forged.
Aerodynamics concerns the nature of air flows, of obvious importance to
airplane and missile design. According to von Neumann, ordinary problems
already taxed the capacity of the ENIAC and its electro-mechanical kin, the IBM
MARK III and the IBM Selective Sequence Calculator.
31 Von Neumann to Dr. Richard S. Burington, 19 January 1951, von Neumann
MSS.


Turning to hydrodynamics, von Neumann observes that underwater blast
phenomena, such as those generated by depth charges, present even greater
complications, making the use of high-speed computing devices even more
necessary. Perhaps these complications arise also in the launching of an
underwater missile such as Polaris.
The effects of explosive shape upon weapon effectiveness are important to
both nuclear and conventional weaponry, and here von Neumann claimed that
"very complicated" calculations were required, von Neumann was also looking
toward the Polaris in the mid-fifties:
Missile design would be greatly advanced if extensive simulation by
calculation were possible. In other words, if for any given set of aerodynamic
properties, control and steering element characteristics, communications system
and noise level characteristics, the performance of such an hypothetical missile
could be calculated in a variety of relevant situations. ... Such problem setups
can lead into extremely intricate analytical and combinatorial discussions, also
involving a wide variety of mathematically very difficult questions concerning
the target tactics and countermeasures.
The savings and the acceleration that will be achieved in the missile field,
when such techniques can be routinely used, are evident: Most missile designs
can then be tested by mathematical simulation on computing machines, and
only a few critical and especially promising ones, selected on the basis of the
computational simulations, will have to be carried on into the hardware
stage.32
This proposal, almost a manifesto in form, expresses one key lesson of the
Manhattan Project: simulation via computation speeds development by allowing
scientists to reduce the number and complexity of experiments required.
32 Von Neumann to Burington (n. 31 above).


The development of the NORC was not driven by ordinary market forces:
This proposal [by IBM] does probably not represent the quickest and
cheapest way to acquire a very high speed computing machine. It corresponds
to a program of proceeding to the next stage beyond the present one and to
obtain by a considerable effort a machine which is likely to have its peak
usefulness toward the middle 1950s.33
Here von Neumann was undoubtedly considering the uses of NORC in the design
of the Polaris submarine system.
On June 19th, von Neumann corresponded with the director of the Office
of Naval Research, concerning several mathematical questions which "might be
profitably considered in the context of the 1951 tests," principally the decay of
blast waves. Here von Neumann reiterated his view that the
... decay of a spherical blast wave could be calculated with any one of a
number of machines which are available today. The ENIAC could certainly do
it, but less fast machines could also be used for this purpose. In any case, this
calculation is badly needed.34
The more realistic case involved what von Neumann antiseptically referred to as
the decay of an initially spherical blast wave in a vertically stratified
atmosphere. This problem is clearly more difficult, since it has only cylindrical
symmetry .... For the cylindrical problem, the ENIAC is probably adequate,
but I think that a machine with more limited characteristics would not do.35
33 Von Neumann to Burington (n. 31 above).
34 Von Neumann to Rees, 19 June 1951, von Neumann MSS.
35 Von Neumann to Rees (n. 34 above).


The simpler calculation may have been needed to test preliminary work by von
Neumann collaborator Stanislaus Ulam suggesting that the then current design for
the hydrogen bomb would produce a fizzle.
While Ulam worked with a desktop calculator at Los Alamos, a team at
the Army Aberdeen Proving Ground ran a test on the newly installed ENIAC,
reaching the same negative result.
All the work hitherto done on the Super had been, in [Edward] Tellers
own words, "nothing but fantasies. It had to be started all over again. Had the
preliminary measurements themselves, upon which the calculations had so far
been based, in fact been accurate? One could find out only by testing them
afresh in an actual trial. If practical results were to be obtained, much more
precise observations would have to be taken in the new test than in any
previous undertaking in the atomic-armaments field. Instruments of hitherto
unknown speed and precision were essential. Cameras would have to take
thousands of photographs in the fraction of a minute. A system of signals
would be necessary to relay their "experiences" to a distant control before they
themselves were destroyed by the force of the explosion.36
The construction of this "system of signals" was a mammoth undertaking which
affected the fields of photography, electronics, and computing.
By March 6, 1952, von Neumann had already begun to consult with IBM
on the development of the NORC. The IBM point of contact, Dr. L. H. Thomas,
wrote a note summarizing an earlier conference in which several NORC design
issues were discussed. Von Neumann provided clarification to Thomas on several
points and in the process demonstrated several design principles in use today.
36 Robert Jungk, Brighter Than a Thousand Suns (New York, 1958), p. 293.


However absorbing these design activities might have been, von Neumann
was drawn back to the source of his authority, becoming increasingly involved in
the development of a new weapons system which promised a more effective
means of delivering nuclear weapons to their targets.
Von Neumann and the Weather
As the Korean War seemed to underscore the intent of Soviet communism
to establish world domination, the Air Force increased its reliance on the
Scientific Advisory Board (SAB) to the USAF Chief of Staff. In April of 1951,
the SAB was chaired by Dr. Theodor von Karman, with whom von Neumann had
served on the Army SAB back in 1941. Von Neumann was recommended for
SAB membership by Robert Kent, the Chairman of the Explosives and Armament
Panel, with whom von Neumann had worked at the BRL and on the ENIAC.
Things moved rapidly: von Neumann was invited to participate in the meeting of
the Board on April 13th 1951 and responded affirmatively by mail on the 25th.
On May 1st, he was invited to attend the reception for the Board given by Air
Force General Hoyt Vandenberg on the evening of May 9th, "which many
Department of Defense personalities will attend."37
Both the Army and the Navy were interested in von Neumann's Princeton
meteorological prediction project. Although he had been active in the field since
37 Letter from D. L. Putt, Major General, USAF, Military Director, Scientific
Advisory Board, to von Neumann, 1 May 1952, von Neumann MSS.


1948, von Neumann's involvement now deepened and in mid-May, he wrote
concerning the electronic computing demonstration provided by the meteorological
group at Princeton to the Air Force Geophysical Research Panel. Von Neumann
indicated his eagerness to discuss the theory of meteorological prediction during
an upcoming visit to Washington.38
In June 1952, von Neumann wrote to Nobel Laureate I. I. Rabi concerning
the possible connection between tornadoes in the East and A-bomb tests in
Nevada. After assuring Rabi that the power of any A-bomb tested was too
insignificant to account for even an ordinary weather front, let alone a tornado,
von Neumann notes in passing that "the U. S. Weather Bureau is preparing maps
for all the relevant A-bomb clouds," presumably either for prediction purposes or
simply to allow subsequent study of any effects following in the path of such
clouds.39
At its next full meeting, the SAB was asked to produce a "Toward New
Horizons" study considering "the trend capabilities of those technical areas which
will contribute most to the development of Air Force equipment in the next ten
38 Von Neumann to USAF Major General Craigie, 16 May 1952, von
Neumann MSS. Von Neumann was in town to attend a meeting of the General
Advisory Committee of the Atomic Energy Commission.
39 Von Neumann to Rabi, 23 June 1952, von Neumann MSS. Air Force
Global Weather is across the street from Offutt Air Force Base in Omaha, home
of the Strategic Air Command.


plus years." Put otherwise, it was important to foresee the benefits of emerging
technologies in order to avoid any technology gap:
We have a growing apprehension that our conventional process of allocating a
large portion of our R&D effort to obtaining incremental [emphasis added]
advances and improvements in available weapons systems may well lead in
time not to qualitative superiority but to qualitative mediocrity, when measured
by the only realistic qualitative index: comparison with the competitive
weapon system.40
On 13 December 1952, von Neumann received a letter from the Ramo-
Wooldridge Corporation (now TRW) confirming his membership on the Strategic
Missiles Evaluation Committee (also known as "Project Teapot").41 Within six
months, the Secretary of the Air Force was expressing personal thanks to von
Neumann for the work of "your committee," noting that the
.... quality and effectiveness of our atomic strength must depend in large
measure upon a continuing close working relationship between the military
and American science. In this connection, I am aware of the heavy demands
being made on you by various agencies of the government.42
In July of 1953, Trevor Gardner, formerly of the Strategic Missiles
Evaluation Committee, wrote concerning the formation of a Scientific Advisory
Group (SAG), chaired by von Neumann, to advise the ATLAS ICBM
development project under the direction of Brigadier General Bernard A.
Schriever. Its membership would include Dr. Herbert F. York, eventual critic of
40 Craigie to D. L. Putt, 24 November 1953, von Neumann MSS.
41 Ramo-Wooldridge to von Neumann, 13 December 1952, von Neumann
MSS.
42 Harold Talbott to von Neumann, 12 April 1952, von Neumann MSS.


U.S. nuclear strategy; Dr. Norris Bradbury, successor to Oppenheimer as Director
at Los Alamos; Dr. George Kistiakowsky of Harvard, first Presidential Science
Advisor; J. B. Wiesner of MIT, also to become Science Advisor; and,
interestingly, Charles A. Lindbergh.43
Von Neumann's involvement with the ATLAS SAG, as important as it
became, did not preclude work on electronic digital computers and meteorology.
Von Neumann and Cuthbert Hurd discussed a statement drafted by the latter
describing the nature of some meteorological calculations to be performed on an
IBM 701 and acknowledging that the IAS machine had neither the speed nor the
memory for the calculations required. The next step beyond the capabilities of the
IBM 701 would be to extend prediction from less than a week up to a month,
with correspondingly greater amounts of computation required.44
This next step came two and a half years after von Neumann discussed the
NORC design with IBM. The NORC was unveiled in New York on 12 December
1954 amid pomp and circumstance as von Neumann spoke after the luncheon,
putting the NORC into perspective. The NORC appeared to be faster than the
IBM 701 by a factor of five, a capability required for advanced meteorological
computations. Although considerable NORC time would be devoted to ballistics
43 Letter entitled "ATLAS," undated, von Neumann MSS.
44 Von Neumann to Hurd, 8 November 1954, von Neumann MSS.


calculations, von Neumann noted some less familiar applications that were
becoming increasingly important:
NORC, being a machine with a very high speed, a rather large memory, and a
very exceptional capability to ingest data, is clearly suited for problems where
large amounts of material have to be processed. Therefore one must ask:
Where do scientific calculations require large amounts of data?45
Many physical problems require considering (at least) the three dimensions
of ordinary space: in many parts of geophysics, notes von Neumann, it is very
difficult to omit any dimension. The three main areas of geophysics involve water,
earth and, of particular importance here, the air:
We know that calculations of meteorological forecasts for longer periods, like
30 to 60 days, which one would particularly want to perform, are probably
possible but that one will then have to consider areas that are much larger than
the United States.... with the best available modern computing machines it is
still a very large problem, and when one deals with a new problem one must
solve it a few dozen times "the wrong way" before one gradually finds out by
trial and error, and by coming to grief many times, what a reasonably "good
way" is. Consequently, one will simply not do it unless one can obtain
individual solutions quite rapidly.46
After considering the expected uses of the NORC, von Neumann notes the
"statistical analysis of complicated situations," mentioning first the use of fast
computing machines to exclude infeasible weapons designs. There are also
45 John von Neumann, "The NORC and Problems in High Speed Computing,"
in Papers of John von Neumann on Computing and Computer Theory, ed. W.
Aspray and A. Burks (Cambridge, 1986), pp. 350-59.
46 Von Neumann (n. 45 above), p. 354. Although von Neumann had relatively
little to say about the water and the earth, consider the hydrodynamic calculations to
model the underwater launch of a ballistic missile from a submarine.


complicated processes which are not exactly mechanical, like large-scale
operations involving an organization, involving many units, e. g. combat
operations, or more simple logistical operations, where one can perform an
analysis in this way.47
Von Neumann reiterates the advocacy of computational simulation found in his
early evaluation of the IBM NORC proposal:
One can, with a certain choice of parameters, go through the calculation, say a
hundred times, each time assuming the accidental factors differently,
distributing them appropriately, and so in hundred trials obtain the correct
statistical pattern. In this way one can evaluate ahead of time by calculation
how those parameters should be chosen, how the decisions should be made -
or at least obtain a general orientation on these.48
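The procedure von Neumann describes is what is now called a Monte Carlo
experiment: repeat a calculation many times, drawing the "accidental factors"
at random on each trial, and read the statistical pattern off the accumulated
results. A schematic sketch, with an invented success criterion standing in
for the weapon-system calculation:

    import random

    def one_trial(rng):
        # Invented stand-in for one pass through the calculation, with
        # two "accidental factors" drawn afresh on every trial.
        noise = rng.gauss(0.0, 0.10) + rng.gauss(0.0, 0.05)
        return noise < 0.12             # illustrative success criterion

    rng = random.Random(1954)
    trials = [one_trial(rng) for _ in range(100)]  # "say a hundred times"
    print(sum(trials) / len(trials))    # estimated probability of success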
But now von Neumann adds a new dimension:
No one in planning a research program, or an investigation, likes to commit
himself to a particular plan which may not work, if this involves tying up the
whole organization for a half year. If, on the other hand, one can do these
things [simulations] rapidly, one will be bolder and find out more quickly how
to do it.
Similar considerations apply to various other calculations involved in planning
and programming; those of you who know the concepts of "linear
programming" and "non-linear programming" and various other forms of
logistic programming will know what I mean.49
Thus von Neumann now sees computers used as instruments of planning and
control.
47 Von Neumann (n. 45 above), pp. 356-57.
48 Von Neumann (n. 45 above).
49 Von Neumann (n. 45 above).


In view of his myriad contributions to the services, it is hardly
surprising that the letter notifying von Neumann of his reappointment to the
USAF SAB stated that
... you and the technical community you represent hold in your hands and
minds the technical aspects of the new and greater Air Force capabilities
which must come into being in the years ahead. Now, more than ever before,
we in the Air Force must look to you for guidance and help.50
Von Neumann would barely live out his term on the SAB: six months after his
appointment to the Atomic Energy Commission, von Neumann was diagnosed as
having inoperable cancer.51
Von Neumann's normally pressing schedule began to lighten as he declined
new consulting roles and terminated existing ones. Von Neumann wrote to William
Shockley, co-inventor of the transistor, and then Director of Research for the
Weapons Systems Evaluation Group (WSEG) of the Office of the Secretary of
Defense, to ask that his (von Neumann's) role in the WSEG be terminated.52
The Air Force insisted at von Neumann's Senate confirmation hearings that he
retain chairmanship of the ICBM Scientific Advisory Group.53
50 Nathan F. Twining, USAF Chief of Staff, to von Neumann, date
unavailable, von Neumann MSS.
51 Clay Blair Jr., "The Passing of a Great Mind," Life Magazine. 8 February
1957.
52 Von Neumann to Shockley, 15 April 1955, von Neumann MSS.
53 Claude J. Johns, Jr., "The United States Air Force Intercontinental Ballistic
Missile Program, 1954-1959: Technological Change and Organizational
Innovation" (Ph.D. dissertation, University of North Carolina, Chapel Hill, 1964),


The Big Picture
His life ebbing, his general notoriety growing, von Neumann claimed in a
popular article that the earth itself was in a crisis, becoming too small for the
technological advances that were occurring:
In the first half of this century the accelerating Industrial Revolution
encountered an absolute limitation not on technological progress as such, but
on an essential safety factor ... [which] was essentially a matter of
geographical and political lebensraum. ... At long last, we begin to feel the
effects of the finite, actual size of the earth in a critical way.... Technologies
are always constructive and beneficial, directly or indirectly. Yet their
consequences tend to increase instability.54
Von Neumann predicts massive use of nuclear reactors by the 1980s,
believing that "[fjorced by the limitations of our real estate, we must... do much
better than nature" by not requiring a star to produce thermonuclear reactions,
instead making full use of fission!55 He also notes the rapid evolution of
automation, observing that computers can be used as control devices as well as for
"logistical, economic and other planning, and many other purposes heretofore
lying entirely outside the compass of quantitative and automatic control and
preplanning," using weather and climate control as examples.56
54 Von Neumann, "Can We Survive Technology," Fortune Magazine. June
1955, pp. 33-35.
55 Von Neumann (n. 54 above), p. 36.
56 Von Neumann (n. 54 above), p. 37.


Summarizing these developments, von Neumann notes, first, that they all
lend themselves to destructive purposes. Second,
... there is in most of these developments a trend toward affecting the earth as
a whole, ..., toward producing effects that can be projected from any one to
any other point on the earth.... The technology that is now developing and
that will dominate the next decades seems to be in total conflict with
traditional and, in the main, momentarily still valid geographical and political
units and concepts. This is the maturing crisis of technology.
Whatever one feels inclined to do, one decisive trait must be considered: the
very techniques that create the dangers and the instabilities are in themselves
useful. In fact, the more useful they could be, the more unstabilizing their
effects can also be.57
Neither division nor prohibition is likely to preserve us from the dangers
wrought by useful technology:
After global climate control becomes possible, perhaps all our present
involvements will seem simple. We should not deceive ourselves: once such
possibilities become actual, they will be exploited.... All experience shows
that even smaller technological changes than those now in the cards
profoundly transform political and social relationships. Experience also shows
that these transformations are not a priori predictable and that most
contemporary "first guesses" concerning them are wrong.58
Machines such as the NORC, and its more powerful successors, could be and are
used for the prediction and control of weather and social organizations.
In late June of 1955, von Neumann spoke at the Armed Forces Staff
College on the impact of atomic and thermonuclear weapons on national policy.
57 Von Neumann (n. 54 above), pp. 38-39.
58 Von Neumann (n. 54 above), pp. 40-41. Compare this meditation on
technology with that of philosopher Martin Heidegger, also published in 1955: see
The Question Concerning Technology and Other Essays, translated with an
introduction by William Lovitt (1955; reprint, New York, 1977).


In July, a wheelchair-bound von Neumann, along with Trevor Gardner and Air
Force General Bernard A. Schriever briefed President Eisenhower on the ATLAS
ICBM program, which led to its establishment as the number one national priority.
Von Neumann
... continued to preside over the ballistic missile committee, and to receive an
unending stream of visitors from Los Alamos, Livermore, the Rand
Corporation, Princeton. Most of these men knew that von Neumann was dying
of cancer, but the subject was never mentioned.59
In the last quarter of 1955, Thomas J. Watson, Jr. wrote to von Neumann
to bring him up to date on the NORC. The Navy had operated the NORC at the
IBM Watson Laboratory at Columbia until 1 March 1955 when it was moved to
Dahlgren Proving Ground in Virginia. The NORC, described in a new book
published by IBM, was being used on each of three shifts.60
The year 1955 closed out with an invitation to attend the Industrial
Preparedness Meeting of the American Ordnance Association, suggestively
scheduled for December 7, 1955. The dinner honored Army Chief of Staff
Maxwell Taylor. In the second part of the (Robert H.) Kent seminar on the
scientific bases of weapons, von Neumann spoke on "Defense in Atomic War" in
which he explicitly linked the power of computing and the power of nuclear
weapons. According to von Neumann, high speed computing machines are an
59 Blair (n. 51 above).
60 Watson to von Neumann, 11 November 1955, von Neumann MSS.


"absolutely necessary condition" for the calculation of firing tables for air-to-air
firings and for computation of "missile-trajectories" guidance.61
For von Neumann, the history of warfare is the history of increased
firepower, with atomic weapons representing the (then) latest stage of this
increase. The effect of this increase in firepower is twofold: first, it reduces the
need for manpower and conventional (non-nuclear) equipment, provided one is
willing to counter them with nuclear force; second, the time needed to decide a
war can be reduced from years or months to days or weeks.
Given this, a new weapon or a countermeasure against a weapon can have
tremendous significance. A single unanticipated measure or
countermeasure could result in several weeks of nuclear vulnerability, a period in
which our retaliatory capability could be utterly destroyed. How does one defend
against such a possibility? Here von Neumann returns to the field of systems
analysis and operations research, where the characteristics of a weapon system can
be determined without first building and testing it:
The manner in which one now calculates the performance of a weapon
system consists of taking it through a military maneuver, an engagement, or a
series of engagements on a computing machine.62
61 John von Neumann, "Defense in Atomic War," in Papers of John von
Neumann on Computing and Computing Theory, ed. W. Aspray and A. Burks
(Cambridge, 1986), pp. 523-24.
62 Von Neumann (n. 61 above), p. 524.
Not only are computing machines used to analyze the physical properties
of materials needed to produce weapons systems; they are also used to
assess the performance of a hypothetical system. This is not all, for questions of
strategy also arise:
It will not be sufficient to know that the enemy has only fifty possible tricks
and that you can counter every one of them, but you must also invent some
system of being able to counter them practically at the instant they occur.
It is not easy to see how this is going to be done. Some of the traditional
aspects of the use of the same weapon for several purposes and of limiting its
use until you need it... may have some of the elements of an answer.63
Here we see the need for computing machines capable of performing the
"statistical experiments of complicated situations" mentioned at the NORC
dedication.
In April 1956, von Neumann received the Medal of Freedom in his last
public appearance and received the Enrico Fermi Award, accompanied by a $50,000
tax-free grant, for contributions to the theory and design of computing machines.
He then began a residence at Walter Reed Army Hospital that terminated on 7 February
1957.64
63 Von Neumann (n. 61 above), p. 525.
64 Blair (n. 51 above), p. 104.
Conclusion
Von Neumann's brainchildren, unlike their father, successfully refused to
die. In mid-1956, the Navy established the Program Evaluation Branch of the
Polaris missile program. This program had a great impact on defense, defense
planning, and the conduct of business in the United States, for out of it came the
now-famous, but somewhat discredited, technique called PERT, the Program
Evaluation and Review Technique.65
PERT is a project management technique based on visualizing work as a
network of interdependent tasks. Associated with a task A is its projected time to
complete and two lists. The first list shows the tasks, if any, which must be
completed before task A can begin and the second list shows those tasks which
cannot begin until task A is complete. An example of a very small PERT chart is
shown in Figure 1 on the following page.
Figure 1 depicts four tasks: A, B, C and D. Since task A is to the left of
task B, the completion of task B depends upon the completion of task A: task B
cannot begin until task A is completed. Task A is scheduled to begin on 1/90, and
has a duration of 1 (one) month. Similarly, task D depends directly upon the
completion of tasks B and C, is scheduled to begin on 5/90, and has a duration of 1
month.
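The network structure just described can be made concrete with a small data-structure sketch, given here in Python purely as a minimal illustration. The durations of tasks B and C, and the dependence of task C upon task A, are assumptions of ours; the text specifies only the durations of A and D.

    # The four tasks of Figure 1, each with a duration (in months) and a
    # list of the tasks that must complete before it can begin.
    # Durations for B and C are assumed; the text gives only A's and D's,
    # as is C's dependence on A.
    durations = {"A": 1, "B": 3, "C": 3, "D": 1}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    # The second list associated with each task -- its successors -- is
    # derivable from the predecessor lists.
    successors = {t: [] for t in durations}
    for task, preds in predecessors.items():
        for p in preds:
            successors[p].append(task)

    print(successors)  # {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}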
65 Willard P. Fazar, "The Origins of PERT," The Controller 30 (December
1962): 4.
Figure 1. A Simple PERT Chart.

The PERT chart may be compared with the older
Gantt chart, developed during World War I by an early associate of Frederick W.
Taylor, the "Father of Scientific Management." Figure 2 below contains a Gantt
chart representing tasks A, B, C, and D.

Figure 2. A Simple Gantt Chart.
For high level status briefings, a Gantt chart is preferred for its simplicity. The
Gantt chart, however, does not explicitly show dependencies between tasks: the
viewer must synthesize this information.

A strength and weakness of a PERT chart is that it shows task
dependencies, which in principle provides more control but introduces an
additional type of element that must be tracked through a planning activity:
namely, dependencies among tasks. A slip in the completion date associated with
any single task can "ripple" through an entire network of tasks so that the entire
schedule may become infeasible, requiring at the least considerable updating.
While representing a project with 50 tasks can be formidable when done with a
PERT chart, maintaining the corresponding Gantt chart is equally daunting
precisely because the relationships between the tasks are not explicitly represented
in the Gantt chart.66
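The ripple effect admits an equally small computational illustration: recomputing a schedule after a slip amounts to a single forward pass over the task network. The following sketch (Python; the task data follow the earlier sketch, and the two-month slip is invented for illustration) shows a slip in task B propagating to task D.

    # Forward pass: earliest start/finish of each task, in months from
    # the project start. A slip is modeled by extending one duration.
    def schedule(durations, predecessors):
        start, finish = {}, {}
        remaining = set(durations)
        while remaining:  # process any task whose predecessors are done
            for t in sorted(remaining):
                if all(p in finish for p in predecessors[t]):
                    start[t] = max((finish[p] for p in predecessors[t]), default=0)
                    finish[t] = start[t] + durations[t]
                    remaining.remove(t)
                    break
        return start, finish

    durations = {"A": 1, "B": 3, "C": 3, "D": 1}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    _, before = schedule(durations, predecessors)
    durations["B"] += 2                      # task B slips by two months
    _, after = schedule(durations, predecessors)
    print(before["D"], after["D"])           # 5 7: the slip ripples to D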
In the early 1950s, the defense weapons systems concept was coming into
prominence. Weapons systems developed in an orderly, linear fashion had the
unfortunate side-effect of creating lead times of 7 to 10 years from the inception
of the weapons systems concept to system deployment. With the natural
presumption that any technological advances on our side would be mirrored by
our adversaries, lead-time requirements of 7 to 10 years became increasingly
66 Consider the maximum number of possible dependencies among N tasks,
N(N-1)/2: for N = 4, the number of dependencies is 6; for N = 5, the number is
10, while for N = 50 the number jumps to 1,225.
unacceptable, particularly when coupled with the belief that our adversaries were
able to cut lead times to approximately 5 years.67
The possibility of speeding up the development of weapons systems had
already been examined prior to the Strategic Missiles Committee: one of the chief
proponents of the "Doctrine of Concurrency" was a member of that committee,
Air Force General Bernard A. Schriever. The "Doctrine of Concurrency" dictates
that as many phases as possible in the development of a weapons system be
carried out more or less simultaneously. This is possible to the extent that a
weapons system can be "carved up" into separate sub-systems, each of which can
be developed more or less independently.68
When sub-system development is complete, the sub-systems are integrated
into a functioning whole. There are several benefits to this approach. By
identifying relatively independent tasks, additional man-power and equipment can
be devoted to each relatively independent task without incurring inordinate
communication costs. If the inter-relations between sub-systems are adequately
defined, then the system as a whole has a high probability of operating
successfully when all sub-systems are integrated. ICBM system and sub-system
reliability were raised by the "test philosophy" which required repeated tests of all
components from top to bottom of the component hierarchy. Subsequent
67 Johns (n. 53 above).
68 Johns (n. 53 above).
technological advances in specific sub-systems are easily exploited: if subsystem
interfaces are preserved, we just "swap in" the new sub-system for the old.69
Despite the desire to perform as many sub-tasks as possible in parallel,
some activities cannot be carried out concurrently. Once the system has been
broken down into sub-systems, one major potential problem remains: delays in
system integration. The system cannot be integrated until the development of each
sub-system is complete. Thus, if sub-systems B and C are complete and ready for
integration, an unscheduled delay in completing the development of sub-system A
will not affect the completion of B and C (since they are already complete), but it
will prevent the system from being operational without unscheduled delay.
As chairman of the Strategic Missiles Committee, von Neumann argued
that the factors inhibiting rapid development of an operational ICBM involved
management technique rather than technology. The implications were twofold.
First, the traditional weapons system development cycle could not lead to an
operational ICBM within a short period of time. Second, new management
techniques were required to guide a newly conceived development process
intended to meet short development schedules.
Until reconnaissance indicated in mid-1955 that the Soviets were
developing an ICBM, the will was lacking to tackle the problems associated with
concurrency. It then seemed that the doctrine of concurrency was the only way to

69 Osmond J. Ritland, "Concurrency," Air University Quarterly Review
(Winter-Spring 1960-61).
eliminate "the missile gap" by 1960. The elevation of the ballistic missile program
li
r
to number one national priority in mid-1955 brought intense concern about the
management of development concurrency.
70
It has been suggested that the Air Force, to promote its own role in the
ICBM business, spread damaging rumors about lack of progress in the Navy
Polaris program. War-time experience had made the virtues of "scientific
management" seem obvious, so it is not surprising that the Navy touted PERT as
such a tool and used it to ward off criticism.71 But PERT had significant
technical antecedents: work done on linear programming and simulation in the
early 1950s foreshadowed PERT. Moreover, the PERT concept was very much in
the air, as evidenced by the independent and contemporaneous development of the
Critical Path Method by DuPont for non-defense use. The Air Force configuration
management technique, although overshadowed by PERT, contained many of the
same elements.72
70 Johns (n. 53 above), p. 47.

71 One critic claimed that PERT was a bureaucratic response by the Navy to
internal and external criticism and was intended, not as a management tool, but as
a weapon with which to silence critics. See Harvey M. Sapolsky, The Polaris
System Development: Bureaucratic and Programmatic Success in Government
(Cambridge, 1972). For a discussion of the reverence for scientific management
that came out of World War II, see Merritt Roe Smith, "Introduction," in Military
Enterprise and Technological Change: Perspectives on the American Experience,
ed. Merritt Roe Smith (Cambridge, 1985).

72 Robert W. Miller, Schedule, Cost, and Profit Control with PERT: A
Comprehensive Guide for Program Management (New York, 1963), p. 27.
The diffusion of PERT throughout the defense community, and from there
into the industrial world, was hastened when the Defense Department began to
require the use of PERT as a management tool, largely because of Polaris success.
The potential of PERT as a "tool of scientific management" could not be realized,
however, without the computerization of PERT networks. The PERT team, which
formally began its work in February 1958, quickly realized that computerization
was necessary. Indeed, the rapid diffusion of PERT through industry and
government depended to a large degree upon the development of in-place
electronic data processing capability.73

As the von Neumann machine chosen to computerize the original PERT
system, the NORC was the heart of the PERT decision-support system:

One of the most useful aspects of the NORC outputs is the ability it provides
for checking the feasibility of current schedules and for permitting technical
management to experiment with or evaluate the effects of proposed changes
in the research program under its technical direction.74

The potential of the NORC for the "statistical analysis of complicated situations"
had been realized.
73 Miller (n. 72 above), p. 26; J. S. Butz, "The USAF Missile Program: A
Triumph of Orderly Engineering," in A History of the US Air Force Ballistic
Missiles, ed. E. Schweibert (New York, 1965), p. 198.

74 D. G. Malcolm, J. H. Roseboom, C. E. Clark, and W. Fazar, "Application
of a Technique for Research and Development Program Evaluation," Operations
Research 7 (1959): 662.
With the initial success and subsequent diffusion of PERT, the use of
concurrency in development undoubtedly acquired considerable momentum.
PERT, with its ability to focus on the interdependency of tasks, seemed to give
management greater control over "research and development" activities involving
great uncertainty. Many courses of action which might have been unthinkable
before (because unmanageable) now became open for exploration. Projects
rendered feasible by this new form of control may have acquired near
inevitability, a result of "reverse adaptation."75
Out of these changes to management practice, there have perhaps been
broad, subtle changes to life in the "information" society. We remain under the
spell of von Neumann's genius for fusing war and computation in an increasingly
complete equation of knowledge and power. Whether it is necessary, possible, or
desirable to continue in the trajectory established by von Neumann's generation
remains a question worthy of our consideration.
75 Langdon Winner, Autonomous Technology: Technics-out-of-Control as a
Theme in Political Thought (Cambridge, 1977), pp. 226-36, 238-51.
CHAPTER 5
RELATING EACH CLAIM TO THE HISTORICAL RECORD
Introduction
Although Chapter 4 as a history of the rise of management information
systems is limited to the role of von Neumann, it can now be interpreted in terms
of the two meta-historical claims of Chapter 3. It is necessary, however, to
interpret the planning process as a technological system and to identify the sub-
systems which comprise it.1
The key to interpreting the planning process as a technological system lies in
viewing planning as an information processing system (IPS). The purpose of an
IPS is to transform data obtained from the environment into information of use to
decision makers. Abstractly, an IPS consists of an input, processing, and output
stage. In a Gantt system, the processing stage is carried out by a human being. In
a computerized PERT system, the processing stage is performed by an electronic
digital computer. Human technology is at the heart of the Gantt system, whereas
electronic technology is at the heart of the computerized PERT system.2
1 Referring to a management technique as a technology has as a precedent the
following technical paper: Line of Balance Technology, Navy Department, Office
of Naval Material (NAVEXOS P1851 Rev. 4-62), Washington, D.C., April 1962.

2 Jerome S. Burstein and Edward G. Martin, Computer Information Systems
with BASIC (Chicago, 1989), pp. 36-66.
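The three-stage abstraction can be rendered schematically as follows (Python; the function names are illustrative inventions of ours, not anything specified by the sources). Only the processing stage differs between the two system types.

    # An idealized information processing system (IPS): input, processing,
    # and output stages composed into a pipeline.
    def ips(read_input, process, write_output, environment):
        data = read_input(environment)       # input stage
        information = process(data)          # processing stage
        return write_output(information)     # output stage

    # In a Gantt system, process stands for a human planner's work; in a
    # computerized PERT system, for a program on a digital computer.
    result = ips(lambda env: env["changes"], sorted, list,
                 {"changes": ["slip B", "add E"]})
    print(result)  # ['add E', 'slip B']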


There are several sources of data for each system type. In a Gantt/PERT
system, the data are (1) a Gantt/PERT chart to be updated and (2) a list of
schedule changes which (in the simplest non-trivial case) need to be applied to the
chart. The two systems differ not only in the "technology" being used to
implement the processor, but also in the kind of processing being performed. In a
Gantt system, dependencies between tasks are not explicitly part of the input data
structure, whereas task dependencies are explicitly represented in PERT networks.
The output of each system is again a chart of the appropriate kind.
Recall that the theory of technological disequilibrium involves several
concepts: imbalances emerge in a system because of uneven growth of its
components; in order to remove the bottleneck to system growth, reverse salients
must be identified as solvable, but not yet solved, critical problems; if engineers
solve these critical problems, the imbalance is removed and the system will
resume growth; otherwise, a new system emerges.
Recall also that technological co-evolution is a special case of
technological disequilibrium. This motivates our strategy to first assess the
applicability of the theory of technological disequilibrium by attempting to
explicate the change from Gantt MIS to PERT MIS in terms of the key concepts
of the theory of technological disequilibrium as outlined above.


Technological Disequilibrium
To show the applicability of the disequilibrium theory, we must show
uneven growth in one of the components of the system, a "reverse salient." What
are the components of the system? Here we are concerned with the "hardware"
and "software" that perform the input, processing, and output functions of our
idealized IPS.
In the Gantt system, the human planner is the interface between the Gantt
system and its environment. In addition, the planner transforms input data (a Gantt
chart, a change list, and a list of precedences) into a new Gantt chart. In the
PERT system, the human planner no longer processes the chart, change list, and
precedence list, but still exchanges data and results with the environment. A
human acts as input and output medium/device in both systems, so we must look
elsewhere for the distinctive differences between the two types of MIS.
Consider the form of the primary data in each system. A Gantt chart can
be described as a temporally ordered collection of paired start and end dates,
whereas a PERT chart is a collection of paired dates that is ordered by a
precedence relation. The precedence relation consists of pairs (A,B) of tasks A
and B such that A precedes B, and it imposes a stronger ordering than simple
temporal order. From a PERT chart, we can uniquely derive a Gantt chart, but a
Gantt chart does not itself provide the precedence task pairs needed to build a
PERT chart. So, a PERT chart is structurally more complex than a Gantt chart.
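This asymmetry can be exhibited directly: a forward pass over a PERT network yields exactly the paired start and end dates that constitute the corresponding Gantt chart, while nothing in those pairs recovers the precedence relation. A minimal sketch (Python; the task data are the assumed values of the earlier sketches):

    # Deriving a Gantt chart (paired start/end dates) from a PERT network.
    durations = {"A": 1, "B": 3, "C": 3, "D": 1}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    finish, gantt = {}, {}
    for t in ("A", "B", "C", "D"):          # any topological order works
        start = max((finish[p] for p in predecessors[t]), default=0)
        finish[t] = start + durations[t]
        gantt[t] = (start, finish[t])

    print(gantt)  # {'A': (0, 1), 'B': (1, 4), 'C': (1, 4), 'D': (4, 5)}

    # The converse derivation fails: the (start, end) pairs alone cannot
    # distinguish a task that waits on another from one that merely
    # happens to start later, so the precedence relation is lost.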
With the use of concurrency, the number of tasks and the complexity of
the precedence relation increased as well. The imbalance required for the
applicability of the disequilibrium theory occurs not in the human technology
itself, but in the character of the data it must process. The effect of this imbalance
in the Gantt system is to cognitively overload the human planner qua processor.
The critical problem to be solved is how to deal with this overload.

The key to solving the critical problem is to recognize the primary source
of the difficulty. In the Gantt system, as the number of tasks in a project increases,
there is a rapid (quadratic) increase in the number of task pairs that must be checked
to determine if they are to appear in the precedence list. Forming and processing
the precedence list leads to cognitive overload of the human processor in the
Gantt system.
One solution to the critical problem of processing a precedence list is to
"upgrade the processor" by obtaining a superior planner. As the requirement for
more and better planners increases, we witness the final phase in the evolution
of Gantt systems, in which the capacities of the system are being reached.
Eventually, other approaches were sought to address the technological
disequilibrium wrought by the increased complexity of the data.
Von Neumann's Strategic Missiles Evaluation Committee had focused on
the shortcomings of serial development strategies and the need for techniques to
manage the additional complexity engendered by concurrency. The PERT
approach addresses the limitations of the Gantt system by incorporating the
precedence list into the PERT chart. Formation of the precedence list and its
processing are simplified. The PERT system does not preclude the continued use
of a human processor: the first PERT systems were, in fact, manual systems.
Technological Co-evolution
We must now show that the special sort of disequilibrium required by the
theory of technological co-evolution in fact obtained. More specifically, the
disequilibrium described above must be explicable in terms of (1) the mutual,
reciprocal influence of at least two subsystems, and (2) the presence of a
selective-retentive process.
Consider the distinction between mutual and reciprocal influence:
RECIPROCAL, MUTUAL, COMMON mean shared or experienced by each.
RECIPROCAL implies an equal return or counteraction by each of two sides
toward or against or in relation to the other; MUTUAL applies to feelings or
effects shared by two jointly.3
Thus, to assert the mutual, reciprocal influence of two subsystems A and B is to
assert that: changes to the system of which A and B are a part influence both A
and B; and if changes to A influence changes to B, then at some later time,
further changes to B will influence changes to A.
In the change from Gantt MIS to PERT MIS, three things have changed.
First, the nature of the processing has changed due to an input data set which is
3 Webster's Seventh New Collegiate Dictionary (Springfield, Mass., 1965), p.
715.
more: conceptually complex, since it represents a network of task dependencies;
and numerically complex, due to an increase in the quantity of tasks to be
managed. Second, the processing stage is now performed by an electronic digital
computer instead of a human computer. Last, a human being now performs the
functions of a peripheral processor by accepting schedule slippage data from the
environment, making it available to the central processor in a machine-readable
format, and feeding back an updated PERT chart to the planning environment.
Which changes to the Gantt system as a whole affected the input,
processing, and output components? This question addresses the first condition for
technological co-evolution. Certainly, with development concurrency, data volume
and complexity substantially increased. Assuming that the components of a system
are specialized, any change in the environment which affects the mix of functions
that must be performed will differentially load each component. Thus, we seem to
have a "mutual, reciprocal relationship" between at least two of the three
components of the Gantt system.
[j
Next, we must demonstrate that if changes to component A influerices
||
changes to component B, then at some later time, further changes to component B
l
will influence changes to component A. Consider an IPS from the standpoint of
throughput and response time.4 System throughput is defined to be the number of
4 This approach to the evolution of information processing system is inspired
by my reading of Stuart K. Card, Thomas P. Moran, and Allen Newell, "Tne
Keystroke-Level Model for User Performance Time with Interactive Systems,"
Communications of the ACM 23 (1980): 396-410. 1
84


items that can be completely processed per unit time, a measure of the volume of
i
information processed. System response time is defined to be the time from the
l
I
start of processing to the end of processing, a measure of the speed of the system
as a whole. j
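These two measures, and the dynamics described next, can be illustrated with the simplest queueing model. The sketch below assumes a single-server M/M/1 queue; this model is our choice of illustration and is not specified by the sources.

    # M/M/1 queue: requests arrive at rate lam, the processor serves at
    # rate mu. Below saturation (lam < mu), throughput equals the offered
    # load; response time grows without bound as lam approaches mu.
    def mm1(lam, mu):
        if lam >= mu:
            return mu, float("inf")      # saturated: throughput tops out
        return lam, 1.0 / (mu - lam)     # throughput, mean response time

    for lam in (0.5, 0.9, 0.99):
        thru, resp = mm1(lam, mu=1.0)
        print(f"load={lam:.2f}  throughput={thru:.2f}  response={resp:5.1f}")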
Several results are needed to understand the dynamics of the evolution of
(open) information processing systems.5 First, as the rate of requests to a given
IPS increases, throughput will increase up to a saturation point. Second, as
throughput increases on a given IPS, response time also increases. Information
systems are typically subject to both throughput and response time requirements.
A system which satisfies both throughput and response time requirements under
one load may fail either or both under a heavier load. The structure of the
information system may change as a result of other factors, but evolution in
response to failure to meet one or both types of requirement is also likely. Let us
first consider throughput.6

When an IPS fails a throughput requirement, one may try to increase
throughput by speeding up the component that is constraining throughput (also
known as the bottleneck). In the case of Gantt systems, we conjecture that as
failures to effectively manage concurrency occurred, there was an increased
emphasis on finding superior planners. If speed-up does not bring the system into
compliance, the bottleneck component can be replicated, thus increasing
throughput. Expansion and contraction of planning staff in the defense industry is
well-known and undoubtedly occurred in the evolution of Gantt systems.
Replication will only help up to a certain point: communication and other costs
eventually swamp the positive effects of more manpower.

5 An "open" IPS is one in which the rate at which requests are presented to the
IPS is independent of the degree of responsiveness of the IPS. The rate at which
successor lemmings attempt to cross a ravine is unaffected by the fate of their
predecessors.

6 We do not address the applicability of functional failure and presumptive
anomaly.
i!
Consider now the situation when a response time requirement is violated.
i
,i
As throughput increases on a given system, response time also increases. Without
changing the system, the only way to reduce response time is to reduce dire rate at
which requests are made, which in turn reduces system throughput. In the;case of
i
Gantt systems, it is unlikely that program management would accept reduced
i1
processing from its planning staff in return for quicker response. ;
l
In order to see reciprocal influence among the components of an IPS, note
that every IPS has a bottleneck. The system bottleneck can be moved but 'never
removed, since there is always a slowest component or set of components I
j
regardless of how fast a component becomes or how many times it is replicated.
i
Suppose the bottleneck is moved from component A to component B. If the
I
throughput of component is B high enough, there is no reason for the system to
|;
evolve further for reasons of system performance. If, on the other hand, the
|
throughput of component B is too low, then attention will focus on speed-up or
86


replication of the component B, perhaps causing the bottleneck to move to another
component.
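The bottleneck dynamic can be made concrete with a little arithmetic: the throughput of a staged IPS is bounded by its slowest component, and speeding up or replicating that component merely relocates the bound. A minimal sketch (Python; the rates are invented for illustration):

    # Throughput of a staged IPS is limited by its slowest component.
    rates = {"input": 40, "processing": 10, "output": 25}  # items/hour

    def bottleneck(rates):
        component = min(rates, key=rates.get)
        return component, rates[component]

    print(bottleneck(rates))        # ('processing', 10)

    # Replicating the processing component (e.g., three planners in
    # parallel) raises its effective rate and moves the bottleneck:
    rates["processing"] *= 3
    print(bottleneck(rates))        # ('output', 25) -- moved, not removed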
To the extent that moving the bottleneck from component A to component
B represents a technical advance, it may fairly be described as a "reverse salient"
of sorts. If the throughput of the newest bottleneck component does not satisfy the
throughput requirement, the "critical problem" for IPS engineers is to solve the
problem through speed-up or replication. Over the course of its history, the
bottleneck may visit several different components one or more times.
We have suggested a dynamic of IPS change actuated by external demand
and dependent on the technical nature of the IPS components. The replacement of
human processors by electronic processors made it necessary for the input
component to supply instructions and data at higher speeds geared to the new
processor component. Thus, for Gantt systems, the initial influence flows from the
processor to the input/output component. To the extent that the human planner
had been used for input functions as well as a processor, some of these input
functions were moved to the electronic processor. Thus, changes to the processor
function, while actuated by changes to the load processed by a human input
component, subsequently led to changes in the functions performed by the human
input component.
In order to view the change from Gantt to PERT systems as an instance of
technological co-evolution, there must be in this change a selective-retentive
process, a mechanism by which the

fate of a given invention - its developmental direction - not only depends on
its competition with alternative devices performing the same or similar
functions and on its co-evolution with a specific other technology, but also
depends on the evolutionary success or failure of the higher-level
macrosystems of which it is a part.7
In the present case, the given invention was the new PERT management
information system. The ability to implement PERT-type management information
systems using digital computers insured the ability of PERT systems to compete
against Gantt systems. PERT management information systems were themselves
part of a larger "military-industrial complex," a macrosystem then dedicated to
eliminating a "missile gap" thought to threaten the very existence of the country.
As PERT systems proliferated, accumulated experience suggested that such
systems could not of themselves guarantee success in managing large-scale
systems, hardly a surprising observation. "Scientific management" practices lost
some of their luster, and by the mid-1960s, the social macrosystem had changed
enough so that PERT management information systems were pruned back.
Conclusion
Having demonstrated the applicability of both the theory of technological
disequilibrium and the theory of technological co-evolution, we now ask which
"theory" seems to better "explain" the facts of Chapter 4.8 With respect to the
technological change from Gantt to PERT MIS, the theory of technological co-
evolution is superior to the theory of technological disequilibrium by virtue of
being more specific.

7 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore,
1980), p. 14.
8 See the distinction between theoretical orientations and theories in David
Kaplan and Robert Manners, Culture Theory (Englewood Cliffs, 1972), pp. 32-35,
88-91.
CHAPTER 6
ARE META-HISTORICAL CLAIMS TESTABLE?
Introduction
The narrower the claim, the easier it is to determine what is empirically
required to satisfy or falsify it. Most importantly for us, the idea that a claim can
be tested by reference to the outcome of some activity or procedure presupposes
the relative neutrality of that activity or procedure with respect to what is at stake
in the claim. In short, an activity or procedure or method can test a claim only if
that activity has the character of objectivity with respect to the claim.
The philosophical claim that meta-historical claims concerning
technological change are testable by reference to the history of technological
change implies that the history of technological change possesses the requisite
character of objectivity. Before attempting to establish the objectivity of history,
we assume this objectivity and see how on that basis a meta-historical claim
might be "tested."
Suppose a meta-historical claim is generated by a source discipline.
Suppose also that strong evidence in support of the claim is lacking, so that
additional evidence is sought from colleagues in other fields. The special methods
or materials of another discipline may make it possible to collect especially
compelling forms of evidence, particularly since a different discipline may not
share the presuppositions of the source discipline. Evidence acquired from another
discipline is likely to have an additional increment of "objectivity."
The notion of objectivity at work here can be illuminated
by phenomenology. For the phenomenologist, objectivity is an attribute of
cognition, which seems to condemn objectivity to a fundamental arbitrariness. For
the phenomenologist, however, even cognition has its own logical character, and it
is this structure which rescues cognition from mere subjectivity. The correlate of
objectivity as a character of cognition are the object(s) which is(are) cognized in
the act of cognition. In other words, for the phenomenologist, it is the character of
cognition which guarantees the existence of that which is cognized.

Also inspired by phenomenology, historian Leon Goldstein takes up the
issue of the objectivity of history. In the language of phenomenologist Edmund
Husserl, Goldstein wants to step back from the "natural attitude" of the historian
which is expressed through historical realism:

... a habit of mind ... which inclines those possessed of it simply to assume
that the conceptions of factuality, truth, or reference which apply when we
speak of the natural world in the natural present must apply when we speak of
the historical past.1

Although Goldstein's "natural present" may present some difficulties for
phenomenological philosophers of science, the major point is that the historical
1 Leon J. Goldstein, Historical Objectivity (Austin, 1976), p. xxiv.
past is not to be assimilated to the view of time employed by the natural sciences,
sometimes called the B-series concept of time.2
Goldstein attempts to clarify the Husserlian concept of constitution, then
explicitly applies it to history:

Sokolowski argues that for Husserl consciousness is a necessary but not a
sufficient condition for reality; while it constitutes it, it does not create it.

.... we have no access to the historical past except through its constitution in
historical research.

.... the objects of historical knowing are not given in the way in which natural
objects present to perception are.3
Is Goldstein a phenomenologist of history? Not quite, since the concept of
constitution functions as a metaphor for Goldstein, who wants to locate the
objectivity of history not in the consciousness of the individual historian but
rather in the methods used by the discipline of history.4

Goldstein locates the objectivity of history in the practice of history as a
discipline. Viewed as a way of knowing, the discipline of history is wrenched
away from the ordinary conceptions of history which have entered everyday
consciousness via the abstractions of science.
2 Richard M. Gale, "The Static versus the Dynamic Temporal: Introduction,"
in The Philosophy of Time, ed. Richard M. Gale (New Jersey, 1968), pp. 65-85.

3 Goldstein (n. 1 above), pp. xxi-xxv.

4 Goldstein thus presupposes the inter-subjective agreement which was so
difficult for Husserl to reach from the starting point of individual consciousness.
If history is conceived as a disciplined way of knowing ... not every statement
about the past embodies a historical belief. A historical belief would be based
upon the outcome of historical research, a claim to knowledge based upon
historical inquiry.5
Since the historian has no direct access to a real past, but rather to a past
constituted in the present through the discipline of history,
the problem of historical objectivity can only be raised properly from within
the context of an inquiry into the nature of the discipline.6
The nature of the discipline may be considered in terms of the distinction between
"the context of discovery" and "the context of justification." The context of
discovery concerns the process by which claims are arrived at, while the context
of justification concerns the rational justification of various knowledge claims.
According to Goldstein,

With such a view ... the actual intellectual processes of historical constitution
ought to be left to the historian to discuss ... while philosophy becomes
concerned only when the work is done and the question of justifying the
claims of knowledge that arise from it is on the agenda.7
Historians may occasionally give an account of how and why they make
their judgements. In doing so, they may invoke the standards and the reasoning of
5 Goldstein (n. 1 above), p. xix.

6 Goldstein (n. 1 above), p. 184. Although Goldstein considers the views of
Arthur Danto, Maurice Mandelbaum and others in elaborating the positions
presented, we are interested only in using the conclusions of these arguments to
suggest the kind of objectivity achievable by works in the history of technological
change.

7 Goldstein (n. 1 above), p. 211.
their philosophical friends, but these standards and techniques play a subordinate,
persuasive role.
The objectivity which we wish to ascribe to the discipline of history is of
two kinds. The discipline of history governs the conditions under which the
"game" of historical research is played and, once played, how it is conducted and
how performances are evaluated. This level of rationality is associated with the
context of discovery for historians. It may also be possible to describe disciplinary
practices in perhaps more logically compelling terms: if so, then another, second,
level of rationality has obtained. This level of rationality is associated with the
context of justification for historical knowledge claims.

In this work, we assume for the discipline of history no more than the
limited rationality suggested by continued practice. Given this kind of rationality,
the discipline of history can appraise various meta-historical* claims and can, in
this limited sense, test those claims. The discipline of history, through the actions
of the community of historians, evaluates a claim in light of evidence assembled
using the materials and methods of historical research. It is this capability that
renders a meta-historical claim testable*.
Kinds of Historical Evidence
We now distinguish between two uses of historical evidence. Suppose we
have the following meta-historical claim: Revolutions follow the relaxation of
oppressive conditions. In formulating this claim, Crane Brinton did not first
Full Text

PAGE 1

CAN THEORIES OF TECHNOLOGICAL CHANGE BE TESTED? by Roy Woodrow Wilson B.A., Metropolitan State College, 1976 M.A., University of Denver, 1978 M.S., University of Denver, 1983 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in panial fulfillment of the requirements for the degree of Master of Humanities Humanities Program 1990 ... .._.j.f

PAGE 2

This thesis for the Master of Humanities degree by Roy Woodrow Wilson has been approved for the Humanities Program by Glenn A. Webster l MarkS. Foster

PAGE 3

Wilson, Roy Woodrow (M.H.) Can Theories of Technological Change Be Tested? Thesis directed by Associate Professor Glenn A. Webster This inter-disciplinary thesis considers the possibility of testing "theories" of technological change. Public and business policies often presuppose that technological change can be successfully managed. Since this presupposition may rest on unacknowledged or untested theories of technological change, the testability of such theories is a significant issue. The thesis is structured by the tension between the disciplines of philosophy and history in relation to the phenomenon of technological change. Based on an analogy to the history and philosophy of science, the thesis considers the possibility of testing theories of technological change by evaluating those theories with respect to the history of technological change. As an experiment in this kind of evaluation, two theories of technological change are evaluated with respect to the history of management information systems (MIS) from 1945 to 1960. The history of MIS during this period is interpreted in terms of the technological co-evolution of the components of an idealized information processing system. It is argued on anti-realist epistemological grounds that the discipline of history produces evidence which is both objective and variable with respect to meta-historical claims. The thesis concludes that meta-historical claims are testable in the sense that the objectivity of the judgements of the discipline of history

PAGE 4

allows us to determine the warrantability of judgements concerning the likelihood of a meta-historical claim. Finally, it is suggested that, insofar as technological change brings unmanageable technological and social consequences, policy managers and analysts would be well served by attempting to identify and test the theories of technological change presupposed by broader policy initiatives. The form and content of this abstract are approved. I recom nd its publicati n. Glenn A. Webster iv

PAGE 5

CONTENTS CHAPTER 1. INTRODUCTION Purpose of the Study ............................... Scope of the Study ................................. Arrangement of the Thesis ........................... 2. REVIEW OF THE LITERATURE ........................ The Philosophy of Science and Technology ................ The History of Technology .......................... 3. TWO META-HISTORICAL CLAIMS ..................... Introduction ..................................... Technological Disequilibrium ........................ Technological Co-evolution .......................... Conclusion ..................................... 4. MANAGEMENT INFORMATION SYSTEMS AND TOTAL WAR Introduction ..................................... Yon Neumann and the Bomb ......................... A Passion for Planning ............................. The Search for Optimum War ........................ At the Dawn of Electronic Computing .................. New Applications of Computing: A Watershed ............ Computer Design and National Defense ................. 1 1 3 5 7 7 16 22 22 23 30 33 35 35 38 39 42 47 49 53

PAGE 6

Von Neumann and the Weather . . . . . . . . . . . . 59 The Big Picture . . . . . . . . . . . . . . . . . 66 Conclusion . . . . . . . . . . . . . . . . . . . 71 5. RELATING EACH CLAIM TO THE HISTORICAL RECORD . . 79 Introduction . . . . . . . . . . . . . . . . . . . 79 Technological Disequilibrium . . . . . . . . . . . . 81 Technological Co-evolution . . . . . . . . . . . . . 83 Conclusion . . . . . . . . . . . . . . . . . . . 88 6. ARE META-HISTORICAL CLAIMS TESTABLE? . . . . . . 90 Introduction . . . . . . . . . . . . . . . . . . . 90 Kinds of Historical Evidence . . . . . . . . . . . . . 94 On the Uses of History . . . . . . . . . . . . . . . 96 History and Truth . . . . . . . . . . . . . . . . . 97 Conclusion 7. CONCLUSION 100 101 Von Neumann as Change Agent . . . . . . . . . . . 101 Technological Change and the Arrow of Time . . . . . . 103 A Counsel of Prudence . . . . . . . . . . . . . . 105 APPENDIX WARRANTABILITY, JUDGEMENT, AND PROBABILITY 106 SELECTED BIBLIOGRAPHY . . . . . . . . . . . . . . . 108 vi

PAGE 7

CHAPTER 1 INTRODUCTION Purpose of the Study Successfully implementing policy decisions in both the private and public sectors increasingly hinges on the successful management of technological change.1 Technological change is manageable only if the dynamics of technological change are understood, at least in an "engineering" sense. Regardless of how such understanding is obtained, it may be articulated in the form of a model or theory of technological change. Some accounts of technological change are based on economic theory, others on the sociology of invention, others still on the concepts of general systems theory.2 To the extent that successful policy implementation hinges on the successful development and/or deployment of a technological system, that policy also presupposes a theory of technological change. With its program for the 1 Educational programs for the middle-manager in government and industry have been created that focus on the problems of managing technological change: the Sloan School of Management at MIT offers a concentration in the Management of Technological Innovation. 2 Rachel Laudan, Introduction to The Nature of Technological Knowledge: Are Theories of Scientific Change Relevant?, ed. Rachel Laudan (Boston, 1984), pp. 1-26.

PAGE 8

development of advanced computing technology, the Strategic Defense Initiative is a case in point.3 The decision to implement higher-level policies may tum on the apparent plausibility of a lower-level theory of technological change which implies that through a definite series of steps, the desired technological change can in fact be produced. Obviously, it is important to be able to assess such theories. One measure of merit might be the "testability" of a theory. This thesis examines the claim that theories of technological change ought to be "empirically testable" by reference to the history of technological change. If theories of technological change canri.ot be "empirically tested" in this sense, then the policies which presuppose them lack grounding. Discovering and evaluating theories of technological change presupposed by higher-level policies has the function, if not of identifying "better" theories, then at least of helping to identify "worse" ones. And, as recently observed, if this represents only a marginal improvement of policy-making, it may be worthwhile.4 This thesis is an inter-disciplinary study of theories of technological change in the sense that it is equally a work of history and philosophy that 3 Defense Advanced Research Projects Agency, Strategic Computing, New Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense (28 October 1983), p. 69, employs the "push-pull" language of technological change used by economists. 4 Richard E. Neustadt and Ernest R. May, Thinking in Time: The Uses of History for Decision-Makers (New York, 1986). 2

PAGE 9

highlights the interplay between the two disciplines.5 The inter-disciplinary nature of the study structures the problem posed, the way in which it is treated, and the way the thesis is organized. In particular, we are required to balance differences in locution between the two disciplines. The differences, however, between philosophers and historians are not merely verbal. While not all philosophers are abstract and ahistorical, philosophy as a discipline has tended to be more concerned with relatively abstract issues and to employ relatively ahistorical methods.6 While not all historians are concrete and indifferent to theoretical insights, history as a discipline has tended to concern relatively concrete issues and to use relatively atheoretical methods. These differences add to the complexity of the thesis. Scope of the Study Hegel is undoubtedly the best/worst example of a philosopher arrogating, not only the role of historian, but history itself to the purposes of a philosophical system. Historians have been understandably quick to resist such encroachment, 5 This is a stated requirement of the Master in Humanities program at the University of Colorado at Denver. 6 Philosophers used to try to determine the nature of history or historical discourse. Now, through the works of Rorty, Gadamer, Habermas, and Arendt, philosophy is being subjected to the "relativizing" effects of history. For an account of this transformation, see Richard J. Bernstein, Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis (Philadelphia, 1983). 3

PAGE 10

establishing disciplinary mores which make the appearance of philosophical concerns a cause of suspicion.' History is not a neutral stuff to be arbitrarily interpreted. However important it may be to establish a line of demarcation between history and philosophy, this is outside the scope of the thesis. Some useful distinctions can be made, however, by briefly examining the terms 'historical,' 'meta-historical,'and 'philosophical.' A claim would be naively regarded as 'historical' if it pertains to the past and does not involve any obvious generalizations. For example, "The United States was attacked at Pearl Harbor on December 7, 1941" seems a patently uninteresting historical claim. On the other hand, "Nations which are overly militarized become second rate powers" is also about the past and so is historical, but seems also to be a generalization. A normative rather than factual claim is being made, undoubtedly on the basis of implicit factual claims. The second claim seems better classified as 'meta-historical': it asserts something of all relevant historical periods. 8 7 David Hackett Fischer, Introduction to Historians' Fallacies (New York, 1970); Arthur M. Schlesinger, "The Inscrutability of History," in The Vital Past: Writings on the Uses of History, ed. Stephen Vaughn (Athens, Georgia, 1985). 8 A similar relation between fact and theory arises in every discipline that attempts to be theoretical. For a discussion of these difficulties, and the role played by history, see David Kaplan and Robert Manners, Culture Theory (Englewood Cliffs, 1972), pp. 67-75. 4

PAGE 11

Are the claims of Thomas Kuhn concerning scientific (rather than technological) change, 'philosophical' or 'meta-historical' or even 'historical '?9 Certainly Kuhn makes some claims which are historical, while others are more difficult to categorize. Here the three-fold distinction breaks down, as it is unable to successfully classify allow a well-known case. Despite this weakness, I want to deliberately over-simplify matters by calling a claim: 'historical,' which concerns the past and involves no obvious generalizations; 'meta-historical,' which also involves obvious generalization; and 'philosophical,' which states the manner in which 'historical' claims provide evidence for 'meta-historical' claims. Arrangement of the Thesis Employing a strategy whereby we progressively narrow our focus, Chapter 2 begins by surveying the literature of the history and philosophy of technology, in the broad, ordinary, sense of these terms. We briefly touch upon the subject which will become a focal point of the thesis, the evolution of management information systems (MIS). Chapter 3 presents two relatively well-known theories of technological change. The originator of each asserts that the theory in question has wider application than they have given it: hence, these theories can be regarded as 9 Thomas Kuhn, The Structure of Scientific Revolutions (Chicago, 1962). 5

PAGE 12

'meta-historical' claims applicable, at least in rough terms, to the evolution of MIS. Chapter 4 is "a" history of MIS in the post-war period. It is intended to be a "standalone" work of history which can be evaluated independently of the uses it serves in this thesis. Chapter 4 serves as the evidential basis against which the 'meta-historical' claims of Chapter 3 are assessed in Chapter 5. In effect, Chapters 3, 4 and 5 represent an "experiment" in which two 'meta-historical' claims concerning technological change are "tested" against a particular episode in the history of technological change. One of the 'meta-historical' claims may be judged, if not false, then less "adequate" to the 'historical' claims of Chapter 4. Chapter 6 reconsiders the 'philosophical' claim that theories of technological should be testable with respect to the history of technological change. We assert that "testability" is possible, but only in a restricted sense. Chapter 7 considers the nature of technological change as exemplified in Chapter 4 and its implications for those who would manage technological change. 6

PAGE 13

CHAPTER 2 REVIEW OF THE LITERAWRE The Philosophy of Science and Technology This chapter selectively surveys the literature in the history and philosophy of science and technology, in prepartion for the next chapter where two specific theories of technological change are considered. Nearly twenty years ago, T.S. Kuhn noted several variations in the historical interplay between science and technology. Prior to the twentieth century, scientific and technological activities were more easily distinguished: in the 18th century, science drove technology; in the 19th, technology drove science; and now, science and technology interpenetrate to such an extent that it is difficult to separate one from the other except in an abstract, ideal sense.1 The development of the atomic bomb provides a good example of this interpenetration. The theory indicating the possibility of achieving a chain reaction came from mathematical physics, but the realization of this possibility was decisively shaped by technological possibilities. The successful approach was to enclose plutonium in a spherical casing of conventional explosive materials: upon 1 Thomas Kuhn, "The Relations between History and the History of Science," Daedelus 100 (1971): 271-304.

PAGE 14

detonation of the conventional materials an implosion would occur, initiating a chain reaction. A number of other workers in the history and philosophy of science have tried to account for the interplay as well as the differences between science and technology. Henryk Skolimowski, for example, locates the essential difference between science and technology in their respective ideals of progress: ... technological progress is the key to the understanding of technology ... in science, we are concerned with reality in its basic meaning; our investigations are recorded in treatises "on what there is." .... Technological progress ... could be described as the pursuit of effectiveness in producing objects of a given kind.2 This distinction is useful way to characterize two easily recognized and different kinds of experience. While primarily concerned with "reality in its basic meaning," the concern of a scientist may be partially expressed via the construction of a fifty-mile tunnel called a super-collider. The engineer building a super-collider will be engaged in "the pursuit of effectiveness in producing objects of a given kind," nevertheless furthering the elaboration of the scientific theory which motivated the construction of the super-collider. Science and technology interpenetrate in a complex and, historically speaking, progressively more systematic means-ends relationship. In the second half of this century, the belief that this means-end relationship can be managed 2 Henryk Skolimowski, "The Structure of Thinking in Technology," in Philosophy and Technology: Readings in the Philosophical Problems of Technology, ed. C. Mitcham and R. Mackey (New York, 1973), pp. 43-45. 8

PAGE 15

has become commonplace. The Strategic Computing Plan assumes that investigation can be structured to maximize the mutual reinforcement of scientific (computer science) and technological (computer engineering) change. Rachel Laudan has recently suggested that philosophies of scientific change and the history of science might form the basis from which philosophies of technological change could be developed. In considering the history and philosophy of science as an entry point into the history and philosophy of technology, several problems arise, .most notably the different senses which "science" holds for the philosopher and the historian. Eman McMullin distinguishes two inter-related senses of "science" by employing a symbolic notion. S 1 concerns the end product of research, while S2 addresses the processes, intellectual and otherwise, which culminate in S 1. For example, the famous equation of Einstein has two stories: the first, which concerns the meaning of the equation itself (S 1) and the second, which describes the processes leading up to the equation (S2).3 Despite this clarification, the philosophy of science (PS) presents additional difficulties because of ambiguities in the meaning of "philosophy." McMullin resolves this ambiguity by noting that "external" philosophy of science (PSE) explains S 1 in terms of broader theories, typically those involving 3 Ernan McMullin, "Philosophy of Science: An Overview," in Scientific Knowledge: Basic Issues in the Philosophy of Science, ed. Janet Kourany (Belmont, California, 1987), pp. 3-19. 9

PAGE 16

the phenomenon of knowing or the logical structure of demonstration, while "internal" philosophy of science (PSI) is "based on what scientists do rather than upon what they say they are doing." Both PSE and PSI leave the scientific content of S 1 to the scientist: the philosophy of science is concerned with S2 and its relation to S 1. One can choose between explanation based either in symbolic logic (PSE) or in historical exposition (PSI). At least in a naive sense, these are very different kinds of explanation.4 For McMullin, the history of science (HS) can serve diverse purposes in PS. HS supplies PSE with examples illustrating the philosophical claims made by PSE, while HS plays an evidential role in PSI. According to McMullin, the claims of PSI receive their warrant from HS. Let us consider this a little more closely. The focus of PSE is on the logical relationship between experiment and theory. A historical account which reveals aberrant scientific behavior having little relationship to logically articulated accounts will have little (apparent) value in PSE. Since the focus of PSI is on historical accounts of scientific behavior, these accounts are important because they may confirm or falsify theories concerning the actions of the scientific community. Given the strategy of using the history and philosophy of science as an entry point into the history and philosophy of technology, the above discussion of 4 For an account of scientific explanation, see Janet Kourany, Scientific Knowledge: Basic Issuesin the Philosophy of Science (Belmont, California, 1987), pp. 20-110. For an account of historical explanation, see Patrick Gardiner, Theories of History (Glencoe, Dlinois, 1959), pp. 344-443. 10

PAGE 17

PS needs to be focused on scientific change. It has been suggested that "internal philosophers of science subject their own claims regarding scientific change to "empirical test" by using HS to evaluate competing theses.5 This comparison is possible by virtue of the different disciplinary goals of history and philosophy. As McMullin notes, HS aims at the particular, the singular, hoping to establish what actually happened. The interpretation of particular events in terms of universal patterns is central toPS, while of secondary importance to HS. In the language of Chapter 1, PSI is inherently 'meta-historical.' The role of history in what McMullin terms "the new genre" of the history and philosophy of science (HPS) will be "prior and in a sense basic, for on the establishing of the analysis as history depends its warrant as philosophy." Thus, HS both provides material for PSI and assesses the adequacy of how that material is treated by PSI.6 Since history aims at the singular, it entertains no "universal" which could be under-, over-, or otherwise determined by historical fact. McMullin does not address problems concerning the appropriation of the "raw data" of HS by the HPS, nor does he spell out how the singularity of HS and the universality of PSI are reconciled in HPS. He does, however, point to a paradigmatic instance of work in the genre: Thomas Kuhn's Structure of Scientific Revolutions. 5 Larry Laudan et al., "Scientific Change: Philosophical Models and Historical Research," Synthese 69 (1986): 141-223. 6 McMullin (n. 3 above). In the language of Chapter 1, McMullin makes the 'philosophical' claim that 'historical' claims warrant 'meta-historical' ones. 11


If Structure is an exemplary work in the history and philosophy of science, then perhaps, following the suggestion of Rachel Laudan, it is an especially good starting point for the development of the history and philosophy of technology. Quite likely for different reasons, historian Edward Constant II contrasts with the Kuhnian notion of paradigm change in science his own conception of technological change. Constant claims that science fuels certain species of technological revolutions by identifying the operating conditions under which technology failure can be presumed. Constant argues that it was not actual failure of earlier engines that led to the development of the turbojet: engines were nowhere near operating conditions of hypothesized failure. The internal dynamic of overcoming technical barriers by the engineering profession, coupled with the revolutionary activities of certain members, led to a paradigm change by the "community," a change evidenced by the displacement of the propeller plane by the jet.7

7 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore, 1980).

The notion of an "internal dynamic" has led some thinkers such as Marx to speak about a "technological imperative" generating inexorable technological expansion. Langdon Winner scrutinizes the claims of such "technological determinists" and clears away some conceptual underbrush by distinguishing between apparatus, technique, and organization. The development of an atomic bomb (apparatus) required the novel social arrangement of Los Alamos (organization) to utilize the skills, methods and procedures (techniques) required to construct an atomic bomb.8

8 Langdon Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (Cambridge, 1977).

Typically, technological determinists attempt to parlay mere illustrations from the history of technology (HT) into evidence for their claims. Consider the following claim: "Major technological changes have been brought about by the actions of government." Although supported by the atomic bomb and other examples, the claim becomes less convincing as one considers additional examples such as the effect of the government on (1) the creation of the incandescent electric light and (2) the oil shale industry in the 1970's and 1980's.9 The claim that major technological changes have been effected by government suffers from an abundance of conflicting evidence.

9 Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore, 1983), pp. 58-61.

Historian David Hackett Fischer has excoriated historians who resolve conflicting evidence by the logical solecism of shifting one's ground. This has led some to suspect that Fischer is pushing a new variant of "scientific history," one in which the "hypothesis of universal form" becomes the methodological rule for the practicing historian. According to the philosopher of science Carl Hempel, the explanation of the occurrence of an event of some specific kind E at a certain place and time consists, as it is usually expressed, in indicating the causes or determining factors of E .... the scientific [emphasis added] explanation of the event in question consists of

(1) a set of statements asserting the occurrences of certain events C1, ..., Cn at certain times and places,

(2) a set of universal hypotheses, such that

(a) the statements of both groups [that is, a set of statements and a set of universal hypotheses] are reasonably well-confirmed by empirical evidence,

(b) from the two groups of statements the sentence asserting the occurrence of event E can be logically deduced.10

10 David Hackett Fischer, Historians' Fallacies: Toward a Logic of Historical Thought (New York, 1970), p. 128.
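Hempel's schema can be compressed into a single logical form. The following rendering is a gloss, not Hempel's own notation: writing C1, ..., Cn for the statements of antecedent conditions and L1, ..., Lm for the universal hypotheses (laws),

\[
\{C_1, \ldots, C_n\} \cup \{L_1, \ldots, L_m\} \vdash E,
\]

where both sets must be reasonably well-confirmed by empirical evidence and the derivation of E must be strictly deductive.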


What does Fischer have to say about the above "covering law"? First, this kind of explanation cannot be had in historical writing, although one might attempt it by adopting the form of universals: "All revolutions occurring in France in the late 18th century were preceded by a period in which those in power attempted to mollify the underclass." Secondly, historians do not use universal laws in their work, perhaps due to "an inherited antipathy to questions and hypotheses and models, which is apt to run below the surface of a historian's thought."11

11 Fischer (n. 10 above), p. 7.

Historians do not use universal laws because, as noted by McMullin, they are not interested in the kind of generalization afforded by universal law. The discipline of history often plays a decisive role in the evaluation of 'meta-historical' claims formulated by economists, sociologists, or even historians, occasionally rejecting them as unwarranted by the facts of a particular historical episode.12

12 For an example of the use of history in the "verification" of anthropological theories, see David Kaplan and Robert Manners, Culture Theory (Englewood Cliffs, 1972), pp. 67-79.

Although the discipline of history has its own norms and autonomy, it may nevertheless provide "data" to complement the techniques and theories of disciplines interested in explanation via subsumption to universals.13

13 During the 1930's, historians saw themselves as providing data to and, at the same time, testing the hypotheses of social scientists. Oscar Handlin, Truth in History (Cambridge, 1979), p. 7.

For Sir Karl Popper, philosopher of science and history, scientific disciplines advance by formulating theories which are capable of refutation, which most often comes through experimental observation. Although Popper is primarily concerned with refutability in science, we see no reason not to appropriate the concept of refutability.14

14 Karl Popper, The Poverty of Historicism (Boston, 1957).

Just as proof in the formal disciplines has a strong social, informal component, so too can refutation in empirical disciplines occur without formal contradiction. One key ingredient in the factual refutation of a claim is the objectivity of the fact. From a phenomenological point of view, the objectivity of science is an attribution rather than a given. With the adoption of a different view toward the experience of the historian, historical research can yield "facts" having an objectivity analogous to that of scientific research, a possibility considered further in Chapter 6. Let us now move from the philosophy of science and technology to the history of science and technology.


The History of Technology

17th and 18th century science was the recreation of the gentry. The rise of the 19th century English mercantile class coincided with the growth of industrial cities propelled by scientific and technological power into the arena of economic and political power. As self-proclaimed "City of Science," Manchester was imitated throughout Europe as towns created technical universities in order to fuel continued technological and economic development.15

15 Arnold Thackray, "Natural Knowledge in Cultural Context: The Manchester Model," American Historical Review 74 (1974): 672-709.

The impetus to create technological universities was carried to late 19th century America in the hearts and minds of doctoral students trained at German scientific universities. The best and brightest of the West foresaw the perfection of Man via scientific rationality.16

16 Charles Rosenberg, No Other Gods: On Science and American Social Thought (Baltimore, 1976).

Specialization and the lack of an institutionalized base led American scientists to act as entrepreneurs, relying on patronage to fund and secure recognition for their research:

... the early gifts for science were particularly significant. They established precedents and projected lines of development. By a process of institutional aggrandizement the first scientific schools, observatories and laboratories created their own need and perpetuated their own kind.17

17 Howard S. Miller, Dollars for Research: Science and Its Patrons in Nineteenth Century America (Seattle, 1970), p. ix.


The link between industry, education, and science in the United States was strengthened by the business practice and philanthropy of Andrew Carnegie. In the early 1870's, Carnegie drew the short-lived ridicule of his competitors by hiring a metallurgical chemist to supervise his blast furnace operations. At the turn of the century, the Carnegie Institution of Washington heralded Big Science while marking the ebb of individualism:

[t]he most effective way to find and develop the exceptional man is to put promising men at work at research, under proper ... supervision. The men who can not fulfill their promise will soon drop out, and by the survival of the fittest the capable, exceptional man will appear.18

18 Miller (n. 17 above), p. 177.

With the spread of industrialization, science bifurcated into pure and applied science. Technology was viewed as applied science: research scientists discovered universal truths concerning nature, and applied scientists exploited these truths, perhaps even embodying them in apparatus.19

19 Thomas Kuhn, The Essential Tension (Chicago, 1977), pp. 142-46.

The conceptual subordination of technology (as applied science) to pure science was reflected in the sociology of science and technology. American technology became the province of the working- or middle-class engineer, while science was practiced in universities and government research centers by a higher social class. By the third quarter of the 19th century, engineers were doers and scientists knowers.20

20 Edwin T. Layton Jr., "Mirror-Image Twins: The Communities of Science and Technology in 19th-Century America," Technology and Culture 12 (October 1971): 562-80.


As the "genius of Menlo Park" who seemingly invented the incandescent electric light without academic science, Thomas Alva Edison personified the difference between (European) knowing and (American) doing. Edison initially saw the incandescent light as a relatively simple adaptation of arc-light technology. Experiment proved the contrary and Edison hired trained scientists and began to consult advanced scientific journals. Edison the engineer seemed to yield to Edison the scientist. In the design of the central power generator needed to establish his system of electric lighting, an optimum armature winding scheme was needed to make the system economically competitive with existing arc-lighting systems. Scientific theory could not determine this scheme. The repeated success of Edison's well-funded laboratory research mixed with science suggested the promise of industrial research.21 It was not until technologically superior European products eroded the market share of the (former Edison) General Electric Company that GE determined to ineet the threat through research. At the Bell Telephone Company, the potential threat to telephone from wireless radio-based systems drove the company to establish control over wireless through a series of patents. Although strategies for doing so differed, both companies bent science to the development of apparatus. 21 Robert D. Friedel, Paul Israel, and Bernard Finn, Edison's Electric Light: Biography of an Invention (New Brunswick, 1986). 18


The wide range of problems faced by research workers forced GE to depart from the disciplinary strategies acquired at educational institutions such as the Massachusetts Institute of Technology. While certainly not discarding scientific theory, GE workers relied on the power of experiment to discover the behavior of specific pieces of electrical apparatus. Bell System experience, resources, and relations with the government all served to restrict potential market areas for the company, encouraging Bell researchers to formulate technological theories: conceptual and mathematical constructs that described the behavior of particular types of technology. Technological theories could be used directly or, with experience, codified for further development and design.22

22 Leonard Reich, The Making of American Industrial Research: Science and Business at GE and Bell, 1876-1926 (Cambridge, 1985), pp. 205-8, 250.

The development of such theories was hastened by both World Wars, leading to the establishment of a research tradition in which even current Bell System workers speak of technological theories.

The United States entered World War II technologically ill-prepared. United States fighter aircraft never reached technological parity with German fighter planes. Luckily, after 1943, attrition eroded the effect of German battle superiority. As the war progressed, technological theories were needed for the construction and control of advanced weapon systems. The delivery and construction of conventional warheads made it necessary to solve ballistics and other mathematical problems faster than any army of human computers. This led the US Army to develop the first electronic digital computer, the ENIAC.23

23 Computers played a critical role in the development of both atomic and hydrogen bombs. See Joel Shurkin, Engines of the Mind (New York, 1985) for a general history of the modern digital computer.

Often regarded as the father of the modern electronic digital computer, the renowned mathematician, physicist, economist, weaponeer, and computer designer John von Neumann foresaw the applicability of the computer to a host of mathematical, scientific and technological problems. Indeed, as we shall see in more detail in Chapter 4, von Neumann played a decisive role in the diffusion as well as the development of advanced computing technology in the post-war world, principally in the context of research and development of thermonuclear weapons. It has been claimed that, because of their focus on scientific problems, von Neumann and other computer pioneers did not foresee the widespread demand for computing generated by business and industry.24

24 Paul Ceruzzi, "An Unforeseen Revolution: Computers and Expectations, 1935-1985," in Imagining Tomorrow: History, Technology and the American Future, ed. Joseph Corn (Cambridge, 1986).

Von Neumann understood, however, that the construction of an intercontinental ballistic missile system depended less on the level of missile or thermonuclear device technology than on the ability to manage the concurrent development of a myriad of technological subsystems. We shall argue in Chapter 4 that von Neumann played a central role in the development and diffusion of management information systems (MIS).


Why is the development of MIS during the post-war period a noteworthy instance of technological change? As MIS changed in response to the demands of concurrent development, manual processing gave way to processing by electronic digital computers. This change perhaps made the additional complexity associated with concurrency appear less daunting, so that national security managers no longer felt constrained by it. It is possible that concurrency, which began as a possibility pursued only because of the exigencies of the Cold War, was thereby transformed into a standard management approach for projects lacking the urgency of the ICBM projects. If so, this change in MIS occasioned and perhaps encouraged the creation of programs dwarfing even the Manhattan Project. To the extent that such programs have led to an alteration of American social experience, favoring the survival of some species of experience over others, this change in MIS may have led to broader changes. Thus, with the interpretation of MIS as a technology (an organization, in Langdon Winner's terms), we have moved from the study of nature to the creation of artifacts and on to the use of these artifacts in the guidance and control of evolving complexes of nature, artifact, and man.


CHAPTER 3
TWO META-HISTORICAL CLAIMS

Introduction

This chapter presents two theories of technological change, what I am calling 'meta-historical' claims. After considering the more fully developed views which form the background for these theories, we focus on the portion of this background that allows us to compare the two theories. Each theory is articulated in terms of a very different historical framework. The theory of technological disequilibrium of Thomas P. Hughes is elaborated in the context of the growth of electrification from 1880 to 1930, while the theory of technological co-evolution of Edward Constant has as its context the development and use of supersonic aircraft. As a result, one would expect some difficulty in comparing the two theories.1

1 See W. David Lewis, review of The Origins of the Turbojet Revolution, by Edward Constant II, in Technology and Culture 23 (1982): 512-16; Terry S. Reynolds, review of Networks of Power, by Thomas P. Hughes, in Technology and Culture 25 (1984): 644-47.

Edward Constant provides one point of comparison when discussing one aspect of the theory of technological co-evolution, which implies more than either technological disequilibrium or "reverse salients in an advancing technological front," the image Thomas P. Hughes has used to portray severe problem areas that hold up the rapid advance of an entire technology.2

2 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore, 1980), p. 14.

This threefold contrast between technological co-evolution, technological disequilibrium, and reverse salients in an advancing technological front is the vehicle for comparing these two broad theories of technological change. First, we examine the concepts of technological disequilibrium, reverse salient, and technological co-evolution. Second, we consider the applicability of the concepts of disequilibrium and co-evolution to the emergence of MIS in the post-war era.

Technological Disequilibrium

In 1971, historian of technology Thomas P. Hughes published his study of Elmer Sperry and the various Sperry companies which formed around that prolific inventor/engineer of the late 19th and early 20th centuries. Hughes began to articulate the metaphor of "reverse salients in an expanding technological front," arguing that Sperry was an interesting man aside from his profession, but to write of him the biographer must also write of machines, processes, and systems.3

3 Thomas P. Hughes, Elmer Sperry, Inventor and Engineer (Baltimore, 1971), p. xv.

Because Hughes was trained first in engineering and subsequently in history, it is natural that the "systems concept" figures prominently in his thinking and writing. The importance of the "systems concept" is that it enables us to establish a vocabulary relatively common to both Hughes and Constant.

The development of the two related technologies of gun-fire control and gyro-compasses in the years before Sarajevo engaged the energies of the Sperry company and its namesake. The war itself brought, first, increased demand by European governments for Sperry gyrocompasses, and then the time and money needed to bring research and development efforts to fruition. As the conflict progressed, the character of the war changed: the Battle of the Marne stopped the German offensive in 1914, and the Battle of Jutland brought a stand-off in the naval war. Both the Germans and the Allies then looked to technological invention, as well as battles of attrition, to break the deadlock.4

4 Hughes (n. 3 above), p. 202.

The relevant Sperry companies were well-informed about the need for improved gunfire control and navigation, and well-positioned to be "the brain mill" for the military when the United States entered the First World War.

The phrase "reverse salient" was widely used by military strategists. A reverse salient is produced when one portion of an advancing line of battle is much further extended than the rest of the line. During the Second World War, a reverse salient was created by the German offensive at the Battle of the Bulge. A reverse salient is an inherently unstable configuration. First, it invites attack at the two points where the line protrudes, in order to enclose and neutralize the enemy forces operating within the reverse salient. Second, since the combatant operating within the reverse salient hopes to avoid becoming enclosed, it must either retreat or rapidly advance the front in the neighborhood of the reverse salient. In either case, tremendous energies must be expended to eliminate a military reverse salient.

WWI provided Sperry with a matrix of guidance and control problems within which to work, problems which illustrate the idea of a technological reverse salient: that is, "a reverse salient in an advancing technological front." Sperry improved both search-light defense and anti-aircraft gun-fire control, forcing German planes to fly higher in order to evade defensive attack and thus rendering bombsights less effective. Hence, a technological reverse salient was created in which one component of aerial warfare (search-light defense) was greatly advanced relative to another component (bombsights). Of course, these improvements were quickly replicated by the Germans so that for a time Allied planes were also required to fly higher:

Improvements in search-light defense also affected other components of the aerial warfare system, necessitating, for example, an improvement in bombsights so that airplanes could fly higher and yet bomb accurately.5

5 Hughes (n. 3 above), pp. 219-20.

Prior work in navigational and gun-fire controls made it possible for Sperry to develop improved bomb-sights, allowing Allied planes to fly higher and neutralizing improved German search-light defense. Thus, the reverse salient created by improved search-light defenses was removed.


Gun-fire control provides another example of a technological reverse salient. Between 1912 and 1916, Sperry greatly increased the accuracy and responsiveness of naval turret guns. As the range of such guns increased from under 4000 yards before 1900 to about 10,000 yards in 1910 to about 20,000 yards by the end of the war, greater accuracy and responsiveness were required. Unfortunately, these improvements also increased the effect of aiming errors. If a gun must deliver a shell 4000 yards and has an angular error of 0.1 degrees, trigonometry reveals that the shell will be wide of its target by less than 7 yards. If, however, the range of the gun is increased to 10,000 yards, the error increases to nearly 18 yards. Greater range demands greater accuracy, and greater accuracy invites the development of gunnery with greater range. Eventually, additional advances in either range or accuracy became much more difficult to achieve: it was not until WWII that a new system was found capable of providing further advances. The superiority of sea-based air power was demonstrated repeatedly on the Pacific front. A carrier equipped with planes brings the gun to the target, minimizing aiming error due to extreme range. As the explosives carried by planes became more powerful, the battleship was reduced to playing a support role in sea battle and amphibious operations.
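These figures follow from elementary trigonometry; the check below is a gloss on the numbers in the text, not a derivation found in the thesis. A shell fired at range r with angular error theta lands wide by

\[
e = r \tan\theta, \qquad \tan(0.1^{\circ}) \approx 0.00175,
\]

so e is just under 7 yards at 4000 yards and about 17.5 yards (the text's 18) at 10,000 yards; at the war's-end range of 20,000 yards, the same angular error produces a miss of roughly 35 yards.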


This rather extended example illustrates how, because of coupling between sub-systems, change in subsystem A (concerned with range) can generate change in subsystem B (concerned with accuracy), generating further change in subsystem A. Oscillation of the change point between the two subsystems eventually dampens as each subsystem approaches its inherent operating limits. Major breakthroughs can only come through a new system, possibly one in which some of the old subsystems are completely replaced, perhaps radically altering the coupling between the subsystems.

In a historiographical essay published in 1980, Hughes brought the theme of systemic change to the forefront in endeavoring to suggest how

... writers [of major historical works] have used the concept of the system to organize, analyze and draw conclusions about the history of technology from disparate materials.6

6 Thomas P. Hughes, "The Order of the Technological World," in History of Technology, ed. A.R. Hall and N. Smith (London, 1980), p. 3.

Hughes provides a definition of "system" aimed at the historian of technology:

... a system is constituted by related parts or components. Because the parts are related, the state, or activity, of one influences the state, or activity, of others in the system .... A system has characteristics different from those of its components in isolation .... Control of a system is often centralized and man often closes the loop. The managers in the Chandler systems are examples.7

7 Hughes (n. 6 above), p. 2.

Hughes provides a single example of a system, but given this abstract definition, a variety of instances exemplify this abstract structure. The system concept is fruitful for the history of technology because it provides a conceptual structure that can be applied to a variety of technologies and to the phenomenon of technological change.8

8 In this Hughes is not unique, although the system perspective is less well enunciated in Hugh G. J. Aitken, Taylorism at Watertown Arsenal: Scientific Management in Action 1908-1915 (Cambridge, 1960).

Hughes asks which technological systems merit the attention of historians of technology:

Having had success in planning and constructing small systems like the machines, then larger ones like the factory, man in the twentieth century has drawn upon his empirical knowledge of these to try such monstrous systematic endeavors as the Manhattan Project.9

9 Hughes (n. 6 above), p. 2. The language of Hughes recalls the apparatus-technique-organization distinction of Langdon Winner.

Hence, the Manhattan Project is of interest not simply for the apparatus created and used, but also for novel forms of social organization which spawned novel mathematical and computing techniques. Chapter 4 concerns the application of techniques and apparatus to the management of large organizations, a line of investigation which can be traced to The Visible Hand by business historian Alfred Chandler. There Chandler describes the rise of the American managerial class and the change in business organization and methods it accompanied. Hughes observes with Alfred Chandler that

The integration of all the processes of textile production stimulated 'innovation in each of the specific processes.' The rapid and complex flow of materials presented challenging problems of coordination and monitoring, problems that would also bring the visible hand into other industries but several decades later.10

10 Hughes (n. 6 above), p. 8.

If the problems of coordination and monitoring inside large organizations are within the purview of the historian of technology, so too is the post-war emergence of management information systems.

In 1983, Hughes published Networks of Power, which provides an explicit model of systemic change in the evolution of electric power systems:

although the electric power systems described herein were introduced in different places and reached their plateaus of development at different times, they are related to one another in the overall model of system evolution that structures this study at the most general level.11

11 Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore, 1983), p. 14.

Only the third of the five phases of the model is of interest to us:

The essential characteristic of the third phase of the model is system growth. The method of growth analysis used in this study involves reverse salients and critical problems. Because the study unit is a system, the historian finds reverse salients arising in the dynamics of the system during the uneven growth of its components and hence of the overall network .... Having identified the reverse salients, the system tenders can then analyze them as a series of critical problems .... An inventor or applier of science transforms an amorphous challenge (the backwardness of a system) into a set of problems that are believed to be solvable .... When engineers correct reverse salients by solving critical problems, the system usually grows if there is adequate demand for its product. On occasion, however, a critical problem cannot be solved .... this study offers an explanation not only of the evolution of systems as reverse salients are identified and solved, but also of the occasional emergence of new systems out of the failure to solve critical problems in the context of the old.12

12 Hughes (n. 11 above), pp. 14-15.


These are the key notions for comparing the theories of Hughes and Constant and also for determining the applicability of the theory to the emergence of MIS in the post-war world.

Technological Co-evolution

While still a doctoral student at Northwestern University, Edward Constant II put forward a theory of technological change and claimed its applicability beyond the turbojet.13

13 Edward Constant II, "A Model for Technological Change Applied to the Turbojet Revolution," Technology and Culture 14 (1973): 553-72. This article summarizes his dissertation work and was an early look at what became The Origins of the Turbojet Revolution.

Because it claims that a technological revolution occurred with the development of the turbojet, Constant's work recalls for many Structure of Scientific Revolutions. Because Constant recognizes the limits which Kuhn puts on Structure when he explicitly limits its scope to science, Structure is available to Constant only as a basis for analogy. Recognizing that parallels between science and technology may carry over to scientific and technological change, Constant employs the Kuhnian metaphor when he defines

... a technological paradigm as an accepted mode of technical operation, the usual means of accomplishing a technical task. ... Normal technology, like normal science, is not static ... It is "puzzle solving." When a technological revolution occurs, however, the community paradigm changes. Technological revolution is defined here only in terms of a relevant community of practitioners and has no connotation of social or economic magnitude.14

14 Constant (n. 13 above), p. 554.

While the central notion of Constant's theory is that of presumptive anomaly, this thesis is more concerned with the concepts of technological disequilibrium and co-evolution which are presupposed by the discussion of presumptive anomaly. The concept of technological equilibrium shows the influences of philosopher Karl Popper and biologist Donald Campbell, proponents of an "evolutionary epistemology" which suggests that advances in knowledge result from behavioral variation and the selective retention of these variations. The scientific or technological community explores its "world," making multiple claims about its nature. In one fashion or another these claims are tested, so that claims which are currently "best" survive and continue to guide the practice of the community.

Technological variation-retention is similar [to that of science], but does differ at important points .... technological selection differs from scientific selection. Technological systems directly, not vicariously, explore the environment: planes crash, engines explode. Technological systems thus face direct elimination by an environment unmediated by background scientific theory in a sense that scientific conjectures do not.15

15 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore, 1980), p. 7.

Evolutionary epistemology is a useful starting point, but is too general:


a more highly articulated, middle-level model for technological change, a model less general than evolutionary epistemology but in no way contradictory to it, is necessary.16

16 Constant (n. 15 above), p. 8.

Because the community of technological practitioners is proposed as the unit of analysis, technological paradigms, and therefore revolutions, are defined by the behavior of a community of technological practitioners. A technological revolution, then, is primarily a behavioral and only secondarily a cognitive phenomenon: a revolution has occurred when "the usual means of accomplishing a technical task changes." What events generally precipitate such changes? Often, it is the crash of a plane or the explosion of an engine that eliminates a form of technological practice or at least identifies its operating limitations. In other cases, technological disequilibrium, of either the intersystem variety (improved looms demanded improved spinning) or the intrasystem variety (improvements in one machine component demand improvements in other components), can result in a pressing need for change.17

17 Constant (n. 15 above), p. 13.

Neither type of disequilibrium will be seen as a potentially soluble problem, according to Constant, until the relevant community of technological practitioners associates the problem with some candidate solution, conventional or radical. Now consider technological co-evolution, key to contrasting the theories of Hughes and Constant:

Transposed to technology [from evolutionary biology], the concept of co-evolution implies that the development of one set of devices may be linked intimately to the development of other devices within the same macrosystem, and that the two sets of devices may exert powerful, mutually selective pressure on each other. For example, ... the direction of both water turbine and steam turbine development was highly responsive to the demands of electrical power generation; the development of dynamos, in turn, was dependent upon the characteristics of water and steam turbines.18

18 Constant (n. 15 above), p. 14.

Technological co-evolution is a special case of technological disequilibrium because it spells out the nature of the disequilibria:

Technological co-evolution implies, first, specificity: the direction of development of a given technology (steam turbines) is linked to some other specific co-evolving technology (power generation and transmission) .... Second, technological co-evolution implies a hierarchy of retentive or selective processes. The fate of a given invention (its developmental direction) not only depends on its competition with alternative devices performing the same or similar functions and on its co-evolution with a specific other technology, but also depends on the evolutionary success or failure of the higher-level macrosystems of which it is a part.19

19 Constant (n. 15 above), p. 14.

Conclusion

If the theory of technological disequilibrium is applicable to the emergence of MIS, then the theory of co-evolution is also applicable. By virtue of the generality of "the systems concept" presupposed by both theories, both theories appear to be applicable to the emergence of management information systems (MIS) in the post-war years.20

20 For a discussion of the dangers of using the "systems concept," see A. D. Hall and R. E. Fagen, "Definition of System," in Modern Systems Research for the Behavioral Scientist, ed. Walter Buckley (Chicago, 1968).

Nevertheless, what are the system boundaries: are humans part of the system? If an MIS consists of technique and social organization as well as apparatus, humans are an element of an MIS.21

21 A current textbook regards people as a distinct and most important element of an information processing system. Jerome S. Burstein and Edward G. Martin, Computer Information Systems with BASIC (Chicago, 1989), pp. 36-67.

Given the applicability of both theories and the greater generality of the disequilibrium theory, it should be possible to account for the emergence of MIS in terms of the theory of technological disequilibrium. If not, then neither theory has the generality required of bona fide theories. If so, then we can go on to consider the theory of technological co-evolution. Since co-evolution is a special kind of disequilibrium, the theory of technological co-evolution can be affirmed only if we can explain the evolution of MIS in terms of (1) the mutual, reciprocal influence of (at least) two subsystems, and (2) the presence of a hierarchy of selective-retentive processes. Absent either factor, the theory of technological co-evolution is less adequate than the theory of technological disequilibrium with respect to the emergence of MIS.


CHAPTER 4
MANAGEMENT INFORMATION SYSTEMS AND TOTAL WAR

Introduction

The United States was not prepared for the Second World War. As General Henry "Hap" Arnold put it:

The margin of winning was narrow .... many times we were near losing, and ... our enemies' mistakes often pulled us through.1

1 Third Report to the Secretary of War by the Commanding General of the Army Air Forces (12 November 1945), in The Impact of Air Power, National Security and World Politics, ed. Eugene M. Emme (Princeton, 1959).

Allied victory in Europe, for example, while certainly aided by the strategic errors of Hitler, was equally the result of other, less glamorous factors. After the Battle of the Atlantic, the steadily increasing economic isolation of the Rome-Berlin axis made it difficult to obtain raw materials for manufacturing. The German decision to withhold resources from computer and atomic scientists helped ensure Allied first use of the atomic bomb. Superior Allied industrial capacity resulted in an increasingly heavy aerial attack that severely reduced both the volume and consistency of Axis industrial output.2

2 James L. Stokesbury, A Short History of World War II (New York, 1980).

In the aftermath of World War II, the issue of preparedness was central to both civilian and military defense planners. While successful and continued use of a weapon such as the Stuka presupposes research, development, training, deployment, and maintenance, the development of a breakthrough weapon such as the atomic bomb requires the allocation of resources on a significantly greater scale. In wartime, the will exists to allocate resources on a large scale, but in peacetime, the situation is less certain. The terrible, swift success of atomic weapons helped continue such massive resource allocation.

The atomic bomb was the logical conclusion of the strategic bombing doctrine. Disagreement about the effectiveness of conventional bombing began during World War II and continued into the Vietnam era, but the hastening of V-J Day by atomic weapons seemed indisputable. The unforeseen side-effect was that an era of total war had now begun: after Dresden and Hiroshima, it seemed clear that in the next war, some populations might be sacrificed for others. Military-industrial relations were deliberately restructured so that, at least from the standpoint of nuclear weapons development, the United States would never again have to depend on its enemies' mistakes for survival.

It is against this backdrop that this paper considers the evolution of management information systems in the period from 1945 to 1960. Management information systems (MIS) collect data concerning organizational performance and transform it into information to support management decision-making. Typically, computers are at the heart of such systems and permit project performance to be compared against established time and cost schedules. The roots of management information systems can be found in World War I, when the Gantt chart was developed for the Army by a co-worker of Frederick W. Taylor, often dubbed the father of scientific management.

The development of MIS in the aftermath of World War II is sketched in terms of the career of John von Neumann, a mathematician, physicist, economist, weaponeer, and computer designer who played key advisory roles as the U.S. prepared for total thermonuclear war. Although the major primary source for this research is the von Neumann Manuscript Collection at the Library of Congress, the objective of this paper is to develop a new framework for both new and old material.

Von Neumann was a major contributor to the mathematics of solving large systems of equations. The solution of such large systems figures equally in the development of atomic weaponry, economic planning, and games of military strategy. Being at the intellectual center and near the major funding sources allowed von Neumann to play a major role in the design of several computing machines. One von Neumann machine was eventually used by the Navy to computerize the Program Evaluation and Review Technique (PERT). The PERT system is the prototypical management information system: by the early 1960's, the use of this system was required throughout the Defense Department and its use soon spread throughout corporate America. This chapter suggests the intellectual influence of von Neumann in the development of management information systems.
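The scheduling arithmetic at the core of PERT can be sketched briefly. The fragment below uses hypothetical tasks and durations; nothing in it is drawn from the Navy system or the von Neumann papers. It computes the earliest finish time of each task, from which the critical path, the chain of tasks that determines the overall schedule, can be read off:

    # Hypothetical project: task -> (duration in weeks, prerequisite tasks).
    # Tasks are listed in dependency order, so each prerequisite is
    # processed before the tasks that depend on it.
    tasks = {
        "design":   (4, []),
        "tooling":  (6, ["design"]),
        "training": (3, ["design"]),
        "assembly": (5, ["tooling", "training"]),
    }

    # Earliest finish time of each task: a task starts as soon as its
    # slowest prerequisite finishes.
    finish = {}
    for name, (duration, prereqs) in tasks.items():
        earliest_start = max((finish[p] for p in prereqs), default=0)
        finish[name] = earliest_start + duration

    print(finish)  # {'design': 4, 'tooling': 10, 'training': 7, 'assembly': 15}

The critical path here is design, tooling, assembly (4 + 6 + 5 = 15 weeks); management attention concentrates on that chain, since slippage anywhere along it delays the whole project.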


Von Neumann and the Bomb

As Harry Truman noted in his radio address on the eve of Hiroshima, the battle of the laboratories held fateful risks for us as well as the battles of the air, land and sea, and we have now won the battle of the laboratories as we have won the other battles.3

3 Emme (n. 1 above), p. 84.

In May of 1940, the 37-year-old, Hungarian-born von Neumann was recruited by the Army Chief of Ordnance along with physicist I. I. Rabi, aerodynamicist Theodor von Karman and others to form a scientific advisory committee. During the third meeting of the scientific advisory committee on October 3rd and 4th of 1941, von Neumann met with future Nobel Laureate in chemistry Harold Urey and staff member Robert H. Kent of the Army Ballistics Research Laboratory (BRL) to discuss the theory of detonation. He also discussed instrumentation for bomb ballistics with Major Leslie Simon.4

4 Von Neumann Manuscripts, Manuscripts Division, Library of Congress (hereafter cited as von Neumann MSS).

Major civilian contributors to the war effort received the Medal for Merit. The 262 recipients included Dean Acheson, Irving Berlin, J. Edgar Hoover, Bob Hope, and von Neumann. Harry Truman said that von Neumann:

.... by his outstanding devotion to duty, technical leadership, untiring cooperativeness, and sustained enthusiasm, was primarily responsible for fundamental research by the United States Navy on the effective use of high explosives, which has resulted in the discovery of a new ordnance principle for offensive action, and which has already been proved to increase the efficiency of air power in the atomic bomb attacks upon Japan.5

5 Von Neumann to Ralph E. Duncan, 18 December 1957, von Neumann MSS.

The new ordnance principle was the "air burst principle," expressed as follows: detonating an atomic bomb at a given height above ground increases the blast effect relative to that which occurs if the bomb is instead detonated at ground level. Because less fissionable material is required to achieve a given blast effect, one can achieve "more bang for the buck." The early work of von Neumann had consequences for research at Los Alamos as well as for testing at Alamogordo and for Pacific deployment. On the strength of this work, the recognition it brought, and the contacts it established, von Neumann became an effective agent in nuclear weapons policy-making after World War II. As military and civilian planners considered the implications of the last war for a possible next war, the importance of pre-war work by von Neumann began to emerge.

A Passion for Planning

As a war-weary American public began to return to isolationism, its military, economic, and political leaders began to appreciate what such isolation would mean. The American organization for war had not arisen spontaneously: it had, in FDR's words, been "carefully thought out ... created from the top down, not the bottom up."6

6 Kenneth S. Davis, ed., Arms, Industry and America (New York, 1971), p. 4.

Given the demonstrated importance of a planned war economy to the research, development, production, and deployment of advanced weaponry, it is no wonder that American military leaders feared a return to pre-war "normalcy." America's wartime leaders were captive of a "profound fear, mounting to almost an obsession, of ... a revived isolationism after the war."7

7 Richard M. Freeland, The Truman Doctrine and the Origins of McCarthyism (New York, 1972), p. 25.

Perhaps it was in this spirit that Charles E. Wilson of General Electric (later head of the Office of Defense Mobilization) declared in 1944 that the nation needed a "permanent war economy."8

8 Sidney Lens, "The Military-Industrial Complex," in Arms, Industry and America, ed. K. Davis (New York, 1971), p. 61. Some critics of the military-industrial complex take it as axiomatic that such a statement originates solely in economic self-interest. Such critics should be reminded that war work was not always profitable. National Cash Register, after important wartime cryptographic work, opted out of Government work after the war because of the restrictions associated with it and limited opportunities for profit.

In the spring of 1946, the Army circulated a memorandum treating scientific and technological resources as specifically military assets, enunciating a view of science and technology that had emerged just prior to World War I and continued through the next war via the Office of Scientific Research and Development.9

9 Concerning the emergence of institutional forms of cooperation between science, technology, industry, and the military, see the discussion of the Naval Consulting Board and the National Research Council in Thomas P. Hughes, Elmer Sperry, Inventor and Engineer (Baltimore, 1971), pp. 244-50.

Recalling the Manhattan Project, the so-called Eisenhower Memorandum stated that

(1) The army must have civilian assistance in military planning [emphasis added] as well as for the production of weapons .... (4) Within the army we must separate responsibility for research and development [emphasis added] from the functions of procurement, purchase, storage, and distribution. By developing the general policies outlined above under the leadership of the Director of Research and Development the army will demonstrate the value it places upon science and technology and further the integration of civilian and military resources [emphasis added].10

10 Davis (n. 6 above), pp. 73-76.

Budgeting was one aspect of military planning which was of particular interest after the war. In 1948, government procedures required the Air Force to estimate budgets about two and a half years in advance, and to respond quickly but systematically to Congressional changes. Although military and civilian planning organizations such as the War Production Board grew to enormous size during the war, budgeting was still a very time-consuming process. During the war, the Army Air Staff created a program monitoring function which began with a war plan and derived from it requirements for training, supply, etc. Even with this improvement, it took approximately seven months to complete the process.

After the war, staff members of the Air Force Comptroller foresaw that

efficiently coordinating the energies of whole nations in the event of a total war would require scientific programming [that is, planning] techniques. Undoubtedly this need had occurred many times in the past, but this time there were two concurrent developments that had a profound influence: (a) the development of large scale electronic computers, and (b) the development of the inter-industry model.11

11 George B. Dantzig, Linear Programming and Extensions (Princeton, 1963), pp. 14-15.

The inter-industry model permits a quantitative assessment of the ability of the U.S. economy to support expenditure levels needed for preparedness. Although electronic computers and the inter-industry model developed to some degree independently of one another, there was significant interaction. The inter-industry model was "pulled" by the development of large computing machines, while the development of these machines was partially "pushed" by the computational requirements of the inter-industry model. We consider first the development of the inter-industry model before turning to the development of large-scale computing machines.

The Search for Optimum War

As chief of the combat analysis activity, Dr. George Dantzig hoped to apply "scientific" techniques to the planning process. In 1947

... [the mathematicians von Neumann and Dantzig] conceived of ... developing a set of linear inequalities that would reflect the relationship between various Air Force activities and the items that were consumed in the military environment ... presentations were made to General Rawlings and much of the Air Staff, and they sold the concept of what they called Project SCOOP, which was the Scientific Computation of Optimum Programs.12

12 Edward Dunaway, Interview by Cadet James R. Luntzel III, 22 June 1973, Call Number K239.0512-935, transcript, United States Air Force Oral History Program, p. 33.

Linear programming is the basis for the "scientific computation of optimum programs," where 'programming' is the allocation of scarce resources to competing activities. Associated with each allocation pattern is a "cost": linear programming (LP) seeks programs that minimize "cost" or maximize "benefit."

The following example describes the use of LP. Suppose prototypes of weapon systems A and B have been developed and the quantities of each system to be manufactured must now be decided. Let the variable x (y) represent the number of units of system A (B) to be manufactured. Given information about the costs associated with each unit of each system, the problem is to determine the appropriate values of x and y. Suppose that system A requires 1 hour of training and 3 hours of maintenance per week, while system B requires 2 hours of training and 4 hours of maintenance. Suppose also that system A has an effectiveness rating of 50, whereas system B has a better rating of 80. Suppose that during any week, 32 hours are available for training and 84 hours are available to perform maintenance.


Determining how many units of each system to manufacture, even for this simple yet realistic example, is difficult since there are numerous variables to consider: training cost, maintenance cost, mission effectiveness, and available time for training and maintenance. The problem can be stated in the customary form of linear programming as follows: maximize total effectiveness E = 50x + 80y, subject to the constraints x + 2y <= 32 and 3x + 4y <= 84. Problems of personnel assignment, the blending of materials, product distribution and transportation, and investment portfolio management can all be solved via linear programming (LP). LP was most fully articulated by the mathematician Dr. George Dantzig; nevertheless, "credit for laying the mathematical foundations of this field goes to John von Neumann more than to any other man."13

13 Dantzig (n. 11 above), p. 24.
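For illustration, the little program above can be solved with a modern off-the-shelf solver. The sketch below is an anachronism (Dantzig's simplex method, worked by hand or on the early machines, was the period technique), and it makes explicit the nonnegativity conditions x >= 0 and y >= 0 that the statement of the problem leaves implicit:

    from scipy.optimize import linprog

    # Maximize E = 50x + 80y by minimizing -E = -50x - 80y.
    c = [-50, -80]
    A_ub = [[1, 2],    # training hours:    x + 2y <= 32
            [3, 4]]    # maintenance hours: 3x + 4y <= 84
    b_ub = [32, 84]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)   # x = 20, y = 6, total effectiveness E = 1480

The optimum lies at the corner where both constraints bind: manufacture 20 units of system A and 6 units of system B, using all 32 training hours and all 84 maintenance hours.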


Interest in the interaction between Air Force projects and the civilian economy was not limited to budgetary matters. Motivated by work begun in 1936 at the Bureau of Labor Statistics, Dantzig and von Neumann sought mathematical generalizations of the input-output (I-O) models of the economist Wassily Leontieff. The simplest example of an input-output model involves an electric "industry" and a water "industry." The electric "industry" produces electricity; the water "industry" produces (provides) water as output. The electric "industry" uses both electricity and water as input in order to produce more electricity; the water "industry" uses both electricity and water to produce (provide) water as its output. Given the demands of each "industry" for its own output and for the output of the other "industry," input-output analysis determines the minimum output required of each "industry" in order to satisfy total demand. Total demand is the sum of (a) internal demand (by the two "industries") and (b) external demand (by any other industries).14

14 This brief description is based on Raymond A. Barnett, College Mathematics for Management, Life, and Social Sciences (San Francisco, 1981), pp. 160-65.

The mathematical generalizations of von Neumann and Dantzig made it possible to formulate input-output (I-O) models for problems involving hundreds of industries.15

15 Dantzig (n. 11 above), p. 18.

The Air Force went on to support more applied work on Leontieff-type inter-industry models, thereby reflecting progress on both the mathematical and computational fronts. For example, in 1951, Leontieff used the I-O model to study the interactions of 500 sectors of the American economy. Leontieff won the 1973 Nobel prize in economics largely due to the impact of input-output analysis on economic planning in industrialized countries.
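The two-"industry" example can be made concrete. Writing A for the matrix of technical coefficients, where entry A[i][j] is the amount of product i consumed per unit of product j produced, total output x must cover internal plus external demand, x = Ax + d, so x solves (I - A)x = d. The coefficients below are hypothetical, chosen only to make the arithmetic visible:

    import numpy as np

    # Row/column 0 = electricity, row/column 1 = water.
    # A[i][j]: units of product i consumed per unit of product j produced.
    A = np.array([[0.3, 0.2],
                  [0.1, 0.4]])

    # External (final) demand for electricity and water.
    d = np.array([12.0, 8.0])

    # Solve (I - A) x = d for the required total outputs.
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)   # [22. 17.]: 22 units of electricity, 17 units of water

The same linear-algebraic step, solving (I - A)x = d, had to be carried out for hundreds of industries at once, which is why the inter-industry model "pushed" the development of large computing machines.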


LP techniques and inter-industry models made it possible to plan for total war, hot or cold. For the newly formed Air Force, with the mission to deliver atomic weapons to their targets, LP was an essential tool:

... [there was] a lot of work being done in looking at the deployment of targets that we would want to attack; backing off from the targets that we needed to attack, through a complete war plan back to the requirements on the civilian economy ... and you then back off from that to your training structure and your logistics structure. There is a continuous string that you could put together mathematically to show that, in order to fight this kind of war you've got to have these kinds of resources from the civilian economy.16

16 Dunaway (n. 12 above), pp. 6-7.

But war is a matter of strategy as well as linear programming and input-output analysis: here too, von Neumann was a principal player. In 1944, von Neumann and Oskar Morgenstern published The Theory of Games and Economic Behavior, in which decision-making is studied via a formal calculus. Using game theory, one can determine the "optimum" choice of action given a description of the options available to "rational" players. This work generated considerable interest among strategists and, more surprisingly, among planners. Interest among planners is partly explained by noting that certain linear programming problems are formally equivalent to von Neumann-Morgenstern games of strategy. This implies that certain linear programming problems can be solved as games, and that certain games can be solved via linear programming.
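The equivalence can be stated precisely for finite two-person zero-sum games; the formulation below is the standard one, offered as a gloss rather than drawn from the thesis. If M is the payoff matrix, player 1 seeks a mixed strategy x maximizing the guaranteed payoff v:

\[
\max_{x,\,v} \; v
\quad \text{subject to} \quad
M^{\mathsf{T}} x \ge v\mathbf{1}, \qquad \sum_i x_i = 1, \qquad x \ge 0.
\]

This is a linear program, and von Neumann's minimax theorem guarantees that its optimal value equals that of the dual program solved by player 2, so solving the game and solving the paired linear programs are one and the same computation.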


There are at least two threads tying together nuclear weapons research, economic planning, and military strategy. One thread is the common formalism necessary (but not sufficient) to efficiently solve large numbers of simultaneous linear equations. The second thread is the digital computer, which provides the means for transforming the mathematical possibility of solution into a practically achievable solution. At the dawn of the "computer age," von Neumann wove these threads together.

At the Dawn of Electronic Computing

Computing machines using both electrical and mechanical computing devices were developed and used during the war, but the Electronic Numerical Integrator and Calculator (ENIAC) was the world's first general purpose electronic digital computer.17

17 The Mark series of electro-mechanical computing devices was commissioned by the Office of Naval Research and developed during the war by Harvard University and IBM.

The Army Ballistics Research Laboratory (BRL) commissioned the construction of the ENIAC by the Moore School of Engineering at the University of Pennsylvania. Von Neumann first became associated with the Moore School group in the second half of 1944, when he was appointed consultant on the EDVAC, the successor machine to the ENIAC. Von Neumann's involvement in the Manhattan Project beginning in late 1943 alerted him to the need for improved methods of calculation to support "high explosive" research at Los Alamos. Previous experience with shock wave phenomena made him a natural contributor to solving the problem of achieving an atomic explosion: to produce the desired "chain reaction," conventional material would be detonated, producing a blast pushing inward simultaneously on all points of an encased sphere of plutonium.


According to Herman Goldstine, friend, admirer and collaborator of von Neumann on the ENIAC, and subsequent employee of IBM: [Although James L. Tuck and] von Neumann invented an ingenious type of explosive lens that could be used to make a spherical wave ... von Neumann's main contribution ... was ... in showing the theoretical people how to model their phenomenon mathematically and then to solve the resulting equations numerically.18 Mathematical models of physical phenomenon were created and solved via the digital computer, allowing the likely effects of various experimental configurations to be considered without the time and cost needed to build and test each configuration.19 His status as "expert" followed from his role on the Manhattan and ENIAC/EDV AC projects and made von Neumann valuable to those in the military/govemment(!.l bureaucracy interested in the combination of new weaponry and computational power. By April 1945, von Neumann was consulting to both the theoretical group in high explosives and the Applied Mathematics Panel of the Navy Bureau of Ordnance. The high explosives group drew on both the 18 Herman H. Goldstine, The Computer from Pascal to von Neumann (Princeton, 1972), p. 181. "A punched card laboratory [consisting largely of ffiM punched card equipment] was set up to handle the implosion problem, and this later grew into one of the world's most advanced and largest installations." 19 The experimental configurations that were built made it possible to: (1) test the mathematical models of physical phenomenon; (2) calibrate the technological theories describing the behavior of specific experimental configurations. These were necessary intermediate steps toward the development of a deployable weapon. On the notion of technological theories, see Robert Freidel, The Making of American Industrial Research: Science and Business at GE and Bell, 1876-1926 (New York, 1985), pp. 205-8, 250. 48
computing capacity and the courtesy of several external agencies, including the Applied Mathematics Panel, Harvard University, the Bureau of Ships, and the Tabulation Division of the Bureau of Ordnance. Von Neumann was engaged by the Applied Mathematics Panel to perform a "survey of computing machines and services," and was requested to investigate and recommend computing services that would be adequate if the Bureau of Ordnance were to establish machines and equipment for computing.20 In wartime, cost was not the issue, but speed and volume were, and newer computing equipment promised an order of magnitude increase in the number of gunnery firing and bombing tables which could be generated. In noting that the high explosives group "has need of computing services totalling a large number of man [emphasis added] hours," the chief of the Panel implied the greater efficiency of machine rather than human computers.21

20. J. A. E. Hindman to von Neumann, 24 April 1945, von Neumann MSS.
21. Goldstine (n. 18 above), p. 138.

New Applications of Computing: A Watershed

The March 1947 enunciation of the Truman Doctrine effected what former Secretary of State Dean Acheson termed "a complete revolution in American foreign policy." Prior to March 1947, the prevailing attitudes of the country toward international affairs were controlled by opinions formed during the war:
optimism about the post-war period, belief that great-powers cooperation and particularly U.S.-Soviet cooperation could be preserved ... In March 1947 public reaction against war and armaments had placed the Defense Department on the defensive, and [the Republican] Congress was considering the Defense budget only to reduce it.22

22. Freeland (n. 7 above), p. 8.

One aim of the Truman Doctrine was to remove domestic political impediments to the establishment of the Marshall Plan. The Plan was to create the international market needed to prevent the collapse of the now-vital U.S. economy. The detonation of a Soviet atomic bomb in September 1949 seemed to confirm the international Communist threat and reinforce the view that economic strength would be needed to meet it.

The end of the American monopoly on atomic weapons closed the Oppenheimer-Teller debate over thermonuclear weapons and opened a whole new vista for computing. From the necessity for thermonuclear weapons followed a requirement for computational power on a much larger scale than previously needed even for atomic weapons. Von Neumann was to play a central role in the securing of additional computational and nuclear power.

As "father of the digital computer," von Neumann coaxed the Institute for Advanced Study (IAS) at Princeton into funding, with the Army Ordnance Bureau, what became known as the IAS computer. This machine exercised enormous influence over the first through third generations of computers. Only in
the 1980's have radically different computer architectures been offered commercially.23

23. R. W. Hockney and C. R. Jesshope, Parallel Computers 2: Architecture, Programming and Algorithms (Bristol, 1988), pp. 2-53.

November 1949 found von Neumann the computer designer, along with 82 others, in Endicott, New York, at an IBM-sponsored "Seminar on Scientific Computation." A watershed in the history of computing, this seminar marked the beginning stages of the large-scale diffusion of computing into American life. The services, national laboratories, industrial contractors, and academics were all represented at the Seminar.

The Department of the Army, funding source for both the atomic bomb and the ENIAC, was represented by the Ordnance Department of the BRL and by a physicist in the Operations Research Office at Johns Hopkins University.24

24. "Operations Research" is an umbrella term which covers game theory, linear programming, and a host of other techniques initially developed in a military or governmental setting.

The Navy, which sponsored von Neumann's early work on "high explosives," sent Dr. Mina Rees, Director of the Mathematical Sciences Division, Office of Naval Research.25

25. The Office of Naval Research became a substantial source of support to university researchers in the post-war era, especially in the areas of game theory, linear programming, and the solution of systems of linear equations.

Although the Air Force was not in attendance, its creation, the RAND (for Research and Development) Corporation, was represented by Herman
Kahn, who eventually became famous for his book On Thermonuclear War and as a subject of parody in the 1964 movie Dr. Strangelove. RAND is an offshoot of the Douglas Aircraft Corporation, so it is not surprising that other aircraft companies were also represented. RAND seemed singular, however, in the intensity of its regard for von Neumann:

... members of the Project with problems in your line (i.e., the wide world) could discuss them with you ... We would send you all working papers and reports of RAND which we think would interest you, expecting you to react (with frown, hint, or suggestion) when you had a reaction. In this phase, the only part of your thinking time we'd like to bid for systematically is that which you spend shaving: we'd like you to pass on to us any ideas that come to you while so engaged.26

26. John D. Williams to von Neumann, 16 December 1947, von Neumann MSS.

The National Laboratories, heavily involved in the development of thermonuclear weapons, were strongly represented. Dr. Alston Householder had done much to further the analysis of large systems of equations and attended the seminar as Chief of the Mathematics Panel at the Oak Ridge National Laboratory (ORNL) in Tennessee. Carbide and Carbon Chemicals Corporation (later Union Carbide) could count two current and one former employee in attendance. Union Carbide was under contract to the Atomic Energy Commission to manage ORNL, which used some of the electrical energy from the Tennessee Valley Authority to perform work connected with nuclear energy. Dr. Cuthbert Hurd, by then an IBM employee, had previously received a letter from von Neumann expressing the
willingness of the latter to consult for Oak Ridge. The letter also described a process for generating sequences of so-called random numbers, a process important in the simulation of nuclear reactions.27

27. Hurd to von Neumann, 10 December 1948, von Neumann MSS. Simulation requires sequences of statistically random numbers termed random variates. Valid statistical inference calls for lengthening the sequence, while fast simulation calls for shortening the sequence or reducing the generation time.
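The process described in such correspondence is generally identified with von Neumann's "middle-square" method; the sketch below is my modern reconstruction for illustration, not the text of the letter.

    # Middle-square pseudo-random numbers: square the current value and keep
    # the middle digits as the next value (illustrative sketch only).
    def middle_square(seed, count, digits=4):
        x = seed
        for _ in range(count):
            square = str(x * x).zfill(2 * digits)   # pad square to 2*digits
            mid = len(square) // 2
            x = int(square[mid - digits // 2 : mid + digits // 2])
            yield x

    print(list(middle_square(5731, 5)))   # [8443, 2842, 769, 5913, 9635]

Such generators are fast but can collapse into short cycles, one reason the tension noted in footnote 27 between sequence length and generation time mattered in practice.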
The majority of the other seminar participants were physicists, chemists, and mathematicians with academic affiliations. Some governmental workers were present, including Dr. John H. Curtiss, Chief, National Applied Mathematics Laboratories, National Bureau of Standards (NBS). Curtiss later ran the NBS Applied Mathematics Laboratory, which investigated the mathematics of solving large systems of linear equations, both for physical and for linear programming applications.28

28. S. W. Dunwell of the IBM Future Demands Department was present for obvious reasons. List of Attendees, IBM Corporation, date unavailable, von Neumann MSS.

Computer Design and National Defense

In the months after the IBM Seminar, von Neumann became increasingly concerned with computational power. As director of the IAS Computer project, von Neumann also evaluated several other computers. In January 1950, he received a letter requesting an evaluation of the MADDIDA, a small machine devised by Northrop Aircraft in connection with its development of guidance and control for the SNARK missile system. In the opinion of Northrop:

computers of such [small] sizes and capacities [the MADDIDA was claimed to be equivalent to a Univac] would be invaluable not only to the Air Forces but to the field of mathematics and research in general. ... Recognizing your unusually high standing in your profession and in the esteem of the Air Forces, we should like you to visit our plant on a consulting basis, view this equipment and review the principles involved.29

29. Northrop to von Neumann, 26 January 1950, von Neumann MSS.

When von Neumann did not go to California to review the MADDIDA, Northrop engineers boxed up the small machine and flew to Princeton. After several days of examining the machine, discussing its principles, and even programming it in a Princeton hotel room, von Neumann gave the machine high praise:

... your magnetic differential analyzer is a most remarkable and promising instrument ... you have established the principles of a whole family of very new and most useful instruments. Your equipment ... seems to me more interesting as a basis for a family of special purpose machines to deal with matrix problems ... [Among these are] performing linear transformations on n variables, solving n linear equations in n variables, ... solving the problems of game-strategy and of linear programming of order n, all of this for values of n of the order of 100 and even higher. Solution of problems of this last class [linear programming] will be of great importance, and may well be decisive in certain phases, for enterprises like Project SCOOP ... and Project RAND.30

30. John von Neumann, "The von Neumann Letter," Annals of the History of Computing 9 (1988): 357-68.

The week before, von Neumann had been asked by the Naval Ordnance Bureau to evaluate a proposal for a computing machine to be built by IBM called the
Naval Ordnance Research Calculator (NORC). Von Neumann offers the following caveat:

At our conference I emphasized the circumstances which may limit the validity of my judgement on the matter in question. I understand that you are entirely aware and appreciative of these things, and that you want me to give you my views nevertheless. I am doing this in what follows and leave the evaluation to you.

Having worked for the Bureau for a decade on problems involving weapons systems and the computations needed to develop them, von Neumann continues:

I shall enumerate some particularly outstanding subjects which are integral parts of the Bureau of Ordnance activities, and indicate how high speed calculation will contribute essentially to the Bureau work in these subjects.31

31. Von Neumann to Dr. Richard S. Burington, 19 January 1951, von Neumann MSS.

Von Neumann enumerates six subjects: aerodynamics, hydrodynamics, elasticity and plasticity, high explosives, missile design, and finally, atomic weapons and motors. The effects of blast waves on solid structures, whether surrounded by air or water, fall under the heading of elasticity and plasticity, but are not of interest here. The other subjects show the linkage between computational and destructive power that was being forged. Aerodynamics concerns the nature of air flows, of obvious importance to airplane and missile design. According to von Neumann, ordinary problems already taxed the capacity of the ENIAC and its electro-mechanical kin, the IBM MARK III and the IBM Selective Sequence Calculator.
Turning to hydrodynamics, von Neumann observes that underwater blast phenomena, such as those generated by depth charges, present even greater complications, making the use of high-speed computing devices even more necessary. Perhaps these complications arise also in the launching of an underwater missile such as Polaris. The effects of explosive shape upon weapon effectiveness are important to both nuclear and conventional weaponry, and here von Neumann claimed that "very complicated" calculations were required. Von Neumann was also looking toward the Polaris in the mid-fifties:

Missile design would be greatly advanced if extensive simulation by calculation were possible. In other words, if for any given set of aerodynamic properties, control and steering element characteristics, communications system and noise level characteristics, the performance of such an hypothetical missile could be [assessed] in a variety of relevant situations ... Such problem setups can lead into extremely intricate analytical and combinatorial discussions, also involving a wide variety of mathematically very difficult questions concerning the target tactics and countermeasures. The savings and the acceleration that will be achieved in the missile field, when such techniques can be routinely used, are evident: Most missile designs can then be tested by mathematical simulation on computing machines, and only a few critical and especially promising ones, selected on the basis of the computational simulations, will have to be carried on into the hardware stage.32

32. Von Neumann to Burington (n. 31 above).

This proposal, almost a manifesto in form, expresses one key lesson of the Manhattan Project: simulation via computation speeds development by allowing scientists to reduce the number and complexity of experiments required.
The development of the NORC was not driven by ordinary market forces:

This proposal [by IBM] does probably not represent the quickest and cheapest way to acquire a very high speed computing machine. It corresponds to a program of proceeding to the next stage beyond the present one and to obtain by a considerable effort a machine which is likely to have its peak usefulness toward the middle 1950's.33

33. Von Neumann to Burington (n. 31 above).

Here von Neumann was undoubtedly considering the uses of the NORC in the design of the Polaris submarine system. On June 19th, von Neumann corresponded with the director of the Office of Naval Research concerning several mathematical questions which "might be profitably considered in the context of the 1951 tests," principally the decay of blast waves. Here von Neumann reiterated his view that the

... decay of a spherical blast wave could be calculated with any one of a number of machines which are available today. The ENIAC could certainly do it, but less fast machines could also be used for this purpose. In any case, this calculation is badly needed.34

34. Von Neumann to Rees, 19 June 1951, von Neumann MSS.

The more realistic case involved what von Neumann antiseptically referred to as the decay of an initially spherical blast wave in a vertically stratified atmosphere.

This problem is clearly more difficult, since it has only cylindrical symmetry ... For the cylindrical problem, the ENIAC is probably adequate, but I think that a machine with more limited characteristics would not do.35

35. Von Neumann to Rees (n. 34 above).
The simpler calculation may have been needed to test preliminary work by von Neumann's collaborator Stanislaw Ulam suggesting that the then-current design for the hydrogen bomb would produce a fizzle. While Ulam worked with a desktop calculator at Los Alamos, a team at the Army Aberdeen Proving Ground ran a test on the newly installed ENIAC, reaching the same negative result.

All the work hitherto done on the Super had been, in [Edward] Teller's own words, "nothing but fantasies." It had to be started all over again. Had the preliminary measurements themselves, upon which the calculations had so far been based, in fact been accurate? One could find out only by testing them afresh in an actual trial. If practical results were to be obtained, much more precise observations would have to be taken in the new test than in any previous undertaking in the atomic-armaments field. Instruments of hitherto unknown speed and precision were essential. Cameras would have to take thousands of photographs in the fraction of a minute. A system of signals would be necessary to relay their "experiences" to a distant control before they themselves were destroyed by the force of the explosion.36

36. Robert Jungk, Brighter Than a Thousand Suns (New York, 1958), p. 293.

The construction of this "system of signals" was a mammoth undertaking which affected the fields of photography, electronics, and computing. By March 6, 1952, von Neumann had already begun to consult with IBM on the development of the NORC. The IBM point of contact, Dr. L. H. Thomas, wrote a note summarizing an earlier conference in which several NORC design issues were discussed. Von Neumann provided clarification to Thomas on several points and in the process demonstrated several design principles in use today.
However absorbing these design activities might have been, von Neumann was drawn back to the source of his authority, becoming increasingly involved in the development of a new weapons system which promised a more effective means of delivering nuclear weapons to their targets.

Von Neumann and the Weather

As the Korean War seemed to underscore the intent of Soviet communism to establish world domination, the Air Force increased its reliance on the Scientific Advisory Board (SAB) to the USAF Chief of Staff. In April of 1951, the SAB was chaired by Dr. Theodore von Karman, with whom von Neumann had served on the Army SAB back in 1941. Von Neumann was recommended for SAB membership by Robert Kent, the Chairman of the Explosives and Armament Panel, with whom von Neumann had worked at the BRL and on the ENIAC. Things moved rapidly: von Neumann was invited to participate in the meeting of the Board on April 13th, 1951, and responded affirmatively by mail on the 25th. On May 1st, he was invited to attend the reception for the Board given by Air Force General Hoyt Vandenberg on the evening of May 9th, "which many Department of Defense personalities will attend."37

37. Letter from D. L. Putt, Major General, USAF, Military Director, Scientific Advisory Board, to von Neumann, 1 May 1952, von Neumann MSS.

Both the Army and the Navy were interested in von Neumann's Princeton meteorology prediction project. Although active in meteorological prediction since
1948, von Neumann now deepened his involvement, and in mid-May he wrote concerning the electronic computing demonstration provided by the meteorological group at Princeton to the Air Force Geophysical Research Panel. Von Neumann indicated his eagerness to discuss the theory of meteorological prediction during an upcoming visit to Washington.38

38. Von Neumann to USAF Major General Craigie, 16 May 1952, von Neumann MSS. Von Neumann was in town to attend a meeting of the General Advisory Committee of the Atomic Energy Commission.

In June 1952, von Neumann wrote to Nobel Laureate I. I. Rabi concerning the possible connection between tornadoes in the East and A-bomb tests in Nevada. After assuring Rabi that the power of any A-bomb tested was too insignificant to account for even an ordinary weather front, let alone a tornado, von Neumann notes in passing that "the U.S. Weather Bureau is preparing maps for all the relevant A-bomb clouds," presumably either for prediction purposes or simply to allow subsequent study of any effects following in the path of such clouds.39

39. Von Neumann to Rabi, 23 June 1952, von Neumann MSS. Air Force Global Weather is across the street from Offutt Air Force Base in Omaha, home of the Strategic Air Command.

At its next full meeting, the SAB was asked to produce a "Toward New Horizons" study considering "the trend capabilities of those technical areas which will contribute most to the development of Air Force equipment in the next ten
plus years." Put otherwise, it was important to foresee the benefits of emerging technologies in order to avoid any technology gap:

We have a growing apprehension that our conventional process of allocating a large portion of our R&D effort to obtaining incremental [emphasis added] advances and improvements in available weapons systems may well lead in time not to qualitative superiority but to qualitative mediocrity, when measured by the only realistic qualitative index: comparison with the competitive weapon system.40

40. Craigie to D. L. Putt, 24 November 1953, von Neumann MSS.

On 13 December 1952, von Neumann received a letter from the Ramo-Wooldridge Corporation (now TRW) confirming his membership on the Strategic Missiles Evaluation Committee (also known as "Project Teapot").41

41. Ramo-Wooldridge to von Neumann, 13 December 1952, von Neumann MSS.

Within six months, the Secretary of the Air Force was expressing personal thanks to von Neumann for the work of "your committee," noting that the

... quality and effectiveness of our atomic strength must depend in large measure upon a continuing close working relationship between the military and American science. In this connection, I am aware of the heavy demands being made on you by various agencies of the government.42

42. Harold Talbott to von Neumann, 12 April 1952, von Neumann MSS.

In July of 1953, Trevor Gardner, formerly of the Strategic Missiles Evaluation Committee, wrote concerning the formation of a Scientific Advisory Group (SAG), chaired by von Neumann, to advise the ATLAS ICBM development project under the direction of Brigadier General Bernard A. Schriever. Its membership would include Dr. Herbert F. York, eventual critic of
U.S. nuclear strategy; Dr. Norris Bradbury, successor to Oppenheimer as Director at Los Alamos; Dr. George Kistiakowsky of Harvard, a future Presidential Science Advisor; J. B. Wiesner of MIT, also to become Science Advisor; and, interestingly, Charles A. Lindbergh.43

43. Letter entitled "ATLAS," undated, von Neumann MSS.

Von Neumann's involvement with the ATLAS SAG, as important as it became, did not preclude work on electronic digital computers and meteorology. Von Neumann and Cuthbert Hurd discussed a statement drafted by the latter describing the nature of some meteorological calculations to be performed on an IBM 701 and acknowledging that the IAS machine had neither the speed nor the memory for the calculations required. The next step beyond the capabilities of the IBM 701 would be to extend prediction from less than a week up to a month, with correspondingly greater amounts of computation required.44

44. Von Neumann to Hurd, 8 November 1954, von Neumann MSS.

This next step came two and a half years after von Neumann discussed the NORC design with IBM. The NORC was unveiled in New York on 12 December 1954 amid pomp and circumstance as von Neumann spoke after the luncheon, putting the NORC into perspective. The NORC appeared to be faster than the IBM 701 by a factor of five, a capability required for advanced meteorological computations. Although considerable NORC time would be devoted to ballistics
calculations, von Neumann noted some less familiar applications that were becoming increasingly important:

NORC, being a machine with a very high speed, a rather large memory, and a very exceptional capability to ingest data, is clearly suited for problems where large amounts of material have to be processed. Therefore one must ask: Where do scientific calculations require large amounts of data?45

45. John von Neumann, "The NORC and Problems in High Speed Computing," in Papers of John von Neumann on Computing and Computer Theory, ed. W. Aspray and A. Burks (Cambridge, 1986), pp. 350-59.

Many physical problems require considering (at least) the three dimensions of ordinary space: in many parts of geophysics, notes von Neumann, it is very difficult to omit any dimension. The three main areas of geophysics involve water, earth and, of particular importance here, the air:

We know that calculations of meteorological forecasts for longer periods, like 30 to 60 days, which one would particularly want to perform, are probably possible but that one will then have to consider areas that are much larger than the United States ... with the best available modern computing machines it is still a very large problem, and when one deals with a new problem one must solve it a few dozen times "the wrong way" before one gradually finds out by trial and error, and by coming to grief many times, what a reasonably "good way" is. Consequently, one will simply not do it unless one can obtain individual solutions quite rapidly.46

46. Von Neumann (n. 45 above), p. 354. Although von Neumann had relatively little to say about the water and the earth, consider the hydrodynamic calculations needed to model the underwater launch of a ballistic missile from a submarine.

After considering the expected uses of the NORC, von Neumann notes the "statistical analysis of complicated situations," mentioning first the use of fast computing machines to exclude infeasible weapons designs. There are also
complicated processes which are not exactly mechanical, like large-scale operations involving an organization, involving many units, e.g. combat operations, or more simple logistical operations, where one can perform an analysis in this way.47

47. Von Neumann (n. 45 above), pp. 356-57.

Von Neumann reiterates the advocacy of computational simulation found in his early evaluation of the IBM NORC proposal:

One can, with a certain choice of parameters, go through the calculation, say a hundred times, each time assuming the accidental factors differently, distributing them appropriately, and so in hundred trials obtain the correct statistical pattern. In this way one can evaluate ahead of time by calculation how those parameters should be chosen, how the decisions should be made, or at least obtain a general orientation on these.48

48. Von Neumann (n. 45 above).

But now von Neumann adds a new dimension:

No one in planning a research program, or an investigation, likes to commit himself to a particular plan which may not work, if this involves tying up the whole organization for a half year. If, on the other hand, one can do these things [simulations] rapidly, one will be bolder and find out more quickly how to do it. Similar considerations apply to various other calculations involved in planning and programming; those of you who know the concepts of "linear programming" and "non-linear programming" and various other forms of logistic programming will know what I mean.49

49. Von Neumann (n. 45 above).

Thus von Neumann now sees computers used as instruments of planning and control.
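Von Neumann's "hundred trials" is what is now called Monte Carlo simulation. The sketch below is my illustration, with an invented hit-probability example standing in for the "accidental factors":

    # Monte Carlo evaluation: repeat a calculation with the accidental
    # factors drawn at random and read off the statistical pattern.
    import random

    def trial():
        miss = random.gauss(0.0, 1.0)   # accidental factor, sigma = 1 unit
        return abs(miss) < 0.5          # "hit" if inside a half-unit window

    random.seed(1)
    hits = sum(trial() for _ in range(100))   # a hundred trials
    print(hits / 100)                         # estimate of the hit probability

Re-running the hundred trials while varying the window or the spread is precisely the kind of parameter study von Neumann describes.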
In view of his myriad contributions to the services, it is hardly surprising that the letter notifying von Neumann of his reappointment to the USAF SAB stated that

... you and the technical community you represent hold in your hands and minds the technical aspects of the new and greater Air Force capabilities which must come into being in the years ahead. Now, more than ever before, we in the Air Force must look to you for guidance and help.50

50. Nathan F. Twining, USAF Chief of Staff, to von Neumann, date unavailable, von Neumann MSS.

Von Neumann would barely live out his term on the SAB: six months after his appointment to the Atomic Energy Commission, von Neumann was diagnosed as having inoperable cancer.51 Von Neumann's normally pressing schedule began to lighten as he declined new consulting roles and terminated existing ones. Von Neumann wrote to William Shockley, co-inventor of the transistor and then Director of Research for the Weapons Systems Evaluation Group (WSEG) of the Office of the Secretary of Defense, to ask that his (von Neumann's) role in the WSEG be terminated.52 The Air Force insisted at von Neumann's Senate confirmation hearings that he retain chairmanship of the ICBM Scientific Advisory Group.53

51. Clay Blair Jr., "The Passing of a Great Mind," Life Magazine, 8 February 1957.
52. Von Neumann to Shockley, 15 April 1955, von Neumann MSS.
53. Claude J. Johns, Jr., "The United States Air Force Intercontinental Ballistic Missile Program, 1954-1959: Technological Change and Organizational Innovation" (Ph.D. dissertation, University of North Carolina, Chapel Hill, 1964), p. 39.
The Big Picture

His life ebbing, his general notoriety growing, von Neumann claimed in a popular article that the earth itself was in a crisis, becoming too small for the technological advances that were occurring:

In the first half of this century the accelerating Industrial Revolution encountered an absolute limitation not on technological progress as such, but on an essential safety factor ... [which] was essentially a matter of geographical and political lebensraum ... At long last, we begin to feel the effects of the finite, actual size of the earth in a critical way ... Technologies are always constructive and beneficial, directly or indirectly. Yet their consequences tend to increase instability.54

54. Von Neumann, "Can We Survive Technology," Fortune Magazine, June 1955, pp. 33-35.

Von Neumann predicts massive use of nuclear reactors by the 1980's, believing that "[f]orced by the limitations of our real estate, we must ... do much better than nature" by not requiring a star to produce thermonuclear reactions, instead making full use of fission.55 He also notes the rapid evolution of automation, observing that computers can be used as control devices as well as for "logistical, economic and other planning, and many other purposes heretofore lying entirely outside the compass of quantitative and automatic control and preplanning," using weather and climate control as examples.56

55. Von Neumann (n. 54 above), p. 36.
56. Von Neumann (n. 54 above), p. 37.
Summarizing these developments, von Neumann notes, first, that they all lend themselves to destructive purposes. Second,

... there is in most of these developments a trend toward affecting the earth as a whole, ... toward producing effects that can be projected from any one to any other point on the earth ... The technology that is now developing and that will dominate the next decades seems to be in total conflict with traditional and, in the main, momentarily still valid geographical and political units and concepts. This is the maturing crisis of technology. Whatever one feels inclined to do, one decisive trait must be considered: the very techniques that create the dangers and the instabilities are in themselves useful. In fact, the more useful they could be, the more unstabilizing their effects can also be.57

57. Von Neumann (n. 54 above), pp. 38-39.

Neither division nor prohibition is likely to preserve us from the dangers wrought by useful technology:

After global climate control becomes possible, perhaps all our present involvements will seem simple. We should not deceive ourselves: once such possibilities become actual, they will be exploited ... All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary "first guesses" concerning them are wrong.58

58. Von Neumann (n. 54 above), pp. 40-41. Compare this meditation on technology with that of philosopher Martin Heidegger, also published in 1955: see The Question Concerning Technology and Other Essays, translated with an introduction by William Lovitt (1955; reprint, New York, 1977).

Machines such as the NORC, and its more powerful successors, could be and are used for the prediction and control of weather and social organizations. In late June of 1955, von Neumann spoke at the Armed Forces Staff College on the impact of atomic and thermonuclear weapons on national policy.
In July, a wheelchair-bound von Neumann, along with Trevor Gardner and Air Force General Bernard A. Schriever, briefed President Eisenhower on the ATLAS ICBM program, which led to its establishment as the number one national priority. Von Neumann

... continued to preside over the ballistic missile committee, and to receive an unending stream of visitors from Los Alamos, Livermore, the Rand Corporation, Princeton. Most of these men knew that von Neumann was dying of cancer, but the subject was never mentioned.59

59. Blair (n. 51 above).

In the last quarter of 1955, Thomas J. Watson, Jr. wrote to von Neumann to bring him up to date on the NORC. The Navy had operated the NORC at the IBM Watson Laboratory at Columbia until 1 March 1955, when it was moved to the Dahlgren Proving Ground in Virginia. The NORC, described in a new book published by IBM, was being used on each of three shifts.60

60. Watson to von Neumann, 11 November 1955, von Neumann MSS.

The year 1955 closed with an invitation to attend the Industrial Preparedness Meeting of the American Ordnance Association, suggestively scheduled for December 7, 1955. The dinner honored Army Chief of Staff Maxwell Taylor. In the second part of the (Robert H.) Kent seminar on the scientific bases of weapons, von Neumann spoke on "Defense in Atomic War," in which he explicitly linked the power of computing and the power of nuclear weapons. According to von Neumann, high speed computing machines are an "absolutely necessary condition" for the calculation of firing tables for air-to-air firings and for the computation of "missile-trajectories" guidance.61

61. John von Neumann, "Defense in Atomic War," in Papers of John von Neumann on Computing and Computing Theory, ed. W. Aspray and A. Burks (Cambridge, 1986), pp. 523-24.

For von Neumann, the history of warfare is the history of increased firepower, with atomic weapons representing the (then) latest stage of this increase. The effect of this increase in firepower is twofold: first, it reduces the need for manpower and conventional (non-nuclear) equipment, provided one is willing to counter them with nuclear force; second, the time needed to decide a war can be reduced from years or months to days or weeks. Given this, a new weapon or a countermeasure against a weapon can have tremendous significance. A single unanticipated measure or countermeasure could result in several weeks of nuclear vulnerability, a period in which our retaliatory capability could be utterly destroyed. How does one defend against such a possibility? Here von Neumann returns to the field of systems analysis and operations research, where the characteristics of a weapon system can be determined without first building and testing it:

The manner in which one now calculates the performance of a weapon system consists of taking it through a military maneuver, an engagement, or a series of engagements on a computing machine.62

62. Von Neumann (n. 61 above), p. 524.
Not only are computing machines used to analyze the physical properties of materials needed to produce weapons systems, but they are also used to assess the performance of a hypothetical system. This is not all, for questions of strategy also arise:

It will not be sufficient to know that the enemy has only fifty possible tricks and that you can counter every one of them, but you must also invent some system of being able to counter them practically at the instant they occur. It is not easy to see how this is going to be done. Some of the traditional aspects of the use of the same weapon for several purposes and of limiting its use until you need it ... may have some of the elements of an answer.63

63. Von Neumann (n. 61 above), p. 525.

Here we see the need for computing machines capable of performing the "statistical analysis of complicated situations" mentioned at the NORC dedication.

In April 1956, von Neumann received the Medal of Freedom in his last public appearance, received the Enrico Fermi Award, accompanied by a $50,000 tax-free grant, for contributions to the theory and design of computing machines, and began a residence at Walter Reed Army Hospital that terminated on 7 February 1957.64

64. Blair (n. 51 above), p. 104.
Conclusion

Von Neumann's brainchildren, unlike their father, successfully refused to die. In mid-1956, the Navy established the Program Evaluation Branch of the Polaris missile program. This program had a great impact on defense, defense planning, and the conduct of business in the United States, for out of it came the now-famous, but somewhat discredited, technique called PERT, the Program Evaluation and Review Technique.65

65. Willard P. Fazar, "The Origins of PERT," The Controller 30 (December 1962): 4.

PERT is a project management technique based on visualizing work as a network of interdependent tasks. Associated with a task A are its projected time to complete and two lists. The first list shows the tasks, if any, which must be completed before task A can begin, and the second list shows those tasks which cannot begin until task A is complete. An example of a very small PERT chart is shown in Figure 1 below. Figure 1 depicts four tasks: A, B, C and D. Since task A is to the left of task B, the completion of B depends upon the completion of task A: task B cannot begin until task A is completed. Task A is scheduled to begin on 1/90 and has a duration of one month. Similarly, task D depends directly upon the completion of tasks B and C, is scheduled to begin on 5/90, and has a duration of one month.
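In present-day terms a PERT chart is a directed acyclic graph, and the schedule dates follow from a forward pass over it. The sketch below is my illustration, not the Navy's PERT software; the durations of tasks B and C (three months each) are assumptions chosen so that the computed dates agree with Figure 1.

    # Forward pass over the Figure 1 network: a task starts when its last
    # predecessor finishes (months counted from the 1/90 project start).
    from functools import lru_cache

    tasks = {                  # task: (duration in months, predecessors)
        "A": (1, []),
        "B": (3, ["A"]),       # assumed duration
        "C": (3, ["A"]),       # assumed duration
        "D": (1, ["B", "C"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(name):
        duration, preds = tasks[name]
        start = max((earliest_finish(p) for p in preds), default=0)
        return start + duration

    print({t: earliest_finish(t) for t in tasks})
    # {'A': 1, 'B': 4, 'C': 4, 'D': 5}: task D starts in month 5, i.e. 5/90

The same pass identifies the critical path: any slip in a task whose finish time bounds its successors' start times slips the whole project.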
Figure 1. A Simple PERT Chart.

A project consisting of only a few tasks can also be managed using a Gantt chart, developed during World War I by an early associate of Frederick W. Taylor, the "Father of Scientific Management." Figure 2 below contains a Gantt chart representing tasks A, B, C, and D.

Figure 2. A Simple Gantt Chart.
For high-level status briefings, a Gantt chart is preferred for its simplicity. The Gantt chart, however, does not explicitly show dependencies between tasks: the viewer must synthesize this information. A strength and weakness of a PERT chart is that it shows task dependencies, which in principle provides more control but introduces an additional type of element that must be tracked, namely, dependencies among tasks. A slip in the completion date associated with any single task can "ripple" through an entire network of tasks so that the entire schedule may become infeasible, requiring at the least considerable updating. While representing a project with 50 tasks can be formidable when done with a PERT chart, maintaining the corresponding Gantt chart is equally daunting, precisely because the relationships between the tasks are not explicitly represented in the Gantt chart.66

66. Consider the maximum number of dependencies between N tasks, N(N-1)/2: for N = 4, the number of dependencies is 6; for N = 5, the number is 10; while for N = 50 the number jumps to over 1,200.

In the early 1950's, the defense weapons systems concept was coming into prominence. Weapons systems developed in an orderly, linear fashion had the unfortunate side-effect of creating lead times of 7 to 10 years from the conception of a weapons system to its deployment. With the natural presumption that any technological advances on our side would be mirrored by our adversaries, lead-time requirements of 7 to 10 years became increasingly
unacceptable, particularly when coupled with the belief that our adversaries were able to cut lead times to approximately 5 years.67

67. Johns (n. 53 above).

The possibility of speeding up the development of weapons systems had already been examined prior to the Strategic Missiles Committee: one of the chief proponents of the "Doctrine of Concurrency" was a member of that committee, Air Force General Bernard A. Schriever. The "Doctrine of Concurrency" dictates that as many phases as possible in the development of a weapons system be carried out more or less simultaneously. This is possible to the extent that weapons systems can be "carved up" into separate sub-systems, each of which can be developed more or less independently.68

68. Johns (n. 53 above).

When sub-system development is complete, the sub-systems are integrated into a functioning whole. There are several benefits to this approach. By identifying relatively independent tasks, additional man-power and resources can be devoted to each relatively independent task without incurring inordinate communication costs. If the inter-relations between sub-systems are adequately defined, then the system as a whole has a high probability of operating successfully when all sub-systems are integrated. ICBM system quality and reliability were raised by the "test philosophy," which required repeated tests of all components from top to bottom of the component hierarchy. Subsequent
technological advances in specific sub-systems are easily exploited: if sub-system interfaces are preserved, we just "swap out" the old sub-system for the new.69

69. Osmond J. Ritland, "Concurrency," Air University Quarterly Review (Winter-Spring 1960-61).

Despite the desire to perform as many sub-tasks as possible in parallel, some activities cannot be carried out concurrently. Once the system has been broken down into sub-systems, one major potential problem remains: system integration. The system cannot be integrated until the development of each sub-system is complete. Thus, if sub-systems B and C are complete and ready for integration, an unscheduled delay in completing the development of sub-system A will not affect the completion of B and C (since they are already complete), but it will prevent the system from being operational without unscheduled delay.

As chairman of the Strategic Missiles Committee, von Neumann argued that the factors inhibiting rapid development of an operational ICBM involved management technique rather than technology. The implications were twofold. First, the traditional weapons system development cycle could not lead to an operational ICBM within a short period of time. Second, new management techniques were required to guide a newly conceived development process intended to meet short development schedules.

Until reconnaissance indicated in mid-1955 that the Soviets were developing an ICBM, the will was lacking to tackle the problems associated with concurrency. It then seemed that the doctrine of concurrency was the only way to
eliminate "the missile gap" by 1960. The elevation of the ballistic missile program to number one national priority in mid-1955 brought intense concern about the management of development concurrency.70

70. Johns (n. 53 above), p. 47.

It has been suggested that the Air Force, to promote its own role in the ICBM business, spread damaging rumors about lack of progress in the Polaris program. War-time experience had made the virtues of "scientific management" seem obvious, so it is not surprising that the Navy touted PERT as such a tool and used it to ward off criticism.71 But PERT had significant technical antecedents: work done on linear programming and simulation in the early 1950's foreshadowed PERT. Moreover, the PERT concept was in the air, as evidenced by the independent and contemporaneous development of the Critical Path Method by DuPont for non-defense use. The Air Force configuration management technique, although overshadowed by PERT, contained many of the same elements.72

71. One critic claimed that PERT was a bureaucratic response by the Navy to internal and external criticism and was intended, not as a management tool, but as a weapon with which to silence critics. See Harvey M. Sapolsky, The Polaris System Development: Bureaucratic and Programmatic Success in Government (Cambridge, 1972). For a discussion of the reverence for scientific management that came out of World War II, see Merritt Roe Smith, "Introduction," in Military Enterprise and Technological Change: Perspectives on the American Experience, ed. Merritt Roe Smith (Cambridge, 1985).
72. Robert W. Miller, Schedule, Cost, and Profit Control with PERT: A Comprehensive Guide for Program Management (New York, 1963), p. 27.
The diffusion of PERT throughout the defense community, and from there into the industrial world, was hastened when the Defense Department began to require the use of PERT as a management tool, largely because of the Polaris success. The potential of PERT as a "tool of scientific management" could not be realized, however, without the computerization of PERT networks. The PERT team, which formally began its work in February 1958, quickly realized that computerization was necessary. Indeed, the rapid diffusion of PERT through industry and government depended to a large degree upon the development of in-place electronic data processing capability.73

73. Miller (n. 72 above), p. 26; J. S. Butz, "The USAF Missile: A Triumph of Orderly Engineering," in A History of the US Air Force Ballistic Missiles, ed. E. Schweibert (New York, 1965), p. 198.

As the von Neumann machine chosen to computerize the original PERT system, the NORC was the heart of the PERT decision-support system:

One of the most useful aspects of the NORC outputs is the ability it provides for checking the [feasibility] of schedules and for permitting technical management to 'experiment' with or evaluate the effects of proposed [changes] in the research program under its technical direction.74

74. D. G. Malcolm, J. H. Roseboom, C. E. Clark, and W. Fazar, "Application of a Technique for Research and Development Program Evaluation," Operations Research 7 (1959): 662.

The potential of the NORC for the "statistical analysis of complicated situations" had been realized.
With the initial success and subsequent diffusion of PERT, the doctrine of concurrency in development undoubtedly acquired considerable momentum. PERT, with its ability to focus on the interdependency of tasks, seemed to give management greater control over "research and development" activities of great uncertainty. Many courses of action which might have been unthinkable before (because unmanageable) now became open for exploration. Projects rendered feasible by this new form of control may have acquired near inevitability, a result of "reverse adaptation."75

75. Langdon Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (Cambridge, 1977), pp. 226-36, 238-51.

Out of these changes to management practice, there have perhaps come broad, subtle changes to life in the "information" society. We remain under the spell of von Neumann's genius for fusing war and computation in an almost complete equation of knowledge and power. Whether it is necessary, possible, or desirable to continue in the trajectory established by von Neumann's work remains a question worthy of our consideration.
CHAPTER 5
RELATING EACH CLAIM TO THE HISTORICAL RECORD

Introduction

Although Chapter 4 as a history of the rise of management information systems is limited to the role of von Neumann, it can now be interpreted in terms of the two meta-historical claims of Chapter 3. It is necessary, however, to interpret the planning process as a technological system and to identify the sub-systems which comprise it.1

1. Referring to a management technique as a technology has as a precedent the following technical paper: Line of Balance Technology, Navy Department, Office of Naval Material (NAVEXOS P1851 Rev. 4-62), Washington, D.C., April 1962.

Key to interpreting the planning process as a technological system is viewing planning as an information processing system (IPS). The purpose of an IPS is to transform data obtained from the environment into information of use to decision makers. Abstractly, an IPS consists of an input, processing, and output stage. In a Gantt system, the processing stage is carried out by a human being. In a computerized PERT system, the processing stage is performed by an electronic digital computer. Human technology is at the heart of the Gantt system, whereas electronic technology is at the heart of the computerized PERT system.2

2. Jerome S. Burstein and Edward G. Martin, Computer Information Systems with BASIC (Chicago, 1989), pp. 36-66.
There are several sources of data for each system type. In a Gantt/PERT system, the data are (1) a Gantt/PERT chart to be updated and (2) a list of schedule changes which (in the simplest non-trivial case) need to be applied to the chart. The two systems differ not only in the "technology" being used to implement the processor, but also in the kind of processing being performed. In a Gantt system, dependencies between tasks are not explicitly part of the input data structure, whereas task dependencies are explicitly represented in PERT charts. The output of each system is again a chart of the appropriate kind.

Recall that the theory of technological disequilibrium involves several concepts: imbalances emerge in a system because of uneven growth of its components; in order to move the bottleneck to system growth, reverse salients must be identified as solvable, but not yet solved, critical problems; if engineers solve these critical problems, the imbalance is removed and the system will resume growth; otherwise, a new system emerges.

Recall also that the theory of technological co-evolution is a special case of the theory of technological disequilibrium. This motivates our strategy: first assess the applicability of the theory of technological disequilibrium by attempting to explicate the change from Gantt MIS to PERT MIS in terms of the key concepts of the theory of technological disequilibrium as outlined above.
Technological Disequilibrium

To show the applicability of the disequilibrium theory, we must find uneven growth in one of the components of the system, a "reverse salient." What are the components of the system? Here we are concerned with the "hardware" and "software" that perform the input, processing, and output functions of our idealized IPS.

In the Gantt system, the human planner is the interface between the Gantt system and its environment. In addition, the planner transforms input data (a Gantt chart, a change list, and a list of precedences) into a new Gantt chart. In the PERT system, the human planner no longer processes the chart, change list, and precedence list, but still exchanges data and results with the environment. A human acts as input and output medium/device in both systems, so we look elsewhere for the distinctive differences between the two types of MIS.

Consider the form of the primary data in each system. A Gantt chart can be described as a temporally ordered collection of paired start and end dates, whereas a PERT chart is a collection of paired dates that is ordered by a precedence relation. The precedence relation consists of pairs (A,B) of tasks A and B such that A precedes B, and it imposes a stronger ordering than simple temporal order. From a PERT chart we can uniquely derive a Gantt chart, but a Gantt chart does not itself provide the precedence task pairs needed to construct a PERT chart. So, a PERT chart is structurally more complex than a Gantt chart.
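The asymmetry can be exhibited directly. In this sketch (mine, with invented durations), two distinct precedence relations generate identical start and end dates, so the Gantt form cannot recover the PERT form:

    # Two different PERT precedence relations that yield the same Gantt
    # (start, end) pairs; the Gantt chart alone cannot distinguish them.
    def schedule(precedence, durations):
        start = {t: 0 for t in durations}
        for a, b in sorted(precedence):    # names here happen to sort in
            start[b] = max(start[b], start[a] + durations[a])  # task order
        return {t: (start[t], start[t] + d) for t, d in durations.items()}

    durations = {"A": 1, "B": 3, "C": 1}
    chain = {("A", "B"), ("B", "C")}                # A -> B -> C
    extra = {("A", "B"), ("B", "C"), ("A", "C")}    # adds redundant A -> C

    print(schedule(chain, durations) == schedule(extra, durations))   # True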
With the use of concurrency, the number of tasks and the size of the precedence relation increased as well. The imbalance required for the applicability of the disequilibrium theory occurs not in the human technology itself, but in the character of the data it must process. The effect of this imbalance in the Gantt system is to cognitively overload the human planner qua processor. The critical problem to be solved is how to deal with this overload.

The key to solving the critical problem is to recognize the primary source of the difficulty. In the Gantt system, as the number of tasks in a project grows, there is a quadratic increase in the number of task pairs that must be checked to determine if they are to appear in the precedence list. Forming and processing the precedence list leads to cognitive overload of the human processor in the Gantt system.

One solution to the critical problem of processing a precedence list is to "upgrade the processor" by obtaining a superior planner. As the requirement for more and better planners increases, we are witnessing the final phase in the evolution of Gantt systems, in which the capacities of the system are being reached. Eventually, other approaches were sought to address the technological disequilibrium wrought by the increased complexity of the data.

Von Neumann's Strategic Missiles Evaluation Committee had remarked on the shortcomings of serial development strategies and the need for techniques to manage the additional complexity engendered by concurrency. The PERT approach addresses the limitations of the Gantt system by incorporating the
precedence list into the PERT chart. Formation of the precedence list and its processing are simplified. The PERT system does not preclude the continued use of a human processor: the first PERT systems were, in fact, manual systems.

Technological Co-evolution

We must now show that the special sort of disequilibrium described by the theory of technological co-evolution in fact obtained. More specifically, the disequilibrium described above must be explicable in terms of (1) the mutual, reciprocal influence of at least two subsystems, and (2) the presence of a selective-retentive process. Consider the distinction between mutual and reciprocal influence:

RECIPROCAL, MUTUAL, COMMON mean shared or experienced by each. RECIPROCAL implies an equal return or counteraction by each of two sides toward or against or in relation to the other; MUTUAL applies to feelings or effects shared by two jointly.3

3. Webster's Seventh New Collegiate Dictionary (Springfield, Mass.), p. 715.

Thus, to assert the mutual, reciprocal influence of two subsystems A and B is to assert that: changes to the system of which A and B are a part influence both A and B; and, if changes to A influence changes to B, then at some later time further changes to B will influence changes to A.

In the change from Gantt MIS to PERT MIS, three things have changed. First, the nature of the processing has changed, due to an input data set that is
more complex, both conceptually, since it represents a network of task dependencies, and numerically, due to an increase in the quantity of tasks to be managed. Second, the processing stage is now performed by an electronic digital computer instead of a human computer. Last, a human being now performs the functions of a peripheral processor by accepting schedule slippage data from the environment, making it available to the central processor in a machine-readable format, and feeding back an updated PERT chart to the planning environment.

Which changes to the Gantt system as a whole affected the input, processing, and output components? This question addresses the first condition for technological co-evolution. Certainly, with development concurrency, data volume and complexity substantially increased. Assuming that the components of a system are specialized, any change in the environment which affects the mix of functions that must be performed will differentially load each component. Thus, we seem to have a "mutual, reciprocal relationship" between at least two of the three components of the Gantt system.

Next, we must demonstrate that if changes to component A influence changes to component B, then at some later time further changes to component B will influence changes to component A. Consider an IPS from the standpoint of throughput and response time.4 System throughput is defined to be the number of

4. This approach to the evolution of information processing systems is suggested by my reading of Stuart K. Card, Thomas P. Moran, and Allen Newell, "The Keystroke-Level Model for User Performance Time with Interactive Systems," Communications of the ACM 23 (1980): 396-410.
items that can be completely processed per unit time, a measure of the volume of information processed. System response time is defined to be the time from the start of processing to the end of processing, a measure of the speed of the system as a whole.

Several results are needed to understand the dynamics of the evolution of (open) information processing systems.5 First, as the rate of requests to a given IPS increases, throughput will increase up to a saturation point. Second, as throughput increases on a given IPS, response time also increases. Information systems are typically subject to both throughput and response time requirements. A system which satisfies both throughput and response time requirements under one load may fail either or both under a heavier load. The structure of an information system may change as a result of other factors, but evolution in response to failure to meet one or both types of requirement is also common. Let us first consider throughput.6

5. An "open" IPS is one in which the rate at which requests are made to the IPS is independent of the degree of responsiveness of the IPS. The rate at which successor lemmings attempt to cross a ravine is unaffected by the fate of their predecessors.
6. We do not address the applicability of functional failure and presumptive anomaly.
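The two results above admit a standard queueing-theoretic gloss; the thesis states them only qualitatively, so the single-server (open M/M/1) model below is my illustration, with invented rates.

    # Open single-server queue: throughput equals the arrival rate while the
    # system is stable, and mean response time is 1 / (service - arrival).
    service_rate = 10.0          # say, charts a planner can process per week

    for arrival_rate in (2.0, 5.0, 8.0, 9.9):
        response = 1.0 / (service_rate - arrival_rate)
        print(f"load {arrival_rate:4.1f}/week -> response {response:6.2f} weeks")
    # response time grows without bound as the load approaches saturation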


When an IPS fails a throughput requirement, one may try to increase throughput by speeding up the component that is constraining throughput (also known as the bottleneck). In the case of Gantt systems, we conjecture that as failures to effectively manage concurrency occurred, there was an emphasis on finding superior planners. If speed-up does not bring the system into compliance, the bottleneck component can be replicated, thus increasing throughput. Expansion and contraction of planning staff in the defense industry is well-known and undoubtedly occurred in the evolution of Gantt systems. Replication will only help up to a certain point: communication and other costs eventually swamp the positive effects of more manpower.

Consider now the situation when a response time requirement is violated. As throughput increases on a given system, response time also increases. Without changing the system, the only way to reduce response time is to reduce the rate at which requests are made, which in turn reduces system throughput. In the case of Gantt systems, it is unlikely that program management would accept reduced processing from its planning staff in return for quicker response.

In order to see reciprocal influence among the components of an IPS, note that every IPS has a bottleneck. The system bottleneck can be moved but never removed, since there is always a slowest component or set of components, regardless of how fast a component becomes or how many times it is replicated. Suppose the bottleneck is moved from component A to component B. If the throughput of component B is high enough, there is no reason for the system to evolve further for reasons of system performance.


If, on the other hand, the throughput of component B is too low, then attention will focus on speed-up or replication of component B, perhaps causing the bottleneck to move to another component.

To the extent that moving the bottleneck from component A to component B represents a technical advance, it may fairly be described as a "reverse salient" of sorts. If the throughput of the newest bottleneck component does not meet the throughput requirement, the "critical problem" for IPS engineers is to solve the problem through speed-up or replication. Over the course of its history, the bottleneck may visit several different components one or more times.

We have suggested a dynamic of IPS change actuated by external demands and dependent on the technical nature of the IPS components. The replacement of human processors by electronic processors made it necessary for the input component to supply instructions and data at higher speeds geared to the electronic processor component. Thus, for Gantt systems, the initial influence flows from the processor to the input/output component. To the extent that the human processor had been used for input functions as well as processing, some of these functions were moved to the electronic processor. Thus, changes to the processor function, while actuated by changes to the load carried by the human input component, subsequently led to changes in the functions performed by the human input component.
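The bottleneck dynamic described above can also be sketched in code. The model below is a toy of my own construction, not anything in the historical record: the component names and rates are invented, system throughput is taken as the minimum of the component rates, and a crude per-copy overhead factor stands in for the communication costs that eventually swamp replication.

```python
# Toy model (invented rates): an IPS as a three-stage pipeline whose
# throughput is set by its slowest component, the bottleneck.
# Repeatedly relieving the bottleneck shows it moving from component
# to component, never vanishing.

rates = {"input": 5.0, "processor": 2.0, "output": 4.0}  # items/sec

def bottleneck(r: dict) -> str:
    # The slowest component constrains the whole system.
    return min(r, key=r.get)

for step in range(4):
    b = bottleneck(rates)
    print(f"step {step}: bottleneck = {b}, system throughput = {rates[b]:.1f}/s")
    rates[b] *= 2.0  # "speed-up" of the constraining component

def replicated_rate(rate: float, copies: int, overhead: float = 0.85) -> float:
    # Replication multiplies capacity, but coordination costs (modeled
    # as an overhead factor per added copy) eventually outweigh the
    # positive effects of more manpower.
    return rate * copies * overhead ** (copies - 1)

for copies in (1, 2, 4, 8, 16):
    print(f"{copies:2d} copies of a 2.0/s component -> "
          f"{replicated_rate(2.0, copies):.2f}/s effective")
```

Run as written, the bottleneck lingers at the processor and then migrates to the output and input components, and the replication table rises and then falls: the pattern the Gantt-era planning staffs are conjectured above to have faced.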


In order to view the change from Gantt to PERT systems as an instance of technological co-evolution, there must be discernible in this change a selective-retentive process, a mechanism by which the

fate of a given invention, its developmental direction, not only depends on its competition with alternative devices performing the same or similar functions and on its co-evolution with a specific other technology, but also depends on the evolutionary success or failure of the higher-level macrosystems of which it is a part.7

7 Edward Constant II, The Origins of the Turbojet Revolution (Baltimore, 1980), p. 14.

In the present case, the given invention was the new PERT management information system. The ability to implement PERT-type management information systems using digital computers insured the ability of PERT systems to compete against Gantt systems. PERT management information systems were part of a larger "military-industrial complex," a macrosystem then devoted to eliminating a "missile gap" thought to threaten the very existence of the country. As PERT systems proliferated, accumulated experience suggested that such systems could not of themselves guarantee success in managing large-scale systems, hardly a surprising observation. "Scientific management" practices lost some of their luster, and by the mid-1960's, the social macrosystem had changed enough so that PERT management information systems were pruned.

Conclusion


Having demonstrated the applicability of both the theory of technological disequilibrium and the theory of technological co-evolution, we now ask which "theory" seems to better "explain" the facts of Chapter 4.8 With respect to the technological change from Gantt to PERT MIS, the theory of technological co-evolution is superior to the theory of technological disequilibrium by virtue of being more specific.

8 See the distinction between theoretical orientations and theories in David Kaplan and Robert Manners, Culture Theory (Englewood Cliffs, 1972), pp. 32-35, 88-91.


CHAPTER 6

ARE META-HISTORICAL CLAIMS TESTABLE?

Introduction

The narrower the claim, the easier it is to determine what is empirically required to satisfy or falsify it. Most importantly for us, the idea that a claim can be tested by reference to the outcome of some activity or procedure presupposes the relative neutrality of that activity or procedure with respect to what is at stake in the claim. In short, an activity or procedure or method can test a claim only if that activity has the character of objectivity with respect to the claim.

The 'philosophical' claim that meta-historical claims concerning technological change are testable by reference to the history of technological change implies that the history of technological change possesses the character of objectivity. Before attempting to establish the objectivity of history, we assume this objectivity and see how on that basis a meta-historical claim might be "tested."

Suppose a meta-historical claim is generated by a source discipline. Suppose also that strong evidence in support of the claim is lacking, so that additional evidence is sought from colleagues in other fields. The specialty methods or materials of another discipline may make it possible to collect especially compelling forms of evidence, particularly since a different discipline may not share the presuppositions of the source discipline.


Evidence acquired from another discipline is likely to have an additional increment of "objectivity."

Let us now reconsider the objectivity of history using an approach inspired by phenomenology. For the phenomenologist, objectivity is an attribute of cognition, which seems to condemn objectivity to a fundamental arbitrariness. For the phenomenologist, however, even cognition has its own logical character, and it is this structure which rescues cognition from 'mere' subjectivity. The correlates of objectivity as a character of cognition are the object(s) which is(are) constituted in the act of cognition. In other words, for the phenomenologist, it is the structure of cognition which guarantees the existence of that which is cognized.

Also inspired by phenomenology, historian Leon Goldstein takes up the issue of the objectivity of history. In the language of phenomenologist Edmund Husserl, Goldstein wants to step back from the "natural attitude" of the historian which is expressed through historical realism:

... a habit of mind ... which inclines those possessed of it simply to assume that the conceptions of factuality, truth, or reference which apply when we speak of the natural world in the natural present must apply when we speak of the historical past.1

1 Leon J. Goldstein, Historical Objectivity (Austin, 1976), p. xxiv.

Although Goldstein's 'natural present' may present some difficulties for phenomenological philosophers of science, the major point is that the historical past is not to be assimilated to the view of time employed by the natural sciences, sometimes called the B-series concept of time.2

2 Richard M. Gale, "The Static versus the Dynamic Temporal: Introduction," in The Philosophy of Time, ed. Richard M. Gale (New Jersey, 1968), pp. 65-85.


Goldstein attempts to clarify the Husserlian concept of constitution, which he then explicitly applies to history:

Sokolowski argues that for Husserl consciousness is a necessary but not a sufficient condition for reality; while it constitutes it, it does not create it. .... we have no access to the historical past except through its constitution in historical research .... the objects of historical knowing are not given in the way in which natural objects present to perception are.3

3 Goldstein (n. 1 above), pp. xxi-xxv.

Is Goldstein a phenomenologist of history? Not quite, since the concept of constitution functions as a metaphor for Goldstein, who wants to locate the objectivity of history, not in the consciousness of the individual historian, but rather in the methods used by the discipline of history.4

4 Goldstein thus presupposes the inter-subjective agreement which was so difficult for Husserl to reach from the starting point of individual consciousness.

Goldstein locates the objectivity of history in the practice of history as a discipline. Viewing history as a way of knowing, the discipline of history is wrenched away from the ordinary conceptions of history which have entered consciousness via the abstractions of science.


If history is conceived as a disciplined way of knowing ... not every belief about the past embodies a historical belief. A historical belief would be based upon the outcome of historical research, a claim to knowledge based upon historical inquiry.5

5 Goldstein (n. 1 above), p. xix.

Since the historian has no direct access to a real past, but rather to a past constituted in the present through the discipline of history, the problem of historical objectivity can only be raised properly from within the context of an inquiry into the nature of the discipline.6

6 Goldstein (n. 1 above), p. 184. Although Goldstein considers the work of Arthur Danto, Maurice Mandelbaum and others in elaborating the position presented, we are interested only in using the conclusions of these inquiries to suggest the kind of objectivity achievable by works in the history of technological change.

The nature of the discipline may be considered in terms of the distinction between "the context of discovery" and "the context of justification." The context of discovery concerns the process by which claims are arrived at, while the context of justification concerns the rational justification of various knowledge claims. According to Goldstein,

With such a view ... the actual intellectual processes of historical constitution ought to be left to the historian to discuss ... while philosophy becomes concerned only when the work is done and the question of justifying the claims of knowledge that arise from it is on the agenda.7

7 Goldstein (n. 1 above), p. 211.

Historians may occasionally give an account of how and why they make their judgements. In doing so, they may invoke the standards and the reasoning of their philosophical friends, but these standards and techniques play a merely persuasive role.


The objectivity which we wish to ascribe to the discipline of history is of two kinds. The discipline of history governs the conditions under which the "game" of historical research is played and, once played, how it is conducted and how performances are evaluated. This level of rationality is associated with the context of discovery for historians. It may also be possible to describe these practices in perhaps more logically compelling terms: if so, then another, second, level of rationality has obtained. This level of rationality is associated with the context of justification for historical knowledge claims.

In this work, we assume for the discipline of history no more than the limited rationality suggested by continued practice. Given this kind of rationality, the discipline of history can appraise various 'meta-historical' claims and can, in this limited sense, test those claims. The discipline of history, through the actions of the community of historians, evaluates a claim in light of evidence assembled using the materials and methods of historical research. It is this evaluation that renders a meta-historical claim 'testable'.

Kinds of Historical Evidence

We now distinguish between two uses of historical evidence. Suppose we have the following meta-historical claim: 'Revolutions follow the relaxation of oppressive conditions.'


In formulating this claim, Crane Brinton did not first define the class of revolutions before formulating his generalization; he organized the results of earlier inquiries into specific revolutions. These results are being put to external uses, serving as data in a larger extra-historical argument. The Brinton interpretation can now function as a datum with respect to other, future, claims.

Consider the following meta-historical claim of technological co-evolution: 'Technological change follows when change occurs in one of two linked technological systems.' There may be a mass of historical research which bears on this claim, but it is not available to a historian of technology in the way in which research results concerning revolution were available to Crane Brinton. Despite the work of Edward Constant, there is no established collection of cases in which technological change is due to technological co-evolution. Part of the work of the community of historians of technology will be to identify cases of technological change to which the claim of co-evolution is applicable.

The idea of assembling the 'historical evidence' is much more problematic in the case of technological co-evolution, since it is not clear at the outset how the evidence is to be found. Indeed, the evidence consists of a collection of technological systems and the types of change which they have endured. Only after this initial classification is done can additional work be done in addressing the claim of technological co-evolution. Hence, because of the relative immaturity of the discipline of the history of technological change, the evidence gathered relevant to the claim of technological co-evolution is internal to the discipline of history.


On the Uses of History

Historical research begins with data and ends with one or more data. In the case of Crane Brinton, the initial data were the results of previous researches on various revolutions, and the terminal datum was a meta-historical claim concerning revolutions. If initial historical data are lacking, the discipline must generate historical data from "non-historical" data. If there are no historical facts except as constituted through historical research, then this task is unavoidable.

If the discipline of history provides the primary evidence bearing on a meta-historical claim, how is the "truth" of such a claim determined, and where is this locus of decision? Here we return to the distinction between the internal and external uses of historical research. When the results of historical research are used externally, they function simply as a datum within a meta-historical claim. Whether such a datum agrees with the meta-historical claim is of no concern to the discipline of history: it is up to the consumers of historical research to decide how to use the data provided.

The discipline of history prescribes the methods by which historical research yields new data from initial historical or non-historical data. Over time, the discipline judges whether this transition from initial data to historical conclusions is valid in particular cases. The conclusions reached by historical research are specific: they state what happened.


Once historical research has paused to yield conclusions, these become initial data available for comparison with one or more meta-historical claims.

History and Truth

We now describe how the data of historical research and a meta-historical claim each participate in a judgement of truth or falsity, using the theory of propositions advanced by philosopher Alfred North Whitehead.8 Whitehead regards the truths of formal logic as very restricted instances of ordinary propositional truth. Whiteheadian propositions involve subjects and predicates, understood in terms of reality as the creative process of passage from data to a new datum.

A Whiteheadian proposition is "about" a set of entities, the logical subjects of the proposition. The proposition states a potentiality, the possibility that the logical subjects may be related in the manner indicated by the logical predicate of the proposition. If the entities are so related, then the proposition is true, otherwise false. For Whitehead, propositions express certain abstract characteristics of a judgement.

8 Whitehead was a non-professional metaphysician and philosopher of history. See Adventures of Ideas (New York, 1933). For discussion of the theory of propositions, see Alfred North Whitehead, Process and Reality, ed. D. R. Griffin and D. W. Sherburne (New York, 1978), pp. 184-207.


Consider the following claim: 'Systemic technological change follows from the technological co-evolution of two or more components.' The logical subjects of this proposition are actual instances of systemic technological change, while the predicate (also termed the predicative pattern) asserts the possibility that in each instance the components involved have been in technological co-evolution. Notice that both the logical subjects and the predicative pattern are complex.

Several kinds of judgements are possible. First, the historian may identify one or more specific instances of technological change and in each case affirm or deny the pattern of technological co-evolution. Here, the proposition is entertained in relation to a restricted set of logical subjects. Second, the historian may simply identify one or more instances of technological change and describe this change without explicit reference to the notion of technological co-evolution. Here, the proposition does not enter into the judgement of the historian at all, although the findings of the historian may be germane to the truth of the proposition. Third, the judgement of the historian may take these other two forms of judgement as datum. Here the proposition is an element in a still more complex proposition involving the meta-historical claim and its relatedness to the two forms of historical judgement described above. This new proposition is the following: 'The given meta-historical claim is warranted (unwarranted) by the findings of the discipline of history.' The notion of warrant is complex, the most rigorous interpretation being that of 'logical truth.'


If the findings of the discipline of history have the kind of objectivity claimed earlier, then it is possible in the Whiteheadian view of propositions as abstractions from judgement to have a "correspondence" between a meta-historical claim and the historical judgements to which it is related, and hence a kind of truth that closely approximates the form of the ordinary sense of truth as correspondence.

Let us consider the form of this third kind of judgement more closely by comparing it with the first two kinds of judgement. Let S represent the logical subject of a proposition and P represent the predicative pattern, so that (S,P) represents a proposition with subject S and predicate P. The first kind of judgement involves the generation of restricted subjects: the discipline of history entertains the proposition (S,P), perhaps "finding" circumstances that lead to the formation of a finite number of restricted propositions (S[1],P), (S[2],P), ... (S[N],P), each subsequently affirmed or denied as a result of historical research. In the second kind of judgement, the discipline of history simply considers various logical subjects S[1], S[2], ... S[N] independently of P.

The third kind of judgement addresses the warrantability of a meta-historical claim with respect to its evidential basis. Here the subject of the new proposition is (S,P), itself a proposition, and the predicative pattern is a collection of affirmations/denials based on an evaluation of the restricted propositions (S[1],P), (S[2],P), etc., by the discipline of history.
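For readers who find the bracket notation slippery, the three kinds of judgement can be mocked up in a few lines of code. This is only an illustrative rendering of the (S,P) apparatus, anticipating the strict stipulation in the next paragraph; the instance names are hypothetical placeholders, not findings of any historian.

```python
# Illustrative rendering of the (S,P) apparatus; instances are invented.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    AFFIRMED = "affirmed"          # first kind of judgement, positive
    DENIED = "denied"              # first kind of judgement, negative
    DESCRIBED_ONLY = "described"   # second kind: change described, P never entertained

@dataclass
class RestrictedProposition:       # a restricted proposition (S[i],P)
    subject: str                   # a specific instance of technological change
    verdict: Verdict               # the discipline's judgement, if any

P = "the components involved were in technological co-evolution"

basis = [  # hypothetical evidential basis assembled by historians
    RestrictedProposition("Gantt-to-PERT MIS", Verdict.AFFIRMED),
    RestrictedProposition("turbojet revolution", Verdict.AFFIRMED),
    RestrictedProposition("electric light and power", Verdict.DESCRIBED_ONLY),
]

def strictly_warranted(basis) -> bool:
    # Third kind of judgement, strictest reading: (S,P) is warranted only
    # if every entertained instance (S[i],P) has been affirmed.
    entertained = [r for r in basis if r.verdict is not Verdict.DESCRIBED_ONLY]
    return bool(entertained) and all(
        r.verdict is Verdict.AFFIRMED for r in entertained)

print(strictly_warranted(basis))  # True for this toy basis
```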


In this third kind of judgement, one might stipulate that the meta-historical proposition (S,P) is not warranted unless all of its instances (S[1],P), ... (S[N],P) have been affirmed by historical research. The universe of instances is finite so that, even though a meta-historical claim (S,P) has the form of a universal judgement, it has only a finite but indefinite number of instantiations.

As Whitehead points out, propositions grow with the world. This means that today the discipline of history may rule one way about each of the instances (S[i],P), tomorrow differently. Moreover, the number of instances will undoubtedly change with the passage of time. Thus, in this third kind of judgement, the predicative pattern of affirmations/negations varies with time, while the subject, (S,P), is fixed. As the predicative pattern associated with a meta-historical claim (S,P) varies, the relationship of (S,P) to the historical evidence, in the form of judgements of (S[i],P), also varies. Hence, a judgement of warrantability may appear to be arbitrary. The objectivity and variability of judgements of the discipline of history make it possible to "rationalize" such judgements through the inductive, statistical formulation given in the Appendix.

Conclusion

History is not "the real past," but the past constituted by historical research; meta-historical claims are testable in the sense that the warrantability of judgements concerning them can be assessed, both qualitatively and quantitatively. This is possible only as we assert both the objectivity and the variability of judgement of the discipline of history.


CHAPTER 7

CONCLUSION

Von Neumann as Change Agent

Von Neumann brought change to the large-scale organizations affected by his recommendations as a consultant. Some interesting contrasts can be drawn between Elmer Sperry and Thomas Edison, the engineer-entrepreneurs, and John von Neumann, the "scientist-intrapreneur," the new breed of organization man who brings about innovation in large-scale organizations. Never a "part" of the bureaucracy in the way ordinary employees are, von Neumann was caught up with the mission of these organizations in ways that few employees can be.

The extensive work performed by von Neumann on the Manhattan Project at Los Alamos put him at the confluence of the three currents of physics, computing, and weaponry. His ability to understand the problems of the military in a comprehensive and politically acceptable fashion made von Neumann invaluable. Defense requirements provided the organizational impetus to explore technological possibilities, the money to develop them, and the political armor needed to insure production of the resulting weapons systems. In this serious game, von Neumann was both pawn and player.

Having experienced Russian 'diplomacy' as a Hungarian, von Neumann believed with great fervor that the threat to the U.S. from the Soviet Union was real and called for dramatic action.


Sharing this sense of mission with members of the military and governmental bureaucracy, von Neumann became a guide through the mysterious regions of science and technology, able to do so because he understood the capabilities that were needed and why they were needed.

Von Neumann married his vision of the technologically possible with an understanding of military requirements, guided by the key assumption that whatever weaponry conjured up by our scientists would also be imagined by their scientists. Since they would not hesitate to develop and, at the right moment, deploy such weapons systems, it was our obligation to explore the possibilities and to develop and deploy those enhancing our strategic position vis-a-vis the Soviet Union. As illustrated by the development of the hydrogen bomb, the computer was an indispensable tool for the exploration of physical reality and its technological possibilities. The electronic digital computer became a necessary ingredient in the planning, preparation, and execution of both war and peace.

The development of systems designed to secure adequate military, political, and economic power became increasingly global in scope during the period from 1945 to 1960. The ICBM program involved many thousands of workers, hundreds of companies, and engendered an extensive security apparatus. New times bred new management practices which in turn spawned new management information systems.


Technological Change and the Arrow of Time

World War II legitimatized saturation bombing, serving notice that civilian populations were an instrument of war. This logic was extended in the post-war years so that the management of populations became a necessity of defense. In these developments we see a chain of necessities forged from elements which were once only possibilities. Do the realized possibilities of technological change necessarily constrain future possibilities?

My tentative answer to this question is in the affirmative. This suggests the need for caution, at least in thought, if not in action. If meta-historical claims concerning technological change are testable in the limited sense asserted in Chapter 6, what are the consequences of this testability for those whose policy rests upon sometimes unacknowledged and less often tested theories of technological change?

First, it is prudent to consider the possible direct consequences of failing to successfully manage the technological change upon which broader policy depends. If our adversaries assume that we have the capability promised by proponents of SDI and construct their foreign policy accordingly, could this be a destabilizing factor? It may, but perhaps need not be, that the risks of successful management of technological change are outweighed by the risks of unsuccessful management.

Second, it is prudent to consider the indirect consequences of successful management of technological change.


The tremendous prestige transferred to the PERT MIS made PERT an obligatory technique for managing any large-scale military, industrial or governmental enterprise. This was an indirect consequence of success in the Polaris ICBM project.

Third, since the concept of the manageability of technological change focuses only on direct consequences, it is easy to ignore the possibility of further technological and/or social change that follows from the first order of change. The many examples of reverse salients offered by Thomas Hughes in the context of military invention indicate that every technological improvement in one technological subsystem brings with it an improvement in the technological subsystems to which it is coupled. It is at least conceivable that, in some situations, there is greater risk in entering upon a technological race than in not developing a technological capability. Such a race may consume the resources of each competitor and heighten the danger which, as von Neumann observed, accompanies the destabilizing and potentially catastrophic effects of technological change.

Fourth, since the applicability of any theory of technological change is in principle always arguable, it behooves policy makers and analysts to understand the "empirical" support for any theory of technological change. That support may be theoretical, but it likely rests upon historical research.


A Counsel of Prudence

Up until several years ago, Wrigley Field was the only baseball park without night games. Over much protest, lighting was installed, and baseball fans can now enjoy both day and night games. All that has been lost, or has become more difficult to retrieve, is one species of experience. And the species of experience which has been lost seems merely negative, for it is the experience of baseball as only a day game.

That the variety of American social experience has been altered by the spread of computerized MIS is an untested hypothesis. It comes from the fascination with and anxiety over technological change which I share with Heidegger, Whitehead, Mumford, Winner, and Ortega. This thesis represents a first step toward the development of a systematic historical perspective on the emergence of computerized MIS in post-war America. Such a perspective might be an effective element in the process of formulating and implementing policy in the plane of social experience defined by information technology, national security, and management planning.1

1 Different information management systems may help shape different technical societies. See David Chaum, "Security Without Identification: Transaction Systems to Make Big Brother Obsolete," Communications of the ACM 28 (1985): 1030-44.


APPENDIX

WARRANTABILITY, JUDGEMENT, AND PROBABILITY

In the spirit of the 17th and 18th century probabilists, this appendix speculates on the formal relationship between meta-historical claims and their basis in historical judgement. Consider the following claim C(X): 'The likelihood is X that the meta-historical claim (S,P) is true,' where X is a number that lies between 0 and 1. C(X) is understood to mean that if M instances of the meta-historical claim (S,P) are evaluated, X*M of its instances (S[i],P) will be affirmed. The warrantability of C(X) is obtained by comparing an a priori ("theoretical") value of X (namely, Y) with an a posteriori ("empirical") value for X (namely, Y') relative to a (possibly reconstructed) evidential basis.

Assume that M instances (S[i],P) of (S,P) have been enumerated and that the discipline of history has rendered a judgement J[i] for each, the M judgements forming the evidential basis for/against (S,P). Suppose that, of these M judgements J[i], N are affirmations. The most reasonable a posteriori estimate is Y' = N/M. The probability of obtaining exactly N affirmations out of M judgements can be computed given that the probability of an affirmative judgement in any single instance is Y. This probability of obtaining exactly N affirmations out of M judgements is what I call the warrantability of C(X) with respect to the evidential basis J[i] supplied by the discipline of history.
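Expressed in standard notation (the appendix describes this verbally; the formula itself is simply the ordinary binomial probability), the warrantability W of C(X), given M judgements of which N are affirmative and an a priori affirmation probability Y, is:

```latex
W \;=\; \Pr(N \text{ affirmations} \mid M, Y)
  \;=\; \binom{M}{N}\, Y^{\,N}\, (1 - Y)^{\,M - N}
```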


Note that the warrantability of C(X) can be quite high or low independently of the value of X.

The value Y is our initial judgement concerning the associated meta-historical claim (S,P). If X were "really" equal to Y = 0.90, then after considering 100 instances (S[i],P) and rendering M = 100 judgements J[i], we would expect to have made Y*M = 0.90*100 = 90 = N affirmations. Suppose, however, that out of 100 instances, only 50 affirmations are made, so that Y' = 0.50. Assuming the objectivity of the discipline of history, it seems obvious that the claim is not warranted by the 100 historical judgements that form the data for (S,P). Using a binomial distribution (based on Y, M and N) one can compute the probability of obtaining N = 50 affirmations out of M = 100 judgements. With N = 50, M = 100, and Y = 0.90, the probability is quite close to zero.

As a meta-historical claim (S,P) "grows in time," the values of both M and N will change. Changes in the value of M reflect the accumulation or elimination of instances by the discipline of history as it continues to entertain the meta-historical claim (S,P). Changes in the number of affirmations, N, reflect the evolution of judgement by the discipline of history concerning the instances (S[i],P) of (S,P). If the ratio of N to M approaches 1 over time, then Y' will approach 1, so that we would expect the value of Y to also approach 1. Hence, C(X) would be highly warranted by the data supplied by the discipline of history, and the truth of (S,P) would be considered a relatively settled matter unless the discipline undergoes a radical shift in interpretation.
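The numerical example above can be checked in a few lines. The sketch below simply evaluates the binomial formula; the function name is my own, and math.comb requires Python 3.8 or later.

```python
# Check of the appendix's example: the probability of exactly N
# affirmations in M judgements when each affirms with probability Y.
from math import comb

def warrantability(N: int, M: int, Y: float) -> float:
    return comb(M, N) * Y**N * (1.0 - Y)**(M - N)

print(warrantability(90, 100, 0.90))  # the expected case: about 0.13
print(warrantability(50, 100, 0.90))  # about 5e-24, "quite close to zero"
```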


SELECTED BIBLIOGRAPHY

Aitken, Hugh G. J. Taylorism at Watertown Arsenal: Scientific Management in Action, 1908-1915. Cambridge: Harvard University Press, 1960.

Arnold, Henry. "Third Report to the Secretary of War by the Commanding General of the Army Air Forces." In The Impact of Air Power: National Security and World Politics, ed. Eugene M. Emme. Princeton: Van Nostrand, 1959.

Barnett, Raymond A. College Mathematics for Management, Life, and Social Sciences. San Francisco: Dellen Publishers, 1981.

Bernstein, Richard J. Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis. Philadelphia: University of Pennsylvania Press, 1983.

Blair, Clay, Jr. "The Passing of a Great Mind." Life Magazine, 1957.

Burstein, Jerome S. and Edward G. Martin. Computer Information Systems with BASIC. Chicago: Dryden Press, 1989.

Butz, J. S. "The USAF Missile Program: A Triumph of Orderly [...]." In A History of the US Air Force Ballistic Missiles, ed. E. Schwiebert. New York: Praeger, 1965.

Card, Stuart, Thomas Moran, and Allen Newell. "The Keystroke-Level Model for User Performance Time with Interactive Systems." Communications of the ACM 23 (1980): 396-410.

Cerruzzi, Paul. "An Unforeseen Revolution: Computers and Expectations, 1935-1985." In Imagining Tomorrow: History, Technology, and the American Future, ed. Joseph J. Corn. Cambridge: MIT Press, 1986.

Chaum, David. "Security Without Identification: Transaction Systems to Make Big Brother Obsolete." Communications of the ACM 28 (1985): 1030-44.

Constant, Edward, II. "A Model for Technological Change Applied to the Turbojet Revolution." Technology and Culture 14 (1973): 553-72.


Constant, Edward, II. The Origins of the Turbojet Revolution. Baltimore: Johns Hopkins University Press, 1980.

Dantzig, George B. Linear Programming and Extensions. Princeton: Princeton University Press, 1963.

Davis, Kenneth S., ed. Arms, Industry and America. New York: H. H. Wilson, 1971.

Defense Advanced Research Projects Agency. Strategic Computing, New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense. 28 October 1983.

Dunaway, Edward. Interview by Cadet James R. Luntzel III, 22 June 1973. Call Number K239.0512-935, transcript. United States Air Force Oral History Program.

Fazar, Willard P. "The Origins of PERT." The Controller 30 (December 1962).

Fischer, David Hackett. Historians' Fallacies. New York: Harper & Row, 1970.

Freeland, Richard M. The Truman Doctrine and the Origins of McCarthyism. New York: Knopf, 1972.

Friedel, Robert D., Paul Israel, and Bernard Finn. Edison's Electric Light: Biography of an Invention. New Brunswick, New Jersey: Rutgers University Press, 1986.

Gale, Richard M. "The Static versus the Dynamic Temporal: Introduction." In The Philosophy of Time, ed. Richard M. Gale. New Jersey: Humanities Press, 1968.

Gardiner, Patrick. Theories of History. Glencoe, Illinois: Free Press, 1959.

Goldstein, Leon J. Historical Objectivity. Austin: University of Texas Press, 1976.

Goldstine, Herman H. The Computer from Pascal to von Neumann. Princeton: Princeton University Press, 1972.

Handlin, Oscar. "A Discipline in Crisis." Chap. in Truth in History. Cambridge: Harvard University Press, 1979.


Heidegger, Martin. The Question Concerning Technology and Other Essays. Translated with an Introduction by William Lovitt. New York: Harper Colophon, 1977.

Hockney, R. W. and C. R. Jesshope. Parallel Computers 2: Architecture, Programming and Algorithms. Bristol: Adam Hilger, 1988.

Hughes, Thomas P. Networks of Power: Electrification in Western Society, 1880-1930. Baltimore: Johns Hopkins University Press, 1983.

Hughes, Thomas P. Elmer Sperry: Inventor and Engineer. Baltimore: Johns Hopkins University Press, 1971.

Hughes, Thomas P. "The Order of the Technological World." In History of Technology, ed. A. R. Hall and N. Smith. London: Mansell, 1980.

Johns, Claude J., Jr. "The United States Air Force Intercontinental Ballistic Missile Program, 1954-1959: Technological Change and Organizational Innovation." Ph.D. dissertation, University of North Carolina, Chapel Hill, 1964.

Jungk, Robert. Brighter Than a Thousand Suns. New York: Harcourt, 1958.

Kaplan, David and Robert Manners. Culture Theory. Englewood Cliffs: Prentice-Hall, 1972.

Kourany, Janet. Scientific Knowledge: Basic Issues in the Philosophy of Science. Belmont, California: Wadsworth Publishing Co., 1987.

Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.

Kuhn, Thomas. The Essential Tension. Chicago: University of Chicago Press, 1977.

Kuhn, Thomas. "The Relations between History and the History of Science." Daedalus 100 (1971): 271-304.

Laudan, Larry et al. "Scientific Change: Philosophical Models and Historical Research." Synthese 69 (1986): 141-223.

Laudan, Rachel. Introduction to The Nature of Technological Knowledge: Are Theories of Scientific Change Relevant?, ed. Rachel Laudan. Boston: Reidel Publishing Company, 1984.


Layton, Edwin T. "Mirror-Image Twins: The Communities of Science and Technology in 19th-Century America." Technology and Culture 12 (1971): 562-80.

Lens, Sidney. "The Military-Industrial Complex." In Arms, Industry and America, ed. Kenneth Davis. New York: H. H. Wilson, 1971.

Lewis, David W. Review of The Origins of the Turbojet Revolution, by Edward Constant II. Technology and Culture 23 (1982): 512-16.

Malcolm, D. G. et al. "Application of a Technique for Research and Development Program Evaluation." Operations Research 5 (1959).

McMullin, Ernan. "Philosophy of Science: An Overview." In Scientific Knowledge: Basic Issues in the Philosophy of Science, ed. Janet Kourany, 3-19. Belmont, California: Wadsworth Publishing Co., 1987.

Miller, Howard S. Dollars for Research: Science and Its Patrons in Nineteenth-Century America. Seattle: University of Washington Press, 1970.

Miller, Robert W. Schedule, Cost, and Profit Control with PERT: A Comprehensive Guide for Program Management. New York: McGraw-Hill, 1963.

Neustadt, Richard E., and Ernest R. May. Thinking in Time: The Uses of History for Decision-Makers. New York: Free Press, 1986.

Popper, Karl. The Poverty of Historicism. Boston: Beacon Press, 1957.

Reich, Leonard. The Making of American Industrial Research: Science and Business at GE and Bell, 1876-1926. Cambridge: MIT Press, 1985.

Reynolds, Terry S. Review of Networks of Power: Electrification in Western Society, 1880-1930, by Thomas P. Hughes. Technology and Culture 25 (1984): 644-47.

Ritland, Osmond J. "Concurrency." Air University Quarterly Review (Winter-Spring 1960-61).

Rosenberg, Charles. No Other Gods: On Science and American Social Thought. Baltimore: Johns Hopkins University Press, 1976.


Sapolsky, Harvey M. The Polaris System Development: Bureaucratic and Programmatic Success in Government. Cambridge: Harvard University Press, 1972.

Schlesinger, Arthur M. "The Inscrutability of History." In The Vital Past: Writings on the Uses of History, ed. Stephen Vaughn. Athens, Georgia: University of Georgia Press, 1985.

Shurkin, Joel. Engines of the Mind. New York: Norton, 1985.

Skolimowski, Henryk. "The Structure of Thinking in Technology." In Philosophy and Technology: Readings in the Philosophical Problems of Technology, ed. C. Mitcham and R. Mackey. New York: Free Press, 1973.

Smith, Merritt Roe. Introduction to Military Enterprise and Technological Change: Perspectives on the American Experience, ed. Merritt Roe Smith. Cambridge: MIT Press, 1985.

Thackray, Arnold. "Natural Knowledge in Cultural Context: The Manchester Model." American Historical Review 74 (1974): 672-709.

United States Navy Department. Line of Balance Technology. Office of Naval Material (NAVEXOS P1851 Rev 4-62). Washington, D.C., April 1962.

von Neumann, John. "The von Neumann Letter." Annals of the History of Computing 9 (1988): 357-68.

von Neumann, John. "The NORC and Problems in High Speed Computing." In Papers of John von Neumann on Computing and Computer Theory, ed. W. Aspray and A. Burks. Cambridge: MIT Press, 1986.

von Neumann, John. "Defense in Atomic War." In Papers of John von Neumann on Computing and Computer Theory, ed. W. Aspray and A. Burks. Cambridge: MIT Press, 1986.

von Neumann, John. "Can We Survive Technology?" Fortune Magazine, June 1955.

Von Neumann Manuscript Collection. Manuscripts Division, Library of Congress.

Webster's Seventh New Collegiate Dictionary. Springfield, Mass.: G. C. Merriam, 1965.


Whitehead, Alfred North. Process and Reality, ed. D. R. Griffin and D. W. Sherburne. New York: Free Press, 1978.

Winner, Langdon. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press, 1977.