Citation
Measuring outcomes of management training in the public sector

Material Information

Title:
Measuring outcomes of management training in the public sector
Creator:
Nickerson, Anne Louise
Publication Date:
1986
Language:
English
Physical Description:
xiv, 223 leaves ; 29 cm

Subjects

Subjects / Keywords:
Executives -- Training of -- Evaluation ( lcsh )
Executives -- Training of -- Evaluation ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 190-205).
General Note:
Submitted in partial fulfillment of the requirements of the degree of Doctor of Public Administration, Graduate School of Public Affairs.
General Note:
School of Public Affairs
Statement of Responsibility:
by Anne Louise Nickerson.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
18004076 ( OCLC )
ocm18004076
Classification:
LD1190.P86 1986d .N52 ( lcc )

Full Text
MEASURING OUTCOMES OF MANAGEMENT TRAINING
IN THE PUBLIC SECTOR
by
Anne Louise Nickerson
B.A., Washburn University, 1952
M.S.W., University of Denver, 1970
M.P.A., University of Denver, 1981
A thesis submitted to the
Faculty of the Graduate School of Public Affairs of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Doctor of Public Administration
Graduate School of Public Affairs
1986


This thesis for the Doctor of Public Administration
degree by
Anne Louise Nickerson
has been approved for the
Graduate School
of Public Affairs
by
Date


Nickerson, Anne Louise (D.P.A., Public Administration)
Measuring Outcomes of Management Training in the Public
Sector
Thesis directed by Associate Professor Dail A. Neugarten
Due to a rapidly changing economic and social
environment, more emphasis is now placed on the need
for organizations to be more efficient and effective to
increase productivity. Management training is seen as
one method for an organization to prepare staff to meet
this goal. Comprehensive methods for evaluating the
results of management training should be an integral
part of such training programs.
This dissertation addressed methodologies for
measuring the outcomes of management training in the
public sector. Was there a difference in either the
individual or the organization? The problem identified
was the lack of comprehensive methods for evaluation of
management training.
Difficulties in evaluating management training
include: unclear purposes for management training and
its evaluation; determination of management training as
the cause of individual or organizational changes; deter-
mination of cost effectiveness; and limitations of agency
resources.


The purpose of this study was to develop a model to
evaluate management training in the public sector, to
design a methodology based upon the model, and to apply
part of the methodology to a specific management train-
ing program, namely the State of Colorado's 1984 Manage-
ment Certificate Program (MCP).
This research project was an exploratory study
using a nonexperimental design. Personal interviews with
a structured questionnaire were used to obtain the data
from 46 participants of the Management Certificate Pro-
gram (MCP3). Each participant was required to identify
a work-related problem and to develop and implement a
Management Improvement Project (MIP) using the knowledge
and skills acquired in MCP.
As a result of descriptive analysis, including cross-
tabulations, statistically significant relationships were
identified between sets of variables. These variables
included: (a) the status of the MIP; (b) whether changes
resulted from implementation of the MIP; (c) the levels
of changes; (d) characteristics of the participants, and
(e) managerial styles and support.
Results and conclusions were that this methodology
can be applied in the evaluation of management training;
MIPs were helpful in the application of knowledge and
skill from MCP3 to the work place; statistically


significant relationships were identified between sets
of variables, and the implementation of MIPs did result
in individual and organizational changes.
The form and content of this abstract are approved. I
recommend its publication.


ACKNOWLEDGMENTS
The writer would like to acknowledge and thank
each member of the Dissertation Committee--Dail A.
Neugarten, the Chairperson; E. Sam Overman; and Judd
N. Adams. The writer also appreciated the assistance
given by Kyle Davis in the statistical analysis of the
data. In addition, appreciation is expressed for the
assistance given by Mike Allen, U.S. Office of Personnel
Management, Denver Regional Training Center. Special
thanks is given to the family of the writer--Robert E.
Nickerson, the husband, and Lynne, Laura, and Robert
J. Nickerson--for their moral support and encouragement
during this project.


CONTENTS
CHAPTER I
STATEMENT OF PROBLEM, PURPOSE AND
SIGNIFICANCE ..................................... 1
Introduction................................. 1
Statement of Problem ............................. 4
Historical Perspective ......................... 5
Future Trends ................................. 11
Causes of the Problem............................ 14
Unclear Purposes of Management
Training Programs ................ 15
Results of Management Training ........... 19
Cost Effectiveness ............................ 22
Limited Agency Resources ...................... 26
Statement of Purpose and Significance .... 29
Purposes..................................... 29
Significance ................................ 29
Government agencies ........... 30
Universities ................................ 31
Individuals.................................. 32
CHAPTER II
LITERATURE REVIEW................................ 33
Evaluation Objectives ........................... 35
Kirkpatrick.................................... 35


Hamblin.......................................... 39
Searcy et al. ................................... 43
Bakken and Bernstein............................ 44
Bauman and Ott................................... 47
Evaluation Methods ................................ 49
Participant Action Plan Approach
(PAPA) ..................................... 49
Clement and Aranda .............................. 53
Evaluation Tools................................. 56
Evaluation Strategies ............................. 68
CHAPTER III
STATEMENT OF METHOD.................................. 79
Research Question ................................ 81
Management Certificate Program .................... 81
Purpose......................................... 82
Participants .................................... 83
Course Content .................................. 84
The Model......................................... 87
Course Content .................................. 88
Training Technologies ........................... 90
Application..................................... 92
Impact........................................... 93
Research Methodology .............................. 95
Design ........................................ 96
Sample.......................................... 97
Interview Guide................................. 98


Statistical Analysis ........................... 103
Anticipated Findings ........................... 105
CHAPTER IV
DATA PRESENTATION................................... 108
Description of All Participants .................. 109
Demographics ................................... 109
Managerial Styles of Participants .............. 114
Characteristics of Organizations ............... 117
Implementing MIPs ............................. 118
Management support ........................... 122
Transfer of learning .......................... 124
Description of Participants as to
Status of MIPs ............................... 126
Demographics .................................. 127
Managerial Styles .............................. 132
MIPs ........................................... 135
Description of Individual and/or
Organizational Changes ........................ 141
Changes ....................................... 142
Management Support and Managerial
Styles....................................... 148
Statistical Analysis of Relationships .... 150
The Status of the Project....................... 151
Significant relationships .................... 141
Interesting results .......................... 160
Not significant.............................. 161
Did Changes Occur?............................. 162


Significant relationships ................... 162
Interesting results ........................... 167
Not significant.............................. 168
The Five Levels of Change ..................... 168
Significant relationships ................... 169
Other results ................................. 172
Summary of Findings............................. 172
CHAPTER V
CONCLUSIONS AND RECOMMENDATIONS ................... 175
Limitations of the Study......................... 176
Implications and Significance ................... 178
Conclusions and Recommendations ................. 183
Future of Management Training
Evaluations.................................... 195
BIBLIOGRAPHY......................................... 198
APPENDIX
A. RESEARCH QUESTIONNAIRE ....................... 206
B. LETTER TO PARTICIPANTS FOR THE
INTERVIEWS ................................... 214
C. SUMMARY OF RESPONSES TO
QUESTIONNAIRES ............................... 216


TABLES
Table
1. Training Expenditures of Public
and Private Organizations as
Reported in the Survey Conducted
by Training Magazine ........................ 24
2. Contingency Framework for Management
Training Evaluation ........................... 54
3. Evaluation Tools to Measure Evaluation
Objectives....................................... 61
4. MCP3 Enrollments and Graduates................... 85
5. Length of Employment With the State
of Colorado for Participants of
MCP3 ........................................... 110
6. Paygrade Levels of the Participants
of MCP3 ....................................... 112
7. Participants' Rankings of Power Used
With Work Groups .............................. 116
8. Prevailing Management Styles of the
Organizations of the Participants
Based on Likert's System Theory ................ 119
9. Management Support Received by
Participants in Implementing the
MIPs........................................... 123
10. Participants' Responses to the
Question, "Was MIP a Good Vehicle
for On-the-Job Testing of the
Knowledge and Skills Acquired in
MCP?" . ................................... 125
11. Description of the Groups of the
Participants by the Status of MIPs........... 128
12. Agencies of Participants for Groups
A, B, and C.................................. 129


13. Groups A, B, and C and Length of
Employment With the State of
Colorado....................................... 131
14. Prevailing Management Styles of
Organizations in Groups A, B, and C ..... 134
15. Problems Encountered by Groups A,
B, and C in Implementing MIPs................ 137
16. Management Support Received by
Participants in Groups A, B, and
C in Implementing MIPs.......................... 139
17. Comparison of Variables of the
Three Groups, Groups A, B, and C............. 140
18. Responses of Groups A, B, and C
as to Whether Changes Occurred as
a Result of Implementing MIPs.................. 143
19. Frequency Distribution of Levels
of Changes Identified by Group D . . ... 147
20. Relationships Between the Status
of the Project and Whether the
Final Written Report Had Been
Submitted..................................... 152
21. Relationship Between Current Status
of Project Variable and the Number
of Years of Employment With the
State of Colorado ........................... 153
22. Significant Relationships Between
Current Status of Project Variable
and Problems in Implementing MIPs............ 155
23. Significant Relationships Between
Current Status of MIPs Variable
and Some of MCP3 Course Content................. 157
24. Significant Relationships Using
ANOVA Tests Between Current Status
of Project Variable and Total Use
of MCP3 Course Areas............................ 158


25. Relationship Between Current Status
of Project Variable and Changes
Resulting From MIPs.......................... 159
26. Relationship Between Whether Changes
Resulted and Problem of Change in
Work Assignment Variables ...................... 163
27. Relationship Between Resulting
Changes and Management Support
Received by Participants ....................... 165
28. Relationship Between Resulting
Changes and Total Use of MCP3
Courses......................................... 166
29. Relationships Between the Five
Levels of Change Variable and
MCP3 Courses.................................... 171
30. Summary of Significant Statistical
Relationships From Cross-Tabulations
of Sets of Variables............................ 173
31. Comparison of Anticipated Findings
With the Results of the Research ............... 179


FIGURES
Figure
1. Hamblin's Model for the Evaluation
of Management Training ...................... 41
2. The Systematic Evaluation of
Training (SET) Approach ..................... 45
3. Evaluation Uses for the Diagnostic
Instruments of Burns and Gragg .............. 66
4. Management Training Strategies to
Increase the Odds that Skills,
Knowledge and Behavior Acquired
in Training Will be Used Back on
the Job.......................................... 69
5. Evaluation Strategies Identified
by Salinger and Deming........................... 76
6. Management Training Evaluation Model
Developed for this Research Project .... 89
7. The Course Content of MCP3, by the
Five Major Course Areas and the
Sections in Each Course Area.................... 91
8. Cross-Tabulation Between Dependent
and Independent Variables From the
Data Collected in the Questionnaires .......... 106


CHAPTER I
STATEMENT OF PROBLEM, PURPOSE AND
SIGNIFICANCE
Introduction
Despite an increase in the volume of management
training for public managers, there are few comprehen-
sive methods for evaluating whether such training results
in any change in the individuals or the organization of
which they are members (Brandenburg, 1982). Federal,
state, and local governments provide management train-
ing for staff, but the purposes and effectiveness of
such training remain unclear.
Doeringer (1981) states that training is such a
critical aspect of the government's ability to provide
public goods and services that more exhaustive analysis
of effectiveness is warranted. As the role of govern-
ment has expanded, managers have been required to
operate more effectively and efficiently in a more
complex environment, requiring better administrative
skills to perform their duties (Levine, 1978).
Management training has developed to provide
knowledge, skills, and abilities for improved job perfor-
mance. Training, as defined by Hamblin (1974), is a


sequence of experiences or opportunities designed to
modify behavior in order to attain a stated objective
or skill in a job, education, or personal development.
The evaluation of such training, however, has not been
considered a priority (Patton, 1982).
Clement and Aranda (1982) and Robinson and Robin-
son (1979) agree that one of the most significant
problems with evaluation is the lack of comprehensive
methods to assess the relationship, if any, between
management training and improved individual or organi-
zational performance. Does management training, in
either the public or the private sector, make a differ-
ence?
What happens, for instance, when an employee
returns from a management training program? Were there
improved skills? Was there improved on-the-job perfor-
mance? Did the organization benefit?
Too often, employees attend a training program,
find the skills valuable, but don't use them on the
job. There has been no skill transfer--no effective
on-the-job application of skills and knowledge
acquired. Everyone--the organization, the trainer
and the employee--loses. (Robinson & Robinson,
1985, p. 82)
There are various formats for management training,
including on-the-job training, training provided by
"in-house" organization training departments, special
seminars or workshops provided by trainers from outside


of the organization, and more formalized training pro-
vided in an academic setting (Doeringer, 1981).
According to Kirkpatrick (1983), an objective or
purpose of training should include descriptions of
appropriate behavior so that improvement within the
organization will be achieved. "Desired results include
improved profitability, reduction of turnover, reduction
of accidents, reduction of cost, improved quality, and
improved morale and satisfaction among employees"
(Kirkpatrick, 1983, p. 79). Even though Kirkpatrick and
others agree on the importance of the results of train-
ing, there is a problem with how to measure the results.
In this study, an evaluation model was developed,
an evaluation methodology was designed, and the method-
ology was applied to one aspect of management training
at the state level. The results or outcomes of manage-
ment training were evaluated as to differences in the
individuals who participated in the training, or in the
organizations to which they belonged. Any differences
were considered in relation to six levels of change,
which had been adapted from Suchman (1967) and Bauman
and Ott (1985). These six levels were:
1. No change;
2. Expanded awareness--new sensitivity to oppor-
tunities, general awareness;


3. New job skills and knowledge--acquired, but not
applied;
4. Increased willingness to utilize skills or
knowledge acquired in management training;
5. Individual change--improved job performance as
a result of application of expanded awareness, new job
skills and knowledge, and personal enlightenment; and
6. Organizational change--being of general benefit
to the agency.
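To make the scale concrete, the six levels can be treated
as an ordered coding scheme for interview responses. The
sketch below (in Python, with hypothetical names and data
that are not part of the original study) shows one such
encoding:

from enum import IntEnum

class ChangeLevel(IntEnum):
    # Ordered scale adapted from Suchman (1967) and Bauman and Ott (1985)
    NO_CHANGE = 1
    EXPANDED_AWARENESS = 2
    SKILLS_ACQUIRED_NOT_APPLIED = 3
    INCREASED_WILLINGNESS = 4
    INDIVIDUAL_CHANGE = 5
    ORGANIZATIONAL_CHANGE = 6

def highest_level(reported):
    """Code a participant at the highest level of change reported."""
    return max(reported, default=ChangeLevel.NO_CHANGE)

# A hypothetical participant reporting expanded awareness and
# improved job performance would be coded at level 5.
print(int(highest_level([ChangeLevel.EXPANDED_AWARENESS,
                         ChangeLevel.INDIVIDUAL_CHANGE])))  # 5

Because the levels form an ordered scale, coding at the
highest reported level preserves the distinction between
acquiring skills and actually applying them.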
Statement of Problem
Most current managerial training programs
suffer from two major weaknesses: they are rarely
evaluated properly to see if they indeed result in
improved levels of performance, and they neglect
fundamental principles of adult learning. (Gold-
stein & Sorcher, 1974, p. 1)
Proper evaluation should include comprehensive methods
for evaluating the impact of the management training on
the individuals and/or the organizations involved. The
first weakness identified by Goldstein and Sorcher
(1974) relates to the problem addressed in this discus-
sion.
There is a lack of comprehensive methods for evalu-
ating whether public sector management training programs
result in any change in the individuals or the organiza-
tions of which they are members. "That is to say, up
to now there doesn't seem to have been a practical


method for evaluating the effectiveness and establishing
the value of management training for an organization"
(Pattan, 1983, p. 34). This problem will be presented
(a) from a historical perspective, (b) in terms of current
practices, and (c) in terms of future trends.
Historical Perspective
Management training for public managers emerged in
the 1930s with the expansion of government at all levels
(Steinmetz, 1976). As the work force enlarged, employees
were being promoted to management levels without the
necessary knowledge, skills or abilities required for
the work. Specialized knowledge of management skills
was needed to develop the expertise required of super-
visors and managers.
The advent of World War II emphasized the training
functions of a supervisor. "In fact, management found
that without training skill, supervisors were unable to
produce adequately for the defense or the war effort"
(Steinmetz, 1976, pp. 1-10). Training offices were
created to meet the training needs of organizations.
In the 1950s, additional emphasis was placed on
both training and evaluation as an aftermath of World
War II and a rapidly changing economic and social
environment. The federal government initiated and
implemented legislative actions in response to this


need. The legislation later impacted state and local
levels of government as well (Steinmetz, 1976).
In 1958, for example, Title 8 of the National
Defense Act was passed, giving a mandate to the Civil
Service Commission (now the U.S. Office of Personnel
Management) "to gather and analyze information on train-
ing in the federal government on an annual basis, and be
ready and able to answer questions Congress might have
about the training of Federal employees" (Gordon, 1984,
p. 114).
With the passage of the Civil Service Reform Act
in 1978, the U.S. Office of Personnel Management was
charged with the responsibility of ensuring that federal
agencies establish programs for the training and develop-
ment of public sector managers, including Senior Execu-
tive Service candidates. To meet this responsibility,
training programs, research and evaluations were
initiated for public sector managers (Campbell, 1979).
During this period, there were several types of
management training delineated. One had to do with the
systems of training, which included (a) informal or
work place training and (b) formal classroom training
through schools and other training organizations
(Doeringer, 1981).


Three functional categories of the informal train-
ing, as listed by Doeringer (1981) were:
1. Essential--based on skills needed;
2. Remedial--based on basic education, skills
training and counseling, to compensate for shortcomings
of public education and the training system, and
3. Beneficial--may or may not be vocational, but
useful, initially benefiting the worker more than the
employer (Doeringer, 1981).
Management training, as identified in this research
project, relates to formal classroom training provided
by those outside of the organization. Public sector
organizations, realizing the need for more effective
management, have established collaborative relationships
with universities to provide training and development
assistance to improve their management. Universities
have developed courses in management and technical skills,
given both on and off campus (Bresnick, 1981).
There was also a growing concern about how to
measure and increase employee and organizational
performance (Doeringer, 1981). Bresnick (1981) states:
Some central issues of training evaluation
have not yet been successfully solved in manage-
ment training. While change scores on pre- and
posttests are a good indicator of knowledge
acquisition, the measurement of changes in on-the-
job performance is difficult, time-consuming, and
expensive. (p. 685)
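To make the change-score idea concrete, the sketch below
(in Python, with invented scores used purely for
illustration) computes pre-to-posttest gains:

from statistics import mean

pretest = [62, 70, 55, 80]    # hypothetical scores before training
posttest = [78, 84, 70, 88]   # scores for the same trainees afterward

gains = [after - before for before, after in zip(pretest, posttest)]
print("Individual gains:", gains)   # [16, 14, 15, 8]
print("Mean gain:", mean(gains))    # 13.25

# As Bresnick notes, a positive mean gain indicates knowledge
# acquisition but says nothing about changed on-the-job performance.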


Training was seen as a way to reduce or eliminate defi-
ciencies in organizational practices by not only
increasing the skills and knowledge of the employee,
but by augmenting the knowledge and skills of managers.
So, the need for training and evaluation was seen as
increasingly significant.
Clement and Aranda (1982) state that there has been
an increase in the emphasis on the evaluation of manage-
ment training. Robinson and Robinson (1979) emphasize
the importance of the results of management training.
"A disturbingly large amount of evidence indicates,
however, that the amount of skill actually transferred
to the job is still disappointingly low" (Robinson &
Robinson, 1979, p. 21). Specific methods for increasing
the amount of skill transfer in supervisory training
have been identified by Robinson and Robinson (1979).
These methods are applied before, during, and after
training. For example, prior to training, assessment
of the supervisors' needs and obtaining management
support are essential. During training, developing the
mastery of skills, building the confidence of the
trainee, and the trainee's applying the skills immedi-
ately are helpful. Essential in the application of the
skills are the commitment of the trainee, the use of
work-related assignments, and mutual sharing between
the trainee and the supervisor.


After the training period, self-assessment and
reinforcement of the new skills by peers and management
are needed. The self-assessment includes feedback and
reinforcement, measures of effectiveness and any
necessary revisions. Without these or similar methods,
how do we know that the management training is a success
or that the individuals and organizations are benefiting
from the use of agency resources that provided the
training opportunities? These methods are the ideal,
but they are not always applied in either the public or
the private sector.
In the public sector, management training programs
have been developed at federal and state agencies. In
some states, management training programs for top-level
public managers were developed beginning
in the late 1970s in the states of Georgia, Louisiana,
Vermont, North Carolina, Massachusetts, and Montana. A
similar program was developed in Colorado in 1981
(Davies, 1981). In these state programs, the program
content usually included modern management attitudes,
practices, and techniques, as well as "in-depth analysis
and diagnosis of complex managerial, organizational,
behavior, financial, and administrative law programs in
both public and private organizations utilizing the case
method" (Davies, 1981, p. A-3). In addition, students


were required to complete projects designed to apply
course information to actual on-the-job situations.
These projects and certification examinations were
needed for completion of the program in some of the
states.
Since 1983, additional states have developed
training for public managers, with management certifi-
cation programs, using similar guidelines for course
content and projects (Davies, 1981). Limited informa-
tion is available as to the evaluation of these programs.
However, the Colorado program has a continuing interest
in the development of effective evaluation methods for
the management training programs.
In the Denver area, the Denver Regional Training
Center and the Western Executive Seminar Center, both
divisions of the U.S. Office of Personnel Management,
provide training for federal employees. Managers and
supervisors may receive training through the Denver
Regional Training Center, while the Western Executive
Seminar Center provides training for upper level
managers. In addition, state or local government
employees, as well as those in the private sector, may
attend the training programs at the Denver Regional
Training Center.


The evaluation process for the Western Executive
Seminar Center consists primarily of written and verbal
feedback periodically during the course about individual
training sessions and instructors. In addition, seminar
evaluations are completed at the end of the course, and
in some cases, during a period of time after the course.
At the Denver Regional Training Center, management
and supervisory courses are evaluated by the written
comment sheets at the end of the course. Some of the
courses required job-related pre-course work. In addi-
tion, OPM training regions identify certain courses for
pre- and posttest evaluations, with the same test being
administered prior to the first class, and again at the
end of the course. These evaluations are related to the
course content and the learning of knowledge or skills.
There is a continuous interest by the federal government
in the development of more comprehensive evaluation
methods for the management training courses (Salinger &
Deming, 1982).
Future Trends
The current economic and social environments
indicate a need for more effective evaluation methods
for management training for public sector managers
(Gordon, 1984). Organizational problems such as


decreased budgets, fewer resources, and the emphasis
on more accountability heighten the importance of
evaluation of management training. The trends for the
future have been addressed by different authors, but
with similar predictions. Gordon (1984) reported
trends of:
1. Government agencies doing more in-house
design, developing and delivering more training
programs than in the private sector organizations,
and
2. Government agencies decreasing both the number
of training occurrences and the length and cost of those
programs.
According to Smith (Doeringer, 1981), training
trends include:
1. For the federal government, costs will be up,
the average length of training down, and the occurrences
decreasing,
2. For state and local governments, management of
training will be even more decentralized, and
3. Interest in training in the public sector is
increasing.
Giegold and Grindle's (1983) predictions for the
future included:


1. Methods of training will include high tech-
nology, such as teleconferencing and computer-assisted
instructions,
2. More participative learning as opposed to
lecturing, and
3. Topics will change (and are changing) from the
planning, organizing, directing, and controlling of the
1960s and 1970s to the predominance of behavioral
science subjects, such as communication and interper-
sonal skills, in the 1980s and 1990s.
These training trends appear to indicate a continuing
need for the evaluation of management training to assist
in making training decisions.
According to Hamblin (1974), high technology, in
the form of computers, will result in changes in evalu-
ation of management training. As information systems
provide more easily accessible data, the purposes and
methods of evaluation will be impacted.
I would predict that before the end of the
century, it will be impossible to write a book
purely about evaluation of training. Training,
except for certain routine tasks, will cease to
be a separate organizational activity in the
majority of firms, and evaluation (if the word
is still being used) will be concerned with the
total evaluation of organizational or personal
activities rather than with training as such.
(Hamblin, 1974, p. 190)
In summary, costs will continue to rise, especially
in management training, with questionable returns on


investments (Pattan, 1983). Management training needs
to be more closely related to the work being done and
repackaged to emphasize this relationship. The training
should reflect both experiences on the job, as well as
the educational concept from educational institutions
(Rush, 1976). Most important, there is an increased
emphasis on the importance of evaluation and account-
ability (Kirkpatrick, 1983).
The impact of all of these trends is to re-emphasize
the importance of determining the results of management
training; that is, to determine the benefit to the indi-
vidual and the organization. Comprehensive evaluation
methods are needed to assess results, and provide feed-
back for revising and improving the management training
being offered (Bradford & Cohen, 1984).
Causes of the Problem
In a survey done by Brandenburg (1982), one set of
results was that longitudinal data collection techniques
were seldom used.
The most-used evaluative techniques, regardless
of criticism heaped upon them, are the "smile"
indices, i.e., questionnaires and comments. Cogni-
tive and performance-based outcome measures are
used less often, while the least used techniques
are those that require longitudinal follow-up of
participants. (Brandenburg, 1982, p. 18)


Adding to this result was the low priority given to
evaluation functions as well as the lack of evaluative
skills by the evaluators. Appropriate evaluation methods
to measure outcomes of management training programs
usually include longitudinal follow-up of participants,
and are difficult to design and implement.
In addition to the problem of designing longi-
tudinal studies, other reasons why evaluation of manage-
ment training is difficult include:
1. Unclear purposes of training programs them-
selves, impacting the evaluations of such programs,
2. Difficulty in determining that management
training is the cause of individual or organizational
changes,
3. Difficulty in showing cost effectiveness, and
4. Limitations of agency resources (Deming, 1982;
Giegold & Grindle, 1983; Kirkpatrick, 1983; Pattan,
1983).
Unclear Purposes of
Management Training
Programs
Difficulties in designing and implementing evalu-
ation methods relate to the purpose of a management
training program. The goals and purposes of management
training programs are often unclear, creating problems


with both the evaluation of the training and the selec-
tion of the proper evaluation tools. One of the first
questions to be asked in planning a training program,
and designing the evaluation should relate to the
purpose of the training program itself.
Bakken and Bernstein (1982), in their Systematic
Evaluation of Training (SET) approach, identify the
importance of clarifying the goals and objectives of a
training program. Identification of needs of the
decision makers and the goals of training assist the
selection of appropriate outcomes.
One of the primary reasons for evaluating
training in the first place is to determine if
the goals of the program have been achieved.
Unless those goals have been clearly stated at
the outset, it may be impossible after the fact
to perform a meaningful evaluation. (Bakken &
Bernstein, 1982, p. 45)
To prevent misinterpretation, negotiating written con-
tracts prior to the training program can help clarify
expectations of the trainee as well as decision makers
in the organization. The SET approach will be discussed
in more detail in Chapter II.
Weiss (1972) discusses the purpose of evaluation
research, and the comparison between evaluation and other
research. She states that:
The purpose of evaluation research is to measure
the effects of a program against the goals it set
out to accomplish as a means of contributing to


subsequent decision making about the program and
improving future programming. (p. 4)
Weiss (1972) further states that the distinguishing
feature of evaluation research is not the method or
subject matter, but intent--the purpose for which it is
done. The purpose of both the training and the evalu-
ation should be clearly stated and understood by instruc-
tors, course participants, and evaluators. The written
contract, referred to by Bakken and Bernstein (1982),
would be helpful in accomplishing this.
Clement and Aranda (1982) describe an approach to
management training evaluation which includes the
consideration of the objective or problem on which the
management training is to focus. They indicate that
recent surveys have shown that the effectiveness of any
training technique or tool depends on the chosen objec-
tives of the training.
When the purposes of the training are not clear, the
evaluation tools chosen and the outcomes can be nega-
tively impacted. To avoid this, accurate determina-
tion of the purpose of the evaluation is important.
Even more so is the question of whether appropriate
tools exist. Will they work in a particular situation?
Giegold and Grindle (1983) identify three types of
potential tools, depending upon the purpose of the
evaluation. They are:


1. Preactive--tools to determine needs, used before
training begins. (Example--a pretest.)
2. Real time--observations of behavior, completed
during the training by the instructor or a consultant
or evaluator. (Example--written observations.)
3. Reactive--written evaluations at the end of the
course, also called "happiness sheets" done after the
fact.
Combinations of the preactive, real time, and
reactive evaluation tools may be used for a particular
training program. Methods for data gathering, according
to Huse (1980) listed from a lower degree of inter-
action or confrontation with the course participant to
a higher degree, are (a) questionnaires and instruments,
(b) interviewing, (c) sensing, (d) polling, (e) collages
and drawing, (f) observation, and (g) unobtrusive
methods. Evaluation tools will be discussed in more
detail in Chapter II.
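One way to summarize these tool categories when planning
an evaluation is as a simple lookup from purpose to tool
type. The sketch below (in Python) is illustrative only;
the purpose-to-tool pairings are assumptions, not
prescriptions from Giegold and Grindle:

# Giegold and Grindle's three tool types, keyed for lookup.
TOOL_TYPES = {
    "preactive": "needs-determination tools used before training (e.g., a pretest)",
    "real time": "observations of behavior recorded during the training",
    "reactive": "end-of-course written evaluations ('happiness sheets')",
}

# Hypothetical purpose-to-tool pairings for planning an evaluation.
PURPOSE_TO_TOOL = {
    "diagnose training needs": "preactive",
    "monitor learning in progress": "real time",
    "gauge participant satisfaction": "reactive",
}

for purpose, tool in PURPOSE_TO_TOOL.items():
    print(f"{purpose}: {tool} -- {TOOL_TYPES[tool]}")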
With a clear purpose for the evaluation, Giegold
and Grindle (1983) suggest that some of the factors to
consider in the selection of effective measurement tools
could be:
-How does the manager view the training efforts of
the individual?
-What is the manager's interest in the learning of
the trainee?


-What are the joint plans the manager sets with the
trainee to carry out or apply the newly learned behavior?
-What is the recognition of the newly learned
behavior?
-What latitude does the manager allow for the
trainee's experimenting, or the risk factor?
Unclear purposes for the management training program
lead to a problem in the selection of an appropriate
evaluation method. This, in turn, leads to problems
with the selection of appropriate evaluation tools.
Results of Management Training
Evaluating management training is difficult for
another reason--namely, the complexity of human behavior
and the knowledge that changes in behavior result from a
variety of causes, only one of which might be the manage-
ment training program. It is often difficult to deter-
mine that the training itself resulted in any changes in
the work performance of the individual or had an impact
on the organization. Other factors or variables which
might have resulted in changes could be the organiza-
tional climate, the internal grapevine, the quality of
management, past experience of the participant in the
training program, or the characteristics of the partic-
ipant (Deming, 1982).


In describing changes in individuals, Hersey and
Blanchard (1982) were of the opinion that there are four
levels of change. These levels are: (a) knowledge
changes, (b) attitudinal changes, (c) behavior changes,
and (d) group or organizational changes. This is
similar to descriptions of changes by Suchman (1967) and
Bauman and Ott (1985), as well as the levels of change
used in this research.
Changes in the knowledge of an individual are the
easiest to make, with group or organizational changes
being more difficult, according to Hersey and Blanchard
(1982). Can organizational change result from the
training of the public manager? If so, at what level?
What aspect of the training program will increase the
chances of organizational changes? All of these
questions are difficult to answer, and all relate to the
transfer of the knowledge, skills, and abilities from
management training programs to the job.
In addition, it is difficult to isolate any
particular aspect of human behavior to determine
what has caused it. With formal management training,
the individual as a class participant is affected by
previous experiences, individual characteristics, moti-
vation and interest, as well as variables in the work
situation he or she will return to after the training
(Ashkenas & Schaffer, 1979; Pattan, 1983).


Identification of possible relationships between
training and individual or organizational change is
critical to the solution of the problem--developing
comprehensive evaluation training methods. Behavioral
changes that result from management training are diffi-
cult to identify and evaluate, as previously stated.
However, according to Patten (1979), there are five
basic requirements which must first be met by those who
do want to change behavior as a result of management
training. These are:
1. Individual must want to improve,
2. Individual must recognize his/her own weak-
nesses,
3. Individual must work in a permissive climate,
4. Individual must have some help from someone who
is interested and skilled, and
5. Individual must have an opportunity to try out
the new ideas (Patten, 1979).
Even though Robinson and Robinson (1979) and
Patten (1979) identify these requirements, achieving
them is difficult, adding to the complexity of the problem.
In evaluating most training, our prime concern
should be to determine what bottom-line results
can be directly attributed to the training. How-
ever, with management and general knowledge train-
ing, it is often more difficult to define measur-
able performance indices, so an evaluation of the
results of these training programs is seldom
attempted. (Brown, 1980, p. 11)


Cost Effectiveness
In addition to unclear purposes and the difficulty
in determining training as a cause of individual or
organizational changes, another aspect of the problem
is the difficulty in showing the benefit to the agency,
or the cost effectiveness resulting from management
training. In reality, "Very few federal agencies can
show a 'bottomline' outcome from employee training, and
therefore, have difficulty defending their training
budgets from the unskilled wielding of the bleeding
knife" (Gordon, 1984, p. 115).
Benefits to the individual or the organization can
be documented by effective and comprehensive methods of
evaluation. Specific information is needed, however, to
do this. Examples of information needed, but often not
readily available to the evaluator, include training
expenses, the number of participants in the training
program, hours of training for the trainee as well as
outcome measurements (Brown, 1980; Doeringer, 1981;
Pattan, 1983).
Data on expenses and numbers of participants in
public sector training programs are difficult to obtain
at any level of government. Reporting of data to a
centralized source helps in record keeping and reporting.
On the state and local levels, centralized or statewide


data were not usually available, as the management of train-
ing programs is more decentralized, located within
individual agencies within the state (Doeringer, 1981).
In California, however, annual surveys of state
agencies are being made in order to prepare centralized
training reports for the governor (Doeringer, 1981). In
determining cost effectiveness, the dollars spent and
the number of trainees are factors. Having this and
other management training information centralized at the
levels of local, state, and federal governments would
help in solving the problem of determining the benefits
of management training programs. In the meantime,
limited information is available. One source of available
information is Training Magazine.
The October 1984 issue of Training Magazine provided
some information as to training expenses. A data base of
1,821 organizations from both the public and private
sectors was used. As shown in Table 1, the projected
total of budgeted dollars in these agencies for training
employees in 1984, excluding salaries, was $4,208.6
million. Training items included in this budget were
hardware budget expenditures for audiovisual and video
equipment; off-shelf materials for prepackaged training
materials; custom-designed materials which had been
individually tailored to the organizations' specific


Table 1
Training Expenditures of Public and Private Organizations as
Reported in the Survey Conducted by Training Magazine
1984 Budgeted Training Expenditures (in millions of dollars)

Organization Size      Hardware  Off-Shelf     Custom   Outside    Seminars/      Total
(No. of Employees)       Budget  Materials  Materials  Services  Conferences  Projected
                                    Budget     Budget    Budget       Budget     Budget
50-99                     191.4      184.1      177.2     246.7        417.5    1,216.9
100-499                   330.0      315.7      280.9     303.0        580.1    1,809.7
500-999                    57.8       87.5       47.7      85.6        125.0      403.6
1,000-2,499                58.6       63.9       49.4      68.9        106.2      347.0
2,500-9,999                43.5       47.8       42.9      38.5         79.4      252.1
10,000+                    32.0       37.2       33.6      31.5         45.0      179.3
Total                     713.3      736.2      631.7     774.2      1,353.2    4,208.6

Note. From Training Magazine, October 1984, p. 26.


needs; and outside services for consultants, printing,
etc. The size of the budgets for seminars and confer-
ences provided by trainers from outside of the organi-
zation was $1,353.2 million, or approximately 32% of the
total budget. Management skills and development train-
ing were included in the training programs of 89.0% of
the total sample in the survey.
In addition, Training Magazine included in their
survey a report of the training for federal employees,
both military and civilian, in the fiscal year (FY) of
1981. The data were provided by the U.S. Office of
Personnel Management. The 1981 data were used because
personnel cutbacks and computer-system changeovers made
the 1982 and 1983 data highly suspect. Problems with
the data collection
of the federal government training programs have been
continuous (Gordon, 1984).
In view of the focus on the public sector in this
dissertation, the report has significance as to the
amount and cost of training for federal employees.
Gross averages suggest that approximately one
in three federal employees received some amount
of training in FY 1981 (indicated by the over-
all training rate of .37), at a gross average
of $175 per employee. (Gordon, 1984, p. 116)
Federal agencies were found to have spent a combined
total of $366.5 million to deliver 780,174 training


occurrences to a combined employee population of 2.09
million people (Gordon, 1984).
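The gross averages quoted above follow directly from the
reported totals; a quick arithmetic check (in Python,
using only the figures given by Gordon, 1984):

# FY 1981 federal training figures reported by Gordon (1984).
total_cost = 366.5e6    # combined training expenditure, in dollars
occurrences = 780174    # training occurrences delivered
employees = 2.09e6      # combined employee population

print(f"overall training rate: {occurrences / employees:.2f}")    # ~0.37
print(f"gross cost per employee: ${total_cost / employees:.0f}")  # ~$175
print(f"cost per occurrence: ${total_cost / occurrences:.0f}")    # ~$470

The per-occurrence figure is not reported by Gordon but
follows from the same totals.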
Among the general types of training reported in the
study, management skills and development was the type
most frequently offered in both the public and
private sectors.
Nearly 90% of our respondents indicated they
offer some formal training of this type, compared
to 74.6% in 1983. Second-ranked supervisory skills
also experienced a jump over last year. Ranked
third in 1983, with 67.3% of the respondents indi-
cating their organizations provided it, supervisory
skills increased to 86.3% this year. (Gordon,
1984, p. 45)
The ability to answer questions as to the cost
effectiveness of a management training program, or the
benefits to the individual or the organization will be
critical to the continuation or expansion of management
training. The future trends of management training, as
previously discussed, will be affected by whether or not
cost effectiveness can be determined.
Limited Agency Resources
Another cause of the problem is the result of
limited agency resources, such as staff positions, funds
or assets available to the organization, time available,
or changes in job descriptions. This has implications for
the planning of training and evaluation programs, as they
are usually the lowest on the list of agency priorities


in the reassignment of available resources (Giegold &
Grindle, 1983). As available resources have changed,
the resulting predominant management style in the public
sector in the late 1970s and the early 1980s has been
cutback management, which deals primarily with survival
during a stressful period of diminishing resources
(Levine, 1978).
In most public sector agencies, as well as some
private agencies, limited agency resources curtail
training opportunities, as well as evaluations. Time
needed to determine the purpose or tools for evaluation
is not always available (Giegold & Grindle, 1983). There
are usually very few training planners and coordinators
who have the knowledge, ability, and time to evaluate
individual or organizational changes resulting from
training (Deming, 1982). In the list of agency prior-
ities, management training and the evaluation of the
management training rank low.
The paradox has developed--to do more with less.
Public agencies are required to increase efficiency,
effectiveness and productivity. However, fewer resources
are now available to improve the skills and knowledge of
the staff expected to accomplish these goals. The
result is an internal organizational environment under
constant change, emphasizing the need for effective


evaluation methods to determine the results of manage-
ment training planned for managers and supervisors.
Evaluation of management training should be viewed
as a coordinated process, planned from the identifi-
cation of a need for training through the evaluation.
This planning should answer why, what, who, and
when. In accomplishing effective evaluation, Daly
(1976) states:
It should begin in advance of the training,
when the training objectives are first estab-
lished, and a plan of evaluation should be pre-
pared at that time. The evaluation should deter-
mine the trainees' ability before the program
starts, continually monitor their progress during
the course of the program, and evaluate the results
immediately at the conclusion of the program, as
well as at three-, six- and nine-month intervals
afterward. (p. 22.21)
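Daly's schedule can be laid out mechanically. The sketch
below (in Python; the course dates are hypothetical, and
months are approximated as 30 days) generates the
checkpoints he describes, apart from the continuous
monitoring during the course:

from datetime import date, timedelta

def evaluation_schedule(start, end):
    """Checkpoints following Daly's plan: a baseline measure, an
    end-of-course evaluation, and 3-, 6-, and 9-month follow-ups."""
    points = [("baseline ability measure", start),
              ("end-of-course evaluation", end)]
    for months in (3, 6, 9):
        points.append((f"{months}-month follow-up",
                       end + timedelta(days=30 * months)))
    return points

for label, when in evaluation_schedule(date(1984, 1, 9), date(1984, 6, 15)):
    print(when, label)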
Despite the suggestions of Daly, evaluation of
public managers' training has usually been based on
immediate reactions of participants to training activ-
ities, or measures of the learned skills or knowledge
acquired. These evaluations include the use of employee
self-assessments, pre- and posttests related to course
content and reaction or "happiness sheets" (Kirkpatrick,
1983). In this section, the problem and four of the
causes of the problem have been discussed. The
research will address this problem, and it is hoped that
the findings will contribute to the available knowledge.


Statement of Purpose and Significance
The problem, as previously stated, is the lack of
comprehensive evaluation methods to determine whether
management training programs in the public sector make
a difference in the work performance of the individuals,
or in the overall performance of their organizations.
The purpose and significance of this dissertation were
identified within this context.
Purpose
The purpose of this dissertation was:
1. To develop a comprehensive model to evaluate
management training in the public sector,
2. To design an evaluation methodology, based upon
the model, and
3. To apply part of the methodology to a specific
management training program, namely the State of Colo-
rado's 1984 Management Certificate Program (MCP).
Significance
Management training programs have increased as the
need for improved efficiency, effectiveness, and produc-
tivity in the public sector increased. Therefore, the
anticipated results of this study could be helpful in
providing methodology for measuring the outcome or
results. The impact of this research could be


significant to (a) government agencies, (b) universities,
and (c) individuals in the training field.
Government agencies. Federal, state, and local
agencies have been involved in the provision of training
for managers and supervisors. Since the 1978 Civil
Service Reform Act, the Executive Seminar Centers and
the Regional Training Centers of the U.S. Office of
Personnel Management, as well as individual federal
agencies, have provided management training for federal
managers. Before that time, the U.S. Civil Service
Commission was providing this service (Campbell, 1979).
In addition, state and local governments have estab-
lished various methods of providing management training
for managers and supervisors.
As previously discussed, a need exists for compre-
hensive evaluation methods to determine the outcomes of
these management training programs. The results of this
research project could be helpful in identifying a
methodology for determining the indicators for the
transfer of knowledge, skills, and abilities from
management training programs to the work situation,
providing benefits to both the participants in the
training and their organizations.
This study could also be significant to state or
local levels of government agencies, more specifically


the Colorado State Department of Personnel. Currently,
this department has annual reviews as to continued
funding for its Management Certificate Program
(MCP). This training program, funded by the State of
Colorado, provides management training for Colorado
public sector managers. Part of the model and evalu-
ation methodology for this study were applied specif-
ically to MCP, in the hope of providing data for decision
makers about the utility of the program. The model and
evaluation methodology, however, also have application
to other management training programs offered by other
levels of the government, as well as the private sector.
Universities. The findings of this dissertation
could also be helpful to universities and other academic
programs in developing curricula in the field of public
administration. Public administration course content
might be directed to developing skills and knowledge
that result in improved work performance of managers.
Clarification of relationships between course content
and individual or organizational changes resulting from
the management training could affect the planning and
implementation of course content.
Colleges and universities are increasingly involved
in establishing management training programs for public
sector managers, requiring knowledge of the transition


of learning from the classroom to the work environment
(Zemke & Gunkler, 1985). The findings from this
research could identify significant relationships
between changes and the content of the management train-
ing programs.
Individuals. Individuals who are interested or
involved in training programs must address the lack of
comprehensive evaluation methods.
Assessment of the transfer of learning from train-
ing to actual performance is the exception, rather
than the rule, because it is not firmly estab-
lished as a necessary component of organizational
and human resource development. (Deming, 1982,
p. 65)
As the assessment of transfer of learning becomes more
of a priority, the importance of having an effective
comprehensive evaluation of management training increases.
In the next chapter, current literature pertaining
to this research project will be reviewed.


CHAPTER II
LITERATURE REVIEW
As stated earlier, the transfer of knowledge,
skills, and abilities by a participant from the manage-
ment training classroom to the job is difficult to
accomplish and to assess. A common observation about
this phenomenon is that:
The training "worked" but the problem--whatever
it was--didn't get solved. Somehow, what the
trainees learned didn't transfer to the workplace,
and into their day-to-day performance repertoire.
(Zemke & Gunkler, 1985, p. 48)
The systematic evaluation of management training in
the public sector seems too complicated, too risky, too
costly, or to warrant only limited commitment (Deming,
1982; Giegold & Grindle, 1983). Usually only enough
evaluation is done to satisfy minimal needs to demon-
strate program success or accountability. "However,
the gains to be achieved through cost-effective,
reliable, and objective evaluation of training are
substantial" (Searcy, Cushing, & Singh, 1982, p. 1).
As reflected in the current literature, the benefits
of evaluation include:
1. Improvement of the training office situation as to
accountability, management support, and provision of data,


2. Benefit to the training manager to make changes
in instructional techniques, program management, and
course design, as well as to improve credibility with
management through use of assessment data,
3. Benefits to the trainee, in better job perfor-
mance or enhanced career opportunities, and
4. Training course and program improvement, by
providing feedback needed to improve both instructional
content and methodology (Searcy et al., 1982, p. 2).
Evaluation of the outcomes of management training
is a measure of the transfer of learning. Zemke and
Gunkler (1985) state that "The empirical studies of
transfer of training have provided a wealth of practical
suggestions for enhancing the value of training in the
world of work" (Zemke & Gunkler, 1985, p. 49). However,
they continue:
Not only is there little clear-cut evidence
that doing particular things will help transfer
learning to the job environment, there isn't even
much consistent theory behind why transfer does
and doesn't occur. We can't even find agreement
about how to define "transfer of training." . . .
Given prejudice, we define "transfer of training"
as: the effects of training on the subsequent
performance of an operational task. (Zemke &
Gunkler, 1985, p. 49)
Zemke and Gunkler (1985) indicate that their
definition of the transfer of training includes both
the retention of what is learned and the quality of the
original training as factors in the transfer.


In the review of literature pertinent to this
dissertation, there was more written on management
training than on the evaluation of management training
in the public sector. In this section, the literature
will be reviewed in four areas: (a) evaluation objec-
tives, (b) evaluation methods, (c) evaluation tools,
and (d) evaluation strategies.
Evaluation Objectives
The identification of evaluation objectives guides
the model and methodologies applied in the evaluation
of management training. Current objectives of training
program evaluations are similar to those first identified
by Kirkpatrick in 1959. In later writings, authors
including Kirkpatrick (1983), Hamblin (1974), Searcy
et al. (1982), Bakken and Bernstein (1982) and Bauman
and Ott (1985) used similar typologies in describing
evaluation objectives.
Kirkpatrick
Kirkpatrick (1959) listed four main objectives of
evaluation as:
1. Reaction--how did the participants feel about
the training; how well did they like it,
2. Learningmeasuring the principles, facts, and
skills understood by the participant, did they learn,


3. Behavior--the difference between behavior on
the job before and after training, did they apply/use
the skills or knowledge on the job, and
4. Results--how much change occurs in the work
environment, e.g., systems, procedures, policies, did
the training affect the bottom line.
In measuring whether or not a particular objective
has been met, the evaluation methods become progressively
more complex, moving from measuring reactions to
measuring results. Therefore, the first three objectives
are used more frequently than the fourth.
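To make this ordering concrete, Kirkpatrick's four
objectives can be represented as ordered data, with
measurement complexity increasing down the list. The
following sketch is this writer's illustration, not part
of Kirkpatrick's work, and all names in it are
hypothetical (Python):

    # Illustrative only: Kirkpatrick's (1959) four evaluation
    # objectives, ordered from simplest to most complex to measure.
    KIRKPATRICK_LEVELS = [
        ("reaction", "How did participants feel about the training?"),
        ("learning", "What principles, facts, and skills were learned?"),
        ("behavior", "Did job behavior change after the training?"),
        ("results", "Did the work environment itself change?"),
    ]

    def levels_up_to(name):
        """Return the levels up to and including the named one,
        reflecting the practice of evaluating lower levels first."""
        names = [level for level, _ in KIRKPATRICK_LEVELS]
        return names[: names.index(name) + 1]

    print(levels_up_to("behavior"))  # ['reaction', 'learning', 'behavior']

Run as written, the sketch prints the three levels
Kirkpatrick recommends concentrating on.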
Evaluations of reactions, even though the easiest
to do, do not measure learning, but do attempt to measure
the feelings of the participants. The evaluation methods
used in measuring whether learning objectives have been
met are usually specific and applied in the teaching of
a skill or a body of knowledge. For example, in a training
program to teach giving effective presentations, partic-
ipants may be given an opportunity to do presentations
in class. Simply by observing these presentations, the
progress made can, to some degree, be determined by
comparing the first presentation to later ones.
Behavior evaluation is a particularly important
evaluation objective. Application of the new knowledge


and skills, as stated earlier, is usually the desired
end result of the training. Another way to describe
these three objectives is that they deal with evaluating
the effects of the training on the trainee (Pattan,
1983). In evaluating reactions, the focus is on changes
in how the trainees feel; in learning, the focus is on
the changes in the trainee's knowledge; and in behavior,
the focus is on changes in the trainees' job behavior
(Pattan, 1983).
These results are identical with the learning
outcomes described in classic learning theory:
Kirkpatrick's reaction and learning changes are
the same as the two internal learning outcomes
of affective and cognitive skills, and his behavior
change is identical with the external learning
outcomes of psychomotor skills in classical learn-
ing theory. (Pattan, 1983, p. 45)
In measuring the results of training, the focus is
on organization-wide changes in productivity, profits,
safety, quality, and/or cost reductions. Kirkpatrick
suggests evaluating results by using the standard
methods, such as pretests, posttests, reaction sheets,
paper-and-pencil tests, and questionnaires, with some
of the evaluation being done after the training has been
completed, using surveys, interviews, or observation.
"There are, however, so many complicating factors that
it is extremely difficult if not impossible to evaluate
certain kinds of programs in terms of results" (Kirk-
patrick, 1960, no page numbers).


Pattan (1983) criticizes Kirkpatrick's methods for
measuring results.
These techniques can be useful but, in my
opinion, they do not go far enough: They are not
significantly linked with people's self-interest
in terms of pleasant or unpleasant consequences.
They lack punch or a motivating force powerful
enough for people to want to acquire, retain, and
use new managerial skills and change their
behavior accordingly on the job. (Pattan, 1983,
p. 45)
He relates this to the fears and anxieties of people
who resist change. The key is the motivation provided
for change: the willingness to take the risk and apply
the knowledge learned in the management training program
to the job. This is the challenge.
As measuring the results of training is the most
difficult, Kirkpatrick recommends concentration on
evaluating training by the use of the first three
objectives--reaction, learning, and behavior (Kirkpatrick,
1983). Pattan (1983) agrees that "Nobody can establish
incontrovertible, direct, one-to-one matching correla-
tion between a specific management training course and
a later drop in absenteeism or an increase in morale"
(p. 46). His model, which will be discussed later, is
another attempt, though, to meet this challenge in
developing a comprehensive methodology for the evalu-
ation of management training.


Hamblin
Hamblin (1974) states that evaluation, as an
assessment of value or worth, can be applied at any
point in a training program. As with other authors, his
evaluation objectives are similar to those of Kirkpatrick
(1959). In his opinion, there is a cause and effect
chain linking the five levels or objectives of manage-
ment training effects or outcomes. The five levels are:
1. Reactions--including reactions to the trainer,
other trainees, the training setting, or previous
knowledge. Reactions are highly complex and shifting,
requiring the identification of objectives to evaluate
effects.
2. Learning--acquiring the ability to behave in new
kinds of ways. Three conditions which must be met
include: (a) trainee has basic aptitude, (b) trainee's
existing state of learning must be compatible with
assumptions made in training objectives, and (c) trainees
must react favorably to training.
3. Job behavior--applying learning on the job,
discovering all the job behavior changes that have been
affected by the training.
4. Organization--determining the effects of
trainees' job behavior changes on the functioning of
the organization in which they work.


5. Ultimate value--determining the cost efficiency
of the training. This is usually economic, but in the
public sector it would be a measurement of human good
over purely financial criteria (Hamblin, 1974).
Hamblin describes his objectives as being similar
to those of Kirkpatrick (1959), except for Kirkpatrick's
fourth objective of results. Hamblin has divided this
level into two, organization and ultimate value,
because it seems to be operationally useful to
distinguish between, on the one hand, changes in
the way in which the organization works, and, on
the other hand, changes in the extent to which the
organization achieves its ultimate goals. (Hamblin,
1974, p. 14)
The methodology for the evaluation of these objec-
tives (see Figure 1) begins with the training. As the
model indicates, each level consists of both objectives
and effects. The objectives (O1 through O5), on the left-
hand side of the model, require the evaluator to inves-
tigate and evaluate pre-training situations. In deter-
mining the effects of the training (E1 through E5), the
evaluator needs to investigate the post-training situation.
The arrows in the diagram, which are off-shoots
from the objectives and the effects, represent unspeci-
fied non-training activities or events which always
occur and are present outside of training. Hamblin
also discusses the Hawthorne effect on evaluation, which
the Hawthorne studies of the 1930s identified. The


[Figure 1, "The Cycle of Evaluation," is a flow diagram:
training leads through five paired levels of objectives
(O1-O5) and effects (E1-E5)--reactions, learning, job
behavior, organization, and ultimate value.]

Figure 1. Hamblin's (1974) Model for Evaluation of
Management Training (as presented in the Appendix
of Evaluation of Management Training, 1974).


Hawthorne effect determined that people behaved differ-
ently precisely because they were the subjects of
research. The existence of the research project was
the cause of participants' changing behavior (Hamblin,
1974). Since this effect cannot be entirely eliminated
in the evaluation of training, Hamblin feels it can be
put to positive use in the involvement of participants
in evaluating management training.
In describing the techniques used in measuring
these objectives, Hamblin (1974) pairs each objective
with suitable techniques:

Reactions: session reaction scales, reaction notebooks,
observation records, end-of-course reaction forms, or
expectations evaluations.
Learning: pre-course questionnaires, skill tests, exams.
Job behavior: activity sampling, diaries, interviews,
questionnaires, or observation.
Organization: indices of productivity, studies of
organizational climate, or work flow studies.
Ultimate value: cost-benefit analysis. (Hamblin, 1974)
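Restated as a lookup table, Hamblin's pairings amount to
a simple mapping from objective to techniques. A minimal
sketch follows; the dictionary merely restates the
pairings above, and the function name is hypothetical:

    # Hamblin's (1974) objective-to-technique pairings as a table.
    HAMBLIN_TECHNIQUES = {
        "reactions": ["session reaction scales", "reaction notebooks",
                      "observation records", "end-of-course reaction forms",
                      "expectations evaluations"],
        "learning": ["pre-course questionnaires", "skill tests", "exams"],
        "job behavior": ["activity sampling", "diaries", "interviews",
                         "questionnaires", "observation"],
        "organization": ["indices of productivity",
                         "studies of organizational climate",
                         "work flow studies"],
        "ultimate value": ["cost-benefit analysis"],
    }

    def techniques_for(objective):
        """Look up the techniques suited to an evaluation objective."""
        return HAMBLIN_TECHNIQUES[objective.lower()]

    print(techniques_for("Job Behavior"))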
The question asked by Hamblin (1974) is not, "Can
training be evaluated?" but, "How is it evaluated?" He
defines management training as being a sequence of expe-
riences or opportunities designed to modify behavior, in


order to attain a stated objective or skill in a job
(Hamblin, 1974). As with Kirkpatrick, there
is agreement on the importance of the relationship
between the management training, the evaluation objec-
tives, and the results.
Searcy et al.
Searcy et al. (1982) use a fourfold typology for
evaluation objectives, as follows:
1. Reaction
2. Achievement
3. Performance/application
4. Organizational results
The objective of the reaction evaluation is to assess
participants' reactions to, and remarks about, the
training experience. Information is received about
instruction, content, methods, media, and program
organization. As with Kirkpatrick (1959), acknowledg-
ment is made that this does not include an assessment of
achievement of learning objectives.
Achievement evaluation is an attempt to determine
trainee skill or knowledge gains, reflecting the
accomplishment of instructional objectives. This gauges
the degree to which concepts, principles, facts, skills,
and/or attitudes are acquired by participants from the
training. Performance/application has the objective of


determining changes in individual performance on the job,
resulting from training. Measurements relate to exami-
nation of changes in individual employee job behavior,
or practices attributable to training.
As discussed with Hamblin (1974) and Kirkpatrick
(1959), Searcy et al. (1982) state that organizational
results determine the impact of training on an organi-
zation or the work environment. They also acknowledge
the difficulty of segregating the variables to determine
how much organizational improvement is due to training
as opposed to other factors. "However, inferences can
be made about the effect of training on organizational
improvement which substantiate the effectiveness of
training" (Searcy et al., 1982, p. 3).
Bakken and Bernstein
The evaluation model (Figure 2) of Bakken and
Bernstein (1982) is based on evaluation objectives
similar to those of Hamblin (1974).
We have modified these categories and identi-
fied four general goals of training: personal
growth, knowledge acquisition, skill acquisition
or performance improvement and organizational
development and improvement in the "bottom line."
The importance of these distinctions lies in
determining where to look for evidence that the
objective has been achieved. (Bakken & Bernstein,
1982, p. 45)


[Figure 2 depicts four general goals of training as the
branches of the SET approach: personal growth, acquire
knowledge, improve performance, and improve organization.]

Figure 2. The Systematic Evaluation of Training (SET)
Approach of Bakken and Bernstein (1982, p. 45).


The first step identified by Bakken and Bernstein
(1982) in their Systematic Evaluation of Training (SET)
model is the identification of
the decision makers who need or desire information
about the effectiveness of training, and by clari-
fying the goals of training, the trainer or
designer who employs this approach can easily
determine which observable outcomes will meet the
needs of decision makers and show whether the
objectives of training were achieved. (Bakken
& Bernstein, 1982, p. 44)
Bakken and Bernstein (1982) relate their evaluation
objectives of personal growth, knowledge acquisition,
improved individual performance, and improved organiza-
tional performance to both Hamblin (1974) and Kirkpatrick
(1959). Hamblin's (1974) categories were revised, with
the importance of the distinctions being in determining
where to look for the evidence that the objective has
been achieved. Similar to Kirkpatrick (1959), they can
examine learner reactions, test for knowledge or perfor-
mance, measure on-the-job performance, and look at
organizational characteristics and the "bottom line"
measures (Bakken & Bernstein, 1982).
Bakken and Bernstein discuss the difficulty in
measuring the observable outcomes, but feel that the
SET approach provides the key in selecting the outcome
that is appropriate to the particular objective of
training, as well as the needs of various decision
makers (Bakken & Bernstein, 1982). The greatest


evaluation need is in determining "what to measure and
how to measure it." What to measure could be answered
by the application of the SET approach. In response to
the question of "how to measure,"
Quantification of improvement in job performance,
for example, can be as simple as counting the
number of times the target behavior occurs or
the number of errors on a performance test, or as
involved as asking supervisors to estimate the
magnitude of such dimensions as "quality of per-
formance" and "employee motivation." (Bakken &
Bernstein, 1982, p. 46)
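The two quantification routes mentioned in this passage,
counting occurrences and estimating magnitudes, can be
sketched briefly. The data below are invented for
illustration and are not Bakken and Bernstein's:

    # Route 1: count occurrences of a target behavior in
    # a set of recorded observations.
    observations = ["delegates", "criticizes", "delegates", "delegates"]
    target_count = observations.count("delegates")

    # Route 2: average supervisor estimates of a dimension
    # such as "quality of performance" on a 1-to-5 scale.
    supervisor_ratings = [4, 5, 3, 4]
    average_rating = sum(supervisor_ratings) / len(supervisor_ratings)

    print(target_count)    # 3 occurrences of the target behavior
    print(average_rating)  # 4.0 mean rating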
Bauman and Ott
Bauman and Ott (1985) developed a hierarchy of
levels of changes used in evaluating program impact on
the participants and the organizations. The outcome
levels, listed in descending order of generality or
conceptual importance were:
1. Improved agency performance--improved agency
planning, decision making, internal or external communi-
cations, better coordination with other agencies,
2. Improved individual job performance--utiliza-
tion of new management skills, more energy devoted to
the job,
3. Increased willingness to utilize skills or
knowledge--improved attitude toward the job, the agency
and/or public sector service,
4. Possession of new skills--new management skills
were acquired but may not have been applied on the job,
5. Possession of new knowledge--how and when to
use strategic planning techniques, manage conflict, and
6. Expanded awareness--new sensitivity to oppor-
tunities, management techniques (Bauman & Ott, 1985,
p. 22). A rough ordinal coding of these levels is
sketched below.
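The 1-to-6 coding in this sketch is this writer's
illustration (least to most general), not Bauman and
Ott's instrument; a participant is scored at the highest
level evidenced:

    # Bauman and Ott's (1985) outcome levels, coded here from
    # least (1) to most (6) general.
    OUTCOME_LEVELS = {
        1: "expanded awareness",
        2: "possession of new knowledge",
        3: "possession of new skills",
        4: "increased willingness to utilize skills or knowledge",
        5: "improved individual job performance",
        6: "improved agency performance",
    }

    def highest_level(evidence_codes):
        """Return the highest outcome level supported by the evidence."""
        return max(evidence_codes) if evidence_codes else 0

    participant_evidence = [2, 3, 5]  # levels documented for one alumnus
    print(OUTCOME_LEVELS[highest_level(participant_evidence)])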
These were considered in the determination of the
outcome variables relating to individual and organiza-
tional changes in this dissertation.
Both quantitative and qualitative data were obtained
by Bauman and Ott (1985) from alumni, faculty, and admin-
istrators of this and other management development
programs across the country. The evaluation research
design, based on the evaluation objective of results or
outcomes, was pre-experimental, using both process and
impact or outcome evaluation methodologies. This
research design, as well as the application of the
theory of a hierarchy of change (Suchman, 1967) were
significant to this research project.
In summary, there is a similarity in the
typology of evaluation objectives and the techniques for
the evaluation of the management training. Three points
are significant:
1. The evaluation objectives for management train-
ing can be described in five levels:
a. Reactions,
b. Learning or Achievement or Knowledge,


c. Behavior or Performance/Application,
d. Results or Organization or Organizational
Change, and
e. Ultimate Value.
2. The techniques of evaluation become more
complex, moving from the first or lowest level to the
highest one.
3. Techniques related to the evaluation of the
reactions of the participants are the most popular, used
most often, and least costly.
Evaluation Methods
Evaluation models have been designed in relation to
the previously mentioned evaluation objectives. Two
methods measuring results--the Participant Action Plan
Approach (PAPA) and the contingency approach of Clement
and Aranda (1982)--will be discussed as pertinent to the
dissertation.
Participant Action Plan
Approach (PAPA)
In evaluating management training based on results,
practical problems may limit the evaluation methodology.
The timing of the evaluation may limit or prohibit the
collection of pre-course data, or the nature of the
training or the job limits the collection of work samples
or observation of the work-in-progress to measure the


effects of training (Salinger & Deming, 1982). PAPA was
designed to be used in these situations.
The concept of work-related classroom assignments
and evaluation of the resulting changes in the work
environment was originated by Professor James Mosel of
the George Washington University (Salinger & Deming,
1982). The method uses a "modified critical incident"
approach, and is based on a self-change agenda technique.
Since the modified critical incident approach
does not require pre-course measures and can be
applied after completion of training, it is par-
ticularly practical for occasions when the evalu-
ator has had little time or opportunity to con-
struct and implement a more elaborate evaluation
design. (Salinger & Deming, 1982, p. 23)
Ruth Salinger, employee development specialist with
the U.S. Office of Personnel Management, has continued
and validated research based on Mosel's ideas of work-
related classroom assignments and evaluations in the
work environment. The result is the Participant Action
Plan Approach (PAPA). PAPA is being used nationally in
federal and state government agencies. The evaluation
process can include action plans for behavior changes on
the job which were designed by the participants during
the management training course, with the participants
being contacted after the end of training. The follow-
up evaluations can be by interviews or mail-out question-
naires, for the purpose of obtaining data as to the


status and impact of the action plan. The time frame for
the follow-up should be from 1 to 6 months after the end
of the program.
In the interviews, specific examples of change are
described by the participant.
For instance, because you are relying on the
participants' self-reports, it is essential that
you ask for specific examples of change (hence
the relation to the "critical incident" approach)
and that you feel reasonably confident that the
change was a result of the person's training.
(Salinger & Deming, 1982, p. 22)
It must be noted, though, that this would be based on the
perceptions of the individual, but could be an indicator
in measuring the results of the training.
The data produced by PAPA provides answers to ques-
tions such as:
-What happened on the job as a result of the
training?
-Are changes that occurred the ones intended by
those providing the course?
-What may have interfered with participants' trying
to use on the job what they learned in the training?
The five steps outlined for evaluation using PAPA
are:
1. Planning--what is to be measured, questions to
be answered, and the specific design to be used,


2. Conduct of in-course activities--introduction
of action plan idea at beginning of the course, and
writing an action plan or project at the end of the
course,
3. Follow-up activities--contact learners by
interviews or questionnaires as to results,
4. Analysis and conclusions--sort, categorize, and
display data to show the extent and type of change, and
5. Report--presenting written analysis, conclu-
sions, and recommendations, using the best format for
the situation (Deming, 1982). Step 4, in particular,
lends itself to a simple tabulation, as sketched below.
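The field names and status categories in this sketch are
invented for illustration and are not part of PAPA itself:

    from collections import Counter

    # Hypothetical PAPA follow-up records: one per action-plan
    # item, categorized by whether the planned change occurred.
    follow_ups = [
        {"plan": "hold weekly staff meetings", "status": "implemented"},
        {"plan": "delegate budget reviews", "status": "partially implemented"},
        {"plan": "revise filing procedure", "status": "not implemented"},
    ]

    # Step 4 of PAPA: sort, categorize, and display the data to
    # show the extent and type of change.
    extent_of_change = Counter(record["status"] for record in follow_ups)
    for status, count in extent_of_change.items():
        print(status, count)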
One example of the successful application of PAPA
is with Fort Belvoir Engineering Center, Department of
the Army. A managerial communications course was taught
for the directorate managers. Three major objectives of
the evaluation were:
1. To assess on-the-job changes in participant
behavior which resulted from the training.
2. To assess the effect of the training on the two
directorates.
3. To identify problems encountered when partic-
ipants attempted behavior changes and problems or
concerns of individuals who were directly or indirectly
affected by the participants' behavior changes (U.S.
Office of Personnel Management [U.S.OPM], 1979).


The data collected through PAPA presented a clear
picture of improved communications practices among the
course participants, as well as the resolution of
work problems. The course was confirmed as an effective
means for initiating improvement of communications
behavior. PAPA presents an evaluation technique for
actively involving the participant in devising an action
plan based on course knowledge and skills, for on-the-
job results. The follow-up is completed from 1 to 6
months after the course.
Clement and Aranda
Clement and Aranda (1982) presented a similar
method, a contingency approach to evaluating
management training, as shown in Table 2. They identi-
fied three variables on which to place the emphasis in
measuring whether or not changes resulted from the
management training.
These variables include the organizational
setting within which the manager attempts to use
the training, the unique characteristics of the
manager to be trained and the nature of the organi-
zational problem to be solved by the training.
(Clement & Aranda, 1982, p. 39)
The influence of the organizational setting was
based on the leadership training research of Robert House.
Three organizational factors were identified that
affected the transfer of training to the job. They were


Table 2

Contingency Framework for Management Training Evaluation

Dimension #1, Training Results:
  The Manager: What is the person doing differently?
  The Subordinate: How has the subordinate's job improved?
  The Organization: What is the impact on overall job
  performance?

Dimension #2, Relative Effectiveness of Technique:
  The Manager: Was new behavior thoroughly learned?
  The Subordinate: Is trainee able to translate
  information to subordinate needs?
  The Organization: Is trainee able to translate
  information to organization needs?

Dimension #3, Impact of Individual:
  The Manager: Was trainee willing/able to respond to the
  training?
  The Subordinate: Is trainee able to adjust new behavior
  to subordinate differences?
  The Organization: Is trainee able to adjust new behavior
  to organization norms?

Dimension #4, Impact of Environment Differences:
  The Manager: Can trainee apply new behavior to his or
  her job?
  The Subordinate: Does new behavior mesh with work group
  style?
  The Organization: Is there a reward for practicing new
  behavior?

Note. From "Evaluating Management Training: A Contingency
Approach" by R. W. Clement and E. K. Aranda, 1982, Training and
Development Journal, p. 42.


the formal authority system within the organization,
the immediate superior of the trainee, and the primary
work group of the trainee (Clement & Aranda, 1982).
Questions related to these areas were included in this
research questionnaire (see Appendix A).
The second contingency variable--the unique charac-
teristics of the manager to be trained--refers to the
immediate superior's right to administer rewards and
punishments. "In other words, if the superior encour-
ages the trainee to apply principles taught in a training
course, the training is more likely to transfer to the
job setting" (Clement & Aranda, 1982, p. 39). The
variables of the prevailing managerial styles within an
organization, as well as the management support obtained
in implementing MIPs were included in the questionnaire.
The primary work group of the trainee, the third
contingency variable, refers to the expectations, rela-
tionships, and influences with the peer group. These
contingency variables relate to the influence of the
organizational setting and climate on the use of manage-
ment training by the trainee. The four-dimension
contingency format of Clement and Aranda offers a method
for the evaluator of a management training program to
assess program results in an organizationally relevant
way, as well as providing a method for the trainee and


the organization to jointly determine the effectiveness
of the management training (Clement & Aranda, 1982).
In summary, this research project was based on
evaluation objectives related to outcomes, including
individual and organizational changes resulting from
management training. This is similar to related evalu-
ation objectives identified by Kirkpatrick (1959),
Hamblin (1974), Searcy et al. (1982), and Bakken and
Bernstein (1982). The implementation of work-related
assignments was used to measure the results of manage-
ment training.
Evaluation Tools
Evaluation tools are selected in relation to the
evaluation objectives and the overall evaluation design.
Kirkpatrick (1983), Hamblin (1979), and Deming (1982)
describe evaluation tools in a similar manner as Searcy
et al. (1982), who placed the tools into the following
major groups:
Tests--includes both written examinations and
performance tests, such as comment sheets, paper and
pencil tests, and simulations,
Observations--includes all visibly noticeable
aspects of training results,
Interrogative instruments--such as questionnaires,
individual interviews, and group interviews,


Participatory instruments--all self-assessment and
participant contract devices, and
Organizational analysis--all measurements of an
organization's vital documents and statistics.
The purpose of the written test is to gauge the
level of knowledge imparted to participants during train-
ing. The performance tests require a demonstration of
learning or performance change under simulated or actual
conditions. Advantages of written tests are relatively
low administration cost, easy and quick scoring, easy
administration, and the possibility of a wide sam-
pling. The disadvantages are a possibly low degree of
relation to job performance, a highly prescriptive
nature, and potential cultural bias.
Advantages of the performance test are reliability
and relevance, and the disadvantages are high develop-
mental costs, time consumption, potential difficulty in
construction, and the participants feeling threatened
(Searcy et al., 1982). These factors would be considered
in choosing tests, whether written or performance.
The observation tests can include unobtrusive
observations, which can help to determine participant
reactions. However, the data tend to be affective,
uncategorized, and based on inferential physical indica-
tions. Observation includes organized surveillance of


the training participants, and can be done by a human
observer. Structured surveillance would include a
specific observation with a particular goal or objective
in mind.
Advantages of this tool are the low cost, immediate
feedback, and a process which appears non-threatening
to the participant, if the observation method is con-
cealed or relatively unobtrusive. The disadvantages are
the subjectivity or misinterpretation of the observa-
tions, low reliability and the threatening aspect to the
participant. If recording equipment is used, the expense
increases, with concealed observations raising issues of
privacy.
Examples of interrogative instruments are the inter-
view, oral sessions, group interviews, and question-
naires. The interview has the advantages of flexibility,
in-depth penetration, opportunity for clarification, and
relatively low costs of materials. Disadvantages include
time consumption, personal contact that can be
threatening, and responses that are potentially highly
reactive and subjective.
The oral sessions or group interviews require skill
in the ability to deal with more than one participant,
and to facilitate the group interview. Usually, responses
are subjective, affective, and anecdotal. Advantages of


this technique include immediacy, directness and low
cost, with the disadvantage of the leader or a single
member being able to influence other group members. The
questionnaire has advantages of low cost, respondent sets
the pace, responses increase if anonymity is guaranteed,
and it allows for both open-ended and fixed-choice
responses.
The participatory tools of participant contract and
self-assessment both require the cooperation and commit-
ment of the participant. Searcy et al. (1982) indicate
that this tool involves agreement between the trainee
and the training staff (or among the training partic-
ipants themselves) stating specific training-related
knowledge, skills, and/or attitudes, and a schedule or
program outlining implementation. Some of the advantages
include the participant being directed and paced, with
the potential enhancement of motivation and on-the-job
reinforcement.
Disadvantages of using participatory tools include
the potential resentment of training staff monitoring
and follow-up, and potential organizational constraints
to implementation. The self-assessment tools, however,
are useful to participants and are non-threatening. To
be successful, though, it does require self-analytical
individuals. The information obtained in this manner
is totally subjective.


The grouping of organizational analyses usually
refers to any analysis of the existing records or docu-
ments of the organization which may measure organiza-
tional changes due to training. The documents, such
as program budgets, production schedules, or perfor-
mance appraisals could indicate performance changes or
learning application on an individual basis or on a
work unit basis. The advantages are objectivity,
reliability, job-relatedness, and pre- and post-training
comparability. The disadvantages are the possible subjec-
tivity of the preparer or translator, and the need for
conversion of the information to a training-specific,
usable form.
To meet the evaluation objectives previously
discussed, appropriate tools need to be chosen. In
Table 3, the evaluation objectives have been
reduced to four, combining phases four and five of
Hamblin (1974). For each objective, tools identified
by Kirkpatrick (1983), Hamblin (1974), Searcy et al.
(1982), Bauman and Ott (1985), and Salinger and Deming
(1982) have been compiled.
The various tools may be used in any appropriate
combination. The purposes for the four objectives
are:


Table 3

Evaluation Tools to Measure Evaluation Objectives

                         Objective 1  Objective 2  Objective 3  Objective 4
Evaluation Tools         Reactions    Learning     Behavior     Results

Tests:
  Written                             X
  Performance                         X            X
  "Happiness Sheets"     X

Observations:
  Non-verbal             X
  Surveillance                        X            X            X

Interrogative:
  Interviews             X                         X            X
  Oral Sessions          X
  Questionnaires         X                         X            X

Participatory:
  Participant Contract                             X            X
  Self-assessment                                  X

Org. Analyses:
  Documents                                        X            X


Objective 1: To assess participants' reactions to the
training experience.
Objective 2: To determine the skill or knowledge gains
of the trainee.
Objective 3: To determine changes in the individual's
performance on the job as a result of the training.
Objective 4: To determine the impact of the training on
an organization or the work environment.
Pattan (1983) suggests evaluating management train-
ing by the use of tools closely related to those used in
technical training. His premise is that management
training has been plagued with difficulties in effec-
tively transferring classroom learning to on-the-job
performance.
That is to say, up to now there doesn't seem
to have been a practical method for evaluating the
effectiveness and establishing the value of manage-
ment training for an organization. The reasons for
this include problems associated with its object of
study and its nature, the lack of strict trainee
accountability for learning and then implementing
managerial skills on the job, and the consequent
lack of employee motivation to take management
training seriously. (Pattan, 1983, p. 34)
Pattan's (1983) management training and evaluation
model addresses the specific problems of a lack of


trainee accountability in the classroom and on the job,
and inadequate motivation to regard management training
and managerial skills as advancing the trainees' self-
interest. His suggested solutions modify the traditional
management training model to resemble certain features
of technical training (Pattan, 1983). The three features
of this model are:
Outward and practical orientation: emphasis on externally
observable behavior modification. Corresponds to
Observable Run Time (ORT) of technical training.

Trainee accountability in the classroom: grades issued
for management training participants, and these report
cards sent to trainees' managers to be placed in
personnel records.

Monitoring and accountability on the job: to justify the
investment of money and time, continuous monitoring and
evaluating of progress from training to the job.
In the selection of tools for evaluating management
training, he agrees with the four objectives of


Kirkpatrick's (1959) model, but feels that the tools
recommended by Kirkpatrick and others for the evalua-
tion of the management training do not go far enough.
Pattan (1983) believes that
If we can show the value of management training
for the employees and the company and institute
a practical way of evaluating its results, we
can rescue management training from such an
unfavorable comparison with technical training.
(Pattan, 1983, p. 44)
The changes he recommends in the customary model of
management training are:
(1) emphasizing a practical behavior modifica-
tion orientation; (2) grading trainees; and
(3) using the annual performance appraisal to
measure the success by which trainees transfer
their management skills from the classroom to
on-the-job performance. (Pattan, 1983, p. 45)
Similar to PAPA, the emphasis is on evaluation after the
training has been completed, with the measures relating
to work performance.
Therefore, the tools he would use in this evaluation
methodology would include examinations, performance-
anchored appraisal systems, and promotions and pay
increases tied to the practice of management skills.
These tools, while related to work performance, would
be tied into management training, putting teeth into
the monitoring and enforcement of learning outcomes in
job performance. This does not contradict what is currently


considered as training-evaluation techniques, but was
designed to build on them.
Additional evaluation tools were described in other
literature. The annual publications of University
Associates, for example, contain tools for evaluating
management training programs. Burns and Gragg (1981)
developed scales constructed for various types of
measurement situations, based on the Likert Scale (Hersey
& Blanchard, 1982). "These are intended to be not only
immediately useful but also instructive about how to
create other similar scales" (Burns & Gragg, 1981, p. 87).
The scales and their possible uses are included in
Figure 3.
These instruments, according to Burns and Gragg
(1981), can be considered valid if they serve to focus
attention on processes that can be managed toward more
effectiveness. Of particular interest in this research
project is the Work-Group Effectiveness Inventory. The
use of the Likert Scale, as well as the content of the
questions, is similar to the questionnaire used in the
interviews. The individual is asked how he or she works
within the work group in the areas of communications,
leadership, decisions, coordination, planning, respon-
siveness, control and influence, motivation and conflict
management (University Associates, 1981).
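Scoring such a Likert-based inventory is straightforward:
each area receives a rating and the ratings are
summarized. The scoring rule below is an assumption for
illustration, not Burns and Gragg's published procedure:

    # One respondent's 1-to-5 Likert ratings across the areas
    # named in the Work-Group Effectiveness Inventory.
    ratings = {
        "communications": 4, "leadership": 3, "decisions": 4,
        "coordination": 3, "planning": 5, "responsiveness": 4,
        "control and influence": 3, "motivation": 4,
        "conflict management": 2,
    }

    overall = sum(ratings.values()) / len(ratings)
    weakest = min(ratings, key=ratings.get)
    print(round(overall, 2))  # mean rating across all areas
    print(weakest)            # area most in need of attention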


Meeting-Evaluation Scale: evaluation of meetings;
critiquing of process during meetings; action research
on meeting quality; meeting planning.

Work-Group-Effectiveness Inventory: team building;
organization survey; team self-assessment.

Organizational-Process Survey: organization survey; team
building with executives; management development.

Learning-Group Process Scale: group self-assessment;
clarification of expectations; comparative study of
groups.

Figure 3. Evaluation Uses for the Diagnostic Instru-
ments of Burns and Gragg (from "Brief Diagnostic Instru-
ments" by F. Burns and R. L. Gragg, 1981, The 1981
Annual Handbook for Group Facilitators, San Diego, CA:
University Associates, p. 87).


In the 1985 Annual of the University Associates, an
instrument was included which was developed by Goodstein
(1985). This tool was based on the theories of Peters
and Waterman (1982). Eight basic characteristics were
identified for achieving excellence as a result of the
research of Peters and Waterman. They were:
1. A bias for action--a preference for doing some-
thing;
2. Staying close to the customer--learning his
preferences and catering to them;
3. Autonomy and entrepreneurship--encouragement
to think independently and competitively;
4. Productivity through people--creating a desire
for employees to do their best and share in the rewards
of the company's success;
5. Hands-on, value driven--executives keep in
touch with the firm's essential business;
6. Stick to the knitting--remaining with the
business the company knows best;
7. Simple form, lean staff--few administrative
layers;
8. Simultaneous loose-tight properties--fostering
a climate with dedication to central values of the
company (Goodstein, 1985).


Goodstein (1985), in developing his evaluation tool,
had a goal of helping managers within an organization to
identify their organization's degree of excellence, as
well as to enhance teamwork of management groups.
Organizational climate is important if training is to
result in organizational change. Some of the factors in this
technique were included in the instrument used in this
research project.
Evaluation Strategies
Some of the literature reviewed dealt with the
importance of the selection of evaluation strategies.
Zemke and Gunkler (1985), Kelley, Orgel, and Baer (1985),
Deming (1982), and Salinger and Deming (1982) identified
strategies useful in the evaluation of management train-
ing programs; these are discussed in this section.
Zemke and Gunkler (1985) identified five strategies
that enhance the transfer of knowledge to job perfor-
mance (see Figure 4). The pretraining strategies are
the activities that take place before the trainee enters
the training program.
These are the things you can do to or for
trainees, before they begin the training to help
ensure they get the most out of the training and
that the skills and knowledge they acquire in the
training actually will have a chance of being
supported back on the job. (Zemke & Gunkler,
1985, p. 50)


[Figure 4, titled "Skill Transfer from Classroom to the
Job," diagrams the strategy groups described in the text
below.]

Figure 4. Management Training Strategies to Increase the
Odds that Skills, Knowledge, and Behavior Acquired in
Training Will be Used Back on the Job (from "28 Techniques
for Transforming Training Into Performance" by R. Zemke
and J. Gunkler, 1985, Training).


The good training strategies are the principles
of good practice, or the basics, that assist in making
the management training and evaluation of the manage-
ment training effective. However, the analysis, design
and development phases alone will not solve the problem
of learning transfer. "'Good training' safeguards alone
will not guarantee transfer from a training room to
playing field" (Zemke & Gunkler, 1985, p. 52).
Transfer-enhancers relate to "those procedures and
strategies that are included in programs not for the
purpose of improving immediate end-of-training results
. . . but rather to improve later, on-the-job results"
(Zemke & Gunkler, 1985, p. 54). The use of simulation
and the Participant Action Planning Approach (PAPA) are
mentioned as applicable techniques. Some of the benefits
of the PAPA approach are that it encourages transfer of
learning, provides for practicing the skill, gains
commitment to action, provides an opportunity for rein-
forcement, and provides an opportunity for evaluation
(Zemke & Gunkler, 1985). These comments are similar to
those of Salinger and Deming (1982), discussed
previously.
Post-training strategies would include any effort
to defeat the poor transfer of knowledge, skills and
abilities from the classroom to the work situation.


Examples would include refresher courses, beefed-up
feedback or special attention from the returning
trainee's supervisor (Zemke & Gunkler, 1985).
Finessing strategies relate to finessing the problem
by not expecting the skills to be learned sufficiently--
or at all--away from the job, but to bring the training
as close to the job as possible. One of the strategies
is the interim-project approach. This is similar to, but
different from, PAPA. This process includes the original
training workshop, an on-the-job application project,
and a summary report workshop. The difference from PAPA
is that the project is being completed between the two
training sessions. This multi-phase training provides
more structure for the work-related project assignment.
Kelley et al. (1985) have devised a list of seven
strategies that guarantee training transfer. They are
the result of research done by Stokes and Baer in
reviewing 270 studies that measure transfer of training.
They wanted to see what procedures actually
accomplish transfer, and found that they could
group them into seven categories. Drawing from
their conclusions, we have uncovered seven
strategies that program developers and trainers
can use to produce more rapid acquisition, reten-
tion and transfer of work skills, and provided
evidence from applied research demonstrating the
effectiveness of these strategiesparticularly
some long underused work by Goldstein and Sorcher.
(Kelley et al., 1985, p. 78).


The seven strategies that Kelley et al. (1985)
developed are:
1. Discover basic skills and concepts;
2. Analyze, define, and field test;
3. Produce and verify mastery;
4. Teach basic skills and general principles;
5. Teach trainees both correct and incorrect
examples;
6. Follow up; and
7. Make follow-up training relevant.
Kelley et al. (1985) stress the importance of post-
training strategies, such as practice and follow-up, in
guaranteeing the transfer of training to the job, thereby
protecting the bottom line and documenting any
cost benefits.
Trainers may be concerned about gaining trainees'
compliance with on-the-job assignments. But train-
ees who perform job-related assignments are rein-
forced by doing so because they eliminate their
problem, receive positive social reinforcement from
their trainer and other trainees, and their job is
made easier. (Kelley et al., 1985, p. 82)
Reinforcement of the newly acquired skills also comes
from documenting the success of the work-related assign-
ments, and the interaction between the trainee and the
supervisor.
Deming (1982) describes an evaluation strategy of
identifying tasks and their enabling skills. According
to Deming,


Assessment of the transfer of learning from train-
ing to actual performance is the exception rather
than the rule because it is not firmly established
as a necessary component of organizational and
human resource development. (Deming, 1982, p. 65)
The evaluating tasks are:
1. Assess the appropriateness and adequacy of
instructional objectives;
2. Assess the appropriateness and adequacy of
test instruments, test situations and the use of test
results for training;
3. Assess the appropriateness and adequacy of
instructional strategies, both as planned and as carried
out in the actual events of instruction;
4. Assess the degree to which learning is trans-
ferred from training to actual performance situations;
5. Assess the endurance of learned performance
over time;
6. Assess the costs and benefits associated with
training;
7. Prioritize organizational training efforts in
terms of their relative value to the organization; and
8. Identify training as proactive or reactive.
Task #4 stated above is pertinent to this research
project. The enabling skills required for Task #4 are
the ability to match follow-up assessment methods with
different types of performance, and the ability to select


or design criteria formats to assist in the assessment
process. To accomplish this, the techniques, as recom-
mended by Deming, are:
Direct observation
Third-party observation
Assessment centers
Study of work results
Learner self-reporting (Deming, 1982).
In describing learner self-reporting, limitations
include the fact that one depends upon the learner's
own testimony for an assessment of learning, raising a
question of objectivity. Lack of ability to observe
actual performance presents a problem. However, Deming
does present a technique to bring a reasonable measure
of objectivity to self-reporting, using it as a means
for assessing the transfer of learning to the job. He
states:
The most effective way to generate accurate
and practical self-reporting on learning transfer
is by making the learner a partner in the formu-
lation of an action plan which he or she carries
back to the job and implements to the extent he
or she can. This approach has been taught as a
"self-change agenda technique" by James N. Mosel
of George Washington University and has been
incorporated into a more fully developed system
at the U.S. Office of Personnel Management.
(Deming, 1982, p. 70)
The last strategies for the evaluation of manage-
ment training discussed in this chapter were designed by


Salinger and Deming (1982). These are similar to those
devised by Deming alone, but are organized around the
evaluation question or, one could say, the objective.
Figure 5, of particular importance to this dissertation
is the second evaluation question related to learning
transferred to the job. The strategies suggested include
the modified critical incident method, also known as
PAPA, over-the-shoulder evaluation and performance
analysis.
Over-the-shoulder evaluation uses a combination of
evaluation during the course and also post-course
measures on the job. This can be applicable to courses
that are given frequently, drawing on participants
currently in the course or in previous courses.
Cautions in using this strategy include making sure the
sessions are the same or highly similar, and comparing
the trainee characteristics for the sessions being
compared.
"Performance analysis is a method used to determine
the causes for employee performance not meeting expected
levels and to suggest solutions for increasing or chang-
ing performance" (Salinger & Deming, 1982, p. 24). This
approach helps the trainer to determine the training
needs as a possible solution to the problem, and to be
able to give the training when needed and when it will


1. To what degree does the training produce appropriate
learning? -- Delayed-treatment control group.
2. To what degree is learning transferred to the job? --
Modified critical incident method; over-the-shoulder
evaluation; performance analysis.
3. To what degree is the knowledge or skill level
maintained over time? -- Time-series evaluation.
4. Does the value of participants' improved performance
meet or exceed the cost of training? -- Cost-benefit
analysis.

Figure 5. Evaluation Strategies Identified by Salinger
and Deming (from "Practical Strategies for Evaluating
Training" by R. D. Salinger and B. S. Deming, 1982,
Training and Development Journal, p. 21).


have the biggest impact. As in all of the training
strategies, the appropriateness of any strategy depends
on the characteristics of the training course, evalu-
ation resources and constraints (Salinger & Deming,
1982).
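Figure 5 can be read as a simple selection rule: given
the evaluation question asked, certain strategies apply.
A minimal sketch follows; the dictionary restates
Figure 5, and the short keys are this writer's
abbreviations:

    # Salinger and Deming's (1982) question-to-strategy pairings.
    STRATEGIES = {
        "appropriate learning": ["delayed-treatment control group"],
        "transfer to the job": ["modified critical incident method (PAPA)",
                                "over-the-shoulder evaluation",
                                "performance analysis"],
        "maintained over time": ["time-series evaluation"],
        "value versus cost": ["cost-benefit analysis"],
    }

    # This dissertation asks the second question: transfer to the job.
    for strategy in STRATEGIES["transfer to the job"]:
        print(strategy)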
The evaluation strategies of these and other
authors seem to emphasize the importance of determin-
ing the results or outcomes of the management training.
There is a similarity in the strategies including the
period before the training, during the training, and,
most important, after the training. In the evaluation
strategy used after the training, some of the tools
previously discussed in this chapter are identified as
appropriate. The strategy used in this research
project applies some of the techniques identified by
these authors after the training has been completed.
In summarizing, this review of literature has
included evaluation objectives, methods, tools, and
strategies. The importance of the literature to this
dissertation is:
1. The research is based on the evaluation objec-
tive of measuring the results of the management training,
and whether there was a transfer of knowledge, skills,
or abilities to the job;


2. The evaluation tools recommended for measuring
results or outcomes of management training include
interviews based on a structured questionnaire;
3. The evaluation strategy is follow-up in the months
after the completion of the training, using transfer-
enhancing strategies to assess the degree to which
learning is transferred from training to actual
performance situations.
This literature gives credibility to the approach
of using work-related projects as a measure of the
transfer of knowledge and skills from the classroom to
the work environment.


CHAPTER III
STATEMENT OF METHOD
This dissertation developed from interests in the
outcomes or results of training for public managers.
Due to the previously mentioned difficulties in evalu-
ation research in the training field, creating a
methodology and applying it on a case-study basis was
the intent of this study.
According to Laird (1985):
The ultimate impact of a training and develop-
ment program can only be determined by measurement
which occurs at some time after the training
itself. Such measurement needs to get at the
individual and the organization. With the indi-
vidual the concern is the perseverance of the
new behavior; with the organization, the focus
is the impact of the accumulated behaviors upon
the operation. (p. 257)
The use of Management Improvement Projects (MIPs),
designed during MCP3 and related to course content,
provided a method for measuring training impact after
the end of the course. MIPs consisted of work-related
problems identified by the participants, with a plan to
be implemented to correct the problem. Due to the
addition of MIPs to MCP3, these participants were
chosen for this research project.
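Each MIP can be thought of as a small record linking a
work-related problem to a plan and to the changes later
observed. The record below is hypothetical; its field
names are this writer's, not the MCP's:

    # Hypothetical record for one Management Improvement Project.
    mip = {
        "participant": "MCP3 graduate",
        "problem": "backlog in processing applications",
        "plan": "redesign intake procedure using course skills",
        "individual_change": True,      # did the participant's behavior change?
        "organizational_change": True,  # did the work unit's functioning change?
    }

    # The research question in miniature: was there a difference
    # in either the individual or the organization?
    print(mip["individual_change"] or mip["organizational_change"])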


The purpose of this research is exploration. Accord-
ing to Babbie (1975), "There are probably as many differ-
ent reasons for conducting social scientific research as
there are research projects" (p. 49). Babbie describes
exploratory research as an attempt to develop an initial,
rough understanding of some phenomenon. A volume of
research is done, mainly for the purpose of exploring a
topic to familiarize the researcher and any subsequent
audiences with it.
Three purposes of exploratory studies are:
1. Simply to satisfy the researcher's curiosity
and desire for better understanding,
2. To test the feasibility of undertaking a more
careful study, and
3. To develop the methods to be employed in a
more careful study (Babbie, 1975).
All three are applicable in this research project.
Though exploratory studies are valuable in social
scientific research, especially when breaking new ground
and providing new insights into topics for research,
there are limitations. Babbie (1975) states:
The chief shortcoming of exploratory studies is
that they seldom provide satisfactory answers to
research questions. They can hint at the answers,
and can provide insights into the research methods
that could provide definitive answers. (p. 50)
One main issue in exploratory research is representa-
tiveness in connection with the sample. This issue is


diminished in this research, as the total population of
MCP3 was used. Babbie (1975) describes the representa-
tiveness as "That quality of a sample having the same
distribution of characteristics as the population from
which it was selected" (p. 499). Care should be taken,
though, in assuming that this population is representa-
tive of other similar populations.
This chapter includes (a) the research question,
(b) a description of the Management Certificate Program,
(c) the evaluation model developed, and (d) the research
methodology.
Research Question
The major research question addressed in this
dissertation is: To what extent, if any, did the require-
ment of MIPs in a management training program have an
impact on individual or organizational changes? The
evaluation model and the research methodology, applied
to the Management Certificate Program, were designed to
focus on this question.
Management Certificate Program
The particular training program that was evaluated
in this dissertation was the third year of the State of
Colorado's Management Certificate Program (MCP3).


Management development became an issue in the 1981
legislative session of the State of Colorado. Previously,
training for public managers was not considered a high
priority. Efforts to improve the State Civil Service
System, including management development, resulted in
Senate Bill 308, which became law on June 1, 1981.
Among other items in the bill were provisions for
(a) a merit pay system based on performance, (b) a
stronger role for the State Department of Personnel in
promulgating personnel policy, and (c) a stronger linkage
between performance appraisal and other personnel actions
(Davies, 1983). In addition, an appropriation of
$150,000 was made to the State Department of Personnel
for the development of a Management Certification
Program, to provide training opportunities to Colorado
personnel system managers and prospective managers
(Davies, 1983). The first group of participants entered
the program in 1982. Aspects of this program, pertinent
to the research, are (a) the purpose, (b) the partic-
ipants, and (c) the course content.
Purpose
The purpose of the MCP was to help managers improve
in their work performance, as well as to provide train-
ing for career development for state managers who wanted


to acquire new knowledge, tools, perspectives, strate-
gies, and skills in order to perform more effectively
in state government. The courses were designed to be
given over a 6-month period, with classes being held on
a weekly basis.
Participants
Since the participants were working full-time, coor-
dination and cooperation were required of the employers,
as well as the employees. Participants in MCP were
managers or supervisors in the state's higher profes-
sional categories, the state paygrade levels of 74 or
above (Davies, 1983). The state paygrade of 74 was the
entry level for professional jobs. The program was and
is designed for people who are currently managing
programs, for supervisors who do not manage programs, or
for those who are in a career development program leading
to supervision or management.
The agencies represented by the participants covered
the range of state government, including highways, health
services, social services, higher education, corrections,
natural resources and revenue, or finances. The educa-
tional and job experiences of the participants varied.
MCP was offered four times since 1982. The number
of participants enrolled in each year of the program, and
the number of participants who graduated are included


in Table 4. Beginning in 1984, participants were
required to submit a final written report on the comple-
tion of a Management Improvement Project (MIP) to receive
a graduation certificate. The percentage of graduates
noticeably dropped at that time.
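The percentages in Table 4 follow directly from the raw
enrollment and graduation counts; recomputing them is a
quick check (Python):

    # Recomputing Table 4's graduation rates from the counts.
    enrollment = {"1982": (213, 180), "1983": (104, 76),
                  "1984": (52, 24), "1985": (34, 9)}

    for year, (enrolled, graduated) in enrollment.items():
        print(year, round(100 * graduated / enrolled), "%")  # 85, 73, 46, 26

    total_enrolled = sum(e for e, _ in enrollment.values())   # 403
    total_graduated = sum(g for _, g in enrollment.values())  # 289
    print("Total", round(100 * total_graduated / total_enrolled), "%")  # 72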
Course Content
In the first year of MCP, a needs assessment was
conducted to elicit the perceived training needs of state
managers. This was the basis for the first year's
program content. The courses included in the curriculum
were:
Policy Development
Federal, State, and Local Government Relations
Management Development
Personnel Development
Public-Private Sector Relations
Organizational Development and Change
Quantitative Methods (Davies, 1983)
In the second year, the course content was the
result of a second needs assessment based on the newly
implemented performance appraisal system of the State of
Colorado. This performance appraisal system was called
the Factor Anchored Performance Appraisal System (FAPAS).
The dimensions of performance included in the system
were:


Table 4

MCP Enrollments and Graduates

The Participants

Year     Enrolled     Graduated
1982     213          180 (85%)
1983     104          76 (73%)
1984     52           24 (46%)
1985     34           9 (26%)
Total    403          289 (72%)


1. Result areas, including
-program management
-human resource management
-fiscal management
-external relations
2. Behavior areas, including
-program analysis, decision making
-planning, organizing, coordinating
-leadership, interpersonal skills, and
communication
-organizational commitment and adaptability
The courses offered in the second year programs were
related to these dimensions of performance.
The course curriculum for MCP3 was based on FAPAS,
and included:
Introduction to Public Management
Program Management
Human Resource Management
Fiscal Management
External Relations
Twenty-four participants completed course work for MCP3
in August 1984. Approximately 50% of their time was
spent in lectures.
In addition, participants were involved in problem-
solving exercises, small group discussions, case studies