Citation
Effectiveness of performance management systems in state agencies

Material Information

Title:
Effectiveness of performance management systems in state agencies: performance measurement, organizational culture and learning
Creator:
Williams, Arley D. ( author )
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
1 electronic file (112 pages).

Subjects

Subjects / Keywords:
Performance management ( lcsh )
State governments ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
Most state governments have implemented performance management systems that incorporate some or all elements of a typical system, specifically strategic planning, performance measures, reporting, budgeting and evaluation. These systems are attempts to enhance government efficiency and effectiveness according to the principles of New Public Management. Despite mixed evidence of success, pressure on practitioners for use of these systems is growing, particularly from the federal government, governance entities, external funding sources and professional associations. By surveying state agency leaders across the nation, this research tests a model of performance measurement effectiveness. This study attempts to advance theory and practice by assessing factors both internal and external to a state agency that may positively impact the managerial effectiveness of a performance measurement system. This study adds knowledge to the literature by testing a middle-range theory of the effectiveness of performance measurement based on a nation-wide survey of leaders of selected state government agencies. While certain variables have direct impact on performance measurement effectiveness, other effects are indirect; this empirical study helps to better understand these mechanisms. The study adds to the body of knowledge in the literature by incorporating organizational culture into management reform in state government and demonstrates that certain elements for success can be common among different types of agencies. Given the traditional rule bound culture of government, a better understanding of the role of innovation, along with the characteristics of innovation, in achieving performance management effectiveness can be useful. The role of organizational learning is explored. The importance of organizational support, technical training and stakeholder participation in state government is reinforced in the results of this study. Sophisticated and creative approaches to management and learning are needed to maximize the effectiveness of these systems; these successes may be most likely to be achieved in the long-run.
Thesis:
Thesis (Ph.D.)--University of Colorado Denver. Public affairs
Bibliography:
Includes bibliographic references.
System Details:
System requirements: Adobe Reader.
General Note:
School of Public Affairs
Statement of Responsibility:
by Arley D. Williams.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
903403263 ( OCLC )
ocn903403263

Full Text
EFFECTIVENESS OF PERFORMANCE MANAGEMENT SYSTEMS
IN STATE AGENCIES: PERFORMANCE MEASUREMENT,
ORGANIZATIONAL CULTURE AND LEARNING
by
ARLEY D. WILLIAMS
B.S., New Mexico State University, 1986
M.S., New Mexico State University, 1987
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Public Affairs
2014


This thesis for the Doctor of Philosophy degree by
Arley D. Williams
has been approved for the
Public Affairs Program
by
Donald Klingner, Dissertation Advisor
Paul Teske, Examination Chair
Mary Guy
Kaifeng Yang


Williams, Arley D. (Ph.D., Public Affairs)
Effectiveness of Performance Management Systems in State Agencies: Performance
Measurement, Organizational Culture and Learning
Thesis directed by Distinguished Professor Donald Klingner
ABSTRACT
Most state governments have implemented performance management systems that
incorporate some or all elements of a typical system, specifically strategic planning,
performance measures, reporting, budgeting and evaluation. These systems are attempts
to enhance government efficiency and effectiveness according to the principles of New
Public Management. Despite mixed evidence of success, pressure on practitioners for
use of these systems is growing, particularly from the federal government, governance
entities, external funding sources and professional associations. By surveying state
agency leaders across the nation, this research tests a model of performance measurement
effectiveness. This study attempts to advance theory and practice by assessing factors
both internal and external to a state agency that may positively impact the managerial
effectiveness of a performance measurement system.
This study adds knowledge to the literature by testing a middle-range theory of
the effectiveness of performance measurement based on a nation-wide survey of leaders
of selected state government agencies. While certain variables have direct impact on
performance measurement effectiveness, other effects are indirect; this empirical study
helps to better understand these mechanisms. The study adds to the body of knowledge
in the literature by incorporating organizational culture into management reform in state
government and demonstrates that certain elements for success can be common among
iii


different types of agencies. Given the traditional rule bound culture of government, a
better understanding of the role of innovation, along with the characteristics of
innovation, in achieving performance management effectiveness can be useful. The role
of organizational learning is explored. The importance of organizational support,
technical training and stakeholder participation in state government is reinforced in the
results of this study. Sophisticated and creative approaches to management and learning
are needed to maximize the effectiveness of these systems; these successes may be most
likely to be achieved in the long-run.
The form and content of this abstract are approved. I recommend its publication.
Approved: Donald Klingner
iv


DEDICATION
I dedicate this work to my late grandmother, Vera.
v


ACKNOWLEDGMENTS
First, I want to thank the University of Colorado, Denver (UCD) community, and
particularly the members of my committee, Paul Teske and Mary Guy along with
Kaifeng Yang of Florida State University, for their willingness to serve on the committee.
The chair, Donald Klingner, has been a constant in this process since we first talked on
the phone about the Seminar in Public Management. My appreciation is extended to the
entire committee for their knowledge and contributions. It was a wonderful opportunity
to learn from the great faculty at the School of Public Affairs. Also at UCD, Alan Davis
kindly provided extensive dissertation guidance and comments and served as a role
model for his teaching style, while Loren Cobb provided early suggestions in this
research process.
Reflecting on these many years of study and research, there are too many people
to thank in this document, but here are a few words to acknowledge certain individuals
and their roles in this journey. My most sincere appreciation to Dick McGinity of the
University of Wyoming (UW) for his encouragement and guidance to finally finish the
dissertation; completion of this research was made possible by what I learned from him.
I am particularly grateful for his taking time to share his perspectives. A special thank
you to Mark Peterson of UW for inviting me to his class and discussing with me his
knowledge of AMOS software and structural equation models. Appreciation also to Bill
Mai, Jared Studyvin, Stephen Bieber and others at UW for their support and suggestions.
The kind words of members of the UW Exec Council are much appreciated. Thank you,
as well, to Richard Callahan of the University of San Francisco.
vi


David Abbey of the New Mexico Legislative Finance Committee (LFC) was
instrumental in my decision to engage in doctoral studies while working full-time. I was
fortunate to work on the state's Accountability in Government Act and Legislating for
Results initiatives. Special thanks to Cathy Fernandez of LFC for encouragement to not
only start the program, but also finish it, as well as appreciation to Sylvia Barela of LFC
for her support.
Survey support and endorsement from national associations were important
factors in successfully administering the survey. Paul Lingenfelter, former President,
State Higher Education Executive Officers, Boulder, Colorado and John Horsley, former
Executive Director, Association of State Highway and Transportation Officials,
Washington, D. C. were helpful in this regard. I want to also thank Ron Regan of the
Association of Fish and Wildlife Agencies (AFWA) in Washington, D. C. for the AFWA
endorsement and overall enthusiasm for this research project.
Antoinette Sandoval assisted with UCD's paperwork processes over the years,
and Melanie Drever assisted with formatting of dissertation drafts.
Thank you all.
vii


TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION, RATIONALE AND IMPORTANCE OF RESEARCH.......................1
II. LITERATURE REVIEW.......................................................15
New Public Management................................................15
System Effectiveness.................................................17
Theoretical Model of PMM System Effectiveness........................24
Organizational Culture...............................................27
Organizational Learning..............................................32
Research Questions...................................................36
III. RESEARCH DESIGN AND METHODS...........................................38
Human Subjects Review Approval.......................................38
The Underlying Model.................................................39
Hypothesis Testing...................................................39
Selected State Agencies..............................................40
Agency Leadership....................................................41
Survey Instrument....................................................42
Review and Testing of Survey Instrument..............................44
Data Collection Methods..............................................46
Sponsors/Endorsements................................................48
Data Analysis........................................................49
Validity and Reliability.............................................51
Limitations..........................................................52
viii


IV. RESULTS AND FINDINGS.........................................53
Results....................................................53
Phase 1.................................................55
Phase II................................................57
Model Comparison........................................63
Findings...................................................63
V. CONCLUSIONS..................................................69
REFERENCES.......................................................73
APPENDIX.........................................................86
ix


LIST OF TABLES
Table
1: Fit Indices for Phase I and Phase II Models...................................................57
2: Construct reliabilities, item means, standard deviations and factor loadings..................60
x


LIST OF FIGURES
Figures
1: Yang and Hsieh (2007) Model of Performance Measure Effectiveness.................25
2: Phase I Model of Managerial Effectiveness of State Performance Measurement System
....................................................................................55
3: Phase II Model of Managerial Effectiveness of State Performance Measurement
System..............................................................................58
xi


CHAPTER I
INTRODUCTION, RATIONALE AND IMPORTANCE OF RESEARCH
This chapter will introduce the topic of this dissertation by reviewing current
practitioner challenges as they relate to state agencies and their need to improve performance
measurement and management (PMM) effectiveness. First, this dissertation is placed in
context by reviewing the increasing interest in PMM around the world, largely as part of
the field known in the academic literature as New Public Management (NPM). After
discussing these general trends, this chapter will then address PMM in federal and state
government, focusing on recent developments and on-going problems, barriers and
frustrations with scaling up a PMM system to deal with complex problems. This chapter
also considers the tensions between the broad recognition of the importance of PMM and
pushback from cultures that tend to assert that these systems cannot be effectively
applied to public agencies. Consistent with its focus on state agencies, this chapter
discusses one particularly important aspect of PMM for a state agency: the role of the
legislature. Finally, this chapter concludes that the adoption, application and transfer of
PMM systems in state agencies may be effective, but additional research and study of
processes and practices for success are needed.
Around the world, there has been widespread interest in applying business
principles to government activities (Perrin, 2006). Citizens are demanding greater
accountability for, efficiency in and transparency of government services (Hibbing &
Theiss-Morse, 1995, 2002; Association of Government Accountants, 2008). Policy
makers are also concerned with the efficiency and effectiveness of government services,
particularly when fiscal constraints are present. "At no time in modern history have state,
1


local, and provincial governments been under greater pressure to provide results that
matter to the public, often within severe resource constraints" (National Performance
Management Advisory Commission, 2010, p. vii).
According to Ammons (2002), "...the key to measurement's success as a
performance improvement catalyst lies in the ability of performance measures to inspire
managerial thinking. It is this managerial thinking that produces the strategies that bring
the desired improvements" (p. 346). Citizens have greater confidence in their
government, particularly in state legislatures, when governments exhibit high quality
government management and innovation (Kelleher & Wolak, 2007). Published reviews
and rankings of government performance have generated growing practitioner interest in
the importance of management effectiveness and the tools needed to achieve desired
results (Barrett & Greene, 2008; Pew Center on the States, 2008).
Academic literature, practitioners and politicians use many names and terms for
public management tools, techniques and initiatives. Often these terms are used
interchangeably. In 2010, the National Performance Management Advisory Commission
defined performance management in the public sector as:
an ongoing, systematic approach to improving results through evidence-based
decision making, continuous organizational learning and a focus on accountability
for performance. Performance management is integrated into all aspects of an
organization's management and policy-making processes, transforming an
organization's practices so it is focused on achieving improved results for the
public. (p. 3)
2


Performance measurement "helps governments monitor performance...although
measurement is a critical component of performance management, measuring and
reporting alone have rarely led to organizational learning and improved outcomes" (ibid,
2010, p. 3).
Government interest in improving management of operations and enhancing
accountability for results has spanned many years and political parties. The federal
government has a long history of performance management initiatives, through the
Clinton, Bush and Obama administrations. The cornerstone of the federal effort was the
Government Performance and Results Act (GPRA) of 1993 that required federal agencies
to develop long-term and annual goals through strategic plans and annual performance
plans and performance measures and to report progress on achieving goals (United States
Government Accountability Office, 2013). Other key federal initiatives in this area
were established under GPRA II, Executive Order 13450-Improving Government
Program Performance and OMB 10-24: Performance Improvement Guidance under
GPRA for 2011-2012 (Morgan & McCall, 2012). The Bush Administration attempted to
link program performance and budgets through the Program Assessment Rating Tool
(PART) (Joyce, 2011; Radin, 2011; Moynihan, 2013). The Government Performance
and Results Modernization Act of 2010 provided focus on key priorities, cross-agency
collaborative efforts, leadership and training, and government-wide reporting. The
updated authorization requires quarterly, data-driven performance reviews (United States
Government Accountability Office, 2013). Local governments were early adopters in
this field and have placed extensive emphasis on systems to enhance performance
management and accountability for their operations for many years (O'Neill, 2013).
3


Most state governments have implemented systems for performance management
and budgeting; however, many have not implemented all components of the typical
system (Brudney, Hebert, & Wright, 1999; Melkers & Willoughby, 1998; Moynihan,
2006). Common elements of a performance management system include strategic
planning, performance measurement, budgeting, reporting and evaluation (National
Performance Management Advisory Commission, 2010). Actual performance data in
these systems is generated by agencies most often in the executive branch, and sometimes
in the judicial branch and other constitutional agencies (Kettl & Fesler, 2005).
Information on programs and costs is maintained within an agency and is reported out
to the legislature and citizens.
For greatest success, practitioners were advised to avoid building systems that put
penalties in place for missing performance targets in the formative stages of
implementing these systems. There was an emphasis on using data and results as
information for various reporting, strategic planning, decision making, budgeting, and
communication purposes. Building in incentives, such as enhanced operational and/or
budget flexibility as well as additional funding, was viewed as a mechanism for
influencing behavior (Liner, et al., 2001; National Conference of State Legislatures,
2003).
Duties and responsibilities of the two branches of government along with the
associated formal and informal relationships are an important consideration for these
systems. Based on economic theory, the relationship between the legislative and
executive branches can be viewed from the principal-agent perspective (Huber & Shipan,
2000). The principal-agent problem is defined as the problem of devising compensation
4


rules that induce an agent to act in the best interest of a principal (Parkin, 2003, p. 198).
For example, Parkin discusses how agents, whether they are managers or workers,
pursue their own goals and often impose costs on a principal (p. 198). From a separation
of powers perspective, the legislative branch authorizes agencies and programs and
enacts the budget to provide for agency operations (Rosenbloom, 1983). The executive is
charged with implementing the law. In this sense, the legislature acts as the principal.
Greater flexibility provided to the executive branch by the legislative branch (as well as
by agency leaders to its program managers) is a fundamental component of the NPM
attempt to address the principal-agent problem. The role of the executive branch can
result in significant information asymmetries with respect to the legislative branch.
Program output and outcome data are generated and maintained by executive, judicial
and constitutionally created agencies. Information dissemination can be impacted by
tensions between the branches of government.
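For readers who want the underlying economics spelled out, the standard textbook formulation of this problem can be sketched as follows; this is a generic illustration, not a model used or estimated in this dissertation. The principal offers a compensation rule w(x) over observed output x, while the agent privately chooses effort e at cost c(e):

```latex
% Generic textbook sketch of the principal-agent problem (illustrative only):
% the principal chooses the compensation rule w(x); the agent chooses effort e.
\[
  \max_{w(\cdot),\,e}\ \mathbb{E}\bigl[x - w(x)\mid e\bigr]
\]
\[
  \text{subject to}\quad
  \mathbb{E}\bigl[u(w(x))\mid e\bigr] - c(e) \ \ge\ \bar{u}
  \qquad\text{(participation constraint)}
\]
\[
  e \in \arg\max_{e'}\ \mathbb{E}\bigl[u(w(x))\mid e'\bigr] - c(e')
  \qquad\text{(incentive compatibility)}
\]
```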
It is a significant undertaking for an agency and its staff to identify and collect the
needed information for a particular program, or the agency as a whole, along with
ensuring accuracy and reliability of the data. Constraints on staff resources, including
budget constraints, are a notable challenge to this effort (Pattison, 2012). During times of
budget constraints, agencies may face difficulties complying with reporting requirements.
Anecdotally, several states appear to be struggling with complicated, resource-intensive
systems, yet as technology evolves, it becomes less expensive to do this type of
management (Morgan & McCall, 2012).
One recent blog noted, "Developing the right strategy for using performance
information in government decision making is a tough nut to crack" (Pattison, 2012). A
5


few states are utilizing "stats"-based approaches grounded in data-based decision making
originally implemented by local governments (Moore & Braga, 2003; Abramson & Behn,
2006; Behn, 2008). For example, New York City launched CompStat for law
enforcement, and Baltimore launched CitiStat for municipal services. These approaches
to performance management systems emphasize a data-based management model along
with multiple agency efforts to address overarching goals. Other characteristics include
engagement of top leaders in frequent, periodic meetings and persistent follow-up with
clear accountability and continuous learning (Moore & Braga, 2003; Abramson & Behn,
2006; Hatry & Davies, 2011). The State of Washington developed the award-winning
Government Management Accountability and Performance (GMAP) system (State of
Washington, 2008), while Maryland uses a system named StateStat. Leaders in both
states are often present and participating in the performance meetings. Maryland also
merged the state's performance management system with the spending transparency
required by the American Recovery and Reinvestment Act (Rojas, 2012).
Effectively using PMM systems is difficult. In reviewing the academic literature,
there is little empirical evidence that state-level systems are broadly effective (Melkers &
Willoughby, 2001; Coggburn & Schneider, 2003; Rubin, 2005; Moynihan, 2006). In a
meta-analysis of over eight hundred research studies, Hill and Lynn (2005) found
traditional, hierarchical governance continues to predominate in the United States. These
authors concluded there has been "a gradual addition of new administrative reforms...
within a system of constitutional authority that is necessarily hierarchical" (p. 173). The
academic literature is continuing to quantitatively assess the effectiveness of PMM
systems in public agencies (Kroll, 2012), with relatively few studies focusing on state
6


government. Further, academic literature has only recently begun to explore the role of
the legislature in these systems. In an era when evidence-based policy making and
management are on the rise, the challenges are great. Heinrich (2007) notes: "the more
we have come to know, the more aware we are of how tentative, limited and sometimes
erroneous the bases of our information and evidence are" (p. 274).
Yet, despite the extensive practitioner challenges, the potential lack of political
will, changes in political leadership and the absence of conclusive empirical evidence that
these systems are effective, experts emphasize these systems are "here to stay" (Melkers
& Willoughby, 2004; Hatry, 2008; National Performance Management Advisory
Commission, 2010). In November 2006, the Board of Trustees of the Financial
Accounting Foundation reaffirmed that the Governmental Accounting Standards Board
(GASB) had the jurisdictional authority to include service efforts and accomplishments
in its financial accounting and reporting standards setting activities for state and local
governmental entities (GASB, no date). Subsequently, in April 2007, GASB announced
a project to develop principles-based suggested guidelines for voluntary reporting of
performance information and to update its publication Concepts Statement 2, also
known as Service Efforts and Accomplishments Reporting (Smith, Cheng, Smith,
& Schiffel, 2008). More recently, a report by the IBM Center for the Business of
Government noted performance and innovation are among the six key trends driving
change in government (Chenok, Kamensky, Keegan, & Ben-Yehuda, 2013). These
authors concluded government is shifting to a performance culture, providing incentives
to use data and emphasizing the use of evidence for decision making. The report also
emphasizes leaders must understand the value of innovation, and link innovation to
7


mission.
Many state leaders continue to emphasize the importance of these systems.
Maryland is noted for its success in using performance measures in budgeting (Burnett,
2013). A recent review of the New Jersey governor's performance budgeting initiative
concluded the system is slightly ahead of schedule, and state agencies are embracing the
plan (Holzer, Mullins, Ferreira & Hoontis, 2012). These authors found that agency staff
view the initiative as a useful management tool and a culture-changing movement. In
December 2011, the governor of California issued an executive order instructing the
Department of Finance to modify its budget process to focus on efficiency and program
goal accomplishments (Pattison, 2012). In February 2012, the governor of Massachusetts
issued an executive order directing state agencies to publish two-year strategic plans with
goals and metrics (Pattison, 2012).
However, Alabama is a notable exception to this trend. In 2011, the State of
Alabama eliminated the requirement for agencies to link budget requests to strategic
plans, even though the system was implemented in 2004 to encourage strategic planning
and performance budgeting. The governor was concerned agency goals were not in
alignment with resource availability, given the state's budget difficulties (Barrett &
Greene, 2012). Additional concerns included insufficient support from the legislature as
well as executive branch leadership, inadequate technology to support the program,
insufficient linkages to overall state goals/strategy along with lack of widespread
adoption. Most of the state's other performance management system requirements
were retained. This state provides a reminder that when administrations change or
legislative leadership changes, so can the momentum on PMM systems.
8


Several initiatives attempted to identify common or model performance measures
and systems to use throughout the nation. The Council of State Governments launched
the State Comparative Performance Measurement Project to address comparative
performance data to help states set reachable targets and identify best practices (Council
of State Governments, no date). Areas of focus were transportation, public assistance and
child welfare. These initiatives lend themselves well to benchmarking, already quite
common in higher education, due to the extent of required federal reporting for
universities and colleges.
In February 2008, nine state and local organizations established the National
Performance Management Advisory Commission to develop a separate voluntary,
national principles-based framework for PMM. Sponsoring organizations were the
Association of School Business Officials International, National Association of State
Budget Officers, Council of State Governments, Government Finance Officers
Association, International City/County Management Association, National Association of
Counties, National Association of State Auditors, Comptrollers, and Treasurers, National
Conference of State Legislatures, the National League of Cities, and the United States
Conference of Mayors. The group's final report provides a framework for public sector
performance management based on seven key principles (National Performance
Management Advisory Commission, 2010). The principles were specified as follows (p.
8):
1. A results focus permeates strategies, processes, the organization culture, and
decisions.
9


2. Information, measures, goals, priorities, and activities are relevant to the priorities
and well-being of the government and the community.
3. Information related to performance, decisions, regulations, and processes is
transparent, i.e., easy to access, use, and understand.
4. Goals, programs, activities, and resources are aligned with priorities and desired
results.
5. Decisions and processes are driven by timely, accurate, and meaningful data.
6. Practices are sustainable over time and across organizational changes.
7. Performance management transforms the organization, its management, and the
policy-making process.
Despite this important report, frustrations in implementing and using these
systems continue to be evident, particularly among practitioners, leading them to ask
questions such as the following:
Are the right things being measured, and are the data of sufficient quality to be
useful for decision making?
How can performance measure data be used?
What is the next step when a performance target is not met?
How can performance metric information be incorporated into decision making
during times of tight budget constraints?
Are there best practices or steps to help ensure success?
If programmatic or budget staff propose next steps, will politicians and the
governance structure follow these recommendations?
How can a system of performance metrics focus on results, rather than process?
10


How can performance accountability for networks be incorporated?
Does this matter and can such a system make a difference?
Do both branches of government need to collaborate for the systems to be effective?
Under what conditions is cooperation between the branches possible?
The consequences of ignoring the need for an effective PMM system are
beginning to surface in areas such as higher education and transportation. Higher
education, in particular, is facing both a growing need for effective systems and processes
and increasing state and federal interest in funding based on improvements in
performance outcomes. Transportation agencies are facing significant linkages between
performance and federal funding availability. These developments signal that
expectations for PMM systems are high, and that leaders and managers must move
beyond implementation and learning-by-doing.
In the area of higher education institutional accreditation, there was significant
news that fourteen California community and junior colleges were placed on
probationary or warning status by their accreditors in 2008 (Basken, 2008). As
background, the accreditor had set expectations for colleges to define their own
performance measures and to learn from their managerial processes in 2002. The
accreditor indicated it had never enforced these rules. This enforcement action was
attributed to the Bush administration and the Spellings Commission report on higher
education that called for enhanced accountability. Accreditor interest in performance
continues, and the Higher Learning Commission's new criteria for accreditation issued in
January 2013 emphasize the importance of student persistence and completion and
assessment of student learning.
11


Performance outcomes are increasingly influencing funding in higher education.
According to the Wisconsin Center for the Advancement of Postsecondary Education, 20
states were using performance-based funding for higher education through 2010, and
several other states are considering this approach (Tandberg & Hillman, 2013). The
National Governors Association has issued guidelines for governors to focus on student
completion and other performance measures (Garrett & Reindl, 2013). President Obama
has proposed a plan to measure university and college performance via a ratings system
based on performance metrics, particularly outcomes, and ultimately, availability of
federal student financial aid would be tied to this ratings system (Executive Office of the
President of the United States, 2013).
Transportation is another area of state government operations where funding is
increasingly tied to performance outcomes. The Moving Ahead for Progress in the 21st
Century Act of 2012 (MAP-21) authorizes surface transportation program funding of
over $105 billion for federal fiscal years 2013 and 2014; the legislation was the first long-
term highway authorization enacted since 2005 (United States Department of
Transportation, 2013). The legislation establishes performance and outcome-based
programs for states to invest resources to make progress towards national goals in seven
areas: safety, infrastructure condition, congestion reduction, system reliability, freight
movement and economic vitality, environmental sustainability, and reduced project delivery
delays. Through coordination processes, states will set performance targets for specific
areas (ibid, 2013). MAP-21's legacy may be its provisions to implement a performance
measurement-based system for the federal-aid highway program with which state
transportation agencies must comply.
12


In sum, state governments are operating in a climate of accelerating pressure to
effectively use PMM systems to increase efficiency and produce results, yet there is little
mid-level theory and insufficient evidence to identify the factors and practices that
influence or predict managerial effectiveness of this tool within the public sector. In
particular, the broad potential benefits of these systems, including strategic planning,
decision making, budgeting and communication, have received little empirical analysis as
a system. In addition, the roles of organizational learning and organizational culture are
not well developed. It is not clear that practitioners fully understand the management
practices needed to achieve the full benefits of these systems. As well, the role of the
state legislature in influencing system effectiveness needs further exploration.
This research is designed to reduce the gap between the increasing importance of
performance management in agencies and the dearth of mid-level theory and research
that identifies the factors and practices that might influence or predict managerial
effectiveness of this tool within the public sector. This study will review the literature on
NPM, system effectiveness, organizational culture and organizational learning in order to
modify a mid-level theoretical model to assess the current state of effectiveness of
performance management systems at the state government level. It will then test this
model to determine whether the key variables of external politics, stakeholder
participation, organizational support, technical training, performance measurement
adoption, organizational culture and organizational learning affect PMM in state
agencies.
This research can contribute to the literature and practice in several ways. First,
the study moves beyond the more typical descriptive approach. The study is empirical
13


and uses hypothesis testing to quantitatively test theory on a national scale by surveying
agencies in all fifty states. The study quantitatively tests key factors for their impacts on
performance and managerial effectiveness at the state level. The study incorporates
frameworks of organizational culture and organizational learning into an existing
theoretical model, tests the predictive value of those variables on the effectiveness of
PMM systems in state agencies, and recommends areas of future research.
Finally, the project will identify and recommend key factors and smart practices
for design, adoption, implementation and long-run system effectiveness for government
practitioners. Because each government has its own unique characteristics and history,
approaches that work well for one may not be appropriate for another. However, all good
performance management systems incorporate certain key principles (National
Performance Management Advisory Commission, 2010, p. 2).
To conclude, this chapter has reviewed the challenges facing government to apply
a set of tools and techniques originating in the private sector to meet pressures for
efficiency, effectiveness and improving performance. These performance improvement
demands may come in the form of meeting strategic and operational goals as well as
meeting public policy needs. Migrating a PMM system to a public agency has proven to
be difficult. Further, for leaders and managers, as well as states in general, there is a
growing risk that lack of PMM effectiveness can impact funding available to a public
organization or a state. Public leaders and managers need to understand the practices and
processes needed for success, as the consequences of falling short are becoming more
evident.
14


CHAPTER II
LITERATURE REVIEW
This chapter will review the literature that focuses on key concepts and branches
of theory that particularly relate to PMM in state agencies. The areas discussed include
NPM and effectiveness of PMM systems. In addition, specific details of an important
theoretical model of a PMM system are presented. The strengths of this model are noted
as this model serves as the basis for this research. Finally, the literature on organizational
culture and organizational learning, particularly as it relates to state agency PMM, is
discussed to inform the potential for these concepts to impact system effectiveness.
New Public Management
Around the world, governments are seeking innovative and effective approaches
to solve complex problems. NPM is an area of the academic literature offering potential
solutions to these challenges.
Management approaches emphasizing flexibility, accountability and techniques
such as performance outcomes measurement fall within the umbrella of NPM. NPM is
considered a technique to improve government services through reducing organizational
hierarchy, focusing on mission and objectives, using tools and practices traditionally
found in the private sector, privatizing government services, injecting the use of market
forces and signals along with many other approaches. Rainey (2003) noted the movement
"has taken various forms but has often emphasized the use in government of procedures
similar to those purportedly used in business and private market activities..." (p. 60).
15


NPM is considered by some to be "one of the most striking international
trends in public administration" (Hood, 1995, p. 3). It offers a number of potential reform
mechanisms and has been adopted in various stages with differences in adoption
throughout the world (Kettl, 2005). The conventional wisdom is that the field has its
origins in public choice theory and managerialism (Gruenig, 2001). However, Gruenig
(2001) argues that performance measurement has its theoretical roots in classical public
administration, neoclassical public administration, principal agent theory, policy analysis
and rational public management. Performance management is considered by some to
have preceded and outlived NPM and is considered a key to governance (Ingraham &
Lynn, 2004; Kettl & Kelman, 2007).
In the public sector, NPM sets up a seemingly internal conflict. There are those
who view the approach as emphasizing top-down managerial control and a failure to reach
out to stakeholders, including citizens, so that government serves their needs (DeLeon
& Denhardt, 2000; Piotrowski & Rosenbloom, 2002). Others emphasize
the opportunities to empower all levels of government staff as well as a change in focus
from process and deliberation to flexibility to achieve results.
The United States has a long history of interest in and some success at reform
(Stillman, 1996). NPM reforms in the United States have had mixed success (Wechsler,
1994; Kettl, 1998; Light, 1998; Radin, 1998; Thompson, 2000). Over twenty years ago,
the practitioner-led Reinventing Government movement influenced all levels of
government in the United States with its entrepreneurial message (Osborne & Gaebler,
1992). Federal government initiatives have spanned both political parties, but challenges
remain (Newcomer & Caudle, 2011). These approaches were implemented at the federal
16


government level through the Clinton Administration's National Performance Review, an
unprecedented review of federal government operations led by Vice President Gore
(Rainey, 2003). Federal government implementation emphasis was formally authorized
with enactment of the Government Performance and Results Act in 1993 that required
strategic planning and performance information. The Bush Administration has been
credited with attempting to link performance and budgeting through the Presidents
Management Agenda (Joyce, 2003; Gilmour & Lewis, 2006) and the use of the Program
Assessment Rating Tool (PART) (Moynihan, 2013). The Obama Administration
continued the emphasis on the importance of performance management (Joyce, 2011),
presided over passage of the GPRA Modernization Act of 2010, and is moving towards
emphasizing performance requirements for state and local governments in the context of
funding availability, as discussed in the chapter on the importance of this research. In
comparison, state government initiatives are broad based with mixed results, but states
are expected to stay the course (Melkers & Willoughby, 2004). Local governments are
extensively involved in reforms based on NPM principles (Ammons, 2001; Ho, 2011).
While NPM has been held out as offering solutions, there is a need to better
understand how to successfully apply these tools in a government environment. More
specifically, a theoretical and practical understanding of a successful PMM system is
critical.
System Effectiveness
This section will discuss problems with PMM in state agencies as noted in the
academic literature, then turn to findings about elements, practices and processes of these
systems that may be particularly beneficial. Information and data are generated by
17


leaders, managers and staff managing and administering programs, but in a public
agency, the role of the legislature in providing authorization and funding is also noted.
This section will conclude with a discussion of the lack of a clear understanding from the
literature about the legislature's role in the effectiveness of PMM systems.
As discussed earlier, state adoption of performance management systems is
widespread, but some states struggle with even simple aspects such as performance
reporting. Moynihan (2006) evaluated a decade of reform and found some evidence of
performance information in the documents of 48 states. Of these 48 states, the top score
was 101 for Arizona. Other states with scores above 75 were Delaware, Iowa, Louisiana,
Missouri, Texas and Virginia. There were ten states that had performance information
scores below 25. In a more recent analysis using GASB reporting criteria, quality
performance reporting of key agencies was limited to a few departments spread
throughout the nation, and only Oregon had consistently strong reporting in all four types
of state agencies evaluated (Smith et al., 2008). Variation in data use has also been
shown by many other researchers, including Poister, Pasha and Edwards (2013) and
Pollitt (2006).
Yet several scholars have argued or found that state performance and
accountability systems are not completely consistent with the NPM model. For example,
based on a survey of fifty states, nearly 40% of surveyed state agencies had fully
implemented strategic planning, and only 20% of states had fully implemented training
programs to provide customer service. From the full implementation perspective, only
about 5% of states had simplified human resources rules, privatized major programs or
authorized greater agency discretion of end-of-year balances (Brudney, Hebert, &
18


Wright, 1999). Moynihan (2006) found that states had not implemented a consistent
package of changes. In particular, he found low levels of agency autonomy in the areas of
procurement, contracting, budgeting and human resources.
Differences in governments, such as government structure and underlying law,
and the role of governance and politics are among the reasons why states may need to
implement differing approaches (Behn, 2001; Kettl, 2005). Tailoring innovations to fit
local needs requires that we move beyond best practices to smart practices (Bardach,
2000; Klingner, 2006). Yet, there is a need for theories to explain the conditions and
mechanisms that can lead to successful PMM for a broad range of state agencies.
Ingraham, Joyce and Kneedler-Donahue (2003) were lead investigators for the
Government Performance Project (GPP) and the Federal Performance Project (FPP), with
the first analytical results published in 2003. This project represented the largest
assessment of public management capacity ever undertaken in the United States. Its
purpose was to build a model of government management capacity and its components.
The model reflects that government performance is a function of management capacity
and various environmental constraints and opportunities. These authors defined capacity
as government's intrinsic ability to marshal, develop, direct and control its financial,
human, physical, and information resources (p. 15). In this model, they hypothesized
that capacity was driven by four levers: the character of the government's management
systems, the level and nature of leadership emphasis, the degree of integration and
alignment across its management systems, and the extent to which it manages for results
(p. 15).
19


Subsequent survey and data updates of the state-level analysis were conducted
every two years by the Pew Center on the States and published as Grading the States
report cards in Governing magazine (Barrett & Greene, 2008). The Pew project used a
methodology similar to that of the original Government Performance Project, but
substituted grading categories that were more transparent to citizens: information
technology management became information; capital management became infrastructure;
financial management became money; and human resources management became people.
States were assigned an overall grade for each of these four categories based on specified
subsidiary components of each category, which were given numerical ratings and graded as
either green, yellow or red (Pew Center on the States, 2008). Both the information and
money categories had criteria focused on performance measurement.
The Government Performance Project and Grading the States initiatives advanced
academic research and elevated practitioner emphasis on public management and use of
performance measures. These projects were grounded in public management theory and
provided a dataset of the population of states. Yet, the overall grades in each category
and the specific criteria rankings and scores of each report were not consistently readily
apparent and were not quantified. Therefore, it was difficult to assess the relative
importance of subsidiary elements or of overall categories in contributing to managerial
effectiveness. While helpful, these projects did not show practitioners the key elements
needed to achieve on-going success from a system of performance measures.
The design and methodology of the Grading the States project was focused on the
executive branch. The survey was administered by the Pew Center to the governor's
office in each state, and responses were compiled and generally coordinated by the
20


executive branch. For the money category, there was not a single element of the
criteria that specifically identified the legislative branch, despite the power of the
legislature to pass authorizing and appropriating legislation. Of the fourteen criteria in
the information category, five criteria specifically mentioned the legislature or elected
officials.
The academic literature has attempted to identify practices leading to success of a
system of performance measures, but much remains to be done. Moynihan and Pandey
(2005) studied the impacts of external environment and levers of internal change and
technology on performance. Using data from a national study of state government health
and human services leaders, these authors used an ordinary least squares technique, and
their model explained 73.5% of the variation in managers' perceptions of organizational
effectiveness. Significant independent variables included the support of elected officials
and the influence of the public and the media; these variables showed the strongest
relationship with organizational effectiveness, with beta coefficients of .315 and .365,
respectively. Internal to the organization, goal clarity and decentralized decision making
were also important and positively influenced organization effectiveness. These authors
emphasized the importance of a developmental organizational culture, defined to focus
on flexibility, adaptability and readiness, growth and resource acquisition. The beta
coefficient for a developmental organizational culture was .177.
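As a rough illustration of the kind of analysis Moynihan and Pandey describe, an ordinary least squares model that reports standardized (beta) coefficients can be sketched in a few lines of Python. The data file and variable names below are hypothetical placeholders, not the authors' actual dataset.

```python
# Minimal sketch of an OLS model reporting standardized (beta) coefficients,
# in the spirit of the Moynihan and Pandey (2005) analysis. The CSV file and
# column names are hypothetical placeholders, not the original study's data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("state_hhs_survey.csv")  # hypothetical survey extract

predictors = ["elected_official_support", "public_media_influence",
              "goal_clarity", "decentralized_decision_making",
              "developmental_culture"]
outcome = "perceived_org_effectiveness"

# z-score every variable so the fitted slopes are standardized betas,
# comparable in form to the coefficients quoted above (.315, .365, .177)
cols = predictors + [outcome]
z = (df[cols] - df[cols].mean()) / df[cols].std()

fit = sm.OLS(z[outcome], sm.add_constant(z[predictors])).fit()
print(fit.rsquared)   # share of explained variance (73.5% in the study)
print(fit.params)     # standardized coefficient for each predictor
```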
According to Wellman and VanLandingham (2008), use of performance-based
budgeting in Florida resulted in governmental operations that are more efficient,
accountable, and effective as well as improved legislative oversight and program
effectiveness. Their recommendations to enhance implementation and effectiveness of
21


such a system included designating a leader for the system, keeping information readable,
and making valid and reliable performance information readily available. These authors
recommend a hierarchy of measures so that the legislature does not deal with excessive
amounts of information, a patient, yet unrelenting approach, and keeping expectations
reasonable.
Hou, Lansford, Sides and Jones (2011) noted challenges in using performance
metrics for state budgeting, but also greater opportunities for their managerial application
in agencies. Their study found that executive agency leaders, middle managers and staff
strongly supported and used performance management systems, although to a lesser
degree during weak economic and fiscal conditions when budgets tend to drive policy
rather than the reverse.
Recent research focusing on federal agency experience with PMM systems is also
relevant to state efforts. For example, a 2013 United States Government Accountability
Office (GAO) study identifies "leading practices" to promote successful data-driven
performance reviews, required by the federal government. The study relied on a survey
of performance improvement officers at 24 federal agencies and case studies of
implementation practices at the Department of Energy, Small Business Administration
and the Department of the Treasury. The report identified nine practices, including the
importance of data-driven performance reviews characterized by significant agency
leadership participation; rigorous preparation; attendance of key players at meetings to
enhance problem solving; and sustained follow-up on identified issues. There was a
significant emphasis on "rigorous" processes. The GAO report also emphasized ensuring
22


skill levels of staff to analyze and to communicate complex data for decision making
(United States Government Accountability Office, 2013).
Other research findings align well with these GAO recommendations. In a study
of international initiatives as well as US federal government initiatives, Newcomer and
Caudle (2011) recommend providing an appropriate organizational culture and climate
that includes strong leadership for reform and supporting managers responsible for
program performance, along with implementing a systems approach and providing
continuing support and capacity (p. 122). In a study of federal agencies and the PART,
Moynihan and Lavertu (2012) highlight the following practices to enhance use of
performance information: Leadership commitment to results; learning routines led by
supervisors; motivational tasks; and ability to link measures to action. These
recommendations are reinforced by Hatry and Davies (2011) who emphasized the need
for interested and engaged leadership, timely data on performance, and staff analysis
prior to review meetings as keys to successful performance reviews. This report was
based on a review of successful practices of federal, state and local agencies. These
performance reviews are now required by the GPRA Modernization Act of 2010 and are
intended to enhance agency effectiveness and efficiency (ibid, 2011).
Turning to the role of the legislative branch, previous research established the role
of the legislature as important to PMM systems (Liner, et al., 2001; National Conference
of State Legislatures, 2003; Bourdeaux & Chikoto, 2008). The information is
particularly useful as an accountability tool. Legislators and staff can use performance
information in hearings; to make appropriations and policy decisions; to provide
knowledge to inform policy development and improve communication with constituents;
23


and to change the approach to service delivery (National Conference of State
Legislatures, 2003: xii). Bourdeaux (2005, 2006) concluded that legislative engagement
in oversight of performance information resulted in greater use of performance measures
by both the legislative and the executive branches. However, Hou et al. (2011) found
performance-based budgeting was only selectively applied by legislators. Given the
contradictory and inconclusive nature of previous research on the role of the legislative
branch with respect to PMM systems and their effectiveness, additional research is
needed.
In sum, the literature's assessment of PMM systems shows mixed results for
effectiveness. Yet, patterns are emerging, and these factors will be discussed at the
conclusion of this chapter.
Theoretical Model of PMM System Effectiveness
Yang and Hsieh (2007) moved beyond descriptive research and used hypothesis
testing to quantitatively understand the impact of five independent variables on
managerial effectiveness of performance measurement. Their study was conducted at the
local government level and focused on Taipei, the capital of Taiwan, a democracy
pursuing a NPM agenda. With respect to research design and methodology, the study
involved surveying staff in every unit of the Taipei government, including 12 district
governments. The survey had 28 questions. The survey instrument was reviewed by a
peer review panel which had five experts on performance measurement and survey
methodology. The authors used the Dillman method to increase survey response rates and
had a survey response rate of approximately 61%. The authors used univariate analysis,
correlation analysis and structural equation modeling to analyze the results.
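For readers unfamiliar with the technique, a structural equation model of this general shape can be specified quite compactly. The sketch below assumes the Python package semopy (Yang and Hsieh themselves used AMOS); the construct names, indicator items and data file are invented for illustration and do not reproduce the original survey instrument.

```python
# Minimal sketch of a structural equation model in the spirit of Yang and
# Hsieh (2007). Assumes the semopy package; construct names, indicator items
# and the data file are hypothetical, not the original instrument.
import pandas as pd
from semopy import Model, calc_stats

model_desc = """
effectiveness =~ eff1 + eff2 + eff3
org_support   =~ sup1 + sup2 + sup3
adoption      =~ adopt1 + adopt2

adoption      ~ org_support + training
effectiveness ~ org_support + adoption + stakeholder_participation
"""

data = pd.read_csv("survey_items.csv")   # hypothetical item-level responses
sem = Model(model_desc)
sem.fit(data)

print(sem.inspect())     # estimated path coefficients and standard errors
print(calc_stats(sem))   # fit indices (chi-square, CFI, RMSEA, etc.)
```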
24


The study developed a scale for the dependent variable of performance
effectiveness and included eleven elements reflecting major aspects of performance
measurement, including indicator quality, use of performance results and performance
measurement effects. Of particular interest for this research study is that Yang and Hsieh
defined use of performance results to include decision making, strategic planning,
budgeting and communication. Of key importance to this dissertation research, these
authors included organizational learning as a component of the scale of the dependent
variable. However, Yang and Hsieh did not include a variable in the model to reflect
organizational culture.
Figure 1: Yang and Hsieh (2007) Model of Performance Measure Effectiveness. Adapted
from "Managerial Effectiveness of Government Performance Measurement: Testing a
Middle-Range Model," by K. Yang and J. Hsieh, 2007, Public Administration Review,
67(5), p. 867. Copyright 2007 by the American Society for Public Administration.
25


The authors included both internal and external organizational factors as
independent variables in their model. The study drew from public management and
organizational theory to develop independent variables for organizational support,
technical training and adoption. Using political science theory, these authors also
developed independent variables for external political support and stakeholder
participation. The study involved the testing of eight hypotheses, principally through the
use of structural equation modeling. All five independent variables were found to be
significant in the base case model, which had an R2 of .56.
Organizational support, including top management commitment, middle manager
support and subsystem collaboration, was determined to be the most important predictor
of performance measure effectiveness. The impact of organizational support on
performance measurement effectiveness had a coefficient of .52. Organizational support
also affected stakeholder participation (coefficient = .54) and adoption (coefficient = .29).
External political support was also found to affect managerial effectiveness directly
(coefficient = .10) and indirectly (such as via external stakeholder participation,
organizational support and technical training, with coefficients of .11, .50 and .444,
respectively). External political support was determined by the extent that an agency
enjoyed political autonomy, authority and political support by elected officials for
initiatives. Stakeholder participation had a slightly positive impact on effectiveness, with
a coefficient of .17.
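To make the mediation mechanism concrete, an indirect effect in a path model is the product of the coefficients along the connecting paths. The following minimal sketch is illustrative only (it is not a calculation reported by the study) and uses the coefficients summarized above:

```python
# Illustrative calculation (not from Yang & Hsieh): an indirect effect in a path
# model is the product of the coefficients along the mediating path.
b_political_to_support = 0.50       # external political support -> organizational support
b_support_to_effectiveness = 0.52   # organizational support -> effectiveness

indirect_via_support = b_political_to_support * b_support_to_effectiveness
print(f"Indirect effect via organizational support ~ {indirect_via_support:.2f}")
# ~0.26, which is larger than the direct effect of 0.10 reported above
```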
These authors concluded that adoption, and associated training, are important in
enhancing performance measure effectiveness. Adoption was measured by the type of
performance measures included in the system, an indication of the comprehensiveness of
the set of performance measures. However, adoption by itself was not sufficient for
performance effectiveness. The continual use and refinement of the system, which these
authors refer to as implementation, must also occur. The impact of technical training on
effectiveness was mediated by adoption.
This model has many strengths, but needs further testing. The model goes beyond
simply addressing whether PMM affects a single result, such as budgeting, to more
broadly consider the many uses of performance data. The model is grounded in public
management, organizational and political science theories and includes considerations
both internal and external to a public agency. The measurement scales to create the
model's constructs were developed with significant review. Finally, the model results
were significant and exhibited good fit.
In sum, Yang and Hsieh identified areas for future research which are particularly
relevant to this dissertation research. They noted future studies are needed to test the
model with data collected in the United States and to validate the model by using more
objective measures (p. 872). Furthermore, additional variables, such as organizational
culture and organizational learning, offer ways to extend the model and thereby
potentially enhance its usefulness for practitioners.
Organizational Culture
This section will address the review of the literature on organizational culture,
particularly as it may apply to PMM. Further, this section will consider how
organizational culture may be useful in considering the applicability of the Yang and
Hsieh model to PMM for state agencies.
In The Functions of the Executive, Barnard (1956) advanced the argument that the
informal organization could harmonize work in the organization (p. 279). In general,
failures of initiatives for organizational improvement are caused not by inadequate
policies or management incompetence, but by neglect of the organization's culture (Ott & Shafritz, 1994). Yet while organizational
culture is a key to organizational success (McNabb & Sepic, 1995; Khademian, 2000), it
is presently a social construct rather than an operationally defined theoretical concept.
Organizational culture includes values, beliefs, assumptions, perceptions,
behavioral norms, artifacts and patterns of behavior; socially constructed, unseen and
unobservable forces; social energy; unifying themes and control mechanisms (Ott, 1989).
Further, organizational culture can be used as a means to understand the context
(constraints and opportunities) within which managers manage and how management
matters (Khademian, 2000, pp. 33-34). Culture and performance management
effectiveness are linked in other studies of government performance and reform (de
Lancer Julnes & Holzer, 2001; Garnett, Marlowe & Pandey, 2008; Caiden, 2010). The
concept of organizational climate is outside the scope of this research for two key
reasons: 1) Organizational climate can be perceived as an index rather than a causative
factor in an organization's operation (Jung et al., 2009) and 2) organizational culture
may be a better variable when surveying agency leaders.
Performance management is generally associated with power cultures, not role
cultures (Van Wart, 2005). Role cultures are traditional for large, older bureaucracies
(Handy, 1993). Individual effectiveness in a power culture is typically measured by
results. In contrast, individual effectiveness in a role culture is measured by conforming
to rules and regulations (Handy, 1993; Fleenor & Bryant, 2002).
It is not clear if managerial reform or reinvention or other organizational
improvement processes can be achieved unless organizational culture is changed.
According to Ott (1995), "Having made the pronouncements warning would-be ...
implementers [of change] that they cannot succeed without first changing the
organizational culture, too many ... articles and books [on managerial reform] simply
proceed to the next implementation step with a clear conscience" (p. 365).
In this author's view, the Yang and Hsieh model raises two interesting questions
from the perspective of organizational culture. First, there is the issue of culture and the
generalizability of the findings from Taipei to the United States. Second, there is the issue
of culture which could be included as an independent variable in the model to assess its
relationship with other variables along with its impact on performance effectiveness.
Schein (2002) emphasizes the importance of the role of culture and cultural analysis in
management across national and ethnic boundaries, and Fitzpatrick et al. (2011) noted
comparative studies may find problems with administrative reforms, when the role of
culture and its associated impact on interventions may, in fact, deserve greater attention
in the analysis. The Yang and Hsieh mid-level theory was tested in Taipei. At least one
scholar has concluded that there is an Asian model of democracy; because that model
emphasizes two political parties, the Asian model is more similar to the United States
than approaches to democracy in other parts of the world (Reilly, 2007). Yet, there is a
distinct bureaucratic culture based on the teachings of Confucius in East Asia.
Frederickson (2002) notes that Asian bureaucratic culture operates as follows: "The
leaders or rulers of the state have a moral obligation to ensure peace, prosperity, and
justice so that the people will be happy and able to live full lives. The people have a
moral obligation to support their leaders so long as those leaders are meeting their moral
obligations to them" (p. 613). Therefore, testing the Yang and Hsieh performance
effectiveness model in the United States would be useful.
The second issue relates to the need to consider including an independent variable
to reflect culture and innovation. One strand of the NPM literature revolves around the
role of government managers with a significant debate on the appropriate degree of
entrepreneurism in public service. Critics argue public sector entrepreneurs may be
characterized as rule-breaking, self-promoting, and unwarranted risk takers who can
become loose cannons (Terry, 1998; DeLeon & Denhardt, 2000), while proponents view
them as exercising leadership and proactively engaging in strategic, astute initiatives to
creatively solve public-sector problems and avoid crises (Behn, 1988; Borins, 2000).
Bozeman and Kingsley (1998) found that a clear organizational mission and top
managers' willingness to trust employees were positively related to a risk-taking culture.
These authors also found that notable red tape, weak links between promotion and
performance, and high involvement with elected officials tend to characterize
organizations with a lower risk culture.
Cameron and Quinn (2006) use a competing values framework to identify four
categories of organizational culture: Clan, adhocracy, hierarchy and market. These
authors surveyed 43 public administration entities and found the dominant culture to be
hierarchy. There was little evidence of adhocracy in the organizational culture of the
public administration entities studied.
The lens of organizational culture and its impact on effectiveness of a system of
performance metrics has been studied by other public management researchers. In
particular, Kim (2010) viewed reforms within the umbrella of NPM as entrepreneurial,
requiring risk-taking, innovation and being proactive. This study involved a survey of
957 leaders of state government agencies in the 48 continental United States to assess
factors impacting performance. Kim's model explained the impacts of the individual
variables on organizational performance with an R2 of .455. Being proactive had the
strongest relationship to performance. Other significant variables included risk-taking
and innovativeness. Moynihan and Pandey (2010) concluded that organizational culture
is important in determining whether local government agencies use performance
information.
The innovation culture approach is somewhat different from viewing innovation
as a process. Innovation in public organizations has been identified as "new ways of
managing, organizing and delivering services" (Walker, 2008: 600). Behn (2008)
describes the innovation process as having four stages: diffusion, transfer, propagation and
replication. Recent empirical research focused on innovation and innovation adoption
and its positive impact on performance management (Damanpour & Schneider, 2006,
2009; Walker, Damanpour & Devece, 2011). These authors concluded innovation
characteristics and attitudes, manager attitudes and organizational characteristics
influence innovation adoption. These authors concluded the impact of management
innovation on organizational performance is mediated by performance management, and
performance management positively impacts organizational performance. This study
reminds us that performance management is only one factor impacting overall
organizational performance.
Inclusion of organizational culture in this study of PMM effectiveness will
enhance our understanding of organizational culture, its relationships with the study
variables as well as its relationship with PMM effectiveness. Certain state government
functions are expected to exhibit differences in organizational culture. A key challenge
for researchers is finding a useful way to measure the "personality" of the organization, but there
may be a unique culture these agencies share that relates to the success of PMM.
Organizational Learning
Given that PMM systems have been in place in the United States for over twenty
years, yet many practitioners continue to be frustrated and challenged in finding success
in their implementation, one may wonder how effectiveness might be achieved over the
long run. In particular, the idea of learning from mistakes and
incorporating what one learns into future efforts is fundamental to the concept of
organizational learning.
Cyert and March (1963) introduced the concept of organizational learning.
Among the many definitions of learning, Fiol and Lyles (1985) emphasize the following:
"the development of insights, knowledge, and associations between past actions, the
effectiveness of those actions, and future actions" (p. 811). According to Pawlowsky
(2001), "the amount of literature that has appeared on the subject in the past two decades
is overwhelming" (p. 63). Senge (1990) presents a theoretical model of organizational
learning which relies on knowledge, learning, dialogue and shared mental models. The
model has a long-term focus and feedback loops are therefore key.
Given the many complex challenges facing public administration in general,
knowledge management is "the effective, purposive use of knowledge by leaders in
positions of authority or responsibility to achieve social, political, economic, cultural, or
environmental objectives... and is closely allied with building governance capacity"
(Klingner & Sabet, 2005: 201). Advanced forms of learning are of particular interest,
because they are considered useful for solving new or complex problems, restructuring
whole processes or systems, reanalyzing a job from a completely new perspective or
reengineering an organization to adapt to major environmental changes (Argyris &
Schon, 1978; Senge, 1990). As well, this approach is particularly useful given trends for
change in the public sector, particularly the shift to management of "a capital asset where
the knowledge, skills and abilities possessed by people defines what they are capable of
doing" (McGregor, 2000, p. 134).
There are two types of learning in the classic model of organizational learning
(Argyris & Schon, 1978). In single-loop learning, individuals, groups, or organizations
modify their actions according to the difference between expected and obtained
outcomes. For double-loop learning, individuals, groups or an organization may question
underlying values, assumptions and policies. Specifically, second order learning occurs
when an organization is able to view and modify the underlying elements. In public
administration, Moynihan (2005) views broad understanding of policy choices and
effectiveness as examples of double-loop learning (p. 203).
Particularly in the public sector, applying organizational learning is challenging
due to inherent tensions. Common (2004) was the first to analyze the role of
organizational learning in improving public policy-making in a political environment.
Common found the following obstacles to organizational learning: 1) overemphasis on
the individual; 2) resistance to change and politics; 3) social learning that is self-limiting,
i.e., individualism; and 4) a political blame culture, resulting in a very narrow opportunity
for its application (p. 39). The desired focus on learning is in contrast to an emphasis on
implementation and execution, in that a focus on getting things done, and done right,
crowds out the experimentation and reflection vital to sustainable success (Edmondson,
2008, p. 62).
Moynihan (2005) emphasizes the need to include learning forums in management
reform design to incorporate double-loop learning. In his 2005 study, he used case studies
of the departments of corrections in Vermont, Virginia and Alabama; these states were at
significantly different phases of managing for results. The discussion provides clear
examples of sophisticated management techniques involving data, analysis, learning and
questioning by the leadership of the Vermont Department of Corrections. In contrast, the
Virginia counterparts showed some single-loop learning gains with their performance
management reform process, while the Alabama performance management system for
corrections was completely ineffective.
As noted by Senge (1990) and Moynihan (2005), double-loop learning is
important to long-term success of a performance management system, and in their review
of the literature on strategic management, Bryson, Berry and Yang (2010) note the need
for academic research to focus on organizational learning. Therefore, testing to determine
if organizational learning could impact the effectiveness of a PMM system would be
beneficial, in contrast to the Yang and Hsieh approach to include organizational learning
in the scale to create the dependent variable.
In conclusion, there is little widespread, conclusive evidence to date of
significant effectiveness from state PMM systems for agency management purposes.
NPM has been viewed worldwide as a means to bring practices successfully used in the
private sector to enhance the effectiveness and efficiency of government. The
Reinventing Government movement, led by practitioners over twenty years ago, promoted
PMM as a potentially beneficial tool. There have been mixed levels of implementation by
states, but virtually all states have adopted some form of PMM system.
Academic literature indicates success has been mixed, but key practices and
processes appear to make a notable difference for success. Among these factors, it is
clear that a system approach is needed. Routines and processes to collect, compile, report,
review and subsequently incorporate information into action are important, with both an
internal and external focus. At all levels of government, the literature shows meetings
and forums for groups to review data and its implications can influence success. A
willingness to restructure processes and reengineer programs and organizations may be
beneficial, along with a willingness to experiment, innovate and persevere. Leadership
and support for PMM is important at all levels of the organization, and training and
ensuring staff have appropriate levels of skill in data analysis and communication can be
contributing factors (Hatry & Davies, 2011). For agencies, involving key stakeholders
may be important as well as seeking political support, and the legislature may be
important in this regard. This dissertation will modify a relatively new model to more
fully incorporate the findings in the literature to quantitatively examine state PMM
effectiveness and test potential significant independent variables based on the areas
discussed in this section.
Research Questions
The purposes of this study are to: 1) determine how the political environment,
stakeholder participation, organizational support and training affect the adoption and
managerial effectiveness of performance management in state agencies and 2) understand
the roles and impacts of organizational culture and organizational learning
on a state PMM system.
Consistent with the Yang and Hsieh model, the original eight hypotheses of those
authors will be tested. In addition, two hypotheses will be added based on the
literature review. These new hypotheses will test the relationships between: 1)
organizational culture and performance effectiveness and 2) organizational learning and
performance measurement effectiveness. These relationships are expected to be positive.
The hypotheses associated with these research questions are as follows:
H1: The level of performance measurement adoption is positively associated with the
managerial effectiveness of performance measurement.
H2: Organizational support is positively associated with the adoption and managerial
effectiveness of performance measurement.
H3: Technical training in performance measurement is positively associated with the
managerial effectiveness of performance management, but the relationship is mediated by
the adoption of performance measurement.
H4: Political support is positively associated with organizational support for
performance measurement.
H5: Political support is positively associated with higher levels of performance
measurement training.
H6: Political support has a direct, positive impact on the level of managerial
effectiveness of performance measurement but not on the level of adoption.
H7: External stakeholder participation is positively associated with the level of
managerial effectiveness of performance measurement but not with the adoption of
performance measurement.
H8: The level of external stakeholder participation in performance measurement is
positively affected by the level of political support that agencies receive and the level of
organizational support for performance measurement within agencies.
H9: Organizational culture positively impacts performance measure effectiveness.
H10: Organizational learning impacts performance measure effectiveness.
CHAPTER III
RESEARCH DESIGN AND METHODS
This research focuses on the relationships between factors external and internal to
a public organization and a PMM system, using a mid-level theoretical model developed by
Yang and Hsieh with modifications based on the literature. The research uses hypothesis
testing to address research questions and relies on data from a survey of agency leaders in
selected state agencies across the nation. The model forming the basis for this research
involved structural equation modeling, and that approach is also used in this research.
This research also considers whether an alternative, reduced form model would improve
the statistical fit for the relationships for the survey data collected, while being consistent
with theory and recent research in this area.
Human Subjects Review Approval
The Colorado Multiple Institutional Review Board (COMIRB) approved the
research project and issued a three-year certificate of exemption in June 2011; the
certificate was subsequently extended to June 2017. Respondents to the research surveys
work in state government; therefore, the approval process by the COMIRB was not
extensive. These state government employees are not a protected class. Further, the
research topics addressed by this study are not sensitive. Additional contact was made
with institutional COMIRB representatives from the University of Colorado Department
of Political Science and School of Public Affairs in October 2011 and March 2014,
respectively, for guidance.
The Underlying Model
Yang and Hsieh (2007) developed a model to assess managerial effectiveness of
the use of performance measures. Effectiveness was specified as a function of
stakeholder participation, organizational support, technical training, external political
support and adoption. Further, effectiveness was defined as a scale variable which
included trustworthiness, decision making, communication, budgeting, accuracy,
reliability, value, productivity, motivation, organizational learning and strategic planning.
These authors administered a survey to staff in every unit of Taipei government. The
survey response rate was approximately 61%. All independent variables were
significant, and organizational support was the strongest predictor of performance
measure adoption and effectiveness. External political support impacted performance
management effectiveness both directly and indirectly. The indirect effects were through
stakeholder participation, organizational support and technical training along with
performance measurement adoption; this latter impact occurred through organizational
support and technical training.
Hypothesis Testing
The research design used hypothesis testing to address the research questions.
Ten hypotheses were considered. The first eight hypotheses presented earlier were tested
by Yang and Hsieh in their mid-level model. In addition, the model was expanded to
consider two additional hypotheses, specifically that organizational culture and
organizational learning positively impact performance effectiveness.
Selected State Agencies
The survey was administered to key state agencies based on the agency typology
advanced by Wilson (1989). The four types of agencies are coping, procedural, craft and
production, and the distinctions between them are based on visibility of outputs and
outcomes. For a coping agency, neither outputs nor outcomes are visible. An example
is a higher education coordinating/governance agency. In contrast, a production agency
is one whose outputs and outcomes are visible. An example of this type of agency is a
state transportation department, whose completed highway projects are visible along with
the economic development and quality of life impacts of the transportation infrastructure.
A procedural agency produces visible outputs, but its outcomes are not visible. An
example of a procedural agency is a state personnel/human resource agency. The final
type of agency is a craft agency, whose outputs are not visible, but whose outcomes are
visible. An example of this type of agency is a state game, fish or wildlife agency, whose
responsibility for managing natural resources results in quality of life experiences for
consumption and non-consumptive users.
The Wilson concept overall is useful, but has specific limitations in that Wilson
applied this typology to federal agencies. Further, Wilson noted the distinctions between
agency types can be somewhat vague and blurred. Sampling within these agency types
was not necessary, and the entire population was the focus of this research. Other than
this typology approach to selecting four agency types for inclusion in the research,
distinctions between agency types were generally not made in the research,
although a discussion of analysis of variance is included.
Agency Leadership
The survey was targeted to agency leaders, rather than all agency employees as in
the Yang and Hsieh study. The focus on senior leadership is appropriate, as these leaders
have significant roles in setting organizational goals and objectives, using performance
metrics, impacting organizational outcomes, allocating resources, setting organizational
values and climate and working with external stakeholders to gain support (Ingraham,
Joyce & Donahue, 2003; Behn, 2006; Kelman & Myers, 2011; Villadsen, 2012;
Rabovsky, 2014). The perspectives of agency leadership may differ from those of other
employees (Frazier & Swiss, 2008). This approach was taken to enhance the response
rate through personalization of survey materials and to simplify administration of the
survey. For the online survey, collection of names and email
addresses for agency leaders can be easier than collecting names and email addresses for
all agency employees. Given the time demands for agency leaders, the survey instrument
offered respondents the opportunity to ask a senior staff member familiar with the
agency's performance metrics to complete and submit the form.
For those agency leaders who are the focus of this study, many are appointed by
the Governor, although some are appointed by boards and commissions. A few may be
elected officials. Given the widespread potential turnover of governors in the November
2010 general election, it was deemed advisable to wait until late 2011 to begin
administering the survey. In this way, newly appointed agency heads would have some
experience in their position and would have completed their first legislative session.
Survey Instrument
The majority of the survey questions were based on the Yang and Hsieh survey
instrument; however, a few additional questions were added. In particular, based on
theory and literature, survey questions and scaled variables were developed for the two
new variables added to the model, organizational culture and organizational learning
(see Appendix A1).
Survey items for managerial effectiveness in this research included eleven of
Yang's fourteen questions. Two questions to assess accuracy were excluded on the basis
that another question addressed whether performance results could be trusted, and it was
not anticipated that respondents would actually tell the researcher that their performance
indicators did not accurately reflect the quality of management or that those indicators
did not accurately reflect the work of the organization. The question on organizational
learning was included in the survey, but not used to develop the effectiveness scale for
this research, due to the inclusion of the organizational learning construct. Two of the
original survey questions for the dependent variable were restated to enhance clarity. All
of these items were measured on a seven-point agree/disagree Likert scale.
Survey items or questions for the independent variables included in the Yang model
were largely consistent in this research. Yang's original questions were used for
organizational support, external political support and performance measurement
adoption. Organizational support and external political support were measured on a
seven-point agree/disagree scale, while performance measurement adoption questions
were posed for a yes/no (dichotomous) answer. In the area of stakeholder participation,
Yang's original four questions were included, as well as three additional questions which
addressed open, public meetings and involvement by legislators in data review and data
use. These questions were measured on a seven-point agree/disagree Likert scale.
Technical training questions were measured on a five-point scale, and Yang's original
two questions along with one additional question on the extent of training over time were
included in this survey. In this way, every variable had at least three questions or items
in the survey.
Jung et al. (2009) found that there is no ideal instrument with which to assess
organizational culture. The authors recommend the researcher consider the research
purpose and need and how the information collected will be used. Of particular concern
for survey response is the length of the questionnaire. Survey items to assess
organizational culture were based on the organizational culture assessment instruments
developed by Cameron and Quinn (2006), but only six questions were included in order
to help reduce the total number of survey questions and potentially increase response
rates. These questions attempted to assess organizational culture characteristics of
entrepreneurial attitude, innovation, production-orientation, task and goal
accomplishment, growth and achievement. The organizational culture questions were
assessed on a seven-point agree/disagree Likert scale. The four types of state government
agencies in this study are expected to exhibit differences in organizational culture. This
analysis could enhance understanding of organizational culture in state agencies and the
associated relationship with performance effectiveness as well as other variables in the
study.
Survey items to assess organizational learning were based principally on the
work of Moynihan (2005), along with practitioner articles and information, as discussed
earlier in the rationale and importance of research chapter. These four survey items were
assessed on a dichotomous scale (yes/no) and focused on questioning basic outcomes and
analyzing alternative approaches, using data and science for decision making, and using
public forums and legislative hearings as venues to discuss issues and strategies.
Review and Testing of Survey Instrument
Pretests and pilot studies are useful to ensure the survey design and process will
be successful (Majumdar, 2008, p. 246). A pretest is an initial test of one or more
components of a survey, while a pilot study involves testing of the entire research
instrument in a form similar to the one that is used in the final survey. The pilot study
focuses on the areas of question format, survey questionnaire length and the data
collection process (p. 246). Pilot studies are costly and time consuming, but are considered
particularly important for large survey project success (p. 246).
Extensive review and testing of the draft survey instrument was conducted. To
test the draft survey instrument, three groups were asked to take the survey. The
individuals selected for the groups were not included in the final group of survey
respondents. Nonetheless, the individuals selected for these groups were somewhat
similar to the final survey participants. The first focus group to pilot the survey consisted
of 100 members of the Denver chapter of the Association of Government Accountants,
now known as the Accountability in Government organization. This association has been
actively involved in professional development with respect to performance metrics and
their various uses, including accountability and performance management. The draft
survey was administered to federal employees, based on their email addresses. Only two
individuals completed the online survey located at Survey Monkey resulting in a response
rate of only 2%. One potential respondent emailed this author that they worked in the
federal government and the survey was not applicable to them. Subsequently, the contact
letter was rewritten, emphasizing the recipient was being asked to test the survey as part
of a focus group. An additional five responses were received, for a total response rate of
7%. The second focus group consisted of other members of the Denver chapter of the
Association of Government Accountants, but this group was exclusively individuals
working in state and local government. This selection was based on the individuals'
email addresses. Again, survey response was minimal. Due to a low response rate from
the first two efforts, a third attempt at testing the survey was initiated. The selection
methodology relied on a snowball approach and focused on a group of state and federal
government employees and retirees, potentially familiar with performance metrics and
performance management, either through budget analysis or agency management. In
total, there were 17 responses.
Although the survey response rate was still quite low, the results and survey
instrument were reviewed. As a result, the survey instrument was revised somewhat to
enhance readability and to appear shorter (Presser, Couper, Lessler, Martin, Martin,
Rothgeb, & Singer, 2004; Umbach, 2004). Text was added to clarify the use of the
terms "organizational learning" and "stakeholder." The survey instrument was
reformatted to use radio buttons across the side and to include components of questions
within a single heading (Couper, Traugott, & Lamias, 2001). Both of these approaches
resulted in visually shortening the survey instrument to attempt to increase the survey
response rate (Vicente & Reis, 2010).
Using the pilot results, the author attempted to run a factor analysis to determine
which items might be strongly correlated and therefore were redundant. The pilot
response rate was so low that it was difficult to make any conclusions. No questions
were deleted as a result of this step of the analysis.
Data Collection Methods
The initial round of surveys was administered online and employed Dillman
survey techniques to increase response rates (Dillman, Smyth & Christian, 2009). The
survey instrument consisted of 25 total questions, many of which had several sub-
questions. This approach was used to attempt to address respondent potential concerns
about survey instrument length and attempt to increase the response rate. All survey
materials are available from the author, who can be contacted at
arley.williams@ucdenver.edu or awilli4@hotmail.com. The Dillman steps included pre-contact,
contact and follow-up contact letters. The contact and survey cover information provided
information about the purpose of the survey and potential uses of the survey results.
Respondents were assured that individual responses would be considered confidential.
The initial survey for each type of state agency was administered electronically
with email notifications. Using the Dillman approach, the second contact communication
contained a link to the survey instrument located on the internet at the Survey Monkey
website. This survey approach minimized cost. Further, communication regarding the
survey indicated an executive summary of results would be made available to all
respondents at the conclusion of the study. This report was intended to stimulate interest
in the research in an additional attempt to enhance the response rate.
After the initial round of electronic surveys, the response rate remained low;
dissertation committee members indicated the low response rate was problematic and
advised the author to seek a solution. After a survey methodology literature search, the
author decided to use a mixed method approach. Mixed methods involve using one form
to administer the survey (such as email and electronic questionnaire/compilation),
followed by an alternative method such as regular mail and paper (Millar & Dillman,
2011). As a result, the initial online survey was followed by the same survey being
administered to non-respondents via United States mail. The returned responses were
hand-entered into Survey Monkey, then proof-read and cross-checked. The paper survey
distribution resulted in a significant cost.
Using the Dillman technique, the author took particular care to ensure the official
appearance of all survey materials, particularly those related to the paper survey. This
included the use of quality paper, color printing and color logo stickers applied to every
page of the survey. Survey materials included a personalized letter using the University
of Colorado logo with addresses in Denver and Colorado Springs explaining the purpose
of the survey and signed by the committee chair and the doctoral student. The survey
packet included a cover page for the survey, the survey instrument and a self-addressed,
stamped return envelope with color-coding by type of agency. This color coding was
particularly helpful for any needed follow-up or cross-checking of data, particularly when
some respondents removed the cover letter and survey cover page from the survey packet.
Data collection involved exporting data from each Survey Monkey survey (total
of eight) using an SPSS download format for compilation. All survey data was compiled
into one dataset with a streamlined format to show only respondent number and answers
to each survey item. The downloaded data was cross-checked against original paper
surveys, and it was discovered that responses for two survey questions were miscoded in
the automated download. The dataset was corrected. The survey response data was
examined for missing values, and missing values were imputed using a regression
technique.
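As a sketch of this compilation and imputation step, the following code is illustrative only: the study used Survey Monkey's SPSS export and SPSS itself, and the file and column names shown here are hypothetical. It combines several exported files and applies one common regression-based imputation method.

```python
# Hypothetical sketch of compiling multiple SPSS survey exports and imputing missing values.
import glob
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer

# Read each exported .sav file (reading .sav files requires the pyreadstat package).
frames = [pd.read_spss(path) for path in sorted(glob.glob("survey_export_*.sav"))]
data = pd.concat(frames, ignore_index=True)

# Keep only the respondent identifier and item responses (hypothetical column names).
item_cols = [c for c in data.columns if c.startswith("q")]
data = data[["respondent_id"] + item_cols]

# Regression-based imputation: each item with missing values is predicted from
# the other items, iterating until the estimates stabilize.
imputer = IterativeImputer(max_iter=10, random_state=0)
data[item_cols] = imputer.fit_transform(data[item_cols])
```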
Sponsors/Endorsements
Consistent with the Dillman survey approach, this research sought endorsements
from relevant agencies to enhance the response rate. The survey was endorsed by three
associations: State Higher Education Executive Officers, Association of State Highway
and Transportation Officials and the Association of Fish and Wildlife Agencies. These
associations either publish membership contact information on their websites or provided
the information to the author. The National State Park Association also indicated a
willingness to participate, but was not willing to provide contact information. Given state
reorganizations between state park agencies and state wildlife agencies as well as lack of
contact information, a potential endorsement from the National State Parks Association
was not utilized. The author contacted the National Association of State Human
Resource Agencies requesting an endorsement, but that organization did not respond with
a willingness to endorse the survey and was unwilling to provide a list of members.
Subsequently, the author compiled contact information for state human
resources/personnel agencies. This was somewhat challenging for several reasons: 1)
information in published directories had become dated due to considerable turnover and
election transition; 2) differences between the states in cabinet structures; 3) the existence
of personnel offices within agencies, such as a personnel office within the Human
Services Department; and 4) significant reorganizations in the states, whether to centralize
or to decentralize functions, in both cases intended to generate budget savings. In some
cases, there was considerable difficulty in obtaining the email address of the agency head
to administer the survey electronically.
The Dillman Total Design methodology also encourages the use of token gifts to
enhance the response rate. Because the human resources/personnel agency survey did
not have an endorsement, the author contacted representatives of the COMIRB for
guidance to determine whether to file a request to be able to provide a token gift, such as
a book or textbook, to increase the response rate in 2012. Since the research was
originally rated as exempt, the author was directed to continue with the existing
COMIRB approval.
Data Analysis
Data analysis focused on factor analysis and structural equation modeling (SEM).
SPSS software was used for factor analysis to support the SEM. Exploratory factor
analysis was used to assess relationships between variables and items in a survey without
imposing a structural model, either from the Yang study or based on the literature
(Blunch, 2013). More specifically, exploratory factor analysis is a statistical
methodology to determine which survey items or questions can be grouped together
because of similar answers from respondents (Leech, Barrett, & Morgan, 2011, p. 65).
There are two main conditions necessary for exploratory factor analysis: 1) there must be
relationships between the variables and 2) a larger sample size yields more reliable factors.
The latter is particularly important relative to the number of variables being considered in
the model (Leech et al., 2011, p. 65).
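A minimal sketch of these pre-checks and of factor extraction, using the open-source factor_analyzer package rather than SPSS (an assumption made for illustration; `data` is a data frame holding the survey items as columns):

```python
# Sketch of exploratory factor analysis pre-checks and extraction (illustrative only).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def efa_prechecks(data: pd.DataFrame) -> None:
    # Bartlett's test of sphericity: is the correlation matrix different from an identity matrix?
    chi_square, p_value = calculate_bartlett_sphericity(data)
    # Kaiser-Meyer-Olkin measure of sampling adequacy: values above .70 are commonly
    # taken to indicate sufficient shared variance among items for factoring.
    _, kmo_overall = calculate_kmo(data)
    print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
    print(f"Overall KMO = {kmo_overall:.3f}")

def run_efa(data: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    # Extract factors with varimax rotation and return the item loadings.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(data)
    return pd.DataFrame(fa.loadings_, index=data.columns)
```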
SEM is a statistical technique to test and estimate causal relationships using
statistical data and causal assumptions (Leech et al., 2011; Schumacker & Lomax, 2004;
Blunch, 2013). Its use is appropriate for theory testing in part because the technique
defines a model to explain an entire set of relationships (Hair, Black, Babin, & Anderson,
2006). It is considered a stronger technique than multiple regression because it takes into
account interactions, nonlinearity, correlated independent variables, measurement error,
correlated error terms and latest variables. SEM illustrates relationships among
constructs, which are the dependent and independent variables, included in the analysis.
Constructs are unobservable or latent factors and are represented by a number of items
which attempt to measure that construct (ibid, 2009). Loadings represent the
relationships from constructs to variables, similar to factor analysis, while path estimate
represent relationships between the constructs, similar to coefficients in regression
analysis (ibid, 2009). SEM software, specifically AMOS published by SPSS, was used
for the SEM, because of its compatibility with SPSS and its ease in creating visual
diagrams (Arbuckle & Wothke, 1999). Survey Monkey provides the capability to
download data into SPSS file format. The SEM was broken into two phases:
development and testing of the model based on the research questions and associated data
collection, and respecification of the model to improve goodness of fit.
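As an illustration of this workflow, the sketch below specifies a measurement model (constructs measured by survey items) and a structural model (paths among constructs) in the open-source semopy package. This is an assumption made purely for illustration, since the study itself used SPSS AMOS, and the item groupings and paths shown simply anticipate the constructs reported later; they are not the study's code.

```python
# Illustrative SEM specification and fit assessment using semopy (not SPSS AMOS).
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model: latent constructs (=~) measured by survey items
effectiveness  =~ q2 + q4 + q5 + q8 + q9 + q10
participation  =~ q13 + q14 + q15
org_support    =~ q20 + q21 + q22
training       =~ q23 + q24 + q25
innovation     =~ q34 + q36 + q38

# Structural model: hypothesized paths (~) among constructs
effectiveness ~ org_support + training + participation
participation ~ org_support
org_support   ~ innovation
training      ~ innovation
"""

def fit_sem(data: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                       # maximum likelihood estimation by default
    estimates = model.inspect()           # factor loadings and path coefficients
    fit_stats = semopy.calc_stats(model)  # chi-square, df, GFI, CFI, TLI, RMSEA, etc.
    return estimates, fit_stats
```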
Validity and Reliability
Validity means measuring what one is supposed to measure (Giannatasio, 2008).
There are two types of validity. Internal validity indicates the causal variable caused the
change in the dependent variable; and external validity reflects that there is support for
the generalizing of results beyond the study group. To increase the validity of this study,
survey questions were based on questions from a prior survey administered by Yang and
Hsieh in Taipei. Their survey was reviewed by a peer review panel of five experts on
performance measurement and survey methodology. The review included the survey
objectives and a construct map relating the constructs to specific survey items. To further
address validity, the study focused on four different types of state agencies, rather than a
single agency. Further, significant efforts were made to attain as high a response rate to
the survey as possible to improve validity of the data.
Selection bias is a threat to validity when the study subjects are not randomly
chosen. The design of this survey could result in some bias in that certain types of state
agency leaders were chosen for inclusion. As well, some bias in responses may be
evident in that individuals participating in the survey may be particularly supportive of
the use of performance metrics for a variety of purposes. Nonetheless, these leaders'
perspectives are valuable, particularly given the need to understand potential factors
contributing to success of a PMM system.
Reliability is considered to be a measure that consistently operates in the same
manner (ibid, 2008). Additional survey questions are explicitly identified in the
appendix, and the reliability of survey items and their inclusion in the model to support
identified variables relied in part on calculations of Cronbach's alpha. The Yang and
Hsieh survey was considered reliable. The most commonly used measure of internal
consistency reliability is Cronbach's coefficient alpha. This measure indicates the
consistency of a multiple-item scale (Leech et al., 2011; Blunch, 2013).
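For reference, Cronbach's alpha for a scale can be computed directly from the item responses; the sketch below is illustrative, with `items` assumed to be a data frame holding one column per item in the scale.

```python
# Minimal computation of Cronbach's coefficient alpha for a multi-item scale.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                 # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of the individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```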
Limitations
There are several limitations to this research, its methodology, and the ability to
generalize to the larger population of state agencies. The most significant limitation is a
relatively small n included in the study, which affects both the ability to generalize and
the ability to test a complex SEM. In addition, in the area of public management and
performance effectiveness studies, Burke and Costello (2005) discuss the potential
difficulties of research methodologies when considering the results of local government
performance management implementation and effectiveness. These authors note survey
methods can overstate the level of success in implementing best practices such as
strategic planning and outcome measurements. Further, this research relies on survey
responses from agency leaders or their designees, in contrast to Yangs approach of
surveying the population of local government employees in Taipei. Frazier and Swiss
(2008) found large perceptual differences on NPM tools between management and lower-
level employees. Another limitation of this research is that the survey responses are
reflective of leaders at a particular point in time; the research would need to be replicated
at another time to determine if similar results could be obtained.
CHAPTER IV
RESULTS AND FINDINGS
This research is based on a mid-level model of the managerial effectiveness of
PMM, modified from its original publication, and tested based on a survey of leaders of
select state government agencies across the United States. Data analysis included
reviewing the data, conducting exploratory factor analysis and SEM. The quantitative
analysis of the survey response data was done in two phases. Phase I focused on testing
the proposed hypotheses using the SEM and paths as presented in the research proposal.
This approach is consistent with theory testing. Phase II focused on developing a second
model to improve statistical fit, while remaining consistent with underlying theory. This
second phase produced a reduced form model, which reflects the survey response data
collected. This approach is consistent with other studies utilizing SEM. According to
O'Rourke and Hatcher (2013), "... in most path analytic studies, the initial model fails to
provide adequate fit to the data and is subsequently modified" (p. 169).
Results
Using SPSS AMOS 22, the initial model proposed to the committee was visually
diagrammed and tested with the dataset. While the model ran and generated χ2
and fit statistics, the poor fit indicated an unacceptable model. Exploratory factor
analysis as well as model structure recommendations from Hair et al. (2006) and
modification indices through iterations of both the SEM and the factor analysis were then
used. Finally, SEM was used to confirm and evaluate the revised model.
Survey responses from leaders of four types of state government agencies were
combined into one dataset. Total number of responses was 115. The unit of analysis is
the agency leader. According to Hair et al. (2006), the minimum sample size for a SEM
depends on various factors, including the complexity of the proposed model as well as
communalities (average variance extracted among items) in each factor. With a small
sample size of 100 to 150, Hair et al. (2006) recommend a SEM can be adequately
estimated with five or fewer constructs. The proposed model in Phase I had eight
theoretical constructs which would ideally be estimated with a sample size greater than
500. Therefore, while the study took additional time to use a mixed-mode approach for
the survey to enhance the response rate, the small n was a limiting factor for testing the
proposed model due to its complexity.
Nonetheless, the exploratory factor analysis indicated the Kaiser-Meyer-Olkin
measure of sampling adequacy was .859, which was greater than .70, indicating sufficient
items for each factor (Leech et al., 2011). Bartlett's test of sphericity was significant
(less than .05), indicating that the correlation matrix was significantly different from an
identity matrix (ibid, 2011). Finally, the initial communalities represent the relation
between each variable and all other variables before rotation. If many or most
communalities are low (below .30), a small sample size is more likely to distort results (ibid,
2011), but in this case, initial communalities were at .424 or above.
Phase I
Consistent with the research proposal, the initial SEM was prepared to test for
confirmation of the research hypotheses. This initial model was developed with all items
from the survey responses; these items are shown in the appendix. This structural model
is shown below in Figure 2.
Figure 2: Phase I Model of Managerial Effectiveness of State Performance
Measurement System
The most commonly used measure of internal consistency reliability is
Cronbach's coefficient alpha, and this measure indicates the consistency of a multiple-
item scale (Leech et al., 2011). To assess whether the data from the eleven items creating
the effectiveness score formed a reliable scale, Cronbach's alpha (α) was computed,
along with α for the other constructs in the model. The α for external political support
(.62) indicated weak internal consistency. The α of .49 for the performance measure
adoption scale and the α of .463 for the organizational learning scale indicated
low reliability. The performance measure adoption scale would only be
improved to an α of .576 by deleting the item which asks whether the organization uses
satisfaction indicators to measure performance. The Cronbach's alpha for the
organizational learning scale could only marginally be improved by removing items.
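The "alpha if item deleted" check referred to here can be sketched as follows (illustrative only; `scale` is assumed to be a data frame of the scale's items):

```python
# Illustrative "alpha if item deleted" check for a scale held in a data frame.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(scale: pd.DataFrame) -> pd.Series:
    # Recompute alpha with each item removed in turn; a clearly higher value flags
    # an item that weakens the scale's internal consistency.
    return pd.Series({item: cronbach_alpha(scale.drop(columns=item))
                      for item in scale.columns})
```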
According to Hair et al. (2006), to assess predictive accuracy of a SEM, one
should consider a group of fit indices. These fit indices should include: χ2 and the
associated degrees of freedom; one absolute fit index (such as the Goodness-of-Fit Index
[GFI], Root Mean Square Error of Approximation [RMSEA] or Standardized Root Mean
Square Residual [SRMR]); one incremental fit index (such as the Comparative Fit Index
[CFI] or the Tucker-Lewis Index [TLI]); one goodness-of-fit index (such as GFI, CFI or
TLI); and one badness-of-fit index (such as RMSEA or SRMR). A general rule of thumb
is that there should be at least a value of .90 for the GFI, NFI, CFI and TLI.
For the GFI, the possible range of values is 0 to 1, with higher values indicating better fit.
In particular, one should focus on values of .90 to .95 or greater (ibid, p. 747). The CFI is
normed, so values will range between 0 and 1, with higher values indicating a better fit
between data and path models (ibid, p. 749). For the RMSEA and SRMR, which are
badness of fit measures, one would want lower values. For RMSEA, typical values are
below .10 for most acceptable models (ibid., p. 748).
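These rules of thumb can be summarized in a small helper for screening a fitted model; this is a sketch based on the cutoffs described above, not a formal statistical test.

```python
# Heuristic fit-index checks based on the rule-of-thumb cutoffs described above.
def assess_fit(chi_square: float, df: int, gfi: float, cfi: float, rmsea: float) -> dict:
    return {
        "normed_chi_square_ok": (chi_square / df) < 2.0,  # informal rule of thumb
        "gfi_ok": gfi >= 0.90,                            # goodness-of-fit index
        "cfi_ok": cfi >= 0.90,                            # comparative (incremental) fit index
        "rmsea_ok": rmsea < 0.10,                         # badness-of-fit; lower is better
    }

# Example with the Phase I values reported in Table 1:
# assess_fit(1361.338, 687, gfi=0.625, cfi=0.757, rmsea=0.093)
# -> only the normed chi-square criterion is met, consistent with the poor fit described below.
```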
Table 1 shows key fit indices for the Phase I model: model fit was poor. The
χ2/degrees of freedom (df) ratio was 1.98 (χ2 = 1,361, df = 687), which did meet the
traditional informal rule-of-thumb criterion that the ratio should be below 2; however,
values for the goodness-of-fit index (GFI, .625) and the comparative fit index (CFI, .757)
were less than 0.90, which indicated a poor fitting model. Further, the root mean square
error of approximation (RMSEA, .093) was higher than 0.08, also indicating an
unacceptable model fit.
Table 1: Fit Indices for Phase I and Phase II Models (n = 115)

Model      Chi-square   df    GFI     RMSEA   CFI     Chi-square/df
Phase I    1361.338     687   0.625   0.093   0.757   1.98
Phase II   215.015      128   0.833   0.077   0.938   1.68

Note: χ2 = chi-square; df = degrees of freedom; GFI = goodness-of-fit index;
RMSEA = Root Mean Square Error of Approximation; CFI = comparative fit index;
Chi-square/df = normed chi-square.
Phase II
In Phase II, there was exploratory factor analysis, consideration of each survey
question and review of theory to improve the fit of the measurement and structural
model. The model was iterated to attempt to improve the fit. Modification indices were
considered, and the final model resulted in elimination of the adoption construct, the
external political support construct and the organizational learning construct along with
numerous survey items. The resulting model appears to best reflect the patterns of
association within the dataset.
The final SEM is shown in Figure 3 and was developed such that the items behind
each construct fit well together and demonstrated high loadings representing the
relationships from constructs to variables. Three constructs were dropped from the final
model, and exploratory factor analysis was prepared with the remaining five constructs.
This was done because of the poor fit of the eight-construct model, and the need to
reduce model complexity to five or fewer constructs. It was not possible to drop items
from low performing scales to bring their associated internal consistency above the
recommended minimum of .70 (Hopkins, 1998), while maintaining the recommended
number of items (Hair et al., 2006). This approach was counterbalanced with a desire to
keep as many variables as possible for future research and testing.
Figure 3: Phase II Model of Managerial Effectiveness of State Performance
Measurement System
Table 2 reports the final construct reliabilities, item means, standard deviations
and factor loadings for the constructs and items contained in the Phase II model.
Estimates of internal consistency as measured by Cronbach's alpha are within acceptable
limits for virtually all study variables (i.e., α ≥ .70). These coefficients range from
.68 ≤ α ≤ .93, as reported in Table 2.
Table 2: Construct reliabilities, item means, standard deviations and factor loadings (N = 115)

Performance Measure Effectiveness (Cronbach's alpha = 0.93)
  q2  This organization's performance measurement helps managers make better decisions. (M = 5.47, SD = 1.20, loading = 0.87)
  q4  This organization's performance measurement helps communicate more effectively with many external groups. (M = 5.33, SD = 1.37, loading = 0.79)
  q5  This organization's performance measurement helps budget planning and decision making. (M = 5.30, SD = 1.32, loading = 0.80)
  q8  This organization's investment on performance management is worthwhile. (M = 5.75, SD = 1.30, loading = 0.82)
  q9  This organization's performance measurement improves productivity. (M = 5.00, SD = 1.42, loading = 0.89)
  q10 This organization's performance measurement motivates employees. (M = 4.60, SD = 1.40, loading = 0.86)

Stakeholder Participation (Cronbach's alpha = 0.68)
  q13 Citizens participate in designing this organization's performance indicators. (M = 3.28, SD = 1.61, loading = 0.92)
  q14 Elected officials participate in designing this organization's performance indicators. (M = 3.88, SD = 1.74, loading = 0.59)
  q15 Citizens help this organization evaluate performance. (M = 4.03, SD = 1.62, loading = 0.53)

Organizational Support (Cronbach's alpha = 0.88)
  q20 Top managers emphasize and care about the process of performance management. (M = 5.63, SD = 1.22, loading = 0.94)
  q21 Top managers value and treat seriously the results of performance management. (M = 5.55, SD = 1.24, loading = 0.96)
  q22 All offices and middle managers actively support performance management. (M = 4.82, SD = 1.37, loading = 0.66)

Technical Training (Cronbach's alpha = 0.87)
  q23 How much technical training has been provided to performance management staff? (M = 2.94, SD = 0.94, loading = 0.92)
  q24 How much technical training has been provided to managers and supervisors? (M = 2.81, SD = 0.85, loading = 0.82)
  q25 To what extent is training ongoing over time? (M = 2.67, SD = 0.85, loading = 0.77)

Innovation Culture (Cronbach's alpha = 0.79)
  q34 This agency is a very dynamic and entrepreneurial place. People are willing to stick their necks out and take risks. (M = 4.56, SD = 1.36, loading = 0.77)
  q36 The glue that holds this agency together is a commitment to innovation and development. There is an emphasis on being first. (M = 4.54, SD = 1.40, loading = 0.93)
  q38 My agency emphasizes growth and acquiring new resources. Readiness to meet new challenges is important. (M = 5.27, SD = 1.18, loading = 0.60)

Note: N = 115. M = item mean; SD = standard deviation; loading = standardized factor loading.
As shown in Table 1, which compares fit measures for the Phase I and Phase II
models, the latter model had an acceptable fit. The χ2/degrees of freedom (df) ratio
was 1.68 (χ2 = 215, df = 128), which met the traditional informal rule-of-thumb criterion
that the ratio should be below 2. In this case, the value for the goodness-of-fit index
(GFI, .833) was adequate. The comparative fit index (CFI, .938) was greater than 0.90,
and the root mean square error of approximation (RMSEA, .077) was lower than 0.08,
all indicating an acceptable model fit.
Indicators demonstrated convergent validity, as all t values for the loadings were
statistically significant, and the standardized factor loadings were nontrivial. All path
coefficients for the Phase II model were significant at the .05 level. This model is
theoretically grounded overall and proved to be strong. SPSS AMOS output with path
coefficients and significance is included in Appendix Table A2, and fit indices for the
final Phase II model are shown in Appendix Table A3.
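The analysis itself was estimated in SPSS AMOS, but an equivalent specification can be sketched with open-source tools. The fragment below is a hedged illustration using the Python package semopy (an assumption of this sketch, not software used in the study); the item-to-construct assignments follow Table 2, the structural paths follow Figure 3, and survey_responses.csv is a hypothetical file standing in for the survey data.

import pandas as pd
import semopy

# Measurement model (items from Table 2) and structural paths (Figure 3).
model_desc = """
effectiveness =~ q2 + q4 + q5 + q8 + q9 + q10
stakeholders  =~ q13 + q14 + q15
org_support   =~ q20 + q21 + q22
training      =~ q23 + q24 + q25
innovation    =~ q34 + q36 + q38
org_support   ~ innovation
training      ~ innovation
stakeholders  ~ org_support
effectiveness ~ org_support + training + stakeholders
"""

survey_df = pd.read_csv("survey_responses.csv")  # hypothetical: one row per respondent
model = semopy.Model(model_desc)
model.fit(survey_df)
print(model.inspect())            # parameter estimates (paths, loadings) with p-values
print(semopy.calc_stats(model))   # chi-square, df, GFI, CFI, RMSEA and related indices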
The assumption of equal variances across groups was tested using Levene's test and was met (F(3, 111) = .543, p = .654). ANOVA did not support rejecting the null hypothesis of equal means across agency types (F(3, 111) = 2.445, p = .067). The effect size for agency type was small (η² = .062), with agency type associated with just over 6% of the variance in effectiveness. This analysis supports the methodology of aggregating all agency responses into one dataset.
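A minimal sketch of this agency-type check, using scipy with hypothetical effectiveness scores for the four agency types (the study's actual responses are not reproduced here), is shown below.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical effectiveness scores for four agency types (stand-in data only).
groups = [rng.normal(5.2, 1.0, size=29) for _ in range(4)]

levene_stat, levene_p = stats.levene(*groups)   # equal-variance assumption across groups
f_stat, anova_p = stats.f_oneway(*groups)       # one-way ANOVA on effectiveness by agency type

# Effect size: eta-squared = SS_between / SS_total.
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()
print(levene_p, anova_p, round(ss_between / ss_total, 3))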
Returning to the hypotheses tested in this research, the reduced-form model excludes the adoption construct (associated with H1, H2, H3 and H7), the political support construct (associated with H4, H5, H6 and H8) and the organizational learning construct (associated with H10). The final model confirmed the following relationships:
Innovation culture positively impacts performance management effectiveness, but this impact is mediated by organizational support and training.
External stakeholder participation is positively associated with the level of managerial
effectiveness of performance measurement.
Organizational support has a positive impact on external stakeholder participation and
a positive impact on managerial effectiveness associated with performance
management.
Technical training is positively associated with performance management
effectiveness.
Model Comparison
Goodness-of-fit indices for the two models are presented in Table 1. The chi-square statistic is reported to enable comparison between the baseline Phase I model and the subsequent revised model. On the basis of these overall findings, the revised model appears to best reflect the patterns of association within the dataset. Revisions to the initial hypothesized model are theoretically tenable and led to improved model estimation.
The Phase II model appearing as Figure 3 is proposed as the accepted or final
model. Future SEMs and path analytic studies are recommended, and additional
recommendations are discussed in the conclusions chapter.
Findings
As noted by Yang and Hsieh (2007), this model must be interpreted with care. The paths represented in the model may be better suited to explanation than to prediction. Because of the interactions between constructs, some exogenous constructs may directly impact the endogenous construct of performance management effectiveness; however, these effects may also be indirect, operating through other constructs. Nonetheless, the model is consistent with theory, and it provides information on the significance and relative contributions of each construct to overall performance effectiveness for the survey responses in the study. Because PMM effectiveness does not differ significantly across agency types, these results show it is useful to consider what these agencies have in common.
In the reduced form model, the construct of performance measurement
effectiveness was created with six items from the original survey. These items
emphasize decision making, communication, budgeting, productivity, employee
motivation and the worthwhile investment in such a system. In this model, strategic
planning was not included in these items because that item did not load as highly as the remaining items. The exclusion of strategic planning may be due to some
responses suppressing its strength or some type of interaction effect. The inclusion of
decision making is consistent with the need to actually use performance data, which is
reflective of strong strategic planning processes (Bryson, 2011).
The revised model has three items loading on each of the remaining constructs:
innovation culture, stakeholder support, organizational support and training. Each of
these variables is significant in the model with a positive impact on performance
management effectiveness. In Phase II of the analysis, the survey responses to items used
to create the culture scale reflected an emphasis on innovation culture. The
organizational learning, external political support and adoption constructs were eliminated from the final model.
This study found support for the role of culture, particularly innovation culture, in positively impacting performance measurement effectiveness, although this impact is indirect. Culture is an important addition to this model of performance measurement for the
managerial effectiveness of state agencies (Schein, 1996). Culture incorporates artifacts,
beliefs, perceptions and behavior (Pettigrew, 1990, p. 416). The study's focus on executive perceptions is consistent with the role of culture in impacting performance management effectiveness; Damanpour and Schneider (2006) noted that "perhaps the most influential people affecting innovation and change in organizations are top executives" (p. 216). In
particular, the final construct in this model reflects an innovation culture, but its impact
on effectiveness is indirect. Innovative culture impacts organizational support and
training, which in turn impact effectiveness of the PMM system. The standardized path
coefficient for the relationship between innovation culture and organizational support was
.43, and the standardized path coefficient for the relationship between innovation culture
and training was .39.
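Because the culture effect is transmitted through mediators, its indirect contribution can be approximated by multiplying the standardized coefficients along each path, as is conventional in path analysis. The sketch below uses the coefficients reported in this chapter (the .43 and .39 paths above, and the .61 and .21 paths to effectiveness reported later in this section); it omits the longer route through stakeholder participation and is illustrative rather than additional AMOS output.

# Standardized path coefficients reported in this chapter.
paths = {
    ("innovation", "org_support"): 0.43,
    ("innovation", "training"): 0.39,
    ("org_support", "effectiveness"): 0.61,
    ("training", "effectiveness"): 0.21,
}
via_support = paths[("innovation", "org_support")] * paths[("org_support", "effectiveness")]
via_training = paths[("innovation", "training")] * paths[("training", "effectiveness")]
print(round(via_support, 2), round(via_training, 2), round(via_support + via_training, 2))
# Roughly .26 via organizational support, .08 via training, about .34 in total.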
This study initially treated organizational learning as a separate independent variable to ascertain its impact on other variables in the model; however, this construct
was weak and was eliminated from the final model. One wonders whether organizational
learning could be captured by an innovative culture. Based on this analysis and the
academic literature, it seems reasonable that innovation culture would be included as a
single variable, rather than both culture and organizational learning. Organizational
learning may be viewed as a learning culture (Yang, B., 2003). In particular, organizational culture may be considered among the factors influencing innovation (Sta. Maria, 2003).
Further, innovative culture is characterized by a learning orientation (Amabile, 1996;
Glynn, 1996) that contributes to innovation (Cohen and Levinthal, 1990) and to
performance effectiveness (de Lancer Julnes and Holzer, 2001). As shown in Table 2, note the particularly strong loading of .93 for the survey item stating that "the glue that holds this agency together is a commitment to innovation and development."
Organizational learning can be particularly important for an organization in a changing
environment, and it seems unlikely that the state agency leaders in this study are
operating in a static environment. Additional considerations regarding the scale for this
construct and the need for additional research are discussed in a later chapter.
Stakeholder support, organizational support and training are other significant
variables in this model, and these components of the model are consistent with academic
literature as discussed by Yang and Hsieh (2007). The stakeholder support variable
includes both citizen and elected official participation in designing performance
indicators as well as citizen evaluation of performance. The factor loading of .92 for citizen participation in designing performance indicators is particularly strong. The impact of stakeholder support on performance management effectiveness is positive but is the weakest relationship in this model, with a standardized path coefficient of .15. The organizational support factor includes survey items indicating that top managers emphasize and value the performance management process and results and that middle managers actively support performance management. Organizational support was found to be the
most important predictor of performance management effectiveness, and also had an
impact on stakeholders. The standardized path coefficient for the relationship between
organizational support and effectiveness was .61, and the standardized path coefficient
for the relationship between organizational support and stakeholders was .37. The
training construct includes items for manager, supervisor and staff training along with an
emphasis on training over time, and the standardized path coefficient for the relationship between training and effectiveness was .21. The ongoing training component was a new question not included in the Yang and Hsieh model and, as shown in Table 2, this item had a strong factor loading of .77.
The organizational support and training constructs were correlated with each other. This appears reasonable, as the two are conceptually related. Further, the identification of this correlation by AMOS may indicate that an additional factor, one not measured by the survey instrument, is influencing these relationships; additional research is needed to explore this possibility.
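If the model were specified in the syntax sketched earlier in this chapter, this correlation could be represented explicitly as a covariance between the two constructs, for example (again an illustration, not the AMOS specification used in the study):

org_support ~~ training

Here the "~~" operator denotes a covariance between organizational support and training in that model-description syntax.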
The exclusion of adoption seems reasonable and is particularly consistent with
the challenges discussed in the problem statement chapter. The adoption construct
consisted of items gauging the comprehensiveness of the measurement system, i.e. what
types of performance measures were used by the agency. Examples include input, output
and outcome measures. Adoption of a system is necessary, but not sufficient for
performance management success. In the early phases of adopting a system, there is a
great deal of discussion about the type of performance measures needed and the extent to
which agencies shift emphasis from input measures to output and, in particular, outcome measures. The emphasis on outcome measures can be especially challenging, because outcomes are often difficult to measure and may involve factors beyond the direct control of the agency. In the process of implementing performance management systems, some practitioners and policy makers were led to believe that results could come from adopting a system. Yet simply adopting a system or a set of performance measures is not enough for an organization to achieve effectiveness. As discussed in the problem
statement, practitioners expressed frustration that reporting performance measurement information did not seem to result in changes or improvements. The internal and external factors and the ongoing practices and processes for leaders, managers and staff addressed in other components of this analysis must also be in place.
In the reduced form model, the removal of the weak external political support
construct could be viewed, not as an exclusion, but rather as information that was
incorporated elsewhere. In particular, the stakeholder variable reflects the support of
elected political officials. Further, the survey items for external political support focused
on agency authority and autonomy, which may not be as important as other factors in
determining performance management effectiveness.
CHAPTER V
CONCLUSIONS
Throughout the fields of public management, public administration and public
policy, various schools of thought offer tools and conceptual innovations through which
to navigate the challenges facing government (Salamon, 2002). Many of these emphasize
the need for government to take a new direction, to engage in new and innovative
approaches.
This study contributes to research, theory and practice. It adds knowledge to the
literature by testing a middle-range theory using state government agencies in the United
States. Performance management can be considered as a system of interlinking factors
and approaches, and the complex relationships between the constructs are shown through
the use of SEM. While certain variables have direct impact on performance management
effectiveness, other effects are indirect. This empirical study helps to better understand
these mechanisms. This analysis finds many elements of the Yang and Hsieh (2007)
PMM model can be generalized to the United States, particularly state government,
despite the unique culture of Asia, where the model was originally tested. Wilson's agency typology was a useful way to select agencies for the study, but it did not provide the between-agency variance needed to test the related hypotheses. The methodology used in this study enhances the potential to use structural equation modeling by increasing sample size and contributes to a greater ability to generalize the results. The study's inclusion of state wildlife management agencies is distinctive and not found elsewhere in the public administration literature. The importance
of organizational support, technical training and stakeholder participation is reinforced in the results of this study.
This study incorporates organizational culture into a model of PMM effectiveness
and tests for its significance based on survey responses from state agency leaders. The
model supported the concept that organizational culture is an important factor in PMM,
but its impact was mediated by organizational support and training. For the state agencies in this study, innovation culture emerged as particularly important to the effectiveness of a PMM system.
This study can have value for practitioners to better understand the key factors
and practices needed to achieve success with a PMM. A better understanding of the role
of innovation, along with the characteristics of innovation, in achieving performance
management effectiveness is useful. This information can be particularly beneficial for
the dialogue about the importance of improving state agency performance results, given
the rule bound culture of government and reluctance of some stakeholders to accept
innovation and its associated risks and other implications. As stakeholders demand
performance improvements from government, there must be a recognition that innovation
culture is important for government leaders. Further, the importance of organizational
support and technical training cannot be overstated. In this author's view, the difficulty for practitioners in achieving effectiveness with these systems, combined with the mixed results of academic studies, may be contributing to an attitude of complacency on this
topic. As discussed in the problem statement, significant forces are demanding
performance improvements, and it is only through an understanding of the factors needed
for PMM success that these improvements can be made.
This research is useful for identifying elements of a PMM system that can contribute to
enhanced success. PMM systems can be successfully designed and implemented, but
maximum gains can be particularly limited by organizational capacity and the rule-based
culture of government. Sophisticated and creative approaches to management and
learning are needed to maximize the effectiveness of these systems; these successes may
be most likely to be achieved in the long-run.
As discussed in the limitations section, the study is limited by several factors, in
that responses are time-bound and can be influenced by personal bias and perception,
generalization of survey results is difficult, and leadership attributes are not an explicit
consideration in the model. That said, survey results and model analysis are generally
consistent with literature and other research.
Future research focused on state government is needed in a variety of areas. The
study could be expanded to other types of state agencies and extended beyond the
leadership of an agency to mid-level management and street-level workers. These
approaches would result in a larger set of respondents which would enrich the analysis.
In addition to allowing for greater model complexity, these methods would provide
opportunity to compare results between agencies and between organizational levels.
Additional case studies of successful (and unsuccessful) agencies and structured
interviews of leaders could add to the knowledge base on effective performance
management in state agencies. The study could be replicated over time to compare
results, including the impacts of different leaders, and to consider the literature on
executive replacement and organizational change. Survey results could be coupled with an analysis of actual performance data to move beyond the
survey of perceptions at a given snapshot in time. The use of performance management metrics and their application in times of fiscal constraint and crisis could be a particularly useful topic for future study. Finally, the role of the legislative branch in performance management systems needs to be better understood, as very little is available in the academic literature. Such an analysis could focus both on the role of the legislature in achieving managerial effectiveness of state agencies and, separately, on the effectiveness of legislative budget, decision and policy development processes.
REFERENCES
Abramson, M. A., & Behn, R. D. (2006). The varieties of CitiStat. Public Administration
Review, 66(3), 332-340.
Amabile, T. M. (1996). Creativity in context: Update to the social psychology of
creativity. Boulder, CO: Westview Press.
Ammons, D. (2001). Municipal benchmarks: Assessing local performance and
establishing community standards (2nd Ed.). Thousand Oaks, CA: Sage
Publications.
Ammons, D. (2002). Performance measurement and managerial thinking. Public
Performance & Management Review, 25(4), 344-347.
Arbuckle, J. & Wothke, W. (1999). Amos 4.0 user's guide. Chicago: SmallWaters Corporation.
Argyris, C., & Schon, D. (1978). Organizational learning: A theory of action perspective.
Reading, MA: Addison-Wesley.
Association of Government Accountants. (2008). Public attitudes toward government
accountability and transparency. Retrieved from:
http://www.agacgfm.org/AGA/ToolsResources/CCR/pollreport2008.pdf
Bardach, E. (2000). A practical guide for policy analysis: The eightfold path to more
effective problem solving. New York: Chatham House.
Barnard, C. I. (1956). The functions of the executive. Cambridge, MA: Harvard
University Press.
Barrett, K., & Greene, R. (2008, March). Measuring performance: The state management
report card for 08. Governing. Retrieved from:
http://www.issuelab.org/resource/measuring performance the state management
report card for 2008.
Barrett, K. & Greene, R. (2012, October). What killed Alabama's performance
measurement plan? Governing. Retrieved from:
http://www.governing.com/columns/smart-mgmt/col-what-killed-alabama-
performance-measurement-plan.html,
Basken, P. (2008, May). Community colleges in California feel the heat. Chronicle of
Higher Education. May 9, 2008. Retrieved from:
http://chronicle.com/article/Communitv-Colleges-in/7116/.
Behn, R. D. (1998). What right do public managers have to lead? Public Administration Review, 58(3), 209-224.
Behn, R. D. (2001). Rethinking democratic accountability. Washington, DC: Brookings
Institution Press.
Behn, R. D. (2006). The varieties of CitiStat. Public Administration Review, 66(3):332-
340.
Behn, R. D. (2008). Adoption of innovation: The challenge of learning to adapt tacit
knowledge. In Sandford, B. (Ed.), Innovations in government: Research,
recognition and replications (pp. 138-158). Washington, DC: Brookings
Institution.
Behn, R. D. (2008). Designing PerformanceStat. Public Performance and Management
Review, 32(2), 206-235.
Blunch, N. (2013). Introduction to structural equation modeling using IBM SPSS
statistics and AMOS (2nd Ed.). Los Angeles: Sage.
Borins, S. (2000). Loose cannons and rule breakers, or enterprising leaders? Some
evidence about innovative public managers. Public Administrative Review, 60(6),
498-507.
Bourdeaux, C. (2005, November) Do legislatures matter in budgetary reform? Paper
presented at the annual meeting of the Association of Budgeting and Financial
Management of the American Society for Public Administration, Washington,
DC.
Bourdeaux, C. (2006). Do legislatures matter in budgetary reform? Public Budgeting &
Finance, 26(1), 120-142.
Bourdeaux, C. & Chikoto, G. (2008). Legislative influences on performance management
reform. Public Administration Review, 68(2), 253-265.
Bozeman, B., & Kingsley, G. (1998). Risk culture in public and private organizations.
Public Administration Review, 58(2), 109-118.
Brudney, J. L., Herbert, F. T., & Wright, D. S. (1999). Reinventing government in the
American states: Measuring and explaining administrative reform. Public
Administration Review, 59(1), 19-30.
Bryson, J. M. (2011). Strategic planning for public and nonprofit organizations: A guide
to strengthening and sustaining organizational achievement (4th Ed.). San
Francisco: Jossey-Bass.
Bryson, J. M., Berry, F. S., & Yang, K. (2010). The state of public strategic
management research: A selective literature review and set of future directions.
The American Review of Public Administration, 40(5), 495-521.
Burke, B. F., & Costello, B. C. (2005). The human side of managing for results.
American Review of Public Administration, 35(3), 270-286.
Burnett, J. (2013). Eye on the prize: States looking at goals, outcomes for budget
decisions. Capitol Ideas, 56(2), 12-15.
Caiden, N. (2010). Challenges confronting contemporary public budgeting:
Retrospectives/prospectives from Allen Schick. Public Administration Review,
70(2), 203-210.
Cameron, K. S., & Quinn, R. E. (2006). Diagnosing and changing organizational
culture: Based on the competing values framework. San Francisco: Jossey-Bass.
Chenok, D. J., Kamensky, J. M., Keegan, M. J. & Ben-Yehuda, G. (2013). Six trends
driving change in government. Washington, D.C.: IBM Center for the Business
of Government.
Coggburn, J., & Schneider, S. (2003). The quality of management and government
performance: An empirical analysis of the American states. Public Administration
Review, 63(2), 206-213.
Cohen, W. M. & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on
learning and innovation. Administrative Science Quarterly, 36, 128-143.
Common, R. (2004). Organisational learning in a political environment: Improving
policy-making in UK government. Policy Studies, 25(1), 35-49.
Council of State Governments (no date). The State Comparative Performance
Measurement Project. Retrieved from:
http://www.csg.org/programs/policyprograms/CPM.aspx
Couper, M., Traugott, M., & Lamias, M. (2001). Web survey design and administration.
Public Opinion Quarterly, 65(2), 230-253.
Cyert, R. M., & March J. G. (1963). A behavioral theory of the firm. Englewood Cliffs,
N.J.: Prentice Hall.
Damanpour, F. & Schneider, M. (2006). Phases of the adoption of innovation in
organizations: Effects of environment, organization and top managers. British
Journal of Management, 17, 215-236.
Damanpour, F. & Schneider, M. (2009). Characteristics of innovation and innovation
adoption in public organizations: Assessing the role of managers. Journal of
Public Administration Research and Theory, 19(3), 495-522.
de Lancer Julnes, P. & Holzer, M. (2001). Promoting the utilization of performance
measures in public organizations: An empirical study of factors affecting
adoption and implementation. Public Administration Review, 61(6), 693-708.
DeLeon, L., & Denhardt, R. B. (2000). The political theory of reinvention. Public
Administration Review, 60(2), 89-97.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode
surveys: The tailored design method. Hoboken, N.J.: Wiley & Sons.
Edmondson, A. (2008). The competitive imperative of learning. Harvard Business
Review, 86(7/8), 60-67.
Executive Office of the President of the United States. (2013). President Obama's Plan
for Higher Education. Retrieved from:
http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-
plan-make-college-more-affordable-better-bargain-
Fiol, C. M, & Lyles, M. (1985). Organizational learning. Academy of Management
Review, 19, 803-13.
Fitzpatrick, J., Goggin, M., Heikkila, T., Klingner, D. E., Machado, J. & Martell, C. (2011). A new look at comparative public administration: Trends in research and an agenda for the future. Public Administration Review, 71(6), 821-830.
Fleenor, J., & Bryant, C. (2002, April). Leadership effectiveness and organizational
culture: An exploratory study. Paper presented at the meeting of the Society for
Industrial and Organizational Psychology, Toronto, Canada.
Frazier, M., & Swiss, J. (2008). Contrasting views of results-based management tools
from different organizational levels. International Public Management Journal,
11(2), 214-234.
Frederickson, H. (2002). Confucius and the moral basis of bureaucracy. Administration &
Society, 33(6), 610-628.
Garnett, J., Marlowe, J., & Pandey, S. (2008). Penetrating the performance predicament:
Communication as a mediator or moderator of organizational culture's impact on
public organizational performance. Public Administration Review, 68(2), 266-
281.
Garrett, G. & Reindl, T. (2013). Beyond completion: Getting answers to the questions
governors ask to improve postsecondary outcomes. National Governors
Association Center for Best Practices. Washington, D.C. Retrieved from:
http://www.nga.org/files/live/sites/NGA/files/pdf/2013/1309BevondCompletionP
aper.pdf
Giannatasio, N. A. (2008). Threats to validity in research designs. In Miller, G., & Yang,
K. (Eds.), Handbook of research methods in public administration. New York,
NY: CRC Press.
Gilmour, J., & Lewis, D. (2006). Assessing performance budgeting at OMB: The
influence of politics, performance, and program size. Journal of Public
Administration Research and Theory, 16(2), 169-186.
Glynn, M. A. (1996). Innovative genius: A framework for relating individual and
organizational intelligences to innovation. Academy of Management Review, 21,
1081-1112.
Governmental Accounting Standards Board. (no date). Project pages: Service efforts and accomplishments reporting. Retrieved from: http://www.gasb.org/cs/ContentServer?pagename=GASB/GASBContent_C/ProjectPage&cid=1176156646053
Gruenig, G. (2001). Origin and theoretical basis of new public management models.
International Public Management Journal, 4(1), 1-25.
Hair, J. F., Jr., Black, W. C., Babin, B. J. & Anderson, R. E. (2006). Multivariate data
analysis (6th Ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Handy, C. (1993). Understanding organizations. London: Penguin Books.
Hatry, H. (2008) Emerging developments in performance measurement: An international
perspective. In de Lancer Julnes, P., Berry, F. S., Aristigueta, M. P., & Yang, K.
(Eds.), International handbook of practice-based performance management (pp.
3-24). Los Angeles: Sage Publications.
Hatry, H. & Davies, E. (2011). A guide to data-driven performance reviews.
Washington, DC: IBM Center for the Business of Government.
Heinrich, C. (2007). Evidence-based policy and performance management: Challenges
and prospects in two parallel movements. American Review of Public
Administration, 37(3), 255-277.
Hibbing, J., & Theiss-Morse, E. (2002). Stealth democracy: Americans' beliefs about how
government should work. New York: Cambridge University Press.
Hibbing, J., & Theiss-Morse, E. (1995). Congress as public enemy: Public attitudes
toward American political institutions. New York: Cambridge University Press.
Higher Learning Commission. (2013). The criteria for accreditation: Guiding values.
Chicago, IL. Retrieved from: https://www.ncahlc.org/Criteria-Eligibilitv-and-
Candidacv/guiding-values-new-criteria-for-accreditation.html.
Hill, C. J, & Lynn, L.E., Jr. (2005). Is hierarchical governance in decline? Evidence from
empirical research. Journal of Public Administration Research and Theory, 15(2),
173-195.
Holzer, M., Mullins, L. B., Ferreira, M. & Hoontis, P. (2012, October). Implementing
performance budgeting at the state level: Lessons learned from New Jersey. Paper presented at the Association of Budgeting and Financial Management (ABFM) Annual Conference, New York, NY.
Hood, C. (1995). The New Public Management in the 1980s: Variations on a theme.
Accounting, Organizations and Society, 20(2/3), 93-109.
Ho, A. T. (2011). PBB in American local governments: It's more than a management
tool. Public Administration Review, 71(3), 391-401.
Hopkins, K. D. (1998). Educational and psychological measurement and evaluation (8th
Ed.). Needham Heights, MA: Allyn & Bacon.
Hou, Y., Lunsford, R. S., Sides, K. C. & Jones, K. A. (2011). State performance-based
budgeting in boom and bust years: An analytical framework and survey of the
states. Public Administration Review, 71(3), 370-388.
Huber, J., & Shipan, C. (2000). The costs of control: Legislators, agencies and transaction
costs. Legislative Studies Quarterly, 25(1), 25-52.
Ingraham, P. W., Joyce, P., & Kneedler-Donahue, A. (2003). Government performance:
Why management matters. Baltimore, MD: Johns Hopkins University Press.
Ingraham, P. W. & Lynn, L.E., Jr. (2004). The art of governance: Analyzing
management and administration. Washington, DC: Georgetown University
Press.
Joyce, P. G. (2003). Linking performance and budgeting: Opportunities in the federal
budget process. Washington, DC: IBM Center for the Business of Government.
Joyce, P. G. (2011). The Obama administration and PBB: Building on the legacy of
federal performance-informed budgeting? Public Administration Review, 71(3),
356-367.
Jung, T., Scott, T., Davies, H. T. O., Bower, P. Whalley, D., McNally R., & Mannion, R.
(2009). Instruments for exploring organizational culture: A review of the
literature. Public Administration Review, 69(6), 1087-1096.
Kelleher, C., & Wolak, J. (2007). Explaining public confidence in the branches of state
government. Political Research Quarterly, 60(4), 707-721.
Kelman, S. and Myers, J. (2011). Successfully achieving ambitious goals in government:
An empirical analysis. The American Review of Public Administration, 41(3),
235-262.
Kettl, D. F. (1998). Reinventing government: A five-year report card. Washington, DC:
Brookings Institution Press.
Kettl, D. F. (2005). The global public management revolution (2nd Ed.). Washington: DC:
Brookings Institution Press.
Kettl, D. F., & Fesler, J. W. (2005). The politics of the administrative process (4th Ed.).
Chatham, NJ: Chatham House Publishers.
Kettl, D. F. & Kelman, S. (2007). Reflections on 21st century government management.
Washington, DC: IBM Center for the Business of Government.
Khademian, A. M. (2000). Is silly putty manageable? Looking for the links between
culture, management, and context. In Brudney, J.L., O'Toole, L. J., & Rainey, H.
G. (Eds.), Advancing public management: New developments in theory, methods
and practice (pp. 33-48). Washington, DC: Georgetown University Press.
Kim, Y. (2010). Improving performance in U.S. state governments: Risk-taking,
innovativeness, and proactiveness practices. Public Performance and
Management Review, 34(1), 104-129.
Klingner, D. E. (2006). Diffusion and adoption of innovations: A development
perspective. In Innovations in governance and public administration: Replicating
what works. New York, NY: United Nations Report ST/ESA/PAD/SER.E/72.
Klingner, D. E., & Sabet, M. G. (2005). Knowledge management, organizational
learning, innovation and technology transfer: what they mean and why they
matter. Comparative Technology Transfer and Society, 3(3), 199-210.
Kroll, A. (2012) Why public managers use performance information: Concepts, theory,
and empirical analysis (Doctoral dissertation, University of Potsdam).
Leech, N. L., Barrett, K. C., & Morgan, G. A. (2011). SPSS for intermediate statistics:
Use and interpretation (4th Ed.). New York, NY: Routledge Taylor and Francis
Group.
Light, P. C. (1998). Tides of reform. New Haven, CT: Yale University Press, p. 179-
215.
Liner, B., Hatry, H., Vinson, E., Allen, R., Dusenbury, P., Bryant, S. & Snell, R. (2001).
Making results based state government work. Washington, DC: The Urban
Institute.
Majumdar, S. (2008). Using the survey as an instrument of inquiry in research. In Miller,
G., & Yang, K. (Eds.), Handbook of research methods in public administration
(p. 246). New York, NY: CRC Press.
McGregor, E. B., Jr. (2000). Making sense of change. In Brudney, J.L., O'Toole, L. J. &
Rainey, H. G. (Eds.), Advancing public management: New developments in
theory, methods and practice (pp. 127-152). Washington, DC: Georgetown
University Press.
McNabb, D., & Sepic, F. (1995). Culture, climate and total quality management:
Measuring readiness for change. Public Productivity and Management Review,
18(4), 369-385.
Melkers, J., & Willoughby, K. (1998). The state of the states: Performance-based
budgeting requirements in 47 out of 50. Public Administration Review, 58(1), 66-
73.
Melkers, J., & Willoughby, K. (2001). Budgeters' views of state performance-budgeting
systems: Distinctions across branches. Public Administration Review, 61(1), 54-
64.
Melkers J., & Willoughby, K. (2004). Staying the course: The use of performance
measurement in state governments. Washington, DC: IBM Center for the
Business of Government.
Millar, M. & Dillman, D. (2011). Improving response to web and mixed-mode surveys.
Public Opinion Quarterly, 75(2), 249-269.
Moore, M. H., & Braga, A. A. (2003). Measuring and improving police performance:
The lessons of CompStat and its progeny. Policing, 26(3), 439-453.
Morgan, S. L. & McCall, S. (2012, March). A systems approach to implementing
performance-based management and budgeting. Association of Government
Accountants audio conference.
Moynihan, D. (2005). Goal-based learning and the future of performance management.
Public Administration Review, 65(2), 203-216.
Moynihan, D. (2006). Managing for results in state government: Evaluating a decade of
reform. Public Administration Review, 66(1), 77-89.
Moynihan, D. & Lavertu, S. (2012). Does involvement in performance management
routines encourage performance information use? Evaluating GPRA and PART.
Public Administration Review, 72(4), 592-602.
Moynihan, D. & Pandey, S. (2005). Testing how management matters in an era of
government by performance management. Journal of Public Administration
Research and Theory, 15(3), 421-439.
Moynihan, D. & Pandey, S. (2010). The big question for performance management: Why
do managers use performance information? Journal of Public Administration
Research and Theory, 20(4), 849-866.
Moynihan, D. (2013). The new federal performance system: Implementing the GPRA
Modernization Act. Washington, DC: IBM Center for the Business of
Government.
National Performance Management Advisory Commission. (2010). Retrieved from:
http://pmcommission.org/index.php?option=com_content&task=view&id=14&Itemid=26
National Conference of State Legislatures. (2003). Legislating for results. Denver,
Colorado.
Newcomer, K. & Caudle, S. (2011). Public performance management systems:
Embedding Practices for improved success. Public Performance and
Management Review, 35(1), 108-132.
O'Neill, R., Jr. (2013). Local government's enduring reinvention imperative. Governing.
Retrieved from: http://www.governing.com/columns/smart-mgmt/col-
reinventing-government-book-osborne-gaebler-impact-local-innovation-
principles.html
O'Rourke, N. & Hatcher, L. (2013). A step-by-step approach to using SAS for factor
analysis and structural equation modeling. (2nd ed.). Cary, NC: SAS Institute
Inc.
Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial
spirit is transforming the public sector. Reading, MA: Addison-Wesley
Publishing Company, Inc.
Ott, S. J. (1989). The organizational culture perspective. Belmont, CA: Wadsworth.
Ott, S. J. (1995). TQM, organizational culture, and readiness for change. Public
Productivity and Management Review, 18(4), 365-368.
Ott, S. J., & Shafritz, J. (1994). Toward a definition of organizational incompetence: A
neglected variable in organization theory. Public Administration Review, 54(4),
370-377.
Parkin, M. (2003). Microeconomics. Boston, MA: Addison Wesley.
Pattison, S. D. (2012). Performance information impacts on state budgets and
government management. Retrieved from: http://www.nasbo.org/budget-
blog/performance-information-%E2%80%93-impacts-state-budgets-and-
government-management
Pawlowsky, P. (2001). The treatment of organizational learning in management science.
In Dierkes, M., Antal, A., Child, J. & Nonaka, I. (Eds.), Handbook of
organizational learning and knowledge (pp. 61-88). London: Oxford University
Press.
Perrin, B. (2006). Moving from outputs to outcomes: Practical advice from governments
around the world. Washington, DC: IBM Center for the Business of Government.
Pettigrew, A. M. (1990). Organizational climate and culture: Two constructs in search of
a role. In Schneider, B. (Ed.), Organizational Climate and Culture (pp. 413-433).
San Francisco, CA: Jossey-Bass.
Pew Center on the States. (2008). Grading the States 2008. Retrieved March 2, 2008,
from http://www.pewcenteronthestates.org/template page.aspx?id=35360.
Piotrowski, S. J. & Rosenbloom, D. H. (2002). Nonmission-based values in results-
oriented public management: The case of freedom of information. Public
Administration Review, 62(6), 643-657.
Pollitt, C. (2006). Performance management in practice: A comparative study of
executive agencies. Journal of Public Administration Research and Theory,
16(1), 25-44.
Poister, T. H., Pasha, O. Q. & Edwards, L. H. (2013). Does performance management
lead to better outcomes? Evidence from the U.S. public transit industry. Public
Administration Review, 73(4), 625-636.
Presser, S., Couper, M., Lessler, J., Martin, E., Martin, J., Rothgeb, J. & Singer, E.
(2004). Methods for testing and evaluating survey questions. Public Opinion
Quarterly, 68 (1), 109-130.
Rabovsky, T. M. (2014). Using data to manage for performance at universities. Public
Administration Review, 74(2), 260-272.
Radin, B. A. (1998). The government performance and results act (GPRA): Hydra-
headed monster or flexible management tool? Public Administration Review,
58(4), 307-316.
Radin, B. A. (2011). Federalist No. 71: Can the federal government be held accountable
for performance? Public Administration Review, Special Issue, S128-S133.
Rainey, H. G. (2003). Understanding and managing public organizations (3rd ed.). San
Francisco, CA: Jossey-Bass.
Reilly, B. (2007). Democratization and electoral reform in the Asian-Pacific region.
Comparative Political Studies, 40(11), 1350-1371.
Rojas, F. M. (2012). Recovery Act transparency: Learning from states' experiences.
Washington, DC: IBM Center for the Business of Government.
Rosenbloom, D. H. (1983). Public administrative theory and the separation of powers.
Public Administration Review, 43(3), 219-227.
Rubin, I. (2005). The state of state budget research. Public Budgeting and Finance,
25(45), 46-67.
Salamon, L. M., Ed. (2002). The tools of government: A guide to the new governance.
New York, NY: Oxford University Press.
Schein, E. H. (1996). Culture: The missing concept in organizational studies.
Administrative Science Quarterly, 41(2), 229-240.
Schein, E. H. (2002). Organizational culture and leadership. San Francisco, CA: Jossey-
Bass.
Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation
modeling (2nd Ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Publishers.
Senge, P. M. (1990). The fifth discipline. New York, NY: Doubleday.
Smith, K., Cheng, R., Smith, O., & Schiffel, L. (2008). Performance reporting by state
agencies: bridging the gap between current practice and GASB-suggested criteria.
Journal of Government Financial Management, 57(2), 42-47.
Sta. Maria, R. (2003). Innovation and organizational learning culture in the Malaysian
public sector. Advances in Developing Human Resources, 5(2), 205-214.
State of Washington government management accountability program: Hearing before
the New Mexico Legislative Finance Committee, testimony of Arley Williams
(2008), p. 7. Retrieved from:
(http://www.nmlegis.gov/lcs/minutes/lfcminmay06.08.pdf)
Stillman, R. J. (1996). The American bureaucracy: The core of modern government (2nd
ed.). Chicago, IL: Nelson-Hall Publishers.
Tandberg, D.A. & Hillman, N. W. (2013). State performance funding for higher
education: Silver bullet or red herring? Wisconsin Center for the Advancement of
Postsecondary Education. Retrieved from:
http://www.wiscape.wisc.edu/docs/WebDispenser/wiscapedocuments/pb018.pdf?
sfvrsn=4
Terry, L. (1998). Administrative leadership, neomanagerialism and the public
management movement. Public Administration Review, 58(3), 194-200.
Thompson, J. (2000). Reinvention as reform: Assessing the national performance review.
Public Administration Review, 60(6), 508-521.
Umbach, P. D. (2004). Web surveys: Best practices. New Directions for Institutional
Research (121), 23-38.
United States Department of Transportation. (2013). Fact sheets: MAP-21 (Moving Ahead for Progress in the 21st Century). Retrieved from: http://www.fhwa.dot.gov/map21/factsheets/
United States Department of Transportation. (2013). Fact sheet: MAP-21 (Moving Ahead for Progress in the 21st Century) performance management. Retrieved from: http://www.fhwa.dot.gov/map21/factsheets/pm.cfm
United States Government Accountability Office. (2013). Data-driven performance reviews show promise but agencies should explore how to involve other relevant agencies. GAO-13-228. Retrieved from: http://www.gao.gov/assets/660/652426.pdf
United States Government Accountability Office. (2013). Executive branch should more
fully implement the GPRA Modernization Act to address pressing governance
challenges. GAO-13-518. Retrieved from:
http://www.gao.gov/assets/660/655541.pdf
Van Wart, M. (2005). Organizational investment in employee development. In Condrey,
S. (Ed.), Handbook of human resource management in government (2nd ed.) (pp. 272-
294). San Francisco, CA: Jossey-Bass.
Villadsen, A. R. (2012). New executives from inside or out? The effect of executive
replacement on organizational change. Public Administration Review, 72 (5),
731-740.
Vicente, P. & Reis, E. (2010). Using questionnaire design to fight nonresponse bias in web
surveys. Social Science Computer Review, 28(2), 251-267.
Walker, R. M. (2008). An empirical evaluation of innovation types and organizational
and environmental characteristics: Towards a configuration approach. Journal of
Public Administration Research and Theory, 18(4), 591-615.
Walker, R. M., Damanpour, F., & Devece, C. A. (2011). Management innovation and
organizational performance: The mediating effect of performance management.
Journal of Public Administration Research and Theory, 21(2), 367-386.
Wechsler, B. (1994). Reinventing Florida's civil service system: The failure of reform.
Review of Public Personnel Administration, 14, 64-75.
Wellman, M., & VanLandingham, G. (2008). Performance-based budgeting in Florida:
Great expectations, more limited reality. In de Lancer Julnes, P., Berry, F. S.,
Aristigueta, M. P., & Yang, K. (Eds.), International handbook of practice-based
performance management (pp. 321-340). Los Angeles, CA: Sage Publications.
Wilson, J. (1989). Bureaucracy: What government agencies do and why they do it. New
York, NY: Basic Books.
Yang, B. (2003). Identifying valid and reliable measures for dimensions of a learning
culture. Advances in Developing Human Resources, 5(2), 152-162.
Yang, K., & Hsieh, J. (2007). Managerial effectiveness of government performance
measurement: Testing a middle-range model. Public Administration Review,
67(5), 861-879.
APPENDIX
Table A1: Comparison of Survey Research Instruments
Managerial effectiveness of performance measure (dependent variable)
Columns: Yang's original question (No. and purpose) | Included in Yang's survey | Williams' survey question No. | Included in Williams' final survey
1 This organization's performance measurement results can be trusted. (V1: trustworthy) | XXX | Q2-0001 q1 | XXX
2 This organization's performance measurement can help managers make better decisions. (V2: decision making) | XXX | Q2-0002 q2 | Restated: This organization's performance measurement helps managers make better decisions.
3 This organization's performance measurement helps communicate more effectively with elected officials. (V3: communication) | XXX | Q2-0003 q3 | XXX
4 This organization's performance measurement helps communicate more effectively with many external groups. | XXX | Q2-0004 q4 | XXX
5 This organization's performance measurement helps budget planning and decision making. (V4: budgeting) | XXX | Q2-0005 q5 | XXX
6 This organization's performance indicators accurately reflect the quality of management. (V5: accurate) | XXX | Not Used |
7 This organization's performance indicators accurately reflect the work of the organization. | XXX | Not Used |
8 The data in the performance measurement system is accurate. | XXX | Q3-0001 q6 | XXX
9 This organization's performance indicators are reliable. (V6: reliable) | XXX | Q3-0002 q7 | XXX
10 This organization's investment on performance management is worthwhile. (V7: value) | XXX | Q3-0003 q8 | XXX
11 This organization's performance measurement improves productivity. (V8: productivity) | XXX | Q3-0004 q9 | XXX
12 This organization's performance measurement motivates employees. (V9: motivation) | XXX | Q3-0005 q10 | XXX
13 This organization's performance measurement stimulates organizational learning. (V10: learning) | XXX | q11 | Not Used in Arley's Model
14 This organization's performance measurement results are used to adjust strategic planning. (V11: strategic planning) | XXX | Q3-0007 q12 | Restated: This organization's performance measurement results are used in strategic planning.
Note: Items measured on a seven-point agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.


Stakeholder participation (independent variable)
Columns: Yang's original question (No. and purpose) | Included in Yang's survey | Williams' survey question No. | Included in Williams' final survey
1 Citizens participate in designing this organization's performance indicators. (V12) | XXX | Q4-0001 q13 | XXX
2 Elected officials participate in designing this organization's performance indicators. (V13) | XXX | Q4-0002 q14 | XXX
3 Citizens help this organization evaluate performance. (V14) | XXX | Q5 q15 | XXX
4 Stakeholders are familiar with the results of this organization's performance management. (V15) | XXX | Q6 q16 | XXX
5 There are open, public meetings of elected or appointed officials and program staff to discuss performance results. | | Q7 q17 | XXX
6 Data is reported to and reviewed by legislators and their staff. | | Q8-0001 q18 | XXX
7 Data is used by legislators and legislative staff for decision making. | | Q8-0002 q19 | XXX
Note: Items measured on a seven-point agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.



PAGE 15

4 Most state governments have implemented systems for performance management and budgeting; h owever, many have not implemented all components of the typical system (Brudney, Hebert, & Wright, 1999 ; Melkers & Willoughby, 1998; Moynihan, 2006). Common elements of a performance management system include strategic planning, performance measurement, b udgeting, reporting and evaluation (National Performance Management Advisory Commission, 2010) Actual performance data in these systems is generated by agencies most often in the executive branch, and sometimes in the judicial branch and other constitutio nal agencies ( Kettl & Fesler, 2005). Information on programs and costs are maintained within an agency and are reported out to the legislature and citizens. For greatest success, practitioners were advised to avoid building systems that put penalties in pl ace for missing performance targets in the formative stages of i mplementing these systems. There was an emphasis on using data and results as information for various reporting, strategic planning, decision making, budgeting, and communication purposes. B uilding in incentives, such as enhanced operational and/or budget flexibility as well as additional funding was viewed as a mechanism for i n fluencing behavio r (Liner, et al., 2001; National Co nference of State Legislatures, 2003). Duties and responsibil ities of the two branches of government along with the associated formal and informal relationships are an important consideration for these systems. Based on economic theory, the relationship between the legislative and executive branches can be viewed fr om the principal agent perspective (Huber & Shipan, 2000). The principal

PAGE 16

5 p. 198). For example, Parkin di of powers perspective, the legislative branch authorizes agencies and programs and enacts the budget to pro vide for agency operations (Rosenbloom, 1983) The executive is Greater flexibility provided to the executive branch by the legislative branch (as well as by agency leaders to its program managers) is a fundamental component of the NPM attempt to address the principal agent problem. The role of the executive branch can result in significant information asymmetries with respect to the legislative branch. Program outpu t and outcome data are generated and maintained by executive, judicial and constitutionally created agencies. Information dissemination can be impacted by tensions between the branches of government. It is a significant undertaking for an agency and its staff to identify and collect the needed information for a particular program, or the agency as a whole, along with ensuring accuracy and reliability of the data. Constraints on staff resources, including budget constraints, are a notable challenge to this effort (Pattison, 2012). During times of budget constraints, agencies may face difficulties complying with reporting requirements. Anecdotally, several states appear to be struggling with complicated, resource intensive systems, yet as technology ev olves, it becomes less expensive to do this type of management (Morgan & McCall, 2012).

PAGE 17

6 few states are utilizing "stats" based approaches grounded in data based decision making originally implemented by local governments ( Moore & Braga, 2003; Abramson & Behn, 2006; Behn, 2008). For example, New York City launched CompStat for law enforcement a nd Baltimore launched CitiStat for municipal services. These approaches to performance management systems emphasize a data based management model along with multiple agency efforts to address overarching goals. Other characteristics include engage ment of t op leaders in frequent, periodic meetings and persistent follow up with clear accountability and continuous learning (Moore & Braga, 2003; Abramson & Behn, 2006 ; Hatry & Davies, 2011) The State of Washington developed the award winning Government Manageme nt Accountability and Performance (GMAP) system (State of Washington, 2008 ), while Maryland uses a system named StateStat. Leaders in both states are often present and participating in the performance meetings Maryland also merged the state's performanc e management system with the spending transparency required by the American Recovery and Reinvestment Act (Rojas, 2012) Effectively using PMM systems is difficult. In reviewing the academic literature, there is little empirical evidence that state level systems are broadly effective (Melkers & Willoughby, 2001; Coggburn & Schneider, 2003; Rubin, 2005; Moynihan, 2006). In a meta analysis of over eight hundred research studies, Hill and Lynn (2005) found traditional, hierarchical governance continues to pre dominate in the United States. These academic literature is continuing to quanti tatively assess the effectiveness of PMM systems in public agenc ies (Kroll, 2012), with relative ly few studies focusing on state

PAGE 18

7 government. Further, academic literature has only recently begun to explore the role of the legislature in these systems. In an era when evidence based policy making and we have come to know, the more aware we are of how tentative, limited and sometimes erroneous the bases of our information and Yet, despite the extensive practitioner challenges, the potential lack of political will changes in political leadership and the absence of conclusive empirical evidence that these systems are effective, experts emphasize these s ystems are here to stay (Melkers & Willoughby, 2004 ; Hatry, 2008; National Performance Management Advisory Commission, 2010 ). In November 2006, the Board of Trustees of the Financial Accounting Foundation reaffirmed that the Government Accounting Standa rds Board in its financial accounting and reporting standards setting activities for state and local ) Subsequently, in April 200 7, GASB announced that is also (Smith, Cheng, Smith, & Schiffel, 2008). More recently, a report by the IBM Center for the Business of Government noted performance and innovation are among the six key trends driving change in government (Chenok, Kamensky, Keegan & Ben Yehuda, 2013). T hese authors concluded government is shifting to a performance culture, providing incentives to use data and emphasizing the use of evidence for decision making. T he report also emphasizes leaders must understand the value of innovation and link innovati on to

PAGE 19

8 mission. Many state leaders continue to emphasize the importance of these systems. Maryland is noted for its success in using performance measures in budgeting (Burnett, iative concluded the system is slightly ahead of schedule, and state agencies are embracing the plan (Holzer, Mullins, Ferreira & Hoontis, 2012). The se authors found that agency staff view the initiative as a useful management tool and a culture changing movement. In December 2011, the governor of California issued an executive order instructing the Department of Finance to modify its budget process to focus on efficiency and program goal accomplishments (Pattison, 2012). In February 2012, the governor of Massachusetts issued an executive order directing state agencies to publish two year strategic plans with goals and metrics (Pattison, 2012). However, A labama is a notable exception to this trend. In 2011, the State of Alabama eliminated the requir ement for agencies to link budget requests to strategic plans, even though the system was implemented in 2004 to encourage strategic planning and performance budgeting. The governor was concerned agency goals were not in alignment with resource availabili ty, given the Greene, 2012). Additional concerns included insufficient support from the legislat ure as well as executive branch leadership, in adequate technology to support the program, insufficient linkages to overal l state goals/strategy along with lack of widespread adoption Most of the other requirements of the performance management system were retained. This state provides a reminder that when administrations change or legislative leadership changes, so can the momentum on PMM systems.

PAGE 20

9 Several initiatives attempted to identify common or model performance measures and systems to use throughout the nation. The Council of State Governments launched the State Comparative Performance M easurement Project to address comparative performance data to help states set reachable targets and identify best practices (Council of State Governments, no date) Areas of focus were transportation, public assistance and child welfare. These initiatives lend themselves we ll to benchmarking, already quite common in higher education, due to the extent of required federal reporting for universities and colleges. In February 2008, nine state and local organizations established the National Performance Management Advisory Commi ssion to develop a separate voluntary, PMM. Sponsoring organizations were the Association of School Business Officials International, National Association of State Budget Officers, Council of State Governments, G overnment Finance Officers Association, International City/County Management Association, National Association of Counties, National Association of State Auditors, Comptrollers, and Treasurers, National Conference of State Legislatures, the National League of Cities, and the United States provides a framework for public sector performance management based on seven key principles (National Performance Management Advisory Commission, 2010) The principles were specified as follows (p. 8) : 1. A results focus permeates strategies, processes, the organization culture, and decisions.

PAGE 21

10 2. Information, measures, goals, priorities, and activities are relevant to the priorities and well being of the government and the communit y. 3. Information related to performance, decisions, regulations, and processes is transparent, i.e., easy to access, use, and understand. 4. Goals, programs, activities, and resources are aligned with priorities and desired results. 5. Decisions and processes are driven by timely, accurate, and meaningful data. 6. Practices are sustainable over time and across organizational c hanges. 7. Performance management transforms the organization, its management, and the policy making process. Despite this important report, frust rations in implementing and using these systems continue to be evident, particularly among practitioners, leading them to ask questions such as the following: useful for de cision making? How can performance measure data be used? What is the next step when a performance target is not met? How can performance metric information be incorporated into decision making during times of tight budget constraints? Are there best practi ces or steps to help ensure success? If programmatic or budget staff propose next steps, will politicians and the governance structure follow these recommendations? How can a system of performance metrics focus on results, rather than process?

PAGE 22

11 How can perf ormance accountability for networks be incorporated? Does this matter and can such a system make a difference? Do both branches of government need to collaborate for the systems to be effective? Under what conditions is cooperation between the branches pos sible? The consequences of ignoring the need for an effective PMM system are beginning to surface in areas such as higher education and transportation Higher education, in particular, is facing both growing need for effective systems and process, along wi th increasing state and federal interest in funding based on improvements in performance outcomes. Transportation agencies are facing significant linkages between performance and federal funding availability. These developments signal that expectations f or PMM systems are high and that leaders and managers must have moved beyond implementation and learning by doing. In the area of higher education institutional accreditation, there was significant news that fourteen California community and junior coll eges were placed on probationary or warning status by their accreditors in 2008 (Basken, 2008). As background, the accreditor had set expectations for colleges to define their own performance measures and to learn from their managerial processes in 2002 T he accreditor indicated it had never enforced these rules. This enforcement action was attributed to the Bush administration and the Spellings Commission report on higher education that called for enhanced accountability. Accreditor interest in performanc e January 2013 emphasize the importance of student persistence and completion and assessment of student learning.

PAGE 23

12 Performance outcomes are increasingly influencing f unding in higher education. According to the Wisconsin Center for the Advancement o f Postsecondary Education 20 states were using performance based funding for higher education through 2010, and several other states are considering this approach (Tandber g & Hillman, 2013). The National Governors Association has issued guidelines for governors to focus on student completion and other performance measures (Garrett & Reindl, 2013). President Obama has proposed a plan to measure university and college p erfo rmance via a ratings system based on performance metrics, particularly outcomes, and ultimately, availability of federal student financial aid would be tied to this ratings system (Executive Office of the President of the United States, 2013). Transporta tion is another area of state government operations where funding is increasingly tied to performance outcomes. The Moving Ahead for Progress in the 21st Century Act of 2012 (MAP 21) authorizes surface transportation programs funding of over $105 billion for federal fiscal years 2013 and 2014 ; the legislation was th e first long term highway authorization enacted since 2005 (United States Department of Transportation, 2013) The legislation establishes performance and outcome based programs for states to in vest resources to make progress towards national goals in seven areas: Safety, infrastructure condition, congestion reduction, system reliability, freight movement, economic vitality, environmental sustainability, and reduced project delivery delays. Thr ough coordination processes, states will set performance targets for specific areas (ibid, 2013). MAP may be its provisions to implement a performance measurement based system for the federal aid highway program for which state transportation agencies must comply.

PAGE 24

13 In sum, state governments are operating in a climate of accelerating pressure to effectively use PMM systems to increase efficiency and produce results, yet there is little mid level theory and insufficient evidence to identify th e factors and practices that influence or predict managerial effectiveness of this tool within the public sector. In particular, the broad potential benefits of these systems, including strategic planning, decision making, budgeting and communication, have received little empirical analysis as a system. In addition, the role of organizational learning and organizational culture are not well developed It is not clear practitioners fully understand the management practices needed to achieve the full benefits of these systems. As well, the role of the state legislature in influencing system effectiveness needs further exploration. This research is designed to reduce the gap between the increasing importance of performance management in agencies and the deart h of mid level theory and research that identifies the factors and practices that might influence or predict managerial effectiveness of this tool within the public sector. This study will review the literature on NPM system effectiveness, organizational culture and organizational learning in order to modify a mid level theoretical model to assess the current state of effectiveness of performance management systems at the state government level. It will then test this model to determine whether the key var iables of external politics, stakeholder participation, organizational support, technical training, performance measurement adoption, organizational culture and organizational learning affect PMM in state agencies. This research can contribute to the l iterature and practice in several ways. First, the study moves beyond the more typical descriptive approach. The study is empirical

PAGE 25

14 and uses hypothesis testing to quantitatively test theory on a national scale by surveying agencies in all fifty states. Th e study quantitatively tests key factors for their impacts on performance and managerial effectiveness at the state level. The study incorporates frameworks of organizational culture and organizational learning into an existing theoretical model test s the predictive value of those variables on the effectiveness of PMM systems in state agencies, and recommend s areas of future research. Finally, the project will identify and recommend key factors and smart practices for design, adoption, implementation and long run system effectiveness for government approaches that work well for one may not be appropriate for another. However, all good performance management systems in corporate certain key principles (National Performance Management Advisory Commission, 2010 p. 2 ). To conclude, this chapter has reviewed the challenges facing government to apply a set of tools and techniques originating in the private sector to meet pr essures for efficiency, effectiveness and improving performance. These performance improvement demands may come in the form of meeting strategic and operational goals as well as meeting public policy needs Migrating a PMM system to a public agency has p roven to be difficult. Further, for leaders and managers, as well as states in general, there is a growing risk that lack of PMM effectiveness can impact funding available t o a public organization or a state. Public leaders and managers need t o understa nd the practices and processes needed for success, as the consequences of falling short are becoming more evident.

PAGE 26

15 CHAPTER II LITERATURE REVIEW This chapter will review the literature that focuses on key concepts and branches of theory that particularly relate to PMM in state agencies. The areas discussed include NPM and effectiveness of PMM systems. In addition, s pecific details of an important theoretical model of a PMM system are presented. The strengths of this model are noted as this model serves as the basis for this research. Finally, the literature on organizational culture and organizational learning, particularly as relates to state agency PMM are discussed to inform the potential for these concepts to impact system effectiveness. New P ublic Management Around the world, governments are seeking innovative and effective approaches to solve complex problems. NPM is an area of the academic literature offering potential solutions to these challenges. Management approaches emphasizing flexibi lity, accountability and techniques such as performance outcomes measurement fall within the umbrella of NPM. N PM is considered a technique to improve government services through reducing organizational hierarchy, focusing on mission and objectives, usin g tools and practices traditionally found in the private sector, privatizing government services, injecting the use of market forces and signals along with many other approaches. Rainey (2003) note d the movement sized the use in government of procedures

PAGE 27

16 N PM p. 3). It offers a number of potential reform mechanisms and has been adopted in various stages with differences in adoption throughout the world (Kettl, 2005). The conventional wisdom is that the field has its origins in public choice theory and managerialis m (Gruenig, 2001). However, Gruenig (2001) argues that performance measurement has its theoretical roots in classical public administration, neoclassical public administration, principal agent theory, policy analysis and rational public management. Perfor mance management is considered by some to have preceded and outlived NPM and is considered a key to governance (Ingraham & Lynn, 2004; Kettl & Kelman, 2007). In the public sector, NPM sets up a seemingly internal conflict. There are those who view the app roach as emphasizing top down managerial control and lack of reaching out to stakeholders, including citizens, for their government to serve their needs ( D eLeon & Denhardt, 2000 ; Piotrowski & Rosenbloom, 2002 ). There are those who emphasize the opportunit ies to empower all levels of government staff as well as a change in focus from process and deliberation to flexibility to achieve results. The United States has a long history of interest in and some success at reform (Stillman, 1996). N PM reforms in th e United States have had mixed success ( Wechsler, 1994; Kettl, 1998; Light, 1998; Radin, 1998; Thompson, 2000 ). Over twenty years ago, the practitioner led Reinventing Government movement influenced all levels of government in the United States with its en trepreneurial message (Osborne & Gaebler, 1992) Federal government initiatives have spanned both political parties but challenges remain (Newcomer & Caudle, 2011). These approaches were implemented at the federal

PAGE 28

17 govern ment level through the Clinton Administration National Performance Review, an unprecedented review of federal government operations led by Vice President Gore (Rainey, 2003). Federal government implementation emphasis was formally authorized with enactment of the Government Performance and Results Act in 1993 that required strategic planning and performance information. The Bush Administration has been Management Agenda (Joyc e, 2003; Gilmour & Lewis, 2006) and the use of the Program Assessment Rating Tool ( PART ) (Moynihan, 2013). The Obama Administration continued the emphasis on the importance of performance management (Joyce, 2011), presided over passage of the GPRA Moderniz ation Act of 2010 and is moving towards emphasizing performance requirements for state and local governments in the context of funding availability, as discussed in the chapter on the importance of this research. In comparison, state government initiativ es are broad based with mixed results, but states extensively involved in reforms based on NPM principles (Ammons, 2001; Ho, 2011). While NPM has been held out as offeri ng solutions, there is a need to better understand how to successful ly apply these tools in a government environment. More specifically a theoretical and practical understanding of a successful PMM system is critical. System Effectiveness This section w ill discuss problems with PMM in state agencies as noted in the academic literature, then turn to findings about elements, practices and processes of these systems that may be particularly beneficial. Information and data are generated by

PAGE 29

18 leaders, manager s and staff managing and administering programs, but in a public agency, the role of the legislature in providing authorization and funding is also noted. This section will conclude with a discussion of the lack of a clear understanding from the literatur As discussed earlier, state adoption of performance management systems is widespread, but some state s struggle with even simple aspects such as performance reporting. Moynihan (2006) evalu ated a decade of reform and found some evidence of performance information in the documents of 48 states. Of these 48 states, the top score was 101 for Arizona. Other states with scores above 75 were Delaware, Iowa, Louisiana, Missouri, Texas and Virginia. There were ten states that had performance information scores below 25. In a more recent analysis using GASB reporting criteria, quality performance reporting of key agencies was limited to a few departments spread throughout the nation, and only Oregon h ad consistently strong reporting in all four types of state agencies evaluated (Smith et al., 2008). Variation in data use has also been shown by many other researchers, including Poister, Pasha and Edwards (2013) and Pollitt (2006). Yet several scholar s have argued or found that state performance and accountability systems are not completely consistent with the NPM model For example, based on a survey of fifty states, nearly 40 % of state agencies (surveyed) had fully implemented strategic planning, an d only 20 % of states had fully implemented training programs to provide customer service. From the full implementation perspective, only about 5 % of states had simplified human resources rules, privatized major programs or authorized greater agency discret ion of end of year balances (Brudney, Hebert, &

PAGE 30

19 Wright, 1999). Moynihan (2006) found that states had not implemented a consistent package of changes. In particular, he found low levels of agency autonomy in the areas of procurement, contracting, budgeting and human resources. Differences in governments such as government structure and underlying law and the role of governance and politics are among the reasons why states may need to implement differing approaches (Behn, 2001, Kettl, 2005). Tailoring inno vations to fit local needs requires that we move beyond best practices to smart practices ( Bardach, 2000; Klingner, 2006). Yet, there is a need for theories to explain the conditions and mechanisms that can lead to successful PMM for a broad range of state agencies. Ingraham, Joyce and Kneedler Donahue (2003) were lead investigators for the Government Performance Project (GPP) and the Federal Performance Project (FPP), with the first analytical results published in 2003. This project represented the larges t assessment of public management capacity ever undertaken in the United States. Its purpose was to build a model of government management capacity and its components. The model reflects that government performance is a function of management capacity and various environmental constraints and opportunities. These authors define d capacity they hyp othesized that capacity wa systems, the level and nature of leadership emphasis, the degree of integration and alignment across its management systems, and the extent to that it manages (p. 15).

PAGE 31

20 Subsequent survey and data updates of the state level analysis were conducted every two years by the Pew Center on the States and published as Grading the States report cards in Governing magazine (Barrett & Greene, 2008). The Pew project used a methodology similar to that of the original Government Performance Project, but substituted grading categories that were more transparent to citizens: information technology management became information; capital management became infr astructure; financial management became money and human resources management became people. S tates were assigned an overall grade for each of these four categories based on specified subsidiary components of each category that were given numerical rating s and graded as either green, yellow or red (Pew Center on the States, 2008) Both the information and money categories had criteria focused on performance measurement. The Government Performance Project and Grading the States initiatives advanced acad emic research and elevated practitioner emphasis on public management and use of performance measures. These projects were grounded in public management theory and provided a dataset of the population of states. Yet, the overall grades in each category a nd the specific criteria rankings and scores of each report were not consistently readily apparent and were not quantified. Therefore, it was difficult to assess the relative importance of subsidiary elements or of overall categories in contributing to ma nagerial effectiveness. While helpful, these projects did not show practitioners the key elements needed to achieve on going success from a system of performance measures. The design and methodology of the Grading the States project was focused on the office in each state, and responses were compiled and generally coordinated by the

PAGE 32

21 executive branch. For the money category, there was not a single element of the criter ia that specifically identified the legislative branch, despite the power of the legislature to pass authorizing and appropriating legislation. Of the fourteen criteria in the information category, five criteria specifically mentioned the legislature or e lected officials. The academic literature has attempted to identify practices leading to success of a system of performance measures, but much remains to be done Moynihan and Pand e y (2005) studied the impacts of external environment and levers of inter nal change and technology on performance. Using data from a national study of state government health and human services leaders, these authors used an ordinary least squares technique, and their model explained 73.5 % ions of organizational effectiveness. Significant independent variables included the support of elected officials and the influence of the public and the media; these variables show the strongest relationship with organizational effectiveness with beta co efficients of .315 and .365, respectively. Internal to the organization, goal clarity and decentralized decision making were also important and positively influenced organization effectiveness. These authors on flexibility, adaptability and readiness, growth and resource acquisition. The beta coefficient for development organizational culture was .177. According to Wellman and VanLandingham (2008), use of performance based budgeting in Florida resulted in governmental operations that of

PAGE 33

22 such a system include d designating a leader for the system, keeping information readable, and making valid and reliable performance information readily available. These authors recommend a hierarchy of measures so that the legislature does not deal wit h excessive amounts of information , and keeping expectations reasonable. Hou, Lansford, Sides and Jones (2011) noted challenge s in using performance metrics for state budgeting, but also greater opportunities for thei r managerial application in agencies Their study found that executive agency leaders, middle managers and staff strongly support ed and used performance management systems, al though to a lesser degree during weak economic and fiscal conditions when budget s tend to drive policy rather than the reverse. Recent research focusing on f ederal agency experience with PMM systems is also relevant to state efforts For example, a 2013 United States Government Accountability Office (GAO) study identifie s "leading pr actices" to promote successful data driven performance reviews, required by the federal government. The studied relied on a survey of performance improvement officers at 24 federal agencies and case studies of implementation practices at the Department of Energy, Small Business Administration and the Department of the Treasury. The report identified nine practices including the importance of data driven performance reviews characterized by significant agency leadership participation ; rigorous preparation; attendance of key players at meetings to enhance problem solving ; and sustained follow up on identified issues. There was a significant emphasis on "rigorous" processes. The GAO report also emphasized ensuring

PAGE 34

23 skill levels of staff to analyze and to commun icate complex data for decision making (United States G overnment Accountability Office, 2013) Other research findings align well with the se GAO recommendations. In a study of international initiatives as well as US federal government initiatives Newco mer and Caudle (2011) recommend providing an appropriate organizational culture and climate that includ es program performance c ontinuing support and capacity (p. 122). In a study of federal agencies and the PART, Moynihan and Lavertu (2012) highlight the following practices to enhance use of performance information: L eadership commitment to results; l earning routines led by supe rvisors; m otivational tasks; and ability to link measures to action. These recommendations are reinforced by Hatry and Davies (2011) who emphasized the need for interested and engaged leadership, timely data on performance, and staff analysis prior to rev iew meetings as keys to successful performance reviews. This report was based on a review of successful practices of federal, state and local agencies. These performance reviews are now required by the GPRA Modernization Act of 2010 and are intended to e nhance agency effectiveness and efficiency (ibid, 2011) Turning to the role of the legislative branch, previous research established the role of the legislature a s important to PMM systems (Liner, et al 2001; National Conference of State Legislatures, 2003 ; Bou r deaux & Chikoto, 2008 ). The information is information in hearings; to make appropriations and policy decisions; to provide knowledge to inform policy de velopment and improve communication with constituents;

PAGE 35

24 Legislatures, 2003: xii). Bourdeaux (2005, 2006) concluded that legislative engagement in oversight of performance informa tion resulted in greater use of performance measures by both the legislative and the executive branches However, Hou et al. (2011) found performance based budgeting was only selectively applied by legislators Given the contradictory and inconclusive nat ure of previous research on the role of the legislative branch with respect to PMM systems and their effectiveness additional research is needed. for effectiveness. Yet, patterns are emerging, and these factors will be discussed at the conclusion of this chapter. Theoretical Model of PMM System Effectiveness Yang and Hsieh (2007) moved beyond descriptive research and used hypothesis testing to quantitatively understand the impact of five independent variables on managerial effectiveness of performance measurement. Their study was conducted at the local government level and focused on Taipei, the capital of Taiwan, a democracy pursuing a NPM agenda. With respect to research design and methodology, the study involved surveying staff in every unit of the Taipei government, including 12 district governments. The survey had 28 questions. The survey instrument was reviewed by a peer review panel which had five experts on performance mea surement and survey methodology. The authors used the Dillman method to inc rease survey response rates and had a survey response rate of approximately 61 % The authors used univariate analysis, correlation analysis and structural equation modeling to analy ze the results.

PAGE 36

25 The study developed a scale for the dependent variable of performance effectiveness and included eleven elements reflecting major aspects of performance measurement, including indicator quality, use of performance results and performance m easurement effects. Of particular interest for this research study is that Yang and Hsieh defined use of performance results to include decision making, strategic planning, budgeting and communication. Of key importance to this dissertation research, thes e authors included organizational learning as component of the scale of the dependent variable. Further, Yang and Hsieh did not include a variable in the model to reflect organizational culture. Figure 1. Flow Chart of Performance Measurement Effective ness Adapted from Public Administration Review 67(5), p. 867. Copyright 2007 by the American Society for Public Administratio n. Figure 1 : Yang and Hs i eh (2007) Mode l of Performance Measure Effectiveness

PAGE 37

26 The authors included both internal and external organizational factors as independent variables in the ir model. The study drew from public management and organizational theory to develop independent variables for organizational support, technical tr aining and adoption. Using political science theory, these authors also developed dependent variables for external political support and stakeholder participation. The study involved the testing of eight hypotheses, principally through the use of structura l equation modeling. All five dependent variables were found to be significant in the base case model, which had an R 2 of .56. Organizational support, including top management commitment, middle manager support and subsystem collaboration, was determined to be the most important predictor of performance measure effectiveness. The impact of organizational support on performance measurement effectiveness had a coefficient of .52. Organizational support also affected stakeholder participation (coefficient = .54) and adoption (coefficient = .29). External political support was also found to affect managerial effectiveness directly (coefficient = .10) and indirectly (such as via external stakeholder participation, organizational support and technical training, with coefficients of .11, .50 and .444, respectively). External political support was determined by the extent that an agency enjoyed political autonomy, authority and political support by elected officials for initiatives. Stakeholder participation had a slightly positive impact on effectiveness, with a coefficient of .17. These authors concluded that adoption, and associated training, are important in enhancing performance measure effectiveness. Adoption was measured by the type of

PAGE 38

27 performance measures i ncluded in the system, an indication of the comprehensiveness of the set of performance measures. However, adoption by itself was not sufficient for performance effectiveness. The continual use and refinement of the system, which these authors refer to as implementation, must also occur. The impact of technical training on effectiveness was mediated by adoption. This model has many strengths, but needs further testing. The model g oes beyond simply addressing whether PMM affects a single result, such as bu dgeting, to more broadly consider the many uses of performance data. The model is grounded in public management, organizational and political science theories and included considerations both internal and external to a public agency. The measurement scal e s to create the were significant and exhibit goodness of fit. In sum, Yang and Hsieh identif ied areas for future research which are particularly relevant to this diss ertation research. They noted future studies are needed to test the model with data collected in the United States and to Furthermore, additional variables, such as organizational culture and organizational learning, are mechanisms to test the model and thereby potentially enhance its usefulness for practitioners. Organizational Culture This section will address the review of the literature on organizational culture, particularly as it may apply to PMM. Further, this section will consider how organizational culture may be useful in considering the applicability of the Yang and Hsieh model to PMM for state agencies.

PAGE 39

28 In The Functions of the Executive Barnard (1956) advanced the argument that the failures of initiatives for organizational improvement are not caused by inadequate policies or management incompetence (Ott & Shafritz, 1994). Yet while organiza tional culture is a key to organizational success (McNabb & Sepic, 1995 ; Khademian, 2000 ), i t is present ly a social construct rather than an operationally defined theoretical concept. Organizational culture includes values, beliefs, assumptions, perception s, behavioral norms, artifacts and patterns of behavior; socially constructed, unseen and unobservable forces; social energy; unifying themes and control mechanisms (Ott, 1989). ext (constraints and opportunities) within which managers manage and how management pp. 33 34). Culture and performance management effectiveness are linked in other studies of government performance and reform ( de Lancer Julnes & Holzer, 2001 ; Garnett, Marlowe & Pandy, 2008; Caiden, 2010 ). The concept of organizational climate is outside the scope of this research for two key factor in an or may be a better variable when surveying agency leaders re traditional for large, older bureaucracies (Handy, 1993). Individual effectiveness in a power culture is typically measured by results. In contrast, individual effectiveness in a role culture is measured by conforming to rules and regulations (Handy, 19 93; Fleenor & Bryant, 2002).

PAGE 40

29 It is not clear if managerial reform or reinvention or other organizational improvement processes can be achieved unless organizational culture is changed. -warning wo implementers [of change] that they cannot succeed without first changing the organizational culture -, p. 365) from the perspective of organizational culture. First, there is the issue of culture and the generalizability of the findings from Taipei to the United States. Second, ther e is the issue of culture which could be included as an independent variable in the model to assess its relationship with other variables along with its impact on performance effectiveness. Schein (2002) emphasizes the importance of role culture and cultu ral analysis in management across national and ethnic boundaries and Fitzpatrick, et al. (2011) noted comparative studies may find problems with administrative reforms, when the role of culture and its associated impact on interventions may, in fact, dese rve greater attention in the analysis The Yang and Hsieh mid level theory was tested in Taipei. At least one emphasizes two political parties, the Asian model is more si milar to the United States than approaches to democracy in other parts of the world (Reilly, 2007). Yet, there is a distinct bureaucratic culture based on the teachings of Confucius in East Asia. Frederickson (2002) notes that Asian bureaucratic culture op leaders or rulers of the state have a moral obligation to ensure peace, prosperity, and justice so that the people will be happy and able to live full lives. The people have a

PAGE 41

30 moral obligation to support their leaders so long as tho se leaders are meeting their moral effectiveness model in the United States would be useful. The second issue relates to the need to consider including an independent variable to reflect culture and innovation. One strand of the NPM literature revolves around the role of government managers with a significant debate on the appropriate degree of entrepreneurism in public service. Critics argue public sector entrepreneurs may be characterized as rule breaking, self promotion, and unwarranted risk takers who can become loose cannons (Terry, 1998; D eLeon & Denhardt, 2000), while proponents view them as exercising leadership and proactively engaging in strategic, astute initiatives t o creatively solve public sector problems and avoid crises (Behn, 1988; Borins 2000). Bozeman & Kingsley (1998) found positive relationships between clear organizational mission and top manager willingness to trust employees linked to risk culture. These authors also found notable red tape, weak links between promotion and performance along with significant high involvement with elected officials tend to reflect organizations with a l ower risk culture. Cameron and Quinn (200 6 ) use a competing values fra mework to identify four categories of organizational culture: Clan, adhocracy, hierarchy and market. These authors surveyed 43 public administration entities and found the dominant culture to be hierarchy. There was little evidence of adhocracy in the o rganizational culture of the public administration entities studied. The lens of organizational culture and its impact on effectiveness of a system of performance metrics has been studied by other public management researchers. In

PAGE 42

31 particular, Kim (2010) viewed reforms within the umbrella of NPM as entrepreneurial, requiring risk taking, innovation and being proactive. This study involved a survey of 957 leaders of state government agencies in the 48 continental United States to assess factors impacting performance. Th is model explained the impacts of the individual variables on organizational performance with an R 2 of .455. Being proactive ha d the strongest relationship to performance. Other significant variables included risk taking and innovativenes s. Moynihan and Pandey (2010) concluded that organizational culture is important in determining whether local government agencies use performance information. The innovation culture approach is somewhat different than viewing innovation as a proces discusses th e innovation process has four stages: diffusion, transfer, propagation and replication. Recent empirical research focused on innovation and innovation adoption and its positive impact on performance management (Damanpour & Schneider, 2006 200 9 ; Walker, Damanpour & Devece, 2011). These authors concluded innovation characteristics and attit udes, manager attitudes and organizational characteristics influence innovation adoption. These authors concluded the impact of management innovation on organizational performance is mediated by performance management, and performance management positivel y impacts organizational performance. This study reminds us that performance management is only one factor impacting overall organizational performance.

PAGE 43

32 Inclusion of organizational culture in this study of PMM effectiveness will enhance our understandi ng of organizational culture, its relationships with the study variables as well as its relationship with PMM effectiveness. Certain state government functions are expected to exhibit differences in organizational culture. A key challenge for researchers but there may be a unique culture these agencies share that relates to the success of PMM. Organizational Learning Given that PMM systems have been in place for over twenty years by pr actitioners in the United S tates, and yet many practitioners continue to be frustrated and challenged to finding success in their implementation, one may wonder how effectiveness might occur over the long run. In particular, the idea of learning from mist akes and incorporating what one learns into future efforts is fundamental to the concept of organizational learning. Cyert and March (1963) introduced the concept of organizational learning Among the many definitions of learning, Fiol and Lyles (1985) emp hasize the following: t in the past two decades learning which relies on knowledge, learning, dialogue and shared mental models. The model has a long term focus and feedback loops are therefor e key. Given the many complex challenges facing public administration in general,

PAGE 44

33 positions of authority or responsibility to achieve social, political, economic, cultural, o r (Klingner & Sabet, 2005: 201). Advanced forms of learning are of particular interest, because they are considered useful for solving new or complex problems, restructuring whole processes or systems, reanalyzing a job from a completely new perspective or reengineering an organization to adapt to major environmental changes (Argyris & Schon, 1978; Senge, 1990). As well, this approach is particularly useful given trends for c hange in the public sector, particularly the shift to management of a capital asset where the knowledge, skills and abilities possessed by people defines what they are capable of p. 134). There are two types of learning in the clas sic model of organizational learning (Argyris & Schon, 1978). In single loop learning, individuals, groups, or organizations modify their actions according to the difference between expected and obtained outcomes. For double loop learning, individuals, gro ups or an organization may question underlying values, assumptions and policies. Specifically, second order learning occurs when an organization is able to view and modify the underlying elements. In public rstanding of policy choices and loop learning (p. 203). P articularly in the public sector, a pplying organizational learning is challenging due to inherent tensions Common (2004) was the first to analyze the role of or ganizational learning in improving public policy making in a political environment. Common found the following obstacles to organizational learning: 1) overemphasis on the individual; 2) resistance to change and politics; 3) social learning is self limitin g, i.e.

PAGE 45

34 for its application (p. 39) The desired focus on learning is in contrast to an emphasis on implementation and execution in that e, and done right, 2008 p. 62 ). Moynihan (2005) emphasizes the need to include learning forums in management reform design to incorporate double loop learning. In h is 2005 study, he used case studies of the department of corrections in Vermont, Virginia and Alabama; these states were at significantly different phases of managing for results. The discussion provides clear examples of sophisticated management technique s involving data, analysis, learning and questioning by the leadership of the Vermont Department of Corrections. In contrast, the Virginia counterparts showed some single loop learning gains with their performance management reform process, while the Alaba ma performance management system for corrections was completely ineffective. As noted by Senge (1990) and Moynihan (2005), double loop learning is important to long term success of a performance management system and in their review of the literature on strategic management, Bryson, Berry and Yang (2010) note the need for academic research to focus on organizational learning Therefore, testing to determine if organizational learning could impact the effectiveness of a PMM system would be beneficial in c ontrast to the Yang and Hsieh approach to include organizational learning in the scale t o create the dependent variable. In conclusion, there is little wide spread, conclusive evidence to date of significant effectiveness from state PMM systems for agen cy management purposes.

PAGE 46

35 NPM has been viewed worldwide as a means to bring practices successfully used in the private sector to enhance the effectiveness and efficiency of government. The Reinventing Government movement was led by practitioners over twenty years ago as a potentially beneficial tool. There have been mixed levels of implementation by states, but virtually all states adopted some form of PMM systems. Academic literature indicates success has been mixed, but key practices and processes appea r to make a notable difference for success. Among these factors, it is clear that a system approach is needed. Routines and processes to collect, compile, report, review and subsequently incorporate information into action are important, with both an inte rnal and external focus. At all levels of government, the literature shows meetings and forums for groups to review data and its implications can influence success. A willingness to restructure processes and reengineer programs and organizations may be b eneficial, along with a willingness to experiment, innovate and persevere. Leadership and support for PMM is important at all levels of the organization, and training and ensuring staff have appropriate levels of skill in data analysis and communication c an be contributing factors (Hatry & Davies, 2011) For agencies, involving key stakeholders may be important as well as seeking political support, and the legislature may be important in this regard. This dissertation will modify a relatively new model to more fully incorporate the findings in the literature to quantitatively examine state PMM effectiveness and test potential significant independent variables based on the areas discussed in this section.

PAGE 47

Research Questions

The purposes of this study are to: 1) determine how the political environment, stakeholder participation, organizational support and training affect the adoption and managerial effectiveness of performance management in state agencies and 2) understand the roles and impacts of organizational culture and organizational learning on a state PMM system. Consistent with the Yang and Hsieh model, the original eight hypotheses of those authors will be tested. In addition, two hypotheses will be added based on the literature review. These two new hypotheses will test the relationships between: 1) organizational culture and performance effectiveness and 2) organizational learning and performance measurement effectiveness. Both relationships are expected to be positive. The hypotheses associated with these research questions are as follows:

H1: The level of performance measurement adoption is positively associated with the managerial effectiveness of performance measurement.
H2: Organizational support is positively associated with the adoption and managerial effectiveness of performance measurement.
H3: Technical training in performance measurement is positively associated with the managerial effectiveness of performance management, but the relationship is mediated by the adoption of performance measurement.
H4: Political support is positively associated with organizational support for performance measurement.
H5: Political support is positively associated with higher levels of performance measurement training.
H6: Political support has a direct, positive impact on the level of managerial effectiveness of performance measurement but not on the level of adoption.
H7: External stakeholder participation is positively associated with the level of managerial effectiveness of performance measurement but not with the adoption of performance measurement.
H8: The level of external stakeholder participation in performance measurement is positively affected by the level of political support that agencies receive and the level of organizational support for performance measurement within agencies.
H9: Organizational culture positively impacts performance measurement effectiveness.
H10: Organizational learning positively impacts performance measurement effectiveness.

PAGE 49

CHAPTER III

RESEARCH DESIGN AND METHODS

This research focuses on the relationships between factors external and internal to a public organization and its PMM system, using a mid-level theoretical model developed by Yang and Hsieh with modifications based on the literature. The research uses hypothesis testing to address the research questions and relies on data from a survey of agency leaders in selected state agencies across the nation. The model forming the basis for this research involved structural equation modeling, and that approach is also used in this research. This research also considers whether an alternative, reduced form model would improve the statistical fit for the relationships in the survey data collected, while remaining consistent with theory and recent research in this area.

Human Subjects Review Approval

The Colorado Multiple Institutional Review Board (COMIRB) approved the research project and issued a three year certificate of exemption in June 2011; the certificate was subsequently extended to June 2017. Respondents to the research surveys work in state government; therefore, the approval process by the COMIRB was not extensive. These state government employees are not a protected class. Further, the research topics addressed by this study are not sensitive. Additional contact was made with institutional COMIRB representatives from the University of Colorado Department of Political Science and School of Public Affairs in October 2011 and March 2014, respectively, for guidance.

PAGE 50

The Underlying Model

Yang and Hsieh (2007) developed a model to assess the managerial effectiveness of the use of performance measures. Effectiveness was specified as a function of stakeholder participation, organizational support, technical training, external political support and adoption. Further, effectiveness was defined as a scale variable which included trustworthiness, decision making, communication, budgeting, accuracy, reliability, value, productivity, motivation, organizational learning and strategic planning. These authors administered a survey to staff in every unit of the Taipei government. The survey response rate was approximately 61%. All independent variables were significant, and organizational support was the strongest predictor of performance measure adoption and effectiveness. External political support impacted performance management effectiveness both directly and indirectly. The indirect effects were through stakeholder participation, organizational support and technical training, along with performance measurement adoption; this latter impact occurred through organizational support and technical training.

Hypothesis Testing

The research design used hypothesis testing to address the research questions. Ten hypotheses were considered. The first eight hypotheses presented earlier were tested by Yang and Hsieh in their mid-level model. In addition, the model was expanded to consider two additional hypotheses, specifically that organizational culture and organizational learning positively impact performance effectiveness.
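As an illustration of how these hypothesized relationships translate into a structural model, the sketch below writes the ten hypotheses as structural paths using the lavaan-style model syntax accepted by the Python package semopy. The construct names are shorthand labels introduced only for this illustration; in the analysis itself each construct is a latent variable measured by multiple survey items.

    # Hypothesized structural paths (a sketch; names are illustrative labels)
    HYPOTHESIZED_PATHS = """
    effectiveness ~ adoption + org_support + political + stakeholders + culture + learning  # H1, H2, H6, H7, H9, H10
    adoption      ~ org_support + training                                                  # H2, H3 (training effect mediated by adoption)
    org_support   ~ political                                                               # H4
    training      ~ political                                                               # H5
    stakeholders  ~ political + org_support                                                 # H8
    """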


Selected State Agencies

The survey was administered to key state agencies based on the agency typology advanced by Wilson (1989). The four types of agencies are coping, procedural, craft and production, and the distinctions between them are based on the visibility of outputs and outcomes. For a coping agency, neither outputs nor outcomes are visible. An example is a higher education coordinating/governance agency. In contrast, a production agency is one whose outputs and outcomes are visible. An example of this type of agency is a state transportation department, whose completed highway projects are visible along with the economic development and quality of life impacts of the transportation infrastructure. A procedural agency produces visible outputs, but its outcomes are not visible. An example of a procedural agency is a state personnel/human resource agency. The final type of agency is a craft agency, whose outputs are not visible, but whose outcomes are visible. An example of this type of agency is a state game and fish or wildlife agency, whose responsibility for managing natural resources results in quality of life experiences for consumptive and non-consumptive users. The Wilson concept overall is useful, but has specific limitations in that Wilson applied this typology to federal agencies. Further, Wilson noted the distinctions between agency types can be somewhat vague and blurred. Sampling within these agency types was not necessary, and the entire population was the focus of this research. Other than this typology approach to selecting four agency types for inclusion in the research, distinctions between agency types were generally not made in the research, although a discussion of analysis of variance is included.


Agency Leadership

The survey was targeted to agency leaders, rather than to all agency employees as in the Yang and Hsieh study. The focus on senior leadership is appropriate as these leaders have significant roles in setting organizational goals and objectives, using performance metrics, impacting organizational outcomes, allocating resources, setting organizational values and climate and working with external stakeholders to gain support (Ingraham, Joyce & Donahue, 2003; Behn, 2006; Kelman & Myers, 2011; Villadsen, 2012; Rabovsky, 2014). The perspectives of agency leadership may be different than those of other employees (Frazier & Swiss, 2008). This approach was taken to enhance the response rate through personalization of survey materials and particularly simplified requirements of administering the survey. For the online survey, collection of names and email addresses for agency leaders can be easier than collecting names and email addresses for all agency employees. Given the time demands for agency leaders, the survey instrument offered respondents the opportunity to ask a senior staff member familiar with the agency's performance measurement system to complete the survey. For those agency leaders who are the focus of this study, many are appointed by the Governor, although some are appointed by boards and commissions. A few may be elected officials. Given the widespread potential turnover of governors in the November 2010 general election, it was deemed advisable to wait until late 2011 to begin administering the survey. In this way, newly appointed agency heads would have some experience in their position and would have completed their first legislative session.


Survey Instrument

The majority of the survey questions were based on the Yang and Hsieh survey instrument; however, a few additional questions were added. In particular, based on theory and literature, survey questions and scaled variables were developed for the two new variables added to the model, organizational culture and organizational learning (see Appendix A1). Survey items for managerial effectiveness in this research included eleven of the original Yang and Hsieh items. One original item was not retained: another question addressed whether performance results could be trusted, and it was not anticipated that respondents would actually tell the researcher that their performance indicators did not accurately reflect the quality of management or that those indicators did not accurately reflect the work of the organization. The question on organizational learning was included in the survey, but not used to develop the effectiveness scale for this research, due to the inclusion of the separate organizational learning construct. Two of the original survey questions for the dependent variable were restated to enhance clarity. All of these items were measured on a seven point agree/disagree Likert scale. Survey items or questions for the other variables in the Yang and Hsieh model addressed organizational support, external political support and performance measurement adoption. Organizational support and external political support were measured on a seven point agree/disagree scale, while performance measurement adoption questions were posed for a yes/no (dichotomous) answer. In the area of stakeholder participation, survey questions
addressed open, public meetings and involvement by legislators in data review and data use. These questions were measured on a seven point agree/disagree Likert scale. Technical training questions were measured on a five point scale; the two original questions, along with one additional question on the extent of training over time, were included in this survey. In this way, every variable had at least three questions or items in the survey. Jung, et al. (2009) found there is not an ideal instrument with which to assess organizational culture. The authors recommend the researcher consider the research purpose and need and how the information collected will be used. Of particular concern for survey response is the length of the questionnaire. Survey items to assess organizational culture were based on the organizational culture assessment instruments developed by Cameron and Quinn (2006), but only six questions were included in order to help reduce the total number of survey questions and potentially increase response rates. These questions attempted to assess organizational culture characteristics of entrepreneurial attitude, innovation, production orientation, task and goal accomplishment, growth and achievement. The organizational culture questions were assessed on a seven point agree/disagree Likert scale. The four types of state government agencies in this study are expected to exhibit differences in organizational culture. This analysis could enhance understanding of organizational culture in state agencies and the associated relationship with performance effectiveness as well as other variables in the study. Survey items to assess organizational learning were developed principally from the work of Moynihan (2005) along with practitioner articles and information, as discussed
earlier in the rationale and importance of research chapter. These four survey items were assessed on a dichotomous scale (yes/no) and focused on questioning basic outcomes and analyzing alternative approaches, using data and science for decision making, and using public forums and legislative hearings as venues to discuss issues and strategies.

Review and Testing of Survey Instrument

Pretests and pilot studies are useful to ensure the survey design and process will be successful (Majumdar, 2008, p. 246). A pilot study typically focuses on the areas of question format, survey questionnaire length and the data collection process (p. 246). Pilot studies are costly and time consuming, but are considered particularly important for large survey project success (p. 246). Extensive review and testing of the draft survey instrument was conducted. To test the draft survey instrument, three groups were asked to take the survey. The individuals selected for the groups were not included in the final group of survey respondents. Nonetheless, the individuals selected for these groups were somewhat similar to the final survey participants. The first focus group to pilot the survey consisted of 100 members of the Denver chapter of the Association of Government Accountants, now known as the Accountability in Government organization. This association has been actively involved in professional development with respect to performance metrics and their various uses, including accountability and performance management. The draft survey was administered to federal employees, based on their email addresses. Only two individuals completed the online survey located at Survey Monkey, resulting in a response
rate of only 2%. One potential respondent emailed this author that they worked in the federal government and the survey was not applicable to them. Subsequently, the contact letter was rewritten, emphasizing the recipient was being asked to test the survey as part of a focus group. An additional five responses were received, for a total response rate of 7%. The second focus group consisted of other members of the Denver chapter of the Association of Government Accountants, but this group was limited to individuals for whom an email address was available. Again, survey response was minimal. Due to a low response rate from the first two efforts, a third attempt at testing the survey was initiated. The selection methodology relied on a snowball approach and focused on a group of state and federal government employees and retirees, potentially familiar with performance metrics and performance management, either through budget analysis or agency management. In total, there were 17 responses. Although the survey response rate was still quite low, the results and survey instrument were reviewed. As a result, the survey instrument was revised somewhat to enhance readability and appear shorter (Presser, Couper, Lessler, Martin, Martin, Rothgeb, & Singer, 2004; Umbach, 2004). Clarifying text was added, and the survey instrument was reformatted to use radio buttons across the side and to include components of questions within a single heading (Couper, Traugott, & Lamias, 2001). Both of these approaches resulted in visually shortening the survey instrument in an attempt to increase the survey response rate (Vicente & Reis, 2010).


Using the pilot results, the author attempted to run a factor analysis to determine which items might be strongly correlated and therefore redundant. The pilot response rate was so low that it was difficult to draw any conclusions. No questions were deleted as a result of this step of the analysis.

Data Collection Methods

The initial round of surveys was administered online and employed Dillman survey techniques to increase response rates (Dillman, Smyth & Christian, 2009). The survey instrument consisted of 25 total questions, many of which had several sub-questions. This approach was used to address potential respondent concerns about survey instrument length and to attempt to increase the response rate. All survey materials are available from the author, who can be contacted at arley.williams@ucdenver.edu or awilli4@hotmail.com. The Dillman approach involves multiple contacts with respondents; these steps included pre-contact, contact and follow-up contact letters. The contact and survey cover information provided information about the purpose of the survey and potential uses of the survey results. Respondents were assured that individual responses would be considered confidential. The initial survey for each type of state agency was administered electronically with email notifications. Using the Dillman approach, the second contact communication contained a link to the survey instrument located on the internet at the Survey Monkey website. This survey approach minimized cost. Further, communication regarding the survey indicated an executive summary of results would be made available to all respondents at the conclusion of the study. This report was intended to stimulate interest in the research in an additional attempt to enhance the response rate.


After the initial round of electronic surveys, the response rate remained low; dissertation committee members indicated the low response rate was problematic and advised the author to seek a solution. After a survey methodology literature search, the author decided to use a mixed method approach. Mixed methods involve using one form to administer the survey (such as email and electronic questionnaire/compilation), followed by an alternative method such as regular mail and paper (Millar & Dillman, 2011). As a result, the initial online survey was followed by the same survey being administered to non-respondents via United States mail. The returned responses were hand entered into Survey Monkey, then proofread and cross-checked. The paper survey distribution resulted in a significant cost. Using the Dillman technique, the author took particular care to ensure the official appearance of all survey materials, particularly those related to the paper survey. This included the use of quality paper, color printing and color logo stickers applied to every page of the survey. Survey materials included a personalized letter using the University of Colorado logo with addresses in Denver and Colorado Springs, explaining the purpose of the survey and signed by the committee chair and the doctoral student. The survey packet included a cover page for the survey, the survey instrument and a self-addressed, stamped return envelope with color coding by type of agency. This color coding was particularly helpful for any needed follow-up or cross-checking of data, particularly when some respondents removed the cover letter and survey cover page from the survey packet. Data collection involved exporting data from each Survey Monkey survey (a total of eight) using an SPSS download format for compilation. All survey data was compiled
into one dataset with a streamlined format showing only the respondent number and answers to each survey item. The downloaded data was cross-checked against the original paper surveys, and it was discovered that responses for two survey questions were miscoded in the automated download. The dataset was corrected. The survey response data was examined for missing values, and missing values were calculated using a regression technique.

Sponsors/Endorsements

Consistent with the Dillman survey approach, this research sought endorsements from relevant agencies to enhance the response rate. The survey was endorsed by three associations: State Higher Education Executive Officers, the Association of State Highway and Transportation Officials and the Association of Fish and Wildlife Agencies. These associations either publish membership contact information on their websites or provided the information to the author. The National State Park Association also indicated a willingness to participate, but was not willing to provide contact information. Given state reorganizations between state park agencies and state wildlife agencies as well as the lack of contact information, a potential endorsement from the National State Parks Association was not utilized. The author contacted the National Association of State Human Resource Agencies requesting an endorsement, but that organization did not respond with a willingness to endorse the survey and was unwilling to provide a list of members. Subsequently, the author compiled contact information for state human resources/personnel agencies. This was somewhat challenging for several reasons: 1) information in published directories had become dated due to considerable turnover and election transition; 2) differences between the states in cabinet structures; 3) the existence
of personnel offices within agencies, such as a personnel office within the Human Services Department; and 4) significant reorganizations in the states either to centralize and generate budget savings or to decentralize to also generate budget savings. In some cases, there was considerable difficulty in obtaining the email address of the agency head to administer the survey electronically. The Dillman Total Design methodology also encourages the use of token gifts to enhance the response rate. Because the human resources/personnel agency survey did not have an endorsement, the author contacted representatives of the COMIRB for guidance to determine whether to file a request to be able to provide a token gift, such as a book or textbook, to increase the response rate in 2012. Since the research was originally rated as exempt, the author was directed to continue with the existing COMIRB approval.

Data Analysis

Data analysis focused on factor analysis and structural equation modeling (SEM). SPSS software was used for factor analysis to support the SEM. Exploratory factor analysis was used to assess relationships between variables and items in a survey without imposing a structural model, either from the Yang study or based on the literature (Blunch, 2013). More specifically, exploratory factor analysis is a statistical methodology to determine which survey items or questions can be grouped together because of similar answers from respondents (Leech, Barrett & Morgan, 2011, p. 65). There are two main conditions necessary for exploratory factor analysis: 1) relationships between variables and 2) a larger sample size, which results in more reliable factors. The latter is
particularly true when compared to the number of variables being considered in the model (Leech et al., 2011, p. 65). SEM is a statistical technique to test and estimate causal relationships using statistical data and causal assumptions (Leech et al., 2011; Schumacker & Lomax, 2004; Blunch, 2013). Its use is appropriate for theory testing in part because the technique defines a model to explain an entire set of relationships (Hair, Black, Babin & Anderson, 2006). It is considered a stronger technique than multiple regression because it takes into account interactions, nonlinearity, correlated independent variables, measurement error, correlated error terms and latent variables. SEM illustrates relationships among constructs, which are the dependent and independent variables included in the analysis. Constructs are unobservable or latent factors and are represented by a number of items which attempt to measure that construct (Hair et al., 2006). Loadings represent the relationships from constructs to variables, similar to factor analysis, while path estimates represent relationships between the constructs, similar to coefficients in regression analysis (Hair et al., 2006). SEM software, specifically AMOS published by SPSS, was used for the SEM because of its compatibility with SPSS and its ease in creating visual diagrams (Arbuckle & Wothke, 1999). Survey Monkey provides the capability to download data into SPSS file format. The SEM was broken into two phases: development and testing of the model based on the research questions and associated data collection, and respecification of the model to improve goodness of fit.
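The study itself used SPSS and AMOS for estimation. As a minimal sketch of the same workflow with open-source tools, the example below specifies and fits a simplified measurement and structural model with the Python package semopy, using item identifiers that appear later in Table 2; the file name survey_responses.csv and the reduced three-construct model are assumptions made only for this example.

    import pandas as pd
    import semopy

    # Simplified model: three of the study's constructs, each measured by the
    # survey items reported in Table 2, plus two structural paths.
    MODEL_DESC = """
    effectiveness =~ q2 + q4 + q5 + q8 + q9 + q10
    org_support   =~ q20 + q21 + q22
    training      =~ q23 + q24 + q25
    effectiveness ~ org_support + training
    """

    data = pd.read_csv("survey_responses.csv")   # hypothetical item-level response file
    model = semopy.Model(MODEL_DESC)
    model.fit(data)

    print(model.inspect())           # factor loadings and path coefficients
    print(semopy.calc_stats(model))  # chi-square, df, GFI, CFI, TLI, RMSEA and related indices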


Validity and Reliability

Validity means measuring what one is supposed to measure (Giannatasio, 2008). There are two types of validity. Internal validity indicates the causal variable caused the change in the dependent variable, and external validity reflects that there is support for generalizing results beyond the study group. To increase the validity of this study, survey questions were based on questions from a prior survey administered by Yang and Hsieh in Taipei. Their survey was reviewed by a peer review panel of five experts on performance measurement and survey methodology. The review included the survey objectives and a construct map relating the constructs to specific survey items. To further address validity, the study focused on four different types of state agencies, rather than a single agency. Further, significant efforts were made to attain as high a response rate to the survey as possible to improve the validity of the data. Selection bias is a threat to validity when the study subjects are not randomly chosen. The design of this survey could result in some bias in that certain types of state agency leaders were chosen for inclusion. As well, some bias in responses may be evident in that individuals participating in the survey may be particularly supportive of the use of performance metrics for a variety of purposes. Nonetheless, these perspectives are valuable, particularly given the need to understand potential factors contributing to the success of a PMM system. Reliability is considered to be a measure that consistently operates in the same manner (Giannatasio, 2008). Additional survey questions are explicitly identified in the appendix, and the reliability of items in the survey and their inclusion in the model to support identified variables relied in part on the fact that the Yang and
Hsieh survey was considered reliable. The most commonly used measure of internal consistency is Cronbach's alpha, which indicates the consistency of a multiple item scale (Leech et al., 2011; Blunch, 2013).

Limitations

There are several limitations to this research, its methodology, and the ability to generalize to the larger population of state agencies. The most significant limitation is the relatively small number of respondents included in the study, which affects both the ability to generalize as well as the ability to test a complex SEM. In addition, in the area of public management and performance effectiveness studies, Burke and Costello (2005) discuss the potential difficulties of research methodologies when considering the results of local government performance management implementation and effectiveness. These authors note the limitations of survey-based assessments of strategic planning and related practices; Yang and Hsieh addressed this concern in part by surveying the population of local government employees in Taipei. Frazier and Swiss (2008) found large perceptual differences on NPM tools between management and lower level employees. Another limitation of this research is that the survey responses are reflective of leaders at a particular point in time; the research would need to be replicated at another time to determine if similar results could be obtained.


CHAPTER IV

RESULTS AND FINDINGS

This research is based on a mid-level model of the managerial effectiveness of PMM, modified from its original publication, and tested based on a survey of leaders of select state government agencies across the United States. Data analysis included reviewing the data, conducting exploratory factor analysis and SEM. The quantitative analysis of the survey response data was done in two phases. Phase I focused on testing the proposed hypotheses using the SEM and paths as presented in the research proposal. This approach is consistent with theory testing. Phase II focused on developing a second model to improve statistical fit, while remaining consistent with underlying theory. This second phase produced a reduced form model, which reflects the survey response data collected. This approach is consistent with other studies utilizing SEM, in which a model is respecified when the initial model fails to fit the data adequately.

Results

Using SPSS AMOS 22, the initial model proposed to the committee was visually diagrammed and tested with the dataset. While the model would run and generated a chi-square value and fit statistics, the poor fit indicated an unacceptable model. Exploratory factor analysis, model structure recommendations from Hair et al. (2006) and modification indices were then used through iterations of both the SEM and the factor analysis. Finally, SEM was used to confirm and evaluate the revised model.
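The data review step included the regression-based calculation of missing values described in the data collection section. A minimal sketch of that kind of imputation is shown below using scikit-learn's IterativeImputer, which predicts each item with missing values from the remaining items; the file and column names are assumptions for illustration, and the dissertation's own calculations were performed in SPSS.

    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
    from sklearn.impute import IterativeImputer

    responses = pd.read_csv("survey_responses.csv")        # hypothetical compiled dataset
    items = responses.drop(columns=["respondent_id"])      # keep only the survey items

    # Each item with missing values is predicted from the other items
    # (a regression-based imputation).
    imputer = IterativeImputer(max_iter=10, random_state=0)
    items_complete = pd.DataFrame(imputer.fit_transform(items), columns=items.columns)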


Survey responses from leaders of the four types of state government agencies were combined into one dataset. The total number of responses was 115. The unit of analysis is the agency leader. According to Hair et al. (2006), the minimum sample size for a SEM depends on various factors, including the complexity of the proposed model as well as communalities (average variance extracted among items) in each factor. With a small sample size of 100 to 150, Hair et al. (2006) recommend that a SEM can be adequately estimated with five or fewer constructs. The proposed model in Phase I had eight theoretical constructs, which would ideally be estimated with a sample size greater than 500. Therefore, while the study took additional time to use a mixed mode approach to increase the number of responses, the sample size remained limited relative to the complexity of the proposed model. Nonetheless, the exploratory factor analysis indicated the Kaiser-Meyer-Olkin measure of sampling adequacy was .859, which was greater than .70, indicating sufficient items for each factor (Leech et al., 2011). Bartlett's test of sphericity was significant (less than .05), indicating that the correlation matrix was significantly different from an identity matrix (Leech et al., 2011). Finally, the initial communalities represent the relation between the variable and all other variables before rotation. If many or most communalities are low (.30), a small sample size is more likely to distort results (Leech et al., 2011), but in this case, initial communalities were at .424 or above.
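A brief sketch of how these sampling-adequacy checks can be reproduced outside SPSS is shown below, using the Python package factor_analyzer on an item matrix such as the items_complete DataFrame assumed in the earlier imputation sketch.

    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    # Bartlett's test of sphericity: the correlation matrix should differ from an identity matrix.
    chi_square, p_value = calculate_bartlett_sphericity(items_complete)

    # Kaiser-Meyer-Olkin measure of sampling adequacy (an overall value of .859 was found in this study).
    kmo_per_item, kmo_overall = calculate_kmo(items_complete)

    print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
    print(f"Overall KMO = {kmo_overall:.3f}")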


Phase I

Consistent with the research proposal, the initial SEM was prepared to test for confirmation of the research hypotheses. This initial model was developed with all items from the survey responses; these items are shown in the appendix. This structural model is shown below in Figure 2.

Figure 2: Phase I Model of Managerial Effectiveness of State Performance Measurement System

The most commonly used measure of internal consistency reliability is Cronbach's alpha, and this measure indicates the consistency of a multiple item scale (Leech et al., 2011). To assess whether the items creating each scale were internally consistent, Cronbach's alpha
was computed for the constructs in the model. The alpha for external political support (.62) indicated weak internal consistency. The alpha of .49 for the performance measure adoption scale and the alpha of .463 for the organizational learning scale indicated minimally adequate reliability. The performance measure adoption scale would only be improved to an alpha of .576 by deleting the item which asks whether the organization uses satisfaction indicators to measure performance. The Cronbach's alpha for the organizational learning scale could only marginally be improved by removing items. According to Hair et al. (2006), to assess the predictive accuracy of a SEM one should consider a group of fit indices. These fit indices should include the chi-square value and the associated degrees of freedom; one absolute fit index (such as the Goodness of Fit Index [GFI], Root Mean Square Error of Approximation [RMSEA] or Standardized Root Mean Square Residual [SRMR]); one incremental fit index (such as the Comparative Fit Index [CFI] or the Tucker-Lewis Index [TLI]); one goodness of fit index (such as the GFI, CFI or TLI); and one badness of fit index (such as the RMSEA or SRMR). In general, a rule of thumb is that there should be at least a value of .90 for the GFI, NFI, CFI and TLI. For the GFI, the possible range of values is 0 to 1, with higher values indicating better fit. In particular, one should focus on values of .90 to .95 or greater (Hair et al., 2006, p. 747). The CFI is normed, so values will range between 0 and 1, with higher values indicating a better fit between data and path models (Hair et al., 2006, p. 749). For the RMSEA and SRMR, which are badness of fit measures, one would want lower values. For the RMSEA, typical values are below .10 for most acceptable models (Hair et al., 2006, p. 748).


Table 1 shows key fit indices for the Phase I model: model fit was poor. The chi-square/degrees of freedom (df) ratio was 1.98 (chi-square = 1,361, df = 687), which did meet the traditional informal rule of thumb that the ratio should be below 2; however, values for the goodness of fit index (GFI, .625) and the comparative fit index (CFI, .757) were less than 0.90, which indicated a poor fitting model. Further, the root mean square error of approximation (RMSEA, .093) was higher than 0.08, also indicating an unacceptable model fit.

Table 1: Fit Indices for Phase I and Phase II Models (n=115)

Model      Chi-Square   df    GFI     RMSEA   CFI     Chi-square/df
Phase I    1361.338     687   0.625   0.093   0.757   1.98
Phase II   215.015      128   0.833   0.077   0.938   1.68

Note: df = degrees of freedom; GFI = goodness of fit index; RMSEA = Root Mean Square Error of Approximation; CFI = comparative fit index; Chi-square/df = normed chi-square.

Phase II

In Phase II, there was exploratory factor analysis, consideration of each survey question and review of theory to improve the fit of the measurement and structural model. The model was iterated to attempt to improve the fit. Modification indices were considered, and the final model resulted in elimination of the adoption construct, the external political support construct and the organizational learning construct, along with numerous survey items. The proposed model appears to best reflect the patterns of association within the dataset.
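As a quick check on the normed chi-square column in Table 1, the ratio can be recomputed directly from the reported chi-square values and degrees of freedom; the snippet below is only that arithmetic.

    # Normed chi-square (chi-square / df) for the two models in Table 1
    phase_i = 1361.338 / 687    # about 1.98, just under the informal cutoff of 2
    phase_ii = 215.015 / 128    # about 1.68
    print(round(phase_i, 2), round(phase_ii, 2))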


The final SEM is shown in Figure 3 and was developed such that the items behind each construct fit well together and demonstrated high loadings representing the relationships from constructs to variables. Three constructs were dropped from the final model, and exploratory factor analysis was prepared with the remaining five constructs. This was done because of the poor fit of the eight construct model and the need to reduce model complexity to five or fewer constructs. It was not possible to drop items from low performing scales to bring their associated internal consistency above the recommended minimum of .70 (Hopkins, 1998) while maintaining the recommended number of items (Hair et al., 2006). This approach was counterbalanced with a desire to keep as many variables as possible for future research and testing.

Figure 3: Phase II Model of Managerial Effectiveness of State Performance Measurement System


Table 2 reports the final construct reliabilities, item means, standard deviations and factor loadings for the constructs and items contained in the Phase II model. Cronbach's alpha coefficients were within acceptable limits for virtually all study variables (i.e., >= .70). These coefficients range from .68 to .93, as reported in Table 2.
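The coefficients reported here are Cronbach's alpha values: for a scale of k items, alpha is k/(k-1) multiplied by one minus the ratio of the summed item variances to the variance of the total scale score. A minimal sketch of the computation is shown below for the organizational support items q20 through q22 from Table 2 (reported alpha of .88); the items_complete DataFrame is an assumption carried over from the earlier sketches.

    import pandas as pd

    def cronbach_alpha(scale_items: pd.DataFrame) -> float:
        """Cronbach's alpha for one scale; rows are respondents, columns are items."""
        k = scale_items.shape[1]
        sum_of_item_variances = scale_items.var(axis=0, ddof=1).sum()
        total_score_variance = scale_items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_of_item_variances / total_score_variance)

    print(round(cronbach_alpha(items_complete[["q20", "q21", "q22"]]), 2))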


Table 2: Construct reliabilities, item means, standard deviations and factor loadings
(Item columns: Mean, Std. Deviation, Factor Loading)

Performance Measure Effectiveness (Cronbach's Alpha = 0.93)
  q2  Performance measurement helps managers make better decisions.  5.47  1.20  0.87
  q4  Performance measurement helps communicate more effectively with many external groups.  5.33  1.37  0.79
  q5  Performance measurement helps budget planning and decision making.  5.30  1.32  0.80
  q8  Performance management is worthwhile.  5.75  1.30  0.82
  q9  Performance measurement improves productivity.  5.00  1.42  0.89
  q10 Performance measurement motivates employees.  4.60  1.40  0.86

Stakeholder Participation (Cronbach's Alpha = 0.68)
  q13 Citizens participate in designing indicators.  3.28  1.61  0.92
  q14 Elected officials participate in designing performance indicators.  3.88  1.74  0.59
  q15 Citizens help this organization evaluate performance.  4.03  1.62  0.53

Organizational Support (Cronbach's Alpha = 0.88)
  q20 Top managers emphasize and care about the process of performance management.  5.63  1.22  0.94
  q21 Top managers value and treat seriously the results of performance management.  5.55  1.24  0.96
  q22 All offices and middle managers actively support performance management.  4.82  1.37  0.66

Technical Training (Cronbach's Alpha = 0.87)
  q23 How much technical training has been provided to performance management staff?  2.94  0.94  0.92
  q24 How much technical training has been provided to managers and supervisors?  2.81  0.85  0.82
  q25 To what extent is training ongoing over time?  2.67  0.85  0.77

Innovation Culture (Cronbach's Alpha = 0.79)
  q34 This agency is a very dynamic and entrepreneurial place. People are willing to stick their necks out and take risks.  4.56  1.36  0.77
  q36 The glue that holds this agency together is a commitment to innovation and development. There is an emphasis on being first.  4.54  1.40  0.93
  q38 My agency emphasizes growth and acquiring new resources. Readiness to meet new challenges is important.  5.27  1.18  0.60

Note: N = 115.

As shown in Table 1, which compares fit measures for the Phase I and Phase II models, the latter model had an acceptable fit. The chi-square/degrees of freedom (df) ratio was 1.68 (chi-square = 215, df = 128), which met the traditional informal rule of thumb that the ratio should be below 2. In this case, the value for the goodness of fit index (GFI, .833)
was adequate. The comparative fit index (CFI, .938) was greater than 0.90, and the root mean square error of approximation (RMSEA, .077) was lower than 0.08, all indicating an acceptable model fit. Indicators demonstrated convergent validity, as all t-values for the loadings were statistically significant, and the standardized factor loadings were nontrivial. All path coefficients for the Phase II model were significant at the .05 level. This model is theoretically grounded overall and proved to be strong. SPSS AMOS output with path coefficients and significance is included in Appendix Table A2, and fit indices for the final Phase II model are shown in Appendix Table A3. The assumption of homogeneity of variances across agency types was met (F 3,111 = .543, p = .654). ANOVA did not support rejecting the null hypothesis of equal mean effectiveness across agency types (F 3,111, eta squared = .062), with agency type associated with just over 6% of the variance in effectiveness. This analysis supports the methodology of aggregating all agency responses into one dataset. Returning to the hypotheses tested in this research, the reduced form model excludes the adoption construct (associated with H1, H2, H3 and H7), the political support construct (associated with H4, H5, H6 and H8) and the organizational learning construct (associated with H10). The final model confirmed the following relationships:

Innovation culture positively impacts performance management effectiveness but is mediated by organizational support and training.
External stakeholder participation is positively associated with the level of managerial effectiveness of performance measurement.
Organizational support has a positive impact on external stakeholder participation and a positive impact on the managerial effectiveness associated with performance management.
Technical training is positively associated with performance management effectiveness.

Model Comparison

In conclusion, goodness of fit indices for the two models are presented in Table 1. The chi-square statistic is reported to enable comparisons between the baseline model from Phase I and the subsequent revised model. On the basis of these overall findings, the revised model appears to best reflect the patterns of association within the dataset. Revisions to the initial hypothesized model are theoretically tenable and led to improved model estimation. The Phase II model appearing as Figure 3 is proposed as the accepted, or final, model. Future SEMs and path analytic studies are recommended, and additional recommendations are discussed in the conclusions chapter.

Findings

As noted by Yang and Hsieh (2007), the interpretation of this model must be done with care. The paths represented in this model may be better suited to explanation rather than prediction. Because of the interactions between constructs, some exogenous constructs may directly impact the endogenous construct of performance management effectiveness. However, these effects may also be indirect, operating through other constructs. Nonetheless, the model is consistent with theory, and the model provides information on
the significance and relative contributions of each construct to overall performance effectiveness for the survey responses in the study. Because variances for PMM effectiveness do not vary significantly across agency type, these results show it is useful to consider what these agencies have in common. In the reduced form model, the construct of performance measurement effectiveness was created with six items from the original survey. These items emphasize decision making, communication, budgeting, productivity, employee motivation and the worthwhile investment in such a system. In this model, strategic planning was not included in these items, because that item did not load as highly as the remaining items. The exclusion of strategic planning may be due to some responses suppressing its strength or some type of interaction effect. The inclusion of decision making is consistent with the need to actually use performance data, which is reflective of strong strategic planning processes (Bryson, 2011). The revised model has three items loading on each of the remaining constructs: innovation culture, stakeholder support, organizational support and training. Each of these variables is significant in the model, with a positive impact on performance management effectiveness. In Phase II of the analysis, the survey responses to items used to create the culture scale reflected an emphasis on innovation culture. The organizational learning, external political support and adoption constructs were eliminated from the final model. This study found support for the role of culture positively impacting performance measurement effectiveness, particularly innovation culture, although this impact is indirect. Culture is an important addition to this model of performance measurement for the
managerial effectiveness of state agencies (Schein, 1996). Culture incorporates artifacts, beliefs, perceptions and behavior (Pettigrew, 1990); the focus on perceptions is consistent with the role of culture in impacting performance management. In particular, the final construct in this model reflects an innovation culture, but its impact on effectiveness is indirect. Innovative culture impacts organizational support and training, which in turn impact the effectiveness of the PMM system. The standardized path coefficient for the relationship between innovation culture and organizational support was .43, and the standardized path coefficient for the relationship between innovation culture and training was .39. This study initially separated organizational learning into a separate independent variable to ascertain its impact on other variables in the model; however, this construct was weak and was eliminated from the final model. One wonders whether organizational learning could be captured by an innovative culture. Based on this analysis and the academic literature, it seems reasonable that innovation culture would be included as a single variable, rather than both culture and organizational learning. Organizational learning may be viewed as learning culture (Yang, B., 2003). Particularly, organizational culture may be considered part of the factors influencing innovation (Sta. Maria, 2003). Further, innovative culture is characterized by a learning orientation (Amabile, 1996; Glynn, 1996) that contributes to innovation (Cohen and Levinthal, 1990) and to performance effectiveness (deLancer Julnes and Holzer, 2001). As shown in the table, the innovation culture item with the strongest factor loading (.93) was the statement that the glue that
holds this agency together is a commitment to innovation and development. Organizational learning can be particularly important for an organization in a changing environment, and it seems unlikely that the state agency leaders in this study are operating in a static environment. Additional considerations regarding the scale for this construct and the need for additional research are discussed in a later chapter. Stakeholder support, organizational support and training are the other significant variables in this model, and these components of the model are consistent with the academic literature as discussed by Yang and Hsieh (2007). The stakeholder support variable includes both citizen and elected official participation in designing performance indicators as well as citizen evaluation of performance. The factor loading of .92 for citizen participation in designing performance indicators is particularly strong. The impact of stakeholder support on performance management effectiveness is positive, but it is the weakest relationship in this model, with a standardized path coefficient of .15. The organizational support factor includes survey items indicating that top managers emphasize and value the performance management process and results and that middle managers actively support performance management. Organizational support was found to be the most important predictor of performance management effectiveness, and also had an impact on stakeholders. The standardized path coefficient for the relationship between organizational support and effectiveness was .61, and the standardized path coefficient for the relationship between organizational support and stakeholders was .37. The training construct includes items for manager, supervisor and staff training along with an emphasis on training over time, and the standardized path coefficient for the relationship between training and effectiveness was .21. The ongoing training component was a new
question not included in the Yang and Hsieh model and, as shown in the table, this item had a strong factor loading of .77. The organizational support and training constructs were correlated with each other. This appears reasonable because organizational support and training are related to each other. Further, the identification of this correlation by AMOS may indicate that another factor exists which could be impacting these relationships. Such an additional factor is not measured by the survey instrument, so additional research is needed. The exclusion of adoption seems reasonable and is particularly consistent with the challenges discussed in the problem statement chapter. The adoption construct consisted of items gauging the comprehensiveness of the measurement system, i.e., what types of performance measures were used by the agency. Examples include input, output and outcome measures. Adoption of a system is necessary, but not sufficient, for performance management success. In the early phases of adopting a system, there is a great deal of discussion about the type of performance measures needed and the extent to which agencies shift emphasis from input measures to output and outcome measures, in particular. The emphasis on outcome measures can be particularly challenging, because these measures are often difficult to measure and may involve factors beyond the direct control of the agency. In the process of implementing performance management systems, some practitioners and policy makers were led to believe that results could come from adopting a system. Yet simply adopting a system or sets of performance measures is not enough for an organization to achieve effectiveness. As discussed in the problem statement, practitioner frustrations were evident that reporting performance measurement information did not seem to result in changes or improvements. The internal and external
factors and ongoing practices and processes for leaders, managers and staff addressed in other components of this analysis must also be in place. In the reduced form model, the removal of the weak external political support construct could be viewed not as an exclusion, but rather as information that was incorporated elsewhere. In particular, the stakeholder variable reflects the support of elected political officials. Further, the survey items for external political support focused on agency authority and autonomy, which may not be as important as other factors in determining performance management effectiveness.


CHAPTER V

CONCLUSIONS

Throughout the fields of public management, public administration and public policy, various schools of thought offer tools and conceptual innovations through which to navigate the challenges facing government (Salamon, 2002). Many of these emphasize the need for government to take a new direction, to engage in new and innovative approaches. This study contributes to research, theory and practice. It adds knowledge to the literature by testing a middle-range theory using state government agencies in the United States. Performance management can be considered a system of interlinking factors and approaches, and the complex relationships between the constructs are shown through the use of SEM. While certain variables have a direct impact on performance management effectiveness, other effects are indirect. This empirical study helps to better understand these mechanisms. This analysis finds many elements of the Yang and Hsieh (2007) PMM model can be generalized to the United States, particularly state government, despite the unique culture of Asia, where the model was originally tested. The use of the Wilson typology was helpful for selecting agencies but did not demonstrate utility in providing variance between agencies to test the hypotheses. The methodology used in this study enhances the potential to use structural equation modeling by increasing sample size and contributes to a greater ability to generalize the results; the survey of leaders of four types of state agencies across the nation is unique and not found elsewhere in the public administration literature. The importance
of organizational support, technical training and stakeholder participation is reinforced in the results of this study. This study incorporates organizational culture into a model of PMM effectiveness and tests for its significance based on survey responses from state agency leaders. The model supported the concept that organizational culture is an important factor in PMM, but its impact was mediated by organizational support and organizational training. In particular, for the state agencies in this study, innovation culture emerged as particularly important to the effectiveness of a PMM system. This study can have value for practitioners seeking to better understand the key factors and practices needed to achieve success with a PMM system. A better understanding of the role of innovation, along with the characteristics of innovation, in achieving performance management effectiveness is useful. This information can be particularly beneficial for the dialogue about the importance of improving state agency performance results, given the rule-bound culture of government and the reluctance of some stakeholders to accept innovation and its associated risks and other implications. As stakeholders demand performance improvements from government, there must be a recognition that innovation culture is important for government leaders. Further, the importance of organizational learning warrants attention; the challenges for practitioners in achieving effectiveness with these systems, combined with the mixed results of academic studies, may be contributing to an attitude of complacency on this topic. As discussed in the problem statement, significant forces are demanding performance improvements, and it is only through an understanding of the factors needed for PMM success that these improvements can be made.


This research is useful for identifying elements of a PMM system that can contribute to enhanced success. PMM systems can be successfully designed and implemented, but maximum gains can be limited by organizational capacity and the rule-based culture of government. Sophisticated and creative approaches to management and learning are needed to maximize the effectiveness of these systems; these successes may be most likely to be achieved in the long run. As discussed in the limitations section, the study is limited by several factors, in that responses are time-bound and can be influenced by personal bias and perception, generalization of survey results is difficult, and leadership attributes are not an explicit consideration in the model. That said, the survey results and model analysis are generally consistent with the literature and other research. Future research focused on state government is needed in a variety of areas. The study could be expanded to other types of state agencies and extended beyond the leadership of an agency to mid-level management and street-level workers. These approaches would result in a larger set of respondents, which would enrich the analysis. In addition to allowing for greater model complexity, these methods would provide the opportunity to compare results between agencies and between organizational levels. Additional case studies of successful (and unsuccessful) agencies and structured interviews of leaders could add to the knowledge base on effective performance management in state agencies. The study could be replicated over time to compare results, including the impacts of different leaders, and to consider the literature on executive replacement and organizational change. These
results could be coupled with an analysis of actual performance data to move beyond the survey of perceptions at a given snapshot in time. The use of performance management metrics and their application in times of fiscal constraint and crisis could be particularly useful areas for future study. Finally, the role of the legislative branch in performance management systems needs to be better understood, as very little is available in the academic literature. This analysis could focus both on the role of the legislature in achieving managerial effectiveness of state agencies and, separately, on the effectiveness of legislative budget, decision and policy development processes.


REFERENCES

Abramson, M. A., & Behn, R. D. (2006). The varieties of CitiStat. Public Administration Review, 66(3), 332-340.
Amabile, T. M. (1996). Creativity in context: Update to The Social Psychology of Creativity. Boulder, CO: Westview Press.
Ammons, D. (2001). Municipal benchmarks: Assessing local performance and establishing community standards (2nd Ed.). Thousand Oaks, CA: Sage Publications.
Ammons, D. (2002). Performance measurement and managerial thinking. Public Performance & Management Review, 25(4), 344-347.
Arbuckle, J., & Wothke, W. (1999). Amos 4.0 user's guide. Chicago: SmallWaters Corporation.
Argyris, C., & Schon, D. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison Wesley.
Association of Government Accountants. (2008). Public attitudes toward government accountability and transparency. Retrieved from: http://www.agacgfm.org/AGA/ToolsResources/CCR/pollreport2008.pdf
Bardach, E. (2000). A practical guide for policy analysis: The eightfold path to more effective problem solving. New York: Chatham House.
Barnard, C. I. (1956). The functions of the executive. Cambridge, MA: Harvard University Press.
Barrett, K., & Greene, R. (2008, March). Measuring performance: The state management report card for 2008. Governing. Retrieved from: http://www.issuelab.org/resource/measuring_performance_the_state_management_report_card_for_2008
Barrett, K., & Greene, R. (2012, October). What killed Alabama's performance measurement plan? Governing. Retrieved from: http://www.governing.com/columns/smart-mgmt/col-what-killed-alabama-performance-measurement-plan.html
Basken, P. (2008, May 9). Community colleges in California feel the heat. Chronicle of Higher Education. Retrieved from: http://chronicle.com/article/Community-Colleges-in/7116/


Behn, R. D. (1998). What right do public managers have to lead? Public Administration Review, 58(3), 209-224.
Behn, R. D. (2001). Rethinking democratic accountability. Washington, DC: Brookings Institution Press.
Behn, R. D. (2006). The varieties of CitiStat. Public Administration Review, 66(3), 332-340.
Behn, R. D. (2008). Adoption of innovation: The challenge of learning to adapt tacit knowledge. In Borins, S. (Ed.), Innovations in government: Research, recognition and replication (pp. 138-158). Washington, DC: Brookings Institution.
Behn, R. D. (2008). Designing PerformanceStat. Public Performance and Management Review, 32(2), 206-235.
Blunch, N. (2013). Introduction to structural equation modeling using IBM SPSS statistics and AMOS (2nd Ed.). Los Angeles: Sage.
Borins, S. (2000). Loose cannons and rule breakers, or enterprising leaders? Some evidence about innovative public managers. Public Administration Review, 60(6), 498-507.
Bourdeaux, C. (2005, November). Do legislatures matter in budgetary reform? Paper presented at the annual meeting of the Association of Budgeting and Financial Management of the American Society for Public Administration, Washington, DC.
Bourdeaux, C. (2006). Do legislatures matter in budgetary reform? Public Budgeting & Finance, 26(1), 120-142.
Bourdeaux, C., & Chikoto, G. (2008). Legislative influences on performance management reform. Public Administration Review, 68(2), 253-265.
Bozeman, B., & Kingsley, G. (1998). Risk culture in public and private organizations. Public Administration Review, 58(2), 109-118.
Brudney, J. L., Herbert, F. T., & Wright, D. S. (1999). Reinventing government in the American states: Measuring and explaining administrative reform. Public Administration Review, 59(1), 19-30.
Bryson, J. M. (2011). Strategic planning for public and nonprofit organizations: A guide to strengthening and sustaining organizational achievement (4th Ed.). San Francisco: Jossey-Bass.


Bryson, J. M., Berry, F. S., & Yang, K. (2010). The state of public strategic management research: A selective literature review and set of future directions. The American Review of Public Administration, 40(5), 495-521.
Burke, B. F., & Costello, B. C. (2005). The human side of managing for results. American Review of Public Administration, 35(3), 270-286.
Burnett, J. (2013). Eye on the prize: States looking at goals, outcomes for budget decisions. Capitol Ideas, 56(2), 12-15.
Caiden, N. (2010). Challenges confronting contemporary public budgeting: Retrospectives/prospectives from Allen Schick. Public Administration Review, 70(2), 203-210.
Cameron, K. S., & Quinn, R. E. (2006). Diagnosing and changing organizational culture: Based on the competing values framework. San Francisco: Jossey-Bass.
Chenok, D. J., Kamensky, J. M., Keegan, M. J., & Ben-Yehuda, G. (2013). Six trends driving change in government. Washington, D.C.: IBM Center for the Business of Government.
Coggburn, J., & Schneider, S. (2003). The quality of management and government performance: An empirical analysis of the American states. Public Administration Review, 63(2), 206-213.
Cohen, W. M., & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 36, 128-143.
Common, R. (2004). Organisational learning in a political environment: Improving policy making in UK government. Policy Studies, 25(1), 35-49.
Council of State Governments. (no date). The State Comparative Performance Measurement Project. Retrieved from: http://www.csg.org/programs/policyprograms/CPM.aspx
Couper, M., Traugott, M., & Lamias, M. (2001). Web survey design and administration. Public Opinion Quarterly, 65(2), 230-253.
Cyert, R. M., & March, J. G. (1963). A behavioral theory of the firm. Englewood Cliffs, N.J.: Prentice Hall.
Damanpour, F., & Schneider, M. (2006). Phases of the adoption of innovation in organizations: Effects of environment, organization and top managers. British Journal of Management, 17, 215-236.

PAGE 87

Damanpour, F., & Schneider, M. (2009). Characteristics of innovation and innovation adoption in public organizations: Assessing the role of managers. Journal of Public Administration Research and Theory, 19(3), 495-522.
de Lancer Julnes, P., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693-708.
DeLeon, L., & Denhardt, R. B. (2000). The political theory of reinvention. Public Administration Review, 60(2), 89-97.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: Wiley & Sons.
Edmondson, A. (2008). The competitive imperative of learning. Harvard Business Review, 86(7/8), 60-67.
for Higher Education. Retrieved from http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan-make-college-more-affordable-better-bargain
Fiol, C. M., & Lyles, M. (1985). Organizational learning. Academy of Management Review, 10(4), 803-813.
Fitzpatrick, J., Goggin, M., Heikkila, T., Klingner, D. E., Machado, J., & Martell, C. (2011). A new look at comparative public administration: Trends in research and an agenda for the future. Public Administration Review, 71(6), 821-830.
Fleenor, J., & Bryant, C. (2002, April). Leadership effectiveness and organizational culture: An exploratory study. Paper presented at the meeting of the Society for Industrial and Organizational Psychology, Toronto, Canada.
Frazier, M., & Swiss, J. (2008). Contrasting views of results-based management tools from different organizational levels. International Public Management Journal, 11(2), 214-234.
Frederickson, H. (2002). Confucius and the moral basis of bureaucracy. Administration & Society, 33(6), 610-628.
Garnett, J., Marlowe, J., & Pandey, S. (2008). Penetrating the performance predicament: Communication as a mediator or moderator of public organizational performance. Public Administration Review, 68(2), 266-281.
Garrett, G., & Reindl, T. (2013). Beyond completion: Getting answers to the questions governors ask to improve postsecondary outcomes. Washington, DC: National Governors Association Center for Best Practices. Retrieved from http://www.nga.org/files/live/sites/NGA/files/pdf/2013/1309BeyondCompletionPaper.pdf
Giannatasio, N. A. (2008). Threats to validity in research designs. In Miller, G., & Yang, K. (Eds.), Handbook of research methods in public administration. New York, NY: CRC Press.
Gilmour, J., & Lewis, D. (2006). Assessing performance budgeting at OMB: The influence of politics, performance, and program size. Journal of Public Administration Research and Theory, 16(2), 169-186.
Glynn, M. A. (1996). Innovative genius: A framework for relating individual and organizational intelligences to innovation. Academy of Management Review, 21, 1081-1112.
Governmental Accounting Standards Board. (n.d.). Project pages: Service efforts and accomplishments reporting. Retrieved from http://www.gasb.org/cs/ContentServer?pagename=GASB/GASBContent_C/ProjectPage&cid=1176156646053
Gruening, G. (2001). Origin and theoretical basis of new public management models. International Public Management Journal, 4(1), 1-25.
Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Handy, C. (1993). Understanding organizations. London: Penguin Books.
Hatry, H. (2008). Emerging developments in performance measurement: An international perspective. In de Lancer Julnes, P., Berry, F. S., Aristigueta, M. P., & Yang, K. (Eds.), International handbook of practice-based performance management (pp. 3-24). Los Angeles: Sage Publications.
Hatry, H., & Davies, E. (2011). A guide to data-driven performance reviews. Washington, DC: IBM Center for the Business of Government.
Heinrich, C. (2007). Evidence-based policy and performance management: Challenges and prospects in two parallel movements. American Review of Public Administration, 37(3), 255-277.
Hibbing, J., & Theiss-Morse, E. (2002). Stealth democracy: Americans' beliefs about how government should work. New York: Cambridge University Press.
Hibbing, J., & Theiss-Morse, E. (1995). Congress as public enemy: Public attitudes toward American political institutions. New York: Cambridge University Press.

Higher Learning Commission. (2013). The criteria for accreditation: Guiding values. Chicago, IL. Retrieved from https://www.ncahlc.org/Criteria-Eligibility-and-Candidacy/guiding-values-new-criteria-for-accreditation.html
Hill, C. J., & Lynn, L. E., Jr. (2005). Is hierarchical governance in decline? Evidence from empirical research. Journal of Public Administration Research and Theory, 15(2), 173-195.
Holzer, M., Mullins, L. B., Ferreira, M., & Hoontis, P. (2012, October). Implementing performance budgeting at the state level: Lessons learned from New Jersey. Paper presented at the Association for Budgeting and Financial Management (ABFM) Annual Conference, New York, NY.
Hood, C. (1995). The "new public management" in the 1980s: Variations on a theme. Accounting, Organizations and Society, 20(2/3), 93-109.
Ho, A. T. (2011). PBB in American local governments: It's more than a management tool. Public Administration Review, 71(3), 391-401.
Hopkins, K. D. (1998). Educational and psychological measurement and evaluation (8th ed.). Needham Heights, MA: Allyn & Bacon.
Hou, Y., Lunsford, R. S., Sides, K. C., & Jones, K. A. (2011). State performance-based budgeting in boom and bust years: An analytical framework and survey of the states. Public Administration Review, 71(3), 370-388.
Huber, J., & Shipan, C. (2000). The costs of control: Legislators, agencies and transaction costs. Legislative Studies Quarterly, 25(1), 25-52.
Ingraham, P. W., Joyce, P., & Kneedler Donahue, A. (2003). Government performance: Why management matters. Baltimore, MD: Johns Hopkins University Press.
Ingraham, P. W., & Lynn, L. E., Jr. (2004). The art of governance: Analyzing management and administration. Washington, DC: Georgetown University Press.
Joyce, P. G. (2003). Linking performance and budgeting: Opportunities in the federal budget process. Washington, DC: IBM Center for the Business of Government.
Joyce, P. G. (2011). The Obama administration and PBB: Building on the legacy of federal performance-informed budgeting? Public Administration Review, 71(3), 356-367.
Jung, T., Scott, T., Davies, H. T. O., Bower, P., Whalley, D., McNally, R., & Mannion, R. (2009). Instruments for exploring organizational culture: A review of the literature. Public Administration Review, 69(6), 1087-1096.

Kelleher, C., & Wolak, J. (2007). Explaining public confidence in the branches of state government. Political Research Quarterly, 60(4), 707-721.
Kelman, S., & Myers, J. (2011). Successfully achieving ambitious goals in government: An empirical analysis. The American Review of Public Administration, 41(3), 235-262.
Kettl, D. F. (1998). Reinventing government: A five-year report card. Washington, DC: Brookings Institution Press.
Kettl, D. F. (2005). The global public management revolution (2nd ed.). Washington, DC: Brookings Institution Press.
Kettl, D. F., & Fesler, J. W. (2005). The politics of the administrative process (4th ed.). Chatham, NJ: Chatham House Publishers.
Kettl, D. F., & Kelman, S. (2007). Reflections on 21st century government management. Washington, DC: IBM Center for the Business of Government.
Khademian, A. M. (2000). Is silly putty manageable? Looking for the links between culture, management, and context. In Brudney, J. L., O'Toole, L. J., Jr., & Rainey, H. G. (Eds.), Advancing public management: New developments in theory, methods and practice (pp. 33-48). Washington, DC: Georgetown University Press.
Kim, Y. (2010). Improving performance in U.S. state governments: Risk-taking, innovativeness, and proactiveness practices. Public Performance and Management Review, 34(1), 104-129.
Klingner, D. E. (2006). Diffusion and adoption of innovations: A development perspective. In Innovations in governance and public administration: Replicating what works. New York, NY: United Nations Report ST/ESA/PAD/SER.E/72.
Klingner, D. E., & Sabet, M. G. (2005). Knowledge management, organizational learning, innovation and technology transfer: What they mean and why they matter. Comparative Technology Transfer and Society, 3(3), 199-210.
Kroll, A. (2012). Why public managers use performance information: Concepts, theory, and empirical analysis (Doctoral dissertation, University of Potsdam).
Leech, N. L., Barrett, K. C., & Morgan, G. A. (2011). SPSS for intermediate statistics: Use and interpretation (4th ed.). New York, NY: Routledge Taylor and Francis Group.
Light, P. C. (1998). Tides of reform. New Haven, CT: Yale University Press, pp. 179-215.

Liner, B., Hatry, H., Vinson, E., Allen, R., Dusenbury, P., Bryant, S., & Snell, R. (2001). Making results-based state government work. Washington, DC: The Urban Institute.
Majumdar, S. (2008). Using the survey as an instrument of inquiry in research. In Miller, G., & Yang, K. (Eds.), Handbook of research methods in public administration (p. 246). New York, NY: CRC Press.
Rainey, H. G. (Eds.), Advancing public management: New developments in theory, methods and practice (pp. 127-152). Washington, DC: Georgetown University Press.
McNabb, D., & Sepic, F. (1995). Culture, climate and total quality management: Measuring readiness for change. Public Productivity and Management Review, 18(4), 369-385.
Melkers, J., & Willoughby, K. (1998). The state of the states: Performance-based budgeting requirements in 47 out of 50. Public Administration Review, 58(1), 66-73.
Melkers, J., & Willoughby, K. (2001). Budgeters' views of state performance-budgeting systems: Distinctions across branches. Public Administration Review, 61(1), 54-64.
Melkers, J., & Willoughby, K. (2004). Staying the course: The use of performance measurement in state governments. Washington, DC: IBM Center for the Business of Government.
Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.
Moore, M. H., & Braga, A. A. (2003). Measuring and improving police performance: The lessons of CompStat and its progeny. Policing, 26(3), 439-453.
Morgan, S. L., & McCall, S. (2012, March). A systems approach to implementing performance-based management and budgeting. Association of Government Accountants audio conference.
Moynihan, D. (2005). Goal-based learning and the future of performance management. Public Administration Review, 65(2), 203-216.
Moynihan, D. (2006). Managing for results in state government: Evaluating a decade of reform. Public Administration Review, 66(1), 77-89.

Moynihan, D., & Lavertu, S. (2012). Does involvement in performance management routines encourage performance information use? Evaluating GPRA and PART. Public Administration Review, 72(4), 592-602.
Moynihan, D., & Pandey, S. (2005). Testing how management matters in an era of government by performance management. Journal of Public Administration Research and Theory, 15(3), 421-439.
Moynihan, D., & Pandey, S. (2010). The big question for performance management: Why do managers use performance information? Journal of Public Administration Research and Theory, 20(4), 849-866.
Moynihan, D. (2013). The new federal performance system: Implementing the GPRA Modernization Act. Washington, DC: IBM Center for the Business of Government.
National Performance Management Advisory Commission. (2010). Retrieved from http://pmcommission.org/index.php?option=com_content&task=view&id=14&Itemid=26
National Conference of State Legislatures. (2003). Legislating for results. Denver, CO.
Newcomer, K., & Caudle, S. (2011). Public performance management systems: Embedding practices for improved success. Public Performance and Management Review, 35(1), 108-132.
Governing. Retrieved from http://www.governing.com/columns/smart-mgmt/col-reinventing-government-book-osborne-gaebler-impact-local-innovation-principles.html
O'Rourke, N., & Hatcher, L. (2013). A step-by-step approach to using SAS for factor analysis and structural equation modeling (2nd ed.). Cary, NC: SAS Institute Inc.
Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector. Reading, MA: Addison-Wesley Publishing Company, Inc.
Ott, S. J. (1989). The organizational culture perspective. Belmont, CA: Wadsworth.
Ott, S. J. (1995). TQM, organizational culture, and readiness for change. Public Productivity and Management Review, 18(4), 365-368.

Ott, S. J., & Shafritz, J. (1994). Toward a definition of organizational incompetence: A neglected variable in organization theory. Public Administration Review, 54(4), 370-377.
Parkin, M. (2003). Microeconomics. Boston, MA: Addison-Wesley.
Pattison, S. D. (2012). Performance information: Impacts on state budgets and government management. Retrieved from http://www.nasbo.org/budget-blog/performance-information-%E2%80%93-impacts-state-budgets-and-government-management
Pawlowsky, P. (2001). The treatment of organizational learning in management science. In Dierkes, M., Antal, A., Child, J., & Nonaka, I. (Eds.), Handbook of organizational learning and knowledge (pp. 61-88). London: Oxford University Press.
Perrin, B. (2006). Moving from outputs to outcomes: Practical advice from governments around the world. Washington, DC: IBM Center for the Business of Government.
Pettigrew, A. M. (1990). Organizational climate and culture: Two constructs in search of a role. In Schneider, B. (Ed.), Organizational climate and culture (pp. 413-433). San Francisco, CA: Jossey-Bass.
Pew Center on the States. (2008). Grading the states 2008. Retrieved March 2, 2008, from http://www.pewcenteronthestates.org/template_page.aspx?id=35360
Piotrowski, S. J., & Rosenbloom, D. H. (2002). Nonmission-based values in results-oriented public management: The case of freedom of information. Public Administration Review, 62(6), 643-657.
Pollitt, C. (2006). Performance management in practice: A comparative study of executive agencies. Journal of Public Administration Research and Theory, 16(1), 25-44.
Poister, T. H., Pasha, O. Q., & Edwards, L. H. (2013). Does performance management lead to better outcomes? Evidence from the U.S. public transit industry. Public Administration Review, 73(4), 625-636.
Presser, S., Couper, M., Lessler, J., Martin, E., Martin, J., Rothgeb, J., & Singer, E. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68(1), 109-130.
Rabovsky, T. M. (2014). Using data to manage for performance at universities. Public Administration Review, 74(2), 260-272.

Radin, B. A. (1998). The Government Performance and Results Act (GPRA): Hydra-headed monster or flexible management tool? Public Administration Review, 58(4), 307-316.
Radin, B. A. (2011). Federalist No. 71: Can the federal government be held accountable for performance? Public Administration Review, Special Issue, S128-S133.
Rainey, H. G. (2003). Understanding and managing public organizations (3rd ed.). San Francisco, CA: Jossey-Bass.
Reilly, B. (2007). Democratization and electoral reform in the Asian-Pacific region. Comparative Political Studies, 40(11), 1350-1371.
Rojas, F. M. (2012). Recovery Act transparency: Learning from states' experience. Washington, DC: IBM Center for the Business of Government.
Rosenbloom, D. H. (1983). Public administrative theory and the separation of powers. Public Administration Review, 43(3), 219-227.
Rubin, I. (2005). The state of state budget research. Public Budgeting and Finance, 25(45), 46-67.
Salamon, L. M. (Ed.). (2002). The tools of government: A guide to the new governance. New York, NY: Oxford University Press.
Schein, E. H. (1996). Culture: The missing concept in organizational studies. Administrative Science Quarterly, 41(2), 229-240.
Schein, E. H. (2002). Organizational culture and leadership. San Francisco, CA: Jossey-Bass.
Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc., Publishers.
Senge, P. M. (1990). The fifth discipline. New York, NY: Doubleday.
Smith, K., Cheng, R., Smith, O., & Schiffel, L. (2008). Performance reporting by state agencies: Bridging the gap between current practice and GASB suggested criteria. Journal of Government Financial Management, 57(2), 42-47.
Sta. Maria, R. (2003). Innovation and organizational learning culture in the Malaysian public sector. Advances in Developing Human Resources, 5(2), 205-214.
State of Washington government management accountability program: Hearing before the New Mexico Legislative Finance Committee, testimony of Arley Williams (2008), p. 7. Retrieved from http://www.nmlegis.gov/lcs/minutes/lfcminmay06.08.pdf
Stillman, R. J. (1996). The American bureaucracy: The core of modern government (2nd ed.). Chicago, IL: Nelson Hall Publishers.
Tandberg, D. A., & Hillman, N. W. (2013). State performance funding for higher education: Silver bullet or red herring? Wisconsin Center for the Advancement of Postsecondary Education. Retrieved from http://www.wiscape.wisc.edu/docs/WebDispenser/wiscapedocuments/pb018.pdf?sfvrsn=4
Terry, L. (1998). Administrative leadership, neo-managerialism and the public management movement. Public Administration Review, 58(3), 194-200.
Thompson, J. (2000). Reinvention as reform: Assessing the National Performance Review. Public Administration Review, 60(6), 508-521.
Umbach, P. D. (2004). Web surveys: Best practices. New Directions for Institutional Research, (121), 23-38.
United States Department of Transportation. (2013). Fact sheets: MAP-21, Moving Ahead for Progress in the 21st Century. Retrieved from http://www.fhwa.dot.gov/map21/factsheets/
United States Department of Transportation. (2013). Fact sheet: MAP-21, Moving Ahead for Progress in the 21st Century: Performance management. Retrieved from http://www.fhwa.dot.gov/map21/factsheets/pm.cfm
United States Government Accountability Office. (2013). Data-driven performance reviews show promise but agencies should explore how to involve other relevant agencies. GAO-13-228. Retrieved from http://www.gao.gov/assets/660/652426.pdf
United States Government Accountability Office. (2013). Executive branch should more fully implement the GPRA Modernization Act to address pressing governance challenges. GAO-13-518. Retrieved from http://www.gao.gov/assets/660/655541.pdf
Van Wart, M. (2005). Organizational investment in employee development. In Condrey, S. (Ed.), Handbook of human resource management in government (2nd ed.) (pp. 272-294). San Francisco, CA: Jossey-Bass.

Villadsen, A. R. (2012). New executives from inside or out? The effect of executive replacement on organizational change. Public Administration Review, 72(5), 731-740.
Vicente, P., & Reis, E. (2010). Using questionnaire design to fight nonresponse bias in web surveys. Social Science Computer Review, 28(2), 251-267.
Walker, R. M. (2008). An empirical evaluation of innovation types and organizational and environmental characteristics: Towards a configuration approach. Journal of Public Administration Research and Theory, 18(4), 591-615.
Walker, R. M., Damanpour, F., & Devece, C. A. (2011). Management innovation and organizational performance: The mediating effect of performance management. Journal of Public Administration Research and Theory, 21(2), 367-386.
Wechsler, B. (1994). Reinventing Florida's civil service system: The failure of reform. Review of Public Personnel Administration, 14, 64-75.
Wellman, M., & VanLandingham, G. (2008). Performance-based budgeting in Florida: Great expectations, more limited reality. In de Lancer Julnes, P., Berry, F. S., Aristigueta, M. P., & Yang, K. (Eds.), International handbook of practice-based performance management (pp. 321-340). Los Angeles, CA: Sage Publications.
Wilson, J. (1989). Bureaucracy: What government agencies do and why they do it. New York, NY: Basic Books.
Yang, B. (2003). Identifying valid and reliable measures for dimensions of a learning culture. Advances in Developing Human Resources, 5(2), 152-162.
Yang, K., & Hsieh, J. (2007). Managerial effectiveness of government performance measurement: Testing a middle-range model. Public Administration Review, 67(5), 861-879.

APPENDIX

Table A1: Comparison of Survey Research Instruments

Managerial effectiveness of performance measurement (dependent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | This organization's performance measurement results can be trusted. | V1: trustworthy | XXX | Q2-0001 (q1) | XXX
2 | This organization's performance measurement can help managers make better decisions. | V2: decision making | XXX | Q2-0002 (q2) | Restated: This organization's performance measurement helps managers make better decisions.
3 | This organization's performance measurement helps communicate more effectively with elected officials. | V3: communication | XXX | Q2-0003 (q3) | XXX
4 | This organization's performance measurement helps communicate more effectively with many external groups. | - | XXX | Q2-0004 (q4) | XXX
5 | This organization's performance measurement helps budget planning and decision making. | V4: budgeting | XXX | Q2-0005 (q5) | XXX
6 | This organization's performance indicators accurately reflect the quality of management. | V5: accurate | XXX | Not used | -
7 | This organization's performance indicators accurately reflect the work of the organization. | - | XXX | Not used | -
8 | The data in the performance measurement system is accurate. | - | XXX | Q3-0001 (q6) | XXX
9 | This organization's performance indicators are reliable. | V6: reliable | XXX | Q3-0002 (q7) | XXX
10 | This organization's investment in performance management is worthwhile. | V7: value | XXX | Q3-0003 (q8) | XXX
11 | This organization's performance measurement improves productivity. | V8: productivity | XXX | Q3-0004 (q9) | XXX
12 | This organization's performance measurement motivates employees. | V9: motivation | XXX | Q3-0005 (q10) | XXX
13 | This organization's performance measurement stimulates organizational learning. | V10: learning | XXX | (q11) | Not used
14 | This organization's performance measurement results are used to adjust strategic planning. | V11: strategic planning | XXX | Q3-0007 (q12) | Restated: This organization's performance measurement results are used in strategic planning.

Note: Items measured on a seven-degree agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.


Stakeholder participation (independent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | Citizens participate in designing this organization's performance indicators. | V12 | XXX | Q4-0001 (q13) | XXX
2 | Elected officials participate in designing performance indicators. | V13 | XXX | Q4-0002 (q14) | XXX
3 | Citizens help this organization evaluate performance. | V14 | XXX | Q5 (q15) | XXX
4 | Stakeholders are familiar with the results of this organization's performance management. | V15 | XXX | Q6 (q16) | XXX
5 | There are open, public meetings of elected or appointed officials and program staff to discuss performance results. | - | - | Q7 (q17) | XXX
6 | Data is reported to and reviewed by legislators and their staff. | - | - | Q8-0001 (q18) | XXX
7 | Data is used by legislators and legislative staff for decision making. | - | - | Q8-0002 (q19) | XXX

Note: Items measured on a seven-degree agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.

Organizational support (independent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | Top managers emphasize and care about the process of performance management. | V16 | XXX | Q9-0001 (q20) | XXX
2 | Top managers value and treat seriously the results of performance management. | V17 | XXX | Q9-0002 (q21) | XXX
3 | All offices and middle managers actively support performance management. | V18 | XXX | Q10 (q22) | XXX

Note: Items measured on a seven-degree agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.

Technical training (independent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | How much technical training has been provided to performance management staff? | V19 | XXX | Q11-0001 (q23) | XXX
2 | How much technical training has been provided to managers and supervisors? | V20 | XXX | Q11-0002 (q24) | XXX
3 | To what extent is training ongoing over time? | - | - | Q12 (q25) | XXX

Note: Items measured on a five-point scale: No training at all, Little training, Some training, Much training, Extensive training.

External political support (independent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | This organization has a high level of autonomy granted by elected officials. | V21 | XXX | Q13 (q26) | XXX
2 | Compared with other government units, this organization enjoys a high level of authority. | V22 | XXX | Q14 (q27) | XXX
3 | The policy initiative or request from this organization is always supported by elected officials. | V23 | XXX | Q15 (q28) | XXX

Note: Items measured on a seven-degree agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.

Performance measurement adoption (independent variable)

No. | Item | Original code | In original survey | Williams survey no. | In final survey
1 | This organization uses input indicators to measure performance. | V24 | XXX | Q16-0001 (q29) | XXX
2 | This organization uses output indicators to measure performance. | V25 | XXX | Q16-0002 (q30) | XXX
3 | This organization uses outcome indicators to measure performance. | V26 | XXX | Q16-0003 (q31) | XXX
4 | This organization uses efficiency indicators to measure performance. | V27 | XXX | Q16-0004 (q32) | XXX
5 | This organization uses satisfaction indicators to measure performance. | V28 | XXX | Q16-0005 (q33) | XXX

Note: Measured on a dichotomous scale: 0 = no, 1 = yes.

Organizational culture (independent variable)

No. | Item | In original survey | Williams survey no. | In final survey
1 | This agency is a very dynamic and entrepreneurial place. People are willing to stick their necks out and take risks. | No | Q17-0001 (q34) | XXX
2 | This agency is very production oriented. A major concern is with getting the job done. People are very personally involved. | No | Q17-0002 (q35) | XXX
3 | The glue that holds this agency together is a commitment to innovation and development. There is an emphasis on being first. | No | Q18-0001 (q36) | XXX
4 | The glue that holds this agency together is the emphasis on task and goal accomplishment. A production orientation is commonly shared. | No | Q18-0002 (q37) | XXX
5 | My agency emphasizes growth and acquiring new resources. Readiness to meet new challenges is important. | No | Q19-0001 (q38) | XXX
6 | My agency emphasizes competitive actions and achievement. Measurable goals are important. | No | Q19-0002 (q39) | XXX

Note: Items measured on a seven-degree agree/disagree scale: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree.


Organizational learning and double-loop learning (independent variable)

No. | Item | In original survey | Williams survey no. | In final survey
1 | Does management question basic agency outcomes and analyze alternative approaches? (Examples would include program design, technology, modes of service delivery.) | No | Q20-0001 (q40) | XXX
2 | Does management use data and science to aid in understanding problems and potential solutions? | No | Q20-0002 (q41) | XXX
3 | Is there a public forum within the agency for staff and management dialogue and learning about important issues and strategies? | No | Q21 (q42) | XXX
4 | Do legislative hearings serve as a forum for dialogue and learning about important issues and strategies? | No | Q22 (q43) | XXX

Note: Measured on a dichotomous scale: 0 = no, 1 = yes.
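The item batteries above are combined into multi-item scales before any model estimation. The following is a minimal sketch, not the dissertation's actual code, assuming the coded responses sit in a pandas DataFrame whose column names follow the q-numbering in Table A1; it shows how the seven-degree agree/disagree responses could be converted to a 1-7 numeric coding and screened for internal consistency with Cronbach's alpha before forming a composite score.

import pandas as pd

# Numeric coding for the seven-degree agree/disagree response scale in Table A1.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Somewhat disagree": 3,
          "Neutral": 4, "Somewhat agree": 5, "Agree": 6, "Strongly agree": 7}

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses for three effectiveness items (q2, q9, q10); a real
# analysis would read the full survey export instead.
raw = pd.DataFrame({
    "q2":  ["Agree", "Somewhat agree", "Strongly agree", "Neutral"],
    "q9":  ["Agree", "Agree", "Strongly agree", "Somewhat disagree"],
    "q10": ["Somewhat agree", "Agree", "Agree", "Neutral"],
})

coded = raw.replace(LIKERT).astype(float)   # 1-7 numeric item scores
print("Cronbach's alpha:", round(cronbach_alpha(coded), 3))
print("Composite effectiveness score per respondent:")
print(coded.mean(axis=1))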


Table A2: Statistical Output #1

Regression Weights (Group number 1 - Default model)
Path                              Estimate    S.E.    C.R.      P
Org_Support   <--- Culture          .398      .092    4.343    ***
Stakeholder   <--- Org_Support      .465      .122    3.820    ***
Training      <--- Culture          .209      .057    3.679    ***
Effectiveness <--- Org_Support      .647      .096    6.732    ***
Effectiveness <--- Stakeholder      .130      .063    2.045    .041
Effectiveness <--- Training         .379      .152    2.495    .013
q38 <--- Culture                    .541      .087    6.225    ***
q36 <--- Culture                   1.000
q34 <--- Culture                    .800      .103    7.784    ***
q23 <--- Training                  1.238      .114   10.864    ***
q24 <--- Training                  1.000
q25 <--- Training                   .938      .102    9.179    ***
q22 <--- Org_Support                .758      .086    8.846    ***
q20 <--- Org_Support                .960      .050   19.334    ***
q21 <--- Org_Support               1.000
q2  <--- Effectiveness              .829      .062   13.395    ***
q4  <--- Effectiveness              .859      .077   11.123    ***
q5  <--- Effectiveness              .834      .074   11.255    ***
q8  <--- Effectiveness              .842      .072   11.711    ***
q9  <--- Effectiveness             1.000
q10 <--- Effectiveness              .957      .073   13.135    ***
q15 <--- Stakeholder                .574      .128    4.501    ***
q14 <--- Stakeholder                .688      .143    4.806    ***
q13 <--- Stakeholder               1.000

Standardized Regression Weights (Group number 1 - Default model)
Path                              Estimate
Org_Support   <--- Culture          .433
Stakeholder   <--- Org_Support      .374
Training      <--- Culture          .390
Effectiveness <--- Org_Support      .614
Effectiveness <--- Stakeholder      .153
Effectiveness <--- Training         .210
q38 <--- Culture                    .596
q36 <--- Culture                    .930
q34 <--- Culture                    .765
q23 <--- Training                   .917
q24 <--- Training                   .821
q25 <--- Training                   .772
q22 <--- Org_Support                .663
q20 <--- Org_Support                .937
q21 <--- Org_Support                .961
q2  <--- Effectiveness              .872
q4  <--- Effectiveness              .793
q5  <--- Effectiveness              .798
q8  <--- Effectiveness              .816
q9  <--- Effectiveness              .890
q10 <--- Effectiveness              .864
q15 <--- Stakeholder                .528
q14 <--- Stakeholder                .589
q13 <--- Stakeholder                .921

Covariances (Group number 1 - Default model)
Pair                              Estimate    S.E.    C.R.      P
z5 <--> z3                          .356      .084    4.244    ***

Correlations (Group number 1 - Default model)
Pair                              Estimate
z5 <--> z3                          .520

Variances (Group number 1 - Default model)
Variable                          Estimate    S.E.    C.R.      P
Culture                            1.674      .303    5.527    ***
z3                                 1.153      .175    6.599    ***
z2                                 1.886      .458    4.118    ***
z5                                  .407      .081    5.029    ***
z1                                  .498      .096    5.173    ***
e38                                 .890      .129    6.878    ***
e36                                 .262      .169    1.553    .120
e34                                 .759      .147    5.166    ***
e23                                 .139      .045    3.075    .002
e24                                 .231      .042    5.543    ***
e25                                 .286      .046    6.214    ***
e22                                1.039      .143    7.272    ***
e21                                 .116      .044    2.654    .008
e20                                 .180      .045    4.017    ***
e2                                  .342      .056    6.142    ***
e4                                  .685      .101    6.800    ***
e5                                  .622      .092    6.773    ***
e8                                  .561      .084    6.671    ***
e9                                  .414      .071    5.857    ***
e10                                 .491      .079    6.244    ***
e15                                1.867      .278    6.711    ***
e14                                1.955      .314    6.228    ***
e13                                 .391      .364    1.073    .283
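The layout of Table A2 matches IBM SPSS AMOS output (see Blunch, 2013): unstandardized regression weights with standard errors and critical ratios, followed by standardized weights, covariances, correlations, and variances. For readers without AMOS, the same measurement and structural model can be expressed in lavaan-style syntax and estimated with the open-source Python package semopy; the sketch below is an illustrative re-specification under that assumption, not the original analysis, and the survey DataFrame and file name are hypothetical. Listing the reference indicator first in each measurement equation (q36, q24, q21, q13, q9) mirrors the loadings fixed at 1.000 in the output above.

import pandas as pd
import semopy

# Measurement model (indicators from Table A1) and structural paths (Table A2).
MODEL_DESC = """
Culture       =~ q36 + q34 + q38
Training      =~ q24 + q23 + q25
Org_Support   =~ q21 + q20 + q22
Stakeholder   =~ q13 + q14 + q15
Effectiveness =~ q9 + q2 + q4 + q5 + q8 + q10
Org_Support   ~ Culture
Training      ~ Culture
Stakeholder   ~ Org_Support
Effectiveness ~ Org_Support + Stakeholder + Training
"""

def fit_sem(survey: pd.DataFrame) -> pd.DataFrame:
    """Fit the model by maximum likelihood and return the estimates table."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)                    # expects one column per observed item
    return model.inspect(std_est=True)   # analogue of the AMOS weight tables

# Usage (the file name is hypothetical):
# survey = pd.read_csv("williams_survey_coded.csv")
# print(fit_sem(survey))

The residual covariance reported between z5 and z3 would be added as an extra covariance term between the corresponding endogenous constructs; it is omitted from the sketch because the AMOS labels above do not identify which constructs z3 and z5 belong to.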


Table A3: Modification Indices (Group number 1 - Default model)

Covariances (Group number 1 - Default model)
Pair                         M.I.     Par Change
e13 <--> z3                 6.705       .282
e13 <--> z5                 4.600       .143
e14 <--> z1                 6.509       .277
e2  <--> Culture            4.495       .175
e20 <--> e13               11.225       .191
e20 <--> e14                4.767       .152
e22 <--> Culture            4.084       .268
e22 <--> z3                 5.162       .215
e22 <--> z5                 8.125       .166
e22 <--> z1                 7.969       .219
e22 <--> e13                4.982       .258
e25 <--> e22                9.330       .171
e24 <--> z1                 8.791       .119
e24 <--> e8                 8.763       .118
e23 <--> Culture            4.013       .135
e23 <--> e8                 5.327       .091
e34 <--> e24                4.918       .105
e38 <--> z3                10.007       .281

Variances (Group number 1 - Default model)
Variable                     M.I.     Par Change
(no entries)

Regression Weights (Group number 1 - Default model)
Path                         M.I.     Par Change
q13 <--- q20                5.759       .219
q15 <--- Org_Support        4.466       .238
q15 <--- Effectiveness      5.568       .254
q15 <--- q10                7.707       .262
q15 <--- q9                 4.479       .197
q15 <--- q2                 5.513       .258
q15 <--- q20                6.203       .268
q2  <--- Culture            4.495       .104
q2  <--- q36                4.506       .092
q20 <--- q13                4.251       .062
q22 <--- Culture            4.084       .160
q22 <--- Training           6.075       .363
q22 <--- Effectiveness      4.523       .169
q22 <--- q10                4.632       .150
q22 <--- q5                 4.468       .157
q22 <--- q4                 6.639       .184
q22 <--- q2                 4.003       .163
q22 <--- q25               13.037       .417
q22 <--- q23                4.551       .222
q22 <--- q36                4.717       .152
q25 <--- q22                4.903       .088
q24 <--- q8                 7.938       .109
q24 <--- q34                5.722       .089
q23 <--- Culture            4.013       .081
q23 <--- q34                4.902       .081
q38 <--- Org_Support       14.895       .302
q38 <--- Training           9.450       .424
q38 <--- Effectiveness     12.286       .262
q38 <--- q10                4.803       .143
q38 <--- q9                 9.361       .197
q38 <--- q8                 8.736       .208
q38 <--- q5                12.602       .247
q38 <--- q4                13.727       .248
q38 <--- q2                 7.096       .203
q38 <--- q20               16.292       .301
q38 <--- q21               13.079       .266
q38 <--- q22                4.920       .148
q38 <--- q25                4.781       .236
q38 <--- q24                7.407       .294
q38 <--- q23                7.795       .272
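Each modification index in Table A3 approximates the drop in the model chi-square statistic that would result from freeing the corresponding fixed parameter, and the accompanying Par Change value is the expected estimate of that parameter once freed. A common screen flags indices larger than 3.84, the .05 critical value of chi-square with one degree of freedom, before deciding on theoretical grounds whether any respecification is defensible. The short pandas sketch below illustrates that screen; the three rows are copied from Table A3, and the code is illustrative only, not part of the original analysis.

import pandas as pd

# A few modification-index rows transcribed from Table A3 for illustration.
mod_indices = pd.DataFrame(
    [("q38 <--- q20", 16.292, 0.301),
     ("q38 <--- Org_Support", 14.895, 0.302),
     ("q22 <--- q25", 13.037, 0.417)],
    columns=["parameter", "MI", "par_change"],
)

CHI2_CRIT_1DF = 3.84  # critical value of chi-square, 1 df, alpha = .05

# Keep only parameters whose expected chi-square improvement exceeds the cutoff.
flagged = (mod_indices[mod_indices["MI"] > CHI2_CRIT_1DF]
           .sort_values("MI", ascending=False))
print(flagged)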