Citation
Exploring stakeholder influence on the development of performance measures

Material Information

Title:
Exploring stakeholder influence on the development of performance measures
Creator:
Butz, Eric G. ( author )
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
1 electronic file

Subjects

Subjects / Keywords:
Public administration -- Decision making ( lcsh )
Public administration -- Evaluation ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
The modern public manager is concerned with organizational performance and is trained in performance measurement. Yet, despite substantial investment of time and money by public organizations, the assumed benefits of performance measurement (improved decision making, increased organizational effectiveness, more transparent oversight) do not always materialize. Extant theory has proven inadequate in trying to understand why some public managers are driven to develop new performance measures while others are content relying on existing measures or ignoring performance measurement altogether. This study bridges this theoretical gap by proposing a model describing the influence of political and operational stakeholders on the measurement development of public managers. In so doing, it advances an explanation for why certain public management settings are amenable to innovation regarding performance measurement development.
Review:
This study is guided by the research question: How do political and operational stakeholders impact the development of performance measures in public organizations? To answer this question, the study examined the performance measurement initiatives of five Colorado cities. The empirical analysis relied on data from city documentation and semi-structured interviews with city managers. Multivariate regression procedures were used to estimate the effects of stakeholder influence on the development of performance measures. Quantitative findings were complemented by qualitative interview data from social equity administrators, performance measurement administrators, and department-level planners.
Review:
The results in this dissertation provide evidence that City Council and departmental leadership support for performance measurement drives the development of new measures and the evaluation of existing measures. Findings also suggest that the impact of stakeholders on measurement development is mediated by the structure of city and departmental performance review processes. Specifically, the timescale of performance measurement review processes and the degree of interaction between political and operational stakeholders affect measurement development.
Review:
Taken collectively, these results offer an interesting proposition for public organizations: design of effective performance measurement processes must extend beyond operational routines to include the strategic review processes of political stakeholders.
Review:
The primary contribution of this dissertation is to provide a better understanding of how political stakeholders impact the innovation tendencies of public managers regarding performance measurement, and the mediating effects of the structure of performance measurement processes.
Thesis:
Thesis (Ph.D.)-- University of Colorado Denver
Bibliography:
Includes bibliographical references.
System Details:
System requirements: Adobe Reader.
Statement of Responsibility:
by Eric G. Butz.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
Copyright Eric Butz. Permission granted to University of Colorado Denver to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
on10821 ( NOTIS )
1082139891 ( OCLC )
on1082139891

Full Text

PAGE 1

EXPLORING STAKEHOLDER INFLUENCE ON THE DEVELOPMENT OF PERFORMANCE MEASURES

by
ERIC G. BUTZ
B.S., Dartmouth College, 1990
M.S., University of Colorado, Boulder, 1995
M.P.P., University of Denver, 2009

A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Public Affairs, 2018

PAGE 2

2018 ERIC G. BUTZ ALL RIGHTS RESERVED

PAGE 3

This thesis for the Doctor of Philosophy degree by Eric G. Butz has been approved for the Public Affairs Program by Tanya Heikkila, Chair; John Ronquillo, Advisor; William Swann; Benoy Jacob. Date: May 12, 2018

PAGE 4

Butz, Eric G. (Ph.D., Public Affairs)
Exploring Stakeholder Influence on the Development of Performance Measures
Thesis directed by Assistant Professor John Ronquillo

ABSTRACT

The modern public manager is concerned with organizational performance and is trained in performance measurement. Yet, despite substantial investment of time and money by public organizations, the assumed benefits of performance measurement (improved decision making, increased organizational effectiveness, more transparent oversight) do not always materialize. Extant theory has proven inadequate in trying to understand why some public managers are driven to develop new performance measures while others are content relying on existing measures or ignoring performance measurement altogether. This study bridges this theoretical gap by proposing a model describing the influence of political and operational stakeholders on the measurement development of public managers. In so doing, it advances an explanation for why certain public management settings are amenable to innovation regarding performance measurement development. This study is guided by the research question: How do political and operational stakeholders impact the development of performance measures in public organizations? To answer this question, the study examined the performance measurement initiatives of five Colorado cities. The empirical analysis relied on data from city documentation and semi-structured interviews with city managers. Multivariate regression procedures were used to estimate the effects of stakeholder influence on the development of performance measures. Quantitative findings were complemented by qualitative interview data from social equity administrators, performance measurement administrators, and department-level planners.

PAGE 5

The results in this dissertation provide evidence that City Council and departmental leadership support for performance measurement drives the development of new measures and the evaluation of existing measures. Findings also suggest that the impact of stakeholders on measurement development is mediated by the structure of city and departmental performance review processes. Specifically, the timescale of performance measurement review processes and the degree of interaction between political and operational stakeholders affect measurement development. Taken collectively, these results offer an interesting proposition for public organizations: design of effective performance measurement processes must extend beyond operational routines to include the strategic review processes of political stakeholders. The primary contribution of this dissertation is to provide a better understanding of how political stakeholders impact the innovation tendencies of public managers regarding performance measurement, and the mediating effects of the structure of performance measurement processes.

The form and content of this abstract are approved. I recommend its publication.

Approved: John Ronquillo

PAGE 6

DEDICATION

To Amy, for your encouragement, patience, laughter, and love.

PAGE 7

ACKNOWLEDGEMENTS

I owe a debt of gratitude to my dissertation committee for their support and insight. Benoy Jacob, for taking the time to be a mentor and a friend. You helped me weed out the extraneous and hone in on the gap. I always looked forward to our discussions. John Ronquillo, my dissertation advisor, whose views on academia and life I took to heart and will hold onto. Tanya Heikkila, who seemed to know where my dissertation was heading before I did. I have learned by your example how to develop concise and coherent arguments. William Swann, whose insight into the literature came at the exact right time. Thank you to the faculty and staff at the School of Public Affairs for giving me the opportunity to develop myself as a scholar. There are certain friends who seem to always come through when I need it the most. Thank you, Jonathan Schaefer, for helping me understand my narrative. Lastly, I never would have made it without the support of my family. Ron and Mary, my parents, have taught me to believe in myself. Amy, my wife, talked me down from the ledge more times than I care to remember. Delaney, my beautiful, strong daughter, never wavered in her support for my academic quest. Tali, my daughter and the creative light of my life, gave me the courage to branch out. Bridger, my son, radiated positive energy that I tapped into when things got tough. And Tait, who was a baby when this started, reminded me about what is important.

PAGE 8

TABLE OF CONTENTS

CHAPTER

I. INTRODUCTION 1
   Empirical Context 5
   Summary of the Methodology 6
   Dissertation Roadmap 7
II. LITERATURE REVIEW 9
   Performance Measurement 9
III. THEORY AND HYPOTHESES 18
   Introduction and Research Question 18
   Theoretical Model 20
   Stakeholders 23
   Performance Measurement Processes 28
   Hypotheses 34
   Summary 43
IV. RESEARCH DESIGN 45
   Empirical Context 45
   Variables 47
   Comparative Case Study 53
   Participants 61

PAGE 9

   Instrument: Interviews and Document Analysis 62
   Research Design Limitations 65
V. DESCRIPTIVE COMPARISON OF PERFORMANCE MEASUREMENT DEVELOPMENT ACROSS FIVE COLORADO CITIES 67
   Introduction 67
   Summary of Findings 69
   Arvada 75
   Boulder 83
   Denver 92
   Fort Collins 99
   Longmont 105
   Conclusion 111
VI. EXAMINING THE RELATIONSHIP BETWEEN STAKEHOLDER INFLUENCE AND PERFORMANCE MEASUREMENT VARIATION 117
   Introduction 117
   Stakeholder Influence on Measurement Variation 118
   Stakeholder Influence on Social Equity Measurement Variation 136
   Key Findings 137
VII. CONCLUSION 139
   Key Findings 140

PAGE 10

   Research Limitations 146
   Contributions to the Literature 148
   Future Research 150
   Conclusion 151
REFERENCES 154
APPENDIX
   A. INTERVIEW INSTRUMENT 168

PAGE 11

LIST OF TABLES

TABLE

1. Operational and political stakeholders that influence management behavior. 24
2. Definition of measurement variation. 48
3. Interview questions used to identify city-level measurement variation. 49
4. Independent variables and associated interview questions. 49
5. Interview questions used to identify influence of the Mayor or City Council. 50
6. Interview questions used to identify influence of agency leadership. 51
7. Interview questions used to identify influence of frontline workers. 51
8. Interview questions used to understand city-level performance measurement processes. 52
9. … 52
10. Colorado city statistics (United States Census Bureau, 2017). 55
11. Independent variables and the associated proxy measures. 57
12. … 59
13. Classification of cities into potential case types. 60
14. Case studies selected for study. Data from Colorado Department of Local Affairs (2015) and American Community Survey (2015). 61
15. City documents analyzed for the study. 64
16. Summary of the findings for Arvada, Colorado (1 = to no extent; 5 = to a very great extent). 83
17. Summary of the findings for Boulder, Colorado (1 = to no extent; 5 = to a very great extent). 91

PAGE 12

18. Summary of the findings for Denver, Colorado (1 = to no extent; 5 = to a very great extent). 99
19. Summary of the findings for Fort Collins, Colorado (1 = to no extent; 5 = to a very great extent). 105
20. Summary of the findings for Longmont, Colorado (1 = to no extent; 5 = to a very great extent). 111
21. Summary of variable rankings regarding performance measurement influence and measurement development (1 = to no extent; 5 = to a very great extent). 112
22. Summary of variable rankings regarding social equity influence and the extent of social equity performance measures (1 = to no extent; 5 = to a very great extent). 113
23. Interviewee responses to close-ended questions regarding the influence of different stakeholders on measurement variation. The number of responses are listed in the table. 119
24. Two-sample t-test examining responses regarding the influence of different stakeholders on the development of new performance measures (N=10). 120
25. … Alpha measures for the close-ended questions. 122
26. Pairwise correlation coefficients examining the relationship between stakeholder influence and measurement variation. Significance level is given in (parens). 123
27. Ordinary least squares analysis examining the impact of stakeholder influence on measurement variation. Coefficients are shown with corresponding significance. Standard errors are shown in (parens). 125
28. Timescales for measurement reviews and the level of interaction between stakeholders for different performance measurement processes. 135
29. Interviewee responses to close-ended questions regarding the influence of stakeholders regarding social equity. The number of responses are listed in the table. 137
30. … Each hypothesis is labeled as supported or unsupported by study data; in some cases, opposite effects were found (e.g., H3). 144
31. Propositions regarding the impact of performance measurement review processes on the development of measures. 145

PAGE 13

LIST OF FIGURES

FIGURE

1. Measurement development is defined in terms of processes and products. Measurement variation is the dependent variable (Y). Performance measurement processes are the causal mechanisms (M) that connect stakeholder influence to measurement variation. 21
2. … variation. 26
3. Operational stakeholders serve as a secondary independent variable (X2) that affects the measurement variation decisions of public managers. 27
4. Performance measurement processes serve as mechanisms by which stakeholder influence affects measurement variation. 29
5. The model under investigation by this study. It includes measurement variation as the dependent variable and stakeholder influence as the independent variables. 47
6. The proposed theoretical model comparing stakeholder influence against measurement variation. 67
7. Levels of stakeholder influence over measurement variation. 71
8. Levels of general performance measurement variation and social equity measurement variation. 73
9. Levels of stakeholder commitment to social equity. 75
10. The proposed theoretical model comparing stakeholder influence against measurement variation. 117
11. … 133
12. … 133
13. … 134
14. Results of OLS (** p<0.05, * p<0.10). 142

PAGE 14

CHAPTER I
INTRODUCTION

This study asks the question: How do political and operational stakeholders influence the development of performance measures in public organizations? The assumption of performance measurement is that collecting and analyzing data on organizational performance goals can improve internal decision making and organizational effectiveness while assisting those in external oversight roles to better understand organizational performance. Performance measurement has been presented as a useful management tool since the early twentieth century (Streib & Poister, 1999), and in recent decades the concept of performance has become central to public management reform (Moynihan, 2008, p. 3). As a result, modern public managers are concerned with performance (Meier, Favero, & Zhu, 2015) and are trained in performance measurement (Hatry, 2014). Working in organizations that set performance goals and use performance indicators to track progress against those goals, they understand the difficulties surrounding the collection of performance data and have used data internally to support decision making or externally to report performance results to stakeholders (Bromberg, 2009). Therefore, a key role of public managers is developing and maintaining their measurement program. At a macro level, their actions contribute to the development of an organizational culture that supports or discourages the practice of performance measurement (Hood, 2012). At a micro level, their decisions determine if and how performance measures will be used and the characteristics and structure of the measures themselves (Behn, 2003).

PAGE 15

Measurement development may be understood in terms of processes and products.1 Performance measurement processes are the activities that facilitate performance measurement in public organizations. Performance measurement products are the outputs of the measurement processes. Specifically, they are the set of performance measures that are developed and used by public managers. This study has two objectives related to measurement processes and products. The first objective is to better understand the performance measurement processes in use by city governments. The second objective is to examine how different stakeholders affect the development of new measures or the updating of existing measures. Management decisions to develop measures may be driven from a variety of sources (Behn, 2003). A common example is when a manager is tasked with a new goal. One response to this task may be deciding how to evolve their performance measurement strategy. Is data collection and analysis worth the effort, and will it contribute to achieving the goal? If so, are existing measures sufficient for the task or do new measures need to be added? If new measures are required, where will the data for those measures come from and, once collected, how can the data best be used? Scholars are beginning to understand the factors affecting public managers' decisions about performance measurement. These decisions occur in organizations that operate in an environment characterized by multiple values (Radin, 2006), goal ambiguity (Moynihan, 2015), and stakeholders with competing interests (Moynihan, 2008). In addition, managers may be more heavily influenced by organizational norms, or they may react in more logical, rule-

1 The distinction between processes and products is borrowed from Gerlak and Heikkila (2011).

PAGE 16

following ways. In other words, decisions are influenced by the circumstances of their use and the experiences and motivations of the individuals using them (Hood, 2012; Moynihan, 2015). An understanding of how and why managers develop performance measures is important for several reasons. While many organizations are investing in performance measurement, the assumed benefits (improved decision making, increased organizational effectiveness, more transparent oversight) do not always materialize (Sanger, 2008). The empirical question is why some public managers are inclined to develop new measures, and why some public organizations have adapted their processes to encourage the development of performance measures. While scholars and practitioners have actively worked to improve our understanding of this performance measurement puzzle, questions remain. A growing literature has documented the impact of management on performance (Boyne, 2003; Sanger, 2013) and the factors that encourage and impede the use of performance measures by managers (Moynihan & Hawes, 2012; Kroll, 2015), but there is substantially less focus on why managers make the performance-related decisions they do (Meier, Favero, & Zhu, 2015). Specifically, why do some managers decide to search out new performance measures and analyze and improve existing measures while others are content relying on existing measures or ignoring performance measurement altogether? What drives that innovative tendency? This study argues that political and operational stakeholders affect how managers develop measures. When these stakeholders exert their influence, managers are more likely to develop new performance measures and utilize organizational processes that encourage the development of performance measures. Without stakeholder oversight, managers will rely on existing processes and are less likely to innovate with regard to performance measurement. The theoretical motivation of this study is to understand the role stakeholders play in management

PAGE 17

decision making surrounding measurement development. While scholars studying performance measurement in public organizations have developed models of measurement adoption (e.g., de Lancer Julnes & Holzer, 2001) and others have explored the characteristics and dynamics of measures (e.g., Behn, 2013), none have developed adequate causal models of measurement change. In response, this study will develop a systematic theory of how different stakeholders impact measurement development in public organizations. An empirical motivation of this study is to fill gaps in our understanding of how public managers develop and update measures: the measurement development processes. Such an understanding has important practical applications. First, it can help public managers determine how conducive their existing management environment is to developing performance measures. Second, understanding how different stakeholders impact measurement development gives public managers a framework for engaging those stakeholders. Both have the goal of designing management processes that produce measures supporting management decision making. The same understanding can help public managers uncover existing institutional factors that may be subverting the development of performance measures. The dissertation contributes to the literature examining data-centric innovation in public organizations. It explores the prominent data-centric practice of performance measurement: the development of performance goals, the adoption of indicators that can be used to provide evidence of progress against the goals, the collection of indicator data, and the analysis of indicator data to improve decision making and organizational effectiveness. Specifically, it contributes to scholarship that emphasizes the political nature of data (e.g., Moynihan, 2008; Radin, 2006).

PAGE 18

March (2003) notes that organizations change through rational processes of consequential choice and political processes of negotiation among conflicting interests. Similarly, the performance measurement literature distinguishes between operational processes, where performance measurement serves as a cybernetic (Zweig, Webster, & Scott, 2009) feedback tool for improving organizational efficiency, and political processes, where performance measurement is deployed in turf wars, budget negotiations, or political arguments (Kroll, 2015). The literature suggests that in political processes the interpretation of data is highly subjective, controversial, and role-induced (Kroll, 2015). This politicization of data may encourage non-purposeful (Moynihan, 2009) and dysfunctional uses of performance measurement. This study departs from scholarship that views politicized performance measurement processes as a liability and instead suggests that the politicization of data is neither inherently positive nor negative when it comes to its impact on organizational performance. Instead, this study builds on scholarship that assumes that data serves both political and operational objectives and that stakeholders and public managers are mindful of this dual role and use it to their advantage (e.g., Moynihan & Hawes, 2012). Thus, a proposition central to the study is that the politicization of data can serve as a source of performance measurement innovation for public organizations. This proposition is integrated into the theoretical model that is introduced below and is developed into hypotheses that will be tested by the study.

Empirical Context

Measurement development is explored using data collected from a study examining the adoption of social equity measures in local government social equity programs. City social equity initiatives are aimed at closing quality-of-life gaps for individuals and groups. The recent

PAGE 19

upsurge in interest in social equity in city governments provides us with a unique opportunity to observe how measures evolve within a shifting landscape. That is, it enables us to observe how social equity measures evolve alongside new city-level social equity programs and existing performance measurement programs.

Summary of the Methodology

This study employs a comparative case analysis to provide evidence regarding the impact of various stakeholders on measurement development. The goal of the case study is to promote analytic generalizability: the demonstration of empirical support for the proposed theoretical model and the ability to generalize the theory beyond the cases examined in this study (Yin, 2010). The performance measurement programs of five Colorado cities were examined in the study. While the cities served as the unit of analysis, the main units of observation were public managers within city agencies. Interviews were conducted with social equity administrators, performance measurement administrators, and department-level planners. The interview data and documents were analyzed with the goal of building theory by identifying meaningful concepts and understanding positive relationships within the data (Leech & Onwuegbuzie, 2008). In addition, document analysis was conducted to develop an understanding of the roles of operational and political actors in performance measurement, the origin and development of performance goals, and how these actors and goals influence the development of performance measures. In-depth semi-structured interviews were held with two public managers from each of the five Colorado cities. Open-ended interview questions were structured to allow participants to describe the development and use of performance measures in general as well as the utilization

PAGE 20

of social equity measures in particular. This data was used to develop a descriptive comparison of measurement development processes and outputs across the five cities. Interviewees were also asked, using a series of close-ended questions, to rate levels of commitment to performance measurement for different stakeholders. This data was used in a quantitative analysis examining the relationship between stakeholder influence and measurement development.

Dissertation Roadmap

The dissertation is organized into seven chapters: Chapter I introduces the research topic, the literature review, the theoretical model, and the research design. Chapter II reviews the academic literature. Chapter III uses concepts from the literature to develop a theoretical model that relates the measurement development behavior of managers to different types of stakeholders. A research question is developed which, when answered, fills the knowledge gap surrounding measurement development. Next, the literature is explored to identify the stakeholders that have a strong likelihood of impacting measurement development. Hypotheses concerning the relationship between these stakeholders and measurement development are formulated. Chapter IV presents a detailed discussion of the research design. It describes the empirical context and case study approach, details the variables, and describes the data analysis techniques that will be used. Chapter V explores the case studies. It reviews the performance measurement and social equity environments of the cities and describes the study's independent and dependent variables.

PAGE 21

Chapter VI develops the causal relationship by comparing and contrasting the observed causal effect, the extent of the observed effect, and the relative uncertainty of that effect (Gerring, 2007) between the stakeholders (the independent variables) and measurement development (the dependent variable) for each case. Empirical evidence in support of and in conflict with the study's hypotheses is presented. Chapter VII synthesizes the results of the previous chapters, frames the findings in terms of the original research question and objectives of the study, and discusses future research avenues.

PAGE 22

CHAPTER II
LITERATURE REVIEW

Performance Measurement

Performance management generates performance information through strategic planning and performance measurement routines. Performance measurement processes are the management activities directed at the adoption and development of performance indicators and the collection and dissemination of indicator data. The development and use of measures by public managers has garnered extensive attention in the performance management literature. Attention to performance measurement is not new and the academic literature is vast (Schatterman, 2008). Empirical and theoretical examinations of performance measurement as a managerial tool have a deep history in public administration (Taylor, 1916, 2005), and performance measurement has been presented as a useful local government management tool since the early twentieth century (Streib & Poister, 1999). Early measurement efforts were often … (Poister & Streib, 1999, p. 326). In the 1990s there was a resurgence of interest in performance measurement (e.g., Wholey & Hatry, 1992; Epstein, 1992), driven in part by several government and academic resolutions urging governments to institute systems for goal setting and performance measurement (Poister & Streib, 1999). Recent decades have seen a resurgence in efforts to make government more performance oriented. As a result, the routines of performance management serve as tools by which governments structure relationships, state values, and allocate resources with

PAGE 23

employees and third parties. Of the routines surrounding performance management, it is performance measurement that has garnered extensive attention in the literature. Rainey (2009) suggests that government investment in the development of performance goals and the measurement of progress against those goals is one of the most prominent of these efforts. City and county governments have adopted performance measurement systems and practices, as suggested by Poister and Streib (1999), who reviewed a number of surveys and studies examining performance measurement adoption by local governments. They note that emphasis on performance measurement was stimulated by resolutions of the Government Accounting Standards Board (1989), the National Academy of Public Administration (1991), the American Society for Public Administration (1992), and the National Governors Association (1994). Reflecting the parallel rise of performance-centric reforms across different levels of government, the 1993 Government Performance and Results Act (GPRA) sought to improve program efficiency and effectiveness by requiring Federal managers to set program goals and by providing them with information about program results and service delivery (Office of Management and Budget, 1993). GPRA also sought to improve congressional decision making and the efficiency of Federal programs (Office of Management and Budget, 1993). These resolutions and reforms translated into elevated adoption rates by government agencies. A survey of municipal governments showed a large growth in adoption from 1976 (28%) to 1984 (68%) (Poister & McGowan, 1984). Another indication of the expanding domain of performance measurement was that adoption was not isolated to a few departments. Local governments developed performance

PAGE 24

monitoring systems across a number of departments, including economic development, elementary and secondary education, higher education, hospital care, mass transportation, police and fire services, public assistance, public health, road maintenance, and solid waste collection (Wholey & Hatry, 1992).

Theories of Performance Measurement

The promise of the positive effects of measurement lies at the heart of contemporary theories of performance management. These theories find their roots in precepts found in New Public Management (NPM) doctrine. The precepts of NPM include the clear assignment of responsibility, the definition of explicit standards and measures of performance, emphasis on results rather than procedures, greater competition in the public sector, and the use of private-sector styles of management (Hood, 1991). One assumption of NPM doctrine is that supplying managers and employees with feedback measures will instigate reflection on performance (Moynihan, 2005, p. 203). This is a model of behavior based on reinforcement learning, or feedback, wherein new behaviors are adopted because, in past experience, they yielded higher payoffs than other behaviors. In feedback models measures take on an ex post function (Pavlov & Bourne, 2011): measure data is collected to enable management to compare actual performance to target performance. Analysis of the causes of variance between actuals and targets enables managers to "know what is working and what is not working," which, in theory, leads to reallocation of resources aimed at reducing the variance.
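To make the ex post feedback logic concrete, the sketch below compares hypothetical actuals against targets and flags large variances. The measure names, target values, and the 10% tolerance are illustrative assumptions and are not drawn from the study.

```python
# Minimal sketch of the ex post feedback loop described above: compare actual
# performance to target performance and flag measures whose variance might
# prompt a reallocation decision. All names and numbers are hypothetical.
targets = {"pothole_repair_days": 5.0, "park_inspections_per_month": 40.0}
actuals = {"pothole_repair_days": 7.5, "park_inspections_per_month": 38.0}
TOLERANCE = 0.10  # assumed threshold: variances beyond 10% of target get flagged

for measure, target in targets.items():
    variance = actuals[measure] - target
    status = "needs attention" if abs(variance) / target > TOLERANCE else "on track"
    print(f"{measure}: target={target}, actual={actuals[measure]}, "
          f"variance={variance:+.1f} ({status})")
```

In the feedback model described above, the analysis does not stop at flagging a variance; the manager then asks why the variance arose and whether resources or measures should change.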

PAGE 25

In such models, the supply of performance data is not just a necessary condition for use but also a sufficient one. Such a theory would predict an increase in measurement-driven decision making as the supply of performance data increases. Yet, empirical examination of the practice of performance measurement has found that the presence of processes and systems for the collection and retrieval of information does not necessarily lead to actual use of the information in decision making (Schmidle, 2011). This gap between theory and practice has encouraged empirical research that examines the factors impacting the use of performance measures. Factors found to positively impact performance data use include the presence of an open, innovative, and risk-taking culture (Moynihan & Landuyt, 2009) and routines that promote the examination and interpretation of performance information (Moynihan, 2005). A US Government Accountability Office survey of mid- and upper-level managers (US GAO, 2014) found statistically significant relationships between the use of performance data and managerial engagement, the availability of sufficient information on measure validity, training in the use of performance information to make decisions, leadership commitment, and effective communication of performance information. Kroll's review of empirical studies that have examined drivers of performance data use identifies factors that have shown a positive impact, including measurement system maturity, stakeholder involvement, leadership support, support capacity, innovative culture, goal clarity, attitudes towards performance measures, networking behavior, and general political support, along with organization size, financial distress, familiarity with performance measures, and hierarchical position (Kroll, 2015, p. 474). Driving the evolution of performance measurement theory is the accumulation of evidence from empirical studies examining the factors that impact performance measurement.

PAGE 26

Strict feedback learning models are no longer sufficient explanations of measurement development and use. Instead, new frameworks that accommodate contingencies such as organizational culture, goal diversity, and leadership, among others, are evolving. Contemporary literature reveals two theoretical frameworks of note. The first framework adopts organizational learning theory and comprises structural and cultural approaches to understanding how performance measurement affects organizational performance (Popper & Lipshitz, 1998; Moynihan, 2005). Structural approaches examine the organizational learning mechanisms that enable organizations to collect, analyze, store, disseminate, and use information (for review, see Bapuji & Crossan, 2004). Cultural approaches examine organizational shared values and social practices that enhance or inhibit the development and use of performance measures (Schein, 2010; Popper & Lipshitz, 1998). The second framework adopts routine theory and is concerned with understanding how performance measurement impacts organizational performance by affecting the dynamics of organizational processes (Pavlov & Bourne, 2011). Routine-centric models emphasize the distinction between the abstract idea of a routine and its expression in concrete action, and how performance measurement affects the interaction between these abstract and concrete levels (Pavlov & Bourne, 2011, p. 111). Empirical studies examining measurement adoption in public organizations found that governments were increasingly reporting on indicators (Epstein, 1992) and were developing performance monitoring systems across a number of departments (Wholey & Hatry, 1992). Interestingly, empirical studies also revealed a gap between the adoption of measures and the implementation or actual use of the measures (de Lancer Julnes, 1999; Berman, 2002; Moynihan, 2008). That is, while many public entities claimed to be adopting performance measurement,

PAGE 27

there was evidence that the data, once collected, was then not used in organizational decision processes. For example, in a survey of 25,000 municipalities, Streib and Poister (1999) found that while performance measurement initiatives did appear to lead to changes in manager accountability and employee focus on organizational goals, they had much less of an impact elsewhere (p. 119).

Political Models of Performance Measurement

The gap between the collection of performance measurement data and the use of that data for decision making is the underlying concern of scholarship examining the development and use of performance measurement in political environments. These scholars note that public agencies are marked by multiple values (Radin, 2006), goal ambiguity (Moynihan, 2015), and stakeholders with competing interests (Moynihan, 2008). The models of performance measurement in this tradition take a more subjective approach to measurement and view data as being socially constructed and shaped by the circumstances of its collection and the experiences and motivations of the individuals using it (Moynihan, 2008; Hood, 2012; Moynihan, 2015). De Lancer Julnes and Holzer (2001) developed a model of performance measurement that distinguishes between rational and political forces of organizational change. Rational forces follow a logic of consequences, while political forces reflect conformance with societal and political norms and with various internal and external interests. De Lancer Julnes and Holzer (2001) found that whether the political or the rational model tends to dominate

PAGE 28

depends on the phase of the performance measurement cycle in which one is operating. That is, rational models tend to work better when examining the forces that drive the design and adoption of performance measurement processes, while political models tend to work better when examining the forces that drive the use of performance measures by organizational actors. Moynihan (2008) developed an interactive dialogue model of performance information use that integrates ideas from the organizational learning, political, and routine theories just mentioned. This model emphasizes the political nature of performance information: it is not objective but rather is selected and presented by advocates seeking to persuade others (Moynihan, 2008, p. 16). Moynihan argues that the development and use of data does not resemble a consensus-driven rational approach to decision making, but more closely resembles an interactive dialogue where actors use data to support their arguments and persuade others (Moynihan, 2008, p. 18). These dialogues can foster learning forums, which in turn foster organizational learning: the ability to change existing organizational processes for the purposes of performance improvement, learning ways to improve organizational capacity, or challenging the basic underlying goals of the organization. Learning forums are routines focused on solution seeking, where actors collectively examine information, consider its significance, and decide how it will affect future action (Moynihan, 2008, p. 167).

Conclusion

There is a growing acknowledgement that performance measurement is, in part, a political exercise. Yet there is a distinct lack of positive political models of performance measurement. Contemporary models of performance measurement tend to take a normative approach that emphasizes the positive consequences of rational and objective operational

PAGE 29

performance measurement routines and the negative disruption caused by political interference. Though the public administration literature has a rich history of political models of bureaucratic behavior, these political models have not yet been fully integrated into the performance measurement literature. This study addresses this omission by putting political stakeholders at the center of its theoretical model. In so doing, it departs from scholarship that views political involvement in performance measurement as a disruptive force. Instead, it embraces the political nature of performance measurement and attempts to progress towards an unbiased understanding of political intervention. The literature review reveals that contemporary performance measurement research is concerned with the gap between organizational adoption of performance measurement practices and the implementation or actual use of the measures to increase organizational effectiveness. This study places itself squarely into research regarding the performance measurement use gap. Specifically, it contributes to the literature examining performance measurement in local government. The literature reveals a number of studies regarding performance measurement use in federal agencies. There has been much less of a focus on city governments. Given the level of innovation surrounding big data and data analysis in cities (e.g., Wood, 2015; Mayer-Schönberger & Cukier, 2013), there is reason to believe that local government is fertile ground for performance measurement research. Finally, this study treats measurement development as an ongoing change process. This focus zeros in on why managers decide to add new measures or

PAGE 30

revise existing measures. That is, it unpacks the adoption-use approach and treats measurement development as an ongoing, iterative process. Using a measurement development lens thus creates a focus on the performance measurement processes that fall between measurement adoption and measurement use. While these three processes of adoption, development, and use are intertwined, this researcher believes that breaking out development as a unique area of study will add valuable insight into our understanding of why certain public organizations are able to develop effective performance measurement programs.

PAGE 31

CHAPTER III
THEORY AND HYPOTHESES

Introduction and Research Question

A principal problem under examination by performance measurement scholars is why different organizations have variable success with regard to implementing effective performance measurement systems (de Lancer Julnes, 1999; Berman, 2002; Moynihan, 2008). Empirical and theoretical progress has been made by scholars attempting to understand this effectiveness gap. At the empirical level, there is a growing list of factors known to impact the adoption and use of performance measures in decision making. At the theoretical level, scholars are moving beyond models that assume rational, goal-driven behavior and are adopting new models that incorporate political and other normative factors. In addition, theories of performance measurement use are moving beyond static models that view the adoption and use of performance measurement as singular events, and are instead beginning to view performance measurement as a dynamic system that co-evolves alongside ever-changing public and policy-making sentiments. As a result, scholars are developing an increasingly detailed and nuanced understanding of what influences the development of effective performance measurement systems. Effective performance measurement systems require relevant and usable measures, yet scholars have little understanding of the factors that contribute to the development of usable measures. While early scholarship understood that the development of a performance measurement system was an important precursor to the actual use of performance measures, the focus was at the program level. That is, it was about the development of performance measurement programs, not the development of individual performance measures. Studies have

PAGE 32

explored factors that moderate the use of measures, including stakeholder influence (Moynihan & Hawes, 2012); leadership commitment, training, and effective communication (US GAO, 2014); the presence of an open, innovative, and risk-taking culture (Moynihan & Landuyt, 2009); measurement system maturity; goal clarity (Kroll, 2015); and the availability of sufficient information on measure validity (US GAO, 2014). Thus, while we have developed an understanding of why managers might choose to use performance measurement, we know much less about the processes that underlie the actual development of the measures themselves. This dissertation focuses on this gap. It is guided by the question: How do political and operational stakeholders impact the development of performance measures in public organizations? This question has both descriptive and associational components. First, there is limited scholarship examining the processes that managers employ to review and revise the performance measures they are using in their decision making and operational routines. A descriptive investigation of these processes is thus a primary contribution of the study. Second, the study examines the nature of the relationship between measurement development and various stakeholders. Extant theory has proven inadequate in trying to explain why some public managers have been able to develop insightful and effective performance measures while others develop measures that are ineffective or contribute minimally to organizational effectiveness. This study attempts to bridge this theoretical gap by proposing a model describing the influence of political stakeholders on the measurement development decisions of public managers. In so doing, it advances an explanation for why certain public management settings are amenable to innovation regarding measurement development.

PAGE 33

Theoretical Model

This study argues that agencies gravitate toward different performance measurement systems based, in part, on the influence of political stakeholders. Specifically, it examines the contention that influence by political stakeholders leads to the development of new measures and changes to existing measures, while influence by operational stakeholders leads to a status quo focus on existing measures. In addition, the influence of stakeholders on measurement development is affected by the structure of the organizational performance measurement processes. The remainder of this section develops the proposed theoretical model by exploring each of the model elements in turn. First, it examines the concept of measurement development and deconstructs the concept into observable events that will serve as some of the study's variables. Next, it considers the political and operational stakeholders that influence public managers as they make decisions regarding measurement development. For each stakeholder, a hypothesis is suggested regarding the nature of their influence on measurement development. The result is a model demonstrating how the performance measurement decisions of public managers change depending on operational and political stakeholder influence.

Measurement Development

Measurement development is a concept describing the actions and outputs of public managers as they respond to the performance measurement demands of various stakeholders. This study defines measurement development in terms of two constructs: measurement development processes and products.

PAGE 34

Figure 1 shows the dimensions of performance measurement development. Measurement development processes are the activities that facilitate performance measurement in public organizations. These processes might include performance measurement review meetings, periodic planning meetings, or budgeting sessions wherein departmental performance is reviewed. Measurement development products are the outputs of the development processes. Specifically, they are the set of performance measures that are developed and used by public managers. The constructs are defined in terms of the variables performance measurement processes and measurement variation. Performance measurement processes are the processes, or mechanisms, that facilitate performance measurement within public organizations. Measurement variation is a variable that captures the creation of new performance measures, changes to existing measures, or the removal of measures; in short, it measures changes to an organization's performance measures.

Figure 1. Measurement development is defined in terms of processes and products. Measurement variation is the dependent variable (Y). Performance measurement processes are the causal mechanisms (M) that connect stakeholder influence to measurement variation.

Measurement variation is the dependent variable (Y) for this study. It is operationalized as a factor measuring the presence of new performance measures, changes to existing measures, or the removal of measures. Measurement variation may be identified directly. For example, city planning documents or performance dashboards may explicitly list

PAGE 35

measures that have been added or updated. Measurement variation may also be identified through interviews with city employees. Performance measurement processes are examined later in this chapter. These processes serve as causal pathways (Gerring, 2012) connecting stakeholder influence to measurement development. As such, performance measurement processes serve as a mechanism (M) for this study. Measurement variation may be viewed as an innovative activity that involves experimentation with new approaches towards technologies or business processes. A new measure may provide an additional level of accountability to elected officials and citizens (Wang & Berman, 2001), or help build support and secure funding for departmental efforts (Berman & Wang, 1999). Yet, it would be a mistake to view measurement variation as an unqualified good. Developing new measures costs managers both time and effort, and there is a level of uncertainty regarding the returns on that effort. Time spent on measurement development is time not spent on leveraging existing competencies and working towards existing goals. In addition, the motivations behind the drive toward measurement variation must be considered. Measurement variation in response to political arguments or turf wars may encourage non-purposeful (Moynihan, 2009) and dysfunctional uses of performance measurement. Lastly, organizations can fall into a failure trap wherein inexperience with a new innovation leads to failure, which leads to further innovation and more failure (March, 2003).

Stakeholders

A stakeholder is any group or individual who can affect, or is affected by, the achievement of an organization's objectives. Attention to stakeholders is important to ensure the long-term viability of organizations (Bryson, Gibbons, & Shaye, 2001) as well as policies, plans, and programs (Kingdon, 1984; Baumgartner & Jones, 1993). In addition, identifying stakeholders and garnering their support can improve organizational outcomes that increase public value (Bryson, 2004). To identify the key stakeholders and to understand how they influence the performance measurement decision making of public managers, organizational and public administration literatures were reviewed. A broad history of research has examined the forces that influence the decision making behavior of public managers. Two consistent findings from this literature inform this study. First, in one of their roles, public managers serve as technocratic organizational actors who are responsible for delivering services in an efficient and effective manner. In this operational role, the decision making of public managers may be understood in terms of a rational logic of consequence where choices anticipate future results or preferred ends (March, 1994; Frederickson & Smith, 2003). Second, in public organizations decision making must be placed within the context of a political framework that acknowledges the role of powerful internal and external groups (de Lancer Julnes & Holzer, 2001). In this political role, the decision making of public managers can be understood as a response to political pressure rather than a rational response to the operational production demands of the organization.

Political and operational factors influence public management decision making. These factors may include rules or regulations, mandated processes, performance requirements, and demands for accountability and transparency. As such, we look to the city-level political and operational realms in our search for significant stakeholders. Two key stakeholders are identified in Table 1.

Table 1. Operational and political stakeholders that influence management behavior.
Political stakeholders: City elected officials, including the Office of the Mayor or City Council.
Operational stakeholders: Management and non-management internal employees, including city departmental leadership and the frontline workers who implement city agency programs and services.

The dual operational and political roles of public managers present us with a framework for developing a model of public management decision making. This model frames public management decision making as a tension between two logics. The operational logic views management decision making as a rational response to clear goals and strong systems of accountability. The political logic views management decision making as a political response to competing interests and external political persuasion. Adopting this political/operational lens provides the study with a literature that has studied how political and operational stakeholders influence management decision making.

Political Stakeholders

This study is concerned with how political stakeholders impact the development of performance measures in public organizations. Studies have demonstrated the influence of political stakeholders on management priorities (Nicholson-Crotty & Nicholson-Crotty, 2004), public organization innovation (Borins, 2002), and management goal setting. Early theories of
political control of bureaucracy viewed public organizations as being filled with self-interested bureaucrats who resisted policymakers in order to ensure their own survival (Downs, 1967). Related work argued that interest groups control the actions of government agencies through their relationships with congressional committees (Lowi, 1969). Other research found that external control was more diffuse and that a diversity of consumer advocacy and other public interest groups exert influence over the decision making of public agencies (Kingdon, 1984; Sabatier, 1988), even in the presence of powerful opposing interests (Nicholson-Crotty & Nicholson-Crotty, 2004). Within the performance measurement literature, scholars are beginning to integrate political factors into their models. In these models, measures are viewed as being inseparable from the incentives, interests, and politics of the performance measurement environment. That is, measures are, at least in part, socially constructed (Moynihan, 2008) and thus represent the intentions of the managers using the measures as much as they represent the outputs or outcomes being measured. De Lancer Julnes and Holzer (2001) developed a model of performance measurement adoption and use that separates the political from the rational factors. They found that political factors tend to impact how measures are used while rational factors drive adoption of measures (De Lancer Julnes & Holzer, 2001). Moynihan (2008) developed a more micro-level account of performance information use. It emphasizes the political nature of performance information: it is not objective but rather is selected and presented by advocates seeking to persuade others (Moynihan, 2008, p. 16). Moynihan argues that the development and use of data does not resemble a consensus-driven rational approach to decision making, but more closely resembles an interactive dialogue where managers use data to support their arguments and persuade others (Moynihan, 2008, p. 18).

The growing awareness of the political nature of performance measurement introduces us to our first stakeholder of interest. Political stakeholders serve as the primary stakeholders for this study. The fundamental relationship of interest is shown in Figure 2. For the purposes of this study, political stakeholders are city elected officials, including members of the Office of the Mayor or City Council. As noted previously, the measurement development products, or outcomes, are conceptualized in terms of measurement variation.

Figure 2. The relationship of interest: the impact of political influence on measurement variation.

Political influence is the primary independent variable (X1) for this study. It is a factor measuring political stakeholders' commitment to performance measurement, use of performance measures in decision making, and participation in the development of performance measures. Evidence of political influence over measurement variation may be found in city planning documents and may also be identified through interviews with city employees.

Operational Stakeholders

The contrast between operational and political forces is a central concern of public administration scholars. Kettl (2000) notes that an understanding of public administration requires an understanding of its political environment (p. 30). In other words, the political environment is not separable from the administrative function and must be accounted for in models of public management behavior. Performance measurement studies have revealed an inherent tension between the use of performance measurement to improve both political accountability and operational performance (Chan & Gao, 2009; Halachmi, 2005; Bromberg, 2009). Scholars exploring operational factors have discovered a
number of organizational elements that impact performance measurement (e.g., Kroll, 2015). Other scholarship has examined the political influences on performance measurement (e.g., Radin, 2006; Moynihan, 2008). These scholars have begun to understand how some of the uniquely political institutions surrounding public management decision making affect the development of performance measurement systems. The centrality of the tension between administrative and political forces in theories of public administration is the principal motivation for including operational stakeholders as the second dimension of the model. Operational stakeholders are management and non-management internal employees who work with city departments or agencies. This study is particularly interested in how operational stakeholder influence interacts with the influence political stakeholders have over public managers' decisions to develop performance measures. Operational influence on performance measurement is a factor measuring operational stakeholders' use of performance measures in decision making and participation in the development of performance measures. In Figure 3, the model is expanded to include the influence of operational stakeholders.

Figure 3. Operational stakeholders serve as a secondary independent variable (X2) that affects the measurement variation decisions of public managers.

Operational influence serves as a secondary independent variable (X2) that directly affects the measurement development decisions of public managers. It is thus likely to
compromise an unbiased estimate of the influence of political stakeholders (X1) on measurement development (Y) and must be included in the model.

Performance Measurement Processes

How does political demand for performance measures translate into measurement variation? To effect organizational change, political stakeholders must propagate their influence to public managers. This section explores the processes that underlie the stakeholder influence model defined above. Organizational processes include decision making, evaluation, communication, conflict resolution, and change and innovation (Rainey, 2009). Within organizational processes, groups and individuals respond to goals and incentives presented to them by producing the products and services that result in effective performance (Rainey, 2009). Processes serve as mechanisms. In model terminology, a mechanism may be understood as a process by which a causal factor of theoretical interest is thought to affect an outcome (Gerring, 2012). Recall that measurement development is defined in terms of development processes and development products. In model terms, this means that measurement development consists of the mechanisms underlying the causal relationship (development processes) in addition to the outcomes that are facilitated by the mechanism (development products). This section of the study is concerned with performance measurement processes and their effect on measurement variation in relation to the stakeholders. These are the processes, or mechanisms, that promote performance measurement within public organizations. In the context of this study, these are the mechanisms that facilitate the relationship between stakeholder influence and measurement variation.

In Figure 4, the model is expanded to include performance measurement processes.

Figure 4. Performance measurement processes serve as mechanisms by which stakeholder influence affects measurement variation.

This section focuses on the combination of two processes (mechanisms) derived from the public administration literature. The first are the processes of exploration and exploitation. This study uses exploration and exploitation to describe the processes of organizational and individual adaptation and change. The second is the process of goal setting. This study views the goal setting process as the primary mechanism for translating stakeholder influence into public manager action. Goal setting and exploration/exploitation processes are two drivers of organizational change. The characteristics of the goal setting process affect how managers weigh their exploration/exploitation decisions. March (2003) argues that a fundamental requirement for organizations is to maintain a balance between exploration and exploitation. Because both activities are essential for organizations yet compete for scarce resources (March, 1991), such as time and money, managers face a tension as they try to balance them. In order to successfully manage this tension, many organizations focus on using goal setting to ensure that sufficient attention and resources are given to both explorative and exploitative activities (Stetler & Magnusson, 2015).

The following sections review the processes of exploration/exploitation and goal setting. Specifically, the literature is reviewed to understand how goal setting impacts the exploration/exploitation decisions of public managers.

Exploration and Exploitation

Organizational adaptation and change have been studied from a number of different perspectives (see, e.g., Van de Ven & Poole, 1995; March, 2003; Nutt & Wilson, 2010). A central concern of studies of change processes is the relation between the exploration of new possibilities and the exploitation of existing knowledge (March, 1991; Schumpeter, 1934). Organizational theories of exploration and exploitation present us with a rich literature that has examined adaptation and change within organizations. Specifically, the explore/exploit literature has studied the factors that drive individuals to make decisions about whether to pursue exploration or exploitation actions. This makes it particularly well suited to the examination of the measurement development decisions of public managers. The essence of exploration activities is creating variety in experience by moving beyond a manager's existing knowledge base (Mom, Van Den Bosch, & Volberda, 2007). Exploration involves searching for alternative organizational opportunities, structures, and routines (March, 1991; Zollo & Winter, 2002); experimenting with new approaches towards technologies, business processes, or markets (McGrath, 2001); innovating and adopting a long-term orientation; and reconsidering existing beliefs and decisions (Floyd & Lane, 2000, in Mom et al., 2007). The essence of exploitation activities is creating reliability in experience by deepening a manager's existing knowledge base. Exploitation involves activities such as using and refining existing knowledge (Levinthal & March, 1993), applying
and extending existing competencies (March, 1991), and elaborating on existing beliefs and decisions (Floyd & Lane, 2000, in Mom et al., 2007). This study adopts an exploration/exploitation lens for examining the measurement development decision making of public managers. That is, it examines the factors that drive exploration decisions in managers, resulting in the search for and development of new measures or changes to existing measures, and the factors that drive exploitation behavior, resulting in an increased focus on using existing measures.

Goal Setting

Goal setting processes are concerned with establishing organizational goals and assessing how well the organization has attained those goals (Daft, 2007). A goal is the object or aim of an action, usually to be attained within a specified time limit (Locke & Latham, 2002). The premise of goal setting theory is that goal setting can affect the motivation (Chun & Rainey, 2005; Jung, 2011; Rainey & Jung, 2015; Pandey & Wright, 2006) and innovation (Stetler & Magnusson, 2015) of organizational members and can thereby impact overall organizational performance (Locke & Latham, 2002). Goals affect organization members through various mechanisms: they direct attention and effort toward goal-relevant activities; they have an energizing function wherein high goals lead to greater effort than low goals; they affect persistence; and they affect action by contributing to the discovery and use of task-relevant knowledge and strategies (Locke & Latham, 2002). Goal setting processes differ across different levels of the organization. Political goal setting may be more focused on agenda setting, such as promoting political agendas. Departmental leaders use goals to establish departmental priorities and ensure employees are devoting their time and energy to tasks that align with the organizational mission. Public
managers and frontline workers are largely responsible for converting goals into tasks and executing those tasks. Locke (1996) notes that high goal specificity, achieved mainly through quantification or enumeration, reduces variance in performance. In addition, it has been shown that the use of clear project goals increases employee attention and motivation and generally affects creative performance (Stetler & Magnusson, 2015). Goals that are both specific and difficult lead to the highest performance (Locke, 1996). Organizational actors are more likely to commit to goals when they are convinced that the goal is important and attainable (Locke, 1996), and feedback during the process has been found to further improve performance (Stetler & Magnusson, 2015). Of particular concern to scholars examining organizational effects of goal setting are the impacts of goal ambiguity and goal conflict. Goal ambiguity refers to the extent to which an organizational goal or set of goals allows leeway for interpretation. Goal ambiguity is generally associated with lower organizational performance (Jung, 2014) and has been shown to negatively impact managerial effectiveness and employee motivation (Chun & Rainey, 2005; Davis & Stazyk, 2016). A number of public administration scholars agree that public sector organizations have more goal ambiguity than their private sector counterparts (Rainey, 2009; Chun & Rainey, 2005). Sources of goal ambiguity in government organizations include external political interference from diverse constituencies, interest groups, and authorities; more trade-offs between values; and less availability of clear profit indicators (Rainey, 2010). Yet, to date, little research exists to validate these claims and there is increasing evidence demonstrating that public
sector organizations do not have greater goal ambiguity than private sector organizations (Wilson, 1989; Boyne, 2002; Rainey & Bozeman, 2000). Goal conflict refers to the degree to which individuals feel that performance expectations among multiple goals are incompatible. This occurs when the achievement of one goal is seen as interfering with the achievement of other goals and individuals are not able to make compensatory trade-offs among distinct goals (Simon, 1955; Ethiraj & Levinthal, 2009). Research has demonstrated a negative relationship between goal conflict and performance outcomes (Locke, Smith, Erez, Chah, & Schaffer, 1994). Imposing multiple goals can lead to a performance freeze wherein organizational actors are not able to identify choices that enhance performance (Ethiraj & Levinthal, 2009). In addition, perceived goal difficulty can increase perceived goal conflict, which can, in turn, have a negative effect on task performance (Cheng, Luckett, & Mahama, 2007). Public organizations are especially prone to goal conflict. They are often tasked with addressing highly complex (wicked) policy problems (Rittel & Webber, 1973), but they also operate in an environment characterized by a multiplicity of participants and perspectives (Pressman & Wildavsky, 1984). A typical mechanism of political conflict resolution is to pass on intractable conflicts for resolution (or continued irresolution) to the administrative level (Lipsky, 2010).

Summary

This section unpacked the performance measurement processes, or mechanisms, that are used by different stakeholders to influence the measurement variation decisions of public managers. Specifically, it used the ideas of exploration and exploitation to describe the decision making processes of public managers.
Further, it used goal setting as the primary mechanism for translating stakeholder influence into public manager action. Three aspects of goal setting were reviewed: goal alignment, goal ambiguity, and goal conflict. While all of these different aspects can be present across all levels of an organization, the circumstances of different stakeholders can create an operating environment that emphasizes one aspect over another. Goal ambiguity may be emphasized in the goal setting environment of political stakeholders, while the goal setting environment of departmental leadership may be characterized by an emphasis on goal alignment. Similarly, goal conflict may be emphasized in the goal setting environment of frontline workers. A premise of this study is that an emphasis on different aspects of goal setting affects how stakeholders influence the performance measurement decisions of managers. It remains unclear, however, how these processes affect the influence stakeholders have over the measurement variation decisions of public managers. Under what circumstances does goal setting drive exploration or exploitation? Do different aspects of goal setting strengthen or inhibit political influence over measurement variation? The following section attempts to answer these questions by further exploring the public affairs literature.

Hypotheses

This section more closely examines the operational and political stakeholders that impact public managers as they make decisions regarding measurement variation. The literature is examined to help us understand what previous scholarship would predict regarding each stakeholder's influence on the measurement development decisions of managers. For each factor, a hypothesis is suggested regarding the nature of its influence.

Political Stakeholders

This section explores the effect that political stakeholders have on the exploration and exploitation tendencies of public managers. It draws on goal theory and develops the idea that interference from diverse constituencies, interest groups, and authorities, and the need to make trade-offs between values, incentivizes political leaders to develop ambiguous goals. This goal ambiguity creates an operating environment that encourages public managers to diversify the measures they use for reporting. Public sector decision making takes place under conditions of greater openness to the participation and influence of the media and other political officials (Ring & Perry, 1985, in Rainey, 2009). In addition, public organizations depend directly on the budgeting priorities of policy makers and indirectly on citizen sentiment for their resources. Moreover, because public organizations lack market incentives and information, they are subject to greater external oversight, intensifying relations with political authorities, the media, and interest groups. Political agendas are characterized by multiple values (Radin, 2006) and stakeholders with competing interests (Moynihan, 2008), which can lead to goal ambiguity (Moynihan, 2015). Goal ambiguity refers to the extent to which an organizational goal allows leeway for interpretation (Chun & Rainey, 2005). Sources of goal ambiguity in government organizations include external political interference from diverse constituencies, interest groups, and authorities; more trade-offs between values; and less availability of clear profit indicators (Rainey, 2010). Goal ambiguity exists because of the pluralistic institutional and political environments of government agencies. In these environments the multiple authorities and actors in the system do
not necessarily agree on the goals and performance criteria for public organizations (Rainey, 2009). Where goals and values cannot be agreed upon, it is even more difficult to agree upon specific sub-goals and the indicators that will be used to measure progress against those goals. As such, goals originating from political stakeholders often remain high level and ambiguous, creating leeway for interpretation at the operational level. Performance information provides a language for communicating with external stakeholders (Moynihan, 2008, p. 7) and is one of the primary means for communicating agency effectiveness and efficiency. As a result, public organizations have become adept at funneling performance information to disparate stakeholders through a variety of communication channels. Public management scholars have noted that goal ambiguity may contribute to exploration in public organizations. The goal expectations of public managers can be ambiguous and conflicting, increasing discretion by reducing control by those with political authority (Lipsky, 2010). This creates a platform for communication and dialogue among stakeholder groups (Jung, 2014), which allows for the interpretive space needed for new and different ideas to emerge (Brun & Sætre, 2009). Other researchers have argued that sustaining certain ambiguities can be useful if novelty and flexibility are project priorities (Brun & Sætre, 2009). Stetler and Magnusson (2015), unpacking the effects of different levels of goal clarity, found that idea novelty increases under conditions of either high or low levels of goal clarity, whereas mid-range levels of goal clarity are related to fewer novel ideas. The participation of multiple constituencies with competing interests in political goal setting can lead to a lack of specification regarding organizational goals. This goal ambiguity gives leeway to public managers to explore new and different ideas. In the context of performance
measurement reporting, goal ambiguity creates a decision making environment that encourages public managers to diversify the measures they use for reporting. This suggests the following hypothesis regarding the relationship between political stakeholders and measurement development:

H1: Political influence is positively related to measurement variation.

Operational Stakeholders

The second goal of the study is to explore the effect that operational stakeholders have on the measurement variation decisions of public managers. This subsection draws on agency theory, theories of information flows, and goal theory to inform hypotheses regarding the influence of operational stakeholders on decision making. Operational stakeholders include departmental leadership and frontline workers. For each stakeholder, a hypothesis is suggested regarding the nature of its influence on the measurement development behavior of managers.

Departmental leadership

Departmental leadership refers to support from senior management within public organizations. Studies have demonstrated the link between leadership support and the use of performance measures (Kroll, 2015), but it is less clear as to how organizational leadership influences the actual development of measures. Two arguments are made regarding how departmental leadership affects exploration and exploitation in public managers. The first argument examines management goal setting from the perspective of agency theory and develops the idea that one objective of leadership is to reduce employee opportunism by collecting information about their preferences and activities. The second argument is grounded in ideas of knowledge flows and makes the case that top-down knowledge flows are principally aimed at increasing reliability and are thus positively related to exploitation.

Many models of political control of bureaucracy take a top-down view of goal setting by examining the political influence on bureaucracy from an agency theory perspective (Pandey & Wright, 2006). Two essential components found in principal-agent models are conflict of interest and information asymmetry (Bendor, 1988). When one party (the principal) delegates work to another (the agent) who performs that work, an agency problem arises when the goals of the principal and agent conflict and it is difficult for the principal to verify what the agent is doing (Eisenhardt, 1989). In this context, the reporting of performance information by the agent provides a governance mechanism to the principal that curbs agent opportunism. The stronger the agency effect, the more aligned are the goals and values of the principal and the agent and the stronger the disinclination for variety-producing activities. Similarly, in multitask principal-agent models, where the principal has several different tasks for the agent to perform, monitoring and incentives direct how agents allocate attention among their various duties (Holmstrom & Milgrom, 1991). In other words, principal monitoring and agent reporting reduce the span of focus of the agent. Consequently, as leaders increase their monitoring and reporting requirements, agents increasingly align their attention with the goals of the principal. Top-down goal setting tends to be oriented towards the alignment of worker behavior with management goals. Goal alignment discourages exploratory behavior by directing learning processes towards a deepening of existing knowledge instead of a broadening of new knowledge. Such alignment steers managers towards deepening existing knowledge and experience, which is typically associated with exploitation, rather than towards increasing the variety of their knowledge and experience (Mom, van Neerijnen, Reinmoeller, & Verwaal, 2015, p. 813; March, 1993).
Subsequently, as individuals deepen their existing knowledge and routinize their processes, they become less inclined to explore new or unexpected ideas (Mom et al., 2015). Goal alignment can also reinforce unwritten norms of conformity, control, and compliance that deter individuals from acting in ways that promote innovative organizational behavior, and studies have demonstrated that autonomy with respect to goals and supervision is associated with more innovative behavior. Therefore, goal alignment, by increasing conformity and compliance, can lead to a decreased willingness to develop new knowledge or invest in more exploratory learning (Mom et al., 2015). Lastly, Mom et al. (2007) argue that top-down knowledge flows in a management hierarchy positively relate to the exploitation activities of lower-level managers but are unlikely to relate to their exploration activities. Because top-down flows of knowledge tend to be confined to vertical chains containing actors with common functional or technical expertise (Gupta & Govindarajan, 1991; Hedlund, 1994), the knowledge itself tends to be narrow in scope and contributes little to the breadth of the recipient's knowledge (Mom et al., 2007). Furthermore, top-down flows of knowledge are generally unambiguous in that they tend to possess a clear and proven understanding of cause-effect relationships, and their relevance to current activities is normally well known (Schulz, 2003). Top-down knowledge thus encourages
the recipient manager to respond to problems in familiar ways, and increases their ability to efficiently perform existing activities (Daft & Lengel, 1986; Galunic & Rodan, 1998). In other words, top-down knowledge flows encourage recipient managers to increase reliability, rather than variety, in experience (Mom et al., 2007). These arguments indicate that the relationship between organizational leadership and lower-level managers is largely top-down. Involvement of departmental leadership tends to reduce employee opportunism and increase reliability, leading to the following hypothesis:

H2: Departmental leadership influence is negatively related to measurement variation.

Frontline workers

Frontline workers, sometimes referred to as street-level bureaucrats in the public administration literature, are the public service workers who interact directly with citizens in the course of their jobs. While the primary function of street-level bureaucrats is to deliver government services, they have substantial discretion in the execution of their work (Lipsky, 2010). The management challenge is thus to provide guidance to workers to encourage compliance with organizational goals. Performance measures are one tool used by managers to get feedback on worker behavior and provide guidance on expected behavior. Performance measures thus play a bidirectional role in the relationship between managers and frontline workers: they flow from the top down in order to provide guidance to the worker, and they flow from the bottom up in order to provide information to the manager.

The argument regarding the influence of frontline workers on the measurement development decisions of public managers is grounded in goal theory and develops the idea that goal conflict, combined with bureaucratic discretion and autonomy, encourages exploratory behavior in frontline workers. Actors in public organizations exercise discretion as they carry out their daily routines (Lipsky, 2010) and see themselves primarily as citizen agents who act in response to individuals and circumstances rather than state agents who use their discretion to make their work easier and more rewarding within the confines of political rules and procedures (Maynard-Moody & Musheno, 2000). From this perspective, bureaucrats are not so much captured by their clients as they are faced with difficult social tasks and limited resources, which require them to cope by exercising some form of control over their work (Lipsky, 2010). There is evidence that public managers respond creatively to these difficult social tasks: studies indicate that citizen demand (Walker, 2014) and the influence of nongovernmental external stakeholders (Wang & Feeney, 2014) positively encourage public sector innovation. In addition, there is a growing trend to rely on citizen input in the development of performance indicators (Heikkila & Isett, 2007). In other words, citizen responsiveness creates an environment where government actors are forced to confront novel situations and conflicting goals, and bureaucratic discretion gives these actors increased opportunity to pursue exploratory behavior. Responsiveness to citizen needs may also encourage explorative behavior because it facilitates new experiences which generate new knowledge. Studies on citizen responsiveness indicate that public managers often see themselves first as service providers, and secondarily as public servants (Gruber, 1987, in Frederickson & Smith, 2003). In these models, agency behavior is driven not as a response to powerful interest groups, but by a need to innovate in order to meet
the service demands of citizens. In addition, the policy problems being addressed by public organizations are often highly complex with uncertain cause-and-effect relationships. When there is uncertainty over what will and will not work regarding program interventions, there is greater room for admitting and tolerating a variety of approaches and techniques (Lipsky, 2010). In addition, as frontline workers respond to these demands, they encounter conflicts between client-centered goals and organizational goals. The need to treat people as individuals is compromised by the need of the organization to process work quickly (Lipsky, 2010). As a result, frontline workers may not always view organizational performance goals as legitimate (Lipsky, 2010). When performance goals are considered illegitimate, workers may ignore the goals or may engage in selective reporting (Lynn, Heinrich, & Hill, 2001; Moynihan, 2008) or manipulation and misrepresentation of measures (Talbot, 2010; Pollitt, 2013). In other words, it is often necessary for street-level bureaucrats to work to maintain and expand their autonomy from organizational goals, which can put them in conflict with management's desire to restrict worker discretion in order to secure certain results (Lipsky, 2010). In response, frontline workers may develop surrogate performance indicators (Lipsky, 2010). Within organizations, multiple and conflicting goals can have a significant influence on organizational attitudes, behaviors, structure, processes, and authority (Rainey & Jung, 2015). The pursuit of diverse goals and responsiveness to diverse interests imposes trade-offs in organizations (Rainey, 2009), and effective workers must balance their work by concurrently managing these diverse goals (Quinn & Rohrbaugh, 1983; March, 1991; Rainey, 2009). The need for flexibility to manage those diverse goals goes against the requirement to adhere to a reduced set of actions dictated by the need to show results against management-directed performance
measures. Given the chance, frontline workers would expand the set of measures beyond those tied to organizational goals to include the client-centered goals they are confronted with. These arguments from the literature suggest that the operating environment of street-level workers is characterized by conflicting demands between organizational goals and client-centered goals and by conflicting service and program demands from citizens. This suggests the following hypothesis:

H3: Frontline worker influence is positively related to measurement variation.

Summary

This chapter developed a theoretical model to address the research question, How do political and operational stakeholders impact the development of performance measures in public organizations? First, it defined the concept of measurement development in terms of development processes and products. These constructs were further defined in terms of the operational variables performance measurement processes and measurement variation. Next, it identified the city-level stakeholders that are likely to influence the measurement development decisions of public managers. Political stakeholders influence public managers through agenda setting and include the Office of the Mayor or City Council. Operational stakeholders include management and non-management internal employees. City departmental leaders influence public managers as superiors in the organizational hierarchy. Frontline workers influence public managers through program-level advocacy and cooperation. It then developed the concepts for describing the performance measurement processes used by different stakeholders to influence the measurement development decisions of public managers. Specifically, it used the ideas of exploration and exploitation to describe the decision making processes of public managers. Further, it explored goal setting as a primary process for translating stakeholder influence into public manager action.
For each stakeholder, a hypothesis was suggested regarding the nature of its influence over measurement variation. A political environment that promotes goal ambiguity makes it likely that political stakeholders will encourage exploratory behavior in public managers, resulting in increased measurement variation. Departmental leadership's emphasis on goal alignment in pursuit of operational efficiency tends to discourage exploration and encourage exploitation behavior in public managers. Lastly, public organizations are often tasked with addressing highly complex policy problems, creating an operating environment prone to goal conflict. In response, frontline workers encourage public managers to expand their working set of measures beyond those focused on operational efficiency to include a wider variety of client-centric goals. The result is a model demonstrating how the influence of political stakeholders over the performance measurement decisions of public managers is confounded by operational stakeholders and is mediated by performance measurement processes. The remaining sections of this dissertation define the research design that is used to test this model.

CHAPTER IV
RESEARCH DESIGN

The goal of the study is to test theoretical propositions regarding the impact of political and operational stakeholders on measurement development. As such, the study is concerned with analytic generalizability: the demonstration of empirical support for the theoretical model and the ability to generalize the theory beyond the cases examined in this study (Yin, 2010). This section summarizes the research design. It begins by describing the empirical context and case study approach. Next, it details the variables and describes the data analysis techniques that will be used. Lastly, it reviews threats to study validity and suggests design treatments to mitigate those threats.

Empirical Context

The research question will be explored using data collected from a study examining social equity measures in local government social equity programs. The National Academy of Public Administration's Standing Panel on Social Equity in Governance offers the following definition of social equity: The fair, just and equitable management of all institutions serving the public directly or by contract; the fair, just and equitable distribution of public services and implementation of public policy; and the commitment to promote fairness, justice, and equity in the formation of public policy (National Academy of Public Administration, n.d.). While public administration scholars have long been aware of the importance of equity, it was the widespread racial and class conflicts of the 1960s that brought it to the forefront of the field. During this period, questions were raised as to whether the results of governmental policy and the work of public administrators implementing those policies were fair and equitable. It was also during this
period that the phrase social equity became a prominent feature of public administration, with an attendant set of concepts and a cluster of shared values (Frederickson, 2005). As a result, a growing number of government agencies emphasize equity goals, and many cities treat equity as a pillar of sustainability alongside economy and environment. Nonetheless, social equity has historically been, and continues to be, the most neglected of the three (Svara et al., 2014). Yet, recent events have created a renewed focus on social equity. Examples include the Occupy Wall Street movement, which was organized around the message that the economic system is rigged for the very few while the majority continue to fall further behind (Levitin, 2015). Similarly, the Black Lives Matter movement was organized around the message of racial bias in the use of force by the police. These events have driven an upsurge in interest in social equity in city governments and a corresponding interest in the development of measures to track progress against social equity goals. Evidence suggests that cities are having mixed results when it comes to the development and use of equity measures (Svara et al., 2014). One reason for the gap in the development of equity measures may be definitional ambiguity. In the early years of applying concepts of social equity to public administration, emphasis was on issues of race and gender in employment, democratic participation, and service delivery (Frederickson, 2005). Over the years, the meaning of social equity has broadened considerably (Frederickson, 2005, p. 33). Another reason for the lack of equity measures may be the disconnect between the indicators developed and used by scholars and the operational indicators used by practitioners (Jacob & Larson, 2015). While a number of established econometric
measures of equity are regularly used by scholars, these measures are rarely used by practitioners looking to evaluate the social equity performance of their agencies (Jacob & Larson, 2015). Instead, practitioners tend to use functional measures of procedural fairness, access to services, quality of services, and overall outcomes (Gooden, 2006). Government agencies and public administration scholars thus find themselves at an intersection of high demand for social equity measures and little consensus as to what those measures should be or how to develop them. For the purposes of this study, this intersection provides a unique opportunity to observe how measures evolve within a shifting landscape. That is, it enables us to observe how social equity measures evolve alongside new city-level social equity programs and existing performance management programs.

Variables

The model under investigation by this study is shown in Figure 5. The central goal is to understand the impact of political influence (X1) over the measurement variation decisions (Y) of public managers. This study is also interested in how operational stakeholders affect measurement variation. Agency leadership influence (X2) and frontline worker influence (X3) are thus included as additional independent variables.

Figure 5. The model under investigation by this study. It includes measurement variation as the dependent variable and stakeholder influence as the independent variables.

Measurement variation, defined in Table 2, is the dependent variable for the model.
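Expressed in conventional notation, the relationship sketched in Figure 5 can be written as a simple additive specification. This is an illustrative form only; the coefficients and error term below are notational placeholders, not estimates or an estimation equation reported by the study:

\[
Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \beta_3 X_{3,i} + \varepsilon_i
\]

where, for city i, Y is measurement variation, X1 is political (Mayor or City Council) influence, X2 is agency leadership influence, and X3 is frontline worker influence. The performance measurement processes (M) discussed earlier are treated as the mechanisms linking the X variables to Y rather than as an additional term in this sketch.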

It is operationalized as a factor measuring the presence of new performance measures, changes to existing measures, or the removal of measures. The level of analysis for the study is U.S. cities. What does city-level measurement variation mean? Decisions to develop measures may occur at the level of an individual manager, or may occur as part of a group decision. Thus, while it is understood that the measurement development decisions are made by individuals or groups, it is city-level artifacts, that is, evidence of new or updated measures, that are the focus of the study.

Table 2. Definition of measurement variation.
Measurement variation: A factor measuring the addition of new measures or the change or elimination of existing measures. Measurement variation may be identified directly in city documents or on performance dashboards.

Measurement variation was measured directly through examination of city planning documents and performance dashboards, which explicitly list measures that have been added or updated. In addition, closed-ended Likert-scale questions and open-ended interview questions, shown in Table 3, asked about specific examples of measures that were recently added or eliminated. Closed-ended interview questions, explained in more detail below, asked interviewees to answer questions according to a scale from one to five (1 = To no extent; 5 = To a very great extent). The quantitative variable rankings are based on responses to the closed-ended interview questions. Rankings are an average of the closed-ended questions associated with the variable. If a closed-ended question was not answered by an interviewee, it was excluded from the calculation. For example, if an interviewee rated "My city/department regularly adds new measures" at 3 and "My city/department regularly evaluates the effectiveness of existing measures" at 4, the final quantitative ranking would be 3.5.
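As a minimal sketch of the ranking calculation just described, the following Python fragment averages the answered closed-ended items for a single variable and skips unanswered ones. The ratings shown (3 and 4) are illustrative values only, not responses from an actual interviewee.

    def quantitative_ranking(responses):
        """Average the answered 1-5 Likert items for one variable.

        `responses` maps each closed-ended question to a rating, or to
        None if the interviewee did not answer it; unanswered items are
        excluded from the average, as described in the text.
        """
        answered = [r for r in responses.values() if r is not None]
        return sum(answered) / len(answered) if answered else None

    # Illustrative ratings for the two measurement variation items.
    example = {
        "My city/department regularly adds new measures.": 3,
        "My city/department regularly evaluates the effectiveness of existing measures.": 4,
    }
    print(quantitative_ranking(example))  # 3.5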

Table 3. Interview questions used to identify city-level measurement variation.
Closed: My city/department regularly adds new measures.
Closed: My city/department regularly evaluates the effectiveness of existing measures.
Open: Can you give me an example of a performance measure that was recently added?
Open: Can you give me an example of a measure that was eliminated?

This study examines the influence of three categories of stakeholders. Political stakeholders are city elected officials, including members of the Office of the Mayor or City Council. Two categories of operational stakeholders are included. Agency leadership are the senior executives from city departments. Frontline workers are the street-level program and service managers and employees. They are the professionals who interact directly with citizens on behalf of the state (Lipsky, 2010). This study is interested in how these stakeholders influence the measurement variation decisions of public managers. Political and operational influence thus serve as the independent variables for the study. Their definitions are listed in Table 4.

Table 4. Independent variables and associated interview questions.
Political influence (Mayor or City Council): A factor measuring demand for performance measurement and social equity by elected officials. Questions measuring this factor include: Does the Office of the Mayor or City Council demonstrate a strong commitment to performance measurement?
Operational influence (Agency leadership): A factor measuring demand for performance measurement and social equity by organizational leaders. Questions measuring this factor include: Is there an internal policy requiring the use of equity measures? Does agency leadership use equity measures to guide decision making?
Operational influence (Frontline workers): A factor measuring demand for performance measurement and social equity by street-level workers. Questions measuring this factor include: Are street-level workers aware of the social equity goals? Do they use equity measures to guide their work?

The political stakeholders of interest for the study are city-level elected officials. Depending on the political structure of the city, these may be the Office of the Mayor or the City Council. Political influence is thus operationalized as the variable Mayor or City Council. This variable serves as the primary independent variable. Evidence of Mayor or City Council influence over measurement variation was found in city planning documents and may also be identified through interviews with city employees. In addition, closed-ended Likert-scale questions and open-ended interview questions, shown in Table 5, asked about the level of commitment to performance measurement, use of performance measures in decision making, and participation in the development of performance measures.

Table 5. Interview questions used to identify influence of the Mayor or City Council.
Closed: The office of the Mayor (or City Council) demonstrates a strong commitment to performance measurement.
Closed: The office of the Mayor (or City Council) uses performance measures to guide decision making.
Closed: The office of the Mayor (or City Council) is a source of new performance measures.
Closed: The office of the Mayor (or Council) influences agency managers to develop new measures.
Closed: The office of the Mayor (or Council) evaluates and eliminates existing performance measures.
Closed: The office of the Mayor (or Council) influences agency managers to evaluate and eliminate existing measures.
Open: Is there an external policy requiring the use of performance measures?

The operational stakeholders of interest for the study are agency leadership and departmental frontline workers. Operational influence is thus operationalized as the variables agency leadership and frontline workers. Agency leadership is a variable that measures the extent of support and involvement with the development of performance measures by city departmental leaders.

Evidence of agency leadership influence over measurement variation may be found in city planning documents and may also be identified through interviews with city employees. In addition, closed-ended and open-ended interview questions, shown in Table 6, asked about the level of commitment to performance measurement, use of performance measures in decision making, and participation in the development of performance measures:

Table 6. Interview questions used to identify influence of agency leadership.
Closed: My agency's top leadership demonstrates a strong commitment to performance measurement.
Closed: My agency's top leadership is a source of new performance measures.
Closed: My agency's top leadership evaluates and eliminates existing performance measures.
Open: Is there an internal agency policy requiring the use of performance measures?

Frontline workers is a variable that measures the extent of involvement with the development of performance measures by public managers and public service workers who interact directly with citizens in the course of their jobs. Evidence of frontline worker influence over measurement variation may be found in city planning documents and may also be identified through interviews with city employees. In addition, closed-ended Likert-scale questions, shown in Table 7, asked about the level of commitment to performance measurement and participation in the development of performance measures:

Table 7. Interview questions used to identify influence of frontline workers.
Closed: Frontline workers demonstrate a strong commitment to performance measurement.
Closed: Frontline workers are a source of new performance measures.
Closed: Frontline workers evaluate and eliminate existing performance measures.

The study also examined the performance measurement processes that serve as mechanisms by which stakeholders affect the measurement variation decisions of public managers. The goal of the study was to achieve a qualitative understanding of the performance
measurement processes cities use to facilitate the adding or updating of performance measures. Open-ended interview questions, shown in Table 8, encouraged city employees to describe who participates in the processes, the timing of the processes, and their opinions as to the effectiveness of the processes.

Table 8. Interview questions used to understand city-level performance measurement processes.
Open: Does your city/department have a process for discovering and adding new measures?
Open: What does the process of adding new performance measures look like?
Open: Does your city/department have a process for evaluating the effectiveness of existing measures?

Note that the model does not specify what is being measured. The questions described above address performance measurement in general and are used to establish the overall validity of the model. But the study is also interested in one context in particular: the development of social equity measures. As such, the interview also included a complementary set of questions aimed at understanding stakeholder commitment to social equity and the extent of social equity measurement variation. Overall, for every general performance measurement question, a parallel question was asked regarding social equity performance measures in particular. Table 9 shows a sample of the social equity questions. The full set can be found in Appendix A (Interview Instrument).

Table 9. Sample social equity interview questions.
Closed: The office of the Mayor (or City Council) demonstrates a strong commitment to social equity.
Closed: The office of the Mayor (or City Council) uses equity measures to guide decision making.
Closed: Frontline workers are aware of social equity in general.
Closed: Frontline workers measure their progress toward social equity goals.
Open: Has this measurement review process occurred for social equity measures?
Open: Can you give me an example of a social equity measure that was added?

Comparative Case Study

The study used a comparative case study research method. There is limited scholarship examining the processes that managers employ to review and revise their performance measures. A descriptive understanding of these processes is thus a principal component of the study. Case study field research is well suited to building a general understanding of complex social phenomena (Singleton & Straits, 2010). In addition, comparative case studies, which analyze data across several cases to test or develop a theoretical framework, allow for a higher degree of certainty in the findings (Dinour, Kwan, & Freudenberg, 2017). While a single case study with a strong research design can provide insight, it is through multiple case studies that we can begin to map the underlying pathways between X and Y (Weller & Barnes, 2014, p. 7). Two criteria for case selection were used: the expected relationship criterion and the variation in case characteristics criterion (Weller & Barnes, 2014). The primary concern of the study is to understand the nature of the relationship between political influence (X) and measurement variation (Y) and how operational influence affects the relationship. To satisfy the expected relationship criterion, it is necessary to identify cases where the X variable is related to the Y variable (Weller & Barnes, 2014). In the context of this study, this implies selecting cities where there is a referenced political commitment to performance measurement as well as some evidence of performance measures. While the presence of these two elements does not guarantee there will be a relationship between X and Y, it at least makes it more probable that both variables will be present for analysis. To satisfy the variation in case characteristics criterion, it was necessary to identify cases that feature different
levels of the X/Y relationship (Weller & Barnes, 2014). This necessitated selecting cities with different levels of performance measurement influence by political and operational stakeholders as well as cities with different levels of performance measure development. Several steps, detailed in this section, were taken to identify cities for case selection. First, convenience sampling was used to identify an initial population of cities. Second, a web search was performed to identify cities with a referenced operational or political commitment to performance measurement. Third, a web search was performed to identify cities engaged in the development of performance measures. Fourth, cities were classified into different case types according to the degree of influence by stakeholders and different levels of performance measurement development. The first step used convenience sampling to select which cities would serve as the population. Colorado cities with populations over 75,000 were selected for the study, as shown in Table 10. Fourteen Colorado cities have a population greater than 75,000. Focusing on Colorado cities allowed the researcher to conduct in-person interviews and to leverage local contacts to obtain interviews. It also reduced bias due to regional variation.


Table 10. Colorado city statistics (United States Census Bureau, 2017).
City / Population
Denver 682,545
Colorado Springs 456,568
Aurora 359,407
Fort Collins 161,175
Lakewood 152,597
Thornton 133,451
Arvada 115,368
Westminster 113,130
Centennial 109,741
Pueblo 109,412
Boulder 107,349
Greeley 100,883
Longmont 92,088
Loveland 75,182
Mean 197,778
Standard Deviation 177,352

The next step was to identify operational and political stakeholder influence. Finding web-based evidence of the independent variables (political, agency leadership, and frontline worker demand for performance measurement) proved difficult. Specific stakeholders are not generally referenced in city plans or on city web sites. Instead, proxy measures were used. Political stakeholder influence was identified as being present if social equity performance goals or measures were referenced in the city plan or if there was a centralized performance measurement group, dashboard, or initiative. The assumption is that city plans are a primary vehicle of influence by external political stakeholders. Operational stakeholder influence was identified as being present if social equity performance goals or measures were found in city


departmental web sites or plans. The assumption is that agency plans are developed by agency leadership and thus serve as a proxy for influence by organizational leadership.

Data was collected on the proxy variables using the criteria shown in Table 11. For the first column, labeled Political Influence, city plans were searched for social equity goals and measures. City plans with explicit equity goals or measures were marked Yes; plans that referenced equity but had no specific goals or measures were marked Partial; plans with no reference to equity were marked No. For the second column, labeled Operational Influence, the web sites and agency plans for the city-level housing, transportation, and development departments were searched. Agency plans which listed equity as a primary goal, had specific equity goals, or had equity performance measures were marked Yes. Plans that did not mention equity, or that only mentioned it in a cursory way, were marked No.


Table 11. Independent variables and the associated proxy measures.
Variable: Political Influence; Operational Influence
Proxy Measure: Social equity referenced in City Plan; Social equity referenced in departmental plan
Arvada: No; No
Aurora: Partial; No
Boulder: Yes; No
Centennial: No; No
Colorado Springs: No; No
Denver: Yes; Partial
Fort Collins: Yes; Yes
Greeley: No; No
Lakewood: Partial; No
Longmont: Partial; No
Loveland: Partial; Partial
Pueblo: No; No
Thornton: Partial; No
Westminster: No; No

The next step was to identify the presence of variation of social equity measures in the cities (the dependent variable). Two proxy measures were used to identify measurement variation. First, variation was identified as being present if social equity measures were found on performance dashboards or in city plans. The assumption is that the inclusion of equity measures in city plans or performance dashboards indicates the existence of activities or processes that drive the generation of new measures. Second, variation was identified as being present if there was a performance measurement department or group. The assumption is that a primary step in setting up a performance measurement dashboard is the identification of meaningful measures.


This identification process is likely to contain variation processes aimed at adding new measures and selection processes aimed at weeding out inappropriate measures. The assumption is that a primary function of a performance measurement department or group is to identify meaningful measures through processes of measurement development and analysis.

Data was collected on the proxy variables using two criteria shown in Table 12. First, city web sites were searched for social equity performance measures. These measures could appear in performance dashboards or on agency-specific plans or pages. Cities with dashboards or plans containing quantitative equity measures were marked Yes; cities with equity goals but no associated quantitative measures were marked Partial; cities that did not have equity measures, or that only mentioned them in a cursory way, were marked No. Second, results were further researched to determine if they were related to performance measurement as defined by this study. Cities with identifiable performance measurement dashboards, departments, or groups were marked Yes; cities with some evidence of performance measurement, but with no


identifiable dashboard, department, or group, were marked Partial.

Table 12. Dependent variable and the associated proxy measures.
Variable: Measurement Variation
Proxy Measure: Social equity measures on dashboard or in plan; Performance measurement dashboard, department, or group
Arvada: Partial; Yes
Aurora: No; Partial
Boulder: Partial; Partial
Centennial: No; Yes
Colorado Springs: No; No
Denver: Partial; Yes
Fort Collins: Partial; Yes
Greeley: No; No
Lakewood: Yes; No
Longmont: Yes; No
Loveland: Yes; Partial
Pueblo: No; No
Thornton: No; Partial
Westminster: Partial; Partial

The final step was to classify the cities according to their different levels of stakeholder influence and measurement variation. Table 13 classifies the population of cities according to four case types. Cities were classified as "No measurement variation" if there was no data found for any of the dependent or independent proxy variables. Cities were classified as "No stakeholder influence" if there was no evidence of political or operational influence by stakeholders but there was some evidence of measurement variation. (There is an important distinction between these first two case types: in the "No measurement variation" case, no measurement variation was found for any of the variables, making it difficult to draw any conclusions as to how the city would fit into the


model. In the "No stakeholder influence" case, no evidence was found of operational or political influence, but some evidence was found of measurement development, implying that there is familiarity with measurement development, but for some reason there is no equity measurement development occurring.) Cities were classified as "Some stakeholder influence" if there was evidence of political or operational influence by equity stakeholders and there was evidence of social equity measurement variation. Cities were classified as "Strong stakeholder influence" if there was evidence of both political and operational influence by equity stakeholders and there was evidence of variation for each proxy variable.

Table 13. Classification of cities into potential case types.
No measurement variation. Criteria: no evidence of political or operational influence (the political and operational influence columns in Table 11) and no evidence of measurement variation (the variation columns in Table 12). Cities: Colorado Springs, Greeley, Pueblo.
No stakeholder influence. Criteria: no evidence of political or operational influence (the political and operational influence columns in Table 11) and some evidence of variation (the variation columns in Table 12). Cities: Arvada, Centennial, Westminster.
Some stakeholder influence. Criteria: evidence of political or operational influence (the political and operational influence columns in Table 11) and evidence of variation (the variation columns in Table 12). Cities: Aurora, Boulder, Lakewood, Longmont, Loveland, Thornton.
Strong stakeholder influence. Criteria: evidence of both political and operational influence (the political and operational influence columns in Table 11) and evidence of variation (the variation columns in Table 12). Cities: Denver, Fort Collins.
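To make the classification rule concrete, the short Python sketch below applies one possible reading of the Table 13 criteria to the codings in Tables 11 and 12. It is not part of the dissertation's method (the study classified cities by hand); in particular, the threshold used to separate the "Strong" from the "Some" case type is an assumption chosen so that the sketch reproduces the assignments in Table 13.

EVIDENCE = ("Yes", "Partial")  # either coding counts as evidence for a proxy

# (political influence, operational influence) codings from Table 11
influence = {
    "Arvada": ("No", "No"), "Aurora": ("Partial", "No"), "Boulder": ("Yes", "No"),
    "Centennial": ("No", "No"), "Colorado Springs": ("No", "No"),
    "Denver": ("Yes", "Partial"), "Fort Collins": ("Yes", "Yes"),
    "Greeley": ("No", "No"), "Lakewood": ("Partial", "No"),
    "Longmont": ("Partial", "No"), "Loveland": ("Partial", "Partial"),
    "Pueblo": ("No", "No"), "Thornton": ("Partial", "No"), "Westminster": ("No", "No"),
}

# (equity measures on dashboard/plan, PM dashboard/department/group) from Table 12
variation = {
    "Arvada": ("Partial", "Yes"), "Aurora": ("No", "Partial"), "Boulder": ("Partial", "Partial"),
    "Centennial": ("No", "Yes"), "Colorado Springs": ("No", "No"),
    "Denver": ("Partial", "Yes"), "Fort Collins": ("Partial", "Yes"),
    "Greeley": ("No", "No"), "Lakewood": ("Yes", "No"), "Longmont": ("Yes", "No"),
    "Loveland": ("Yes", "Partial"), "Pueblo": ("No", "No"),
    "Thornton": ("No", "Partial"), "Westminster": ("Partial", "Partial"),
}

def classify(city):
    pol, op = influence[city]
    has_influence = pol in EVIDENCE or op in EVIDENCE
    has_variation = any(v in EVIDENCE for v in variation[city])
    if not has_influence and not has_variation:
        return "No measurement variation"
    if not has_influence:
        return "No stakeholder influence"
    # Assumed threshold: a clear ("Yes") political commitment plus at least
    # partial operational commitment marks the "Strong" cases. Every sampled
    # city with stakeholder influence also showed some variation, so the
    # "Some" branch does not re-check has_variation.
    if pol == "Yes" and op in EVIDENCE:
        return "Strong stakeholder influence"
    return "Some stakeholder influence"

for city in sorted(influence):
    print(f"{city}: {classify(city)}")

Run as written, the sketch assigns every one of the fourteen cities to the same case type listed in Table 13.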


The final step was to select the cities that would serve as the sample. Five cities were selected: one from the "No stakeholder influence" case type, two from "Some stakeholder influence", and two from "Strong stakeholder influence". The final list of cities selected as case studies for the study is shown in Table 14.

Table 14. Case studies selected for study. Data from Colorado Department of Local Affairs (2015) and American Community Survey (2015).
Arvada: No influence
Boulder: Some influence
Longmont: Some influence
Denver: Strong influence
Fort Collins: Strong influence

Participants

Purposeful sampling (Singleton & Straits, 2010) was used to select initial interviewees according to their ability to contribute rich and reliable information (Curtis et al., 2000). Information-rich participants are those who can provide additional context to the phenomenon being studied and who can inform the study by providing detailed information and multiple perspectives. At the same time, the researcher understands the difficulty in getting interview time with city administrators and so targeted interviewees who were willing to participate. In addition, snowball sampling (Singleton & Straits, 2010) was used as a referral technique wherein initial interviewees were asked to identify other individuals that could be useful to the study. Interviews were held with two public managers from each of the five Colorado cities for a total of ten interviews. Interview data was drawn from three populations:


Social equity administrators were interviewed to understand the history of the program; how the goals and objectives of the program are created, evaluated, and reshaped; and how they are communicated to the departmental and political levels. Performance measurement administrators were interviewed to understand the extent of measure development and use and the characteristics of the performance measurement program. Department-level planners were interviewed to understand if measures are being used to track equity goals and the characteristics of those measures.

Instrument: Interviews and Document Analysis

Semi-structured interviews (Singleton & Straits, 2010) were selected as the primary data collection instrument for several reasons. While a growing literature has documented the factors that encourage and impede the use of performance measures by managers (Kroll, 2015), there is limited scholarship examining the measurement development processes of organizations. Organization and public management literatures have been leveraged to develop the underlying theory of the study, and there is a substantial amount that is unknown about measurement development. For example, is performance measurement a recognizable concept in city departments? How do public managers identify different stakeholders? Interviews permitted a level of concept exploration that would not otherwise have been available with a survey. Interviews also allowed for the context of the interviewee to be considered. For example, if an interviewee self-identifies as a strong advocate for performance measurement, their answers should be considered in the context of that role. The goal of the research is to derive facts and precise quantitative descriptions and to build a descriptive understanding of measurement


processes. Semi-structured interviews serve this purpose by enabling flexibility in hypothesis development and theory building (Singleton & Straits, 2010).

In-depth semi-structured interviews were held with each participant, making use of open-ended questions, as outlined in the interview instrument (Appendix A). The interview questions were structured to allow participants to describe the development and use of performance measures in general as well as the utilization of social equity measures in particular. Participants were encouraged to reflect, expand, and elaborate on their experiences with performance measurement. Interviewees were also asked a series of close-ended questions using the following scale: 1 = To no extent; 2 = To a small extent; 3 = To a moderate extent; 4 = To a great extent; 5 = To a very great extent. The questions asked interviewees to rate levels of commitment to performance measurement and social equity for the Mayor or City Council, departmental leadership, and frontline workers. The interviewees were also asked to rate the influence the different stakeholders have on the development of new measures and the evaluation and elimination of existing measures.

Document analysis was also conducted for the sample city agencies. Documents are an important source of indicators and measures of social structural variables and processes (Singleton & Straits, 2010). City and department-level documentation thus served as a primary source of data for identifying performance measurement processes and measures and the key stakeholders that influence those processes and measures. City web sites were searched for planning documents that referenced performance indicators, and these documents were examined to develop an understanding of the source of social equity goals and how these goals influence the development of social equity measures.


Table 15 lists the documents that were analyzed for each city and the year of their publication. Four categories of documents were examined: city strategic plans, social equity plans, budgets, and departmental plans. The goal was to select documents that were similar across the five cities.

Table 15. City documents analyzed for the study.
Comprehensive Plan: Arvada, City Strategic Plan (2017); Boulder, Boulder Valley Comprehensive Plan (2015); Denver, Blueprint Denver Diagnostic (2017); Fort Collins, Strategic Plan (2016); Longmont, Comprehensive Plan (2016).
Social Equity Plan: Arvada, Sustain Arvada Plan (2012); Boulder, Human Services Strategy (2017); Denver, 2020 Sustainability Goals (2017); Fort Collins, Social Sustainability Strategic Plan (2016); Longmont, Sustainability Plan (2016).
Budget: Arvada, 2017-2018 Biennial Operating and Capital Budget, Volume II, Performance-based Budgeting (2017); Boulder, 2017 Annual Budget Volume I (2017); Denver (2017); Fort Collins, Biennial Budget (2017); Longmont, 2017 Operating Budget (2017).
Departmental Plan: Arvada, Imagine Arvada, Parks, Trails and Open Space 2016 Master Plan (2016); Boulder, Boulder Parks and Recreation Department Master Plan (2014); Denver, Denver Parks and Recreation 2017 Game Plan Update (2017); Fort Collins, 2015-2017 Operational Plan (2014); Longmont, City of Longmont, Parks, Recreation & Trails Master Plan (2014).


Complete congruence of documentation between cities was not possible because of the different approaches taken by cities with regard to budgeting and planning. While some cities rely on departmental plans to track performance measures, others may put their measures into their higher-level city plans or budgets. The hope is that the chosen cross-section of documents provides a sufficient account of performance measures. The documents were analyzed in NVivo (QSR International, 2011) in order to convert the qualitative data into a form that could be analyzed using a comparative analysis technique.

Ten interviews of city employees took place. The semi-structured interviews varied in length from 30 minutes to 75 minutes. All interviews were recorded, then transcribed in full. Transcribed interviews were uploaded to NVivo software for the coding process. Initial coding categories were created based on interviews conducted for Larson, Jacob, & Butz (2017), a previous study. For the previous study, interviews were conducted across three U.S. cities to understand how practitioners were using social equity performance measures.

Research Design Limitations

Several research design limitations restrict our ability to generalize the findings beyond the sample cities. First, our understanding of performance measurement processes is drawn from open-ended interview questions and city documentation. No close-ended questions addressed performance measurement processes. This makes it difficult to compare the processes between cities, as interviewees used different language to describe city processes and the data collected regarding the characteristics of measurement processes was not equivalent across the cities. Second, the study relies on interviewee perceptions of stakeholder influence and measurement processes, rather than on direct observation by the researcher. As a result, the data may be affected by lack of truthfulness, misunderstanding of


questions, or inability to recall past events (Singleton & Straits, 2010, p. 271). Third, the level of measurement is interviews with city managers while the level of analysis is the city. The study relies on the perceptions of city managers regarding the influence of different stakeholders and levels of measurement development and attempts to draw city-level conclusions from that data. Given that only two managers per city were interviewed, this introduces the likelihood of measurement error due to the limited city-level perspectives.


CHAPTER V
A DESCRIPTIVE COMPARISON OF PERFORMANCE MEASUREMENT DEVELOPMENT ACROSS FIVE COLORADO CITIES

Introduction

This study seeks to better understand the impact of political and operational stakeholders on the measurement development practices of public managers. The principal theoretical argument of the study is that political and operational stakeholders, by activating exploration and exploitation tendencies in managers, encourage different behavior aimed at the development of performance measures. Figure 6 shows the proposed theoretical model hypothesizing how different levels of political and operational stakeholder influence impact the performance measurement development behavior of public managers. It suggests that a performance measurement environment characterized by strong commitment by political stakeholders will encourage public managers to create and update measures (exploration); strong commitment by agency leadership will encourage public managers to use existing measures (exploitation); and strong commitment by frontline workers will encourage public managers to create and update measures (exploration).

Figure 6. The proposed theoretical model comparing stakeholder influence against measurement variation.


This chapter examines data from five case cities in Colorado to test the theoretical model. Three steps are used to test the model:
1. Identify the levels of political and operational stakeholder influence (the independent variables).
2. Identify the levels of measurement variation (the dependent variable).
3. Perform a comparative analysis to determine the relationship between stakeholder influence and measurement variation.
Does the analysis support hypotheses regarding performance measures in general? Does it also support the hypotheses regarding social equity measures in particular? This chapter addresses the first two steps. It reviews the performance measurement and social equity environments of the cities and measures variables for each city. Chapter VI then examines the causal relationship by comparing and contrasting the observed causal effect between the independent and dependent variables for each case. An evaluation of the model is then presented.

Three primary sources were used to collect data on stakeholder influence and measurement variation: city documentation, open-ended interview questions, and close-ended Likert scale questions. First, city and department-level documentation was collected and analyzed to identify performance measurement processes and measures. Four categories of documents were examined: city strategic plans, social equity plans, budgets, and departmental plans. Second, open-ended interviews were conducted with city employees. Two employees were interviewed from each city. A performance measurement administrator was interviewed to understand the extent of measurement development and the characteristics of the performance measurement processes. If the City had a social equity group or a department that housed a social equity initiative, a manager from that group or department was


interviewed. If the City did not have a social equity group, a manager from a department with equity or social sustainability programs was interviewed.

In addition to the open-ended interview questions, each manager was asked a series of close-ended questions using the following scale: 1 = To no extent; 2 = To a small extent; 3 = To a moderate extent; 4 = To a great extent; 5 = To a very great extent. The close-ended questions can be found in the interview instrument (Appendix A).

This chapter continues by summarizing the results of the data analysis across the five cities. The chapter then goes into detail regarding the performance measurement and social equity findings for each city. Next, the key findings are discussed. These findings, extracted from the city documentation, qualitative interviews, and close-ended questions, are then used to fit each city into the theoretical model. Lastly, the comparative analysis is summarized and any limitations are discussed.

Summary of Findings

Figure 7 summarizes the results from the close-ended interview questions regarding stakeholder influence. For each city it shows the level of influence, on a scale of 1-5, for Mayor or City Council, agency leadership, and frontline worker influence. Arvada shows evidence of high levels of influence across all stakeholders. Boulder shows evidence of moderate levels of influence for agency leadership and frontline workers and smaller influence for the City Council. In Denver, Mayoral and agency leadership influence on performance measurement are high while frontline worker influence is very low. Fort Collins shows a pattern similar to Denver, with higher levels of influence from City Council and agency leadership and lower levels of


influence from frontline workers. In Longmont there is evidence of moderate influence by City Council and agency leadership while frontline worker influence is very low.

Figure 7 suggests that Arvada has one of the stronger cultures of performance measurement, and qualitative interview data reinforces this assertion. Arvada has an integrated citywide performance management system, holds regular performance review meetings between City Council and agency leadership, and surfaces quantitative performance measures in financial reports, departmental plans, and on the public performance measurement dashboard. Arvada interviewees also stressed that frontline workers were regularly included in performance measurement development and review processes (City Interviews, 2017).

Figure 7 also suggests that Denver and Fort Collins have strong support for performance measurement from the Mayor and City Council and departmental leadership. The interview data reinforces this assertion, though, in both cases, interviewees felt that the majority of performance measurement decision making in these two cities occurred at the departmental level. That is, while politicians supported performance measurement, they were not highly involved in the nuts and bolts of measurement development; the quantitative questions did not specifically ask about stakeholder involvement in performance processes.

Boulder and Longmont show small to moderate influence for all stakeholders, suggesting that performance measurement may be less of a focus across these cities. The interview data reinforces this observation. While there is a strong commitment to performance measurement in Boulder, this commitment originates more from an organizational, bottom-up confidence in the value of performance measurement, rather than from a top-down mandate from the City Council. Performance measurement processes in Boulder and Longmont are largely decentralized and


managed by individual departments. As a result, departments have developed their own performance measurement programs, some of which are more effective than others (City Interviews, 2017). Responsibility for performance measurement in Longmont also resides at the departmental and program levels. Performance measurement is thus a decentralized function and the development of measures is not guided by any centralized, formal processes. Rather, new measures tend to be introduced in an ad hoc manner in response to planning initiatives or periodic departmental performance reviews (City Interviews, 2017).

Figure 7. Levels of stakeholder influence over measurement variation.

Figure 8 shows the levels of measurement variation across the case cities. It lists levels of general performance measurement variation alongside levels of social equity performance measurement variation. As detailed in the previous section, variation is a variable measuring the addition of new measures or the change or elimination of existing measures. Arvada shows evidence of having strong measurement variation for performance measures in general, but there was no evidence of social equity measurement variation. Boulder shows evidence of small to moderate


levels of measurement variation in general and lower levels of social equity measurement variation. Denver shows evidence of moderate levels of measurement variation in general, but only small levels of social equity measurement variation. Fort Collins shows evidence of greater levels of performance measurement in general and was the only city to show moderate levels of social equity performance measures. Longmont demonstrated only small levels of measurement variation in general and there was no evidence of social equity measures.

The levels of general performance measurement variation shown in Figure 8 roughly correspond to the overall levels of stakeholder involvement shown in Figure 7. While it is not surprising that there is a correlation between stakeholder interest in performance measurement and the actual development of performance measures, this correlation between stakeholder influence and measurement variation does give us some confidence regarding the construct validity of key close-ended questions.

Figure 8 also exposes a complication of the study: only one city, Fort Collins, is actively developing social equity measures, while two other cities, Denver and Boulder, have developed only a small number of social equity performance measures. The sample was designed to include those Colorado cities most likely to have social equity measures, and initial examination of city web sites indicated that all five cities had city-level social equity goals and personnel dedicated to social equity. Nonetheless, qualitative interviews found that while the cities do indeed have social equity goals, only three are beginning the process of understanding and developing social equity measures and, with the exception of Fort Collins, no quantitative social equity measures have been created. One objective of this study is to test the theoretical model by examining the variation of social equity measures in particular. The presence of strong performance measurement processes


does not guarantee that measures focused on specific values, like social equity, will be developed. Some level of commitment to social equity from political and operational stakeholders may also be a necessary component of social equity measurement development. To account for this, the interviews included a complementary set of questions aimed at understanding stakeholder commitment to social equity. For every general performance measurement question, a parallel question was asked regarding social equity performance measures in particular.

Figure 8. Levels of general performance measurement variation and social equity measurement variation.

Figure 9 shows the levels of stakeholder commitment to social equity. Arvada shows evidence of moderate levels of commitment to social equity from City Council and greater commitment from agency leadership. Frontline workers evidence moderate awareness of social equity. Boulder shows evidence of moderate levels of commitment from City Council and greater commitment from agency leadership. Frontline workers evidence small levels of awareness of social equity. In Denver, Mayoral and agency leadership shows evidence of


moderate levels of commitment to social equity. Frontline workers evidence little awareness of social equity. Fort Collins shows evidence of high levels of influence across all stakeholders.

Figure 9 shows a level of stakeholder commitment to social equity that is relatively high across the cities in the study. This assertion is not fully supported by the interview data and suggests that there may be problems with the interview instrument. Interview data suggests that while there is some commitment to social equity in some departments in the city of Arvada, this commitment is not widespread and is not shared by political stakeholders. Similar mismatches between the commitment shown in Figure 9 and actual performance measurement action towards measuring social equity occur within the other case cities as well.

There are two ways to interpret this incongruence between the higher levels of commitment to social equity suggested by the close-ended questions and the lower levels of commitment suggested by the qualitative interviews. The first possibility is that this study is occurring during a transitional moment in the adoption of social equity by local governments. That is, the pursuit of social equity as an important value may be a new phenomenon for these cities. Thus, they may have strong intentions for developing social equity measures, but are just at the beginning phases of the initiative. Interview data suggests this may be the case. Interviewees from Boulder and Longmont suggested that there was momentum to develop social equity measures, but that they were still several months away. A second interpretation is that the close-ended questions related to social equity are being misinterpreted by the interviewees. That is, the close-ended questions are not a valid measure of commitment to social equity. That is not to say the question was invalid; in fact, it directly asked interviewees to rate the statement, "The office of the Mayor (or City


Council) demonstrates a strong commitment to social equity," a direct parallel of the performance measurement question. Rather, the issue may be more about how the interviewees define social equity. Several interviewees thought that the development of affordable housing demonstrated a strong commitment to social equity by the city. Thus, it is likely that the concept of social equity requires clarification, a point noted by other researchers (e.g., Jacob & Larson, 2015).

Figure 9. Levels of stakeholder commitment to social equity.

The remainder of this section provides information on the five case cities and presents additional findings from the open-ended and close-ended interview questions. The data is used as the basis for classifying each city within the theoretical model.

Arvada

The City of Arvada (City of Arvada, 2014, p. 11) is located in Jefferson and Adams counties northwest of Denver. With a population of 110,909 (Colorado Department of Local Affairs, 2015), it ranks as the seventh largest city in Colorado. The largest racial groups in Arvada are White alone (81% of the population), followed by Hispanic or Latino (13.7%), and Asian (2.2%) (United States


Census Bureau, n.d.). The estimated 2017 unemployment rate was 2.2% (United States Bureau of Labor Statistics, n.d.). The City of Arvada has a household median income of $69,938 and an estimated 8.4% of all people live below the poverty line (Colorado Department of Local Affairs, 2015).

Performance Measurement in Arvada

City and departmental plans reveal evidence of a strong performance measurement focus in Arvada. In June 2013, the Arvada City Council adopted a strategic plan that identifies the City's strategic priorities (City of Arvada, 2017a, p. 6). The plan directs city departments to utilize performance-based management practices and requires contracts for city services to contain specific performance measurements. The City Council strategic plan is the foundation for FOCUS, an integrated citywide performance management system that utilizes data to drive decision making and budgeting. Under FOCUS, each department operates within a strategic business plan containing objective performance measures. Departments share performance updates and outcomes on a quarterly basis with city leadership (City of Arvada, 2017b). As a result of FOCUS, city staff develop departmental business plans to promote data-driven change based upon analyzing progress with performance measures (City of Arvada, 2017a).

Performance measures are surfaced in financial reports, departmental plans, and on the public performance measurement dashboard. For example, the 2017 mid-year financial report identifies a number of performance measures, including emergency call response times, percent of neighborhoods with organized associations, and water consumption, among other measures (City of Arvada, Finance Department, 2017). These performance measures are quantitative, have clear targets, and are measured on a quarterly or yearly basis. Departmental plans (see e.g., City


of Arvada Parks, Golf and Hospitality Services Department, 2016) extensively reference quantitative performance measures and progress against established performance baselines. The City also maintains a public performance measurement dashboard (City of Arvada, Office of Performance Management, 2017). The dashboard serves both external and internal goals: it highlights the value the community receives from its investment in Arvada and encourages process improvement and innovation for City departments. The measures on the dashboard are principally output measures that track completion of specific service or capital projects. For example, measures track the building of homeless shelters and the number of neighborhood community associations. The dashboard contains no outcome measures that track specific demographic groups and there are no social equity specific measures.

Interviews with Arvada city managers provide evidence suggesting a strong culture of performance measurement across the City Council and city departments. Though the focus on performance measurement can vary by department, each department is required to develop a strategic business plan that contains performance measures for their departmental programs. In addition, City Council emphasizes performance measurement. There is an annual retreat with City Council members and key staff from the departments. As part of the retreat, performance measures for different initiatives are reviewed (City Interviews, 2017).

There is a hierarchy of policy documents that guides performance measurement. First, there are the high-level plans, such as the Comprehensive Plan and the Sustain Arvada Plan, that outline policies and guide program strategy. Underneath these plans are the programs, and each program has performance measures associated with it. Similarly, there is a hierarchy of performance measures. Measures that are important to council members may be tracked in the


top-level plans and might be reviewed with the City Council during the annual retreat. In addition, there may be performance measures proposed and tracked by departmental leaders or frontline workers. These may be tracked in departmental or program-level plans and are reviewed at the departmental level (City Interviews, 2017).

Social Equity in Arvada

City documents reveal little evidence that social equity is a political priority. The most recent 2017 City Strategic Plan (City of Arvada, 2017a) does not mention social equity as a priority nor does it refer to any social equity specific projects. The Arvada Sustainable Action Plan (2010) includes social sustainability as one of its three main pillars alongside economic and environmental sustainability, but no details or plans for pursuing social sustainability are given. Lastly, the Sustain Arvada Plan (2012) is primarily focused on environmental sustainability and does not mention social equity or social sustainability.

Interviews with city managers suggest that there is a small but growing focus on social equity in the City of Arvada. As a result, some city programs have begun integrating social equity components. The Community Vitality program encourages community involvement through food production and distribution, so it touches on equity issues like food access and food vulnerability (City of Arvada, 2012). There are also programs aimed at increasing the number of neighborhood associations and HOAs with the goal of giving a voice to different members of the community. While a number of programs have social equity components, there is no centralized effort to promote social equity as a City value and social equity is not a focus of City Council (City Interviews, 2017). In addition, social equity is rarely mentioned in City documents and there are no specifically labeled social equity measures. Lack of political will was given as one reason for


the lack of emphasis on social equity (City Interviews, 2017), though the lack of political will may simply mirror the fact that Arvada lacks diversity (it is 90% white alone) compared to other cities in the sample. Interviewees noted that social equity is generally understood and well regarded within most departments, suggesting that there is no operational resistance to social equity.

Key Findings

Stakeholder Influence

Arvada City Council serves as a key influence driving performance measurement. The City Council Strategic Plan, which is adopted by resolution on a yearly basis, reviews the main priorities of the Council and identifies a common set of strategic goals. A quantitative performance measure is associated with most strategic goals. The following is an example: "45% of the City fleet will be capable of using alternative fuel sources." In addition, City Council reviews strategic results every year at an offsite retreat. Council members use the offsite retreat to introduce new strategic goals. It is then the job of departmental staff to design programs that will carry out the goal and propose a set of performance measures that can be used to gauge the progress of the program. The Council then adopts the changes by resolution (City Interviews, 2017).

While the City Council establishes the priorities for the City, it is generally up to departmental leadership and staff to develop relevant measures. One interviewee emphasized the participation of departmental workers during the quarterly Stat review meetings: "It was not just department heads. It was all front line staff talking about their performance measures with the city manager and deputy city managers. That is a great example that it


is not just these talking heads at the top sitting down to talk about performance measures" (City Interviews, 2017).

Performance measurement is viewed as an operational function in Arvada, not a political function. One interviewee described the performance measurement environment as a bottom-up environment where frontline workers introduce new measures and request the elimination of existing ones: frontline workers "are going to recognize something first and then start to figure out what needs to get fixed and then make that proposal to leadership" (City Interviews, 2017).

Measurement Variation

The City of Arvada has clear performance measurement processes that serve as vehicles for the development of new measures and the evaluation of existing measures. Quarterly Stat review meetings bring together departmental leadership and operational team members from the reporting department, the Performance Budget Manager, and representatives from other departments. In order to add, change, or modify any performance measure, it has to be proposed by that department. As one interviewee described the meetings: "Members of not just leadership teams, but front line staff and managers, attend, as well as city managers and deputy city managers and the core stat team and myself. So, people [at all levels are involved]" (City Interviews, 2017). The core Stat team is composed of


81 E vi dence of measurement variat ion was found in City financial reports, departmental plans, and on the public performance measurement dashboard. Examples include: B y 2019, 50% of identified neighborhoods, who in 2013 had no organized groups, will have organized neighborhood associations (City of Arvada, 201 7c). By 2025, locate new parks so that 100 percent of Arvada households are located no greater than mile from a community ( City of Arvada Parks, Golf and Hospitality Services Department, 2016). By 2019, the use of alternate travel modes for commuting to work by Arvada residents will increase from 12% to 15% (City of Arvada, 2017a). There is also evidence of measures being updated and removed The City Strategic Plan (City of Arvada, 2017a) explicitly lists measures that have been removed or updated since the last plan. 10% of Arvada Center operating budget is derived from charitable donations by 2019 2.5% of Arvada households will be engaged with the Arvada Center as paying patrons on an annual basis Social Equity Performance Measures There is some, though limited, momentum towards the adoption of social equity as a political follows: Council has some strategic results around homelessness which to me falls under the side, but last they are still not all the way there. Like affordable housing is not something they want to talk about. But with the food health stuff they are totally on board. So it sort of depends on what the topic is (City Interviews, 2017) In other words, there is political awareness of social equity, but also a distinct lack of emphasis on the topic. This lack of political will regarding social equity has percolated down to the departments resulting in a dearth of social equity language in departmental documents and a lack of social equity performance measures. Interviewees pointed to several measures that can be


82 ewee suggested that a c ity indicator that tracks the development of neighborhood groups is a measure of social equity. Low cost access to local food and recreation were also mentioned as equity related community issues that are tracked with performance mea sures (City Interviews, 2017) Yet, these measures are not referred to as social equity or social sustainability measures and they tend to lack specificity when it comes to which demographic is benefitting. Classifying Arvada The goal of this section is to categorize the City of Arvada across the dependent variables (measurement variation and measurement selection) and the independent variables (political and operational influence). Each variable is rated according to the following scale: 1 2 3 4 5 To no extent To a small extent To a moderate extent To a great extent To a very great extent The rankings are based on the quantitative responses to the close ended interview questions The rankings are an average of the close ended questions associated with the variable as detailed in the variables section of the research design. If a close ended question was not answered by an interviewee, it was excluded from the calculation. Analysis of t he data summarized in Table 16 reveals that the City of Arvada has a strong culture of performance measurement and all three groups under examination elected officials, organizational leadership, and front line workers ar e encouraged to propose new measures. In addition, City interviews suggest that social equity values are fairly prevalent throughout City departments.


considered equity related. One interviewee suggested that a city indicator that tracks the development of neighborhood groups is a measure of social equity. Low-cost access to local food and recreation were also mentioned as equity-related community issues that are tracked with performance measures (City Interviews, 2017). Yet, these measures are not referred to as social equity or social sustainability measures and they tend to lack specificity when it comes to which demographic is benefitting.

Classifying Arvada

The goal of this section is to categorize the City of Arvada across the dependent variables (measurement variation and measurement selection) and the independent variables (political and operational influence). Each variable is rated according to the following scale: 1 = To no extent; 2 = To a small extent; 3 = To a moderate extent; 4 = To a great extent; 5 = To a very great extent. The rankings are based on the quantitative responses to the close-ended interview questions. The rankings are an average of the close-ended questions associated with the variable, as detailed in the variables section of the research design. If a close-ended question was not answered by an interviewee, it was excluded from the calculation.

Analysis of the data summarized in Table 16 reveals that the City of Arvada has a strong culture of performance measurement and all three groups under examination (elected officials, organizational leadership, and front line workers) are encouraged to propose new measures. In addition, City interviews suggest that social equity values are fairly prevalent throughout City departments.


Table 16. Summary of the findings for Arvada, Colorado (1 = to no extent; 5 = to a very great extent).
City Council: Influence on performance measurement, 4.6; Commitment to social equity, 3.5
Agency Leadership: Influence on performance measurement, 4.8; Commitment to social equity, 4.5
Frontline Workers: Influence on performance measurement, 3.8; Commitment to social equity, 3.5
Measurement variation: 5.0
Social equity measurement variation: 1.0

Arvada data revealed strong processes for developing measures centered around periodic strategic reviews involving City Council and operational members of city departments. There is a high level of measurement change. City documentation explicitly notes new measures and summarizes reasons when previously used measures are removed. That said, there are almost no measures that might be considered social equity measures. This incongruence between having robust performance measurement processes, but no social equity measures, is discussed further in Chapter VI.

Boulder

The City of Boulder, a council-manager government where the elected City Council sets policies and the council-appointed city manager administers them, is located 30 miles northwest of Denver with a population of 94,857 (Colorado Department of Local Affairs, 2015). The largest racial groups in Boulder are White (82%), followed by Hispanic (9%), and Asian (5%) (ACS, 2015). The 2017 unemployment rate was 2.8%, below the Colorado unemployment rate of 3.4% and the United States unemployment rate of 4.9% (Boulder


Economic Council, 2017). The City of Boulder has a household median income of $58,484 and an estimated 23.1% of all people live below the poverty line (Colorado Department of Local Affairs, 2015).

The City of Boulder is facing several demographic challenges that drive city strategic planning. These include an aging population and an increasing poverty rate for local households. These changes are expected to drive higher demand for human services (City of Boulder, 2010). In addition, Boulder continues to lack adequate amounts of housing for low and moderate income households.

Since 1977, the City of Boulder and Boulder County have jointly developed and adopted a comprehensive plan that guides land use decisions in the Boulder Valley (City of Boulder, 2010). The Boulder Valley Comprehensive Plan policies guide decisions about growth, development, preservation, environmental protection, economic development, affordable housing, culture and the arts, urban design, neighborhood character, and transportation. The plan also guides how city and human services are provided.

Performance Measurement in Boulder

Boulder is one of 77 cities across the country participating in What Works Cities, a Bloomberg Philanthropies initiative that partners with 100 U.S. cities to build capacity for using data and evidence driven governance (City of Boulder Department of Human Services, 2017). The Comprehensive Plan, which guides overall strategy for the City, states that the City will establish indicators that can be used to measure progress against City goals, though the plan itself contains very few indicators. Instead, the City of Boulder relies on a set of tools and plans to translate the Comprehensive Plan into action. These include City departmental master plans,


zoning regulations, and specific programs and services whose funding is allocated annually through the city budget (City of Boulder, 2010).

The locus of performance measurement decision making resides in the city departments. Most city departments use master plans to guide the provisioning of services and facilities (City of Boulder, 2017b). These plans establish detailed policies, funding priorities, and service standards. Some departments also use program-specific strategic plans in addition to the overall department master plan. Departmental plans vary in terms of their development of performance measures. For example, the Parks and Recreation Master Plan (2013) outlines a set of high-level goals and initiatives and presents targeted departmental objectives based on three financial scenarios: Fiscally Constrained, Action, and Vision. Nonetheless, while the departmental plan has detailed action plans, there are few quantitative indicators of success; the plan is more of a detailed description of planned actions. The Transportation Master Plan, on the other hand, establishes nine measureable objectives for the City (City of Boulder Transportation Division, 2016). These are quantitative objectives that include vehicle miles traveled, greenhouse gas reductions, and crash mitigation, among others.

Discussions with city managers suggest that the processes surrounding the development of new measures and the evaluation of existing measures are somewhat ad hoc and vary across the different departments (City Interviews, 2017). Thus, some departments may be able to describe a process for evaluating measures while others have no identifiable processes. There has been a push to grow a more centralized performance measurement program in the past few years. Centralized performance measurement activities complement the performance measurement processes that have existed in some city departments for years. Activities include the development of the Boulder Measures community dashboard,


which provides the public with data related to city programs and community indicators, organized according to the city's Sustainability Framework (City of Boulder, n.d.). The goal of the Sustainability Framework is to align efforts across the city by establishing a common language for goals and priorities. Boulder Measures includes community indicators that measure pavement conditions, emergency response times, and levels of service, among others. For each indicator a target is defined, historic trends are mapped, and progress against the target is indicated.

A Chief Innovation and Analytics Officer was recently hired in Boulder. Her sole function is facilitating the performance measurement work throughout the city (City Interviews, 2017). She does not have a staff, but works with departmental data analytics managers. Aside from her, most performance measurement work is done by employees embedded in the departments.

Social Equity in Boulder

The study found two primary sources of equity goals and indicators in Boulder: centralized stakeholders, such as the City Council, and city departments. Centralized stakeholders tend to focus on developing high-level equity goals. These goals are then embedded in strategic plans. For example, the City of Boulder Valley Comprehensive Plan is a central planning document that outlines recommended goals and policies, guided by a set of core values, which direct future spending and development (City of Boulder, 2010). Regarding social equity, the Comprehensive Plan identifies sustainability as a core value (City of Boulder, 2010, p. 4). Under this principle of sustainability, decision making in Boulder is guided by the issues of environment, economy, and social equity.


The Comprehensive Plan identifies the equitable distribution of resources as a key goal. This includes ensuring that basic services are accessible and affordable to those most in need, and the consideration of impacts of policies and planning efforts on low-income and special needs populations. Comprehensive Plan references to social equity do not identify race as a key variable, but instead tend to focus on income, age, and special needs as the primary equity categories (City of Boulder, 2010). For example, in the Transportation Master Plan, one objective (that brushes up against equity) aims to expand viable transportation options for older adults and people with disabilities.

There is no central group or program responsible for proselytizing social equity across the City. While social equity as a value is starting to infiltrate departmental plans, initiatives to include equity criteria in program funding or develop equity measures are limited and occur on an ad hoc basis at the departmental level (City Interviews, 2017). A growing amount of social equity related activity is occurring in the City of Boulder Human Services Department. Their 2017 strategy includes increased social equity within the community as one of its key goals (City of Boulder Department of Human Services, 2017). Considerations of economics, race, and ethnicity are factored into decisions regarding allocation of resources for services, programs, and investment. Nonetheless, quantitative measures of progress towards social equity goals are still lacking and existing measures tend to focus on agency outputs (see e.g., City of Boulder Department of Human Services, 2017). Data-driven decision making is one of the three core principles of the plan, indicating a renewed interest in performance measurement and presenting the possibility that future measurement and evaluation processes will be developed that enable better tracking of equity outcomes.


Key Findings

Stakeholder Influence

There is a strong commitment to performance measurement in Boulder, but this commitment is more of a bottom-up belief in the value of performance measurement instead of a top-down mandate from the City Council. Performance measurement is thus a departmental function in Boulder. The City Council develops the strategy and approves focus areas, and performance measures are then developed (or not) at the departmental level, depending on whether or not the department itself is committed to developing performance measures.

Two reasons were given for the lack of influence over performance measurement by political stakeholders. First, it is generally accepted that operational decisions should be left to the departments. The City Council sets overall strategic direction, but tries to stay out of day-to-day operational decisions. Performance measurement is seen as an operational function, something used to improve the effectiveness of governmental operations. Second, in departments where performance measurement has become established practice, there is momentum driving the continuation of existing performance measurement practices and processes. This momentum makes it difficult for external stakeholders to exert influence. One interviewee suggested that there would likely be greater political influence in departments with less performance measurement experience (City Interviews, 2017).

Because performance measurement is decentralized, it is inconsistently applied across different departments depending on the level of commitment from departmental leadership. For example, the Director of Transportation was described as a strong proponent of data-driven management (City Interviews, 2017). Because of this, the Department of Transportation


is one of the main adherents to performance measurement while also having a strong culture of quantitative measurement.

Measurement Variation and Selection

The presence of measures and the processes surrounding the development of measures vary across city departments. Decision making regarding performance measurement is decentralized. As a result, departments have developed their own performance measurement processes, some of which are more effective than others. Transportation, one of the earliest adopters of performance measures, does not have a clear process for adding new measures or reviewing existing measures. The sporadic updating of the departmental Master Plan is the main source of change for performance measures. During the update of the Master Plan, measures that were tracked in the old plan are updated or removed. Measures may also be reviewed during the annual budgeting process, though there is no formal process for review associated with budgeting. The primary data source for evaluation of existing measures comes from community surveys that are conducted every two to three years. Nonetheless, outside of the update cycle there is not a clear process for reviewing or adding performance measures (City Interviews, 2017).

One interviewee suggested that when performance measurement is driven by political forces it can result in dysfunctional indicators. An initiative to measure and reduce greenhouse gases was cited as an example. Political involvement created a drive to create measures that demonstrate near-term change. As a result, the conversion of city buildings to use more efficient lightbulbs was adopted as a measure of progress toward greenhouse gas reduction. The interviewee suggested that while the changing of lightbulbs might be meaningful at some level, compared to the large structural changes that would be required to actually mitigate the


production of greenhouse gasses, the measure is somewhat meaningless. But, because it was something that could be shown to change in the near term, it was developed as a measure of performance (City Interviews, 2017).

There is also a lack of performance measurement processes within the Human Services Department. As one interviewee put it: "I think a lot of the performance measurement work we do is reactionary for the time being. A lot of it is driven by City Council. Some of it is driven by prior leadership. But we do not have a lot of performance measures yet" (City Interviews, 2017). Thus, while there is an understanding amongst senior managers in the department and amongst department staff that choosing the right performance measures and leveraging them to actually manage programs is an effective practice to put in place, in many cases the departments have not had the time, capacity, or the knowledge to do so (City Interviews, 2017).

Social Equity Performance Measures

Social equity is a familiar concept in Boulder (City Interviews, 2017) and there is a general understanding of the value of social equity, both in political and operational realms: "We probably talk about [social equity] in somewhat different ways. Probably in more of a diffuse or dispersed way. So there is certainly a lot of work and concern around affordable housing. There is a lot of work and concern around kids, and kid nutrition, and having access to programs and facilities. And City Council talks about it. And I think the current City Council has talked about it more than previous ones." (City Interviews, 2017)

Nonetheless, there are few social equity performance measures. City documents reveal some discussion of equity with respect to affordable housing, and Human Services focuses on social equity as a primary objective, but with no quantitative measures. One interviewee suggested that there was momentum to develop social equity measures, but those were still several months away.


Because responsibility for performance measurement is decentralized, it is incumbent on departments to develop their own performance measurement processes, some of which are more effective than others. There was no evidence of clear processes for adding new measures or reviewing existing measures, even in departments with a fair number of working performance measures. The updating of measures appears to be mainly centered around the development of departmental plans.

Denver

Denver, a city of 636,923 (Colorado Department of Local Affairs, 2015), is the most populous city in Colorado. The 2011-2015 American Community Survey 5-year estimates (ACS, 2015) indicate that the largest racial groups in Denver are White (53%), followed by Black or African American (9.7%) and Asian (3.6%); 30.9% of the population is of Hispanic or Latino ethnicity. The estimated 2017 unemployment rate was 2.2% (United States Bureau of Labor Statistics, n.d.). The City of Denver has a household median income of $53,637 and an estimated 17.3% of all people live below the poverty line (ACS, 2015).

Performance Measurement

Mayor Michael B. Hancock created the Peak Performance program in 2011 to empower city departments and agency employees to improve work processes through training, coaching, and support (City of Denver, 2017). Peak Performance requires departments to submit periodic Peak Performance reviews. These reviews are quantitative assessments of progress against city and department goals. For example, the Denver Parks & Recreation 2016 Peak Performance review identifies numerous quantitative measures, including park acreage designation, recycle tonnage, and ranger service calls (City of Denver Parks & Recreation,


2016). Most Peak Performance review measures are comparative and track progress against results from previous years. Denver does not have a community-facing performance measurement dashboard. Instead, it relies on the departmental Peak Performance reviews as the main format for disseminating performance data to the public. In addition, the City Budget includes performance measures associated with different city priorities (City and County of Denver, 2017). These measures are quantitative indicators of performance levels from prior years.

Interviewees noted that agency leadership is the focal point for the management and development of performance measures. Previous initiatives attempted to build more of a top-down performance measurement environment in which the central performance management group would exert more control over the process. These have mostly foundered due to strong pushback from city agencies. As a result, current efforts from the central performance management group tend to focus on encouraging the development of performance measures by departments rather than dictating the development of specific measures. The performance measurement structure is thus fairly decentralized, with some exceptions. Performance measures are occasionally tied to executive orders. For example, Executive Order 139 establishes the development of quantitative metrics that can be used to measure the well-being of children (City of Denver Executive Order 139, 2012).

Social Equity in Denver

City of Denver documents reveal a widespread concern for social equity. City of Denver departmental Peak Performance reviews track the gender, ethnic, and age diversity within the department. These indicators track how different racial groups are represented at different job levels within the city. The Denver Community Profile, a report used to identify neighborhoods in


need of increased focus and resources, calls out social equity as a growing concern (City of Denver, 2017). It summarizes both racial and economic indicators across different Denver neighborhoods. The City of Denver budget references social equity as a key priority, but programs geared towards specific racial or economic subclasses are not specifically called out. While there are performance targets for increasing low-cost housing or homeless initiatives, these are specified in general terms and do not single out specific groups, nor are they framed in terms of social equity (City and County of Denver, 2017).

The Human Rights and Community Partnerships (HRCP) program is one area of centralization for social equity concerns (City of Denver, n.d.). The HRCP addresses issues concerning older adults; racial, ethnic, and religious minorities; women and families; people with disabilities; gay, lesbian, bisexual, and transgender individuals; and immigrants and refugees. It tracks measures for race, foreign-born residents, other languages spoken, disability, aging, and income across the city.

A number of departmental plans reference social equity. Equity and Access is a key theme in the 2017 Parks and Recreation plan (Denver Parks and Recreation, 2017). For example, the plan identifies neighborhoods with high levels of ethnic and racial diversity, the lowest incomes, and the highest levels of obesity and chronic disease (p. 56). A demand map for park access identifies different levels of access to City parks for different racial and economic categories and concludes that, while areas of Denver may lack amenities and park access, across the board certain demographic groups are not suffering the lack of service more than others (Denver Parks and Recreation, 2017, p. 57). The Denver Health annual report devotes a section to health equity, though with no quantitative measures (Denver Public Health, 2016). The Department of Environmental Health (DEH) maintains a


neighborhood equity index that measures families in poverty, education levels, access to full-service grocery stores and parks, access to prenatal health care, and child obesity rates (City of Denver, 2017). The Office of Sustainability does not reference social equity, nor is there an attempt to identify individual racial or economic groups (City of Denver Office of Sustainability, 2017).

Key Findings

Stakeholder Influence

Support for performance measurement in Denver originates with the Mayor's Office, and there is a central group focused on the development of city-level performance measures. But responsibility for performance measurement is decentralized and resides within departments. Commitment to performance measurement and the use of performance measures thus varies across departments. Departmental leaders who are operationally focused and who are interested in understanding and improving operational effectiveness tend to be more willing to adopt and use performance measures. Departmental leaders who are more interested in promoting a specific political agenda tend to be less interested in measuring the success of individual programs (City Interviews, 2017).

City programs are a key source of performance measures. As a result, new measures tend to originate at the program level, and frontline workers are primarily responsible for creating the transactional data that is used to track performance measures. In addition, new measures might be developed due to the need to understand the impacts of a new project or service (City Interviews, 2017). One interviewee suggested that earlier top-down efforts aimed at encouraging performance measurement across the city failed because of lack of buy-in from departmental


leaders. Only recently have agencies begun to develop their own measures (City Interviews, 2017).

Measurement Variation

Performance measures are in use across the city and are referenced during the Peak Performance reviews. There is thus a culture of performance measurement and an expectation that performance measurement is a best practice for individuals wanting to convince political leadership of program effectiveness.

There is evidence of performance measurement variation due to early agency plans tending to have an excess of performance measures. The reason for this was that a lot of interested parties were included in the plan, and it was difficult to identify which measures might be more or less meaningful.

    When Technology Services first created their strategic plan they had 178 performance measures that were all at the same level. Now they have it down to four strategies and 12 to 18 tactics that they measure. So 178 down to 20 total. You are asking a lot of questions about paring back to get to some sort of equilibrium where you had a couple of dozen measures that really mattered. (City Interviews, 2017)

Performance measures are being evaluated and eliminated as city departments begin to learn which measures are meaningful and which indicators cannot be measured or do not represent anything worth measuring.

Social Equity Performance Measures

Social equity measures in Denver are scarce, but can be found. The City of Denver departmental Peak Performance reviews track gender, ethnic, and age diversity across all levels of government. Additional indicators touch on health and education;


although the office responsible for them does not list social equity as a goal, the measures are based on race and socioeconomic status.

The overall impression is that while social equity is a legitimate value for the City of Denver, the decentralized nature of performance measurement processes and the tendency for city programs to be the source of performance measurement have resulted in an ad hoc approach to the development of social equity measures. In other words, the presence of social equity measures is largely dependent upon decision making at lower levels of government. The exception to this is the tracking of internal human resource statistics, which show racial disparities across different job classifications.

At a political level, city officials are committed to social equity as a value, but this commitment has not translated into action regarding the development of social equity measures that could be used to track progress toward social equity.

    … [social equity] and actually doing something. I … it. (City Interviews, 2017)

The dearth of social equity measures is not the result of lack of data. Denver is very strong on data collection, largely in the form of neighborhood maps that break down community attributes across geographic, racial, and socioeconomic lines.

    If you are asking about the social equity stuff, I think what we do best is we map stuff and we see where that stuff is falling out. That is our primary tool for social equity evaluation. (City Interviews, 2017)

Instead, the lack of formal processes seems to be a contributing factor to the lack of social equity measures.

    I think the takeaway for me is that we do a good job talking about social equity. It is … data-driven kind of way. Although there are groups that are doing it. Maybe we need a formal way. (City Interviews, 2017)


Classifying Denver

This section categorizes the City of Denver according to the dependent variables (measurement variation and measurement selection) and the independent variables (political and operational influence). Each variable is rated according to a scale from one to five (1 = To no extent; 5 = To a very great extent). The rankings are based on the quantitative responses to the close-ended interview questions. The rankings are an average of the close-ended questions associated with the variable, as detailed in the variables section of the research design. If a close-ended question was not answered by an interviewee, it was excluded from the calculation.

Analysis of the data, summarized in Table 18, reveals that political stakeholders in the City of Denver are committed to performance measurement. At the operational level, commitment to performance measurement varies by department, driven by agency leadership's commitment to performance measurement. There is a fairly widespread concern for social equity at political and operational levels within the City of Denver, evidenced in departmental Peak Performance reviews, which quantitatively track gender, ethnic, and age diversity, among other indicators. In addition, the Denver Budget references social equity as a key priority, but there are no social equity indicators specified (City and County of Denver, 2017). While social equity performance measures are scarce in the City of Denver, they can be found.


Table 18. Summary of the findings for Denver, Colorado (1 = to no extent; 5 = to a very great extent).

    City Council
        Influence on performance measurement    4.0
        Commitment to social equity             3.0
    Agency Leadership
        Influence on performance measurement    4.5
        Commitment to social equity             3.0
    Frontline Workers
        Influence on performance measurement    1.3
        Commitment to social equity             1.0
    Measurement variation                       3.5
    Social equity measurement variation         1.5

The processes surrounding measurement variation and selection vary across different departments. But there is evidence of measurement variation and selection. Performance measures are in use across city departments and are referenced during the Peak Performance reviews.

Fort Collins

The City of Fort Collins, located 65 miles north of Denver, is the most populous municipality of Larimer County. The estimated population was 158,600 as of 2015 (City of Fort Collins, n.d.). The largest racial groups in Fort Collins are White (83%), followed by Hispanic (10%) and Asian (3%) (United States Census Bureau, 2010). The estimated 2017 unemployment rate was 1.8% (United States Bureau of Labor Statistics, n.d.). The City of Fort Collins has a household median income of $55,647 and an estimated 18.6% of all people live below the poverty line (Colorado Department of Local Affairs, 2015).


Performance Measurement

There is a strong emphasis on performance measurement in Fort Collins. Both the City Strategic Plan (City of Fort Collins, 2016) and the budget (City of Fort Collins, 2017a) rely heavily on well-defined quantitative measures to track departmental progress against city strategic goals (City of Fort Collins, 2017a, p. 2). The nonprofit organization Rocky Mountain Performance Excellence (RMPEx) awarded Fort Collins Peak status in 2014 (City of Fort Collins City Manager, n.d.). Peak status, the highest level in the state program, recognizes applicants who demonstrate systematic and mature approaches to performance management. The City of Fort Collins is only the fifth organization to receive this honor in the thirteen-year history of RMPEx.

Fort Collins maintains a performance metrics community dashboard that is updated on a quarterly basis (City of Fort Collins, 2017b). Historically, Fort Collins tended to track internal operational performance metrics. In 2013 efforts were made to tie metrics to specific outcomes and objectives found in the city strategic plan. As such, the current dashboard is organized around the city's strategic outcome areas: neighborhood livability and social health, culture and recreation, economic health, environmental health, safe community, transportation, and high performing government.

Social Equity in Fort Collins

The City of Fort Collins established a formal Social Sustainability Department in 2012 to support a diverse and equitable community that successfully meets the basic needs of all (City of Fort Collins Social Sustainability, 2016, p. 6). This is accomplished through support of programs, policies, and partnerships that provide equity and opportunity for all,


allocating funding to affordable housing and human service agencies, and implementing policies that promote and support self-sufficiency (Larson, Jacob, & Butz, 2017).

Fort Collins conducted a 2014 Social Sustainability Gaps Analysis. The analysis provided quantitative and qualitative assessment of supportive service gaps in the community related to various components of social sustainability, including housing, homelessness, poverty, health, wellness, education, diversity, and the needs of special populations (BBC Research & Consulting, 2014, p. 1). The Gaps Analysis complements the Social Sustainability Strategic Plan, which articulates the city's goals across the dimensions of social, environmental, and economic sustainability (City of Fort Collins Social Sustainability, 2016). The Social Sustainability Strategic Plan identifies high-level goals, strategies to achieve the goals, specific actions that comprise the strategies, and metrics to measure progress and performance. Some of the metrics are quantitative indicators, though most set relative rather than absolute targets. For example, one metric sets a target for improving the city's LGBTQIA acceptance ranking by 2020 (City of Fort Collins Social Sustainability, 2016, p. 20).

Key Findings

Stakeholder Influence

Performance measurement is embraced by the Fort Collins City Council and by the executive team. There is a highly structured approach to mapping performance measures to Council strategic objectives. At the top of the hierarchy, there are seven strategic outcomes, such as transportation, environmental health, and neighborhood livability. These are tracked on the


public community dashboard. Each strategic outcome has four to eight indicators tied to it that give an indication of how the city is doing in achieving each of the outcomes. There is a quantitative target associated with each indicator. Below that, at the operational level, are all the programs and services that the city operates. In the budgeting process, when a department makes a budget request, there has to be at least one performance measure associated with it.

While City Council expects performance measures, they do not say what to measure. Occasionally they will give guidance on metrics, but it is not commonplace (City Interviews, 2017).

    There is kind of an expectation that when items are brought to Council, we will link whatever the item is to a strategic objective in the strategic plan, so they see where it connects. (City Interviews, 2017)

The actual development of measures is pushed to the operational level. Individual departments are expected to track and review the high-level indicators tied to the strategic outcomes. They are also encouraged to track their own internal metrics that can be used to measure the effectiveness of individual programs or services.

Measurement Variation

There is strong evidence of both the elimination and the updating of existing measures in Fort Collins. A link on the community dashboard points to six measures that were modified during the 2017 review by city managers (City of Fort Collins, 2017c). Interviews indicate that these changes originated within the individual departments. During the biannual review of the community dashboard, departmental representatives presented justifications for wanting to alter or eliminate specific community measures. For example, one indicator was modified because the underlying data could not be collected in a timely manner, making the original measure misleading.


There are several processes that contribute to measurement variation. For example, the strategic objectives tracked on the public community dashboard are reviewed biannually and any changes to objectives or indicators are noted. Measurement variation also occurs as the result of the processes focused on tying operational activity to Council-level strategic objectives. Initial efforts produced hundreds of operational measures, overwhelming the ability to tie the measures to the six to nine Council strategic objectives. In response, departments pared back the number of measures to a more manageable number that could reasonably be reviewed on a quarterly basis (City Interviews, 2017).

Processes surrounding the development of new measures reside at the operational level. These processes are again tied to the quarterly review of strategic outcomes. When an area is underperforming, it is incumbent on the department to develop an action plan to improve performance. Tied to that action plan are measurable indicators that can be used to track progress. It is within this process that departmental managers are forced to reflect on their existing measures and think about whether there are better measures that can tell their story (City Interviews, 2017).

Social Equity Performance Measures

Council sometimes requires a sustainability assessment that assesses the social impacts of a program or service. Sustainability assessments result from a city-level strategic objective. Aside from these assessments, the Department of Social Sustainability is where most social equity initiatives generally reside (City Interviews, 2017). Interviewees mentioned a few social equity


measures. One measure is associated with affordable housing inventory: the Social Sustainability Strategic Plan has set a target of 9% of housing inventory being considered affordable. Another social equity measure is the housing opportunity index, which looks at housing costs relative to area median income (AMI) and compares Fort Collins to other Western regions. Yet, outside of these housing metrics, the Department of Social Sustainability has found it challenging to create valuable social equity metrics. One reason is that metrics lending themselves to social equity tend to focus on longer-term outcomes; they are metrics that change on an annual or longer basis, while Fort Collins tries to focus on more frequent quarterly reporting.

Classifying Fort Collins

This section categorizes the City of Fort Collins according to the dependent variables (measurement variation and measurement selection) and the independent variables (political and operational influence). Each variable is rated according to a scale from one to five (1 = To no extent; 5 = To a very great extent). The rankings are based on the quantitative responses to the close-ended interview questions. The rankings are an average of the close-ended questions associated with the variable, as detailed in the variables section of the research design. If a close-ended question was not answered by an interviewee, it was excluded from the calculation.

Analysis of the data, summarized in Table 19, reveals that there is a strong emphasis on performance measurement in Fort Collins, with both the City Strategic Plan (City of Fort Collins, 2016) and the budget (City of Fort Collins, 2017a) evidencing quantitative measures used to track departmental progress against City strategic goals. The City of Fort Collins has a formal Social Sustainability Department, with a core mission of supporting social equity across the City. In addition, a 2014 Gaps Analysis assessed supportive service gaps in the community related to social sustainability (BBC Research & Consulting, 2014).


The Social Sustainability Strategic Plan (City of Fort Collins Social Sustainability, 2016) contains some quantitative indicators focused on social equity.

Table 19. Summary of the findings for Fort Collins, Colorado (1 = to no extent; 5 = to a very great extent).

    City Council
        Influence on performance measurement    3.7
        Commitment to social equity             5.0
    Agency Leadership
        Influence on performance measurement    4.7
        Commitment to social equity             4.0
    Frontline Workers
        Influence on performance measurement    2.0
        Commitment to social equity             4.0
    Measurement variation                       4.0
    Social equity measurement variation         3.0

Fort Collins has a highly structured process for mapping operational performance measures to political strategic objectives. There is evidence that new performance measures have been recently added and that existing performance measures are regularly updated. The City also has strong processes focused on reporting departmental progress against quantitative performance measures. Fort Collins maintains a performance metrics community dashboard that is updated on a quarterly basis, and links on the dashboard summarize the reasons behind any updates to previously reported measures.

Longmont

The City of Longmont, with a population of 89,096 (Colorado Department of Local Affairs, 2015), is the thirteenth largest city in Colorado. The largest racial groups in Longmont are White (67%), followed by Hispanic (26%) and Asian (3%) (ACS,


2015). The estimated 2017 unemployment rate was 2.1% (United States Bureau of Labor Statistics, n.d.). The City of Longmont has a household median income of $62,208 and an estimated 14.1% of all people live below the poverty line (Colorado Department of Local Affairs, 2015).

In 2016, the City conducted a comprehensive assessment to identify current and future needs for services. The assessment identifies a prioritized list of needs by service area that the City can use to create an investment strategy and develop appropriate indicators and outcomes to measure success (Corona Insights, 2016). The assessment identified several challenges that will drive future investment. The city's sizable foreign-born population has been declining recently, and a greater proportion of residents were in poverty in 2014 compared to 2009. Housing dominates as the greatest unmet human service need among low- and moderate-income residents. About 9,000 adults in Longmont need housing help, but only 1,000 to 2,000 are getting the help they need, resulting in unmet need for almost 8,000 adult residents (Corona Insights, 2016). In terms of equity, Hispanic students have the lowest high school completion rates, and Hispanic families are three times more likely to be experiencing poverty than families with a white householder (Corona Insights, 2016).

Performance Measurement

In 2015, the City of Longmont initiated a community-wide planning effort to update the Longmont Area Comprehensive Plan and the Longmont Multimodal Transportation Plan. The Comprehensive Plan provides policy guidance for elected and appointed officials in making choices regarding the long-range needs of the community (City of Longmont,


2016a). The Comprehensive Plan is a public document and an official statement of land use policy approved by the Planning & Zoning Commission and adopted by the City Council. The Comprehensive Plan demonstrates some commitment to performance measurement. Each of the goals presented in the Longmont Comprehensive Plan is accompanied by a set of indicators (City of Longmont, 2016a). Nonetheless, while the plan states that changes in the indicators will be measured, recorded, or tracked, and shared with the community on a regular basis through a report prepared by City staff and an online dashboard (City of Longmont, 2016a, p. 18), there was no evidence of any departmental (or other) plans where the indicators were actually tracked and measured.

Social Equity in Longmont

The City of Longmont Comprehensive Plan includes social equity as part of its goal of housing, services, amenities, and opportunities for all (City of Longmont, 2016a). In general, equity concerns in the Comprehensive Plan tend to be focused on ensuring access for older adults rather than any specific racial or ethnic group.

Outside of the Comprehensive Plan, several programs and initiatives touch on social equity. A multicultural strategic planning process emerged from the findings of a 2001 needs assessment of Boulder County Latinos (City of Longmont, 2002). The resulting Multicultural Plan was a five-year plan designed to guide the community in becoming a multicultural and


inclusive community. Today, the Community Services Department houses a Multicultural Action Committee that continues to address the issues identified in the Multicultural Plan.

The City of Longmont Sustainability Plan is another document that promotes social equity as a key goal. It is organized around the triple bottom line values of environmental stewardship, social equity, and economic vitality (City of Longmont, 2016a, p. 5). Strategies for achieving sustainability are listed across different City departments. Each of these strategies is then mapped to one of the triple bottom line values. The Sustainability Plan establishes general social equity goals (for example, ensuring access to transportation infrastructure for all segments of the community) and identifies actions to accomplish the goals, but does not establish specific quantitative performance measures.

Key Findings

Stakeholder Influence

Responsibility for performance measurement in Longmont resides at the departmental and program levels. There is no centralized person or team that does high-level performance metrics or monitoring for City programs or services. Those responsibilities are very much housed within individual departments and divisions and are often attached to specific programs. As a result, the development of new performance measures tends to occur at the departmental and program management level and flow up the chain of command to the Council. Frontline workers are generally not involved with the development of performance measures (City Interviews, 2017).

Measurement Variation

The development of new measures does not seem to be guided by any formal processes. Rather, new measures are introduced in an ad hoc manner in response to planning initiatives or


periodic departmental performance reviews. For example, several new performance measures were developed during the development of the most recent Sustainability Plan. These metrics grew out of discussions with community members, research into the sustainability efforts of other cities, and internal operational reviews. In the end, the Plan did not specify particular indicators or metrics, but discussions with departmental employees indicate that there is a process underway for specifying quantitative measures.

Social Equity Performance Measures

There is a general awareness of social equity as an underlying value that is important. Some of the awareness of social equity came from a big push from the community in 2015 to get Longmont back on board with the sustainability work (City Interviews, 2017). In addition, Longmont is launching a sustainability dashboard that will aggregate sustainability information from a number of departments. This information will focus on economic, environmental, and social sustainability.

Nonetheless, there is only a scattering of what might be considered social equity measures. Social equity is not a primary objective for City leadership. This lack of political support was cited as being one of the main factors inhibiting the development of social equity performance measures. Thus, while there is some momentum behind the pursuit of social equity from the community and from within individual departments, there is little support from political leaders. As a result, there is little incentive for departments to develop social equity measures and bring them in front of the City Council for discussion and approval. The push to develop social equity measures is also confronted with a lack of data and knowledge of what types of measures should be tracked. Because there are few resources dedicated to social equity, and it is more of a self-initiated, internally driven initiative, it tends to be neglected.


Classifying Longmont

This section categorizes the City of Longmont according to the dependent variables (measurement variation and measurement selection) and the independent variables (political and operational influence). Each variable is rated according to a scale from one to five (1 = To no extent; 5 = To a very great extent). The rankings are based on the quantitative responses to the close-ended interview questions. The rankings are an average of the close-ended questions associated with the variable, as detailed in the variables section of the research design. If a close-ended question was not answered by an interviewee, it was excluded from the calculation.

Analysis of the data, summarized in Table 20, reveals that the City of Longmont has only a moderate commitment to performance measurement from political stakeholders. While the City Plan suggests that departmental indicators will be measured and tracked on a regular basis, few performance measures were found in departmental or other plans. Responsibility for performance measurement in Longmont resides at the departmental and program levels, and there is no centralized person or team that does high-level performance metrics or monitoring for City programs or services.

There is some evidence of political commitment to social equity. For example, the Comprehensive Plan references social equity as one of its goals. These equity concerns tend to be focused on ensuring access for older adults rather than any specific racial or ethnic group. The City of Longmont Sustainability Plan establishes general social equity goals across City departments, and identifies actions to accomplish the goals, but does not establish specific quantitative performance measures.


Table 20. Summary of the findings for Longmont, Colorado (1 = to no extent; 5 = to a very great extent).

    City Council
        Influence on performance measurement    2.8
        Commitment to social equity             3.0
    Agency Leadership
        Influence on performance measurement    2.7
        Commitment to social equity             2.0
    Frontline Workers
        Influence on performance measurement    1.3
        Commitment to social equity             4.0
    Measurement variation                       2.0
    Social equity measurement variation         1.0

There was little evidence of formal measurement variation and selection processes. New measures seem to be introduced in an ad hoc manner in response to planning initiatives or periodic departmental performance reviews. Nonetheless, some performance measures are currently in use and there is evidence of recently developed measures being put to use.

Conclusion

Table 21 summarizes the independent and dependent variables for the five cities. Arvada and Fort Collins demonstrated the strongest performance measurement initiatives and showed the most measurement variation. In both cities the City Council is strongly invested in quantitative measurement of performance, and both have developed performance reporting structures that emphasize the mapping of departmental performance measures to Council strategic objectives.


Table 21. Summary of variable rankings regarding performance measurement influence and measurement development (1 = to no extent; 5 = to a very great extent).

                    Influence over performance measurement        Measurement variation
                    Mayor or       Agency        Frontline        Performance    Social equity
                    City Council   leadership    workers          measures       measures
    Arvada          4.6            4.8           3.8              5.0            1.0
    Boulder         2.3            3.7           3.3              2.5            1.5
    Denver          4.0            4.5           1.3              3.5            1.5
    Fort Collins    3.7            4.7           2.0              4.0            3.0
    Longmont        2.8            2.7           1.3              2.0            1.0

Four cities (Boulder, Denver, Fort Collins, and Longmont) have embraced social equity as a value, but only Boulder, Denver, and Fort Collins have adopted specific social equity goals. In addition, only Fort Collins and Denver are actively attempting to measure their progress against social equity goals. Results from the study are listed in Table 22. In some cases, political stakeholders are openly hostile to the concept of social equity because it implies government favoritism of specific demographic groups. In cities where social equity was not a priority for political stakeholders there was little chance of finding performance measures. Even in cities that are supportive of social equity there are roadblocks to developing social equity performance measures.


Table 22. Summary of variable rankings regarding social equity influence and the extent of social equity performance measures (1 = to no extent; 5 = to a very great extent).

                    Support for social equity                     Measurement
                    Mayor or       Agency        Frontline        Social equity
                    City Council   leadership    workers          measures
    Arvada          3.5            4.5           3.5              1.0
    Boulder         3.5            4.0           2.5              1.5
    Denver          3.0            3.0           1.0              1.5
    Fort Collins    5.0            4.0           4.0              3.0
    Longmont        3.0            4.0           2.0              1.0

Two reasons were given for the lack of equity measures. First, in some cases, while there was general agreement that social equity is a worthwhile value, there was considerably less understanding as to what constitutes a social equity goal. Interviewees could point to affordable housing as promoting social equity, but were less clear on how to promote social equity in other areas such as transportation and recreation. This general lack of understanding of social equity created an environment where public managers found it difficult to identify quantitative measures of equity. Second, definitions of social equity vary and tend to be value based, which can confound empirical methods of analysis that depend on unambiguous problem definition.

Interview data suggests how interview questions regarding social equity performance measurement can be improved for future studies. There was a general lack of understanding as to what constitutes social equity among the ten city managers interviewed for this study. Interviewees tended to associate social equity with specific city programs or citizen


demographics. In Boulder, interviewees referenced affordable housing as a social-equity-centric initiative aimed at addressing income inequities. In Denver, interviewees referenced a city office that uses social equity performance measures to understand racial inequities. The tendency for interviewees to define social equity in terms of specific city programs or citizen demographics reveals several shortcomings of the interview instrument. Generic questions about social equity proved difficult for the interviewees to answer. The interview instrument could thus be improved by asking interviewees about programs or initiatives that are likely to utilize social equity measures. Does the Department of Transportation consider race or income when allocating their budget or when approving new programs?

Political stakeholder and agency leadership support for performance measurement are key factors driving the development of performance measures. The influence of frontline workers on performance measurement is less significant. While many of the high-level performance goals originate from City Council or the Mayor's Office, it is left to City departments to develop the actual quantitative measures. In so doing, measurement development is pushed to where there is greater domain expertise. The performance measurement structure thus consists of political stakeholders who give guidance by listing specific indicators and goals and departmental managers who are tasked with developing measures and delivering them for use in performance dashboards, budgeting documents, or levy requests. In some cases there is also a centralized performance measurement group that maintains a platform for disseminating performance measures.

Beyond the influence of stakeholders, two other factors were found to influence the development of performance measures: the processes surrounding the creation and review of


performance measures and the timescales of those processes. A number of different performance measurement review processes were found. In Boulder and Longmont, performance measures tend to be reviewed as part of revising City or departmental plans. In Denver and Fort Collins, performance measures are also reviewed as part of annual budgeting processes. Arvada and Fort Collins also have quarterly or semi-annual performance review meetings that serve as another process driving measurement variation.

Interviews also revealed that performance measures change at different rates depending on the city. The rate of change is largely a factor of the timescales of the performance management review processes. In Boulder and Longmont performance measures are reviewed as part of revising City plans, which may occur once every several years. In Arvada and Fort Collins performance measures are reviewed as part of quarterly or biannual performance reviews.

Interviewees mentioned several other factors that influence performance measurement. These variables should be considered with some skepticism as they are largely based on the perceptions of individuals. Nonetheless, they are worth noting as possible variables to consider in future studies. Some interviewees suggested that political values can influence the level of focus on social equity. Specifically, it was suggested that social equity can be a touchy subject with conservative policymakers. In addition, macroeconomic trends can influence the level of focus on performance measurement. During economic recessions cities tend to be less dynamic and are less likely to add new programs. As a result, there is less incentive to apply resources to measuring change or progress. Lastly, one interviewee suggested that cities with large universities may produce a more engaged and more informed citizenry. The citizens of these cities tend to be more data savvy and thus may encourage the use of


performance measures by City officials. The racial demographics of a city are notable by their absence from discussions regarding factors that influence the development of social equity performance measures. Specifically, there was no suggestion that cities with larger minority populations are more likely to be concerned with social equity.


CHAPTER VI

EXAMINING THE RELATIONSHIP BETWEEN STAKEHOLDER INFLUENCE AND PERFORMANCE MEASUREMENT VARIATION

Introduction

This study examines the impact of stakeholder influence on performance measurement development. The model shown in Figure 10 presents the hypothesized relationship between stakeholder influence (the independent variables) and measurement variation (the dependent variable). The hypothesized relationships are the ones developed in the theory and hypothesis section of the study. The model predicts that the Office of the Mayor or City Council will encourage the development of an expanded set of new measures, agency leadership will encourage the use of a reduced set of measures, and frontline workers will encourage the development of an expanded set of new measures.

Figure 10. The proposed theoretical model comparing stakeholder influence against measurement variation.

This chapter explores the hypotheses by comparing and contrasting the observed causal effect, the extent of the observed effect, and the relative uncertainty of that effect (Gerring, 2007), between the independent and dependent variables for each hypothesis. Empirical evidence in support of or against each hypothesis will be presented. Each


hypothesis will be tested by analyzing the close-ended interview data for quantitative evidence, and by reviewing the open-ended interview data for qualitative evidence.

Stakeholder Influence on Measurement Variation

The proposed theoretical model examines the impact of political and operational influence on measurement variation. Political influence on measurement variation was conceptualized as a factor measuring demand for performance measurement by the Office of the Mayor or City Council. Operational influence on performance measurement was conceptualized as a factor measuring demand for performance measurement by departmental leadership and frontline workers. The study hypotheses, derived in Chapter III, are as follows:

    H1: Political influence on performance measurement is positively related to measurement variation.
    H2: Agency leadership influence on performance measurement is negatively related to measurement variation.
    H3: Frontline worker influence on performance measurement is positively related to measurement variation.

At the city level, the principal claim of the first hypothesis (H1) is that cities with Mayor or City Council members who strongly support and encourage performance measurement will be more likely to adopt new measures. Conversely, the second hypothesis (H2) predicts that cities where departmental leaders show strong support for performance measurement will be less likely to adopt new measures. The third hypothesis (H3) predicts that cities where frontline workers show strong support for performance measurement will be more likely to adopt new measures.

T-test of Sample Means

Interviewees were asked close-ended Likert scale questions and open-ended probes regarding the influence of stakeholders on measurement variation. City employees were asked


several questions regarding stakeholder commitment to performance measurement and how much influence different stakeholders have over measurement variation (see Tables 5-7). The aggregated responses to the close-ended questions regarding the influence of different stakeholders on measurement variation are shown in Table 23. The number of responses is listed in the table. For example, the table shows that two interviewees responded that the Mayor or City Council influences the development of new performance measures to a small extent, and four responded that the Mayor or City Council influences measurement variation to a moderate extent. A visual examination of the data suggests that, in general, agency leadership influences the development of performance measures to a greater extent than the Mayor or City Council and frontline workers. It is less clear as to whether there is a difference between the influence of the Mayor or City Council and frontline workers.

Table 23. Interviewee responses to close-ended questions regarding the influence of different stakeholders on measurement variation (1 = to no extent; 5 = to a very great extent). The number of responses is listed in the table.

    Stakeholder             1    2    3    4    5
    Mayor / City Council    0    2    4    3    1
    Agency leadership       0    0    2    4    4
    Frontline workers       2    3    3    1    1

We next examine whether the sample means are significantly different for the three stakeholders. A difference of means test is used to determine whether the samples could have been drawn from the same population, or whether the sample means are so different that they could not have been drawn from the same population (Meier, Brudney, & Bohte, 2009). A two-sample t-test with assumptions of equal variance and unpaired data was used to test the samples. The results are shown in Table 24.
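As an illustration of the difference-of-means test just described, the sketch below shows how an unpaired, equal-variance two-sample t-test could be run. It uses Python's scipy rather than the Stata routines employed in the study, and it reuses the city-level ratings from Table 21 purely as stand-in data; the study's own test was run on the ten individual interview responses, so the resulting statistics will not reproduce Table 24 exactly.

    from scipy import stats

    # City-level influence ratings from Table 21 (Arvada, Boulder, Denver, Fort Collins, Longmont).
    council_ratings   = [4.6, 2.3, 4.0, 3.7, 2.8]
    frontline_ratings = [3.8, 3.3, 1.3, 2.0, 1.3]

    # Unpaired two-sample t-test assuming equal variances.
    t_stat, p_value = stats.ttest_ind(council_ratings, frontline_ratings, equal_var=True)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")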


The first row of Table 24 compares the sample means of the results regarding the influence of the Mayor or City Council versus frontline workers. Analysis reveals no significant difference between the means, implying that we cannot reject the null hypothesis; we thus conclude that it is likely that the Mayor or City Council and frontline workers do not have significantly different levels of influence. The second row compares the sample means of the results regarding the influence of agency leadership versus frontline workers. The t value of 3.51 is statistically significant at the 99% level, implying that we can reject the null that there is no difference in influence between the sample means and conclude that it is likely that agency leadership and frontline workers have different levels of influence. In this case, agency leadership tends to be rated 1.55 points higher on the five-point Likert scale. The third row compares the sample means of the results regarding the influence of the Mayor or City Council versus agency leadership. The t value of 3.14 is statistically significant at the 95% level, implying that we can reject the null and conclude that it is likely that political stakeholders and agency leadership have different levels of influence. In this case, agency leadership tends to be rated 0.73 points higher.

Table 24. Two-sample t-test examining responses regarding the influence of different stakeholders on the development of new performance measures (N = 10).

    Stakeholder 1            Stakeholder 2        t      df   Significance   Mean difference
    Mayor or City Council    Frontline worker     1.68   8    0.1313         0.82
    Agency leadership        Frontline worker     3.51   8    0.008          1.55
    Mayor or City Council    Agency leadership    3.14   8    0.013          0.73

Factor Analysis

In this section Cronbach's alpha (Bland & Altman, 1997) is used to measure the internal consistency of the model constructs. A factor analysis is then used to determine whether


the close-ended questions load significantly onto their respective variables (Mayor or City Council influence, agency leadership influence, or frontline worker influence). This analysis used Stata version 12 (StataCorp, 2011) to test the constructs and the relationship between the constructs.

Table 25 reports the alpha measure for Mayor or City Council influence, agency leadership influence, and frontline worker influence. The alpha coefficients for the three constructs are .91, .94, and .93 respectively, suggesting that the items have relatively high internal consistency (a reliability coefficient of .70 or higher is considered acceptable). Factor loadings for the close-ended questions are reported in the right column of Table 25. Loadings for Mayor or City Council influence range from .50 to .96. Loadings for agency leadership influence range from .92 to .99. Loadings for frontline workers range from .72 to 1.0. These factor loadings, combined with the alpha values, indicate the model constructs are relatively strong.

Table 25. Cronbach's alpha and factor loadings for the close-ended questions comprising the model constructs.

    Mayor or City Council influence (α = .91)
        The office of the Mayor (or City Council) demonstrates a strong commitment to performance measurement.    .90
        The office of the Mayor (or City Council) uses performance measures to guide decision making.             .96
        The office of the Mayor (or City Council) is a source of new performance measures.                        .84
        The office of the Mayor (or council) influences agency managers to develop new measures.                  .50
        The office of the Mayor (or council) evaluates and eliminates existing performance measures.              .85
        The office of the Mayor (or council) influences agency managers to evaluate and eliminate existing
        measures.                                                                                                 .93

    Agency leadership influence (α = .94)
        My agency's top leadership demonstrates a strong commitment to performance measurement.                   .93
        My agency's top leadership is a source of new performance measures.                                       .89
        My agency's top leadership evaluates and eliminates existing performance measures.                        .92

    Frontline worker influence (α = .93)
        Frontline workers demonstrate a strong commitment to performance measurement.                             .72
        Frontline workers are a source of new performance measures.                                               1.00
        Frontline workers evaluate and eliminate existing performance measures.                                   1.00
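Although the study computed these reliability statistics in Stata, Cronbach's alpha itself is straightforward to reproduce. The sketch below implements the standard formula (number of items, the item variances, and the variance of the summed scale) in Python; the response matrix is hypothetical and is included only to illustrate the calculation behind the alpha values reported in Table 25.

    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
        items = np.asarray(item_scores, dtype=float)
        k = items.shape[1]                              # number of items in the scale
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses: ten interviewees answering the three agency leadership items.
    leadership_items = [
        [5, 5, 4], [4, 4, 4], [5, 4, 5], [3, 3, 3], [4, 5, 4],
        [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 4, 3], [5, 5, 4],
    ]
    print(round(cronbach_alpha(leadership_items), 2))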


Correlation Analysis

The previous section examined city employee perceptions of stakeholder influence on measurement variation. The data suggests that the Mayor or City Council and frontline workers do not have statistically significant differences in their influence on measurement variation, though both of these stakeholders have less influence over measurement variation than agency leadership. In this section we compare city-level data to further test the impact of stakeholder influence on measurement variation.


A correlation analysis is used to examine the relationship between stakeholder influence and measurement variation. Table 26 shows the pairwise correlation coefficients for the variables of interest, with the significance level given in parentheses below the coefficients. Mayor or City Council influence has a positive, statistically significant correlation with measurement variation. Similarly, agency leadership influence also has a positive, statistically significant correlation with measurement variation. For frontline workers, there is no evidence of a relationship between support for performance measurement and measurement variation.

Table 26. Pairwise correlation coefficients examining the relationship between stakeholder influence and measurement variation. Significance levels are given in parentheses.

    Variable                               (1)        (2)        (3)        (4)
    (1) Measurement variation              1.0000
    (2) Mayor or City Council influence    0.8409     1.0000
                                           (0.0045)
    (3) Agency leadership influence        0.9031     0.7270     1.0000
                                           (0.0008)   (0.0265)
    (4) Frontline worker influence         0.3787     0.0796     0.1819     1.0000
                                           (0.3148)   (0.8386)   (0.6396)

The correlation analysis reveals two results of note. First, agency leadership influence has a positive, statistically significant correlation with measurement variation, contrary to the second hypothesis, which predicted a negative relationship between agency leadership influence and measurement variation. Second, Mayor or City Council influence is positively correlated with agency leadership influence, a result that will need to be considered when including the variables in a regression analysis, the subject of the next section.
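A pairwise correlation matrix of this kind can be produced with a short script. The sketch below uses Python's scipy rather than the Stata command used in the study, and it operates on the city-level ratings from Table 21 together with the measurement variation scores as stand-in data; because the study's coefficients were estimated from the underlying interview responses, the output is illustrative and will not match Table 26 exactly.

    from itertools import combinations
    from scipy import stats

    # City-level ratings (Arvada, Boulder, Denver, Fort Collins, Longmont).
    data = {
        "variation":  [5.0, 2.5, 3.5, 4.0, 2.0],
        "political":  [4.6, 2.3, 4.0, 3.7, 2.8],
        "leadership": [4.8, 3.7, 4.5, 4.7, 2.7],
        "frontline":  [3.8, 3.3, 1.3, 2.0, 1.3],
    }

    # Pearson correlation and two-tailed p-value for each pair of variables.
    for a, b in combinations(data, 2):
        r, p = stats.pearsonr(data[a], data[b])
        print(f"{a} vs {b}: r = {r:.4f} (p = {p:.4f})")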


Regression Analysis

The relationship between stakeholders and measurement variation is shown in Equation 1:

    Variation = β0 + β1(Political) + β2(Leadership) + β3(Frontline)    (1)

    Variation  – a measure of the amount of measurement variation found in the city.
    Political  – a measure of support for performance measurement by the Mayor or City Council.
    Leadership – a measure of support for performance measurement by departmental leadership.
    Frontline  – a measure of support for performance measurement by frontline workers.

All variables are Likert scale values from one to five using the following scale: 1 = To no extent; 2 = To a small extent; 3 = To a moderate extent; 4 = To a great extent; 5 = To a very great extent.

An ordinary least squares (OLS) analysis was used to estimate the relationship between the independent and dependent variables. While there is some debate regarding the validity of treating ordinal data (converted to numbers) as interval data (Carifio & Perla, 2008), there is some academic consensus regarding the validity of using parametric tests to analyze Likert scale responses (Sullivan & Artino, 2013). The results are shown in Table 27. Four different regressions were run to test the model. In columns (1) and (2) the Political and Leadership variables were independently tested against Variation. In both cases the independent variables show a positive and statistically significant relationship to measurement variation.


Table 27. Ordinary least squares analysis examining the impact of stakeholder influence on measurement variation. Coefficients are shown with corresponding significance. Standard errors are shown in parentheses.

    Variables         (1) OLS     (2) OLS     (3) OLS     (4) OLS
    Political         1.01***                 0.47        0.51**
                      (0.25)                  (0.24)      (0.19)
    Leadership                    1.22        0.83**      0.75
                                  (0.22)      (0.27)      (0.21)
    Frontline                                             0.25*
                                                          (0.11)
    Constant          0.04        1.53        1.54        -1.92**
                      (0.85)      (0.9)       (0.76)      (0.61)
    Adj. R-squared    .66         .78         .87         .91

    *** p<0.01, ** p<0.05, * p<0.10

The third column (3) tests both the Political and Leadership variables together. The slope coefficients show low statistical significance while the adjusted R-squared value is high. This suggests that there may be some multicollinearity between Political and Leadership. We test this by examining the correlations in Table 26 (above) and see there is some correlation (though not perfect correlation) between the variables. This presents a problem, as OLS models assume there is no perfect linear relationship between any of the independent variables. On the other hand, the standard errors for the variables in column (3) are relatively low, indicating that multicollinearity may not be a concern. In addition, a post-regression estimation of variance inflation factors for the variables all returned values of less than three, which is well below the rule-of-thumb target value of ten (see Rethemeyer, 2003). While this does not guarantee that there is no multicollinearity, it does lessen our concern.
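The model in Equation 1 and the variance inflation factor check described above can be estimated with standard regression libraries. The sketch below uses Python's statsmodels rather than Stata, and it again reuses the city-level ratings from Table 21 as stand-in data, so the coefficients it produces are illustrative and will not reproduce Table 27.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Stand-in city-level ratings (Arvada, Boulder, Denver, Fort Collins, Longmont).
    df = pd.DataFrame({
        "variation":  [5.0, 2.5, 3.5, 4.0, 2.0],
        "political":  [4.6, 2.3, 4.0, 3.7, 2.8],
        "leadership": [4.8, 3.7, 4.5, 4.7, 2.7],
        "frontline":  [3.8, 3.3, 1.3, 2.0, 1.3],
    })

    # The full model: all three stakeholder variables plus a constant.
    X = sm.add_constant(df[["political", "leadership", "frontline"]])
    model = sm.OLS(df["variation"], X).fit()
    print(model.params)          # estimated coefficients
    print(model.rsquared_adj)    # adjusted R-squared

    # Post-regression multicollinearity check: one VIF per stakeholder variable.
    for i, name in enumerate(X.columns):
        if name == "const":
            continue
        print(name, round(variance_inflation_factor(X.values, i), 2))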


Column (4) shows the regression results after including all three independent variables. The Political variable is positive and statistically significant, indicating that political stakeholders have a positive impact on measurement variation. For every one-point increase in political support for performance measurement we would expect a corresponding 0.51-point increase in measurement variation. The Leadership variable is also positive and statistically significant, indicating that agency leadership has a positive impact on measurement variation. For every one-point increase in agency leadership support for performance measurement we would expect a corresponding 0.75-point increase in measurement variation. The Frontline variable is positive and statistically significant, indicating that frontline workers have a positive impact on measurement variation. For every one-point increase in frontline worker support for performance measurement we would expect a corresponding 0.25-point increase in measurement variation. Equation 2 shows the estimated relationship between stakeholder influence and measurement variation:

    Variation = 0.51(Political) + 0.75(Leadership) + 0.25(Frontline) - 1.92    (2)

Interpretation of this formula is problematic due to the ordinal nature of the variables. Nonetheless, the formula presents some insight. It suggests that agency leadership has the greatest influence on measurement variation, followed by the Mayor or City Council and then frontline workers. In addition, only relatively large values (values of 4 and 5) for both Political and Leadership result in a value of Variation greater than 3, where a value of three corresponds to measurement variation to a moderate extent. If either Political or Leadership is rated a 1 (to no extent), then even ratings of 5 (to a very great extent) for the other variables will only result in moderate measurement variation. We would thus not expect to see much measurement variation without significant support from both political stakeholders and agency leadership.
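A quick numerical check makes the interpretation above concrete. The short sketch below evaluates Equation 2 (reading the intercept as -1.92) for a few illustrative rating combinations; the combinations themselves are hypothetical.

    def predicted_variation(political, leadership, frontline):
        # Equation 2: Variation = 0.51(Political) + 0.75(Leadership) + 0.25(Frontline) - 1.92
        return 0.51 * political + 0.75 * leadership + 0.25 * frontline - 1.92

    print(predicted_variation(4, 4, 3))   # 3.87 -> above the moderate level of 3
    print(predicted_variation(3, 3, 3))   # 2.61 -> below moderate
    print(predicted_variation(5, 1, 5))   # 2.63 -> weak leadership support keeps variation moderate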


Interview Data

The previous sections analyzed close-ended interview data to determine how different stakeholders influence measurement variation. The data suggests that influence from political stakeholders, agency leadership, and frontline workers can positively impact measurement variation to different degrees. A review of responses to the open-ended interview questions by city employees, summarized below, reinforces these findings.

Political stakeholders

City employees perceive the Office of the Mayor or City Council as key stakeholders that influence the development of new performance measures (City Interviews, 2017). While there is evidence that political stakeholders are often involved in the development of new measures, interview data suggests that there may be less involvement when it comes to reviewing and updating existing measures. In some cases political stakeholders do periodic reviews of quantitative measures of progress against city strategic objectives. In some of these reviews, it becomes evident to political stakeholders that the measure being discussed is not an appropriate indicator for the political objective. This can drive public managers to revise their measures. In general, interviewees agreed that it was not feasible for political stakeholders to do thorough reviews of departmental performance measures. In cities where this had been attempted, political stakeholders pushed back. This dynamic was noted by a departmental manager:

    At the top level of the [performance measurement] pyramid we have 38 metrics, at the middle layer we have 58 strategic objectives with an average of three measures per objective, which is about 150 measures. We are still struggling with what we want to do with Council. A year ago in August we tried to have a results review and honestly Council glazed over very quickly. It was too much detail. (City Interviews, 2017)

Lastly, in cities where change processes centered around periodic performance reviews, the timeline for developing measures tended to be much shorter. Depending on the city,
performance reviews might be quarterly, biannually, or yearly. Political stakeholders often used these performance review meetings to discuss new priorities and request new performance indicators. In these cases, it was up to departmental representatives to propose a new measure during the next performance review, a turnaround that could be as short as a few months. It was thus necessary to track performance measures outside of departmental plans. In most cases cities with performance review processes also tended to have performance dashboards used for tracking performance measures.

Agency leadership

City employees emphasize two key points regarding the influence of agency leadership on measurement variation. First, there is agreement that agency leaders are key stakeholders with regard to measurement development. At one end of the spectrum, Boulder and Denver view performance measurement as a departmental function. In these cities political stakeholders take more of a hands-off approach to performance measurement and limit their involvement to setting the political agenda. At the other end of the spectrum, in Arvada and Fort Collins, agency buy-in on performance measurement is seen as necessary, but political oversight of the performance measurement process is taken more seriously. In these cities there tends to be an emphasis on cultivating interaction between political stakeholders and agency leadership.

Second, interviewees stressed that performance measurement processes can vary greatly across departments. Several reasons were given for this variation. Political support for performance measurement was seen as a key driver of departmental measures. In cases where political stakeholders supported performance measurement and agency leadership was less enthusiastic, the capturing of performance measures might occur in the City budget, City plan, or other city-level documents. But, without buy-in from departmental leadership these tended to be
less quantitative performance measures. One interviewee suggested that in city departments with weak processes surrounding performance measurement, it was more likely that political stakeholders would influence what was measured. This suggests that the development of departmental measures varies with agency leadership's familiarity with and commitment to performance measurement. As one interviewee described:

    [He is an] engineer. So he is all about numbers and all the rest of us are about numbers. He is very much a numbers guy. So, the people he has hired reflect that. There is a very strong orientation to [performance measurement]. (City Interviews, 2017)

As mentioned previously, the two main processes underlying the development of new measures are the revision of departmental plans and performance reviews with political stakeholders. As one interviewee put it:

    Every time we do an update to the transportation master plan we identify focus areas. The Council approves focus areas. And oftentimes there are things that come out of those focus areas that are reflected in the measures. (City Interviews, 2017)

If neither political stakeholders nor agency leadership supported performance measurement, it was unlikely that performance measures would make it into departmental plans. Lastly, the development of new performance measures was also driven by the need to justify the budget requests for the department. It was easier to justify the budget when there were quantitative measures of program progress.

Regarding the review of existing measures, interviewees generally agreed that the majority of performance measurement evaluation occurred at the agency level; it was rare for political stakeholders to be directly involved in discussing the usefulness of specific measures. Review and evaluation of measures tended to be associated with updates to departmental plans:

    We think about [the performance measures] every two years when we do a transportation report, but we certainly think about what are the better ways to measure this objective. (City Interviews, 2017)
Updates happened more frequently in departments with agency leadership that strongly supported the practice of performance measurement. Lastly, as the performance measurement processes of city departments mature, agency managers tend to concentrate on a smaller set of more meaningful indicators. One interviewee described the situation where, upon instituting a departmental performance measurement program, a large number of measures were created. Over time, the agencies found it difficult to collect data and report on such a large number of measures. In addition, it was difficult to retain the interest of political stakeholders when reporting on such a large number of measures. Over time these agencies undertook a process of winnowing out the less useful measures.

Frontline workers

Regarding the influence of frontline workers on measurement variation, interview data suggests that there is less participation in measurement development the further you go down the chain of command. A manager in charge of performance management noted:

    [The development of existing performance measures] is at the agency leadership level. The street level workers are not creating it. [The front line workers] are primarily responsible for creating the transactional data that is used to create the performance measure. (City Interviews, 2017)

Frontline workers tend to not be aware of the high-level political objectives and thus do not know how their work ties to those objectives. Therefore, it is difficult for them to be directly involved in the development of measures; it is more likely they will only collect data to support reporting against the measures. In addition, interview data suggests that frontline workers are too far removed from political and agency objectives to be much help in evaluating the usefulness of specific performance measures. The exceptions are cities such as Arvada and Fort Collins, with cross-departmental teams focusing on performance measurement:
    We also have a team called The Deep Divers. That team is comprised of our analytical [staff]. They have a survey they fill out for every performance measure that looks at: Is there a data source? Is the source reliable? Is the department analyzing the data correctly? Are they reporting it correctly? (City Interviews, 2017)

Lastly, several interviewees mentioned that the addition of a new program or service to a department triggered measurement development processes. In these cases frontline workers, mainly program managers, served as key stakeholders supporting the development of new measures.

Performance measurement processes

Chapter III described measurement development in terms of products (measurement variation) and processes (performance measurement processes), with the processes being mechanisms that connect different stakeholders and facilitate measurement development. These products and processes are interdependent: you cannot have one without the other. In other words, you cannot have measurement variation without some kind of process to facilitate the variation, and if there is no measurement variation there is no need for measurement processes.

Interviewees regularly referenced different performance measurement activities, or processes. Certain activities such as measurement review meetings or budgeting cycles were often mentioned as drivers of measurement variation. This suggests that the concept of measurement variation overlaps with the concept of performance measurement processes for the interviewees, a validation of the product/process deconstruction of the measurement development concept described in Chapter III. This section reviews the findings from the interviews regarding the different performance measurement processes. An understanding of these processes lends some insight as to how stakeholders utilize processes to bring about measurement variation.
Interview data reveals three performance measurement processes that drive the development of new measures and the review and updating of existing measures.

The first performance measurement process that drives measurement variation revolves around updating city-level and department-level strategic plans. This process, labeled as a departmental planning process, is shown in Figure 11. Political stakeholders initiate a performance planning process by introducing new goals and objectives into their city strategic plans, which are updated every two to ten years. These city-level goals do not generally have quantitative performance indicators associated with them. The performance planning process continues at the departmental level when agency representatives include the updated goals into their departmental strategic plans. There is wide variation in the revision cycle for departmental plans, but it tends to happen every two to five years. If the department has a strong commitment to performance measurement, or there is an external mandate to use performance measures in departmental plans, then the new departmental strategic plan might include performance measures which are tied to the city-level strategic goals. It is during the process of revising the departmental strategic plan that public managers are likely to undertake the development of new measures or review and eliminate existing measures.

The timeline for developing measures tends to be longer in cities using a departmental planning process because city and departmental plans are revised infrequently. The timeline for introducing new performance measures is dependent on both the revision cycle of the city plan and the revision cycle of the departmental plan. In the case of social equity, this means that social equity goals would first have to be introduced into the city plan. Only after that occurs
would the social equity goals, and the associated social equity performance measures, be introduced into the departmental strategic plan.

Figure 11. The departmental planning process.

The second process wherein political stakeholders influence the development of performance measures involves the tracking of performance goals and measures in conjunction with the City's budget cycle. This budget review process is shown in Figure 12. In cities utilizing budget review processes, yearly budget cycles serve as the main driver of reviewing and reporting on performance measures. In these cities performance measures are tracked in budget documents, and the performance measures serve as data points for political stakeholders when allocating budget dollars.

Figure 12. The budget review process.

The third process wherein political stakeholders influence the development of performance measures involves goal setting in conjunction with periodic performance review meetings. This performance review process is shown in Figure 13. Some cities have quarterly or semi-annual performance review meetings between the Office of the Mayor or City Council and representatives from city agencies. During performance review meetings, agency representatives
report on the progress of their programs by collecting measurement data in spreadsheets or by uploading it to online performance measurement dashboards. Political stakeholders use these meetings to assess agency performance and introduce new priorities or suggest new areas of focus. Timelines for adding new performance measures or updating existing measures are accelerated in cities that have instituted performance review processes. For example, in one city, the City Council members would often introduce new strategic goals in the performance review meetings. It was then up to the departmental managers to come up with an appropriate measure of performance for the goal. The measure would then be reviewed and adopted by City Council in subsequent performance review meetings.

Figure 13. The performance review process.

Table 28 summarizes three important characteristics of the performance measurement review processes from the case cities. First, the timescale for adding or updating performance measures is different across the cities, and this difference is largely related to the structure of the performance review processes. Table 28 shows that the timescale for adding or updating performance measures is shortest in cities using performance review processes and is longest in cities using departmental planning processes. In addition, the level of interaction between political and operational stakeholders is highest in cities using performance review processes and lowest in cities using departmental planning processes.
Levels of measurement variation, measured by the close-ended interview questions, also vary across the different performance measurement processes. Higher levels of measurement variation are found in cities with performance measurement processes characterized by shorter timescales for measurement review and higher levels of stakeholder interaction. Lower levels of measurement variation are found in cities with processes characterized by longer timescales for performance reviews and lower levels of stakeholder interaction.

Table 28. Timescales for measurement reviews and the level of interaction between stakeholders for different performance measurement processes.

                                               Departmental          Budget            Performance
                                               planning process      review process    review process
  Timescale of performance measurement
  review processes                             2-10 years            Yearly            Quarterly or semi-annually
  Level of interaction between political
  and operational stakeholders                 Low                   Medium            High
  Average level of measurement variation
  (from close-ended questions)                 2.3                   3.5               4.5

These findings suggest that political stakeholders who are concerned with performance measurement are more likely to create processes wherein operational performance measurement data is surfaced via performance review meetings, dashboards, or other reporting tools. In cities where political stakeholders are less interested in performance measurement, it is likely that responsibility for developing or reviewing performance measures will be pushed to the agencies or the budgeting department. But care must be taken when drawing conclusions about causal direction. It is also possible that cities with strong performance measurement processes and infrastructure push political stakeholders to become more engaged with performance measurement. Lastly, interview data suggests that stakeholder commitment to performance measurement drives the development of stronger performance measurement processes, but the
reverse is not true. In no cases were strong performance measurement processes sustained in the face of disinterested political and operational stakeholders. In some city departments performance measures developed under previous agency leaders were retained over many years in departmental plans. As such, the measures themselves sometimes showed a certain amount of stasis. A plausible interpretation, though, is that the stasis was due to a lack of formal measurement review processes.

Stakeholder Influence on Social Equity Measurement Variation

Interviewees were also asked close-ended questions and open-ended probes regarding the influence of stakeholders on social equity measurement development. In general, there was a distinct lack of emphasis on the development of social equity measures (City Interviews, 2017). While some city employees might be aware of the need for social equity measures, lack of support from agency leadership and City Council tended to put any social equity initiatives at the back of the queue. Therefore, the information gathered regarding social equity measures must be considered incomplete, and care must be taken before using this data to make general claims regarding social equity measurement development. Nonetheless, some insight may be drawn from the data.

The responses to the close-ended questions regarding the influence of different stakeholders regarding social equity are shown in Table 29. This data does not reflect the influence of stakeholders over the development of social equity measures; rather, it reflects a general awareness of social equity. Again, very few cities actually had quantitative measures of social equity, so questions regarding stakeholder influence over social equity measures were disregarded. The number of responses regarding social equity influence are listed in the table. For example, the table shows that one interviewee responded that the Mayor or City Council is
committed to social equity "to a very great extent," while three responded that the Mayor or City Council is committed to social equity "to a great extent." A visual examination of the data suggests that, in general, the Mayor or City Council and agency leadership display a strong commitment to social equity while frontline workers are less aware of social equity.

Table 29. Interviewee responses to close-ended questions regarding the influence of stakeholders regarding social equity. The number of responses are listed in the table.

  Commitment to social equity
  Stakeholder              1 (To no    2 (To a small  3 (To a moderate  4 (To a great  5 (To a very
                           extent)     extent)        extent)           extent)        great extent)
  Mayor / City Council     0           0              6                 3              1
  Agency leadership        0           0              3                 5              2
  Frontline workers        2           3              2                 1              1

Interpreting this data is difficult. Given the apparent support for social equity by the Mayor or City Council and agency leadership, one would expect there to be at least some efforts at developing quantitative measures of social equity. Yet, of the five cities involved in the study, quantitative social equity measures were only found in Fort Collins. Two cities were just starting the process of developing social equity measures, while the final two cities had no plans to develop measures. This lack of social equity measurement variation constrained the ability to do a comparative analysis examining the impact of stakeholder influence on the development of social equity measures across the case cities.

Key Findings

The interview data suggests that Mayor or City Council support for performance measurement is positively related to measurement variation. Specifically, while political
stakeholders are not themselves a source of new measures, they are a key factor that influences public managers to develop new measures.

The study found that agency leadership support for performance measurement is also positively related to measurement variation. This is somewhat surprising, as the expectation, derived from the literature, was that agency leaders would try to minimize the number of measures they are responsible for. Adding departmental measures generally leads to additional work to collect data and increases the level of accountability. It is possible to interpret this finding as simply pointing out that agency leadership support for performance measurement is a necessary ingredient in the development of measures. It does not necessarily mean that agency leaders encourage public managers to develop new measures. Rather, without their support there would likely not be any new measures developed.

Lastly, the study found that frontline workers do not exert much influence regarding performance measurement variation. This, too, is surprising, as current scholarship suggests that involvement by employees is an important ingredient in the adoption and use of performance measures (Kroll, 2015). This study suggests that the role of frontline workers may be more muted and that, at a city level, support by agency leadership may be more consequential to the development of performance measurement within departments. In other words, frontline workers do not exert much influence on performance measurement decisions.
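For readers who wish to see how the close-ended portion of this analysis could be operationalized, the sketch below shows one way to estimate an OLS model of the kind summarized in this chapter from 1-5 Likert responses, using Python and statsmodels. It is a minimal illustration rather than the study's actual analysis code: the data frame is hypothetical and the variable names are placeholders.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical interviewee ratings (1 = no influence, 5 = very great influence);
    # these values are placeholders, not the study's data.
    responses = pd.DataFrame({
        "political":  [2, 4, 5, 3, 1, 4, 5, 2],
        "leadership": [3, 4, 5, 4, 2, 5, 5, 3],
        "frontline":  [1, 2, 3, 2, 1, 3, 4, 2],
        "variation":  [2, 4, 5, 3, 1, 4, 5, 3],   # dependent variable
    })

    # Ordinary least squares with the three stakeholder ratings as predictors,
    # treating the ordinal Likert scores as interval data (a common simplification).
    model = smf.ols("variation ~ political + leadership + frontline", data=responses).fit()
    print(model.params)    # fitted coefficients, analogous to Equation 2
    print(model.pvalues)   # significance levels (cf. ** p < 0.05, * p < 0.10)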
CHAPTER VII

CONCLUSION

Performance measurement has been presented as a useful management tool since the early twentieth century (Streib & Poister, 1999), and in recent decades the concept of performance has become central to public management reform (Moynihan, 2008). As a result, the modern public manager is concerned with performance (Meier, Favero, & Zhu, 2015) and is trained in performance measurement (Hatry, 2014). This research was motivated by the observation that while a growing literature has documented the impact of management on performance (Boyne, 2003; Sanger, 2013) and the factors that encourage and impede the use of performance measures by managers (Moynihan & Hawes, 2012; Kroll, 2015), there is substantially less focus on the factors that drive public managers to search out new performance measures and analyze and improve existing measures.

This study argues that political stakeholders influence managers to develop performance measures. When these stakeholders exert their influence, managers are more likely to develop measures that support political and organizational goals. When these stakeholders are absent, managers may have less motivation to develop measures or may tend to rely on existing measures. The study was informed by public administration scholarship regarding performance information adoption and use (e.g., de Lancer Julnes, 1999; Berman, 2002; Moynihan, 2008) and the political nature of performance measurement (e.g., Moynihan, 2008; Radin, 2006; de Lancer Julnes & Holzer, 2001). It also utilized scholarship examining the relationship between goal setting and management exploration and innovation (e.g., Mom et al., 2015; Lipsky, 2010; Rainey & Jung, 2015).
The study employed a comparative case analysis research design to test the theoretical propositions regarding the impact of stakeholders on measurement development. The study examined the performance measurement and social equity programs of five Colorado cities: Arvada, Boulder, Denver, Fort Collins, and Longmont. Document analysis was conducted to develop an understanding of the roles of operational and political actors regarding performance measurement in general and the development of social equity goals and measures in particular. City strategic plans, social equity plans, budgets, and departmental plans were coded to identify performance measurement processes and social equity measures. In addition, semi-structured interviews were conducted with social equity administrators, performance measurement administrators, and department-level planners.

This conclusion summarizes the central findings of the study, presents an answer to the research question, and closes with an examination of the study's limitations, its contributions to the literature, and directions for future research.

Key Findings

This study sought to better understand the performance measurement environment of public managers through the question, How do political and operational stakeholders impact the development of performance measures in public organizations? The answer to this question is summarized as follows: the findings from this research indicate that in the context of local city government, there are three key elements that drive measurement variation: political support for performance measurement, agency leadership support for performance measurement, and the structure of the performance measurement processes.

The political stakeholders of interest for the study are the city-level elected officials, including the Office of the Mayor or the City Council. The operational stakeholders of interest
are agency leadership and departmental frontline workers. Measurement variation is a variable that measures the addition of new measures or the change or elimination of existing measures. City comprehensive plans and budgets were analyzed for references to stakeholder influence and measurement variation. In addition, city employees were interviewed and were asked both Likert-scale questions and open-ended questions: Does the Office of the Mayor or City Council demonstrate a strong commitment to performance measurement? Do they use performance measures to guide decision making? Were they aware of any performance measurement processes focused on developing new measures, and could they specifically identify new measures?

Chapter V explored the case cities. It analyzed city documents and interview data to identify the levels of political and operational stakeholder influence (the independent variables) and levels of measurement variation (the dependent variable). The analysis found variation in stakeholder influence and measurement variation across the case cities. Most cities showed at least a small amount of political stakeholder influence over performance measurement. Two of the cities showed a great amount of influence from political stakeholders. Agency leaders showed the greatest amount of influence out of the three stakeholders, while frontline workers showed the least amount of influence. The analysis found almost no evidence of quantitative social equity measures. While all five cities claim to support social equity as a value in their city and departmental plans, this support has translated into quantitative measures only for Fort Collins and, to a lesser degree, Denver.

Chapter VI analyzed the Likert-scale data to determine the relationship between the independent and dependent variables. Analysis of the data, summarized in Figure 14, shows that
agency leadership has the greatest influence on measurement variation, followed by the Mayor or City Council and then frontline workers.

Figure 14. Results of OLS regression (** p < 0.05, * p < 0.10).

The study's findings regarding stakeholder influence on performance measurement are presented in Table 30 alongside the relevant hypotheses. The study found that political support for performance measurement is positively related to measurement variation (H1). Specifically, findings indicate that while political stakeholders are not themselves a source of new measures, they are a key factor that influences public managers to develop new measures. This finding reinforces what would be expected from existing theory, as summarized in Chapter III.

The study predicted that support for performance measurement by agency leadership would negatively impact measurement variation (H2). Study data suggests the opposite: agency leadership support for performance measurement is positively related to measurement variation. This finding is somewhat difficult to interpret. The finding makes sense in that agency leadership support for performance measurement is probably a necessary ingredient in the development of measures. In other words, without the support of agency leadership there would likely be very little performance measurement activity in general, and subsequently there would be no measurement development activity. But, it is also likely that the interview instrument did not accurately capture the concept of agency leadership influence as understood by this researcher and as described in the research
design. The goal of the study was to understand how a measurement development environment dominated by agency leadership might impact measurement variation. In other words, do agency-leadership-imposed (versus politically imposed) performance measurement processes result in a lower rate of measurement variation and a reduced set of quantitative measures? Instead, the close-ended question captured agency leadership's general involvement with performance measurement, and it is possible to be involved with performance measurement without being the primary stakeholder driving performance measurement. A better close-ended question would have asked interviewees to rank stakeholder influence, for example: "Agency leadership controls the development of performance measures." The relationship between agency leadership influence and measurement variation thus requires more research before conclusions can be generalized.

Lastly, the study found that frontline worker support for performance measurement is positively related to measurement variation (H3), though this influence is muted. This finding partially validates what would be expected from existing theory, as summarized in Chapter III. But the relationship between frontline workers and measurement variation also requires additional unpacking. A study examining the processes that frontline workers put in place absent influence from agency leadership would be informative. Interview data suggests that frontline workers are the least informed about performance measurement goals and processes within city departments, but they are also a primary source for evaluating the viability of operational performance measures. Are they a passive source of input to the formal performance measurement processes developed by political stakeholders and agency leaders, or do they adopt their own performance measurement routines outside of these formal processes?
Table 30. Summary of study findings. The study's hypotheses are listed and each hypothesis is labeled as supported or unsupported by study data. In some cases, opposite effects were found (e.g., H2).

  Hypothesis                                                                     Finding
  H1: Political influence on performance measurement is positively
      related to measurement variation.                                          Supported
  H2: Agency leadership influence on performance measurement is negatively
      related to measurement variation.                                          Unsupported
  H3: Frontline worker influence on performance measurement is positively
      related to measurement variation.                                          Partially supported

This study also examined the performance measurement processes and their effect on measurement variation in relation to stakeholder influence. These are the processes that drive performance measurement within public organizations by facilitating the relationship between stakeholder influence and measurement variation. Two important characteristics of performance measurement processes emerged from the interview data.

First, the timescale of performance measurement reviews plays an important role in the development of performance measures. Cities with yearly (or longer) performance measurement review cycles tended to have fewer, less quantitative performance measures, while cities with quarterly performance measurement review cycles tended to have a greater number of quantitative performance measures.

Second, the level of interaction between political and operational stakeholders during performance measurement reviews also affects the development of performance measures. The performance measurement review processes of cities are characterized by different levels of interaction between political and operational stakeholders. They are also characterized by different levels of information exchange between the stakeholders. Interview data suggests that greater levels of interaction between stakeholders contributed to the development of performance measures.
These two findings regarding the impact of performance measurement processes on measurement variation are included as propositions that warrant further examination. The propositions are listed in Table 31. The first proposition (P1) regards the impact of shorter performance measurement review cycles on measurement development. The second proposition (P2) regards how greater levels of interaction between political and operational stakeholders impact measurement development. These propositions identify promising areas for future research.

Table 31. Propositions regarding the impact of performance measurement review processes on the development of measures.

  P1: Shorter performance measurement review cycles are positively related to measurement variation.
  P2: Processes that include political stakeholders and agency leadership are positively related to measurement variation.

What conclusions can we draw from these findings? The data suggests that, at the city level, political stakeholders who take a strong interest in performance measurement encourage the development of performance measures. Nonetheless, while political stakeholders may encourage performance measurement in city departments, without support from agency leadership there will be minimal impact on measurement variation. Of greater importance may be the processes that are put in place as the result of political and operational commitment to performance measurement. The findings from this study suggest that political and operational stakeholders influence the structure and timing of performance review processes, which, in turn, influences measurement variation. Performance review processes can affect how well political goals are disseminated to operational managers and also affect how informed political stakeholders are regarding the usefulness of existing performance measures. The structure of these processes thus affects the measurement variation decisions of public managers.
Research Limitations

The research findings should be considered with an appreciation of several limitations regarding the study data, methods, and context. Specifically, the study is susceptible to threats to construct validity and external validity.

Construct validity refers, in part, to the validity of the operationalization of key concepts. Threats to construct validity arose from reliance on perceptual data collected from interviewees. Some interviewees may have presented a biased interpretation of their organizational performance management and social equity activities. For example, one source of bias may have come from a predisposition in favor of (or a skepticism of) the validity of performance measurement as a worthwhile practice. Managers who were strong believers in performance measurement in general may have been more predisposed to notice the development and use of social equity measures, while managers who were skeptical of performance measurement may have downplayed or ignored the presence of equity measures.

The second threat to construct validity comes from the nature of the interview instrument. Some of the questions were adapted from the Larson et al. (2017) study regarding social equity performance measures. Nonetheless, the majority of interview questions have not been previously validated. This presents the possibility that answers to the questions may not present valid measures of the constructs in question. For example, interviewees were asked to rate (from 1 to 5) the statement "The Office of the Mayor (or City Council) is a source of new performance measures." It was found that some interviewees interpreted this question to mean that the Mayor or City Council develops the actual performance measures, while other interviewees interpreted it to mean the Mayor or City Council directs others to develop the performance measures.
External validity is concerned with the issue of whether and to what extent results obtained in this study can be generalized to hold true in settings, time periods, and populations different from the ones used in the study (Meier et al., 2009). The composition of the city sample and interviewee sample introduce threats to external validity. The scope of the study was limited to those cities with identifiable social equity initiatives. In addition, while interviewees were questioned regarding wider performance measurement programs, the majority of the focus was on social equity measures. This limited scope reduces the ability to generalize beyond the sample. For example, different categories of measures, such as economic or environmental measures, may evolve differently than social equity measures. As a result, any conclusions drawn from the study may be most relevant to other cities contemplating the development of a social equity program.

In addition, reliance on a small-N sample also introduces threats to external validity. This study focused on identifying the relationship between operational and political stakeholders and measurement development. Estimating causal effect in a case study is problematic because of the large number of possible mediating variables that cannot be accounted for when using a small-N sample (Gerring, 2007). The use of open-ended interviews allowed for some exploration of possible confounding variables, but it is possible that important factors were missed. Lastly, because social equity is an evolving concept and there are limited examples of social equity measures being actively developed and used, it is possible that the cities under investigation are unique in characteristics not controlled for in this study (see, e.g., Svara et al., 2014).
Contributions to the Literature

The aim of this dissertation is to further our understanding of what contributes to or inhibits the development of performance measures in public organizations. Chapter II raised several concerns, drawn from the literature, regarding the impact of politics on performance measurement in public organizations.

First, the performance measurement literature generally distinguishes between operational processes, where performance measurement serves as a cybernetic (Zweig, Webster, & Scott, 2009) feedback tool for improving organizational efficiency, and political processes, where performance information becomes material for negotiations or political arguments (Kroll, 2015). Scholars suggest that in political processes the interpretation of performance data is highly subjective, controversial, and role induced (Kroll, 2015). This politicization of performance information can lead to both purposeful (Moynihan, 2009) and dysfunctional uses of performance measurement. This empirical investigation of the influence of politics on measurement development suggests that the traditional view regarding the politicization of performance measurement may be overly cynical when it comes to measurement development in local government. While this study does not make claims regarding the negative outcomes of political influence on performance measurement, it did find that political stakeholders serve as a source of performance measurement innovation. In this role political stakeholders, by defining the values and goals of the City, create an environment that encourages the exploration of new performance measures that might be used to track progress against those goals.
Second, the performance measurement literature raises questions about the ability of government agencies to establish their own performance measurement processes while under pressure from political stakeholders intent on exerting their influence. Here the relationship between elected officials and agency managers is portrayed as a principal-agent relationship characterized by conflict of interest and information asymmetry (Bendor, 1988). In this context, the reporting of performance information by city departments provides a governance mechanism to the City Council or Office of the Mayor that curbs departmental opportunism. The stronger the agency effect, the more aligned are the goals and values of the principal and the agent, and the stronger the disinclination for departmental variation from the values of the political principals.

Empirical findings from this study suggest that political influence on departmental behavior regarding performance measurement development is mediated by the presence of operational performance measurement processes. That is, without strong performance measurement processes, political influence had less of an impact. In other words, political willpower alone is not enough to create an environment conducive to developing new measures. There also must be a reasonable level of performance measurement processes.

In addition, the study found that increased demand for performance measurement by political stakeholders is positively related to stronger performance measurement processes in city departments. Conversely, in cities where political demand for performance measurement was weaker, there tended to be weaker operational performance measurement processes. While this study found some agencies were able to maintain a culture of performance measurement, usually driven by agency leadership that is strongly committed to performance measurement, it was more likely that the development of performance measures atrophied when confronted with political leadership that did not demand quantitative performance reporting. In this context
political stakeholders improve departmental performance measurement capability, and subsequently create an operational environment conducive to measure development, by demanding quantitative measures of departmental performance.

Future Research

An observation from this study that warrants further investigation regards the measurement review and updating processes. This researcher found that the development of performance measures changes at different rates for different cities. The rate of change is largely a factor of the timescales of performance management review processes. In some cases performance measures are reviewed as part of city plan revisions, which may occur once every several years. In other cases performance measures are reviewed as part of quarterly or biannual performance reviews. The shorter review cycles are best suited to the review of policy problems that demonstrate change within similar timescales. That is, if change related to a policy initiative is only evident over a timescale of years or decades, there is little sense in reviewing the outcome measures on a quarterly basis. Similarly, if change related to a policy initiative is evident on a daily basis, the tracking of that change on a semi-annual basis makes it difficult to tie the policy intervention to the outcome. An empirical question is whether the timescale of performance measurement processes influences the types of problems that are addressed by government agencies. Such a study would contribute to existing research regarding the use of big data for performance measurement (e.g., Bettencourt, 2013).

A second topic of interest is the relationship between the level of interaction between performance management stakeholders and the strength of performance measurement processes. This researcher observed that the performance measurement routines within cities are characterized by different levels of interaction between political and operational stakeholders.
They are also characterized by different levels of information exchange between the stakeholders. In some cases the primary exchange of information comes through the publication of city comprehensive plans. That is, politicians will set strategic goals, and these will be captured in the city plan. It is then up to the departments to translate the strategic goals into programs and services and develop measures to track their progress. In other cases departmental leaders are tasked with presenting their performance measures directly to City Council and educating them on the value of the measures. It is an empirical question as to whether these different levels of interaction contribute to stronger performance measurement processes. Such a study would contribute to goal setting and relational capital literatures (e.g., Mom et al., 2015).

Conclusion

In the first decades of the twenty-first century public managers find themselves at the intersection of two trends. First, public managers are increasingly being relied upon to collect and analyze large amounts of data (Mayer-Schönberger & Cukier, 2013; Nambiar, 2010), and second, public managers find themselves in an era of performance management. In this era, public managers are asked to justify their actions not just in terms of efficiency but also by the outcomes they produce. They meet performance reporting mandates, are asked to do more with less, and must explain the performance of their programs (Moynihan, 2008). Working in organizations that set performance goals and use performance indicators to track progress against those goals, public managers have become adept at using performance data internally to support decision making or externally to report performance results to stakeholders (Bromberg, 2009). In addition, their decisions determine the characteristics of the measures and if and how they will be used (Behn, 2003).

The purpose of this dissertation is to improve our understanding of how different stakeholders impact the development of performance measures by city-level public managers. It
examined how demand for performance measurement from political and operational stakeholders influences the development of new performance measures. It also examined how different levels of commitment to social equity by those same actors impact measurement development.

The findings suggest that commitment to performance measurement by political stakeholders is a necessary, but not sufficient, component contributing to performance measurement development. Commitment to performance measurement by agency leadership is also necessary. We would not expect to find much measurement variation without significant support from both political stakeholders and agency leadership. The impact of political stakeholders and agency leadership on measurement variation is mediated by the structure of the goal setting and performance review processes. The timescale of performance measurement review processes and the degree of interaction between political and operational stakeholders impact levels of measurement variation.

How might our City agencies improve with a better understanding of the impact of political and operational stakeholders on the development of performance measures? Political stakeholders might appreciate the importance of their participation in the process of developing performance measures. Their definition of goals and insistence on quantitative reporting against those goals is a key component of a functional performance measurement process. In addition, adjustments to performance review processes to accommodate political participation can influence measurement development and the usefulness of those measures. When politicians neglect quantitative accountability and delegate too much of the performance measurement role to City departments, the result may be counterintuitive. Instead of empowering the departments to grow their own performance measurement competency, political
neglect is more likely to result in an atrophying of operational performance measurement capabilities.

Agency leaders might temper their cynicism regarding the influence of political stakeholders on performance measurement and instead realize that political involvement is a key driver of measurement innovation. They might recognize the importance of tying their operational performance measures to the values and strategic objectives of the political stakeholders, and how the flow of performance information works best when the measures are connected to the priorities of City officials.

Lastly, a greater understanding of the roles of both political and operational stakeholders might lead to a reframing of performance measurement as an ongoing development process, rather than a static task of collecting data for a budget or an online dashboard. This reframing presents performance measurement as an iterative process wherein the quality of performance measures is shaped by the involvement of different stakeholders, the level of interaction among those stakeholders, and the timing and structure of measurement review processes.
REFERENCES

American Community Survey (ACS). (2015). 2011-2015 ACS 5-Year Data Profiles. Retrieved from https://www.census.gov/acs/www/data/data tables and tools/data profiles/2015/
Astbury, B., & Leeuw, F. L. (2010). Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation. American Journal of Evaluation, 31(3), 363-381. http://doi.org/10.1177/1098214010371972
Baumgartner, F. R., & Jones, B. D. (1993). Agendas and instability in American politics. Chicago, IL: University of Chicago Press.
BBC Research & Consulting. (2014). Fort Collins social sustainability gaps analysis. Fort Collins, CO.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63(October), 586-606. http://doi.org/10.1111/1540-6210.00322
Bendor, J. (1988). Formal models of bureaucracy. British Journal of Political Science, 18(3), 353-395. Retrieved from http://www.jstor.org/stable/10.2307/193842
Berman, E. (2002). How useful is performance measurement. Public Performance & Management Review, 25(4), 348-351.
Berman, E., & Wang, X. (1999). Performance Measurement in U.S. Counties: Capacity for Reform. Public Administration Review, 60(5), 409-420.
Bettencourt, L. M. A. (2013). The Uses of Big Data in Cities. Santa Fe, NM.
Bland, J. M., & Altman, D. G. (1997). Statistics notes: Cronbach's alpha. BMJ, 314, 572.
Borins, S. (2002). Leadership and innovation in the public sector. Leadership & Organization Development Journal, 23(8), 467-476. http://doi.org/10.1108/01437730210449357
Boulder Economic Council. (2017). Boulder economic indicators Q2 2017. Retrieved September 13, 2017, from http://bouldereconomiccouncil.org/bec_publications/boulder economic indicators q2 2017/
Boyne, G. A. (2002). Public and private management: What's the difference? Journal of Management Studies, 39(1), 97-122.
Bromberg, D. (2009). Performance measurement: A system with a purpose or a purposeless system? Public Performance & Management Review, 33(2), 214-221. http://doi.org/10.2753/PMR1530-9576330202
Brun, E., & Sætre, A. S. (2009). Managing ambiguity in new product development projects. Creativity and Innovation Management, 18(1), 24-34. http://doi.org/10.1111/j.1467-8691.2009.00509.x
Bryson, J. M. (2004). What to do when stakeholders matter. Public Management Review, 6(1), 21-53. http://doi.org/10.1080/14719030410001675722
Bryson, J. M., Gibbons, M. J., & Shaye, G. (2001). Enterprise schemes for nonprofit survival, growth, and effectiveness. Nonprofit Management & Leadership, 11(3).
Carifio, L., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150-1152.
Chan, H. S., & Gao, J. (2009). Putting the cart before the horse: Accountability or performance? The Australian Journal of Public Administration, 68(S1), S51-S61. http://doi.org/10.1111/j.1467-8500.2009.00621.x
... performance goals and goal difficulty on task performance. Accounting and Finance, 47(August 2005), 221-242.
Chun, Y. H., & Rainey, H. G. (2005). Goal ambiguity and organizational performance in U.S. federal agencies. Journal of Public Administration Research and Theory, 15(4), 529-557.
City of Arvada. (2014). Comprehensive plan. Arvada, CO.
City of Arvada. (2012). Sustain Arvada. Arvada, CO.
City of Arvada. (2017a). Focus Arvada. Arvada, CO. Retrieved October 1, 2017, from http://arvada.org/city hall/transparency/focus
City of Arvada. (2017b). Plan Arvada (current and long range plans). Arvada, CO. Retrieved October 1, 2017, from http://arvada.org/business/development/arvada plans 1445521249
City of Arvada. (2017c). 2017-2018 biennial operating and capital budget, Volume II: Performance based budgeting. Arvada, CO.
City of Arvada Finance Department. (2017). 2017 mid-year financial report. Arvada, CO. Retrieved October 2, 2017, from https://issuu.com/cityofarvada/docs/2017_2q_report_for_epub
City of Arvada, Office of Performance Management. (2017). Community dashboard. Arvada, CO. Retrieved November 17, 2017, from http://arvada.clearpointstrategy.com/
City of Arvada Parks, Golf and Hospitality Services Department. (2016). Imagine Arvada: Parks, trails and open space 2016 master plan. Arvada, CO. Retrieved October 3, 2017, from http://arvada.org/source/Parks/Arvada%20Master%20Plan.pdf
City of Boulder. (2010). The Boulder Valley Comprehensive Plan 2010. Boulder, CO. Retrieved May 24, 2017, from https://bouldercolorado.gov/bvcp
City of Boulder. (2017a). Revised Boulder Valley comprehensive plan (BVCP) detailed outline. Boulder, CO.
City of Boulder. (2017b). Master plans for City of Boulder departments. Boulder, CO. Retrieved October 11, 2017, from https://bouldercolorado.gov/planning/department master plans and strategic plans
City of Boulder. (n.d.). Boulder measures. Boulder, CO. Retrieved October 11, 2017, from https://bouldercolorado.gov/boulder measures
City of Boulder Department of Human Services. (2017). Human services strategy: Mapping our future 2017-2022. Boulder, CO. Retrieved from https://bouldercolorado.gov/human services plan/human services strategy
City of Boulder Parks and Recreation. (2013). Boulder parks and recreation department master plan. Boulder, CO. Retrieved October 9, 2017, from https://bouldercolorado.gov/planning/department master plans and strategic plans
City of Boulder Transportation Division. (2016). The Transportation Report on Progress. Boulder, CO.
City of Denver. (2017). Community profile. Denver, CO.
City of Denver. (n.d.). Human rights and community partnerships. Retrieved November 27, 2017, from https://www.denvergov.org/content/denvergov/en/human rights and community partnerships.html
City of Denver Executive Order 139. (2012). Executive order no. 139. Denver, CO. Retrieved December 12, 2017, from https://www.denvergov.org/content/dam/denvergov/Portals/executiveorders/139 Denver Childrens Cabinet.pdf
City of Denver Office of Sustainability. (2017). 2020 Sustainability Goals. Denver, CO.
City of Denver Parks & Recreation. (2016). Denver Parks & Recreation Peak Performance 2016. Denver, CO.
City of Denver, Mayor's Office. (2017). Peak performance. Retrieved November 22, 2017, from https://www.denvergov.org/content/denvergov/en/mayors office/programs initiatives/peak performance.html
City and County of Denver. (2017). Denver, CO.
City of Fort Collins. (n.d.). Fort Collins Facts. Retrieved September 12, 2017, from https://www.fcgov.com/visitor/fcfacts.php
City of Fort Collins. (2016). 2016 Strategic Plan. Fort Collins, CO. Retrieved from http://www.nshealth.ca/sites/nshealth.ca/files/nsha strategic plan poster final.pdf
City of Fort Collins. (2017a). 2017-2018 Adopted Biennial Budget. Fort Collins, CO.
City of Fort Collins. (2017b). City of Fort Collins community performance measurement dashboard. Retrieved November 10, 2017, from https://fortcollins.clearpointstrategy.com/
City of Fort Collins. (2017c). City of Fort Collins community performance measurement dashboard. Retrieved November 10, 2017, from https://fortcollins.clearpointstrategy.com/
City of Fort Collins City Council. (n.d.). Fort Collins City Council. Retrieved November 8, 2017, from https://www.fcgov.com/council/
City of Fort Collins City Manager. (n.d.). Fort Collins City Manager. Retrieved November 8, 2017, from https://www.fcgov.com/citymanager/
City of Fort Collins Social Sustainability. (2016). Fort Collins social sustainability strategic plan. Fort Collins, CO. Retrieved from http://www.fcgov.com/sustainability/pdf/SocialSustainability_FINAL_web ready_reduced.pdf
City of Fort Collins Sustainability Services. (2014). Social Sustainability. Retrieved from http://www.fcgov.com/sustainability/pdf/SSD2014flyer.pdf
City of Longmont. (2002). Longmont multicultural plan. Longmont, CO.
City of Longmont. (2016a). Multimodal & comprehensive plan. Longmont, CO.
City of Longmont. (2016b). Thrive today and tomorrow: Creating a sustainable Longmont. Longmont, CO.
Colorado Department of Local Affairs. (2015). American Community Survey data for Colorado. Retrieved September 12, 2017, from https://demography.dola.colorado.gov/census acs/american community survey data/#american community survey data for colorado
Corona Insights. (2016). Human services needs assessment: Final report. Longmont, CO.
Curtis, S., Gesler, W., Smith, G., & Washburn, S. (2000). Approaches to sampling and case selection in qualitative research: Examples in the geography of health. Social Science and Medicine, 50(7-8), 1001-1014. http://doi.org/10.1016/S0277-9536(99)00350-0
D'Adderio, L. (2014). The Replication Dilemma Unraveled: How Organizations Enact Multiple Goals in Routines Transfer. Organization Science, 1-26. http://doi.org/10.1287/orsc.2014.0913
Daft, R. L. (2007). Organization theory and design (9th ed.). Mason, OH: Thomson/South-Western.
... Engagement in Networked Environments and Goal and Role Ambiguity. Journal of Public Administration Research and Theory, 26(3), 433-447. http://doi.org/10.1093/jopart/muv023
de Lancer Julnes, P. (1999). Lessons learned about performance measurement. International Review of Public Administration, 4(2), 45-55. http://doi.org/10.1080/12294659.1999.10804932
de Lancer Julnes, P., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693-708. http://doi.org/10.1111/0033-3352.00140
de Vaus, D. (2001). Research Design in Social Research. Los Angeles, CA: Sage Publications.
... of Denver. Denver, CO. Retrieved December 27, 2017, from https://www.denvergov.org/content/dam/denvergov/Portals/713/documents/reports/StatusOfDenversChildren.pdf
Denver Parks and Recreation. (2017). Denver Parks and Recreation 2017 Game Plan Update. Denver, CO.
Denver Public Health. (2016). 2016 annual report. Denver, CO. Retrieved from https://www.icagruppen.se/globalassets/rapportportal/arsredovisning 2016/01. omslag/ica gruppen_annual_report_2016.pdf
Dinour, L. M., Kwan, A., & Freudenberg, N. (2017). Use of Comparative Case Study Methodology for US Public Health Policy Analysis. Journal of Public Health Management and Practice, 23(1), 81-89. http://doi.org/10.1097/PHH.0000000000000406
Downs, A. (2008). The life cycle of bureaus. In J. M. Shafritz & A. C. Hyde (Eds.), Classics of Public Administration (6th ed., pp. 239-251). Boston, MA: Wadsworth Cengage Learning.

PAGE 172

159 Edelman, L., Newell, S., & Scarbrough, H. (2004). The Benefits and Pitfalls of Social Capital: Empirical Evidence from Two Organizations in the United Kingdom. British Journal of Management 15 S59 S69. http://doi.org/10.1111/j.1467 8551.2004.00400.x Eisenhardt, K. M. (1989). Agency Theory: An Assessment and Review. The Academy of Management Review 14 (1), 57 74. Ethiraj, S. K., & Levinthal, D. (2009). Hoping for A to Z while rewarding only A: Co mplex organizations and multiple goals. Organization Science 20 (1), 4 21. http://doi.org/10.1287/orsc.1080.0358 Frederickson, G. (2005). The state of social equity in American public administration. National Civic Review 94 (4), 31 38. http://doi.org/10.1 002/ncr.117 Frederickson, H. G., & Smith, K. B. (2003). The public administration theory primer Cambridge, MA: Westview Press. Freeman, R. E. (1984) Strategic Management: A Stakeholder Approach Boston, MA: Pitman Gavetti, G., Greve, H. R., Levinthal, D. a., & Ocasio, W. (2012). The behavioral theory of the firm: Assessment and prospects. The Academy of Management Annals 6 (1), 1 40. http://doi.org/10.1080/19416520.2012.656841 Gerlak, A. K., & Heikkila, T. (2011). Building a theory of learning in collabora tives: Evidence from the everglades restoration program. Journal of Public Administration Research and Theory 21 (4), 619 644. http://doi.org/10.1093/jopart/muq089 Gerring, J. (2006). Case study research: Principles and practices Cambridge University Press. Gerring, J. (2007). Techniques for Choosing Cases. In Doing Case Studies (pp. 86 150). Cambridge, MA: Cambridge University Press. Gerring, J. (2012). Social Science Methodology, A Unified Framework (2nd ed.). Cambridge, MA: Cambridge University Pres s. Gibson, C. B., & Birkinshaw, J. (2004). The antecedents, consequences, and mediating role of organizational ambidexterity. The Academy of Management Journal 47 (2), 209 226. Glaser, B. G. (1965). The Constant Comparative Method of Qualitative Analysis. Social Problems1 12 (4), 436 445. Gooden, S. T. (2006). Addressing racial disparities in social welfare programs: using social equity analysis to examine the problem. Journal of Health & Social Policy 22 (2), 1 12. http://doi.org/10.1300/J045v22n02_01

PAGE 173

160 Grim melikhuijsen, S. G., & Meijer, A. (2012). Effects of Transparency on the Perceived Trustworthiness of a Government Organization: Evidence from an Online Experiment. Journal of Public Administration Research and Theory 24 137 157. http://doi.org/10.1093/j opart/mus048 Gruber, J. (1987). Controlling bureaucracies Berkeley, CA: University of California Press. Guy, M. E., & Mccandless, S. a. (2012). Social Equity: Its Legacy, Its Promise. The American Society for Public Administration 72 (November/December), S5 S13. http://doi.org/10.111/j.1540 6210.2012.02635.x.Social Halachmi, A. (2005). Performance measurement: test the water before you dive in. International Review of Administrative Sciences 71 (2), 255 266. http://doi.org/10.1177/0020852305053884 Hatry, H P. (2014). Transforming Performance Measurement for the 21st Century Washington, DC. Retrieved from http://www.urban.org/sites/default/files/alfresco/publication pdfs/413197 Transforming Performance Measurement for the st Century.PDF Heikkila, T., & Ise tt, K. R. (2007). Citizen Involvement and Performance Management in Special Purpose Governments. Public Administration Review 67 (April), 237 247. Holmstrom, B., & Milgrom P. (1991). Multitask principal agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics, & Organization 7 (January 1991), 24 52. http://doi.org/10.1016/j.jet.2008.05.008 Hood, C. C. (1991). A public management for a ll seasons? Public Administration 69 (1), 3 19. http://doi.org/10.1111/j.1467 9299.1991.tb00779.x Hood, C. C. (2012). Public management by numbers as a performance enhancing drug: two hypotheses. Public Administration Review 72 (S1), 585 592. http://doi.or g/10.111/j.l540 6210.2012.02634.x.Public Jacob, B., & Larson, S. J. (2015). Social equity and performance measurement: Evaluating the status of scholarly and practical metrics on a road less traveled Working paper. James, O. (2011). Performance Measures a nd Democracy: Information Effects on Citizens in Field and Laboratory Experiments. Journal of Public Administration Research and Theory 21 (3), 399 418. http://doi.org/10.1093/jopart/muq057 Johnson, N. J., & Svara, J. H. (Eds.). (2011). Justice for all: Pr omoting social equity in public administration. Armonk, NY: ME Sharpe.

PAGE 174

161 Jung, C. S. (2011). Organizational Goal Ambiguity and Performance: Conceptualization, Measurement, and Relationships. International Public Management Journal 14 (2), 193 217. http://doi .org/10.1080/10967494.2011.589760 Jung, C. S. (2014). Extending the Theory of Goal Ambiguity to Programs: Examining the Relationship between Goal Ambiguity and Performance. Public Administration Review 74 (2), 205 219. http://doi.org/10.1111/puar.12176.Ext ending Kaufman, H. (2006). The forest ranger: a study in administrative behavior (Special reprint edition). Washington, D.C.: Resources for the Future. Kettl, D. F. (2000). Public administration at the millennium: The state of the field. Journal of Public Administration Research and Theory 10 (1), 6 34. Kingdon, J. W. (1984). Agendas, alternatives, and public policies Glenview, IL: Scott, Foresman and Company. Kochan, T. A., Huber, G. P., & Cummings, L. L. (1975). Determinants of intraorganizational conflict in collective bargaining in the public sector. Administration & Society 20 (1), 10 23. Kroll, A. (2015). Drivers of Performance Information Use: Systematic Literature Review and Directions for Future Research. Public Performance & Management Revi ew 38 (3), 459 486. http://doi.org/10.1080/15309576.2015.1006469 Larson, S. J., Jacob, B., & Butz, E. (2017). Linking social equity and performance measurement: Denver, CO. Lawton, A., McKevitt, D., & Millar, M. (2000). Developmen ts: Coping with Ambiguity: Reconciling External Legitimacy and Organizational Implementation in Performance Measurement. Public Money & Management 20 (3), 13 20. http://doi.org/10.1111/1467 9302.00218 Leech, N. L., & Onwuegbuzie, A. J. (2008). Qualitative Data Analysis: A Compendium of Techniques and a Framework for Selection for School Psychology Research and Beyond. School Psychology Quarterly 23 (4), 587 604. http://doi.org/10.1037/1045 3830.23.4.587 Levinthal, D. A., & March, J. G. (1993). The myopia of learning. Strategic Management Journal 14 95 112. http://doi.org/10.1017/CBO9781107415324.004 Levitin, M. (2015). The triumph of occupy wall street. The Atlantic 1 7. Retrieved from http://www.theatlantic.com/politics/archive/2015/06/the triumph of occ upy wall street/395408/

PAGE 175

162 Lipsky, M. (2010). Street level bureaucracy: Dilemmas of the individual in public services (1980) New York, NY: Russel Sage Foundation. Locke, E. A & Latham, G. P. (2002). Building a practically useful theory of goal setting and t ask motivation: A 35 year odyssey. The American Psychologist 57 (9), 705 717. http://doi.org/10.1037/0003 066X.57.9.705 Locke, E. A., Smith, K. G., Erez, M., Chah, D. O., & Schaffer, A. (1994). The effects of intra individual goal conflict on performance. Journal of Management 20 (1), 67 91. Lowi, T. J. (1969). The end of liberalism: Ideology, policy, and the crisis of public authority New York, NY: Norton. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science 2 (1), 71 87. March, J. G. (1994). A primer on decision making: How decisions happen New York, NY: The Free Press. March, J. G. (2003). Understanding organizational adaptation. Society and Econom 25 (1), 1 10. March, J. G., & Simon, H. A. (1993). Organizat ions (Second). Cambridge, MA: Blackwell Business. Mayer Schnberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work and think Boston, MA: Houghton Mifflin Harcourt. Maynard Moody, S., & Musheno, M. (2000). State agen t or citizen agent: Two narratives of discretion. Journal of Public Administration Research and Theory 10 (2), 329 358. Mcgrath, R. G. (2001). Exploratory learning, innovative capacity, and managerial oversight. The Academy of Management Journal 44 (1), 11 8 131. Meier, K. J., Favero, N., & Zhu, L. (2015). Performance Gaps and Managerial Decisions: A Bayesian Decision Theory of Managerial Action. Journal of Public Administration Research and Theory 25 (4), 1221 1246. http://doi.org/10.1093/jopart/muu054 Meie r, K. J., Brudney, J. L., & Bohte, J. (2009). Applied statistics for public & nonprofit administration. Seventh edition. Thomson Wadsworth. Performance Gaps and Managerial Decisions: A Bayesian Decision Theory of Managerial Action. Journal of Public Admin istration Research and Theory 25 (4), 1221 1246. http://doi.org/10.1093/jopart/muu054 exploration and exploitation activities: The influence of top down, bottom up, a nd

PAGE 176

163 horizontal knowledge inflows. Journal of Management Studies 44 (6), 910 931. http://doi.org/10.1111/j.1467 6486.2007.00697.x Mom, T. J. M., van Neerijnen, P., Reinmoeller, P., & Verwaal, E. (2015). Relational capital and individual exploration: unravell ing the influence of goal alignment and knowledge acquisition. Organization Studies 36 (6), 809 829. http://doi.org/10.1177/0170840615580009 Mom, T. J. M., van den Bosch, F. A. J., & Volberda H. W. (2009). Understanding variation in and personal coordination mechanisms. Organization Science 20 (4), 812 828. http://doi.org/10.1287/orsc.1090.0427 Moynihan D. P. (2008). The dynamics of performance management: Constructing information and reform (Vol. 15). Washington, D.C.: Georgetown University Press. Moynihan, D. P. (2015). Uncovering the circumstances of performance information use findings from an experiment. Public Performance & Management Review 39 (1), 33 57. http://doi.org/10.1080/15309576.2016.1071160 Moynihan, D. P., & Hawes, D. P. (2012). Resp onsiveness to Reform Values: The Influence of the Environment on Performance Information Use. Public Administration Review 72 (s1), 95 105. http://doi.org/10.111/j.1540 6210.2012.02653.x.Responsiveness Nambiar, R. (2010). Data, data everywhere: A special r eport on managing information The Economist Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/23381205 National Academy of Public Administration. (n.d.). Social Equity in Governance Retrieved October 25, 2016, from http://www.napawash.org/fellows/standi ng panels/social equity in governance.html Nicholson Crotty, S. & Nicholson Crotty, J. (2004). Interest group influence on managerial priorities in public organizations. Journal of Public Administration Research and Theory 14 (4), 571 583. http://doi.org/ 10.1093/jopart/muh037 Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector New York, NY: Penguin Group. Pandey, S. K., & Wright, B. E. (2006). Connecting the Dots in Public Management: Political Journal of Public Administration Research and Theory 16 (4), 511 532. Pollitt, C. (1990). Managerialism and the public services: The Anglo American experience Cambridge, MA: Basil Blackwell.

PAGE 177

164 Pollitt, C. (2013). The logics of performance management. Evaluation 19 (4), 346 363. http://doi.org/10.1177/1356389013505040 Poole, M. S., & Van de Ven, A. H. (1989). Using paradox to build management and organization theor ies. The Academy of Management Review 14 (4), 562 578. Pressman, J. L., & Wildavsky, A. (1984). Implementation (3rd ed.). Berkeley, CA: University of California Press. QSR International. (2011). NVivo 11 for Windows Help. Doncaster Victoria, Australia. Quinn, R. E., & Rohrbaugh J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Decision 29 (3), 363 377. Radin, B. A. (2006). Challenging the performance movement: Accountability, complexity, and democratic values Washington, D.C.: Georgetown University Press. Ragin, C. C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies Berkeley, CA: University of California Press. Ragin, C. C., Shulman, D., Weinberg, A., & Gran, B. (2003). Complexity, Generality, and Qualitative Comparative Analysis. Field Methods 15 (4), 323 340. http://doi.org/10.1177/1525822X03257689 Rainey, H. G. (2009). Understanding and managing public organizations (4th ed.). San Franc isco, CA: Jossey Bass. Rainey, H. G., & Bozeman, B. (2000). Comparing public and private organizations: Empirical research and the power of the a priori. Journal of Public Administration Research & Theory 10 (2), 447. http://doi.org/Article Rainey, H. G., & Jung, C. S. (2015). A conceptual framework for analysis of goal ambiguity in public organizations. Journal of Public Administration Research and Theory 25 (1), 71 99. http://doi.org/10.1093/jopart/muu040 Raisch, S., & Birkinshaw, J. (2008). Organizationa l ambidexterity: antecedents, outcomes, and moderators. Journal of Management 34 (3), 375 409. http://doi.org/10.1177/0149206308316058 Rittel, W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4(2), 155 169.

PAGE 178

165 Rethemeyer R. Karl. (2003 present). PAD 705 Handout: Multicollinearity. Albany: Rockefeller College of Public Affairs and Policy, University at Albany SUNY (http://www.albany.edu/faculty/kretheme/PAD705/overview.html). Sabatier, P. A. (1988). An advoca cy coalition framework of policy change and the role of policy oriented learning therein. Policy Sciences 21 (2), 129 168. Retrieved from http://www.springerlink.com/index/g02w853358162113.pdf Sanger, M. B. (2008). From measurement to management: Breaking through the barriers to state and local performance. Public Administration Review 68 (s1), S70 S85. Sanger, M. B. (2013). Does Measuring Performance Lead to Better Performance? Journal of Policy Analysis & Management 32 (1), 185 208. http://doi.org/10.1002 /pam.21657 Schneider, C. Q., & Wagemann, C. (2012). Methods for the social sciences: A guide to qualitative comparative analysis Cambridge, UK: Cambridge University Press. Schulze, P. (2009). Balancing exploitation and exploration: Organizational antecedents and performance effects of innovation strategies Gabler Research. Schumpeter, J. A. (1934). The theory of economic development Cambridge, MA: Harvard University Press. Simon, H. A. (1997/1947). Administrative behavior (4th ed.). New York: The Free Press. Singleton, R. A., & Straits, B. C. (2010). Approaches to social research (5th ed.). Oxford, England: Oxford University Press. Smith, K. G., & Locke, E. A. (1990). Goal setting, planning, and organizational performance: An experimental simulati on. Organizational Behavior and Human Decision Processes 46 (1), 118 134. StataCorp. ( 2011 ) Stata Statistical Software: Release 12. College Station, TX: StataCorp LP Stetler, K. L., & Magnusson, M. (2015). Exploring the tension between clarity and ambigui ty in goal setting for innovation. Creativity and Innovation Management 24 (2), 231 246. http://doi.org/10.1111/caim.12102 Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory Thousand Oaks, CA. Sage. Sullivan, G. M., & Artino, A. R. (2013). Analyzing and Interpreting Data From Likert Type Scales. Journal of Graduate Medical Education 5 (4), 541 542. http://doi.org/10.4300/JGME 5 4 18

PAGE 179

166 Svara, J. H., Watt, T., & Takai, K. (2014). Advancing social equity goals to achieve sustainability U.S. Department Of Housing & Urban Development Washington, D.C. evolutionary and revolutionary change. California Management Rev iew 38 (4), 8 30. http://doi.org/10.1080/09652540903536982 United States Bureau of Labor Statistics ( n.d. ). Retrieved September 9, 201 7 from https://www.google.com/publicdata United States Census Bureau. (201 0 ). Community Facts Retrieved September 29, 2 01 7 from https://factfinder.census.gov United States Census Bureau. (201 6 ). Income, Poverty and Health Insurance Coverage in the United States: 2015 Retrieved September 29, 2016, from http://www.census.gov/newsroom/press releases/2016/cb16 158.html United States Census Bureau. ( n.d. ). QuickFacts Retrieved September 9, 201 7 from https://www.census.gov/quickfacts United States Government Accountability Office (USGAO). (2014). Managing for Results: n to Make Decisions (GAO 14 747) Washington, DC. Retrieved from http://www.gao.gov/assets/670/666187.pdf Van de Ven, A. H., & Poole, M. S. (1995). Explaining development and change. The Acade m y of Management Review 20 (3), 510 540. http://doi.org/10.5465/ AMR.1995.9508080329 Walker, R. M. (2014). Internal and External Antecedents of Process Innovation: A review and extension. Public Management Review 16 (1), 21 44. http://doi.org/10.1080/14719037.2013.771698 Wang, S., & Feeney, M. K. (2014). Determinants of information and communication technology adoption in municipalities. The American Review of Public Administration (October), 1 22. http://doi.org/10.1177/0275074014553462 Wang, X., & Berman, E. (2001). Hypotheses about Performance Measurement in Counties : Findings from a Survey. Journal of Public Administration Research and Theory 11(3), 403 428. Retrieved from http://www.scopus.com/inward/record.url?eid=2 s2.0 0347117546&partnerID=40&md5=bbab9e8445d70ce71511c9dec98111e3 Weller, N., & Barnes, J. (2014). Pathway Analysis and the Search for Causal Mechanisms. Sociological Methods & Research 1 34. http://doi.org/10.1177/0049124114544420 Weible C. M. (2007). An advocacy coalition framework approach to stakeholder analysis: Understanding the political context of California marine protected area policy. Journal of

PAGE 180

167 Public Administration Research and Theory 17 (1), 95 117. http://doi.org/10.1093/jo part/muj015 Wilson, J. Q. (1989). Bureaucracy. New York, NY: Basic Books. Wood, C. (2015). i nitiative t argets 100 mid sized metros Retrieved April 22, 2015, from http://www.govtech.com/data/Bloombergs What Works Cities Ini tiative Targets 100 Mid Sized Metros.html Yin, R. K. (2009). Case Study Research: Design and Methods (4th ed.). Thousand Oaks, CA: Sage. Zollo, M., & Winter, S. G. (2002). Deliberate learning and the evolution of dynamic capabilities. Organization Science 13 (3), 339 351. http://doi.org/10.1287/orsc.13.3.339.2780 Zweig, D., Webster, J., & Scott, K. A. (2009). Making the Decision to Monitor in the Workplace: Cybernetic Models and the Illusion of Contr ol. In G. P. Hodgkinson & W. H. Starbuck (Eds.), The Oxford Handbook of Organizational Decision Making (pp. 1 13). Oxford University Press. http://doi.org/10.1093/oxfordhb/9780199290468.003.0006

PAGE 181

APPENDIX A

INTERVIEW INSTRUMENT

Subject: Interview instrument
Study Title: The evolution of data in public organizations: exploring stakeholder influence on the development of performance measures
Principal Investigator: Eric Butz
60068-ADM VCR
COMIRB Record Number: 17-0960
Version Date: 13 Jul 2017

Introduction

Thank you for taking the time to talk to me about your organization. Let me begin by giving you a bit of background about myself and the project. First, I am a PhD candidate at the University of Colorado Denver working on my dissertation. The project I am working on explores how different stakeholders influence the development of performance measures in public organizations. Specifically, it looks at the development of performance measures surrounding social equity programs and initiatives. So, broadly speaking, I am interested in understanding:

1. What does the process of adding new performance measures look like?
2. What does the process of measure evaluation or elimination look like?
3. How do different internal and external stakeholders influence these processes?

So, this is an exploration into the activities surrounding the development of performance measures. The results of the work will be made publicly available through conference presentations, professional reports, and academic papers. If at any time you wish to end the interview, just let me know and we can stop. Or if there are things you want to keep confidential, just let me know and I will stop the recording and will stop taking notes.

Before we proceed, do you have any questions or concerns?

Some interview questions will use the following scale:

1 = To no extent
2 = To a small extent
3 = To a moderate extent
4 = To a great extent
5 = To a very great extent

First, can you begin by describing the structure of your organization and where you fit in that structure?

Next, I want to understand a little bit about the performance measurement and social equity initiatives for your organization.

(Performance Measurement)
To what extent does your city/organization/department measure performance?
What resources are committed to performance measurement (technology, staff, dedicated department)?
Would you consider the performance measurement initiative to be a centralized or decentralized activity?
Probe: Do departments have autonomy?

(Social Equity)
Can you tell me a bit about the (social equity initiative)?
In general, who/what is driving the social equity initiative?
Probe: What other stakeholders or city officials provide support for the social equity initiative? (Mayor, City Council, regional groups, nonprofit stakeholders, community groups)
What resources are committed to social equity?
Probe: Are the performance measurement resources currently supporting the measurement of social equity?
Who participates in setting social equity priorities and goals? Is there an external committee?
Probe: Do departments have autonomy?

Stakeholder influence (IV)

Office of the Mayor (or City Council)

(Performance Measurement)
a) The office of the Mayor (or City Council) demonstrates a strong commitment to performance measurement.
b) The office of the Mayor (or City Council) uses performance measures to guide decision making.
Probe: Is there an external policy requiring the use of performance measures?

(Social Equity)
a) The office of the Mayor (or City Council) demonstrates a strong commitment to social equity.
b) The office of the Mayor (or City Council) uses equity measures to guide decision making.
Probe: Is there an external policy requiring the tracking of equity goals or the use of equity measures?

Agency leadership

Departmental leadership refers to department heads, senior managers, or agency leaders.

(Performance Measurement)
a) My agency's top leadership demonstrates a strong commitment to performance measurement.
b) My agency's top leadership uses performance measures to guide decision making.
Probe: Is there an internal agency policy requiring the use of performance measures?

(Social Equity)
a) My agency's top leadership demonstrates a strong commitment to social equity.
b) My agency's top leadership uses equity measures to guide decision making.
Probe: Is there an internal agency policy requiring the tracking of equity goals or the use of equity measures?

Front line workers

(Performance Measurement)
a) Front line workers demonstrate a strong commitment to performance measurement.

(Social Equity)
a) Front line workers are aware of social equity in general.
b) Front line workers measure their progress toward social equity goals.

Measurement Variation (DV)

The presence of new measures, or the extent of activities or processes that drive the generation of new measures.
a) My city/department regularly adds new measures.
Probe: Does your city/department have a process for discovering and adding new measures? What does the process of adding new performance measures look like? Can you give me an example of a measure that was added? Has this new measure process occurred for social equity measures? Can you give me an example of a social equity measure that was added?

Evidence of measures being selected or reduced according to effectiveness, or the extent of activities or processes wherein measures are evaluated and selected according to certain criteria.
a) My city/department regularly evaluates the effectiveness of existing measures.
Probe: Does your city/department have a process for evaluating the effectiveness of existing measures? What does the process of performance measure evaluation or elimination look like? Can you give me an example of a measure that was eliminated? Has this measurement review process occurred for social equity measures? Can you give me an example of a social equity measure that was eliminated?


IV-DV relationship

Who drives the adding of new performance measures? Who drives the evaluation or elimination of existing measures?

Office of the Mayor (or City Council)

(Mayor Variation)
a) The office of the Mayor (or City Council) is a source of new performance measures.
b) The office of the Mayor (or council) influences agency managers to develop new measures.
Probe: Examples? What did the process look like?

(Mayor Selection)
a) The office of the Mayor (or council) evaluates and eliminates existing performance measures.
b) The office of the Mayor (or council) influences agency managers to evaluate and eliminate existing measures.
Probe: Examples? What did the process look like?

Agency leadership

(Leadership Variation)
a) My agency's top leadership is a source of new performance measures.
b) My agency's top leadership influences the development of new measures.
Probe: Examples? What did the process look like?

(Leadership Selection)
a) My agency's top leadership evaluates and eliminates existing performance measures.
Probe: Examples? What did the process look like?

Front line workers

(Workers Variation)
a) Front line workers are a source of new performance measures.
b) Front line workers influence the development of new measures.
Probe: Examples? What did the process look like?

(Workers Selection)
a) Front line workers evaluate and eliminate existing performance measures.
b) Front line workers influence the evaluation and elimination of existing measures.
Probe: Examples? What did the process look like?