Citation
Performance management in Colorado's child welfare system : the link between systemic constraints and permanency outcomes

Material Information

Title:
Performance management in Colorado's child welfare system : the link between systemic constraints and permanency outcomes
Creator:
Chapman, Carrie L.
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
2019
Language:
English

Thesis/Dissertation Information

Degree:
Doctorate (Doctor of Philosophy)
Degree Grantor:
University of Colorado Denver
Degree Divisions:
School of Public Affairs, CU Denver
Degree Disciplines:
Public affairs
Committee Chair:
Varda, Danielle M
Committee Members:
Ronquillo, John
deLeon, Peter
Hicks, Darrin

Notes

Abstract:
Studies of performance management have long been central to the field of public management, noted for their importance in understanding organizational behavior and facilitating better outcome achievement. Most studies, however, have explored performance management as either a dependent variable affected by broader systemic constraints or as an independent variable influencing the attainment of organizational objectives. This dissertation proposes that performance management is better understood as both predictor and outcome and examines this proposition in Colorado’s child welfare system. Using a population study of 64 county child welfare agencies, this thesis analyzed how systemic constraints including population size, economic profiles, and geography impacted agencies’ abilities to achieve performance management standards and, in turn, the extent to which performance management impacted permanency outcomes for children in out-of-home care. A quantitative research design employing cluster analysis and logistic regression for rare events indicated that only limited empirical support existed, suggesting that future studies should continue to develop richer insight regarding the role of performance management to bolster our theoretic and practical understanding of this complex concept.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
Copyright Carrie L. Chapman. Permission granted to University of Colorado Denver to digitize and display this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.

Full Text
PERFORMANCE MANAGEMENT IN COLORADO’S CHILD WELFARE SYSTEM:
THE LINK BETWEEN SYSTEMIC CONSTRAINTS AND PERMANENCY OUTCOMES
by
CARRIE L. CHAPMAN
B.A., University of North Carolina at Asheville, 2010
A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Public Affairs Program
2019


©2019
CARRIE L. CHAPMAN
ALL RIGHTS RESERVED


This thesis for the Doctor of Philosophy Degree by Carrie L. Chapman has been approved for the Public Affairs Program by
Danielle M. Varda, Advisor
John Ronquillo
Peter deLeon
Darrin Hicks
Date: May 18, 2019


Chapman, Carrie L. (PhD, Public Affairs Program)
Performance Management in Colorado’s Child Welfare System: The Link Between Systemic
Constraints and Permanency Outcomes
Thesis directed by Associate Professor Danielle M. Varda
ABSTRACT
Studies of performance management have long been central to the field of public management, noted for their importance in understanding organizational behavior and facilitating better outcome achievement. Most studies, however, have explored performance management as either a dependent variable affected by broader systemic constraints or as an independent variable influencing the attainment of organizational objectives. This dissertation proposes that performance management is better understood as both predictor and outcome and examines this proposition in Colorado’s child welfare system. Using a population study of 64 county child welfare agencies, this thesis analyzed how systemic constraints including population size, economic profiles, and geography impacted agencies’ abilities to achieve performance management standards and, in turn, the extent to which performance management impacted permanency outcomes for children in out-of-home care. A quantitative research design employing cluster analysis and logistic regression for rare events indicated that only limited empirical support existed, suggesting that future studies should continue to develop richer insight regarding the role of performance management to bolster our theoretic and practical understanding of this complex concept.
The form and content of this abstract are approved. I recommend its publication.
Approved: Danielle M. Varda


This dissertation is dedicated to my parents, Daniel and Diane Chapman, whose unwavering support throughout a journey across many bumpy roads allowed me to fly.


ACKNOWLEDGMENTS
There are many people responsible for the completion of this degree. There was many a night when I was not convinced this pursuit would ever manifest into its credentialed completion. There are many nights, still, when I reflect in disbelief at my reality. But thanks to so many extraordinary people, with their guidance, brilliance, and support, I have been able to arrive humbly at the train’s final stop.
Danielle Varda, this dissertation would never have been possible without you. Your expertise, kindness, knowing just what I needed and delivering every time, brought this process to completion. To my committee members, Peter deLeon, John Ronquillo, and Darrin Hicks, thank you for your patience, feedback, and willingness to go on this journey with me. I am deeply indebted to you all. To Dawn Savage, you are a rock star of epic proportions.
Along the way, I have been fortunate to have so many friends, colleagues, and family guide me, including: Todd Boesdorfer, Jon Pierce, Kate and Robert Cope, Chelsey Weaver, Bethany Johnson, Alice Hall, Sandra Hodgin, Warren Eller, Brian Gerber, Benoy Jacob, Erin Crites, Diane Johnson, Mary and Paul Bougie, Lena Lucivero, Kathleen Gallagher, Vanessa Fenley, Brenda Dickhoner, Bill Sabo, Mark Gibney, Ida Drury, Alex Henderson, Mike Sabato, and Erin Lynch. To Roger Desrosiers, thank you for being my greatest mentor. To the other wonderful influences whose names are too numerous to mention, know how much I thank and appreciate you.
Finally, I would not be here today were it not for the support of my family. Mom and Dad, thank you for being my strongest champions. Laura, Chris, and Ray, thank you, always, for bringing laughter and joy to my life. Nonne and Papa, I know you’ve been here all along. My love for you all knows no bounds.


TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION........................................................................1
Significance of the Problem..........................................................2
Evidence from the Literature.........................................................4
Research Questions and Hypotheses....................................................6
Overview of Dissertation.............................................................7
II. REVIEW OF THE LITERATURE...........................................................8
Definitions: What is Performance Management?.........................................8
Why Performance Management? Importance to Research and Practice.....................10
The Factors Affecting Performance Management in Organizations.......................12
Eligible Population..............................................................13
Economic Profile.................................................................14
Geography........................................................................15
Performance Management’s Impact on Organizational Outcome Attainment................15
Conceptual Framework................................................................18
Study Context.......................................................................20
III. METHODOLOGY......................................................................25
Research Questions and Hypotheses...................................................25
Population and Sample...............................................................27
Data Collection.....................................................................28
Empirical Measures..................................................................32
Systemic Factors Influencing Performance Management (Concepts 1-3)...............34
Performance Management (Concept 4)...............................................38
Control Variables (Concepts 5-7).................................................40
Permanency Outcomes (Concept 8)..................................................43
Data Analysis.......................................................................45
Descriptive Statistics...........................................................45
Means Comparisons, Correlations, and Chi-square..................................47
Logistic Regression for Rare Events..............................................48
Testing for Mediation: Bootstrapping.............................................51
IV. RESULTS..........................................................................53
Descriptive Findings................................................................53
Descriptive Statistics...........................................................53
Two-Step Cluster Analysis.........................................................60
Inferential Findings.................................................................66
Means Comparisons, Correlations, and Chi-Square Statistics........................66
Logistic Regression...............................................................76
Bootstrapping for Mediation.......................................................77
Level of Support for Hypotheses......................................................78
V. DISCUSSION AND CONCLUSION..........................................................80
Discussion..........................................................................80
Study Limitations....................................................................83
Next Steps...........................................................................86
REFERENCES.............................................................................90
APPENDIX
A. Variable Labels...................................................................100
B. All Variables by County for Fiscal Year 2013......................................101
C. All Variables by County for Fiscal Year 2014......................................104
D. All Variables by County for Fiscal Year 2015......................................107
E. All Variables by County for Fiscal Year 2016......................................110
F. All Variables by County for Fiscal Year 2017......................................113
G. Percent Change Across Variables by County from Fiscal Year 2013 to Fiscal Year 2017.116
H. Selected Histograms Demonstrating Normality Approximations........................119
I. Mann-Whitney U Means Comparisons for Fiscal Years 2016 and 2017......................120
J. Summary of Firth’s Logistic Regression Models......................................122


LIST OF TABLES
TABLE
1. Overview of Research Questions, Concepts, and Measures...........................33
2. Descriptive Statistics for FY2013................................................55
3. Descriptive Statistics for FY2014................................................56
4. Descriptive Statistics for FY2015................................................57
5. Descriptive Statistics for FY2016................................................57
6. Descriptive Statistics for FY2017................................................58
7. Cluster Analyses for FY2013.......................................................61
8. Cluster Analyses for FY2014.......................................................62
9. Cluster Analyses for FY2015.......................................................63
10. Cluster Analyses for FY2016.....................................................64
11. Cluster Analyses for FY2017.....................................................65
12. Mann-Whitney U Means Comparisons for FY2013......................................67
13. Mann-Whitney U Means Comparisons for FY2014......................................69
14. Mann-Whitney U Means Comparisons for FY2015......................................70
15. Correlation Matrix for FY2013...................................................72
16. Correlation Matrix for FY2014...................................................72
17. Correlation Matrix for FY2015...................................................73
18. Correlation Matrix for FY2016...................................................74
19. Correlation Matrix for FY2017...................................................75
20. Chi-Square Tests for FY2013 to FY2017...........................................76
21. Firth Logistic Regression for FY2013, Performance Management and Permanency.....77


LIST OF FIGURES
FIGURE
1. A conceptual framework of performance management...................................4
2. A conceptual framework of performance management...................................19
3. Configuration of states' administrative structures.................................21
4. Colorado county geographic designations............................................37
5. Visual representation of cluster formation.........................................46
6. Rare event bias in logistic regression.............................................49
7. Changes in performance management relative to changes in permanency success........59
8. Summary of support for hypotheses..................................................78


I. INTRODUCTION
The concept of performance management has historically been of importance to the broader fields of public management and public administration (Kroll & Moynihan, 2017). Currently, performance management has garnered considerable attention as research attempts to unpack what has long been considered a “black box”: a construct regarded as an essential facet of organizational goal attainment yet equally challenging to define theoretically and assess empirically (Boyne, Meier, O’Toole, Jr., & Walker, 2005).
Typically, the concept of performance management is studied through one of two lenses: either as a causal mechanism, influencing organizational outputs, or as a dependent variable, being affected by some series of events within an agency. This dissertation proposes to examine performance management as a mediating influence—that is, to analyze performance management as the link between broader systemic predictors and organizational outcome realization. It is proposed that exogenous influences affect organizational performance, understood here as performance management, and that, in turn, performance management impacts the attainment of organizational outcomes. Thus, rather than explore a direct relationship between exogenous forces and outcome achievement, it is posited that performance management exists as a mediating variable between the two. For example, the size of the population eligible for services may be hypothesized to impact organizational outcome attainment through a strain on the agency’s resource capacity, but it is argued here that population strains will first impact the ability to achieve performance management standards, the effects of which will then impact outcome realization. In this way, performance management is the intermediate link between systemic factors and outcomes.


The remainder of this introduction identifies the significance of the problem, including a statement of the dissertation’s contribution to theory and practice, summarizes key themes from the literature, articulates the dissertation’s research questions and hypotheses, and concludes with an overview of the dissertation’s organization.
Significance of the Problem
Effective performance management has been deemed essential to facilitate organizational outcome attainment. In the face of resource scarcity, competing expectations from stakeholders, and public sector systems burdened by excessive workload and service demands, understanding the factors that drive organizational outcomes is critical (Arnaboldi, Lapsley, & Steccolini,
2015). Yet, despite recognition in both theory and practice that performance management is necessary to realize broader organizational objectives, the construct remains ambiguous in its definition, making it difficult to ascertain just what, exactly, organizations are striving to achieve and how they might best set about achieving it. Among the many definitions that exist, this dissertation employs Radnor and Barnes’ (2007) conceptualization, which defines performance management as:
the quantifying, either quantitatively or qualitatively, of the input, output or level of activity of an event or process. Performance management is action, based on performance measures and reporting, which results in improvements in behavior, motivation and processes and promotes innovation. (Radnor & Barnes, 2007, p. 393)
When understood as an action, performance management can be robustly defined as a dynamic
variable both critical to organizational success and shaped by events that precede its impact on
larger goals.
In reflecting on extant research, these two patterns of performance management conceptualization were observed. In the first pattern, performance management is treated as a dependent variable of sorts, being subject to variability from both organizational and
environmental constraints. In the latter, performance management is conceived as a predictor, a driving force behind the attainment of organizational outcomes. While such conceptualizations have contributed to the development of a rich body of research, this dissertation posits that examining performance management exclusively as outcome or predictor does not afford a comprehensive understanding. Instead, it is argued that performance management is better classified as a mediating influence, being at once impacted by systemic constraints and in turn affecting organizational outcome achievement. It is within this reconceptualization that the dissertation aims to offer its theoretic contribution.
Beyond theoretic advancement, this dissertation also hopes to make a viable contribution to practice. With limited time and resources available, public agencies are known to struggle in making the best use of performance management data (Heinrich, 1999; Moynihan & Pandey, 2010; Moynihan & Kroll, 2016). If performance management is demonstrated to impact outcomes, however, then refinement of and improvements to organizational practices may be best actualized through the insights gleaned from performance management indicators. Framed differently, it is important for agencies to understand if they are accurately measuring progress and processes that contribute to overall effectiveness, and if not, to use performance data to better leverage existing resources in alignment with intermediate outcomes known to have a meaningful impact on long-term goals. By examining performance management within the context of child welfare using measures identified by the Colorado Department of Human Services, this dissertation can offer a practical contribution to an organizational structure that has already defined the performance benchmarks to which service providers are expected to adhere.


Evidence from the Literature
There exists an extensive body of research dedicated to the study of performance management in public organizations. Typically, performance management is conceptualized as either an outcome, being affected by larger managerial, organizational, and environmental constraints, or as a predictor, influencing the achievement of broader organizational objectives (cf. Heinrich, 2002; McBeath & Meezan, 2010; Amirkhanyan, Kim, & Lambright, 2014). In this dissertation, performance management is conceptualized as both predictor and outcome, arguing that the construct is better understood as an intermediate output between systemic constraints, on the one hand, and outcome achievement, on the other. Specifically, this relationship is explored in the context of Colorado’s child welfare system, where outcomes are defined as permanency, which refers to either biological reunification or adoption as an exit from out-of-home care for system-involved youth (D’Andrade, Osterling, & Austin, 2008). Figure 1 below presents this conceptual framework.
[Figure 1 is a four-panel path diagram. Panel 1, “Traditional Conceptualization: PM as DV,” shows path a from Systemic Constraints to Performance Management. Panel 2, “Traditional Conceptualization: PM as IV,” shows path b from Performance Management to Organizational Outcomes. Panel 3, “Current Conceptualization: PM as DV and IV,” chains the two: Systemic Constraints to Performance Management (path a) to Organizational Outcomes (path b). Panel 4, “Empirical Conceptualization: PM as Mediator,” adds path c linking Systemic Constraints to Organizational Outcomes alongside paths a and b.]
Figure 1. A conceptual framework of performance management.


To explain the role of performance management, several lines of reasoning are considered. First, performance management is expected to be impacted by macro-level systemic factors, including the population eligible for child welfare system involvement, a locality’s economic profile with an emphasis on poverty, and geographic location, all of which are consistent with the traditional conceptualization of performance management as a dependent variable (path a in the diagram above). Each of these systemic influences has been examined in extant child welfare scholarship with considerable theoretic and empirical support. Second, the achievement of performance management standards is anticipated to affect organizational outcomes, as depicted using path b in the second traditional conceptualization. As with systemic factors (that is, the social, economic, and geographic factors that can influence organizational processes and outcomes), the literature pertaining to performance management’s influence on outcomes is situated within the study of child welfare.
With the traditional conceptual frameworks established, this dissertation argues that neither comprehensively captures the dynamic nature of performance management and therefore proposes an alternative conceptualization. In this way, performance management is considered to exist between paths a and b, being a dependent variable (that is, affected by systemic constraints) in path a and, in turn, acting as a causal mechanism that impacts outcome achievement in path b. To test this conceptual framework, performance management is empirically represented as a mediating variable, where its influence may be analyzed quantitatively as an intermediate path between systemic constraints and outcomes (path c). For performance management to exert a mediating influence, it is assumed that systemic constraints exhibit a direct effect on permanency outcomes. To evaluate the conceptual framework, two research questions and four affiliated hypotheses are posited.


Research Questions and Hypotheses
Based on the literature highlighted in the previous section, the dissertation is guided by
two central research questions and four associated hypotheses. The first research question is:
RQ1: To what extent do systemic factors affect a child welfare agency’s ability to achieve performance management standards?
Three hypotheses are proposed for research question 1. The first hypothesis involves the eligible population and is stated as:
H1: Agencies with larger eligible populations will be less likely to achieve performance management standards than those agencies with smaller populations.
The dissertation’s second hypothesis is grounded in the county’s economic profile and posits:
H2: Agencies that operate within economically stable counties will be more likely to achieve performance management standards than agencies that operate within economically struggling counties.
The third and final hypothesis associated with research question 1 considers the impact of a
county’s geographic location. Hypothesis 3 states:
H3: Agencies within urban settings will be more likely to achieve performance management standards than agencies within rural or frontier settings.
With performance management designated as a dependent variable in the first research
question, the second part of this dissertation reimagines its role as a predictor influencing
organizational outcomes. Toward that aim, the second research question is:
RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencies?
A fourth hypothesis is proposed to consider the directionality of the relationship between
performance management and outcomes. This final hypothesis proposes:
H4: Agencies that achieve performance management standards will have better permanency outcomes than agencies that do not achieve standards.


To determine levels of support for the proposed hypotheses, a quantitative research design was utilized. Descriptive and inferential statistics were employed for each of the four hypotheses, and implications for theory and practice are discussed.
Overview of Dissertation
This dissertation consists of five chapters. The first chapter has been this introduction, which has specified the purpose of the dissertation, provided a broad overview of the relevant literature, and articulated the dissertation’s conceptual framework, research questions, and hypotheses. The next chapter synthesizes pertinent literature within performance management and child welfare from which the research questions were derived. It also proposes the dissertation’s conceptual framework. Finally, this review of the literature concludes with the dissertation’s context of the Colorado child welfare system. The third chapter contains detailed descriptions of the dissertation’s methods, including the population and sample, data collection, empirical measures, and analytic strategy. The fourth chapter summarizes the results of the empirical design to determine the amount of support evidenced for the dissertation’s hypotheses. Finally, the fifth chapter discusses the dissertation’s implications and limitations and proposes directions for further research. Specifically, the discussion centers on the importance of aligning our theoretic understandings with the practical applications of performance management. Refining theory as a mechanism to inform practice can advance conceptual development in research while supporting sound empirical measurement in organizational settings; in the absence of such alignment, agencies’ abilities to achieve outcomes may be undermined and the theoretic utility of performance management limited.


II. REVIEW OF THE LITERATURE
This review of the literature will examine performance management within the context of theories related to public management and child welfare, including the concept’s importance to research and practice, its antecedents, and its impact on organizational outcomes. The literature review is organized in the following subsections: Definitions: What is Performance Management; Why Performance Management? Importance to Research and Practice; The Factors Affecting Performance Management in Organizations; Performance Management’s Impact on Organizational Outcome Attainment; Conceptual Framework; and Study Context.
Definitions: What is Performance Management?
In attempting to conceptualize performance management, it is readily apparent that no singular definition exists. Like other complex constructs in public management, performance management has undergone much theoretic revision. Or, as Alach (2017) succinctly stated, performance management is difficult to define. Despite its conceptual ambiguity, however, it can be argued that the core of performance management relates to the measurement of an organizational expectation, the results of which inform decision-making.
As Moynihan (2008) explains, performance management is defined as “a system that generates performance information through strategic planning and performance measurement routines, and connects this information to decision venues, where, ideally, the information influences a range of possible decisions” (p. 5). In a more complex specification, Pollitt (2013) conceptualized performance management as a dynamic system of interrelated elements, each with its own set of decision-making criteria, ambiguities, and implications for organizational effectiveness. According to Pollitt (2013), the elements of program activity, measurement, data, application of criteria (or standards), quantitative information synthesis, and informed decisions exist cyclically to produce a performance management system within governmental
organizations. Alternatively, performance management has been conceptualized as a process
described as “defining, controlling and managing both the achievement of outcomes as well as
the means used to achieve these results” (Broadbent & Laughlin, 2009, p. 283, as cited in Alach,
2017). Of the several conceptual definitions, one is particularly applicable to this dissertation.
The definition elected for application here was posed by Radnor and Barnes in 2007. In
articulating their conceptualization, the authors distinguished two interrelated constructs:
performance measurement and performance management.
Performance measurement is the quantifying, either quantitatively or qualitatively, of the input, output or level of activity of an event or process. Performance management is action, based on performance measures and reporting, which results in improvements in behavior, motivation and processes and promotes innovation. (Radnor & Barnes, 2007, p. 393)
Radnor and Barnes’ (2007) distinction between performance measurement and management is important because it appears to reflect accurately the complexities of performance management in practice. Organizations must decide which actions to take based on quantified performance measurement, the consequences of which are presumed to affect broader organizational behaviors. Radnor and Barnes’ (2007) definition also assumes positivity in the direction of the relationship, meaning that better performance management is expected to improve organizational effectiveness. This dissertation accepts this foundational definition but challenges the extent to which performance management may be affected by exogenous systemic constraints, which are anticipated to impact the actionable steps undertaken by agencies in pursuit of outcome achievement. Before that discussion, however, it is important to first explain the theoretical and practical rationale for studying performance management.


Why Performance Management? Importance to Research and Practice
Despite its permanence in the public management literature, the question “why study performance management?” remains. This sub-section provides justification as to why this concept continues to be relevant to both research and practice.
The concept of performance management has long been of theoretic importance to the fields of public management and administration (Boyne, Meier, O’Toole, Jr., & Walker, 2005). As entities that must be responsive to the needs of various stakeholders-including the legislature, auditors, media, the public, and non-governmental organizations-often simultaneously, government agencies have relied on performance management to inform decision-making processes, justify expenditures, and improve outcomes (Moynihan & Pandey, 2010).
The contemporary emphasis on what can be referred to as performance-based organizations (Lynn Jr., 2006) is not entirely surprising. Organizations have historically been thought to justify their importance through the attainment of some tangible or measurable goal. Indeed, as Thompson originally posited in 1967, organizations are rationally attuned to performance measures and spend considerable energies in the pursuit of their achievement (Thompson, 2003). Among his propositions, Thompson (2003) argued that “complex organizations are most alert to and emphasize scoring well on those criteria which are most visible to important task-environment elements” (p. 90). Yet, despite organizations’ apparent recognition of performance management’s importance, government agencies have long been criticized for being ineffective, inefficient, and unresponsive to stakeholder interests (see Peters, 1996, for an overview of such perspectives across theoretic traditions).
The result of such criticisms was the theoretic and practical shift away from traditional bureaucratic mechanisms to an entrepreneurial, resource-minimal, enterprise-inspired approach
to managing government organizations, more commonly known as New Public Management (Hood & Dixon, 2015). According to Fryer, Antony, and Ogden (2019), “In order to change the public’s views, governments brought in legislation, changed the language that was used to describe the public sector and introduced concepts such as ‘value for money’ and ‘performance measurement.’ Hence ‘new public management’ (NPM) was born” (p. 479). For all its ubiquity, NPM did not ultimately survive the test of time, and was replaced by such institutional reconceptualizations as “the new public service” (Denhardt & Denhardt, 2011) and “governance” (Bingham, Nabatchi, & O’Leary, 2005) that emphasized greater citizen and stakeholder involvement in governmental decision-making processes and promoted collaboration across sectors.
But, even as the tidal wave of New Public Management began to ebb, the push for governmental effectiveness remained intact. Around the same time that the popularly influential Reinventing Government (Osborne & Gaebler, 1992) and Breaking Through Bureaucracy (Barzelay, 1992) made their way into the public sector, the Clinton Administration enacted the Government Performance and Results Act (GPRA), ushering in a new era of performance management (Bozeman & Feeney, 2011) that has been upheld by subsequent presidential administrations (Moynihan & Kroll, 2016).
With the concept of performance management continuing to remain central among organizations in practice, researchers have responded by asking how such information may be used to enhance organizational outputs. Over the last twenty-five years, this core line of inquiry has arisen in multiple instances. For example, in 1995, Behn questioned, “how can public managers use measures of the achievements of public agencies to produce even greater achievements?” (p. 321). Some thirteen years later, Van Dooren argued that “if we want to study
the successes and failures of performance movements, we have to study the use of performance information” (2008, p. 22).
By examining agency-determined indicators of performance management, this dissertation seeks to enhance practitioners’ knowledge and use of this key information to better inform decision-making and outcomes. With public managers often struggling to make adequate use of performance data (Moynihan & Pandey, 2010), this dissertation hopes to fill a gap by leveraging existing information for application in practice. Perhaps more importantly, by examining the link between performance management and child welfare permanency, this dissertation echoes the call of performance management scholars and experts to look beyond outputs or processes and consider the critical importance of organizational outcomes in achieving long-term social and institutional change (Van Dooren, 2011). In so doing, the dissertation hopes to contribute to our theoretic understanding of performance management, its antecedents, and its impact on these long-range outcomes.
The Factors Affecting Performance Management in Organizations
Given this dissertation’s central premise that performance management exists as a mediating influence between predictors and organizational outcomes, it is necessary to examine the factors thought to influence performance management. Such factors can be broadly categorized as endogenous-existing within the organization-and exogenous, originating outside of the agency. While internal organizational characteristics affecting performance management are well-known, less research has examined the impact of broader systemic influences on organizations’ ability to achieve and sustain performance management benchmarks (Walker & Andrews, 2013).


While systemic constraints may encompass an array of environmental or exogenous influences, three are especially historically important in the delivery of child welfare services: the size of the population eligible for system involvement; the economic health of the community in which the child welfare agency is based; and the geographic classification of the county. Each of these constructs is highlighted in the literature as systemically important to child welfare outcomes. Their definitions and importance in child welfare service delivery are discussed in greater detail in the subsections below.
Eligible Population
The size of the eligible population was determined to be a predictor of performance
management because of its implications for capacity. In this regard, two divergent arguments
have been proposed. The first argument is that the larger the population eligible for a service (in
this case, child welfare), the greater the potential for use of that service. In human service
organizations, which are already taxed by resource scarcity, more cases from a larger pool of
eligible persons may place additional undue strain on agencies already at workload capacity that
will subsequently struggle to achieve performance management standards and overall
effectiveness (Wulczyn & Halloran, 2017). Alternatively, the second argument claims that
organizations providing services in densely-populated areas are also more likely to have broader
service access and information sharing capabilities that improve performance management,
outweighing the capacity burdens relative to organizations that must be concerned with the daily
management of their dispersed populations (Arsneault, 2006; Andrews, Boyne, Moon, &
Walker, 2010). Even with such disparate perspectives, populations have long been recognized as
an external influence on organizations. As Kaufman posited:
the composition and distribution of the human population in which organizations form keep changing ... [a]nd the rates of change are not the same everywhere or over time ...
local and regional patterns as well as national and worldwide ones vary a great deal.
... Thus does demography intensify the variability of the organizational environment.
(1985, p. 38-39)
If demographic shifts are presumed to impact the environments in which organizations operate, and environmental conditions are tied to organizational capacity and effectiveness, then it follows that the nature of such demographic differences, including economic and geographic disparities, may contribute to the ability of child welfare agencies to achieve performance management standards. In other words, not only may variations in population density impact organizational performance, but economic and geographic differences may also help explain variation in the attainment of performance management standards.
Economic Profile
A locality’s economic profile refers to the overall economic health of a given community. Economic health has been measured in a variety of ways, but within the context of child welfare, the theme of poverty has remained integral to the narrative of out-of-home involvement (Barth, Wildfire, & Green, 2006). Specifically, child maltreatment-the overwhelming reason why children are placed in out-of-home care settings-occurs at substantially higher rates among poverty-stricken families (McGuinness & Schneider, 2007). As Gainsborough (2010) remarked, “Despite debate over the extent to which child welfare policy can and should be treated separately from poverty, the two are inextricably linked ... [as] children involved with the child welfare system disproportionately come from poor families” (p. 10). Thus, in considering a systemic constraint on performance management in child welfare, poverty, conceived here more broadly as an economic profile, was taken into account. The issue of poverty may be further compounded by a county’s geographic profile, where rurality has long been associated with a
lack of service availability that may otherwise mitigate entry into the child welfare system (Belanger & Stone, 2008).
Geography
Like the issue of economic health, a locality’s geographic classification has long been considered a systemic constraint on organizational performance in the context of child welfare (Rine, Morales, Vanyukevych, Durand, & Schroeder, 2012). Common barriers to the effective provision of services in rural contexts include challenges in retaining staff, reduced physical proximity to available services, increases in costs required to administer services across dispersed populations, and cultural differences between rural communities and urban locales in which many services are based (Elgin & Carter, 2019). While the more commonly-accepted perspective holds that urbanicity is associated with greater resource availability and, subsequently, better organizational performance, a compelling counter-perspective was proposed by Wulczyn, Chen, and Courtney (2011). The authors’ rigorous empirical analysis demonstrated that, while poverty remained an important predictor of poorer outcomes, urbanicity was also a driver of lower outcomes, specifically reunification. Although there was little elaboration as to why, beyond statistical modeling, urbanicity performed opposite conventional wisdom, the incongruent expectations regarding urban settings warrant further examination as a systemic constraint in this dissertation.
Performance Management’s Impact on Organizational Outcome Attainment
Having considered the factors that influence performance management, this literature review now turns to the effect that performance management is thought to have on the achievement of broader organizational outcomes. Since the implementation of GPRA in 1993, a vast body of research has examined the link between performance management and outcomes.
As covering such an expansive literature is beyond the scope of this dissertation, this subsection
focuses specifically on the performance management-outcome link within the context of child welfare, which is the setting for the current research.
The relationship between performance management and outcome achievement in child welfare agencies is well-established theoretically and supported empirically. According to McBeath and Meezan (2010), “the logic of hierarchical governance suggests that performance initiatives affect the administration and structure of service programming, which shape interactions between caseworkers and service recipients, thereby altering client outcomes” (p. i102). The authors further argue that child welfare systems, in particular, have been concerned about better understanding and improving the connection between performance and outcomes as they have generally been perceived to be ineffective and unresponsive to client needs (McBeath & Meezan, 2010).
Outcomes in child welfare are typically categorized in three ways: safety, well-being, and permanency. Safety-based outcomes refer to the prevalence and persistence of child maltreatment. The goal of child welfare agencies, then, is to mitigate such prevalence and persistence through investigation, referral, therapeutic interventions, and ultimately removal from an unsafe environment when other measures have failed. Well-being outcomes center on developing a child’s physical, mental, and educational health in order to prepare children for success upon exit from the system. Finally, permanency outcomes reflect the timeliness of permanency efforts, where permanency can take the form of either biological family reunification or adoption (D’Andrade, Osterling, & Austin, 2008). While safety and well-being are undoubtedly critical measures of success, permanency outcomes are the focus of this dissertation.
In any given year, over 20,000 children in the United States age out of the child welfare system without having achieved placement permanency in either a reunified or adoptive setting
(Lockwood, Friedman, & Christian, 2015). Children in the child welfare system are known to be at heightened risk for homelessness, substance abuse, and criminal involvement, and children who do not achieve permanency face these risks at even greater rates (Lockwood, Friedman, & Christian, 2015). To mitigate these negative long-term effects, considerable research has investigated the various factors driving permanency instability.
Both individual-level and organizational-level characteristics have been examined for their links to outcome achievement in child welfare systems. At the individual level, factors including behavioral distortions of children in care (Orsi, Lee, Winokur, & Pearson, 2018), children from families with complex substance abuse and mental health needs (Yampolskaya, Sharrock, Armstrong, Strozier, & Swanke, 2014), and case worker experience (Fluke, Corwin, Hollinshead, & Maher, 2016) have all been theoretically and empirically linked to permanency outcomes. At the organizational level, variables such as the availability and effectiveness of parenting intervention programs (Spieker, Oxford, & Fleming, 2014), placement distinctions, especially within kinship and congregate care settings (Carnochan, Lee, & Austin, 2013; Winokur, Holtan, & Batchelder, 2014), and performance-management driven systems of professionalism (Wastell, White, Broadhurst, Peckover, & Pithouse, 2010) have all been demonstrated to impact organizational outcome attainment.
In the context of permanency outcomes, the definition of performance management has been established in practice without much counter-argument theoretically. With respect to outcomes, then, performance management is defined as timeliness, with timeliness including an array of benchmarks throughout the duration of a child’s case, including timeliness of adjudication, first court hearing, and termination of parental rights (Flango, Gatowski, & Sydow, 2015). According to Flango, Gatowski, and Sydow (2015), the rationale behind establishing
timeliness measures was to “encourage states to begin the process of measuring court performance and when the first results came in, states would be encouraged to probe further-perhaps using other measures” (p. 21). Instead, child welfare agencies incorporated timeliness as a main facet of their performance management system, leading to an implicit definition of performance management as efficiency, rather than quality, of services. Despite the emphasis on efficiency, the core concept of performance management as an action undertaken by child welfare agencies remains intact and consistent with the broader theoretic definition articulated by Radnor and Barnes (2007).
Conceptual Framework
Based on the reviewed literature, it is argued that performance management is a dynamic concept with important implications for organizational outcome achievement. Towards that aim, the conceptual framework is built on the straightforward premise that performance management need not be exclusively examined as either predictor or outcome, but rather is better understood as an intermediate output, at once affected by broader systemic constraints and in turn driving organizational outcomes. To illustrate this foundational argument, Figure 2 below depicts a pathway diagram representing the dissertation’s conceptual framework.


[Figure 2 repeats the four-panel path diagram from Figure 1: performance management as a dependent variable (path a), as an independent variable (path b), as both dependent and independent variable (paths a and b), and as a mediator, with path c linking Systemic Constraints to Organizational Outcomes.]
Figure 2. A conceptual framework of performance management.
As illustrated in the figure above, performance management is traditionally thought to
exist in one of two ways. In the first conceptualization, represented as path a, performance management is treated as an outcome affected by broader systemic factors such as social and economic conditions within a service delivery area. In the second, depicted as path b, performance management is presented as a predictor driving organizational outcome attainment. Alternatively, this dissertation argues that examining either path in isolation does not provide a comprehensive understanding of the dynamic role performance management plays in organizations. Instead, the current conceptualization indicates that performance management is best understood as a link between path a (wherein the concept is treated in accordance with the first tradition as a dependent variable) and path b (where it is consistent with the other theoretic tradition as a predictor of outcome attainment). In order to test if this conceptual framework is appropriate, however, performance management must be empirically regarded as a mediating influence where its influence between systemic constraints and organizational outcomes can be
properly modeled. The supposition of mediation requires an adjacent line of argumentation; namely, that systemic constraints exert some direct effect on organizational outcome attainment. In the absence of such a relationship, performance management would have nothing to mediate.
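To make this mediation logic concrete, the sketch below illustrates one common way such an indirect effect can be estimated: the product-of-coefficients approach with a bootstrapped confidence interval, in the spirit of the bootstrapping procedure outlined in the methodology chapter. It is a minimal illustration only; the data are simulated, the variable names are hypothetical, and the sketch uses continuous measures with ordinary least squares for simplicity, whereas the dissertation's actual analyses treat performance management attainment and permanency as categorical and employ Firth's logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, hypothetical data: one observation per county.
# X = a systemic constraint (e.g., eligible population), M = performance
# management attainment, Y = a permanency outcome.
n = 64
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)                # path a
Y = 0.4 * M + 0.1 * X + rng.normal(size=n)      # path b plus a direct effect (path c)

def coef(predictors, y, col):
    """OLS coefficient for one predictor (intercept added automatically)."""
    design = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(design, y, rcond=None)[0][col + 1]

def indirect_effect(X, M, Y):
    a = coef([X], M, 0)       # effect of the constraint on performance management
    b = coef([M, X], Y, 0)    # effect of performance management on the outcome, controlling for X
    return a * b              # product-of-coefficients indirect effect

# Nonparametric bootstrap: resample counties with replacement.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

In this framing, a meaningful mediated effect is one whose bootstrap confidence interval excludes zero; as noted above, without some direct relationship between the systemic constraint and the outcome, performance management would have nothing to mediate.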
It should be noted that this is not the first study to posit the presence of a mediating variable historically conceptualized as either predictor or outcome. A recent example from Damoe, Hamid, and Sharif (2017) empirically demonstrated the mediation effect of organizational climate, typically conceived exclusively as either indicator or dependent variable, as the missing link between human resource management practices and human resource outcomes in the Libyan public sector. Indeed, much scholarship in varied contexts has examined mediating influences across a host of organizational variables ranging from process management’s mediating effect between senior leadership and performance in Chinese service firms (Zhang, Kang, & Hu, 2018) to the role of teamwork and employee satisfaction in tempering the relationship between sustainability-oriented human resource management and organizational performance (Lee, 2019), and even the impact of instructional practices on explaining the relationship between teacher and student motivation (Schiefele, 2017).
Therefore, while performance management may not typically be conceptualized as a mediating influence, comparable studies of organizational mediators provide a theoretical reason to suggest that this construct may uphold a similar form and function. To test its foundational assertion, this dissertation examines systemic constraints, performance management, and permanency outcomes across child welfare agencies in Colorado.
Study Context
Within the United States, the foster care system exists at many levels of government. Originating at the federal level, foster care policies and services are directed through the Children’s
Bureau, whose immediate parent organization is the Administration for Children and Families, itself a subsidiary of the Department of Health and Human Services. Despite funding under Title IV-E of the Social Security Act and occasional broad-sweeping policy reforms initiated at the national level, the provision of foster care services is almost exclusively delegated to state governments (Gainsborough, 2010). As relatively autonomous entities, individual states define the structure of service delivery most appropriate to fit their geographic and population needs. These structures generally fall under one of three categories as defined by the U.S. Department of Health and Human Services. The structures include state-administered, county-administered, and hybrid systems (U.S. DHHS, n.d.). Figure 3 below presents a visual display of the national landscape pertaining to these administrative structures.
Figure 3. Configuration of states’ administrative structures. Reproduced from the Child Welfare Information Gateway (U.S. DHHS, n.d.). Retrieved from https://www.childwelfare.gov/pubs/factsheets/services/


As depicted in the figure above, Colorado maintains one of the rarer organizational forms-the county-administered system. It is within this context of county-based service delivery that the empirical study exists.
Within Colorado, foster care service provision employs a “state-supervised and county-administered” system, meaning that provisional authority is devolved to the county level for the care of children in out-of-home placements (Howard-Moroney, 2016, p. 211). For purposes of this dissertation, the implication of a county-administered system is that county agencies will be the focus of analysis (rather than the state) as they are directly responsible for managing children’s care.
As a state-supervised, county-administered system, Colorado is dual-layered in its provision of child welfare services. At the state level, Colorado’s supervising agency is the Department of Human Services, which is responsible for ensuring counties’ compliance with federally-mandated outcomes, state statutory requirements, budgeting maintenance and the allocation of resources, and licensure of private organizations that assist in out-of-home placement for children under county care (CDHS Child Welfare, n.d.). At the local level, all sixty-four counties in Colorado maintain child welfare agencies that are responsible for the direct provision of services, which include investigating reports of child abuse and neglect, completing assessments that determine whether out-of-home removal is appropriate based on the initial abuse and neglect referral, overseeing the cases of children placed in out-of-home care, managing contacts with treatment providers, private placement providers, and school systems to ensure a continuum of care, and working towards permanency to either reunify or find adoptive families for children (CDHS Child Welfare, n.d.).


To determine counties’ success in achieving such a broad array of objectives, Colorado maintains an online database, TRAILS, which operates as the state’s federally-required statewide automated child welfare information system (SACWIS). SACWIS is funded at the federal level of government under the Children’s Bureau and exists to provide a comprehensive case management system that enables better performance management and permanency outcomes through the tracking of such measures as the timeliness of assessment closure after a report of child abuse or neglect has been filed and the percentage of children who achieve permanency placements within twelve months of entry into out-of-home care environments (NCSL Child Welfare Information Systems, 2015).
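As a rough illustration of how one such tracked measure might be computed from case-level records, the sketch below calculates the share of out-of-home entries achieving permanency within twelve months, by county. The records and column names are hypothetical and are not drawn from the actual TRAILS/SACWIS schema.

```python
import pandas as pd

# Hypothetical, simplified case-level records; column names are illustrative only.
cases = pd.DataFrame({
    "county": ["Denver", "Denver", "Jackson", "Jefferson"],
    "removal_date": pd.to_datetime(["2015-07-10", "2015-09-01", "2015-08-15", "2015-10-20"]),
    "permanency_date": pd.to_datetime(["2016-05-01", None, "2016-06-30", "2017-01-15"]),
})

# A case counts toward the measure if permanency was achieved within
# twelve months of entry into out-of-home care.
window = pd.Timedelta(days=365)
cases["permanent_within_12mo"] = (
    cases["permanency_date"].notna()
    & (cases["permanency_date"] - cases["removal_date"] <= window)
)

# Share of entries achieving timely permanency, by county.
print(cases.groupby("county")["permanent_within_12mo"].mean())
```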
Beyond maintaining a robust historical record of child welfare metrics from which a thorough analysis could be constructed, Colorado’s counties exhibit considerable variation across the independent and dependent variables of interest in this dissertation. The sixty-four counties differ across their eligible populations, their levels of local economic stability, and their geographic classifications. Success in achieving performance management standards has been varied, as has success in securing permanency placements for children in out-of-home care. Counties also rely to different extents on external placement providers for the delivery of services and have notable ranges of children under their care (ranging from smaller counties such as Jackson and Huerfano, which often have no children removed from a family unit, to metro areas such as Denver and Jefferson County, which have active populations in the hundreds during any given fiscal year). Such variability provides the foundation for a sound empirical study, in addition to the theoretic and practical considerations presented below.
Child welfare is an appropriate study context for the following reasons. Many critical elements of child welfare-accountability, contracting partnerships, turnover and staff retention,
and, of course, performance management, to name a few-have received either scant attention or yielded inconclusive findings regarding best practices, making it a rich context for further theoretic application and empirical inquiry (see, e.g., Hwang, 2016; Willis, Chavkin, & Leung, 2016; Collins-Camargo & McBeath, 2017; Jolles, Collins-Camargo, McBeath, Bunger, & Chuang, 2017). Beyond topical coverage, child welfare agencies, like many human service organizations, are presumed to be impacted by systemic constraints. In a contemporary era reinforcing calls for greater performance management and organizational outcome attainment in the face of resource scarcity and competing demands for responsiveness to stakeholders, an understanding of what factors truly influence organizational effectiveness in both its immediate and long-term forms is critical and timely. Finally, there is a practical rationale that should not be overlooked.
Nationally, nearly 400,000 children reside in foster care (Children’s Bureau, 2016). With such high numbers of cases in a system that was designed to protect children from adverse childhood experiences, it is perhaps not surprising that child maltreatment is now considered a preeminent public health crisis (Latzman, Lokey, Lesesne, Klevens, Cheung, Condron, & Garraza, 2019). In Colorado alone, some 80,000 referrals for child abuse and neglect are made annually. Of the referrals made in 2015, over 29,000 were referred for further investigation, and more than 5,000 children were ultimately placed in out-of-home care (Child Welfare League, 2017).
Research has continually concluded that children placed in child welfare systems are at heightened risk for substance abuse, homelessness, involvement in criminal activity, unemployment, and suicidal ideation, among other adverse prospects (see, for example: Katz, Au, Singal, Brownell, Roos, Martens... Sareen, 2011; Shook, Goodkind, Pohlig, Schelbe, Herring, & Kim, 2011; Fowler, Marcal, Zhang, Day, & Landsverk, 2017; Yampolskaya, Chuang, & Walker, 2019). The quality of care and services received while in out-of-home placements is essential to mitigating such long-term risks (Lockwood et al., 2015). Therefore, a study examining the factors influencing performance management and, by extension, outcome achievement, serves to inform practitioners about their organizations' general areas of strength and opportunities for improvement in order to facilitate far-reaching positive effects for youth in their care.
With both theoretic and practical rationales considered, this dissertation now turns to a presentation of the methods that were employed to empirically assess the degree to which performance management was subject to exogenous influences and the impact of performance management on achieving permanency outcomes in Colorado’s child welfare system.
III. METHODOLOGY
Having identified the prominent literature related to performance management and child welfare, this chapter describes the dissertation’s methodology and consists of the following subsections: Research Questions and Hypotheses; Population and Sample; Data Collection; Empirical Measures; and Data Analysis.
Research Questions and Hypotheses
The reviewed literature falls broadly under three categories: systemic constraints; performance management; and permanency outcomes. Systemic constraints include the eligible population for services, the local economic profile, and the area’s geographical classification.
The larger concept of systemic factors drove this dissertation’s first research question, while the constructs contained therein were used to develop the affiliated hypotheses. The first research question is:
RQ1: To what extent do systemic factors affect a child welfare agency’s ability to achieve performance management standards?
Associated with this question, three hypotheses are proposed to examine the relationship between systemic influences and performance management. The first hypothesis involves the eligible population and is stated as:
H1: Agencies with larger eligible populations will be less likely to achieve performance management standards than those agencies with smaller populations.
The logic underlying the direction of the first hypothesis is that locations with larger populations will have a greater volume of families with potential involvement in the child welfare system, thereby putting a strain on agency resources and mitigating the likelihood of achieving performance management standards (Wulczyn & Halloran, 2017).
The dissertation's second hypothesis is grounded in the county's economic profile and posits:
H2: Agencies that operate within economically stable counties will be more likely to achieve performance management standards than agencies that operate within economically struggling counties.
Extant research has continually examined and reinforced the link between poverty and child welfare involvement. Therefore, the second hypothesis accepts this foundation and proposes that economically stable counties will have less entry into the child welfare system than communities that are struggling. Like the first hypothesis, the second reiterates that greater involvement will place greater strain on agencies that may subsequently struggle to achieve performance management standards (Barth, Wildfire, & Green, 2006).
The third and final hypothesis associated with research question 1 considers the impact of a county's geographic location. Hypothesis 3 states:
H3: Agencies within urban settings will be more likely to achieve performance management standards than agencies within rural or frontier settings.
Similar to the link between poverty and child welfare, urbanicity has long been regarded as favorable to organizational performance. In urban settings, resources are presumed to exist in greater quantity than the scarcity that defines most rural locations. With this presumed greater abundance of resources, urban areas are projected to achieve performance management standards more readily than their rural and frontier counterparts (Belanger & Stone, 2008).
With performance management designated as a dependent variable in the first research question, the second part of this dissertation reimagines its role as a predictor influencing organizational outcomes. Toward that aim, the second research question is:
RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencies?
A fourth hypothesis is proposed to consider the directionality of the relationship between performance management and outcomes. This final hypothesis states:
H4: Agencies that achieve performance management standards will have better permanency outcomes than agencies that do not achieve standards.
The logic underlying hypothesis four is straightforward. It argues that agencies performing better at earlier stages in child welfare cases will yield improved long-term outcomes relative to agencies that cannot meet performance management standards.
The remainder of this chapter details the analytic approaches employed to test the proposed hypotheses.
Population and Sample
The population for this dissertation includes all sixty-four counties in Colorado. Each county within the state has a Department of Human Services responsible for the administration of child welfare services under the county's jurisdiction (CDHS Child Welfare, n.d.). Because Colorado operates under a state-supervised, county-administered model wherein counties directly provide services either internally or in cooperation with private Child Placement Agencies (CPAs), counties, rather than the state, serve as the appropriate level of analysis.
Data were gathered on all sixty-four counties for fiscal years 2013 through 2017 across all measures for which data were available. Because the total N for this dissertation is small, it was critical to obtain as close to 100% of the population as possible to ensure that the sample was not substantially biased. In a similar study using a small N population (57 hospitals in Croatia), Zmuk, Lutilsky, and Dagija (2016) determined that a final sample exceeding at least 70% of the population was sufficient to generate confidence in the sample's overall representativeness, thereby making it appropriate for further statistical inference. For this research, then, a sample of at least forty-five counties was required for inferential analysis. Although some counties had missing data across a few measures, no analytic procedure ever captured fewer than the necessary forty-five counties. The sources of these measures are described below in the Data Collection subsection.
Data Collection
With IRB approval, data for this dissertation were synthesized from a variety of sources to produce a unique dataset. Data gathering occurred between September 2018 and February 2019. All data for this dissertation were taken from publicly-available sources that included: Colorado’s Department of Human Services; the Colorado State Demographer’s Office; the U.S. Bureau of Economic Analysis; the Colorado State Office of Rural Health; and the Colorado Department of Education. Each of these data sources and the corresponding variables extracted are discussed in further detail below.
Five of the ten variables for this dissertation were gathered from Colorado’s Department of Human Services (CDHS). CDHS is Colorado’s state-level supervisory agency responsible for overseeing the sixty-four counties that deliver child welfare services. As part of this supervisory role, CDHS maintains TRAILS, Colorado’s Statewide Automated Child Welfare Information System (SACWIS). Importing and updating data in SACWIS was federally mandated from 1993 until 2016, when states were granted the autonomy to amend their electronic record systems in a manner best suited to the state’s technological capabilities and informational needs (DHHS, 2016). Despite the decentralization allowances, Colorado elected to retain TRAILS until a more advanced replacement became available (Child Welfare Data and Accountability, n.d.). As of 2019, TRAILS is still operational, and has mostly consistent records for all counties between FY2013 and FY2017.
Four of the five variables obtained from CDHS were included in the TRAILS system. These variables were: the timeliness of assessment closure (for the concept performance management); children under county care (for the control variable workload); placement type (for the control variable out-of-home setting); and length of stay under county care (for the concept permanency outcomes). Every variable was extracted from the TRAILS website by county for each fiscal year and transferred into Excel for cleaning and transformation.
Beyond TRAILS, CDHS also reports details on the child welfare providers licensed by the state of Colorado. The variable "availability of providers", used to capture the control variable of resources, was constructed from the provider list (CDHS Service Providers, n.d.). Like the TRAILS data, the provider list was exported into Excel for cleaning. The provider list included the organizational names of the licensed child placement agencies in Colorado and a link to each organization's website when available. The website data were used to inform the provider's service area, and an online search engine (i.e., Google) was used to obtain information on those providers for whom a website was not listed. More details on how provider data were constructed can be found in the Empirical Measures subsection below.
The remaining five variables were collected from four different sources. Information on the eligible population was gathered from the Colorado State Demographer’s Office website (State Demography Office, n.d.). Within the website, data were identified after performing the following search sequence:
Home > Population > Data > Profile Lookup.
The total number of housing units was then selected for every county between FY2013 and FY2017. Data downloaded automatically into Excel.
For a county’s economic profile, data were collected from the U.S. Bureau of Economic Analysis’s website (Bureau of Economic Analysis, n.d.). From the home page, the following search sequence was performed:
Data > Data by Topic > Employment > Employment by County, Metro, and Other Areas > Interactive Data > Interactive Tables: Regional Accounts Data > Local Area Personal Income and Employment > Economic Profile (CAINC30/MAINC30) > CAINC30 -Economic Profile (counties and states) > Colorado > All Counties and All Statistics in Tables > Years 2013 to 2017.
The economic profile data downloaded automatically into Excel.
Geographic data were obtained from a series of county-level maps produced by the Colorado State Office of Rural Health. These maps were made publicly available on the Colorado Rural Health Center’s website (Colorado Rural Health Center, n.d.) and were found using the following search sequence:
Home > Resources > Maps > Learn More
The maps were used to extract the names of counties falling within certain geocoding designations.
Finally, the variable “free and reduced lunch eligibility,” used as a measure of an economic profile, was obtained from the Colorado Department of Education’s website (Colorado Department of Education, n.d.). The following search sequence was performed to gather eligibility information:
Home > School & District Information > Colorado Education Statistics > Pupil Membership > Previous School Years > 2012-2013 Pupil Membership Data > District Level Data > K-12 Free and Reduced Lunch Eligibility by County and District (XLS).
The last two sequences in the search function were replicated for each year in the study and data were exported automatically to Excel.
All variables were cleaned, transformed, coded, and merged to yield a comprehensive county-level dataset. This dataset was then used to conduct a secondary analysis. Although the data collection process involved the extraction, cleaning, and transformation of datapoints into a new dataset, the dataset is considered a secondary source as it did not involve the firsthand collection of new information not previously available (Hox & Boeije, 2005).
Because the population of interest is small, thereby limiting statistical power (Cohen, 1992), collecting information on a single year’s worth of data would be insufficient to generate robust conclusions. Therefore, all data were extracted over a five-year span, from fiscal year 2013 through fiscal year 2017. The state of Colorado operates on fiscal years defined as July 1 to June 30. Rather than extract calendar years, data were aligned with fiscal years for consistency with Colorado’s system of operation.
Because child welfare data exist in the aggregate, it was impossible to determine whether observations were independent or included the same children at multiple points in time. As such, panel data could not be constructed. In lieu of panel data, then, this analysis relied upon a time series of repeated cross-sections, wherein each of the five fiscal years was treated as an independent cohort in order to mitigate the likelihood of violating the independence of observations assumption. While causal inference is limited relative to panel data, when modeled correctly, repeated cross-sections can inform robust understandings of trends over time (Moffitt, 1993). Observations within the ten variables were thus collected for each fiscal year. The following subsection defines the empirical measures used to conduct the analysis.
Empirical Measures
Eight constructs were considered for this dissertation. In addition to the primary concept of interest, performance management, systemic factors including the eligible population, economic profiles, and geography were examined for their influences on achieving management standards. Performance management was then studied for its relationship with permanency outcomes (concept 8). The three remaining constructs (workload, types of out-of-home settings, and resource availability) were analyzed as control variables. Table 1 below summarizes these key concepts alongside the research questions, hypotheses, variables, measures, and data sources. Detailed descriptions of each concept's empirical measures follow.
Table 1. Overview of Research Questions, Concepts, and Measures
Research Questions | Concepts | Hypotheses | Variables | Measures | Data Sources
RQ1: To what extent do systemic factors affect a child welfare agency's ability to achieve performance management standards? | Eligible Population (IV) | H1: Agencies with larger eligible populations will be less likely to achieve PM standards than those agencies with smaller populations. | Housing Units | Number of housing units within a county per fiscal year | CO State Demography Office
 | Economic Profile (IV) | H2: Agencies that operate within economically stable counties will be more likely to achieve PM standards than agencies that operate within economically struggling counties. | Income | Per capita personal income per fiscal year | U.S. Bureau of Economic Analysis
 | | | Unemployment | Per capita unemployment insurance compensation per fiscal year | U.S. Bureau of Economic Analysis
 | | | Free and Reduced Lunch Eligibility | Percentage of students qualifying for free and reduced lunch within a county per fiscal year | CO Dept. of Education
 | Geography (IV) | H3: Agencies within urban settings will be more likely to achieve PM standards than agencies within rural or frontier settings. | Geocoding Classification | County designation as urban, rural, or frontier per fiscal year | CO State Office of Rural Health
 | Performance Management (DV) | | Timeliness of Assessment Closure | Percentage of assessments completed within 60 days of referral; binary indicator | CO Dept. of Human Services (TRAILS)
RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencies? | Performance Management (IV) | H4: Agencies that achieve PM standards will have better permanency outcomes than agencies that do not achieve standards. | Timeliness of Assessment Closure | Percentage of assessments completed within 60 days of initial referral | CO Dept. of Human Services (TRAILS)
 | Workload (Control) | | Children under County Care | Number of children placed under county care per fiscal year | CO Dept. of Human Services (TRAILS)
 | Out-of-Home Setting (Control) | | Placement Type | Percentage of children in congregate, foster (non-family), kinship, and detention placements within a county per fiscal year | CO Dept. of Human Services (TRAILS)
 | Resources (Control) | | Availability of Providers | Number and type of child placement agencies within a county | CO Dept. of Human Services (Service Providers)
 | Permanency Outcomes (DV) | | Length of Stay under County Care | Percentage of children under county care who achieved permanency within 12 months of entry; binary indicator | CO Dept. of Human Services (TRAILS)
Systemic Factors Influencing Performance Management (Concepts 1-3)
Three systemic factors were analyzed for their potential impacts on performance management. These factors include: the eligible population within a county, a county’s economic profile, and the county’s geography. The measures constructed for the variables associated with each concept are presented below.
Eligible Population
The eligible population within a county was conceptualized as a systemic constraint on performance management in child welfare. To capture eligible population, the variable "housing units" was collected from the Colorado State Demographer's Office. Unlike total population, which counts the number of individuals residing within a county, a housing unit is defined as "a house, an apartment, a group of rooms, or a single room occupied or intended for occupancy as a separate living quarter" (Census Bureau definitions, n.d., p. 3). Because children exist within family units, housing units were a better approximation of the population potentially eligible for child welfare services than a direct count of raw population numbers, which would not distinguish the number of families from the overall count of residents. The number of housing units was calculated by county for each fiscal year to measure the county's eligible population.
Economic Profile
A county's economic profile was also identified as a systemic factor hypothesized to influence performance management. Unlike eligible population and geography, which are relatively straightforward in both theoretic definition and empirical measurement, the concept of an economic profile is greater in its complexity. As an illustration, the U.S. Bureau of Economic Analysis (BEA)'s definition comprises 31 unique indicators that span various levels of analysis, ranging from total employment and retirement to proprietors' income and per capita dividends (BEA CAINC30, n.d.). Of the components listed in the BEA's definition, two were selected for inclusion in this dissertation: income and unemployment. Poverty has long been associated with heightened risk levels as low-income families are disproportionately represented in the child welfare system (e.g., Barth, Wildfire, & Green, 2006; McGuinness & Schneider, 2007). Therefore, income and unemployment were identified as proxies of poverty.
Both income and unemployment data were obtained from the BEA. Income was measured as per capita personal income, which is defined as “the personal income of a given area [county] divided by the resident population of the area” (BEA CAINC30, n.d.). Similarly, per capita unemployment insurance compensation was used to measure unemployment. Unemployment insurance compensation is defined as “benefits consisting mainly of the payments received by individuals under state-administered unemployment insurance (UI) programs, but they include the special benefits authorized by federal legislation for periods of high unemployment” (BEA CAINC30, n.d.). The total UI was divided by the resident population to create the per capita metric.
Finally, free and reduced lunch (FRL) eligibility was included as an additional variable within the broader construct of economic profile. Given poverty’s prevalence among families involved with child welfare, the metric of FRL was considered another indicator of a county’s overall economic health. Further, FRL has frequently been employed as a measure of socioeconomic status in social science research, most notably education (see Harwell & LeBeau, 2010, for an overview).
To measure free and reduced lunch eligibility, data were collected for each fiscal year from the Colorado Department of Education. These data existed at the school district level of analysis and were subsequently aggregated to produce county-level totals. The rate of FRL eligibility was calculated as the percentage of children qualifying for either free or reduced lunch out of the total number of children in the county's school systems.
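To illustrate the aggregation step in R (the actual cleaning was performed in Excel), a minimal sketch using hypothetical district-level counts and column names might look like the following:
# Hypothetical district-level data: county, FRL-eligible count, total enrollment
frl_district <- data.frame(
  county = c("Adams", "Adams", "Alamosa"),
  frl_eligible = c(1200, 800, 350),
  enrollment = c(2400, 2000, 700)
)
# Sum district counts within each county, then compute the county-level FRL rate
frl_county <- aggregate(cbind(frl_eligible, enrollment) ~ county,
                        data = frl_district, FUN = sum)
frl_county$frl_rate <- frl_county$frl_eligible / frl_county$enrollment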
Geography
When conceptualizing geography, perhaps the most common descriptor distinguishes locations based on urban, suburban, and rural classifications, having been captured in that manner since the 1950 census (Census Bureau, n.d.). In Colorado, however, the geographic distinctions define urban, rural, and frontier communities. According to the Colorado Rural Health Center, a nonprofit established in 1991 to serve as the State Office of Rural Health, all counties that “are not designated as parts of Metropolitan Areas” are considered rural, and frontier communities are further defined as “those counties with a population density of six or fewer persons per square mile” (Colorado Rural Health Center, 2017).
To determine appropriate geographical classifications for this dissertation, county maps generated by the Colorado Rural Health Center were examined. Figure 4 below reproduces one of these maps to highlight the county geocoding schema.
Figure 4. Colorado county geographic designations. Reproduced from the Colorado Rural Health Center (https://coruralhealth.wpengine.netdna-cdn.com/wp-content/uploads/2017/07/2017-Rural-County-Designation.pdf).
As illustrated in the figure above, Colorado had 17 urban, 24 rural, and 23 frontier counties in 2017. Because this dissertation’s timeframe includes four years before the map’s publication, prior maps were sought to ensure consistency in the geocoding classifications. An additional map from 2014 was publicly available and showed no differentiation in county designations (Colorado Rural Health Center, 2014), indicating that county classifications were consistent in 2014 and 2017. Although data from 2013 and additional points in time between 2014 and 2017 were not available, the analysis operated on the assumption that the county classifications were the same throughout the duration of the study given the comparable data at two points in time.
To measure geocoding classification, binary and categorical indicators were created for each of the designations. In the binary classification, urban counties received a “1” if they were classified as urban and a “0” if they were not. Similarly, the “1” and “0” coding was applied to rural and frontier communities. In this way, counties fitting within a particular designation could be readily compared with those of a different classification. For predictive modeling, the binary categories were collapsed into a single categorical variable, where urban counties were coded as “1”, rural counties as “2”, and frontier counties as “3”.
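A minimal R sketch of this coding scheme, assuming a county-level data frame with a text field for the designation (the field name and values here are illustrative, not the study dataset's actual names), is:
counties <- data.frame(county = c("Denver", "Moffat", "Jackson"),
                       geography = c("urban", "rural", "frontier"))
# Binary indicators: 1 if the county holds the designation, 0 otherwise
counties$urban    <- ifelse(counties$geography == "urban", 1, 0)
counties$rural    <- ifelse(counties$geography == "rural", 1, 0)
counties$frontier <- ifelse(counties$geography == "frontier", 1, 0)
# Collapsed categorical indicator for modeling: 1 = urban, 2 = rural, 3 = frontier
counties$geo_code <- match(counties$geography, c("urban", "rural", "frontier"))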
Performance Management (Concept 4)
Child welfare agencies in Colorado report quarterly numbers on two performance management metrics. The first metric is the timeliness of initial response and is measured as the percentage of children who are interviewed within policy guidelines. The second metric is timeliness of assessment closure and is measured as the percentage of assessments completed within sixty days of the initial referral date (CDHS Community Performance Center, n.d.).
While both metrics have been identified by the state of Colorado as important to child welfare outcomes, the latter metric is better suited to analysis in this dissertation. The reason for such an assertion is twofold. First, the metric "timeliness of initial response" is ill-defined. Policy guidelines for length of time to conduct an interview are not specified. Second, when considering the necessity of variation on a variable of interest, the timeliness of initial response metric showed little variation over time, with the average percentage of guideline-approved interviews hovering around 90% between fiscal years 2013 and 2017. Of all the quarters for which percentages were calculated during the study's time frame, the lowest reported percentage occurred in December 2013 at 84% and the highest percentage was calculated at 94% in June 2017 (University of Kansas Initial Response, n.d.). Given these percentages, it is clear that counties have consistently performed well on this metric and there is not substantial variation on which to observe differences.
Timeliness of assessment closure, on the other hand, has both a precise definition and enough variability across counties over time to warrant further investigation. Therefore, it has been selected as the performance management indicator for this dissertation. Four measures were constructed for timeliness of assessment closure. The first measure calculated the percentage of assessment closures successfully completed within each county during the last quarter of each fiscal year. This calculation was intended to reinforce consistency with the workload control variable, which was measured as the number of children placed under county care during the last quarter of the fiscal year (additional details may be found in the variable description for workload later in this chapter). Due to the aggregation of the data, it could not be determined whether the number of children recorded from one quarter to the next represented distinct individuals, meaning that each quarter reported additions to the existing numbers under care, or whether children who had been recorded in one quarter were subsequently re-reported in future quarters if they had not yet exited the system. As a result, only the last quarter within a fiscal year was recorded so as not to misrepresent the total child population.
In contrast to the ambiguous nature of child placement data, performance measurement data are recorded as unique observations within the TRAILS system. As such, each quarter in the fiscal year contains the number of new assessments completed and eventually closed. Therefore, in addition to the measure of performance management as the percentage of successful closures within sixty days during the last quarter, an average across all four quarters of each fiscal year was calculated for all counties that had available data. The logic behind the second measure was to capture as much variation as possible and not inadvertently favor or penalize a county based on a singular assessment period.
Finally, the third and fourth measures of performance management were dichotomous indicators. CDHS has articulated a statewide goal of achieving 90% success rates in closing assessments within sixty days of the initial referral (University of Kansas Assessment Closure, n.d.). The binary measure of performance management thus scored counties achieving the 90% threshold as a “1” and those that did not meet the threshold as a “0”. The binary indicator was initially calculated for the fourth quarter percentage and was replicated for the four-quarter average percentage. While a binary indicator may not be as descriptively nuanced as the percentage, given the limited number of observations, it was considered an important measure for predictive modeling.
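As an illustration of the dichotomization rule, a brief R sketch with made-up closure percentages (on a 0 to 1 scale) follows; the variable names are hypothetical:
# Percentage of assessments closed within 60 days (fourth quarter and four-quarter average)
close_q4  <- c(0.93, 0.88, 1.00, 0.76)
close_avg <- c(0.91, 0.85, 0.97, 0.81)
# Counties at or above the 90% CDHS goal are scored 1, all others 0
hit90_q4  <- ifelse(close_q4  >= 0.90, 1, 0)
hit90_avg <- ifelse(close_avg >= 0.90, 1, 0)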
Control Variables (Concepts 5-7)
Three control variables were included alongside performance management in an exploration of the factors affecting outcome achievement. The broader constructs from which the variables were operationalized included workload, out-of-home setting, and resources.
For this dissertation, workload referred to the number of children placed under county supervision. As discussed in the subsection on performance management, the difficulty with measuring workload was attributed to the aggregate nature of the data. Lacking the ability to distinguish new from existing placements, workload data could only be collected at one point in time during the fiscal year. Given that fourth quarter numbers are presented in the CDHS annual reports (Colorado OCYF, 2017), the last quarter of each fiscal year was used to estimate the active population. The concept of workload, then, was measured as the number of children under each county’s care on June 30th for each of the dissertation’s five fiscal years.
Out-of-Home Setting
Both child welfare research and practice have examined how the type of placement in which a child is situated affects different aspects of the child’s case, including their emotional well-being, future prospects, and timeliness to case closure (Orsi, Lee, Winokur, & Pearson, 2019). Specifically, movements away from congregate care settings in favor of kinship care and, to a lesser extent, foster care homes, have received considerable attention and scrutiny (Ehrle & Geen, 2002). To account for the importance of out-of-home setting, this dissertation constructed a measure of placement type using data from TRAILS (Univ. of KS, Placement Type, n.d.).
CDHS defines placement type under five broad categories. These categories are: congregate care; family-like setting; independent living arrangement; runaway/walkaway; and other. Congregate care placements include group homes, hospital/psychiatric facilities; residential placement; detention; and youth corrections secure placements. County foster homes, private child placement agency homes, kinship care, youth corrections foster placement, and trial home visits are captured under the family-like setting category. The remaining three categories are self-contained in that they are not broken into further subcategories (Univ. of KS, Placement Type, n.d.).
To measure placement type, a nominal indicator was created that aggregated many of the categories and subcategories. The measure of congregate care included group homes, hospital/psychiatric facilities, and residential placements. The subcategories of detention and youth corrections secure placements included in the CDHS definition were identified separately to distinguish juvenile-justice involved youth from those in more traditional congregate settings. Youth corrections foster placement, trial home visits, and county and child placement agency foster care placements were aggregated into a "foster care" category, while kinship care remained a distinct category. Finally, independent living arrangements, runaway/walkaway placements, and other were condensed into an "other" category.
Having identified the categories, relative percentages of placements were then calculated. In particular, the categories of “congregate care,” “foster care,” and “kinship care” were studied in greater detail as they comprised most placements whereas “detention” and “other” constituted relatively small percentages of the overall total.
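A short R sketch of how these relative percentages might be computed from county-level placement counts (the counts and column names below are illustrative only) is:
placements <- data.frame(
  county = c("Larimer", "Pueblo"),
  congregate = c(30, 12), foster = c(80, 40),
  kinship = c(25, 18), detention = c(5, 2), other = c(4, 3)
)
totals <- rowSums(placements[, -1])
# Share of each placement type within the county's total out-of-home population
placements$pct_congregate <- placements$congregate / totals
placements$pct_foster     <- placements$foster / totals
placements$pct_kinship    <- placements$kinship / totals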
Resources
The final control variable included in this dissertation involved a measure of the county's resources, defined as the number and type of service providers. Provider data were obtained from CDHS, which retains an active list of all licensed welfare-involved organizations (CDHS Service Providers, n.d.). According to CDHS, there are currently 106 licensed providers with whom county agencies contract for service provision. Provider names were listed alongside the type of service offered and, when available, the organization's website.
Of the 106 providers listed by CDHS, 5 organizations did not have any information accessible. A website was not provided, and additional searches could not positively identify the organizations’ locations. Without such identifying information, these organizations were excluded from the analysis. Additionally, 4 of the 106 organizations specialized in international-only adoption services and were therefore removed from the study as such services do not fall under county-level purview. The exclusion of the 9 agencies yielded a total sample of 92% of all providers.
In addition to the number of providers, CDHS identifies nine types of provider services. These types include: residential child care facilities; secure residential treatment facilities; day treatment facilities; psychiatric residential treatment facilities; governing bodies; child placement agency adoption; child placement agency foster homes; group homes; group centers; and homeless youth shelters (CDHS Service Providers, n.d.).
To determine the breadth of resource availability within a county, these service categories were combined with the organization’s location and service area. Two measures were therefore calculated related to resources. The first measure counted the total number of provider organizations within a county. A provider’s area of service was determined from the organization’s website. When the website was not listed or service area was not specified on the organization’s page, the physical address was employed as a proxy. The second measure used the CDHS list of service types to calculate the number of unique services available within a county. Many organizations offered a variety of services, making it important to distinguish the raw resources (i.e., total providers) from a more comprehensive view of the scope of resource availability.
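To illustrate the construction of the two resource measures, the following R sketch counts provider organizations and distinct service types by county from a long-format provider list with hypothetical entries:
providers <- data.frame(
  county   = c("Mesa", "Mesa", "Mesa", "Weld"),
  provider = c("Agency A", "Agency A", "Agency B", "Agency C"),
  service  = c("CPA foster homes", "CPA adoption", "group home", "CPA foster homes")
)
# Measure 1: total provider organizations serving each county
n_providers <- tapply(providers$provider, providers$county,
                      function(x) length(unique(x)))
# Measure 2: number of unique service types available within each county
n_services <- tapply(providers$service, providers$county,
                     function(x) length(unique(x)))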
With the control variables identified, the final step involved measuring the dependent variable, permanency outcomes.
Permanency Outcomes (Concept 8)
Permanency outcomes were selected as the dependent variable for this dissertation. As with performance management, permanency outcomes were identified and defined by CDHS. In examining the TRAILS database, three subsets of permanency outcomes were listed. These subsets include: permanency in 12 months for children in care; permanency in 12 months for children in care 12-23 months; and permanency for children in care over 24 months (CDHS Permanency Outcomes, n.d.).
According to CDHS (Permanency Outcomes, n.d.), the measure of permanency in 12 months is reported as the number of children who achieved permanency within 12 months of entering care, where permanency is defined as a "discharge to reunification, other relative, guardianship, or adoption" (Univ. of KS, Permanency in 12 Months, n.d.). The metric for permanency over 12-23 months is slightly more complex but refers to the number of children who achieved permanency within 12 months after having already been under county supervision between 12 and 23 months (Univ. of KS, Permanency 12-23 Months, n.d.). Similarly, the third measure of permanency for 24+ months captures the number of children who achieved permanency after having been under county supervision in excess of 24 months (Univ. of KS, Permanency 24 Months, n.d.).
Of the three potential measures, the metric "permanency in 12 months" was selected for this dissertation. Relative to the other two metrics, permanency in 12 months is the optimal goal and therefore perhaps the strongest indicator of outcome achievement. Further, from a practical standpoint, the lengthier measures of 12-23 months and 24+ months did not have sufficient historical records from which to analyze the observations. That is, while the 12-month measure yielded archival data on nearly every county, the latter metrics were missing more than 30% of their records, which rendered them unsuitable for analysis.
For permanency within 12 months, both the total number of children reported within a county and the percentage of successful reunifications were gathered. Additionally, a binary indicator was constructed that distinguished counties performing at or above average within a given fiscal year from those that underperformed. Unlike the performance management measure, which employed a CDHS-identified 90% threshold for assessment closure timeframes, the equivalent goal for permanency outcomes was established under federal, rather than state, guidelines (Univ. of KS, Permanency in 12 Months, n.d.). The federal threshold was not specified as a point of comparison among Colorado counties for two reasons. First, the federal threshold reported an average of 40% success across all U.S. states in achieving permanency outcomes within a 12-month period. Nearly every county in Colorado exceeded this threshold, which would not allow for variation to be observed. Second, even if variation was noticed, the reported 40% threshold was consistent across all fiscal years, suggesting that either the national average did not change over time or that only the most current average is reported. Rather than rely on an ambiguous measure, calculating a binary variable based on the average performance of the counties under study provided a more accurate comparison. Thus, the binary measure of permanency outcomes designated counties performing at or above the statewide average in a given fiscal year as a "1" and those performing below the statewide average as a "0".
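A minimal R sketch of that coding rule, computed within a single fiscal year with made-up permanency rates, follows:
perm_rate <- c(0.72, 0.55, 0.66, 0.48)          # county permanency-in-12-months rates
state_avg <- mean(perm_rate, na.rm = TRUE)      # statewide average for the fiscal year
above_avg_perm <- ifelse(perm_rate >= state_avg, 1, 0)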
With all variables collected and measured, the dissertation next proceeded to determine an analytic strategy, which is reviewed in the subsection below.
Data Analysis
This dissertation's analytic approach was quantitative and included both descriptive and inferential statistics. Data were collected and cleaned in Excel before being transferred to SPSS Statistics version 25 (SPSS) or R 3.5.1 (R) for further analysis. SPSS was able to accommodate all descriptive and some inferential procedures but did not have the capability to perform more advanced modeling techniques. Therefore, R was employed to conduct such tests. The remainder of this section describes the analytic approaches utilized and consists of the following subsections: Descriptive Statistics; Means Comparisons, Correlations, and Chi-square; Logistic Regression for Rare Events; and Testing for Mediation: Bootstrapping.
Descriptive Statistics
This dissertation employed a variety of descriptive statistics to understand thoroughly the nature of the variables being analyzed. Measures of central tendency (means and medians) and dispersion (standard deviation) were analyzed alongside measures of distribution normality (skewness and kurtosis). Additionally, a two-step clustering procedure was employed to better understand how counties naturally grouped in the data across a variety of metrics.
Cluster analysis is a method of inquiry designed to identify similar groups of entities or objects in a sample (Mooi & Sarstedt, 2011). Its main purpose is to systematically break a dataset into groups or "clusters" that share the most similar characteristics in order to identify patterns across large amounts of data. There are several variations of clustering that employ different underlying modeling algorithms to measure similarities and distances between groups, but these variants all share the same objective: to emphasize similar characteristics while separating data points that look most different, as shown in Figure 5 below.
Figure 5. Visual representation of cluster formation. Reproduced from Pandre (2012).
As Figure 5 illustrates, cases that share the most similar characteristics are joined in a cluster and are differentiated from other clusters by the magnitude or "distance" of their differences. The specific clustering procedure selected for this dissertation was two-step cluster analysis, generated in SPSS. Two-step cluster analysis distinguishes similar groups using a two-step procedure. The first step involves creating pre-clusters so that the second step can more easily group like cases without getting "lost" or "stuck" in the vast amounts of data.
Having determined the pre-cluster formations, the second step of the analysis employs hierarchical clustering to finalize the precise number of groups that exist in the dataset. Hierarchical clustering is an agglomerative recursive merging procedure wherein similar entities are joined together repeatedly until all cases can hypothetically form one whole cluster. (There are two primary distance criteria by which to calculate similarities within groups and distances between them: log-likelihood and Euclidean. For this analysis, log-likelihood distance was employed as it accommodated both continuous and categorical variables; IBM, n.d.) Of course, in reality, the formation of a single cluster is not particularly useful as it does not distinguish groups of observations based on similarities or differences. Indeed, the presence of one cluster would merely indicate that every county is so similar to every other that no differences between them can be identified. Alternatively, then, the algorithm stops merging when it has determined an optimal number of clusters.
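Because the TwoStep procedure is specific to SPSS, it is not reproduced here; as a simplified illustration of the agglomerative hierarchical step only, the following R sketch applies hierarchical clustering with Euclidean distance and Ward linkage to synthetic, standardized continuous measures (the dissertation's analysis instead used SPSS's log-likelihood distance to accommodate categorical variables):
# Simplified illustration of agglomerative hierarchical clustering on two
# standardized continuous measures (synthetic values, not study data)
set.seed(1)
d <- data.frame(housing_units = rnorm(20), income = rnorm(20))
hc <- hclust(dist(scale(d)), method = "ward.D2")   # Euclidean distance, Ward linkage
cluster_membership <- cutree(hc, k = 3)            # cut the tree at three clusters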
Descriptive information gathered from the above-listed procedures was used to inform the predictive modeling techniques. Overviews of these techniques are presented in the subsections that follow.
Means Comparisons, Correlations, and Chi-square
The first level of inferential analysis involved calculating test statistics for differences in means across groups and correlations between variables. To compute means comparisons and correlations, parametric and non-parametric approaches were considered. For means comparisons, the parametric form was an independent samples t-test. The non-parametric equivalent was a Mann-Whitney U test (Everitt & Hothorn, 2010). Two grouping variables were identified for these tests. The first group distinguished counties using the binary indicator for performance management, where counties that achieved the threshold in a given year were denoted as a "1", and counties failing to reach the standard were coded as a "0". The counties were then compared for differences across the independent variables. The second group examined differences in means across the independent variables when the outcome of interest (that is, the grouping variable) was permanency, where counties that achieved a rate of reunification at or above average were coded as a "1" and those counties that fell below the average were denoted by a "0".
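Assuming a county-level data frame containing the binary performance management indicator and a continuous predictor such as housing units (names and values below are hypothetical), the parametric and non-parametric comparisons take the following form in R:
cw <- data.frame(hit90 = c(1, 1, 0, 0, 1, 0),
                 housing_units = c(9400, 120000, 3100, 800, 45000, 2600))
t.test(housing_units ~ hit90, data = cw)       # independent samples t-test
wilcox.test(housing_units ~ hit90, data = cw)  # Mann-Whitney U equivalent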
To calculate correlations between variables, three types of inferential statistics were considered: Pearson's r, Spearman's rho, and chi-square. Pearson's r and Spearman's rho are both measures of association between continuous variables but differ in their specifications for the functional form required (Hauke & Kossowski, 2011). Simply stated, Pearson's r is appropriate when variables exhibit normality in their distributions, while Spearman's rho accommodates non-parametric data. When variables are categorical (in this case, the dichotomous measures of performance management and permanency outcomes), chi-square is the appropriate test statistic to generate. As McHugh (2013) explains: "the Chi-square is robust with respect to the distribution of the data. Specifically, it does not require equality of variances among the study groups or homoscedasticity in the data" (p. 143). Therefore, chi-square was employed as a supplement to Pearson's r and Spearman's rho coefficients.
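A companion R sketch of the association tests, again using hypothetical county-level values, might be:
cw <- data.frame(hit90 = c(1, 1, 0, 0, 1, 0),
                 above_avg_perm = c(1, 1, 0, 1, 0, 0),
                 housing_units = c(9400, 120000, 3100, 800, 45000, 2600),
                 close_pct = c(0.95, 0.88, 0.71, 0.64, 0.97, 0.82))
cor.test(cw$housing_units, cw$close_pct, method = "pearson")   # parametric association
cor.test(cw$housing_units, cw$close_pct, method = "spearman")  # rank-based association
# Chi-square test of independence between the two dichotomous indicators
# (with only six illustrative counties this produces a small-sample warning)
chisq.test(table(cw$hit90, cw$above_avg_perm))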
Logistic Regression for Rare Events
Traditional logistic regression models, or logits, are employed when the dependent variable is dichotomous (Everitt & Hothorn, 2010). Logits are frequently employed in the social sciences but may suffer from bias when the total sample size or the relative number of observations on the event (that is, the "1," where the "0" refers to a non-event) is too small (Bergtold, Yeager, & Featherstone, 2018). Bias emerges from the model's inability to properly calculate the coefficients, which in turn leads to overestimated odds ratios, and may therefore increase the likelihood of committing a Type I error (Nemes, Jonasson, Genell, & Steineck, 2009). When depicted visually, this bias takes the form of overlapping densities, where the event (that is, the "1," represented by the short lines on the horizontal axis in Figure 6 below) may be mis-specified as a non-event (or vice versa) because there exists an insufficient number of observations from which the model may compute maximum likelihood coefficients (King & Zeng, 2001).
Figure 6. Rare event bias in logistic regression. Reproduced from King and Zeng (2001, p. 146).
To mitigate the impact of model mis-specification, a bias-reduction alternative to logistic regression was developed by Firth (1993). The basic premise of Firth’s logit was to penalize the maximum likelihood coefficients as a means by which to reduce biased estimates (Firth, 1993). Despite the development of more complex bias-reduction models, Firth’s logit remains among the most common and widely accepted modeling procedures for rare events data (Puhr, Heinze, Nold, Lusa, & Geroldinger, 2017).
Because this dissertation has a maximum possible N of 64 (reflecting the total population of child welfare agencies in Colorado), the likelihood of bias in a traditional regression is high. Therefore, Firth’s logit was selected as the technique of choice when the dependent variables assumed their binary form. The basic equation for a rare events logit in R is as follows:
fit <- logistf(depvar_y ~ indvar_x1 + indvar_x2 + ..., data = dataset)
Where fit is the statistical model's output, logistf specifies Firth's logistic regression as the procedure of choice, depvar_y is the dependent variable (either performance management or permanency outcomes depending on the phase of the analysis), indvar_x(x) are the predictor variables, and the data argument identifies the dataset in which the variables are located.
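As a concrete, runnable illustration of this specification (using simulated data and simplified variable names rather than the study dataset; the logistf package must be installed):
library(logistf)
set.seed(42)
toy <- data.frame(hit90 = rbinom(60, 1, 0.2),          # simulated binary PM indicator
                  housing_units = rlnorm(60, 9, 1),
                  income = rnorm(60, 45000, 12000))
fit <- logistf(hit90 ~ housing_units + income, data = toy)  # Firth's penalized logit
summary(fit)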
For this dissertation, four iterations of the Firth logistic regression model were specified. Each of these four models was replicated across every year of the study for a total of 20 models. The modeling equations are:
Model_A <- logistf(Hit90PrcntClose_4Qrts_year ~ HousingUnits_year + FYyearPCPI + FYyearPCUIC + PercentFreeReduced_year + Geography, data = dataset)
Model_B <- logistf(AboveAvgPerm ~ PrcntSucc4Qrts_year, data = dataset)
Model_C <- logistf(AboveAvgPerm ~ PrcntSucc4Qrts_year + PrcntFosterCare_year + TotalServices, data = dataset)
Model_D <- logistf(AboveAvgPerm ~ HousingUnits_year + FYyearPCPI + FYyearPCUIC + PercentFreeReduced_year + Geography, data = dataset)
Model A was constructed to capture the potential impact of systemic constraints on performance management. The binary indicator of performance management was selected as the dependent variable, while housing units, income, unemployment insurance compensation, free and reduced lunch eligibility, and geography were input as the predictor variables. Model B examined the relationship between performance management and permanency outcomes, where the measure of performance management was percent success across four quarters of a fiscal year and the binary measure of permanency outcomes served as the dependent variable. Model C considered the effect of performance management when controlling for hypothesized predictors including out-of-home placement type and the number of providers available. Finally, Model D sought to determine the direct effects of the original predictor variables from Model A on permanency outcomes.
Testing for Mediation: Bootstrapping
As the core premise of this dissertation argues that performance management exists as a mediating influence between systemic constraints and permanency outcomes, modeling these effects separately does not provide a complete understanding of the role performance management has in tempering the impact of the other predictor variables. Instead, tests for mediation must be conducted in addition to the above-mentioned modeling procedures.
There are two primary methods used in mediation analysis: Baron and Kenny’s causal steps approach and bootstrapping, with the former being arguably the most commonly-employed technique (Hayes, 2009). In brief, Baron and Kenny’s (1986) model requires an estimation of the causal paths that could exist between a predictor variable (X) and an outcome of interest (Y). Having determined an exhaustive number of pathways, statistical criteria are used to ascertain whether a third variable (M) exists as a mediator. M is presumed to be a mediator if the multiplicative product of its relationships between X and Y (assuming statistical significance), added to the direct effect between X and Y, is farther away from zero than the direct effect itself (Baron & Kenny, 1986). Despite its ubiquity, however, Baron and Kenny’s approach has been heavily criticized for its lack of statistical power, assumption of normality, and inaccuracy in small sample sizes when supplemented with the frequently-used Sobel test for validity (Hayes, 2009). Alternatively, then, bootstrapping was considered as the test for mediation in this dissertation.
Bootstrap methods were first introduced to mediation analysis in 1990 by Bollen and Stine. The basic technique of the bootstrap family (there are multiple variations; cf. Cheung & Lau, 2008; Koo, Leite, & Algina, 2016) is to approximate the distribution of the population by simulating thousands of resampled datasets using the original sample data. The indirect, or mediating effect, is calculated similarly to Baron and Kenny's computation by taking the multiplicative product of the predictor-mediator and the mediator-outcome coefficients. But, where Baron and Kenny's (1986) estimate determined the presence of mediation by comparing the difference between the indirect and direct effects, bootstrapping generates a confidence interval using the simulated coefficients to determine upper and lower bounds. As in traditional regression, if the value of zero does not fall within the confidence interval, a mediating effect can be concluded (Hayes, 2009).
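For illustration only, a minimal percentile-bootstrap sketch of an indirect effect can be written with base R's lm and the boot package; the simulated variables and simple linear path models below stand in for the dissertation's actual measures and are not the exact procedure employed:
library(boot)
set.seed(7)
toy <- data.frame(x = rnorm(60))                       # systemic predictor
toy$m <- 0.5 * toy$x + rnorm(60)                       # performance management (mediator)
toy$y <- 0.4 * toy$m + 0.1 * toy$x + rnorm(60)         # permanency outcome
indirect <- function(d, i) {
  d <- d[i, ]
  a <- coef(lm(m ~ x, data = d))["x"]                  # predictor -> mediator path
  b <- coef(lm(y ~ x + m, data = d))["m"]              # mediator -> outcome path
  a * b                                                # indirect (mediated) effect
}
out <- boot(toy, indirect, R = 5000)
boot.ci(out, type = "perc")                            # percentile confidence interval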
For purposes of this dissertation, then, mediation was considered between the systemic-level predictor variables and permanency outcomes in the presence of performance management. Next, the fourth chapter presents the findings of these descriptive and inferential procedures.
IV. RESULTS
This chapter presents the findings of the empirical tests described in the methodology chapter. It should be noted that both the descriptive and inferential statistics examined the relationship between the systemic-level predictor variables and permanency outcome achievement. While these relationships are not linked to a specific research question, they were included in this dissertation for two reasons: first, to better illuminate the landscape of child welfare service provision by investigating the potential direct link between predictors and outcomes, and second, because one criterion of identifying a mediating variable (i.e., performance management, which is the dissertation's main focus) is to determine whether there is a direct relationship to mediate. Framed differently, without a confirmed association between the predictor and dependent variables, performance management could not mediate a nonexistent relationship. Therefore, potential associations between systemic factors and permanency outcomes were considered. Descriptive results are discussed first, followed by the inferential statistics. Based on the empirical findings, levels of support for the dissertation's four hypotheses are considered.
Descriptive Findings
Descriptive statistics were run in SPSS for each of the variables in the dissertation. Traditional descriptive statistics are presented first, followed by the results of the cluster analysis.
Descriptive Statistics
As this research focused on performance management over the course of five years, each year is presented separately in the tables that follow. Additionally, a sixth descriptive table (available in Appendix G) summarizes the percent change from FY2013 to FY2017 in order to assess the shifts in the child welfare landscape from the study's earliest year to the most recent. Percent change was calculated as:
PC = ((MR_Var - E_Var) / E_Var) * 100
Where PC = percent change, MR_Var = most recent (2017) observation on a variable, and E_Var = earliest (2013) observation on a variable. Complete county-level information by fiscal year is available in Appendices B-F with variable descriptions presented in Appendix A. The tables below instead present the descriptive statistics for the dissertation's independent, dependent, and control variables. Summary statistics include the total N, means, medians, standard deviations, skewness, kurtosis, and the minimum and maximum values for each variable. Note that measures of central tendency and dispersion (i.e., mean, median, and standard deviation) were not calculated for the variable "Percent Success in Assessment Closures for 4 Quarters," as this measure already captured the average across a year's fiscal quarters and an "average of averages" computation would therefore be an imprecise estimate. Because each table contains a wide array of variables, those variables of most interest are shaded in yellow and discussed in greater detail at the end of each table.
Table 2 below summarizes the descriptive statistics for FY2013.
Table 2. Descriptive Statistics for FY2013
Variable N Mean Med SD Skew Kurtosis Min Max
Housing Units 64 35334.55 8658.50 67676.06 2.58 5.95 765 294752
Income 64 42601.80 39642.50 12969.52 1.64 6.07 18493 102940
Unemployment 64 177.44 175.50 38.64 0.39 -0.16 110 268
% FRL 63 0.47 0.47 0.17 0.06 -0.31 0.07 0.88
Asmt Clos June 59 0.90 0.93 0.13 -1.55 1.60 0.51 1.00
Asmt Clos 4 Qrts 62 - - - -0.69 -0.73 0.48 1.00
# of Children 64 78.03 10.50 173.54 3.18 10.51 0.00 889.00
% Foster Care 59 0.46 0.48 0.25 -0.35 -0.04 0.00 1.00
% Congregate 59 0.29 0.21 0.25 1.46 2.16 0.00 1.00
% Kinship Care 59 0.14 0.08 0.19 2.39 7.76 0.00 1.00
# of Providers 64 11.89 10.00 5.02 3.33 12.51 9 37
# of Services 64 15.16 12.00 8.09 3.60 14.68 11 57
% Permanency 57 0.66 0.67 0.22 -0.69 1.32 0.00 1.00
# Permanency 57 46.37 10.00 101.02 3.21 10.76 0 510
As seen in Table 2, two of the four independent variables displayed substantial ranges across counties. These two variables, housing units and per capita income, varied greatly, as evidenced by standard deviations of over 67,000 and nearly 13,000, respectively. With skewness departing heavily from zero and kurtosis in excess of 3, they were also positively skewed and leptokurtic (Ho & Yu, 2015), indicating that observations were concentrated and non-normally distributed. Conversely, unemployment insurance compensation and percent eligibility for free and reduced lunch were closer approximations of a normal distribution.
Both measures of performance management were negatively skewed, which indicates that most counties performed well on the timeliness of assessment closure measures. This observation is reinforced by: a) the mean score of assessment closures in June of FY2013, which was calculated at 90%, and b) the fact that no county closed fewer than 50% of assessments within the allocated 60-day window (the lowest performing county, Alamosa, had a 51% success rate).
Despite success in hitting performance management benchmarks, counties struggled more with achieving permanency standards, as indicated by a mean success rate of 66%. Ten
counties were at or below 50% success in reunification efforts, and the lowest performing county, Jackson, had a 0% success rate, although it should be noted that due to Jackson’s small total population of children in out-of-home care, success rates are expected to fluctuate considerably. Overall, the descriptive statistics from FY13 reveal positive performance management adherence.
Table 3. Descriptive Statistics for FY2014
Variable N Mean Med SD Skew Kurtosis Min Max
Housing Units 64 35753.25 8720.00 68613.14 2.58 5.99 765 300694
Income 64 45211.48 41529.50 15374.57 2.48 11.54 18541 126741
Unemployment 64 96.50 93.00 22.33 0.37 -0.27 57 152
% FRU 63 0.48 0.49 0.17 0.01 -0.07 0.06 0.86
Asmt Clos June 59 0.89 0.94 0.14 -1.66 2.04 0.43 1.00
Asmt Clos 4 Qrts 62 -1.34 1.11 0.38 1.00
# of Children 64 77.58 10.50 169.19 2.98 8.56 0.00 805.00
% Foster Care 56 0.42 0.42 0.23 0.02 0.33 0.00 1.00
% Congregate 56 0.27 0.20 0.21 1.19 1.45 0.00 1.00
% Kinship Care 56 0.23 0.20 0.21 1.40 2.68 0.00 1.00
# of Providers 64 11.89 10.00 5.02 3.33 12.51 9 37
# of Services 64 15.16 12.00 8.09 3.60 14.68 11 57
% Permanency 59 0.63 0.65 0.25 -0.86 0.99 0.00 1.00
# Permanency 59 44.12 7.00 95.99 3.18 10.06 0 459
As depicted in Table 3, housing units and per capita income were again positively skewed and leptokurtic, an early indication of a consistent pattern across years. The mean score for percentage of successful assessment closures showed a slight dip from the year prior, with the lowest performing county (Washington) sitting at a 43% success rate. Similarly, the percentage of successful reunification efforts declined from a mean score of 66% in 2013 to 63% in 2014, which was in part attributable to a greater number of counties sitting at or below 50% success (N=12).
Table 4. Descriptive Statistics for FY2015
Variable N Mean Med SD Skew Kurtosis Min Max
Housing Units 64 36217.64 8745.00 69579.58 2.58 6.00 765 306478
Income 64 47043.98 43715.00 15866.57 2.59 11.82 22220 131562
Unemployment 64 95.55 92.00 21.23 0.23 -0.81 54 141
% FRU 63 0.48 0.48 0.17 0.04 -0.16 0.05 0.87
Asmt Clos June 59 0.86 0.94 0.21 -2.28 5.57 0.00 1.00
Asmt Clos 4 Qrts 62 -2.14 6.71 0.14 1.00
# of Children 64 75.92 12.00 164.93 2.96 8.36 0.00 790.00
% Foster Care 60 0.44 0.42 0.27 0.35 0.07 0.00 1.00
% Congregate 60 0.26 0.18 0.27 1.74 2.68 0.00 1.00
% Kinship Care 60 0.24 0.20 0.23 0.94 0.70 0.00 1.00
# of Providers 64 11.89 10.00 5.02 3.33 12.51 9 37
# of Services 64 15.16 12.00 8.09 3.60 14.68 11 57
% Permanency 58 0.61 0.59 0.25 -0.30 0.39 0.00 1.00
# Permanency 58 42.72 8.00 90.85 3.09 9.53 0 437
The descriptive statistics for FY2015 reveal a continuation of the pattern observed during the first two fiscal years; namely, that housing units and per capita income continued to increase while performance management and permanency outcomes continued to trend downward. In 2015, the county of Saguache did not successfully close any assessments within the 60-day window, and the number of counties at or below 50% permanency success increased by two-thirds, from 12 to 20.
Table 5. Descriptive Statistics for FY2016
Variable N Mean Med SD Skew Kurtosis Min Max
Housing Units 64 36676.64 8760.00 70635.86 2.59 6.10 771 314631
Income 64 46739.50 43753.00 16100.08 2.87 14.13 22566 136025
Unemployment 64 92.53 89.00 21.37 0.32 -0.86 54 137
% FRU 63 0.49 0.51 0.17 -0.12 -0.21 0.04 0.87
Asmt Clos June 61 0.86 0.95 0.21 -2.22 4.60 0.07 1.00
Asmt Clos 4 Qrts 62 -2.07 4.25 0.22 1.00
# of Children 64 79.19 15.00 178.77 3.28 11.33 0.00 960.00
% Foster Care 56 0.41 0.41 0.24 0.27 0.44 0.00 1.00
% Congregate 56 0.23 0.19 0.20 1.62 3.70 0.00 1.00
% Kinship Care 56 0.28 0.27 0.22 0.90 1.08 0.00 1.00
# of Providers 64 11.89 10.00 5.02 3.33 12.51 9 37
# of Services 64 15.16 12.00 8.09 3.60 14.68 11 57
% Permanency 58 0.58 0.57 0.23 -0.24 0.67 0.00 1.00
# Permanency 58 38.59 5.00 82.97 3.14 10.42 0 431
In 2016, the mean of assessment closures stayed consistent with the year prior at 86%. While mean permanency decreased from 61% to 58%, no additional counties fell below the 50% reunification rate from FY15. The rate of growth between the two years was perhaps most notable. For example, Denver county (the largest in Colorado) grew by more than 8,000 housing units and saw its child welfare population increase by 21% (from an N of 790 in FY15 to 960 in FY16). Given this presumably challenging strain on capacity, it is perhaps not surprising that Denver dropped 5% in both its performance management and permanency outcome rates during this time.
Table 6. Descriptive Statistics for FY2017
Variable N Mean Med SD Skew Kurtosis Min Max
Housing Units 64 37233.88 8769.50 71851.77 2.60 6.12 779 321513
Income 64 48509.81 44948.00 17164.51 2.85 14.20 19443 143812
Unemployment 64 73.50 70.00 15.62 0.45 -0.13 41 111
% FRU 63 0.48 0.48 0.17 -0.02 -0.04 0.05 0.90
Asmt Clos June 62 0.88 0.96 0.19 -2.69 8.80 0.00 1.00
Asmt Clos 4 Qrts 62 -2.98 11.76 0.00 1.00
# of Children 64 80.08 14.00 180.00 3.33 11.73 0.00 954.00
% Foster Care 57 0.40 0.43 0.20 -0.04 0.74 0.00 1.00
% Congregate 57 0.19 0.14 0.20 2.05 5.42 0.00 1.00
% Kinship Care 57 0.34 0.31 0.24 0.64 0.31 0.00 1.00
# of Providers 64 11.89 10.00 5.02 3.33 12.51 9 37
# of Services 64 15.16 12.00 8.09 3.60 14.68 11 57
% Permanency 59 0.56 0.56 0.26 -0.31 0.43 0.00 1.00
# Permanency 59 41.17 8.00 90.17 3.16 9.75 0 422
In examining the descriptive statistics in 2017, counties successfully closed more assessments, on average, than they had since 2014 (88% in FY17; 86% in FY15-16; 89% in FY14). What is interesting, then, is that permanency outcomes continued the trend of decline, dipping 2 percentage points from the year prior and falling 10 percentage points from the success rates reported just five years earlier.
When comparing the percentage change from FY13 to FY17, it became apparent that a disconnect exists between performance management and permanency outcomes. Specifically, 27 counties (just under 50% when excluding those counties with missing records) had inconsistent patterns of change between the two variables, meaning that if performance management increased, permanency outcomes decreased, or vice versa. Boulder county, for instance, increased its timeliness to assessment closures by more than 15% between FY13 and FY17, but decreased its permanency outcomes by 11% during this same time. Figure 7 below depicts a scatterplot displaying the relative changes in performance management and permanency success between fiscal years 2013 and 2017, and the exact county-level percent changes may be found in Appendix G.
[Scatterplot: percent change in performance management (x-axis) plotted against percent change in permanency success (y-axis) for each county, FY2013 to FY2017.]
Figure 7. Changes in performance management relative to changes in permanency success.
As depicted in Figure 7, nearly half of all counties, represented as blue dots in the scatterplot, displayed inconsistent relationships between performance management and
permanency outcomes. These inconsistencies are captured in the yellow-shaded quadrants, while expected direct relationships (that is, increases or decreases in permanency corresponding to similar increases or decreases in performance management) are presented in the non-shaded quadrants. Although the initial purpose of the proposed cluster analysis was to better understand natural breaks and patterns in the data broadly speaking, the procedures were expanded to try to determine where the discrepancy between performance management and permanency was situated.
Two-Step Cluster Analysis
Within each fiscal year, counties were analyzed for their level of shared characteristics with one another. The goal of the two-step clustering procedure was to better understand patterns across variables before proceeding to inferential statistics. In this way, cluster analysis was considered an additional descriptive layer important for its utility in capturing trends that may be otherwise unclear from a review of the sample distributions.
Three iterations of cluster analyses were performed for each fiscal year. The first cluster analysis examined data patterns across the independent variables and performance management. The second looked exclusively at the patterns between performance management and permanency outcomes. The third iteration considered the independent variables alongside permanency outcomes. The results of the cluster analyses, by fiscal year, are presented in the tables below. Following each table is a brief discussion of the key observations.
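As an illustrative sketch only (the dissertation's clustering was run in SPSS), the Python fragment below mimics the two-step idea, a pre-clustering stage followed by a hierarchical merge, using scikit-learn's Birch algorithm on standardized, randomly generated stand-in data. The fixed choice of three clusters and the continuous-only variables are simplifying assumptions, since SPSS TwoStep can also select the cluster count automatically and handle categorical fields such as geography.

import numpy as np
from sklearn.cluster import Birch
from sklearn.preprocessing import StandardScaler

# Stand-in data: 64 counties x 5 continuous variables (housing units, income,
# unemployment, free/reduced lunch, performance management); values are random.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 5))

# Step 1: build a CF-tree of dense pre-clusters; Step 2: merge them into 3 clusters.
X_std = StandardScaler().fit_transform(X)      # place variables on a common scale
labels = Birch(threshold=0.5, n_clusters=3).fit_predict(X_std)

# Profile each cluster by the mean of its standardized variables
for k in range(3):
    print("Cluster", k, X_std[labels == k].mean(axis=0).round(2))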
Table 7. Cluster Analyses for FY2013
Cluster Analysis 1: Independent Variables and Performance Management
Per. Mgt. Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 26.2% 0.7657 115801.56 45397.06 38.5% 16 0 0
2 24 39.3% 0.8114 12779.46 41467.92 49.7% 0 24 0
3 21 34.4% 0.8714 3657.10 40674.38 51.4% 0 0 21
Cluster Analysis 2: Performance Management and Permanency
Cluster N % Per. Mgt. Mean Permanency Mean
1 19 33.3% 0.6171 0.5918
2 38 66.7% 0.9094 0.6961
Cluster Analysis 3: Independent Variables and Permanency
Permanency Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 28.6% 0.6239 115801.56 45397.06 38.5% 16 0 0
2 18 32.1% 0.6526 3771.50 37766.39 53.0% 0 0 18
3 22 39.3% 0.6895 13204.50 38387.91 52.3% 0 22 0
The cluster analyses for FY2013 reveal many interesting patterns. In the first cluster iteration, cluster 3 had the highest overall success with performance management (mean=87.14%) while retaining the smallest number of housing units (mean=3,657 units), the lowest average income (mean=$40,674.38), and the greatest percentage of free and reduced lunch eligibility (mean=51.4%). Cluster 3 also comprised exclusively frontier counties. While the small number of housing units descriptively reinforces Hypothesis 1, which posits that smaller eligible populations are associated with higher performance management, the lower average income and higher lunch eligibility are inconsistent with Hypothesis 2, and the frontier composition is incompatible with Hypothesis 3. Almost conversely, the highest-performing cluster in the third iteration (cluster 3) had mid-level housing units (mean=13,204), income (mean=$38,387.91), and free and reduced lunch eligibility (mean=52.3%), and comprised exclusively rural counties.
The discrepancies within and across the two clusters are surprising, as they do not readily lend
themselves to definitive conclusions regarding similarities amongst counties.
Table 8. Cluster Analyses for FY2014
Cluster Analysis 1: Independent Variables and Performance Management
Per. Mgt. Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 21 34.4% 0.8267 3646.67 42965.71 54.1% 0 0 21
2 24 39.3% 0.8578 12847.88 45001.25 49.4% 0 24 0
3 16 26.2% 0.8595 117298.38 48053.00 39.0% 16 0 0
Cluster Analysis 2: Performance Management and Permanency
Per. Mgt. Permanency
Cluster N % Mean Mean
1 14 23.7% 0.7016 0.3265
2 45 76.3% 0.8881 0.7225
Cluster Analysis 3: Independent Variables and Permanency
Permanency Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 27.6% 0.5799 117298.38 48053.00 39.0% 16 0 0
2 24 41.4% 0.7255 12847.88 45001.25 49.4% 0 24 0
3 18 31.0% 0.5352 4088.50 40856.33 54.1% 0 0 18
In FY2014, the performance-management-based clusters displayed greater theoretic consistency. While the means for performance management were fairly close in the first iteration (ranging from 82.67% in cluster 1 to 85.95% in cluster 3), cluster 3, with the highest mean score, also had the highest average income (mean=$48,053.00), the lowest free and reduced lunch eligibility (mean=39.0%), and comprised exclusively urban counties. The only predictor variable misaligned with the proposed hypotheses was the number of housing units, of which cluster 3 had the highest concentration (mean=117,298 units). The second iteration also seemed to provide some descriptive support for the dissertation's fourth hypothesis, which argues that counties with higher performance management will also have greater permanency outcomes. As
illustrated in the table, cluster 1 had both a lower average performance management score (mean=70.16%) and a lower permanency score (mean=32.65%) than cluster 2.
Table 9. Cluster Analyses for FY2015
Cluster Analysis 1: Independent Variables and Performance Management
Per. Mgt. Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 21 34.4% 0.7946 3664.86 43125.52 53.4% 0 0 21
2 24 39.3% 0.8774 12922.88 47443.04 49.1% 0 24 0
3 16 26.2% 0.8940 118957.25 49830.06 38.0% 16 0 0
Cluster Analysis 2: Performance Management and Permanency
Per. Mgt. Permanency
Cluster N % Mean Mean
1 14 24.1% 0.6603 0.6468
2 44 75.9% 0.9270 0.6004
Cluster Analysis 3: Independent Variables and Permanency
Permanency Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 18 31.6% 0.7378 3990.11 42360.33 53.4% 0 0 18
2 23 40.4% 0.5308 13345.13 47142.52 49.8% 0 23 0
3 16 28.1% 0.5806 118957.25 49830.06 38.0% 16 0 0
For FY2015, the patterns observed in the first iteration of the year prior remained consistent. Specifically, cluster 3 had the highest performance management average (mean=89.40%), the highest average income (mean=$49,830.06), the lowest average free and reduced lunch eligibility (mean=38.0%) and comprised all urban districts. Alternatively, the third iteration showed a reverse trend wherein the highest average performing cluster with respect to permanency outcomes (cluster 1, with a mean of 73.78%) also had the lowest average income ($42,360.33), the highest eligibility for free and reduced lunch (53.4%) and captured all frontier counties. Finally, the second iteration displayed an interesting pattern in the descriptive connection between performance management and permanency outcomes. In this iteration, the
two clusters were separated by 26.67 percentage points on the predictor, but only 4.64 percentage points on the dependent variable. While clusters do not reflect causal mechanisms, the lack of clarity in the cluster distinctions suggests that the relationship between performance and outcomes may be muddled in this fiscal year.
Table 10. Cluster Analyses for FY2016
Cluster Analysis 1: Independent Variables and Performance Management
Per. Mgt. Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 26.2% 0.9035 120610.19 50318.94 38.3% 16 0 0
2 24 39.3% 0.8770 12999.92 47775.13 49.7% 0 24 0
3 21 34.4% 0.7761 3713.05 41379.33 53.9% 0 0 21
Cluster Analysis 2: Performance Management and Permanency
Per. Mgt. Permanency
Cluster N % Mean Mean
1 52 89.7% 0.8953 0.5786
2 6 10.3% 0.4047 0.5722
Cluster Analysis 3: Independent Variables and Permanency
Permanency Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 28.1% 0.5112 120610.19 50318.94 38.3% 16 0 0
2 24 42.1% 0.6125 12999.92 47775.13 49.7% 0 24 0
3 17 29.8% 0.5948 4087.29 41480.71 55.5% 0 0 17
The clusters in FY2016 indicate an inconsistent pattern similar to some of the prior study years. Notably, when comparing the clusters in the first iteration with those in the third, identical means are reported across all predictor variables in each iteration's first two clusters. This suggests (and was confirmed through a comparative analysis) that the same counties are captured in both sets of clusters. The third cluster contained different centroids in the two iterations because the number of observations dropped from 21 frontier counties in iteration 1 to 17 in iteration 3. The implication of this descriptive observation is that the same counties that achieved
performance management success in FY2016 also struggled with attaining permanency outcomes. This assertion is further supported by the second iteration, which contained nearly identical permanency means across clusters 1 and 2 (mean=57.86% and 57.22%, respectively) but a 49.06 percentage point difference in average performance management scores.
Table 11. Cluster Analyses for FY2017
Cluster Analysis 1: Independent Variables and Performance Management
Per. Mgt. Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 16 26.2% 0.9368 122616.69 52146.19 38.7% 16 0 0
2 24 39.3% 0.8870 13095.75 49300.63 48.1% 0 24 0
3 21 34.4% 0.7955 3727.48 44136.19 54.7% 0 0 21
Cluster Analysis 2: Performance Management and Permanency
Cluster N % Per. Mgt. Mean Permanency Mean
1 43 72.9% 0.8937 0.5549
2 9 15.3% 0.8091 0.9778
3 7 11.9% 0.7566 0.0490
Cluster Analysis 3: Independent Variables and Permanency
Permanency Housing Income FRL Urban Rural Frontier
Cluster N % Mean Mean Mean Mean Freq. Freq. Freq.
1 20 34.5% 0.5564 3842.70 43724.85 54.6% 0 0 20
2 22 37.9% 0.5986 13477.45 45669.77 49.2% 0 22 0
3 16 27.6% 0.4985 122616.69 52146.19 38.7% 16 0 0
Unlike previous years, FY2017 produced three, rather than two, clusters in the iteration pairing performance management with permanency outcomes. As seen in the second iteration, cluster 1 had the highest average performance management score (mean=89.37%) but the mid-ranking permanency score (mean=55.49%), while those respective scores were reversed in cluster 2. Cluster 3 displayed some theoretic support, with the lowest performance management score (mean=75.66%) corresponding to the lowest permanency score (mean=4.9%). Adding to
the ambiguous descriptive connection between performance management and permanency, the first and third iterations showed inconsistencies similar to those reported in previous years. Lacking general descriptive consistency, the dissertation next generated inferential statistics to further unpack the complexities among the predictor, mediating, and dependent variables.
Inferential Findings
Having calculated and reviewed the descriptive statistics, the dissertation proceeded to conduct inferential analyses. The subsections below present the findings of the inferential tests.
Means Comparisons, Correlations, and Chi-Square Statistics
Before generating more advanced statistical models, comparisons across means and variables were examined. Means comparisons utilized the Mann-Whitney U test, and variable associations were assessed with Spearman's rho correlations and Chi-square tests. Inferential tests were computed for each of the dissertation's five fiscal years. For organizational clarity, the findings from each type of test are presented in the following order: Mann-Whitney U, Spearman's rho correlations, and Chi-square tests.
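For readers who wish to reproduce this style of test outside of SPSS, the following minimal Python sketch runs a two-sided Mann-Whitney U comparison of a continuous predictor across the two success groups; the group labels and values are invented for illustration.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical housing-unit counts for counties that met (1) and did not meet (0)
# the performance management standard; values are illustrative only.
met_standard = np.array([900, 1200, 3400, 2100, 5100, 760])
missed_standard = np.array([8800, 15600, 42000, 9700, 120000, 30500])

u_stat, p_value = mannwhitneyu(met_standard, missed_standard, alternative="two-sided")
print("U =", u_stat, "p =", p_value)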
Table 12. Mann-Whitney U Means Comparisons for FY2013
Ranks: IVs and Performance Management (Per. Mgmt. 4Q; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               37, 36.16, 1338.00                        25, 24.60, 615.00                          62
Income                37, 29.35, 1086.00                        25, 34.68, 867.00                          62
Unemployment          37, 35.78, 1324.00                        25, 25.16, 629.00                          62
Free & Red. Lunch     37, 32.68, 1209.00                        24, 28.42, 682.00                          61
Ranks: IVs and Permanency (Permanency 12-mo; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               27, 35.63, 962.00                         30, 23.03, 691.00                          57
Income                27, 30.63, 827.00                         30, 27.53, 826.00                          57
Unemployment          27, 32.46, 876.50                         30, 25.88, 776.50                          57
Free & Red. Lunch     27, 28.37, 766.00                         29, 28.62, 830.00                          56
Total # of Children   27, 37.56, 1014.00                        30, 21.30, 639.00                          57
% Foster Care         27, 26.15, 706.00                         29, 30.69, 890.00                          56
% Cong. Care          27, 29.22, 789.00                         29, 27.83, 807.00                          56
% Kin Care            27, 33.00, 891.00                         29, 24.31, 705.00                          56
Total # of Providers  27, 35.41, 956.00                         30, 23.23, 697.00                          57
Total # of Services   27, 35.19, 950.00                         30, 23.43, 703.00                          57
The results of the means comparison for FY2013 indicate that differences between successfully and unsuccessfully performing counties with respect to performance management were statistically significant across housing units and income. More precisely, the twenty-five counties that achieved performance management standards had smaller eligible populations and higher average income levels than those counties that did not meet standards. The significant differences in counties across these metrics lend support to the dissertation's first two hypotheses that eligible population size and economic profiles are associated with performance management. Similarly, the size of a county's eligible population was associated with significant differences between counties that achieved permanency outcome standards and those that did not. When examining the control variables in 2013, significant differences were observed across permanency groups for the total number of children, percentage of kinship care, and availability of providers. Interestingly, however, successful counties had fewer available providers and services. While this at first appeared counterintuitive, the descriptive statistics offer context: FY2013 was the only study year in which frontier counties out-performed their rural and urban counterparts. Given that provider availability was associated with geography (as evidenced in the correlation tables later in this chapter), the difference in means based on service availability was not as surprising.
Table 13. Mann-Whitney U Means Comparisons for FY2014
Ranks: IVs and Performance Management (Per. Mgmt. 4Q; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               29, 34.10, 989.00                         33, 29.21, 964.00                          62
Income                29, 32.66, 947.00                         33, 30.48, 1006.00                         62
Unemployment          29, 34.93, 1013.00                        33, 28.48, 940.00                          62
Free & Red. Lunch     29, 29.66, 860.00                         32, 32.22, 1031.00                         61
Ranks: IVs and Permanency (Permanency 12-mo; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               26, 33.27, 865.00                         33, 27.42, 905.00                          59
Income                26, 33.19, 863.00                         33, 27.48, 907.00                          59
Unemployment          26, 29.38, 764.00                         33, 30.48, 1006.00                         59
Free & Red. Lunch     26, 26.54, 690.00                         32, 31.91, 1021.00                         58
Total # of Children   26, 32.02, 832.50                         33, 28.41, 937.50                          59
% Foster Care         25, 28.98, 724.50                         31, 28.11, 871.50                          56
% Cong. Care          25, 27.80, 695.00                         31, 29.06, 901.00                          56
% Kin Care            25, 28.54, 713.50                         31, 28.47, 882.50                          56
Total # of Providers  26, 34.73, 903.00                         33, 26.27, 867.00                          59
Total # of Services   26, 34.69, 902.00                         33, 26.30, 868.00                          59
In FY2014, the only significant group differences were observed on the dependent variable, permanency outcomes, relative to service provision. As with the year prior, counties with fewer resources performed better with respect to outcomes than those counties with more ample resource opportunities. While the distinction of frontier county success compared to rural and urban counties was no longer evident as it had been in FY2013, in FY2014 rural counties appeared to perform equally well relative to urban counties, which may explain why lower provider availability was still affiliated with successful outcomes.
Table 14. Mann-Whitney U Means Comparisons for FY2015
Ranks: IVs and Performance Management (Per. Mgmt. 4Q; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               33, 29.76, 982.00                         29, 33.48, 971.00                          62
Income                33, 26.27, 867.00                         29, 37.45, 1086.00                         62
Unemployment          33, 35.06, 1157.00                        29, 27.45, 796.00                          62
Free & Red. Lunch     33, 35.39, 1168.00                        28, 25.82, 723.00                          61
Ranks: IVs and Permanency (Permanency 12-mo; 1 = Success)
Variable              .00 group (N, Mean Rank, Sum of Ranks)    1.00 group (N, Mean Rank, Sum of Ranks)    Total N
Housing               30, 30.67, 920.00                         28, 28.25, 791.00                          58
Income                30, 30.33, 910.00                         28, 28.61, 801.00                          58
Unemployment          30, 30.28, 908.50                         28, 28.66, 802.50                          58
Free & Red. Lunch     30, 29.63, 889.00                         27, 28.30, 764.00                          57
Total # of Children   30, 31.25, 937.50                         28, 27.63, 773.50                          58
% Foster Care         29, 29.17, 846.00                         27, 27.78, 750.00                          56
% Cong. Care          29, 29.93, 868.00                         27, 26.96, 728.00                          56
% Kin Care            29, 28.28, 820.00                         27, 28.74, 776.00                          56
Total # of Providers  30, 30.08, 902.50                         28, 28.88, 808.50                          58
Total # of Services   30, 29.73, 892.00                         28, 29.25, 819.00                          58
As evidenced in Table 14 above, only one statistically significant difference across groups existed in FY2015. This difference was observed on the variable free and reduced lunch eligibility, where counties that achieved performance management standards had a significantly
smaller percentage of free and reduced lunch-eligible children than those counties that did not meet the standard. This finding lends some support to the dissertation’s hypothesis that economically-stable counties will see greater performance management achievement than those counties that are struggling.
While some descriptive differences were noted in both FY2016 and FY2017, no comparisons demonstrated statistical significance. For the full Mann-Whitney tables on these two years, please refer to Appendix I.
The next step in the inferential analysis involved the creation of correlation matrices to examine the significance (if any) of the relationships among the study's predictor, control, and outcome variables. Matrices were created for each fiscal year in the study and are presented in the tables that follow. Both independent samples t-tests and Spearman's rho were calculated and produced comparable test statistics, which suggests that the correlations were robust across a variety of specifications. However, because most variables violated the assumption of normality in their distributions, the results using Spearman's rho (the non-parametric test) are presented. Statistically significant correlations are highlighted in gold, where significance is represented as: * p < .10; ** p < .05; *** p < .001. Given the prevalence of significant correlations between variables in all fiscal years, a few highlights are discussed followed by a summary of key takeaways across the duration of the study's time period.
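A minimal sketch of this step in Python is shown below: scipy's spearmanr returns both the full rho matrix and the matching p-values when given a multi-column array. The column names are shortened stand-ins and the values are random, so the output does not reproduce the matrices reported here.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Stand-in county-by-variable data (random values, illustrative column names)
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(64, 4)),
                  columns=["housing", "income", "perf_mgmt", "permanency"])

# Full Spearman correlation matrix and matching two-sided p-values
rho, pval = spearmanr(df.values)
print(pd.DataFrame(rho, index=df.columns, columns=df.columns).round(3))
print(pd.DataFrame(pval, index=df.columns, columns=df.columns).round(3))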
Table 15. Correlation Matrix for FY2013
Correlation Matrix: FY2013
Columns (left to right): HousingUnits_2013, 2013_PCPI, 2013_PCUIC, PercentFreeReduced2013, Geography (1=U; 2=R; 3=F), PrcntSuccJune2013, PrcntSucc4Qrts2013, Total2013, PrcntFosterCare2013, PrcntCongCare2013, PrcntKinCare2013, TotalServices, TotalUniqueProviders, TotalNMet_Perm_2013, PrcntMet12Months_2013. Each row lists correlations from its diagonal entry (1.000) rightward.
HousingUnits_2013 1.000 0.167 0.226 -.257’ -.758” -.466” -.403” .767” 0.043 0.036 .431” .664” .682” .707” -.267'
2013_PCPI 1.000 -.308 -.546” -0.154 0.088 0.230 -0.106 -.547” .322’ 0.152 0.043 0.069 0.101 -0.086
2013_PCUIC 1.000 .323” -0.148 -.428” -.427” 0.219 .292' -0.186 0.129 -0.006 0.001 0.224 -0.153
PercentFreeReduced2013 1.000 .254’ -0.043 -0.152 0.048 .265 -0.126 -0.111 -0.132 -0.135 -0.108 0.031
1=U;2=R;3=F 1.000 .343” .284’ -.694” -0.043 -0.023 -.358” -.684” -.699” -.615” 0.143
Prcn tS uccJune2013 1.000 .738” -.535” -0.067 0.088 -.327’ -.407” -.409” -.512” .414”
PrcntSucc4Qrts2013 1.000 -.509” -0.220 0.042 -0.212 -.273’ -.275’ -.431” .211
Total2013 1.000 0.143 -0.049 .620” .684” .701” .915” -.299'
PrcntFosterCare2013 1.000 -.436” -0.254 0.122 0.112 0.016 0.128
PrcntCongCare2013 1.000 -0.223 -0.148 -0.134 -0.105 -0.032
PrcntKinCare2013 1.000 .350” .367” .643” -0.151
Total Sendees 1.000 .994” .641” -.279'
Total UniqueProviders 1.000 .655” -.281'
Total NM et_Perm_2013 1.000 -0.260
Prcn tMet12Mon ths_2013 1.000
As depicted in the correlation matrix for FY13, the percentage of unemployment insurance compensation and the size of the eligible population had the strongest correlations with performance management of all the predictor variables (rho = -.427, p < .05 and rho = -.403, p < .05, respectively). The inverse nature of the relationship between both these variables and the outcome of interest suggests that counties with lower rates of unemployment and fewer housing units had higher levels of performance. The highest reported association with permanency outcomes was performance management (rho = .414, p < .05), which indicates that counties with
higher levels of performance management also had greater levels of outcome achievement.
Table 16. Correlation Matrix for FY2014
Correlation Matrix: FY2014
Columns (left to right): HousingUnits_2014, 2014_PCPI, 2014_PCUIC, PercentFreeReduced2014, Geography, PrcntSuccJune2014, PrcntSucc4Qrts2014, Total2014, PrcntFosterCare2014, PrcntCongCare2014, PrcntKinCare2014, TotalServices, TotalUniqueProviders, TotalNMet_Perm_2014, PrcntMet12Months_2014. Each row lists correlations from its diagonal entry (1.000) rightward.
HousingUnits_2014 1.000 0.237 0.214 -.302’ -.760” -0.149 -0.075 .784” 0.238 -0.176 -0.044 .665” .683” .663” -0.140
2014_PCPI 1.000 -0.239 -.583” -0.188 -0.031 0.089 -0.065 0.002 0.231 -.354” 0.093 0.122 0.022 -0.117
2014_PCUIC 1.000 .273’ -0.115 -0.238 -0.220 0.194 0.100 -.361” 0.167 0.024 0.031 0.125 -0.016
PercentFreeReduced2014 1.000 .316’ 0.114 0.049 -0.015 0.003 -0.085 .315’ -0.150 -0.156 0.077 0.116
Geography 1.000 0.108 0.034 -.740” -0.188 0.056 0.071 -.684” -.699” -.641” 0.056
Prcn tS uccJune2014 1.000 .638” -0.223 -0.031 0.240 -0.116 -0.186 -0.183 -0.204 0.018
PrcntSucc4Qrts2014 1.000 -0.076 0.054 0.198 -0.177 0.010 0.014 0.041 0.032
Total 2014 1.000 .309' -0.245 0.199 .676” .695” .873” -0.132
PrcntFosterCare2014 1.000 -.463” -.384” -0.007 0.028 0.130 -0.034
PrcntCongCare2014 1.000 -.349” -0.031 -0.025 -0.075 0.036
PrcntKinCare2014 1.000 0.125 0.113 0.172 -0.036
TotalSendces 1.000 .994” .638” -.324'
Total UniqueProvi ders 1.000 .655” -.325’
Total NM et_Perm_2014 1.000 0.132
PrcntM et12Months_2014 1.000
In 2014, statistically significant correlations were observed among the predictor and
control variables, but no meaningful associations were detected between the independent
variables and performance management. Similarly, the only predictor variables demonstrating a
significant relationship with permanency outcomes were the control measures for number of services and providers. Interestingly, the direction of the relationship between these control variables reversed when comparing the total number of successful permanency placements and the percentage of successful outcomes. One possibility for this discrepancy is that counties with a greater number of successful placements are situated in urban locations, which have higher levels of resource availability. Because the outcome is measured as the raw number of successful placements, the relative effectiveness of counties is not considered, leading to the direct relationship with service providers. Alternatively, the percent of successful permanency placements accounts for the varied range in workload size and therefore reflects outcome achievement more accurately than the raw number alone. Using this measure, then, counties with greater permanency outcomes also had fewer resource opportunities, which is consistent with the Mann-Whitney means comparisons for this fiscal year.
Table 17. Correlation Matrix for FY2015
Correlation Matrix: FY2015
Columns (left to right): HousingUnits_2015, 2015_PCPI, 2015_PCUIC, PercentFreeReduced2015, Geography, PrcntSuccJune2015, PrcntSucc4Qrts2015, Total2015, PrcntFosterCare2015, PrcntCongCare2015, PrcntKinCare2015, TotalServices, TotalUniqueProviders, TotalNMet_Perm_2015, PrcntMet12Months_2015. Each row lists correlations from its diagonal entry (1.000) rightward.
HousingUnits_2015 1.000 .249' 0.158 -.307' -.759" -0.136 0.059 .715" 0.227 -0.020 0.046 .664" .682" .758" -0.121
2015_PCPI 1.000 -0.143 -.640" -0.217 0.180 .305' -0.079 0.096 0.167 -0.187 0.052 0.082 0.126 0.048
2015_PCUIC 1.000 .324" -0.073 -0.216 -.351" 0.230 -0.061 -0.073 0.083 -0.020 -0.008 0.132 -0.122
PercentFreeReduced2015 1.000 .320' -0.194 -.314' 0.024 -.327' -0.051 .262' -0.157 -0.166 -0.099 -0.087
Geography 1.000 0.182 -0.147 -.691" -0.226 0.017 -0.089 -.684" -.699" -.616" .285'
Prcn tS uccJune2015 1.000 .704" -.330' 0.195 -0.197 -0.086 -0.150 -0.157 -0.182 0.232
PrcntSucc4Qrts2015 1.000 -0.088 .409" -.336" -0.077 0.118 0.114 -0.046 -0.010
Total 2015 1.000 0.099 -0.022 .390" .647" .665" .850" -0.125
PrcntFosterCare2015 1.000 -.471" -.420" 0.218 0.215 0.081 -0.124
P rent Cong Care 2015 1.000 -.256' 0.043 0.069 -0.004 0.038
PrcntKinCare2015 1.000 -0.020 -0.013 0.232 -0.005
Total Sendees 1.000 .994" .680" -0.076
Total UniqueProviders 1.000 .697" -0.092
Total NM et_Perm_2015 1.000 0.077
PrcntM et12M onths_2015 1.000
Correlations in FY2015 indicated the presence of a moderate, statistically significant inverse relationship between unemployment insurance compensation and performance management (rho = -.351, p < .05). Consistent with the correlation in FY2013, this association
suggests that counties with lower levels of unemployment will also have higher levels of
performance management success. Beyond unemployment, income and free and reduced lunch eligibility showed associations of moderate strength with performance management, but statistical significance was minimal (rho = .305, p < .10 and rho = -.314, p < .10, respectively). Similarly, there was a borderline significant association between geography and permanency outcomes (rho = .285, p < .10), but otherwise no predictor variables, including performance management, had a demonstrable relationship with this outcome.
Table 18. Correlation Matrix for FY2016
Correlation Matrix: FY2016
Columns (left to right): HousingUnits_2016, 2016_PCPI, 2016_PCUIC, PercentFreeReduced2016, Geography, PrcntSuccJune2016, PrcntSucc4Qrts2016, Total2016, PrcntFosterCare2016, PrcntCongCare2016, PrcntKinCare2016, TotalServices, TotalUniqueProviders, TotalNMet_Perm_2016, PrcntMet12Months_2016. Each row lists correlations from its diagonal entry (1.000) rightward.
HousingUnits_2016 1.000 .296' 0.235 -.343" -.759" -0.039 0.064 .741" 0.093 -0.010 -0.030 .664" .682" .704" 0.034
2016_PCPI 1.000 -0.185 -.639" -.295' 0.223 .309' -0.065 0.060 0.038 -0.137 0.112 0.139 0.025 .341"
2016_PCUIC 1.000 0.240 -0.116 -0.227 -0.226 .275' .275' -0.058 -0.160 0.002 0.019 .266' -0.124
PercentFreeReduced2016 1.000 .357" -0.249 -0.196 -0.005 -0.155 0.030 0.127 -0.171 -0.182 0.033 -0.192
Geography 1.000 -0.087 -0.110 -.671" -0.141 0.089 0.050 -.684" -.699" -.508" 0.077
PrcntSuccJune2016 1.000 .784" -.291' 0.018 -0.156 0.019 0.006 -0.005 -0.124 0.166
PrcntSucc4Qrts2016 1.000 -0.204 0.016 -0.092 -0.039 0.077 0.075 -0.011 .345"
Total 2016 1.000 0.245 -0.180 0.118 .663" .681" .843" -0.173
PrcntFosterCare2016 1.000 -.531" -.406" 0.192 0.219 0.167 0.031
PrcntCongCare2016 1.000 -.322' -0.019 -0.028 -0.133 0.090
PrcntKinCare2016 1.000 -0.233 -0.237 0.075 -0.089
Total Services 1.000 .994" .554" -0.083
Total UniqueProviders 1.000 .571" -0.096
Total NMet_Perm_2016 1.000 0.142
PrcntM et12M onths_2016 1.000
In FY2016, income was again observed to have a moderate, direct association with performance management (rho = .309), although statistical significance continued to hover around the p < .10 mark. Income did, for the first time, have a statistically significant relationship with permanency outcomes (rho = .341, p < .05). Performance management also showed a moderate, positive relationship (rho = .345, p < .05) with permanency, thereby lending additional support to the argument that counties with greater levels of performance management achievement will also have higher levels of outcome attainment.
Table 19. Correlation Matrix for FY2017
Correlation Matrix: FY2017
Columns (left to right): HousingUnits_2017, 2017_PCPI, 2017_PCUIC, PercentFreeReduced2017, Geography, PrcntSuccJune2017, PrcntSucc4Qrts2017, Total2017, PrcntFosterCare2017, PrcntCongCare2017, PrcntKinCare2017, TotalServices, TotalUniqueProviders, TotalNMet_Perm_2017, PrcntMet12Months_2017. Each row lists correlations from its diagonal entry (1.000) rightward.
HousingUnits_2017 1.000 .272 .336** -.308 -.759** -0.135 0.087 .742** 0.218 0.224 -0.201 .664** .682** .687** 0.053
2017_PCPI 1.000 -0.114 -.635** -.268* 0.060 0.177 -0.068 0.138 0.202 -.337* 0.120 0.143 -0.041 -0.051
2017_PCUIC 1.000 0.179 -0.177 -.370** -.257* .374** 0.217 0.002 -0.109 0.043 0.065 .403** -0.026
PercentFreeReduced2017 1.000 .318* -0.114 -0.127 0.068 -0.112 -0.017 0.258 -0.146 -0.152 0.122 0.030
Geography 1.000 0.001 -0.216 -.593** -0.110 -0.201 0.093 -.684** -.699** -.562** -0.008
PrcntSuccJune2017 1.000 .692** -0.205 -0.192 -0.063 0.021 0.033 0.010 -0.126 -0.241
PrcntSucc4Qrts2017 1.000 -0.032 -0.123 0.183 -0.141 0.218 0.203 0.012 -0.146
Total2017 1.000 .297* 0.069 0.000 .615** .633** .905** -0.010
PrcntFosterCare2017 1.000 -0.138 -.591** 0.183 0.212 .323* 0.000
PrcntCongCare2017 1.000 -.468** 0.188 0.184 0.016 0.045
PrcntKinCare2017 1.000 -0.168 -0.181 -0.076 0.079
TotalServices 1.000 .994** .570** -0.108
TotalUniqueProviders 1.000 .584** -0.110
TotalNMet_Perm_2017 1.000 0.200
PrcntMet12Months_2017 1.000
As seen in the correlation matrix for FY17, only one association was detected between a predictor variable and performance management: the measure for unemployment insurance compensation (rho = -.257, p < .10). No statistically significant relationships were observed
between performance management and outcomes or predictor and control variables on
permanency. Before advancing to the Chi-square test findings, it is first necessary to briefly discuss the observed levels of association among the predictor and control variables.
The presence of correlations across many indicator variables led to a discrepancy
between empirical fidelity and practical utility. In other words, the strong associations between
predictor variables suggest the presence of multicollinearity or perhaps interaction effects,
which is problematic for predictive modeling. On the other hand, the associations across these
variables reinforce the argument that multiple systemic factors work collectively to impose
organizational constraints that may affect performance management and, ultimately, outcome
achievement.
With correlations examined, the dissertation next proceeded to calculate Chi-square test
statistics for performance management and permanency outcomes using their binary measures.
The binary measures were included in this dissertation for two reasons: the first was to permit
predictive modeling under a small sample size constraint, where a rare events logistic regression
was the most appropriate technique given that the total population does not exceed 64 counties, and the second was to check that relationships between performance management and permanency outcomes were consistent across a variety of operationalized specifications. Explained differently, if consistent significance could be detected across different measures of the same concepts, confidence in the models’ reliability would be increased. Toward this aim, Table 20 below depicts the findings of the Chi-square tests for FY13 through FY2017.
Table 20. Chi-Square Tests for FY2013 to FY2017
Chi-Square Tests
FY2013          FY2014          FY2015          FY2016          FY2017
Value  Sig. (2-sided)  Value  Sig. (2-sided)  Value  Sig. (2-sided)  Value  Sig. (2-sided)  Value  Sig. (2-sided)
Pearson Chi-Square  4.712  0.030  0.013  0.908  0.608  0.436  6.905  0.009  1.454  0.228
N of Valid Cases  57  59  58  58  59
As evidenced by Table 20, two years of the study demonstrated statistically significant relationships between performance management and permanency outcomes. FY2013 yielded a Pearson's Chi-square value of 4.712 (significant at p < .05), while FY2016 produced a Chi-square value of 6.905 (significant at p < .01). In these two years, the null hypothesis (H0: there is no association between performance management and permanency outcomes) is rejected, indicating that a statistically significant association was confirmed between the predictor and outcome. In the remaining years, the null hypothesis could not be rejected, meaning that no discernible association was observed.
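The sketch below illustrates how such a test can be computed on the binary measures with Python; the 2x2 table is built from invented success flags, and requesting the Pearson statistic without the continuity correction is an assumption about how the reported values were produced.

import pandas as pd
from scipy.stats import chi2_contingency

# Invented binary flags: 1 = standard met, 0 = standard not met
flags = pd.DataFrame({
    "pm_success":   [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
    "perm_success": [1, 0, 0, 1, 0, 1, 0, 1, 1, 1],
})

table = pd.crosstab(flags["pm_success"], flags["perm_success"])  # 2x2 contingency table
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(table)
print("Chi-square =", round(chi2, 3), "p =", round(p, 3))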
Logistic Regression
To this point, several associations between variables have been observed. To use a well-known adage, however, correlation is not causation. This sentiment was particularly apparent in the dissertation’s predictive modeling techniques, as only one model out of the twenty was
statistically significant. Table 21 below contains the output from the significant model found in FY2013. The remaining 19 outputs may be found in Appendix J.
Table 21. Firth Logistic Regression for FY2013, Performance Management and Permanency
2013 model
Term                     coef        se(coef)    lower 0.95   upper 0.95   Chisq       p
(Intercept)              -3.830284   1.601005    -7.184829    -0.9354582   6.909999    0.008571504
PrcntSucc4Qrts2013        4.843787   1.926340     1.340511     8.8486743   7.568580    0.005939457
Likelihood ratio test = 7.56858 on 1 df, p = 0.005939457, n = 57
Wald test = 6.322721 on 1 df, p = 0.01192006
As seen in Table 21, a statistically significant bivariate relationship was observed between performance management and permanency outcomes. Model significance was interpreted using the likelihood ratio test, which is based on Firth's bias-reduction estimates. In the model for FY2013, significance was observed at p < .01, offering some support for the dissertation's fourth hypothesis that as performance management increases, so will outcome achievement.
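Because Firth's bias-reduction approach is less familiar than ordinary logistic regression, a from-scratch Python sketch of the estimator is included below. It is illustrative only: the simulated data are invented, this is not the software used to produce Table 21, and in practice an established implementation of Firth's penalized likelihood would be preferred.

import numpy as np

def firth_logistic(X, y, max_iter=100, tol=1e-8):
    # Firth bias-reduced logistic regression fit by Newton-Raphson.
    # X: (n, k) design matrix including an intercept column; y: (n,) 0/1 outcome.
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        w = p * (1.0 - p)                          # logistic variance weights
        info = X.T @ (X * w[:, None])              # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # Leverages: diagonal of W^1/2 X (X'WX)^-1 X' W^1/2
        h = np.einsum("ij,jk,ik->i", X, info_inv, X) * w
        # Firth-adjusted score: X'(y - p + h(1/2 - p))
        step = info_inv @ (X.T @ (y - p + h * (0.5 - p)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Tiny usage example on simulated data (57 counties, one predictor)
rng = np.random.default_rng(1)
x = rng.uniform(0.3, 1.0, 57)                       # stand-in for a success percentage
X = np.column_stack([np.ones(57), x])
y = (rng.uniform(size=57) < 1 / (1 + np.exp(-(-3.8 + 4.8 * x)))).astype(float)
print(firth_logistic(X, y).round(3))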
Bootstrapping for Mediation
In reviewing the results of the logistic regressions, it became clear that the proposed bootstrapping procedure for mediation would not be appropriate. To briefly review, the underlying premise of a mediating variable (M) is that it transmits part of the relationship between some set of predictor variables (X) and an outcome of interest (Y). Bootstrapping enables the estimation of the mediating effect, that is, the degree to which variation in Y is associated with the indirect effect through M relative to the direct effect of X, through repeated resampling of the data. Lacking a direct effect, as was observed in this dissertation, there is nothing for the mediating variable to mediate. In other words, if the hypothesized predictors did not exert a statistically significant
influence on permanency outcomes, then performance management could not logically be the mediating effect between the two.
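For completeness, the following minimal Python sketch shows the general mechanics of the percentile bootstrap for an indirect effect; it uses simple OLS paths and invented continuous data, so it is a conceptual illustration of the procedure that was ultimately not run rather than the dissertation's planned specification.

import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    # Percentile bootstrap CI for the indirect effect a*b in a simple
    # X -> M -> Y mediation, with both paths estimated by OLS.
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample rows with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                  # path a: slope of M on X
        Z = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(Z, yb, rcond=None)[0][1]  # path b: slope of Y on M, given X
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])

# Invented data with a built-in indirect effect, for demonstration only
rng = np.random.default_rng(3)
x = rng.normal(size=64)
m = 0.5 * x + rng.normal(scale=0.5, size=64)
y = 0.6 * m + rng.normal(scale=0.5, size=64)
print(bootstrap_indirect_effect(x, m, y).round(3))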
Still, while the dissertation’s findings were not empirically significant, useful implications for practice may be gleaned. Before that discussion, though, a summary of the support level for the hypotheses based on the empirical analyses is presented.
Level of Support for Hypotheses
Based on the empirical analyses, support for the dissertation’s hypotheses was limited, at best. Figure 8 below presents a summary of the four hypotheses and levels of support based on the cluster analyses, Mann-Whitney U, correlations and Chi-squares, and the rare events logistic regressions.
Research Question Hypothesis Descriptive Support: Clusters Mann-Whitney U Support Correlations/ Chi-Square Support Rare Events Logit Support
RQ1: To what extent do systemic factors affect a child welfare agency's ability to achieve performance management standards? H1: Agencies with larger eligible populations will be less likely to achieve PM standards than those agencies with smaller populations. FY13 FY13 - -
H2: Agencies that operate within economically stable counties will be more likely to achieve PM standards than agencies that operate within economically struggling counties. FY14 through FY17 FY13 FY15 - -
H3: Agencies within urban settings will be more likely to achieve PM standards than agencies within rural or frontier settings. FY14 through FY17 - - -
RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencies? H4: Agencies that achieve PM standards will have better permanency outcomes than agencies that do not achieve standards. FY13 FY14 - FY13 FY16 FY13
Figure 8. Summary of support for hypotheses.
Reasons for the lack of hypothesis support are discussed in greater detail in the next
chapter. Two reasons, however, are presented here: 1) the empirical models did not adequately capture lagged effects, and 2) the presence of multicollinearity undermined the models' empirical accuracy. To check for the limitation of lagged effects, performance management and permanency outcomes from one year prior were modeled as predictor variables
in a subsequent study year; that is, the predictors were lagged so that the outcome at time t was modeled against values from time t-1 (Allison, 2015). For example, performance management in FY2014 was assessed for its impact on permanency outcomes in FY2015.
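A brief sketch of the lagging step in Python is shown below; the county labels and values are invented, and the grouped shift simply pairs each county-year outcome with the prior year's performance management score.

import pandas as pd

# Invented long-format panel: one row per county-year
panel = pd.DataFrame({
    "county":     ["A", "A", "A", "B", "B", "B"],
    "fy":         [2013, 2014, 2015, 2013, 2014, 2015],
    "perf_mgmt":  [0.90, 0.85, 0.88, 0.70, 0.75, 0.80],
    "permanency": [0.66, 0.63, 0.61, 0.50, 0.55, 0.58],
})

# Lag performance management by one fiscal year within each county so that
# permanency in year t is paired with performance management from year t-1.
panel = panel.sort_values(["county", "fy"])
panel["perf_mgmt_lag1"] = panel.groupby("county")["perf_mgmt"].shift(1)
print(panel)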
To examine multicollinearity, a series of sensitivity analyses were conducted wherein highly-correlated predictors (e.g., housing units and income) were added and removed from the models using stepwise regression. Such tests revealed the presence of multicollinearity across many of the predictor variables, but model adjustments did not result in greater statistical significance, suggesting that the redundancy did not have sufficient impact on the models to affect their interpretation or empirical utility. Because the models with the complete set of predictor variables were of theoretic relevance and the exclusion of various independent variables did not yield statistically significant results, all variables were ultimately retained in the final models.
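The stepwise sensitivity checks described above were the procedure actually used; as a complementary, purely illustrative diagnostic, the sketch below computes variance inflation factors on deliberately correlated stand-in predictors, a standard way of quantifying the kind of redundancy observed among housing units and income.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Stand-in predictors: income is constructed to be correlated with housing units
rng = np.random.default_rng(2)
housing = rng.normal(size=64)
income = 0.8 * housing + rng.normal(scale=0.5, size=64)
frl = rng.normal(size=64)
X = sm.add_constant(pd.DataFrame({"housing": housing, "income": income, "frl": frl}))

# VIF for each column; values well above roughly 5-10 signal redundancy
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 2))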
The lack of additional statistically-significant relationships upon model re-specification suggests that, while imperfect, the original models were robust in their specifications. More practically important, however, is that even with modeling adjustments, no significant causal mechanisms were identified, prompting further supposition as to why that might occur. This supposition is the focus to which the dissertation now turns.
V. DISCUSSION AND CONCLUSION
Having presented the empirical findings, this final chapter discusses the implications of the results and suggests directions for further research. Toward that end, the conclusion has three subsections: Discussion, Study Limitations, and Next Steps.
Discussion
This dissertation was guided by two primary research questions. The first asked to what extent systemic constraints impact performance management in child welfare agencies, while the second inquired as to the effect of performance management on organizational outcome achievement. The quantitative analyses provided little empirical support for either question. More specifically, the dissertation could not conclude analytically that performance management was affected by systemic factors or that it had a demonstrable impact on organizational outcomes. Instead, the core issue of performance management as a mediating influence remains uncertain, as evidenced by the presence of correlational, but not predictive, association.
Why might the discrepancy between correlational association and causal mechanisms persist in this dissertation? One possibility is that variables were connected to one another in a relational sense, but not a predictive one. To explain, consider this illustrative example. In several fiscal years, an association was determined to exist between income (a systemic constraint) and performance management. The presence of this association suggests that agencies achieving higher performance management standards are also situated in counties with greater economic stability. Therefore, a correlational relationship may exist between these two constructs. The lack of predictive significance, though, implies that income is not a driver of performance management, meaning that, while theoretically and practically important, income in
and of itself does not determine performance. Despite its predictive limitations, however, the dissertation’s empirical analysis yielded some notable patterns across variables, summarized in the numbered list below.
1. Counties with smaller eligible populations had better performance management records
2. Counties with stronger economic profiles had better performance management records
3. Counties with fewer service providers had better performance management records
4. An inconsistent relationship existed between performance management and permanency
As seen in the list above, several key takeaways may be gathered from the empirical
analysis. Consistent with theoretic expectations, counties with smaller populations generally had stronger performance management achievement. Greater population size was projected to impose a larger strain on organizational capacity, thereby limiting performance management, and its descriptive association with performance management suggests that child welfare agencies may fare better when there is a smaller pool of clients for which service needs arise.
Similarly, counties with stronger economic profiles, particularly those with higher average per capita income and lower dependence on free and reduced lunch, also performed better with respect to performance management. There was a direct relationship between population size and income and an inverse relationship between population and school lunch eligibility. In other words, counties with larger populations also generally had higher per capita income and lower eligibility for free and reduced lunch. It is interesting, then, that smaller counties with less economic stability achieved performance management standards about as consistently as larger counties with greater economic stability. Such an inconsistent observation indicates that it is not entirely clear whether population size or economic health has a greater influence on achieving performance management standards. Further research should study this disconnect in
greater detail in order to recommend meaningful strategies to organizations that are constrained by factors outside their immediate control.
The third observed pattern involved the availability of service providers. In general, counties with fewer resource opportunities out-performed county agencies with greater resource availability. One possible explanation is that the quantity of services available does not equate to the quality of service provision. Smaller counties with fewer agencies may have higher-quality providers that have stayed operational because they are able to meet contract expectations and outlive potential competition. Alternatively, if quality of service provision is not the driver, it may be that the quantity of providers in larger areas is disproportionate to the number of children in care. That is, even when larger urban counties have a greater number of providers, there may not be enough providers or breadth of services to attend to all children.
Finally, the inconsistent relationship between performance management and permanency outcomes should not be understated. Of all the noted patterns, this is arguably the most important because of its implications for theory and practice. If achieving performance management standards does not equate to achieving better long-term outcomes, our theoretic expectations may need to be reconsidered or agencies may be reinforcing behaviors and organizational goals that ultimately do not facilitate success.
Thus, while the lack of empirical support was understandably disappointing, it does not negate the dissertation's theoretic and practical contributions. If anything, the ambiguity surrounding systemic constraints, performance management, and permanency outcomes underscores the need for further examination of these constructs from a theoretic vantage point and careful consideration of their application in organizational settings. To illustrate the urgency
of conceptually rich, empirically sound, and practically meaningful constructs of performance management, consider the following example.
In a 2014 workload study submitted to The Office of the State Auditor, CDHS documented the severity of the resource scarcity within county child welfare offices (CDHS Workload, 2014). According to the report, “estimated workload levels (i.e., amount of time that should be spent on a case per month) would require between 18 and 157 percent more time per month for each service than the actual amount of time child welfare workers spent on each service during the time of the study” (CDHS Workload, 2014, p. v). Perhaps more astonishing, to meet the expected time requirements, CDHS articulated a need for an additional 574 caseworkers and 122 supervisor positions (CDHS Workload, 2014, p. v). In the face of such tremendous constraints, it is critical to understand what adjustments can be made internally to improve performance management and to find solutions for working under conditions of environmental uncertainty.
If the theoretic constructs are misaligned with performance management data in practice, researchers may be missing a valuable opportunity to define, measure, and improve organizational effectiveness. Perhaps more immediately, if organizations such as child welfare agencies in Colorado are fundamentally assessing performance adherence against unreasonable standards, employees already under tight resource constraints may be working towards objectives that do not enhance agency performance or facilitate better long-term outcomes for the children in their care.
Study Limitations
As with any research study, this project was not immune to limitations. There are two main types of study limitations: threats to validity and reliability (Singleton, Jr., & Straits, 2010). Threats to validity include two forms: internal and external. Internal validity reflects the extent to
which the operationalized measures are consistent with the concepts being studied. A misalignment between the conceptual and empirical variables can diminish internal validity. External validity refers to generalizability, that is, the extent to which the dissertation's conclusions are applicable across a variety of contexts. Finally, reliability is a measure of consistency used to determine the likelihood that measures are recorded in a uniform manner (Singleton & Straits, 2010).
Perhaps the greatest limitation in the dissertation was the threat to internal validity. Internal validity can be assessed in a myriad of ways, but two types were especially relevant in this research: predictive validity and construct validity. According to Tashakkori and Teddlie (1998), a measure is thought to have high predictive validity if it can accurately predict an outcome of interest and high construct validity if it is appropriately correlated with measures of the same construct and not correlated with theoretically unrelated concepts. In this dissertation, predictive validity was low, as demonstrated by the lack of statistical inference able to be generated from the predictive models. Construct validity fared better, as similar measures of both performance management and permanency outcomes had high levels of correlation in each of the study years.
Beyond internal validity, threats to external validity were also considered. While a lack of generalizability beyond the context of child welfare may be perceived as a threat to external validity, it is important to note that many elements of the dissertation were not intended to be applicable beyond their current context, as measures were drawn directly from the study setting. Despite this empirical limitation, it is anticipated that some of the broader theoretic constructions are of import to other areas of inquiry.
Issues of reliability were presumably not a threat, as counties input the same metrics into the case management system (TRAILS) and the measures have been consistent over time. Unless there was an internal change to the metric calculations that was not stated, reliability should be high. Finally, the issue of replicability deserves mention. This dissertation provided a detailed overview of the data collection, variable measurement, and analysis procedures so that future researchers should be able to replicate the dissertation exactly and derive similar findings to those presented here.
There are, at minimum, four plausible explanations accounting for the dissertation’s limitations. The first explanation is that the conceptual framework was mis-specified. While the theoretic footing was grounded in extant research, the supposition that performance management is a mediating influence may have been overly simplified. Still, this dissertation’s purpose was not to be exhaustive in its treatment of public management, but rather to elucidate how systemic constraints may impact performance, which may in turn affect outcomes. In that pursuit, the conceptual framework appeared appropriate.
The second explanation of the dissertation’s limitations is that the measures of performance management and permanency outcomes are inconsistent with our theoretic expectations. These variables, in both name and data, were taken directly from a governmental agency that has identified them as the metrics capturing performance and outcomes. It is possible that such measures are not truly reflective of performance and outcomes, despite their naming conventions. If this is the case, caution should be exerted when reinforcing programmatic, policy, and budgetary expectations aligned with the achievement of these objectives.
The third, and simplest, explanation lies in the small population size. A quantitative approach was appropriate given the numeric nature of the data, but it is well known that statistical power is limited in especially small samples (Cohen, 1992). The modeling techniques responded to this limitation by restricting the number of predictor variables and by specifying a binary outcome suitable for a rare events logistic regression, an estimator designed to be robust to small samples; even so, these efforts may not have been enough to overcome the constraint.
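As an illustration of the rare events adjustment referenced above, the sketch below implements Firth’s penalized-likelihood logistic regression with a modified Newton-Raphson score step. It is a minimal, hypothetical example rather than the dissertation’s actual implementation (the analyses may well have been run in a statistical package); the Jeffreys-prior penalty shrinks coefficient estimates and reduces the small-sample bias of ordinary maximum likelihood when the outcome is rare.

```python
# Minimal Firth-penalized logistic regression sketch (hypothetical inputs).
import numpy as np

def firth_logit(X, y, max_iter=100, tol=1e-8):
    """Firth-penalized logistic regression; X is n-by-k (no intercept), y is 0/1."""
    X = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))          # fitted probabilities
        w = p * (1.0 - p)                       # logistic weights
        info = X.T @ (X * w[:, None])           # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # Diagonal of the hat matrix H = W^(1/2) X (X'WX)^(-1) X' W^(1/2)
        h = w * np.einsum("ij,jk,ik->i", X, info_inv, X)
        # Firth-modified score: X'(y - p + h*(0.5 - p))
        score = X.T @ (y - p + h * (0.5 - p))
        step = info_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Illustrative use with simulated data for 64 counties and a rare (about 10%) outcome.
rng = np.random.default_rng(2)
X = rng.normal(size=(64, 2))                    # e.g., standardized constraint measures
y = (rng.random(64) < 0.10).astype(float)       # rare binary permanency indicator
print("penalized coefficients:", firth_logit(X, y))
```

Even with the penalty, 64 observations leave little statistical power, which is consistent with the limitation described above.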
Finally, and most likely, the dissertation’s limitations reflect some combination of the explanations above. It is not inconceivable that further theoretic refinement would lead to the inclusion and analysis of variables outside the original dissertation’s scope. Concomitantly, the measures of performance management and permanency may be theoretically inconsistent, and child welfare agencies may continue to unknowingly incentivize ineffective objectives. And, of course, the inherent limitations of the small sample size cannot be discounted. With these limitations considered, the following sub-section proposes a series of steps that may further address these areas of concern.
Next Steps
The purpose of this dissertation was threefold: first, to contribute to the field of public management’s understanding of performance management by emphasizing the concept’s importance to organizations; second, to collect and analyze data that will have a meaningful impact in practice for child welfare agencies in Colorado; and third, to serve as a starting point for further study. In continued pursuit of those objectives, four additional steps are proposed.
First, revisiting the dissertation’s conceptual framework may be warranted. The core premise of this research was that performance management is best understood as a mediating influence between systemic constraints and organizational outcome attainment. While theoretic argument reasoned that examining the concept in its mediating form was an underexplored facet of performance management, the lack of corresponding empirical support suggests that the conceptual framework may have been oversimplified and did not capture the dynamic nuances in sufficient detail to garner statistical support. Further refinement of this conceptual frame to explicitly examine endogenous influences such as management practice, organizational culture, and public service motivation may be required to enhance the theoretic contribution.
The second step involves outreach to practitioners within Colorado child welfare agencies. As Moynihan and Pandey (2010) argued, understanding how and why government agencies use performance management data is critical to improving public management in both theory and practice. More specifically, in the context of child welfare, accurate reporting is essential for compliance with federal and state regulations and, more importantly, for building greater proficiency among agency employees to help improve long-term outcomes. As Jolles et al. (2017) stated simply, “Workers are unlikely to accurately and consistently report performance information if they do not understand performance measures, or do not believe they are meaningful.... In short, managing frontline workers’ engagement in performance measurement is critical to success” (p. 1169). Because Colorado has long required consistent, systematic electronic data entry under its case management system TRAILS, it is unlikely that inaccuracy persists in data collection, but Jolles et al.’s (2017) emphasis on the competency/engagement connection among frontline workers should not be underestimated. Therefore, engaging with the practitioner community is an important step in refining and furthering this research.
The third step seeks to extend the current dissertation using an alternative research design: qualitative methods. As broad patterns could not be ascertained from the quantitative analyses conducted here, qualitative methods may provide insight not gleaned from a quantitatively informed, systems-level perspective. Indeed, engaging child welfare stakeholders by providing them with thorough, longitudinal data specific to their jurisdictions, for the purpose of understanding and improving performance, may open opportunities for information-sharing. Using such an opportunity to develop and execute a supplementary qualitative study would be of considerable import to the child welfare agencies responsible for service provision and may illuminate additional theoretic insights.
Finally, the last step aims to articulate policy alternatives that may be considered when examining performance management in practice. In both child welfare research and practice, performance management is often defined as timeliness, a measure that is also utilized by the Colorado Department of Human Services. This definition, however, may overemphasize the efficiency, rather than the quality, of the services provided and may therefore fail to capture the essence of performance management. Instead, metrics that emphasize the quality of agency actions should be considered.
As one example, in 2013, CDHS developed C-Stat, a “performance-based analysis strategy that allows every CDHS program to better focus on and improve performance outcomes” (CDHS C-Stat, n.d.). While still accounting for timeliness as an indicator of performance management, C-Stat also assesses agencies on a more comprehensive assortment of quality metrics, including the recurrence of child maltreatment, the use of out-of-home emancipation transition plans, and recidivism into the child welfare system (CDHS, 2019). Although these data are not currently publicly available at the county level, they may offer a more nuanced approach to the study of performance management than an emphasis on timeliness alone.
If agencies can refine their data collection and assessment procedures to gather more holistic information on performance management practices, then theoretic research may be able to better align with empirically derived conceptualizations. Currently, the child welfare literature overwhelmingly relies on agency-determined indicators of performance management instead of advancing theoretic alternatives. Greater cooperation between scholars and ground-level experts should both strengthen the conceptual understanding of performance management and reinforce agency behaviors toward a system of performance management that truly impacts organizational outcomes.
In short, this dissertation has sought to make both theoretic and practical contributions. With theoretic refinement, community outreach, and further study, a trajectory is envisioned that continues to explore the important role performance management plays within governmental organizations and improves the experiences of children placed in out-of-home care.
REFERENCES
Alach, Z. (2017). Towards a standard conceptual typology of public sector performance measurement. Kotuitui: New Zealand Journal of Social Sciences Online, 12(1), 56-69. doi: 10.1080/1177083X.2016.1225579
Allison, P. (2015). Don’t put lagged dependent variables in mixed models. Retrieved from https://statisticalhorizons.com/lagged-dependent-variables
Amirkhanyan, A. A., Kim, H. J., & Lambright, K. T. (2014). The performance puzzle: Understanding the factors influencing alternative dimensions and views of performance. Journal of Public Administration Research and Theory, 24(1), 1-34. doi: 10.1093/jopart/mut021
Andrews, R., Boyne, G. A., Moon, M. J., & Walker, R. M. (2010). Assessing organizational performance: Exploring differences between internal and external measures. International Public Management Journal, 13(2), 105-129. doi: 10.1080/10967491003766533
Arnaboldi, M., Lapsley, I., & Steccolini, I. (2015). Performance management in the public sector: The ultimate challenge. Financial Accountability & Management, 31(1), 1-22. doi: 10.1111/faam.12049
Arsneault, S. (2006). Implementing welfare reform in urban and rural communities: Why place matters. American Review of Public Administration, 36(2), 173-188.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.
Barth, R. P., Wildfire, J., & Green, R. L. (2006). Placement into foster care and the interplay of urbanicity, child behavior problems, and poverty. American Journal of Orthopsychiatry, 76(3), 358-366. doi: 10.1037/0002-9432.76.3.358
Barzelay, M. (1992). Breaking through bureaucracy: A new vision for managing in government. Berkeley, CA: University of California Press.
Behn, R. D. (1995). The big questions of public management. Public Administration Review, 55(4), 313-324. doi: 10.2307/977122
Belanger, K., & Stone, W. (2008). The social service divide: Service availability and accessibility in rural versus urban counties and impact on child welfare outcomes. Child Welfare, 87(4), 101-124.
90


Full Text

PAGE 1

i THE LINK BETWEEN SYSTEMIC CONSTRAINTS AND PERMANENCY OUTCOMES by CARRIE L. CHAPMAN B.A., University of North Carolina at Asheville, 2010 A thesis submitted to the Faculty of the Graduate Sc hool of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy Public Affairs Program 2019

PAGE 2

ii © 2019 CARRIE L. CHAPMAN ALL RIGHTS RESERVED

PAGE 3

iii This thesis for the Doctor of Philoso phy Degree by Carrie L. Chapman has been approved for the Public Affairs Program by Danielle M. Varda, Advisor John Ronquillo Peter deLeon Darrin Hicks Date: May 18, 2019

PAGE 4

iv Chapman, Carrie L. (PhD, Public Affairs Program ) Performance Management in Co The Link Between Systemic Constraints and Permanency Out comes Thesis directed by Associate Professor Danielle M. Varda ABSTRACT Studies of performance management have long been central to the field of public management, no ted for their importance in understanding organizational behavior and facilitating better outcome achievement. Most studies, however, have explored performance ma nagement as either a dependent variable affected by broader systemic constraints or as an inde pendent variable influencing the attainment of organizational objectives. This dissertation proposes that performance management is better understood as both pred ictor and outcome and pulation study of 64 county child welfare agencies, this thesis analyzed how systemic constraints including population size, economic profiles, and geography impa performance management standards and, in turn, the extent to which performance management impacted permanency outcomes for children in out of home care. A quantitative research design employing cluster analysis and logis tic regression for rare events indicated that only limited empirical support existed, suggesti ng that future studies should continue to develop richer insight regarding the role of performance management to bolster our theoretic and practical understanding of this complex concept. The form and content of this abstract are approved. I recommend its publication. Approved: Danielle M. Varda

PAGE 5

v This dissertation is dedicated to my parents, Daniel and Diane Chapman, whose unwavering support throughout a journey across many bumpy roads allowed me to fly .

PAGE 6

vi ACKNOWLEDGMENTS There are many peopl e responsible for the completion of this degree. There was many a night when I was not convinced this pursuit w ould ever manifest into its credentialed completion. There are many nights, still, when I reflect in disbelief at my reality. But thanks to so ma ny extraordinary people, with their guidance, brilliance, and support, I have been able to Danielle Varda, t his dissertation would never have been possible without you. Your expertise, kindness, knowing just what I needed and delivering every time, brought this process to completion . To my committee members, Peter deLeon, John Ronquillo, and Darrin Hicks, thank you for your patience, feedback, and willingness to go on this journey with me. I am deeply indebted to y ou all. To Dawn Savage, you are a rock star of epic proportions. Along the way, I have been fortunate to have so many friends, colleagues, and family guide me, including: Todd Boesdorfer, Jon Pierce, Kate and Robert Cope, Chelsey Weaver, Bethany Johnson, Alice Hall, Sandra Hodgin, Warren Eller, Brian Gerber, Benoy Jacob, Erin Crites, Diane Johnson, Mary and Paul Bougie, Lena Lucivero , Kathleen Gallaghe r, Vanessa Fenley, Brenda Dickhoner, Bill Sabo, Mark Gibney, Ida Drury, Alex Henderson, Mike Sabato, and Erin Lynch. To Roger Desrosiers, thank you for being my greatest mentor. To the other wonderful influences whose names are too numerous to mention, know how much I thank and appreciate you. Finally, I would not be here today were it not for the support o f my family. Mom and Dad, thank you for being my strongest champions. Laura, Chris, and Ray, thank you, always, for e all along. My love for you all knows no bounds.

PAGE 7

vii TABLE OF CON TENTS CHAPTER I. INTRODUCTION ................................ ................................ ................................ ................................ ... 1 Significance of the Problem ................................ ................................ ................................ .................... 2 Evidence from the Lit erature ................................ ................................ ................................ ................. 4 Research Questions and Hypotheses ................................ ................................ ................................ ..... 6 Overvi ew of Dissertation ................................ ................................ ................................ ........................ 7 II. RE VIEW OF THE LITERATURE ................................ ................................ ................................ ..... 8 Definitions: What is Performance Management? ................................ ................................ ................ 8 Why Performance Management? Importance to Research and Practice ................................ ....... 10 The Factors Affecting Performance Management in Organizations ................................ ............... 12 Eligible Population ................................ ................................ ................................ .............................. 13 Economic Profile ................................ ................................ ................................ ................................ 14 Geography ................................ ................................ ................................ ................................ ........... 15 .............................. 15 Conceptual Framework ................................ ................................ ................................ ........................ 18 Study Context ................................ ................................ ................................ ................................ ........ 20 III. METHODOLOGY ................................ ................................ ................................ ............................. 25 Research Questions and Hypotheses ................................ ................................ ................................ ... 25 Population and Sample ................................ ................................ ................................ ......................... 27 Data Collection ................................ ................................ ................................ ................................ ...... 28 Empirical Measures ................................ ................................ ................................ .............................. 32 Systemic Factors Influencing Performance Management (Concepts 1 3) ................................ .......... 34 Performance Management (Con cept 4) ................................ ................................ .............................. 38 Control Variables (Concepts 5 7) ................................ ................................ ................................ ....... 40 Permanency Outcomes (Concept 8) ................................ ................................ ................................ .... 43 Data Analysis ................................ ................................ ................................ ................................ ......... 
45 Descriptive Statistics ................................ ................................ ................................ ........................... 45 Means Comparisons, Correlations, and Chi square ................................ ................................ ............ 47 Logistic Regression for Rare Events ................................ ................................ ................................ ... 48 Testing for Mediation: Bootstrapping ................................ ................................ ................................ . 51 IV. RESULTS ................................ ................................ ................................ ................................ ............ 53 Descriptive Findings ................................ ................................ ................................ ............................. 53 Descriptive Statistics ................................ ................................ ................................ ........................... 53

PAGE 8

viii Two Step Cluster Analysis ................................ ................................ ................................ ................. 60 Inferential Findings ................................ ................................ ................................ .............................. 66 Means Comparisons, Correlations, and Chi Square Statistics ................................ ............................ 66 Logistic Regression ................................ ................................ ................................ ............................. 76 Bootstrapping for Mediation ................................ ................................ ................................ ............... 77 Level of Support for Hypotheses ................................ ................................ ................................ ......... 78 V. DISCUS SION AND CONCLUSION ................................ ................................ ................................ .. 80 Discussion ................................ ................................ ................................ ................................ .............. 80 Study Limitations ................................ ................................ ................................ ................................ .. 83 Next Steps ................................ ................................ ................................ ................................ .............. 86 REFERENCES ................................ ................................ ................................ ................................ .......... 90 APPENDIX A. Variable Labels ................................ ................................ ................................ ................................ . 100 B. All Variables by County for Fiscal Ye ar 2013 ................................ ................................ ................ 101 C. All Variables by County for Fiscal Year 2014 ................................ ................................ ................ 104 D. All Variables by County for Fiscal Year 2015 ................................ ................................ ................ 107 E. All Variables by County for Fiscal Year 2016 ................................ ................................ ................ 110 F. All Variables by County for Fiscal Year 2017 ................................ ................................ ................ 113 G. Percent Change Across Variables by County from Fiscal Year 2013 to Fiscal Year 2017 ........ 116 H. Selected Histograms Demonstrating Normality Approximations ................................ ................ 119 I. Mann Whitney U Me ans Comparisons for Fiscal Years 2016 and 2017 ................................ ....... 120 ................................ ................................ ............ 122

PAGE 9

ix LIST OF TABLES TABLE 1. Overview of Research Questions, Concepts, and Measures ................................ ..................... 33 2. Descriptive Statistics fo r FY2013 ................................ ................................ ............................. 55 3. Descriptive Statistics for FY2014 ................................ ................................ ............................. 56 4. Descriptive Statistics for FY2015 ................................ ................................ ............................. 57 5. Descriptive Statistics for FY2016 ................................ ................................ ............................. 57 6. Descriptive Statistics for FY2017 ................................ ................................ ............................. 58 7. Cluster Analyses for FY2013 ................................ ................................ ................................ .... 61 8. Cluster Analyses for FY2014 ................................ ................................ ................................ .... 62 9. Clust er Analyses for FY2015 ................................ ................................ ................................ .... 63 10. Cluster Analyses f or FY2016 ................................ ................................ ................................ .. 64 11. Cluster Analyses for FY2017 ................................ ................................ ................................ .. 65 12. Ma nn Whitney U Means Comparisons for FY2013 ................................ ............................... 67 13. Mann Whitney U Means Comparisons for FY2014 ................................ ............................... 69 14. Mann Whitney U Means Comparisons for FY2015 ................................ ............................... 70 15. Correlation Matrix for FY2013 ................................ ................................ ............................... 72 16. Correlation Matrix for FY2014 ................................ ................................ ............................... 72 17. Correlation Matrix for FY2015 ................................ ................................ ............................... 73 18. Correlation Matrix for FY2016 ................................ ................................ ............................... 74 19. Correlation Matrix for FY2017 ................................ ................................ ............................... 75 20. Chi Square Tests for FY2013 to FY2017 ................................ ................................ ............... 76 21. Firth Logistic Regression for FY2013, Performance Management and Permanency ............ 77

PAGE 10

x LIST OF FIGURES FIGURE 1. A co nceptual framework of performance management. ................................ ............................. 4 2. A conceptual framework of performance management. ................................ ........................... 19 3. Configuration of states' admini strative structures. ................................ ................................ .... 21 4. Colorado county geographic designations. ................................ ................................ ............... 37 5. Visual re present ation of cluster formation.. ................................ ................................ .............. 46 6. Rare event bias in logistic regression. ................................ ................................ ....................... 49 7. Changes i n performance management relative to changes in permanency success. ................ 59 8. Summary of support for hypotheses. ................................ ................................ ........................ 78

PAGE 11

1 I. INTRODUCTION The concept of performance manageme nt has historically been of import ance to the broader fields of public management and public administration ( Kroll & Moynihan, 2017 ) . Currently, the concept of performance management has garnered considerable attention as research attempts to unpack what h the likes of which are regarded as essential facets of organizational goal attainment but equally challenging to define theoretically and assess empirically ( ) . Typically, th e concept of performance management is studied through one of two lenses: either as a causal mechanism , influencing organizational outputs, or as a dependent variable, being affected by some series of events within an agency. This dissertation proposes to examine performance management as a me diating influence that is, to analyze performance management as the link between broader systemic predictors and organizational outcome realization . It is proposed that exogenous influences affect organizational perfor mance, understood here as performance management, and that, in turn, performance management impacts the attainment of organizational outcomes. Thus, rather than explore a direct relations hip between exogenous forces and outcome achievement, it is posited t hat performance management exists as a mediating variable between the two. For example, the size of the population eligible for services may be hypothesized to impact organizational outco me attainment through a strain it is argued here that population strains will first impact the ability to achieve performance management standards, the effects of which will then impact outcome realization. In this way, performance management is the intermediate link between systemic f actors and outcomes.

PAGE 12

2 The remainder of this introduction identifies the significance of the problem, including a statement of the dissertation mmarizes key themes from the literature, articulates the dissertation research questions and hypotheses, and concludes with Significance of the Problem Effective p erformance management has been deemed essential to facilitate organizational outcome attainment. In the face of r esource scarcity, competing expectations from stakeholder s , and public sector systems burdened by excessive workload and service demands, understanding t he factors that drive organizational outcomes is critical ( Arnaboldi, Lapsley, & Steccolini, 2015 ). Yet , despite recognition in both theory and practice that performance management is necessary to realize broader organizational objectives, the construct re mains ambiguous in its definition, making it difficult to ascertain just what, exactly, organizations a re striving to achieve and how they might best set about achieving it. Among the many definitions that exist, this dissertation employs Radnor and Barnes management as: the quantifying, either quantitati vely or qualitatively, of the input, output or level of activity of an event or process. Performance management is action, based on performance measures and reporting, which results in improvements in behavior, motivation and processes and promotes innovat ion. (Radnor & Barnes, 2007, p. 393) When understood as an action, performance management can be robustly defi ned as a dynamic variable both critic al to organizational success and shaped by events that precede its impact on larger goals. In reflecting on exta nt research, these two patterns of performance management conceptualization were observed. In the first pattern, performance management is treated a s a dependent variable of sorts, being subject to variability from both organizational and

PAGE 13

3 environmenta l constraints. In the latter, performance management is conceived as a predictor, a driving force behind the attainment of organizational outcomes. While such conceptualizations have contributed to the development of a rich body of research, this dissertat ion posits that examining performance management exclusively as outcome or predictor does not warrant comprehensive understanding. Instead, it is argued that performance management is better classified as a mediating influence, being at once impacted by sy stemic constraints and in turn affecting organizational outcome achievement. It is within this reconceptualization that the dissertation aims to offer it s theoretic contribution. Beyond theoretic advancement, this dissertation also hopes to have a viable contribution to practice. With limited time and resources available, public agencies are known to struggle in making the best use of performance management data ( Heinrich, 1999; Moynihan & Pandey, 2010; Moynihan & Kroll, 2016 ). If performance management i s demonstrated t o impact outcomes, however, then refinement of and improvements to organizational practices may be best actualized through the insights gleaned from performance management indicators. Framed differently, it is important for agencies to unde rstand if they a re accurately measuring progress and processes that contribute to overall effectiveness, and if not, to use performance data to better leverage existing resources in alignment with intermediate outcomes known to have a meaningful impact on long term goals. By examining performance management within the context of child welfare using measures identified by the Colorado Department of Human Services, this dissertation can offer a practical contribution to an organizational structure that has al ready defined th e performance benchmarks to which service providers are expected to adhere.

PAGE 14

4 Evidence from the Literature There exists an extensive body of research dedicated to the study of performance management in public organizations. Typically, pe rformance manage ment is conceptualized as either an outcome, being affected by larger managerial, organization, and environmental constraints, or predictor, influencing the achievement of broader organizational objectives ( c f. Heinrich, 2002; McBeath & Mee zan, 2010; Amirkhanyan, Kim, & Lambright, 2014 ). In this dissertation , performance management is conceptualized as both predictor and outcome, arguing that the construct is better understood as an intermediate output between systemic constraints, on the on e hand, and outcome achievement, on the other. Specifically, this relationship is explored which refers to either biological reunification or adoption as an exit f rom out of home care for system involved youth ( ). Figure 1 below presents this conceptual framework. Figure 1 . A c onceptual f ramework of p erformance m anagement .

PAGE 15

5 To explain the role of perfor mance management, several lines of reasoning are considered. First, performance management is expected to be impacted by macro level systemic factors, including the population eligible for child welfare system involvement, a lo economic profile wit h an emphasis on poverty, and geographic location , all of which are consistent with the traditional conceptualization of performance management as a dependent variable (path a in the diagram above) . Each of these systemic influences ha ve been examined in extant ch ild welfare scholarship with considerable theoretic and empirical support. Second, the achievement of performance management standards is anticipated to affect organizational outcomes , as depicted using path b in the second tra ditional conceptualization . A s with systemic factors ( that is, the social , economic , and geographic factors that can influence organizational processes and outcomes) , the literature pertaining to outcomes is situated within the study of child welf are. With the traditional conceptual frameworks established, this dissertation argues that neither comprehensively captures the dynamic nature of performance manageme nt and therefore proposes an alternative conceptualization. In this way, performance mana gement is considered to exist between paths a and b, being a dependent variable (that is, affected by systemic constraints) in path a and, in turn, acting as a casual mechanism that impacts outcome achievement in path b. To test this conceptual framework, per formance management is empirically represented as a mediating variable, where its influence may be analyzed quantitatively as an intermediate path between systemic con straints and outcomes (path c). For performance management to exert a mediating influence , it is assumed that systemic constraints exhibit a direct effect on permanency outcomes. To evaluate the conceptual framework, two research questions and four affilia ted hypotheses are posited.

PAGE 16

6 Research Questions and Hypotheses Based on the literature h ighlighted in the previous section, the dissertation is guided by two central research questions and four associated hypotheses. The first research question is: pe rformance management standards? Three hypotheses a re proposed for research question 1. The first hypothesis involves the eligible population and is stated as: H1: Agencies with larger eligible populations will be less likely to achieve performance manage ment standards than those agencies with smaller popu lations. The dissertation H2: Agencies that operate within economically stable counties will be more likely to achieve perf ormance management standards than agencies that oper ate within economically struggling counties. The third and final hypothesis associated with research question 1 considers the impact of a H3: Agencies within urban settings will be more likely to achiev e performance management standards than agencies within rural or frontier settings. With performance management designated as a dependent variable in the first research question, the second part of th is dissertation reimagines its role as a predictor i nfluencing organizational outcomes. Toward that aim, the second research question is: RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencie s? A fourth hypothesis is proposed to consider the directionality of the relationship between performance management and outcomes. This final hypothesis proposes : H4: Agencies that achieve performance management standards will have better permanency ou tcomes than agencies that do not achieve standards.

PAGE 17

7 To determine levels of support for the proposed hypotheses, a quantitative research design was utilized. Descript ive and inferential statistics were employed for each of the four hypotheses, and implicat ions for theory and practice are discussed. Overview of Dissertation This dissertation consists of five chapters. The first chapter has been this introduction, wh ich has specified the purpose of the dissertation , provided a broadened overview of the re levant literature, and articulated the dissertation conceptual framework, research questions , and hypotheses. The next chapter synthesizes pertinent literature within performance management and child welfare from which the research questions were derived . It also proposes the dissertation his review of the literat ure concludes with the dissertation of the Colorado child welfare system. The third chapter contains detailed descriptions of the dissertation s, including the population and sample, data collection, empirical measures, and analytic strat egy. The fourth chapter summarizes the results of the empirical design to determine the amount of support evidenced for the dissertation Finally, t he fifth chapter discusses the dissertation s directi ons for further research . Specifically, the discussion centers around the importance of aligning our theoretic understandings with the practical applications of performance management. The refinement of theory as a mechanism to inform practice can enhance c onceptual development in research to move theoretical understanding of these concepts forward while operationalizing sound empirical measurement in organizationa l settings , the absence of t the theoretic utility of performance management.

PAGE 18

8 II. REVIEW OF THE LITERATURE This review of the literature will examine performance management within the context of theories related to public management and child welfare , including the research and practice, its antecedents, and its impact on organizational outcomes. The literature review is organized in the following subsections: D efinitions: What is Performance Management; Why Performance Management? Importance to Research and Practice; The Impact on Organizational Outcome Attainment; Conceptual Fra mework ; and Study Context . Definitions: What is Performance Management? In attempting to conceptualize performance management, it is readily apparent that no singular definition exists. Like other complex constructs in public management, performance ma nagement has undergone much theoretic revision. Or, as Alach (2017) succinct ly stated, p erforman c e management is difficult to define. Despite its conceptual ambiguity, however, it can be argued tha t the core of performance management relates to the measure ment of an organizational expectation, the results of which inform decision making. As Moynihan (2008) explains, performance management is defined as generates performance information through strategic planning and performance measurement r outines, and connects this information to decision venues, where, ideally, t he information p. 5). In a more complex specification, Pollit t (2013) conceptualized performance management as a dynamic system of interr elated elements, each with its own set of decision making criteria, ambiguit ies, and implications for organizational effectiveness. According to Pollitt (2013) , the elements of program activity, measurement, data, application of criteria (or standards), qu antitative information synthesis, and informed decisions exist cyclically to produce a performance management system within governmental

PAGE 19

9 organizations. Alternatively, performance management has been conceptualized as a process olling and managing both the achievement of outcomes as well as the means us 2017). Of the several conceptual definitions, one is particularly applicable to this dissertati on . The definition elected for application here was posed by Radnor and Barnes in 2007. In articulating their conceptualization, the authors distinguished two interrelated constructs performance measurement and performance management : Performance measurem ent is the quantifyi ng, either quantitatively or qualitatively, of the input, output or level of activity of an event or process. Performance management is action, based on performance measures and reporting, which results in improvements in behavior, moti vation and processes and promotes innovation . ( Radnor & Barnes, 2007, p. 393) important because it appears to reflect accurately the complexities of performance manage ment in practice. Or ganizations must decide which actions to take based on quantified performance measurement, the consequences of which are presumed to affect broader organizational n the direction of t he relationship, meaning that better performance management is expected to improve organizational effectiveness. This dissertation accepts this foundational definition but challenges the extent to which performance management may be aff ected by exogenous s ystemic constraints, which are anticipated to impact the actionable steps undertaken by agencies in pursuit of outcome achievement. Before that discussion, however, it is important to first explain the theoretical and practical rational e for studying perfo rmance management.

PAGE 20

10 Why Performance Management? Importance to Research and Practice Despite is permanency in the public management literature, the question why study performance management remains. This sub section provides justif ication as to why this concept continues to be relevant to both research and practice. The concept of performance management has long been of theoretic import ance to the fields of public management and administra , 2005). As entities that must be responsive to the needs of various stakeholders including the legislature, auditors, media, the public, and non governmental organizations often simultaneously, government agencies have relied on performance management to inform decision making processes, justify expenditures, and improve outcomes ( Moynihan & Pandey, 2010 ). The contemporary emphasis on what can be referred to as performance based organizations (Lynn Jr., 2006) is no t entirely surprising. Organizations have historically been thought to justify their importance through the attainment of some tangible or measurable goal. Indeed, as Thompson originally posited in 1967, organizations are rationally attuned to performance measures and spend considerable energies i n the pursuit of their achievement ( Thompson , 2003). Among his propositions, Thompson (2003) argued that organizations are most alert to and emphasize scoring well on those criteria which are most visible to important task . 90). nt agencies have long been criticized for being ineffective, inefficient, and unresponsive to stakeholder interests ( see Peters, 1996, for an overview o f such perspectives across theoretic traditions ). The result of such criticisms was the theoretic and practical shift away from traditional bureaucratic mechanisms to an entrepreneurial, resource minimal, enterprise inspired approach

PAGE 21

11 to managing governme nt organizations, more commonly known as New Public Management ( Hood & Dixon, 2015 ). According to Fryer, describe th For all its ubiquity, NPM did not ultimately survive the test of time, and was replaced by such institu tional re Denhardt & Denhardt, 2011 ( B that emphasized greater government involvement in decision making processes and promoted collaboration across sectors . But, e ven as the tidal wave of New Public Management began to ebb, the push for governmental effecti veness remained intact. Around the same time that the popularly influential Reinventing Government (Osborne & Gaebler, 1992) and Breaking Through Bureau cracy (Barzelay , 1992) made their way into the public sector, the Clinton Administration enacted The Government Performance and Results Act , ushering in a new era of performance management (Bozeman & Feeney, 2011) that has been upheld by subsequent preside ntial administr ations (Moynihan & Kroll, 2016) . With the concept of performance management continuing to remain central among organizations in practice, researchers have responded by asking how such information may be used to enhance organizational output s. Over the las t twenty five years, this core line of inquiry has arisen in multiple instances. For example, in 1995 , B ehn questioned managers use measures of the achievements of public agencies to produce even greater 21). Some thirt een years

PAGE 22

12 the successes and failures of performance movements, we have to study the use of performance By examining agency determined indicators of performance management, thi s dissertation inform decision making and outcomes. With public managers often struggling to make adequate use of performance data (Moynihan & Pandey, 2010), this dissertat ion hopes to fill a gap by leveraging existing information for application in practice. Perhaps more importantly, by examining the link between performance management and child welfare permanency, this dissertation echoes the call of perform ance management scholars and experts to look beyond outputs or processes and consider the critical importance of organizational outcomes in achieving long term social and institutional change (Van Dooren, 2011). In so doing, the dissertation hopes to contr ibute to our theoretic understanding of performance management, its antecedents, and its impact on these long range outcomes. The Factors Affecting Performance Management in Organizations Given this dissertation ise that performance manage ment exists as a mediating influence between predictors and organizational outcomes, it is necessary to examine the factors thought to influence performance management. Such factors can be broadly categorized as endogenous exist ing within the organization and exogenous, originating outside of the agency. While internal organizational characteristics affecting performance management are well known, less research has examined the impact of broader systemic influences on organizatio sustain performance management benchmarks ( Walker & Andrews, 2013 ) .

PAGE 23

13 While systemic constraints may encompass an array of environmental or exogenous influences, three are especially historically important in the delivery of child welfare services: the size of the population eligible for system involve ment; the economic health of the community in which the child welfare agency is based; and the geographic classification of the county. Each of thes e constructs is highlighted in the literature as systemically important to child welfare outcomes. Their de finitions and importance in child welfare service delivery are discussed in greater detail in the subsections below. Eligible Population The size of the eligible population was determined to be a predictor of performance management because of its implic ations for capacity. In this regard, two divergent arguments have been p roposed . The first argument is that the larger the population eligible for a service (in this case, child welfare), the greater the potential for use of that service. In human service organizations, which are already taxed by resource scarcity, more cases from a larger pool of eligible perso ns may place additional undue strain on agenc ies already at workload capacity that will subsequently struggle to achieve performance management stan dards and overall effectiveness ( Wulczyn & Halloran, 2017 ). Alternatively, t he second argument claims that organizations providing services in densely populated areas are also more likely to have broader service access and information sharing capabilities that improve performance management, outweighing the capacity burdens relative to organizations that must b e concerned with the daily management of their dispersed populations (Arsneault, 2006; Andrews, Boyne, Moon, & Walker, 2010) . Even with such dispara te perspectives, populations have long been recognized as an external influence on organizations. As Kaufman posited: t he composition and distribution of the human population in which organizations for m

PAGE 24

14 local and regional patterns as well as national and worldwide ones vary a gr eat deal. . ( 1985, p. 38 39) If demographic shifts are presumed t o impact the environments in which organizations operate, and environmental conditions are tied to organizat ional capacity and effectiveness, then it follows that the nature of such demographic differences , including economic and geographic disparities, may contribute to the ability of child welfare agencies to achieve performance management stan dards. In other words, not only may variations in population density impact organizational performance, but economic and geographic differences may also help explain variation in the attainment of performance management standards. Economic Profile refers to the overall economic health of a given community. Economic health has been measured in a variety of ways, but wit hin the context of child welfare, the theme of poverty has remained integral to the narrative of out of home involvement (Barth, Wildfire, & Green, 2006). Specifically, child maltreatment the overwhelming reason why children are placed in out of home care settings occurs at substantially higher rates among poverty stricken families (McGuinness & Schneider, 2007). As Gainsborough (2010) remarked, separately from poverty, welfare system disproportionate systemic constraint on performance management in child welfare, poverty, conceived here more broadly as an economic profile, was taken into account. The issue of poverty may be further compounded

PAGE 25

15 lack of service availability that may otherwise mitigate entry into the child welfare system (Belanger & Stone, 2008). Geography Like the issue of economic health, a considered a systemic constraint on organizational performance in the context of child welfare (Rine, Morales , Vanyukevych, Durand, & Schroeder, 2012). Common barriers to the effective provision of service s in rural contexts include challenges in retaining staff, reduced physical proximity to available services, increases in costs required to administer services across dispersed populations, and cultural differences between rural communities and urban local es in which many services are based (Elgin & Carter, 2019) . While the more commonly accepted perspective holds that urbanicity is associated with greater resour ce availability and, subsequently, better organizational performance, a compelling counter persp ective was proposed that, while poverty remained an important predi ctor of poorer outcomes, urbanicity was also a driver of lower outcomes, specifically reunificat ion. Although there was little elaboration as to why, beyond statistical modeling, urbanicity performed opposite conventional wisdom, the incongruent expectatio ns regarding urban settings warrant further examination as a systemic constraint in this dissert ation . Having considered the factors that influence performance management, this literatu re review now turns to the effect that performance management is thought to have on the achievement of broader organizational outcomes. Since the implementation of G PR A in 19 93, a vast body of research has examined the link between performance management a nd outcomes. As covering such an expansive literature is beyond the scope of this dissertation , this subsection

PAGE 26

16 focuses specifically on the performance management outcome lin k within the context of child welfare, which is the setting for the current resear ch. The relationship between performance management and outcome achievement in child welfare agencies is well established theoretically and supported empirically. According to performance initiatives affect the administration and structure of service programming, which shape interactions between caseworkers and service recipients, thereby altering i102). The authors further argue that child welfare systems , in particular , have been concerned about better understanding and improving the connection between performance and outcomes as they have generally been perceived to be ineff ective and unresponsive to client needs (McBeath & Meezan, 2010). Outcomes in c hild welfare are typically categorized in three ways: safety, well being, and permanency. Safety based outcomes refer to the prevalence and persistence of child maltreat ment. The goal of child welfare agencies, then, is to mitigate such prevalence and pers istence through investigation, referral, therapeutic interventions, and ultimately removal from an unsafe environment when other measures have failed. Well being outcomes cen ter to prepare children for success upon exit from the system. Finally, permanency outcomes reflect the timeliness of reunification efforts, where reunification can take the form of either biological family reunion or 2008). While safety and well being are undoubtedly critical measures of success, permanency outcomes are the focus of this dissertation . In any given year, over 20,000 children in the United States age out of the child welfare system without having achiev ed placement permanency in either a reunified or adoptive setting

PAGE 27

17 (Lockwood, Friedman, & Christian, 2015). Children in the child welfare s ystem are known to be at heightened risk for homelessness, substance abuse, and criminal involvement, and children tha t do not experience permanency experience these risks at even greater rates (Lockwood, Friedman, & Christian, 2015). To mitigate these neg ative long term effects, considerable research has investigated the various factors driving permanence instability. Both individual level and organizational level characteristics have been examined for their links to outcome achievement in child welfare systems. At the individual level, factors including behavioral distortions of children in care (Orsi, Lee, Winokur, & Pearson, 2018), children from families with complex substance abuse and mental health needs (Yampolskaya, Sharrock, Armstrong, Strozier, & Swanke, 2014), and case worker experience (Fluke, Corwin, Hollinshead, & Maher, 2016) have all been theoretically and empirically linked to permanency outcomes. At the organizational level, variables such as the availability and effectiveness of parenting intervention programs (Spieker, S. J., Oxford, M. L., & Fleming, C. B., 2014), placement distinctions, especially wit hin kinship and congregate care settings (Carnochan, Lee, & Austin, 2013; Winokur, Holtan, & Batchelder, 2014) , and performance management driven systems of professionalism (Wastell, White, Broadhurst, Peckover, & Pithouse, 2010) have all been demonstrated to impact organizational outcome attainment. In the context of permanency out c omes, the definiti on of performance management has been established in practice without much c ounter argument theoretically. With respect to outcomes, then, performance manag ement is defined as timeliness, with timeliness including an arra y of benchmarks throughout the duration of a child s case , including timeliness of adju dication, fir st court hearing, and termination of parental rights (Flango, Gato w ski, & Sydow, 2015). According to Flang o, Gatowksi, and Sydow (2015), the rationale behind establ ishing

PAGE 28

18 timeliness measures was to encourage states t o begin the process of measuring court performance and when the firs t results came in, states would be encouraged to probe further perhaps using other measures (p. 21). Instead, child welfare agencies incorporated timeliness as a main facet of their performa nce management system, leading to an implicit defin ition of performance management as efficiency , rather than quality, of services. Despite the emphasis on efficiency, the core concept of performance management as a n action undertaken by child welfare agencies remains intact and consist with the broader t heoretic definition articulated by Radnor and Barnes (2007). Conceptual Framework Based on the reviewed literature, it is argued that performance management is a dynamic concept with important implications for organizat ional outcome achievement. Toward s that aim, the conceptual framework is built on the straightforward premise that perf ormance management need not be exclusively examined as either predictor or outcome, but rather is better understood as an intermediate ou tput , at once affected by broader systemic constraints and in turn driving organizational outcomes. To illustrate this foundational argument, Figure 2 below depicts a pathway diagram representing the dissertation

PAGE 29

19 Figure 2 . A conceptual framework of performance management. As illustrated in the figure above, performance management is traditionally thought to exist in one of two ways. In the first conceptualization, represented as path a, performance manage ment is treated as an outcome affected by broader systemic factors suc h as social and economic conditions within a service delivery area. In the second, depicted as path b, performance management is presented as a predictor driving organizational outcome a ttainment. Alternatively, this dissertation argues that examining eith er path in isolation does not provide a comprehensive understanding of the dynamic role performance management plays in organizations. Instead, the current conceptualization indicates th at performance management is best understood as a link between path a (wherein the concept is treated in accordance with the first tradition as a dependent variable) and path b (where it is consistent with the other theoretic tradition as a predictor of ou tcome attainment). In order to test if this conceptual framework is ap propriate, however, performance management must be empirically regarded as a mediating influence where its influence between systemic constraints and organizational outcomes can be
properly modeled. The supposition of mediation requires an adjacent line of argumentation; namely, that systemic constraints exert some direct effect on organizational outcome attainment. In the absence of such a relationship, performance management would have nothing to mediate. It should be noted that this is not the first study to posit the presence of a mediating variable historically conceptualized as either predictor or outcome. A recent example from Damoe, Hamid, and Sharif (2017) empirically demonstrated the mediation effect of organizational climate, typically conceived exclusively as either indicator or dependent variable, as the missing link between human resource management practices and human resource outcomes in the Libyan public sector. Indeed, much scholarship in varied contexts has examined mediating influences across a host of organizational variables, ranging from the process effect between senior leadership and performance in Chinese service firms (Zhang, Kang, & Hu, 2018) to the role of teamwork and employee satisfaction in tempering the relationship between sustainability-oriented human resource management and organizational performance (Lee, 2019), and even the impact of instructional practices in explaining the relationship between teacher and student motivation (Schiefele, 2017). Therefore, while performance management may not typically be conceptualized as a mediating influence, comparable studies of organizational mediators provide a theoretical reason to suggest that this construct may uphold a similar form and function. To test its foundational assertion, this dissertation examines systemic constraints, performance management, and permanency outcomes across child welfare agencies in Colorado.
Study Context
Within the United States, the foster care system exists at many levels of government.
At the national level, oversight rests with the Children's Bureau, whose immediate overhead organization is the Administration for Children and Families, itself a subsidiary of the Department of Health and Human Services. Despite funding under Title IV-E of the Social Security Act and occasional broad-sweeping policy reforms initiated at the national level, the provision of foster care services is almost exclusively delegated to state governments (Gainsborough, 2010). As relatively autonomous entities, individual states define the structure of service delivery most appropriate to fit their geographic and population needs. These structures generally fall under one of three categories as defined by the U.S. Department of Health and Human Services: state-administered, county-administered, and hybrid systems (U.S. DHHS, n.d.). Figure 3 below presents a visual display of the national landscape pertaining to these administrative structures.
Figure 3. Configuration of states' administrative structures. Reproduced from the Child Welfare Information Gateway (U.S. DHHS, n.d.). Retrieved from https://www.childwelfare.gov/pubs/factsheets/services/
As depicted in the figure above, Colorado maintains one of the rarer organizational forms: the county-administered system. It is within this context of county-based service delivery that the empirical study exists. Within this arrangement, the state supervises while counties are directly responsible for the care of children in out-of-home placements (Howard Moroney, 2016, p. 211). For purposes of this dissertation, the implication of a county-administered system is that county agencies will be the focus of analysis (rather than the state) as they are directly responsible for managing services for the children under their supervision. As a state-supervised, county-administered system, Colorado is dual-layered in its administration. At the state level sits the Colorado Department of Human Services, which is responsible for federally mandated outcomes, state statutory requirements, budgeting maintenance and the allocation of resources, and licensure of private organizations that assist in out-of-home placement for children under county care (CDHS Child Welfare, n.d.). At the local level, all sixty-four counties in Colorado maintain child welfare agencies that are responsible for the direct provision of services, which include investigating reports of child abuse and neglect, completing assessments that determine whether out-of-home removal is appropriate based on the initial abuse and neglect referral, overseeing the cases of children placed in out-of-home care, managing contacts with treatment providers, private placement providers, and school systems to ensure a continuum of care, and working towards permanency to either reunify or find adoptive families for children (CDHS Child Welfare, n.d.).
To determine counties' progress in fulfilling these responsibilities, the state maintains the federally required statewide automated child welfare information system (SACWIS). SACWIS is funded at the federal level and serves as a case management system that enables better performance management and permanency outcomes through the tracking of such measures as the timeliness of assessment closure after a report of child abuse or neglect has been filed and the percentage of children who achieve permanency placements within twelve months of entry into out-of-home care environments (NCSL Child Welfare Information Systems, 2015). Beyond maintaining a robust historical record of child welfare metrics from which a thorough analysis could be conducted, Colorado also offers substantial variation across the independent and dependent variables of interest in this dissertation. The sixty-four counties differ across their eligible populations, their levels of local economic stability, and their geographic classifications. Success in achieving performance management standards has been varied, as has success in securing permanency placements for children in out-of-home care. Counties also rely to different extents on external placement providers for the delivery of services and have notable ranges of children under their care, from smaller counties such as Jackson and Huerfano, which often have no children removed from a family unit, to metro areas such as Denver and Jefferson County, which have active populations in the hundreds during any given fiscal year. Such variability provides the foundation for a sound empirical study, in addition to the theoretic and practical considerations presented below. Child welfare is an appropriate study context for the following reasons. Many critical elements of child welfare (accountability, contracting partnerships, turnover and staff retention,
and, of course, performance management, to name a few) have received either scant attention or yielded inconclusive findings regarding best practices, making it a rich context for further theoretic application and empirical inquiry (see, e.g., Hwang, 2016; Willis, Chavkin, & Leung, 2016; Collins-Camargo & McBeath, 2017; Jolles, Collins-Camargo, McBeath, Bunger, & Chuang, 2017). Beyond topical coverage, child welfare agencies, like many human service organizations, are presumed to be impacted by systemic constraints. In a contemporary era reinforcing calls for greater performance management and organizational outcome attainment in the face of resource scarcity and competing demands for responsiveness to stakeholders, an understanding of what factors truly influence organizational effectiveness in both its immediate and long-term forms is critical and timely. Finally, there is a practical rationale that should not be overlooked. With such high numbers of cases in a system that was designed to protect children from adverse childhood experiences, it is perhaps not surprising that child maltreatment is now considered a preeminent public health crisis (Latzman, Lokey, Lesesne, Klevens, Cheung, Condron, & Garraza, 2019). In Colorado alone, some 80,000 referrals for child abuse and neglect are made annually. Of the referrals made in 2015, over 29,000 were referred for further investigation, and more than 5,000 children were ultimately placed in out-of-home care (Child Welfare League, 2017). Research has continually concluded that children placed in child welfare systems are at heightened risk for substance abuse, homelessness, involvement in criminal activity, unemployment, and suicidal ideation, among other adverse prospects (see, for example: Katz, Shook, Goodkind, Pohlig, Schelbe,
Herring, & Kim, 2011; Fowler, Marcal, Zhang, Day, & Landsverk, 2017; Yampolskaya, Chuang, & Walker, 2019). The quality of care and services received while in out-of-home placements is essential to mitigating such long-term risks (Lockwood et al., 2015). Therefore, a study examining the factors influencing performance management and, by extension, outcome achievement, serves to inform practitioners about their organizations' strengths and opportunities for improvement in order to facilitate far-reaching positive effects for youth in their care. With both theoretic and practical rationales considered, this dissertation now turns to a presentation of the methods that were employed to empirically assess the degree to which performance management was subject to exogenous influences and the impact of performance management on permanency outcomes.
III. METHODOLOGY
Having identified the prominent literature related to performance management and child welfare, this chapter describes the dissertation's research design and consists of the following subsections: Research Questions and Hypotheses; Population and Sample; Data Collection; Empirical Measures; and Data Analysis.
Research Questions and Hypotheses
The reviewed literature falls broadly under three categories: systemic constraints; performance management; and permanency outcomes. Systemic constraints include the eligible population, economic profiles, and geography. The larger concept of systemic factors drove this dissertation's first research question, and the constructs contained therein were used to develop the affiliated hypotheses. The first research question is:
RQ1: To what extent do systemic factors affect an agency's ability to achieve performance management standards?
Associated with this question, three hypotheses are proposed to examine the relationship between systemic influences and performance management. The first hypothesis involves the eligible population and is stated as:
H1: Agencies with larger eligible populations will be less likely to achieve performance management standards than those agencies with smaller populations.
The logic underlying the direction of the first hypothesis is that locations with larger populations will have a greater volume of families with potential involvement in the child welfare system, thereby putting a strain on agency resources and mitigating the likelihood of achieving performance management standards (Wulczyn & Halloran, 2017). Regarding the second systemic factor, a county's economic profile, the dissertation posits:
H2: Agencies that operate within economically stable counties will be more likely to achieve performance management standards than agencies that operate within economically struggling counties.
Extant research has continually examined and reinforced the link between poverty and child welfare involvement. Therefore, the second hypothesis accepts this foundation and proposes that economically stable counties will have less entry into the child welfare system than communities that are struggling. Like the first hypothesis, the second reiterates that greater involvement will place more exertion on agencies that may subsequently struggle to achieve performance management standards (Barth, Wildfire, & Green, 2006).
The third and final hypothesis associated with research question 1 considers the impact of geography. It is stated as:
H3: Agencies within urban settings will be more likely to achieve performance management standards than agencies within rural or frontier settings.
Similar to the link between poverty and child welfare, urbanicity has long been regarded as favorable to organizational performance. In urban settings, resources are presumed to exist in greater quantity than the scarcity that defines most rural locations. With a supposed greater abundance of resource availability, urban areas are projected to achieve performance management standards more readily than their rural and frontier counterparts (Belanger & Stone, 2008).
With performance management designated as a dependent variable in the first research question, the second part of this dissertation reimagines its role as a predictor influencing organizational outcomes. Toward that aim, the second research question is:
RQ2: How does the attainment of performance management standards impact permanency outcomes in child welfare agencies?
A fourth hypothesis is proposed to consider the directionality of the relationship between performance management and outcomes. This final hypothesis states:
H4: Agencies that achieve performance management standards will have better permanency outcomes than agencies that do not achieve standards.
The logic underlying hypothesis four is straightforward. It argues that agencies performing better at earlier stages in child welfare cases will yield improved long-term outcomes relative to agencies that cannot meet performance management standards. The remainder of this chapter details the analytic approaches employed to test the proposed hypotheses.
Population and Sample
The population for this dissertation includes all sixty-four counties in Colorado. Each county within the state has a Department of Human Services responsible for the administration of child welfare services (CDHS Child Welfare, n.d.). Because Colorado
operates under a state-supervised, county-administered model wherein counties directly provide services either internally or in cooperation with private Child Placement Agencies (CPAs), counties, rather than the state, serve as the appropriate level of analysis. Data were gathered on all sixty-four counties for fiscal years 2013 through 2017 across as many measures as had available data. Because the total N for this dissertation is small, it was critical to obtain as close to 100% of the population as possible to ensure that the sample was not substantially biased. In a similar study using a small-N population (57 hospitals in Croatia), Zmuk, Lutilsky, and Dagija (2016) determined that a final sample exceeding at least 70% of the population was sufficient to ensure representativeness, thereby making it appropriate for further statistical inference. For this research, then, a sample of at least forty-five counties was required for inferential analysis. Although some counties had missing data across a few measures, no analytic procedure ever captured fewer than the necessary forty-five counties. The sources of these measures are described below in the Data Collection subsection.
Data Collection
With IRB approval, data for this dissertation were synthesized from a variety of sources to produce a unique dataset. Data gathering occurred between September 2018 and February 2019. All data for this dissertation were taken from publicly available sources that included: the Colorado Department of Human Services; the Colorado State Demography Office; the U.S. Bureau of Economic Analysis; the Colorado State Office of Rural Health; and the Colorado Department of Education. Each of these data sources and the corresponding variables extracted are discussed in further detail below.
Five of the ten variables for this dissertation were gathered from the Colorado Department of Human Services (CDHS). CDHS is the state-level supervisory agency responsible for overseeing the sixty-four counties that deliver child welfare services. As part of this supervisory role, CDHS maintains TRAILS, Colorado's statewide automated child welfare information system (SACWIS). Importing and updating data in SACWIS was federally mandated from 1993 until 2016, when states were granted the autonomy to amend their electronic record systems in accordance with their own abilities and informational needs (DHHS, 2016). Despite the decentralization allowances, Colorado elected to retain TRAILS until a more advanced replacement became available (Child Welfare Data and Accountability, n.d.). As of 2019, TRAILS is still operational and has mostly consistent records for all counties between FY2013 and FY2017. Four of the five variables obtained from CDHS were included in the TRAILS system. These variables were: the timeliness of assessment closure (for the concept of performance management); children under county care (for the control variable workload); placement type (for the control variable out-of-home setting); and length of stay under county care (for the concept of permanency outcomes). Every variable was extracted from the TRAILS website by county for each fiscal year and transferred into Excel for cleaning and transformation. Beyond TRAILS, CDHS also reports details on the child welfare providers licensed by the state of Colorado. The variable availability of providers, used to capture the control variable of resources, was constructed from the provider list (CDHS Service Providers, n.d.). Like the TRAILS data, the provider list was exported into Excel for cleaning. The provider list included the organizational names of the licensed child placement agencies in Colorado and a link to each organization's website. These website data were used to inform the measure, and additional internet searches were conducted
on those providers for whom a website was not listed. More details on how provider data were constructed can be found under the Empirical Measures subsection below. The remaining five variables were collected from four different sources. Information on the eligible population was gathered from the Colorado State Demography Office's website (State Demography Office, n.d.). Within the website, data were identified after performing the following search sequence: Home > Population > Data > Profile Lookup. The total number of housing units was then selected for every county between FY2013 and FY2017. Data downloaded automatically into Excel. Economic profile data were obtained from the U.S. Bureau of Economic Analysis's website (Bureau of Economic Analysis, n.d.). From the home page, the following search sequence was performed: Data > Data by Topic > Employment > Employment by County, Metro, and Other Areas > Interactive Data > Interactive Tables: Regional Accounts Data > Local Area Personal Income and Employment > Economic Profile (CAINC30/MAINC30) > CAINC30 Economic Profile (counties and states) > Colorado > All Counties and All Statistics in Tables > Years 2013 to 2017. The economic profile data downloaded automatically into Excel. Geographic data were obtained from a series of county-level maps produced by the Colorado State Office of Rural Health. These maps were made publicly available on the Colorado Rural Health Center's website (Colorado Rural Health Center, n.d.) and were found using the following search sequence:
Home > Resources > Maps > Learn More. The maps were used to extract the names of counties falling within certain geocoding designations. Finally, free and reduced lunch eligibility, used as a measure of an economic profile, was obtained from the Colorado Department of Education's website (Colorado Department of Education, n.d.). The following search sequence was performed to gather eligibility information: Home > School & District Information > Colorado Education Statistics > Pupil Membership > Previous School Years > 2012-2013 Pupil Membership Data > District Level Data > K-12 Free and Reduced Lunch Eligibility by County and District (XLS). The last two sequences in the search function were replicated for each year in the study and data were exported automatically to Excel. All variables were cleaned, transformed, coded, and merged to yield a comprehensive county-level dataset. This dataset was then used to conduct a secondary analysis. Although the data collection process involved the extraction, cleaning, and transformation of datapoints into a new dataset, the dataset is considered a secondary source as it did not involve the firsthand collection of new information not previously available (Hox & Boeije, 2005). Because the population of interest is small, thereby limiting statistical power (Cohen), multiple years of data were required to generate robust conclusions. Therefore, all data were extracted over a five-year span, from fiscal year 2013 through fiscal year 2017. The state of Colorado operates on fiscal years defined as July 1 to June 30. Rather than extract calendar years, data were aligned with fiscal years for consistency.
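To make the assembly described above concrete, the following is a minimal sketch of the kind of cleaning-and-merging workflow used to build a county-year dataset in R. It is illustrative only: the file names and column names (trails_by_county.xlsx, county, fy, and so on) are hypothetical placeholders rather than the actual exports used in this dissertation.

library(readxl)
library(dplyr)

# Each source is assumed to be an Excel export with one row per county per fiscal year.
trails   <- read_excel("trails_by_county.xlsx")   # county, fy, assessment closure, children in care
housing  <- read_excel("housing_units.xlsx")      # county, fy, housing_units
economic <- read_excel("bea_cainc30.xlsx")        # county, fy, pc_income, pc_ui_comp
frl      <- read_excel("frl_by_county.xlsx")      # county, fy, frl_rate

# Merge the sources on the county-fiscal year key to build the analytic dataset.
analytic <- trails %>%
  left_join(housing,  by = c("county", "fy")) %>%
  left_join(economic, by = c("county", "fy")) %>%
  left_join(frl,      by = c("county", "fy"))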
Because child welfare data exist in the aggregate, it was impossible to determine whether observations were independent or included the same children at multiple points in time. As such, panel data could not be constructed. In lieu of panel data, then, this analysis relied upon a time series of repeated cross-sections, wherein each of the five fiscal years was treated as an independent cohort in order to mitigate the likelihood of violating the independence of observations assumption. While causal inference is limited relative to panel data, when modeled correctly, repeated cross-sections can inform robust understandings of trends over time (Moffitt, 1993). Observations within the ten variables were thus collected for each fiscal year. The following subsection defines the empirical measures used to conduct the analysis.
Empirical Measures
Eight constructs were considered for this dissertation. In addition to the primary concept of interest, performance management, systemic factors including the eligible population, economic profiles, and geography were examined for their influences on achieving performance management standards. Performance management was then studied for its relationship with permanency outcomes (concept 8). The three remaining constructs, workload, types of out-of-home settings, and resource availability, were analyzed as control variables. Table 1 below summarizes these key concepts alongside the research questions, hypotheses, variables, measures, and data sources. Detailed descriptions of the measures follow.
Table 1. Overview of Research Questions, Concepts, and Measures
Systemic Factors Influencing Performance Management (Concepts 1-3)
Three systemic factors were analyzed for their potential impacts on performance management. These factors include: the eligible population, economic profiles, and geography. The conceptualization and measurement of each concept are presented below.
Eligible Population
The eligible population within a county was conceptualized as a systemic constraint on performance management. Rather than a county's total population, which counts the number of individuals residing within a county, the eligible population was measured as the number of housing units, where a housing unit is "a house, an apartment, a group of rooms, or a single room occupied or intended for occupancy" (Census Bureau Definitions, n.d., p. 3). Because children exist within family units, housing units were a better approximation of the population potentially eligible for child welfare services than a direct count of raw population numbers, which would not distinguish the number of families from the overall count of residents. The number of housing units was calculated by county for each fiscal year.
Economic Profile
A county's economic profile was conceptualized as a second systemic constraint with the potential to influence performance management. Unlike eligible population and geography, which are relatively straightforward in both theoretic definition and empirical measurement, the concept of an economic profile is greater in its complexity. As an illustration, the U.S. Bureau of Economic Analysis constructs county economic profiles from indicators that span various levels of analysis, ranging from total employment and retirement to
a host of other measures (BEA CAINC30, n.d.). Two indicators were selected for inclusion in this dissertation: income and unemployment. Poverty has long been associated with heightened risk levels, as low-income families are disproportionately represented in the child welfare system (e.g., Barth, Wildfire, & Green, 2006; McGuinness & Schneider, 2007). Therefore, income and unemployment were identified as proxies of poverty. Both income and unemployment data were obtained from the BEA. Income was measured as per capita personal income, defined as the personal income of a given area [county] divided by its resident population. Similarly, per capita unemployment insurance compensation was used to measure unemployment. Unemployment insurance compensation is defined as payments received by individuals under state-administered unemployment insurance (UI) programs, but they include the special benefits authorized by federal legislation for periods of high unemployment. The total UI was divided by the resident population to create the per capita metric. Finally, free and reduced lunch (FRL) eligibility was included as an additional economic variable. Because poverty is widespread among families involved with child welfare, the metric provides a further window into a county's overall economic health. Further, FRL has frequently been employed as a measure of socioeconomic status in social science research, most notably education (see Harwell & LeBeau, 2010, for an overview). To measure free and reduced lunch eligibility, data were collected for each fiscal year from the Colorado Department of Education. These data existed at the school district level of analysis and were subsequently aggregated to produce county-level totals. The rate of FRL eligibility was calculated as the percentage of children in each county qualifying for either free or reduced lunch.
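The district-to-county aggregation described above can be sketched in a few lines of R; the data frame and column names (frl_district, frl_eligible, pupil_membership) are hypothetical placeholders for the Department of Education export, not the dissertation's actual variable names.

library(dplyr)

# frl_district: one row per school district per fiscal year, with the county it belongs to.
frl_county <- frl_district %>%
  group_by(county, fy) %>%
  summarise(
    frl_eligible = sum(frl_eligible, na.rm = TRUE),       # eligible pupils in the county
    pupils       = sum(pupil_membership, na.rm = TRUE),   # total pupil membership
    .groups = "drop"
  ) %>%
  mutate(frl_rate = frl_eligible / pupils)                # county-level FRL eligibility rate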
Geography
When conceptualizing geography, perhaps the most common descriptor distinguishes locations based on urban, suburban, and rural classifications, having been captured in that manner since the 1950 census (Census Bureau, n.d.). In Colorado, however, the geographic distinctions define urban, rural, and frontier communities. According to the Colorado Rural Health Center, a nonprofit established in 1991 to serve as the State Office of Rural Health, frontier communities are further defined as counties with a population density of six or fewer persons per square mile (Colorado Rural Health Center, 2017). To determine appropriate geographical classifications for this dissertation, county maps generated by the Colorado Rural Health Center were examined. Figure 4 below reproduces one of these maps to highlight the county geocoding schema.
As illustrated in the figure above, Colorado had 17 urban, 24 rural, and 23 frontier counties in 2017. Because this dissertation spans five fiscal years, prior maps were sought to ensure consistency in the geocoding classifications. An additional map from 2014 was publicly available and showed no differentiation in county designations (Colorado Rural Health Center, 2014), indicating that county classifications were consistent in 2014 and 2017. Although data from 2013 and additional points in time between 2014 and 2017 were not available, the analysis operated on the assumption that the county classifications were the same throughout the duration of the study given the comparable data at two points in time.
Figure 4. Colorado county geographic designations. Reproduced from the Colorado Rural Health Center (http://coruralhealth.wpengine.netdna-cdn.com/wp-content/uploads/2017/07/2017-Rural-County-Designation.pdf).
To measure geocoding classification, binary and categorical indicators were created for each of the designations. In the binary classification, urban counties were distinguished from rural and frontier communities, with equivalent indicators constructed for the rural and frontier designations. In this way, counties fitting within a particular designation could be readily compared with those of a different classification. For predictive modeling, the binary categories were collapsed into a single categorical variable, where urban, rural, and frontier counties were each assigned a distinct code.
Performance Management (Concept 4)
Child welfare agencies in Colorado report quarterly numbers on two performance management metrics. The first metric is the timeliness of initial response and is measured as the percentage of children who are interviewed within policy guidelines. The second metric is timeliness of assessment closure and is measured as the percentage of assessments completed within sixty days of the initial referral date (CDHS Community Performance Center, n.d.). While both metrics have been identified by the state of Colorado as important to child welfare outcomes, the latter metric is better suited to analysis in this dissertation. The reasons are twofold. First, the timeliness of initial response metric is imprecisely defined: policy guidelines for the length of time to conduct an interview are not specified. Second, when considering the necessity of variation on a variable of interest, the timeliness of initial response metric showed little variation over time, with the average percentage of guideline-approved interviews hovering around 90% between fiscal years 2013 and 2017. Of all the quarters for which percentages were calculated during the study period, the lowest reported percentage occurred in December 2013 at 84% and the highest percentage was calculated at 94% in June 2017 (University of Kansas Initial Response, n.d.). Given these percentages, it is clear that
counties have consistently performed well on this metric and there is not substantial variation on which to observe differences. Timeliness of assessment closure, on the other hand, has both a precise definition and enough variability across counties over time to warrant further investigation. Therefore, it has been selected as the performance management indicator for this dissertation. Four measures were constructed for timeliness of assessment closure. The first measure calculated the percentage of assessment closures successfully completed within each county during the last quarter of each fiscal year. This calculation was intended to reinforce consistency with the workload control variable, which was measured as the number of children placed under county care during the last quarter of the fiscal year (additional details may be found in the variable description for workload later in this chapter). Due to the aggregation of the data, it could not be determined whether the number of children recorded in the system from one quarter to the next represented distinct individuals, meaning that each quarter reported additions to the existing numbers under care, or whether children who had been recorded in one quarter were subsequently re-reported in future quarters if they had not yet exited the system. As a result, only the last quarter within a fiscal year was recorded so as not to misrepresent the total child population. In contrast to the ambiguous nature of child placement data, performance measurement data are recorded as unique observations within the TRAILS system. As such, each quarter in the fiscal year contains the number of new assessments completed and eventually closed. Therefore, in addition to the measure of performance management as the percentage of successful closures within sixty days during the last quarter, an average across all four quarters of each fiscal year was calculated for all counties that had available data. The logic behind the second measure was to capture as much variation as possible and not inadvertently favor or penalize a county based on a singular assessment period. Finally, the third and fourth measures of performance management were dichotomous indicators. CDHS has articulated a statewide goal of achieving 90% success rates in closing assessments within sixty days of the initial referral (University of Kansas Assessment Closure, n.d.). The binary measure of performance management thus scored counties achieving the 90% threshold as one and counties falling short as zero; this indicator was initially calculated for the fourth-quarter percentage and was replicated for the four-quarter average percentage. While a binary indicator may not be as descriptively nuanced as the percentage, given the limited number of observations, it was considered an important measure for predictive modeling.
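A minimal R sketch of how these four measures could be derived from a quarterly extract is shown below; it assumes a hypothetical data frame trails_quarters with one row per county, fiscal year, and quarter, and a column pct_closed_60days, none of which are the dissertation's actual variable names.

library(dplyr)

pm_measures <- trails_quarters %>%
  group_by(county, fy) %>%
  summarise(
    pct_close_q4   = pct_closed_60days[quarter == 4],        # measure 1: last-quarter percentage
    pct_close_avg4 = mean(pct_closed_60days, na.rm = TRUE),   # measure 2: four-quarter average
    .groups = "drop"
  ) %>%
  mutate(
    met_90_q4   = as.integer(pct_close_q4   >= 0.90),   # measure 3: binary 90% indicator (Q4)
    met_90_avg4 = as.integer(pct_close_avg4 >= 0.90)    # measure 4: binary 90% indicator (average)
  )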
Control Variables (Concepts 5-7)
Three control variables were included alongside performance management in an exploration of the factors affecting outcome achievement. The broader constructs from which the variables were operationalized included workload, out-of-home setting, and resources.
Workload
For this dissertation, workload referred to the number of children placed under county supervision. As discussed in the subsection on performance management, the difficulty with measuring workload was attributed to the aggregate nature of the data. Lacking the ability to distinguish new from existing placements, workload data could only be collected at one point in time during the fiscal year. Given that fourth-quarter numbers are presented in the CDHS annual reports (Colorado OCYF, 2017), the last quarter of each fiscal year was used to estimate the active population. The concept of workload, then, was measured as the number of children under county care during the fourth quarter for each of the dissertation's five fiscal years.
Out-of-Home Setting
Both child welfare research and practice have examined how the type of placement influences children's well-being, future prospects, and timeliness to case closure (Orsi, Lee, Winokur, & Pearson, 2019). Specifically, movements away from congregate care settings in favor of kinship care and, to a lesser extent, foster care homes have received considerable attention and scrutiny (Ehrle & Geen, 2002). To account for the importance of out-of-home setting, this dissertation constructed a measure of placement type using data from TRAILS (Univ. of KS, Placement Type, n.d.). CDHS defines placement type under five broad categories. These categories are: congregate care; family-like setting; independent living arrangement; runaway/walkaway; and other. Congregate care placements include group homes; hospital/psychiatric facilities; residential placement; detention; and youth corrections secure placements. County foster homes, private child placement agency homes, kinship care, youth corrections foster placement, and trial home visits are captured under the family-like setting category. The remaining three categories are self-contained in that they are not broken into further subcategories (Univ. of KS, Placement Type, n.d.). To measure placement type, a nominal indicator was created that aggregated many of the categories and subcategories. The measure of congregate care included group homes, hospital/psychiatric facilities, and residential placements. The subcategories of detention and youth corrections secure placements included in the CDHS definition were identified separately to distinguish juvenile justice-involved youth from those in more traditional congregate settings. Youth corrections foster placement, trial home visits, and county and child placement agency
foster homes were combined into a foster care category, while kinship care remained a distinct category. Finally, independent living arrangements, runaway/walkaway placements, and other were collapsed into a single remaining category. Having identified the categories, relative percentages of placements were then calculated. The foster care, congregate care, and kinship care categories were studied in greater detail, as they comprised most placements; the remaining categories represented relatively small percentages of the overall total.
Resources
The final control variable included in this dissertation was resources, defined as the number and type of service providers. Provider data were elicited from CDHS, which retains an active list of all licensed welfare-involved organizations (CDHS Service Providers, n.d.). According to CDHS, there are currently 106 licensed providers with whom county agencies contract for service provision. Provider names were listed alongside the types of services offered. Of the 106 providers listed by CDHS, 5 organizations did not have any information accessible. A website was not provided, and additional searches could not positively identify the organizations, so they were excluded from the analysis. Additionally, 4 of the 106 organizations specialized in international-only adoption services and were therefore removed from the study, as such services do not fall under county-level purview. The exclusion of the 9 agencies yielded a total sample of 92% of all providers. In addition to the number of providers, CDHS identifies nine types of provider services. These types include: residential child care facilities; secure residential treatment facilities; day treatment facilities; psychiatric residential treatment facilities; governing bodies; child placement
agency adoption; child placement agency foster homes; group homes; group centers; and homeless youth shelters (CDHS Service Providers, n.d.). To determine the breadth of resource availability within a county, these service categories were combined with the organizational information from the provider list, and two measures were calculated related to resources. The first measure counted the total number of provider organizations within a county. A provider was attributed to a county based on the service area identified on its website. When the website was not listed or a service area was not specified, the additional internet searches described earlier were used to assign the provider to a county. The second measure relied on the CDHS list of service types to calculate the number of unique services available within a county. Many organizations offered a variety of services, making it important to distinguish the raw resources (i.e., total providers) from a more comprehensive view of the scope of resource availability. With the control variables identified, the final step involved measuring the dependent variable, permanency outcomes.
Permanency Outcomes (Concept 8)
Permanency outcomes were selected as the dependent variable for this dissertation. As with performance management, permanency outcomes were identified and defined by CDHS. In examining the TRAILS database, three subsets of permanency outcomes were listed. These subsets include: permanency in 12 months for children in care; permanency in 12 months for children in care 12-23 months; and permanency for children in care over 24 months (CDHS Permanency Outcomes, n.d.). According to CDHS (Permanency Outcomes, n.d.), the measure of permanency in 12 months is reported as the number of children who achieved permanency within 12 months of
entering care (Univ. of KS, Permanency in 12 Months, n.d.). The metric for permanency over 12-23 months is slightly more complex but refers to the number of children who achieved permanency within 12 months after having already been under county supervision between 12 and 23 months (Univ. of KS, Permanency 12-23 Months, n.d.). Similarly, the third measure of permanency for 24+ months captures the number of children who achieved permanency after having been under county supervision in excess of 24 months (Univ. of KS, Permanency 24 Months, n.d.). Of the three potential measures, this dissertation employs permanency in 12 months. Relative to the other two metrics, permanency in 12 months is the optimal goal and therefore perhaps the strongest indicator of outcome achievement. Further, from a practical standpoint, the lengthier measures of 12-23 months and 24+ months did not have sufficient historical records from which to analyze the observations. That is, while the 12-month measure yielded archival data on nearly every county, the latter metrics were missing records for more than 30% of observations, which rendered them unsuitable for analysis. For permanency within 12 months, both the total number of children reported within a county and the percentage of successful reunifications were gathered. Additionally, a binary indicator was constructed that distinguished counties performing at or above average within a given fiscal year from those that underperformed. Unlike the performance management measure, which employed a CDHS-identified 90% threshold for assessment closure timeframes, the equivalent goal for permanency outcomes was established under federal, rather than state, guidelines (Univ. of KS, Permanency in 12 Months, n.d.). The federal threshold was not specified as a point of comparison among Colorado counties for two reasons. First, the federal
threshold reported an average of 40% success across all U.S. states in achieving permanency outcomes within a 12-month period. Nearly every county in Colorado exceeded this threshold, which would not allow for variation to be observed. Second, even if variation was noticed, the reported 40% threshold was consistent across all fiscal years, suggesting that either the national average did not change over time or that only the most current average is reported. Rather than rely on an ambiguous measure, calculating a binary variable based on the average performance of the counties under study provided a more accurate comparison. Thus, the binary measure of permanency outcomes designated counties performing at or above the statewide average in a given fiscal year as one and counties performing below the statewide average as zero. With all variables collected and measured, the dissertation next proceeded to determine an analytic strategy, which is reviewed in the subsection below.
Data Analysis
This dissertation's analytic approach was quantitative and included both descriptive and inferential statistics. Data were collected and cleaned in Excel before being transferred to SPSS Statistics version 25 (SPSS) or R 3.5.1 (R) for further analysis. SPSS was able to accommodate all descriptive and some inferential procedures but did not have the capability to perform more advanced modeling techniques. Therefore, R was employed to conduct such tests. The remainder of this section describes the analytic approaches utilized and consists of the following subsections: Descriptive Statistics; Means Comparisons, Correlations, and Chi-square; Logistic Regression for Rare Events; and Testing for Mediation: Bootstrapping.
Descriptive Statistics
This dissertation employed a variety of descriptive statistics to understand thoroughly the nature of the variables being analyzed. Measures of central tendency (means and medians) and dispersion (standard deviation) were analyzed alongside measures of distribution normality
(skewness and kurtosis). Additionally, a two-step clustering procedure was employed to better understand how counties naturally grouped in the data across a variety of metrics. Cluster analysis is a method of inquiry designed to identify similar groups of entities or objects in a sample (Mooi & Sarstedt, 2011). Its main purpose is to break a dataset systematically into meaningful subgroups, a task that is especially useful across large amounts of data. There are several variations of clustering that employ different underlying modeling algorithms to measure similarities and distances between groups, but these variants all share the same objective: to emphasize similar characteristics while separating data points that look most different, as shown in Figure 5 below.
Figure 5. Visual representation of cluster formation. Reproduced from Pandre (2012).
As Figure 5 illustrates, cases that share the most similar characteristics are joined in a common cluster, while dissimilar cases are separated according to their differences. The specific clustering procedure selected for this dissertation was two-step cluster analysis, generated in SPSS. Two-step cluster analysis distinguishes similar groups using a two-step procedure. The first step involves creating pre-clusters so that the second step can more efficiently process the observations. Having determined the pre-cluster formations, the second step of the analysis employs hierarchical clustering to finalize the precise number of groups that exist in the dataset. Hierarchical clustering is an agglomerative, recursive merging procedure wherein similar entities are joined together repeatedly until all cases can hypothetically form one whole cluster. Of course, in reality, the formation of a single cluster is not particularly useful as it does not distinguish groups of observations based on similarities or differences. Indeed, the presence of one cluster would merely indicate that every county is so similar to every other that no differences between them can be identified. Alternatively, then, the algorithm stops merging when it has determined an optimal number of clusters. There are two primary distance criteria by which to calculate similarities within groups and distances between them: log-likelihood and Euclidean. For this analysis, log-likelihood distance was employed as it accommodated both continuous and categorical variables (IBM, n.d.).
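SPSS's two-step procedure has no exact counterpart in R, but the hierarchical stage described above can be roughly approximated with a Gower dissimilarity matrix, which, like the log-likelihood distance, accommodates both continuous and categorical variables. The sketch below is illustrative only, and the dataset and column names are hypothetical placeholders.

library(cluster)

vars <- analytic[, c("housing_units", "pc_income", "pc_ui_comp", "frl_rate", "geography")]
vars$geography <- as.factor(vars$geography)       # urban/rural/frontier designation

gower_d <- daisy(vars, metric = "gower")          # dissimilarity for mixed data types
tree    <- hclust(gower_d, method = "average")    # agglomerative hierarchical clustering
groups  <- cutree(tree, k = 3)                    # cut the tree at, for example, three clusters
table(groups)                                     # counties per cluster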
Descriptive information gathered from the above-listed procedures was used to inform the predictive modeling techniques. Overviews of these techniques are presented in the subsections that follow.
Means Comparisons, Correlations, and Chi-square
The first level of inferential analysis involved calculating test statistics for differences in means across groups and correlations between variables. To compute means comparisons and correlations, parametric and non-parametric approaches were considered. For means comparisons, the parametric form was an independent samples t-test. The non-parametric equivalent was a Mann-Whitney U test (Everitt & Hothorn, 2010). Two grouping variables were identified for these tests. The first group distinguished counties using the binary indicator for performance management, where counties that achieved the threshold in a given year were coded as one and counties that did not were coded as zero. The resulting groups were then compared for differences across the independent variables. The second group examined differences in means across the independent variables when the outcome of interest (that is, the grouping variable) was permanency, where counties that achieved a rate of reunification at or above the statewide average were compared with counties that fell below it.
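The tests just described correspond to standard base R functions; the sketch below compares one of the independent variables across the binary performance management groups, again using hypothetical column names.

# Parametric comparison: independent samples t-test
t.test(housing_units ~ met_90_q4, data = analytic)

# Non-parametric equivalent: Mann-Whitney U (Wilcoxon rank-sum) test
wilcox.test(housing_units ~ met_90_q4, data = analytic)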
To calculate correlations between variables, three types of inferential statistics were considered: Pearson's r, Spearman's rho, and chi-square. Pearson's r and Spearman's rho are both measures of association between continuous variables but differ in their specifications for the functional form required (Hauke & Kossowski, 2011); Pearson's r assumes linearity and normally distributed data, whereas Spearman's rho accommodates non-parametric data. When variables are categorical (in this case, the dichotomous measures of performance management and permanency outcomes), chi-square is the appropriate statistic. Chi-square is robust with respect to the distribution of the data; specifically, it does not require equality of variances among the studied groups. Chi-square was therefore employed whenever the measures of interest were dichotomous.
Logistic Regression for Rare Events
Traditional logistic regression models, or logits, are employed when the dependent variable is dichotomous (Everitt & Hothorn, 2010). Logits are frequently employed in the social sciences but may suffer from bias when the total sample size or the relative number of observations in one category (i.e., the rare event) is too small (Bergtold, Yeager, & Featherstone, 2018). In such cases, maximum likelihood estimation cannot properly calculate the coefficients, which in turn leads to overestimated odds ratios, and may therefore
increase the likelihood of committing a Type I error (Nemes, Jonasson, Genell, & Steineck, 2009). When depicted visually, this bias takes the form of overlapping densities, where the event on the horizontal axis in Figure 6 below may be mis-specified as a non-event (or vice versa) because there exists an insufficient number of observations from which the model may compute maximum likelihood coefficients (King & Zeng, 2001).
Figure 6. Rare event bias in logistic regression. Reproduced from King and Zeng (2001, p. 146).
To mitigate the impact of model mis-specification, a bias reduction alternative to logistic regression was developed by Firth (1993). The basic premise of the Firth approach is to penalize the maximum likelihood coefficients as a means by which to reduce biased estimates (Firth, 1993). Despite the development of more complex bias-reduction techniques, the Firth logit remains among the most common and widely accepted modeling procedures for rare events data (Puhr, Heinze, Nold, Lusa, & Geroldinger, 2017). Because this dissertation has a maximum possible N of 64 (reflecting the total population of child welfare agencies in Colorado), the likelihood of bias in a traditional regression is high. Firth logistic regression was therefore employed whenever the dependent variables assumed their binary form. The basic equation for a rare events logit in R is as follows:
fit <- logistf(data = data, depvar_y ~ indvar_x1 + indvar_x2)
where fit stores the fitted model, logistf is the procedure of choice, data identifies the dataset in which the variables are located, depvar_y is the dependent variable (either performance management or permanency outcomes depending on the phase of the analysis), and indvar_x(x) are the predictor variables. For this dissertation, four iterations of the Firth logistic regression model were specified. Each of these four models was replicated across every year of the study for a total of 20 models. The modeling equations are:
Model A <- logistf(data = data, [performance management indicator] ~ [housing units] + FYyear_PCPI + FYyear_PCUIC + PercentFreeReducedyear + Geography)
Model B <- logistf(data = data, [Abov]eAvgPerm ~ PrcntSucc4Qrtsyear)
Model C <- logistf(data = data, [permanency indicator] ~ [performance management + placement type + providers] + TotalServices)
Model D <- logistf(data = data, [permanency indicator] ~ [housing units + FYyear_PCPI] + FYyear_PCUIC + PercentFreeReducedyear + Geography)
Model A was constructed to capture the potential impact of systemic constraints on performance management. The binary indicator of performance management was selected as the dependent variable, while housing units, income, unemployment insurance compensation, free and reduced lunch eligibility, and geography were input as the predictor variables. Model B examined the relationship between performance management and permanency outcomes, where the measure of performance management was percent success across four quarters of a fiscal year and the binary measure of permanency outcomes served as the dependent variable. Model C considered the effect of performance management when controlling for hypothesized predictors, including out-of-home placement type and the number of providers available. Finally, Model D sought to determine the direct effects of the original predictor variables from Model A on permanency outcomes.
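As a concrete illustration of the Model A specification, a minimal Firth logit in R is sketched below using the logistf package. The dataset and variable names (cw_data, met_90_q4, housing_units, and so on) are hypothetical placeholders standing in for the TRAILS-derived measures, not the dissertation's actual object names.

library(logistf)

modelA <- logistf(met_90_q4 ~ housing_units + pc_income + pc_ui_comp +
                    frl_rate + factor(geography),
                  data = cw_data)

summary(modelA)     # penalized-likelihood coefficients with profile-likelihood confidence intervals
exp(coef(modelA))   # odds ratios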
Testing for Mediation: Bootstrapping
As the core premise of this dissertation argues that performance management exists as a mediating influence between systemic constraints and permanency outcomes, modeling these effects separately does not provide a complete understanding of the role performance management has in tempering the impact of the other predictor variables. Instead, tests for mediation must be conducted in addition to the above-mentioned modeling procedures. There are two primary methods used in mediation analysis, the causal steps approach and bootstrapping, with the former being arguably the most commonly employed. The causal steps approach enumerates the possible causal paths that could exist between a predictor variable (X) and an outcome of interest (Y). Having determined an exhaustive number of pathways, statistical criteria are used to ascertain whether a third variable (M) exists as a mediator. M is presumed to be a mediator if the multiplicative product of its relationships between X and Y (assuming statistical significance), added to the direct effect between X and Y, is farther away from zero than the direct effect itself. The causal steps approach, however, has been heavily criticized for its lack of statistical power, assumption of normality, and inaccuracy in small sample sizes when supplemented with the frequently used Sobel test for validity (Hayes, 2009). Alternatively, then, bootstrapping was considered as the test for mediation in this dissertation. Bootstrap methods were first introduced to mediation analysis in 1990 by Bollen and Stine. The basic technique of the bootstrap family (there are multiple variations; cf. Cheung & Lau, 2008; Koo, Leite, & Algina, 2016) is to approximate the distribution of the population by simulating thousands of resampled datasets using the original sample data. The indirect, or mediated, effect is calculated as the multiplicative product of the predictor-mediator and the mediator-outcome coefficients. Rather than testing the difference between the indirect and direct effects directly, bootstrapping generates a confidence interval using the simulated coefficients to determine upper and lower bounds. As in traditional regression, if the value of zero does not fall within the confidence interval, a mediating effect can be concluded (Hayes, 2009).
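A generic bootstrap test of an indirect effect can be sketched in R as follows. This is an illustration of the logic described above rather than the dissertation's exact procedure, and the dataset and variable names (cw_data, pm_score, perm_rate, housing_units) are hypothetical placeholders.

library(boot)

indirect_effect <- function(data, idx) {
  d <- data[idx, ]                                                            # resampled dataset
  a <- coef(lm(pm_score ~ housing_units, data = d))["housing_units"]          # predictor -> mediator
  b <- coef(lm(perm_rate ~ pm_score + housing_units, data = d))["pm_score"]   # mediator -> outcome
  unname(a * b)                                                               # indirect effect
}

set.seed(2019)
boot_out <- boot(cw_data, statistic = indirect_effect, R = 5000)
boot.ci(boot_out, type = "perc")   # mediation is supported if zero lies outside the interval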
For purposes of this dissertation, then, mediation was considered between the systemic-level predictor variables and permanency outcomes in the presence of performance management. Next, the fourth chapter presents the findings of these descriptive and inferential procedures.
IV. RESULTS
This chapter presents the findings of the empirical tests described in the methodology chapter. It should be noted that both the descriptive and inferential statistics examined the relationship between the systemic-level predictor variables and permanency outcome achievement. While these relationships are not linked to a specific research question, they were included in this dissertation for two reasons: first, to better illuminate the landscape of child welfare service provision by investigating the potential direct link between predictors and outcomes, and second, because one criterion of identifying a mediating variable (i.e., performance management, the focal construct of this dissertation) is the presence of a direct relationship to mediate. Framed differently, without a confirmed association between the predictor and dependent variables, performance management could not mediate a non-existent relationship. Therefore, potential associations between systemic factors and permanency outcomes were considered. Descriptive results are discussed first, followed by the inferential statistics. Based on the empirical findings, levels of support for the dissertation's hypotheses are considered.
Descriptive Findings
Descriptive statistics were run in SPSS for each of the variables in the dissertation. Traditional descriptive statistics are presented first, followed by the results of the cluster analysis.
Descriptive Statistics
As this research focused on performance management over the course of five years, each year is presented separately in the tables that follow. Additionally, a sixth descriptive table (available in Appendix G) summarizes the percent change from FY2013 to FY2017 in order to
assess the shifts in the variables of interest over the study period. Percent change was calculated as:
PC = ((MR_Var - E_Var) / E_Var) * 100
where PC = percent change, MR_Var = the most recent (2017) observation on a variable, and E_Var = the earliest (2013) observation on a variable. Complete county-level information by fiscal year is available in Appendices B-F, with variable descriptions presented in Appendix A. The tables below instead present the descriptive statistics for the dissertation's variables, including the total N, means, medians, standard deviations, skewness, kurtosis, and the minimum and maximum values for each variable. Note that measures of central tendency and dispersion are not reported for the four-quarter assessment closure measure (i.e., Percent Success in Four Quarters), as incomplete quarterly records would make any such summary an imprecise estimate. Because each table contains a wide array of variables, those variables of most interest are shaded in yellow and discussed in greater detail at the end of each table.
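As a quick check of the percent-change arithmetic, the formula above can be applied directly; the example below uses the FY2013 and FY2017 median housing unit values reported in the tables that follow.

pct_change <- function(earliest, most_recent) ((most_recent - earliest) / earliest) * 100
pct_change(earliest = 8658.50, most_recent = 8769.50)   # roughly 1.28% growth in median housing units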
Table 2 below summarizes the descriptive statistics for FY2013.

Table 2. Descriptive Statistics for FY2013
Variable           N    Mean      Med       SD        Skew   Kurtosis   Min     Max
Housing Units      64   35334.55  8658.50   67676.06  2.58   5.95       765     294752
Income             64   42601.80  39642.50  12969.52  1.64   6.07       18493   102940
Unemployment       64   177.44    175.50    38.64     0.39   0.16       110     268
% FRL              63   0.47      0.47      0.17      0.06   0.31       0.07    0.88
Asmt Clos June     59   0.90      0.93      0.13      1.55   1.60       0.51    1.00
Asmt Clos 4 Qrts   62   --        --        --        0.69   0.73       0.48    1.00
# of Children      64   78.03     10.50     173.54    3.18   10.51      0.00    889.00
% Foster Care      59   0.46      0.48      0.25      0.35   0.04       0.00    1.00
% Congregate       59   0.29      0.21      0.25      1.46   2.16       0.00    1.00
% Kinship Care     59   0.14      0.08      0.19      2.39   7.76       0.00    1.00
# of Providers     64   11.89     10.00     5.02      3.33   12.51      9       37
# of Services      64   15.16     12.00     8.09      3.60   14.68      11      57
% Permanency       57   0.66      0.67      0.22      0.69   1.32       0.00    1.00
# Permanency       57   46.37     10.00     101.02    3.21   10.76      0       510

As seen in Table 2, two of the four independent variables displayed substantial ranges across counties. These two variables, housing units and per capita income, varied greatly, as evidenced by standard deviations of over 67,000 and nearly 13,000, respectively. With skewness departing heavily from zero and kurtosis in excess of 3, they were also positively skewed and leptokurtic (Ho & Yu, 2015), indicating that observations were concentrated and non-normally distributed. Conversely, unemployment insurance compensation and percent eligibility for free and reduced lunch were closer approximations of a normal distribution. Both measures of performance management were negatively skewed, which indicates that most counties performed well on the timeliness of assessment closure measures. This observation is reinforced by: a) the mean score of assessment closures in June of FY2013, which was calculated at 90%, and b) the fact that no county closed fewer than 50% of assessments within the allocated 60-day window (the lowest performing county, Alamosa, had a 51% success rate). Despite success at hitting performance management benchmarks, some counties showed greater struggle with achieving permanency standards, as measured by a mean score of 66%. Ten
counties were at or below 50% success in reunification efforts, and the lowest performing county recorded no successful reunifications at all. In counties with a small total population of children in out-of-home care, however, success rates are expected to fluctuate considerably. Overall, the descriptive statistics from FY13 reveal positive performance management adherence.

Table 3. Descriptive Statistics for FY2014
Variable           N    Mean      Med       SD        Skew   Kurtosis   Min     Max
Housing Units      64   35753.25  8720.00   68613.14  2.58   5.99       765     300694
Income             64   45211.48  41529.50  15374.57  2.48   11.54      18541   126741
Unemployment       64   96.50     93.00     22.33     0.37   0.27       57      152
% FRL              63   0.48      0.49      0.17      0.01   0.07       0.06    0.86
Asmt Clos June     59   0.89      0.94      0.14      1.66   2.04       0.43    1.00
Asmt Clos 4 Qrts   62   --        --        --        1.34   1.11       0.38    1.00
# of Children      64   77.58     10.50     169.19    2.98   8.56       0.00    805.00
% Foster Care      56   0.42      0.42      0.23      0.02   0.33       0.00    1.00
% Congregate       56   0.27      0.20      0.21      1.19   1.45       0.00    1.00
% Kinship Care     56   0.23      0.20      0.21      1.40   2.68       0.00    1.00
# of Providers     64   11.89     10.00     5.02      3.33   12.51      9       37
# of Services      64   15.16     12.00     8.09      3.60   14.68      11      57
% Permanency       59   0.63      0.65      0.25      0.86   0.99       0.00    1.00
# Permanency       59   44.12     7.00      95.99     3.18   10.06      0       459

As depicted in Table 3, housing units and per capita income were again positively skewed and leptokurtic, suggesting that early signs of a consistent pattern may be observed. The mean score for the percentage of successful assessment closures showed a slight dip from the year prior, with the lowest performing county (Washington) sitting at a 43% success rate. Similarly, the percentage of successful reunification efforts declined from a mean score of 66% in 2013 to 63% in 2014, which was in part attributable to a greater number of counties sitting at or below 50% success (N=12).
Table 4. Descriptive Statistics for FY2015
Variable           N    Mean      Med       SD        Skew   Kurtosis   Min     Max
Housing Units      64   36217.64  8745.00   69579.58  2.58   6.00       765     306478
Income             64   47043.98  43715.00  15866.57  2.59   11.82      22220   131562
Unemployment       64   95.55     92.00     21.23     0.23   0.81       54      141
% FRL              63   0.48      0.48      0.17      0.04   0.16       0.05    0.87
Asmt Clos June     59   0.86      0.94      0.21      2.28   5.57       0.00    1.00
Asmt Clos 4 Qrts   62   --        --        --        2.14   6.71       0.14    1.00
# of Children      64   75.92     12.00     164.93    2.96   8.36       0.00    790.00
% Foster Care      60   0.44      0.42      0.27      0.35   0.07       0.00    1.00
% Congregate       60   0.26      0.18      0.27      1.74   2.68       0.00    1.00
% Kinship Care     60   0.24      0.20      0.23      0.94   0.70       0.00    1.00
# of Providers     64   11.89     10.00     5.02      3.33   12.51      9       37
# of Services      64   15.16     12.00     8.09      3.60   14.68      11      57
% Permanency       58   0.61      0.59      0.25      0.30   0.39       0.00    1.00
# Permanency       58   42.72     8.00      90.85     3.09   9.53       0       437

The descriptive statistics for FY2015 reveal a continuation of the pattern observed during the first two fiscal years; namely, that housing units and median income are increasing while performance management and permanency outcomes keep trending down. In 2015, the county of Saguache did not successfully close any assessments within the 60-day window, and the number of counties at or below 50% permanency success increased by two-thirds, from 12 to 20.

Table 5. Descriptive Statistics for FY2016
Variable           N    Mean      Med       SD        Skew   Kurtosis   Min     Max
Housing Units      64   36676.64  8760.00   70635.86  2.59   6.10       771     314631
Income             64   46739.50  43753.00  16100.08  2.87   14.13      22566   136025
Unemployment       64   92.53     89.00     21.37     0.32   0.86       54      137
% FRL              63   0.49      0.51      0.17      0.12   0.21       0.04    0.87
Asmt Clos June     61   0.86      0.95      0.21      2.22   4.60       0.07    1.00
Asmt Clos 4 Qrts   62   --        --        --        2.07   4.25       0.22    1.00
# of Children      64   79.19     15.00     178.77    3.28   11.33      0.00    960.00
% Foster Care      56   0.41      0.41      0.24      0.27   0.44       0.00    1.00
% Congregate       56   0.23      0.19      0.20      1.62   3.70       0.00    1.00
% Kinship Care     56   0.28      0.27      0.22      0.90   1.08       0.00    1.00
# of Providers     64   11.89     10.00     5.02      3.33   12.51      9       37
# of Services      64   15.16     12.00     8.09      3.60   14.68      11      57
% Permanency       58   0.58      0.57      0.23      0.24   0.67       0.00    1.00
# Permanency       58   38.59     5.00      82.97     3.14   10.42      0       431


In 2016, the mean of assessment closures stayed consistent with the year prior at 86%. While the mean permanency decreased from 61% to 58%, no additional counties fell below the 50% reunification rate from FY15. Ultimately, the rate of growth was most notable when comparing the two years. As an example, Denver county (the largest in Colorado) grew by more than 8,000 housing units and witnessed an increase in its child welfare population of 21% (from an N of 790 in FY15 to 960 in FY16). Given this presumably challenging strain on capacity, it is perhaps not surprising that Denver dropped 5% in both its performance management and permanency outcomes rates during this time.

Table 6. Descriptive Statistics for FY2017

Variable            N    Mean       Med        SD         Skew   Kurtosis   Min      Max
Housing Units       64   37233.88   8769.50    71851.77   2.60   6.12       779      321513
Income              64   48509.81   44948.00   17164.51   2.85   14.20      19443    143812
Unemployment        64   73.50      70.00      15.62      0.45   0.13       41       111
% FRL               63   0.48       0.48       0.17       0.02   0.04       0.05     0.90
Asmt Clos June      62   0.88       0.96       0.19       2.69   8.80       0.00     1.00
Asmt Clos 4 Qrts    62                                    2.98   11.76      0.00     1.00
# of Children       64   80.08      14.00      180.00     3.33   11.73      0.00     954.00
% Foster Care       57   0.40       0.43       0.20       0.04   0.74       0.00     1.00
% Congregate        57   0.19       0.14       0.20       2.05   5.42       0.00     1.00
% Kinship Care      57   0.34       0.31       0.24       0.64   0.31       0.00     1.00
# of Providers      64   11.89      10.00      5.02       3.33   12.51      9        37
# of Services       64   15.16      12.00      8.09       3.60   14.68      11       57
% Permanency        59   0.56       0.56       0.26       0.31   0.43       0.00     1.00
# Permanency        59   41.17      8.00       90.17      3.16   9.75       0        422

In examining the descriptive statistics in 2017, counties successfully closed more assessments, on average, than they had since 2014 (88% in FY17; 86% in FY15-16; 89% in FY14). What is interesting, then, is that permanency outcomes continued the trend of decline, dipping 2 percentage points from the year prior and falling 10 percentage points from the success rates reported just five years earlier.


When comparing the percentage change from FY13 to FY17, it became apparent that a disconnect exists between performance management and permanency outcomes. Specifically, 27 counties (just under 50% when excluding those counties with missing records) had inconsistent patterns of change between the two variables, meaning that if performance management increased, permanency outcomes decreased, or vice versa. Boulder county, for instance, increased its timeliness to assessment closures by more than 15% between FY13 and FY17, but decreased its permanency outcomes by 11% during this same time. Figure 7 below depicts a scatter plot displaying the relative changes in performance management and permanency success between fiscal years 2013 and 2017, and the exact county-level percent changes may be found in Appendix G. As depicted in Figure 7, nearly half of all counties, represented as blue dots in the scatterplot, displayed inconsistent relationships between performance management and permanency outcomes. These inconsistencies are captured in the yellow-shaded quadrants, while expected direct relationships (that is, increases or decreases in permanency corresponding to similar increases or decreases in performance management) are presented in the non-shaded quadrants.

Figure 7. Changes in performance management relative to changes in permanency success.
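The classification underlying Figure 7 can be sketched as follows; this is an illustrative reconstruction rather than the dissertation's own code, and the column names are hypothetical. A county is flagged as inconsistent when its change in performance management and its change in permanency success move in opposite directions.

```python
# Illustrative sketch of the Figure 7 comparison (hypothetical column names).
import pandas as pd

def classify_changes(fy13, fy17):
    merged = fy13.merge(fy17, on="county", suffixes=("_fy13", "_fy17"))
    merged["pm_change"] = merged["pct_asmt_closure_fy17"] - merged["pct_asmt_closure_fy13"]
    merged["perm_change"] = merged["pct_permanency_fy17"] - merged["pct_permanency_fy13"]
    # Counties in the shaded (off-diagonal) quadrants: the two changes have
    # opposite signs, i.e., performance management and permanency moved apart.
    merged["inconsistent"] = (merged["pm_change"] * merged["perm_change"]) < 0
    return merged[["county", "pm_change", "perm_change", "inconsistent"]]
```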


Although the initial purpose of the proposed cluster analysis was to better understand natural breaks and patterns in the data broadly speaking, the procedures were expanded to try to determine where the discrepancy between performance management and permanency was situated.

Two-Step Cluster Analysis

Within each fiscal year, counties were analyzed for their level of shared characteristics with one another. The goal of the two-step clustering procedure was to better understand patterns across variables before proceeding to inferential statistics. In this way, cluster analysis was considered an additional descriptive layer, important for its utility in capturing trends that may be otherwise unclear from a review of the sample distributions. Three iterations of cluster analyses were performed for each fiscal year. The first cluster analysis examined data patterns across the independent variables and performance management. The second looked exclusively at the patterns between performance management and permanency outcomes. The third iteration considered the independent variables alongside permanency outcomes. The results of the cluster analyses, by fiscal year, are presented in the tables below. Following each table is a brief discussion of the key observations.
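The analyses themselves were run with the SPSS TwoStep procedure; the sketch below is only a rough, hypothetical approximation in Python, using k-means on standardized numeric variables with a silhouette-based choice of the number of clusters. Unlike TwoStep, it does not accommodate categorical fields such as the geographic designation, so it is illustrative rather than equivalent.

```python
# Rough stand-in for SPSS TwoStep clustering: k-means on standardized numeric
# predictors, selecting k by silhouette score (hypothetical column names).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_counties(df, variables, max_k=4):
    data = df[variables].dropna()
    X = StandardScaler().fit_transform(data)
    best_k, best_labels, best_score = None, None, -1.0
    for k in range(2, max_k + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_labels, best_score = k, labels, score
    return best_k, pd.Series(best_labels, index=data.index, name="cluster")
```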


Table 7. Cluster Analyses for FY2013

The cluster analyses for FY2013 reveal many interesting patterns. In the first cluster iteration, cluster 3 had the highest overall success with performance management (mean=87.14%) while retaining the smallest number of housing units (mean=3,657 units) but the lowest average income (mean=$40,674.38) and the greatest percentage of free and reduced lunch eligibility (mean=51.4%). Cluster 3 also comprised exclusively frontier counties. While the small number of housing units descriptively reinforces Hypothesis 1, which posits that lower eligible populations are equated with higher performance management, the lower average income and lunch eligibility are inconsistent with Hypothesis 2, and the frontier counties are incompatible with Hypothesis 3. Almost conversely, the highest average performing cluster in the third specification (cluster 3) had mid-level numbers of housing units (mean=13,204), income (mean=$38,387.91), and free and reduced lunch eligibility (mean=52.3%), and comprised exclusively rural counties.


The discrepancies within and across the two clusters are surprising, as they do not readily lend themselves to definitive conclusions regarding similarities amongst counties.

Table 8. Cluster Analyses for FY2014

In FY2014, performance management based clusters displayed greater theoretic consistency. While the means for performance management were fairly close in the first iteration (ranging from 82.67% in cluster 1 to 85.95% in cluster 3), cluster 3, with the highest mean score, also had the highest average income (mean=$48,053.00), the lowest free and reduced lunch eligibility (mean=39.0%), and comprised exclusively urban counties. The only predictor variable misaligned with the proposed hypotheses was the number of housing units, of which cluster 3 had the highest concentration. The second iteration also seemed to provide some descriptive support for the dissertation's expectation that counties with higher performance management will also have greater permanency outcomes.


As illustrated in the table, cluster 1 had a lower average performance management score (mean=70.16%) and lower permanency success (mean=32.65%) than cluster 2.

Table 9. Cluster Analyses for FY2015

For FY2015, the patterns observed in the first iteration of the year prior remained consistent. Specifically, cluster 3 had the highest performance management average (mean=89.40%), the highest average income (mean=$49,830.06), the lowest average free and reduced lunch eligibility (mean=38.0%), and comprised all urban districts. Alternatively, the third iteration showed a reverse trend wherein the highest average performing cluster with respect to permanency outcomes (cluster 1, with a mean of 73.78%) also had the lowest average income ($42,360.33), the highest eligibility for free and reduced lunch (53.4%), and captured all frontier counties. Finally, the second iteration displayed an interesting pattern in the descriptive connection between performance management and permanency outcomes.


In this iteration, the two clusters were separated by 26.67 percentage points on the predictor, but only 4.64 percentage points on the dependent variable. While clusters do not reflect causal mechanisms, the lack of clarity in the cluster distinctions suggests that the relationship between performance and outcomes may be muddled in this fiscal year.

Table 10. Cluster Analyses for FY2016

The clusters in FY2016 indicate a similar inconsistent pattern as some of the prior study years. Notably, when comparing the clusters in the first iteration with those in the third, the cluster compositions are nearly identical, which suggests (and was confirmed through a comparative analysis) that the same counties are captured in both sets of clusters. The reason that the third cluster contained different centroids in the two iterations was due to the drop in observations from 21 frontier counties in iteration 1 to 17 in iteration 3. The implication of this descriptive observation is that the same counties that achieved performance management success in FY2016 also struggled with attaining permanency outcomes.


This assertion is further supported by the second iteration, which contained nearly identical permanency means across clusters 1 and 2 (mean=57.86% and 57.22%, respectively) but a 49.06 percentage point difference in average performance management scores.

Table 11. Cluster Analyses for FY2017

Unlike previous years, FY2017 was the only year in which the iteration between performance management and permanency outcomes produced three, rather than two, clusters. As seen in the second iteration, cluster 1 had the highest average performance management score (mean=89.37%) but the mid-ranking permanency score (mean=55.49%), while those respective scores were reversed in cluster 2. Cluster 3 displayed some theoretic support, with the lowest performance management scores (mean=75.66%) corresponding to the lowest permanency scores (mean=4.9%).


Adding to the ambiguous descriptive connection between performance management and permanency, the first and third iterations showed similar inconsistencies to those reported in previous years. Lacking general descriptive consistency, the dissertation next generated inferential statistics to further unpack the complexities between the predictor, mediating, and dependent variables.

Inferential Findings

Having calculated and reviewed the descriptive statistics, the dissertation proceeded to conduct inferential analyses. The subsections below present the findings of the inferential tests.

Means Comparisons, Correlations, and Chi-Square Statistics

Before generating more advanced statistical models, comparisons across means and variables were examined. Means comparisons utilized the Mann-Whitney U test, and variable relationships were examined using Spearman correlations and Chi-squares to test for associations. Inferential tests were computed for each of the dissertation's fiscal years, and the findings from each type of test are presented in the following order: Mann-Whitney U means comparisons, Spearman correlations, and Chi-square tests.
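A minimal sketch of the group comparison, assuming hypothetical column names rather than the dissertation's actual variable labels, is shown below: counties that met a binary standard are compared with counties that did not on each predictor.

```python
# Illustrative Mann-Whitney U comparison between counties that met a binary
# standard (group_col == 1) and those that did not (hypothetical column names).
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_groups(df, group_col, predictors):
    met = df[df[group_col] == 1]
    not_met = df[df[group_col] == 0]
    results = []
    for var in predictors:
        u, p = mannwhitneyu(met[var].dropna(), not_met[var].dropna(),
                            alternative="two-sided")
        results.append({"variable": var, "U": u, "p_value": p})
    return pd.DataFrame(results)
```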


Table 12. Mann-Whitney U Means Comparisons for FY2013

The results of the means comparison for FY2013 indicate that differences between successfully and unsuccessfully performing counties with respect to performance management were statistically significant across housing units and income. More precisely, the twenty-five counties that achieved performance management standards had smaller eligible populations and higher average income levels than those counties that did not meet standards. The significant differences in counties across these metrics lend support to the dissertation's argument that eligible population size and economic profiles are associated with performance management.


By contrast, no statistically significant differences were observed on the predictor variables between counties that achieved permanency outcome standards and those that did not. When examining the control variables in 2013, significant differences were observed across permanency groups for the total number of children, percentage of kinship care, and availability of providers. Interestingly, however, successful counties had fewer available providers and services. While this at first appeared counterintuitive, in recalling the descriptive statistics, FY2013 was the only study year where frontier districts outperformed their rural and urban counterparts. Given that provider availability was associated with geography (as evidenced in the correlation tables later in this chapter), the difference in means based on service availability was not as surprising.


Table 13. Mann-Whitney U Means Comparisons for FY2014

In FY2014, the only significant differences in groups existed on the dependent variable permanency outcomes relative to service provision. As with the year prior, counties with fewer resources were able to perform better with respect to outcomes relative to those districts with more ample resource opportunities. While the distinction of frontier county success compared to rural and urban districts was not evident as it had been in FY2013, in FY2014 rural counties appeared to perform equally well to urban counties, which may explain why lower provider availability was still affiliated with successful outcomes.


Table 14. Mann-Whitney U Means Comparisons for FY2015

As evidenced in Table 14 above, only one statistically significant difference across groups existed in FY2015. This difference was observed on the variable free and reduced lunch eligibility, where counties that achieved performance management standards had a significantly smaller percentage of free and reduced lunch eligible children than those counties that did not meet the standard.


This finding lends some support to the dissertation's expectation that economically stable counties will see greater performance management achievement than those counties that are struggling. While some descriptive differences were noted in both FY2016 and FY2017, no comparisons demonstrated statistical significance. For the full Mann-Whitney tables on these two years, please refer to Appendix I.

The next step in the inferential analysis involved the creation of correlation matrices to examine the significance (if any) of the relationships among the predictor, mediating, and outcome variables. Matrices were created for each fiscal year in the study and are presented in the tables that follow. Both parametric and nonparametric coefficients produced comparable test statistics, which suggests that the correlations were robust across a variety of specifications. However, because most variables violated the assumption of normality in their distributions, the nonparametric Spearman correlations (rather than the parametric test) are presented. Statistically significant correlations are highlighted in gold, where significance is represented as: * p<.10; ** p<.05; *** p<.001. Given the prevalence of significant correlations between variables in all fiscal years, a few highlights are discussed followed by a summary of key observations.
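A brief sketch of how such a nonparametric correlation matrix can be computed is given below; it is illustrative only, uses hypothetical column names, and assumes more than two variables are passed.

```python
# Illustrative Spearman correlation matrix for a set of county-level variables
# (hypothetical column names). With 3+ columns, spearmanr returns full matrices.
import pandas as pd
from scipy.stats import spearmanr

def spearman_matrix(df, variables):
    sub = df[variables].dropna()
    rho, p = spearmanr(sub)
    rho = pd.DataFrame(rho, index=variables, columns=variables)
    p = pd.DataFrame(p, index=variables, columns=variables)
    return rho, p
```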


Table 15. Correlation Matrix for FY2013

As depicted in the correlation matrix for FY13, the percentage of unemployment insurance compensation and the size of the eligible population had the strongest correlations with performance management of all the predictor variables (rho = -.427, p<.05 and rho = -.403, p<.05, respectively). The inverse nature of the relationship between both these variables and the outcome of interest suggests that counties with lower rates of unemployment and fewer housing units had higher levels of performance. The highest reported association with permanency outcomes was performance management (rho=.414, p<.05), which indicates that counties with higher levels of performance management also had greater levels of outcome achievement.

Table 16. Correlation Matrix for FY2014

In 2014, statistically significant correlations were observed among the predictor and control variables, but no meaningful associations were detected between the independent variables and performance management.


Similarly, the only predictor variables demonstrating a significant relationship with permanency outcomes were the control measures for number of services and providers. Interestingly, the direction of the relationship between these control variables reversed when comparing the total number of successful permanency placements and the percentage of successful outcomes. One possibility for this discrepancy is that counties with a greater number of successful placements are situated in urban locations, which have higher levels of resource availability. Because the outcome is measured as the raw number of successful placements, the relative effectiveness of counties is not considered, leading to the direct relationship with service providers. Alternatively, the percent of successful permanency placements accounts for the varied range in workload size to more accurately reflect outcome achievement than the raw number alone. Using this measure, then, counties with greater permanency outcomes also had fewer resource opportunities, which is consistent with the Mann-Whitney means comparisons for this fiscal year.

Table 17. Correlation Matrix for FY2015

Correlations in FY2015 indicated the presence of a moderate, statistically significant inverse relationship between unemployment insurance compensation and performance management (rho = -.351, p<.05). Consistent with the correlation in FY2013, this association suggests that counties with lower levels of unemployment will also have higher levels of performance management success.


Beyond unemployment, income and free and reduced lunch eligibility showed associations of moderate strength with performance management, but statistical significance was minimal (rho=.305, p<.10 and rho = -.314, p<.10, respectively). Similarly, there was a borderline significant association between geography and permanency outcomes (rho=.285, p<.10), but otherwise no predictor variables, including performance management, had a demonstrable relationship with this outcome.

Table 18. Correlation Matrix for FY2016

In FY2016, income was again observed to have a moderate, direct association with performance management (rho=.309), although the statistical significance continued to hover around the p<.10 mark. Income did, for the first time, have a statistically significant relationship with permanency outcomes (rho=.341, p<.05). Performance management also showed a moderate, positive relationship (rho=.345, p<.05) with permanency, thereby lending additional support to the argument that counties with greater levels of performance management achievement will also have higher levels of outcome attainment.


Table 19. Correlation Matrix for FY2017

As seen in the correlation matrix for FY17, only one association was detected between a predictor variable and performance management: the measure for unemployment insurance compensation (rho = -.257, p<.10). No statistically significant relationships were observed between performance management and outcomes, or between the predictor and control variables and permanency.

Before advancing to the Chi-square test findings, it is first necessary to briefly discuss the observed levels of association among the predictor and control variables. The presence of correlations across many indicator variables led to a discrepancy between empirical fidelity and practical utility. In other words, the strong associations between predictor variables suggest the presence of multicollinearity or perhaps interaction effects, which is problematic for predictive modeling. On the other hand, the associations across these variables reinforce the argument that multiple systemic factors work collectively to impose organizational constraints that may affect performance management and, ultimately, outcome achievement.

With correlations examined, the dissertation next proceeded to calculate Chi-square test statistics for performance management and permanency outcomes using their binary measures. The binary measures were included in this dissertation for two reasons: the first was to permit predictive modeling under a small sample size constraint, where a rare events logistic regression was the most appropriate technique given that the total population does not exceed 64 counties, and the second was to check that relationships between performance management and permanency outcomes were consistent across a variety of operationalized specifications.


Explained differently, if consistent significance could be detected across different measures of the same concept, confidence in the reliability of the findings would be increased. Toward this aim, Table 20 below depicts the findings of the Chi-square tests for FY13 through FY2017.

Table 20. Chi-Square Tests for FY2013 to FY2017

As evidenced by Table 20, two years of the study demonstrated statistically significant relationships between performance management and permanency outcomes. FY2013 yielded a Chi-square value of 4.712 (significant at p<.05), while FY2016 produced a Chi-square value of 6.905 (significant at p<.001). In these two years, the null hypothesis (H0: There is no association between performance management and permanency outcomes) is rejected, indicating that a statistically significant association was confirmed between the predictor and outcome. In the remaining years, the null hypothesis could not be rejected, meaning that no discernible significance was observed.
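For a given fiscal year, the test reported in Table 20 amounts to a chi-square test of independence on a 2x2 contingency table of the two binary indicators. The sketch below is illustrative only, uses hypothetical column names, and applies the library's default continuity correction rather than attempting to reproduce the reported values.

```python
# Illustrative chi-square test of independence between binary performance
# management and binary permanency indicators (hypothetical column names).
import pandas as pd
from scipy.stats import chi2_contingency

def pm_permanency_chi_square(df):
    table = pd.crosstab(df["pm_met"], df["permanency_met"])  # 2x2 contingency table
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p, dof
```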


Logistic Regression

To this point, several associations between variables have been observed. To use a well-known adage, however, correlation is not causation. This sentiment was particularly apparent in the dissertation's logistic regression models: of the twenty models estimated, only one was statistically significant. Table 21 below contains the output from the significant model found in FY2013. The remaining 19 outputs may be found in Appendix J.

Table 21. Firth Logistic Regression for FY2013, Performance Management and Permanency

As seen in Table 21, a statistically significant bivariate relationship was observed between performance management and permanency outcomes. Model significance was interpreted using the likelihood ratio test and its associated deviance reduction estimates. In the model for FY2013, significance was observed at p<.001, offering some support for the dissertation's argument that performance management is associated with permanency outcome achievement.
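A hedged sketch of the kind of bivariate model behind Table 21 appears below. It fits an ordinary logistic regression and evaluates it with a likelihood ratio test against the intercept-only model; the dissertation's models additionally apply Firth's (1993) bias-reducing penalty for rare events, which is not implemented here and would require a dedicated routine (for example, the logistf package in R). Column names are hypothetical.

```python
# Illustrative bivariate logistic regression with a likelihood ratio test
# (unpenalized stand-in for the Firth model; hypothetical column names).
import pandas as pd
import statsmodels.api as sm

def bivariate_logit(df, outcome, predictor):
    data = df[[outcome, predictor]].dropna()
    y = data[outcome]
    X = sm.add_constant(data[[predictor]])
    fitted = sm.Logit(y, X).fit(disp=False)
    # Likelihood ratio statistic and p-value versus the intercept-only model.
    return fitted.params, fitted.llr, fitted.llr_pvalue
```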


Bootstrapping for Mediation

In reviewing the results of the logistic regressions, it became clear that the proposed bootstrapping procedure for mediation would not be appropriate. To briefly review, the underlying premise of a mediating variable is that it tempers the relationship between some set of predictor variables (X) and an outcome of interest (Y). Bootstrapping enables the calculation of the mediating effect (M), that is, the degree to which variation in Y is associated with a direct effect from X relative to the indirect effect of M, through repeated resampling of the data. Lacking a direct effect, as was observed in this dissertation, there is nothing for the mediating variable to mediate. In other words, if the hypothesized predictors did not exert a statistically significant influence on permanency outcomes, then performance management could not logically be the mediating effect between the two. Still, while the dissertation's hypotheses lacked empirical support, implications for practice may be gleaned. Before that discussion, though, a summary of the support level for the hypotheses based on the empirical analyses is presented.

Level of Support for Hypotheses

Based on the empirical analyses, support for the dissertation's hypotheses was limited at best. Figure 8 below presents a summary of the four hypotheses and levels of support based on the cluster analyses, Mann-Whitney U, correlations and Chi-squares, and the rare events logistic regressions.

Figure 8. Summary of support for hypotheses.

Reasons for the lack of hypothesis support are discussed in greater detail in the next chapter. There are two reasons, however, that are currently presented: 1) the empirical models did not adequately capture lagged effects, and 2) the presence of multicollinearity undermined the precision of the predictive models. To check for the limitation of lagged effects, performance management and permanency outcomes from one year prior were modeled as predictor variables in a subsequent study year, also known as lagging the dependent variable from time t to time t-1 (Allison, 2015).


For example, performance management in 2014 was assessed for its impact on permanency outcomes in FY2015. To examine multicollinearity, a series of sensitivity analyses were conducted wherein highly correlated predictors (e.g., housing units and income) were added and removed from the models using stepwise regression. Such tests revealed the presence of multicollinearity across many of the predictor variables, but model adjustments did not result in greater statistical significance, suggesting that the redundancy did not have sufficient impact on the models to affect their interpretation or empirical utility. Because the models with the complete set of predictor variables were of theoretic relevance and the exclusion of various independent variables did not yield statistically significant results, all variables were ultimately retained in the final models. The lack of additional statistically significant relationships upon model re-specification suggests that, while imperfect, the original models were robust in their specifications. More practically important, however, is that even with modeling adjustments, no significant causal mechanisms were identified, prompting further supposition as to why that might occur. This supposition is the focus to which the dissertation now turns.
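The lag check described above can be sketched as a simple panel transformation before re-estimating the models; the example below is illustrative only and assumes a hypothetical long-format table with one row per county and fiscal year.

```python
# Illustrative lag construction: attach the prior year's performance management
# score to each county-year before modeling (hypothetical column names).
import pandas as pd

def add_lagged_pm(panel):
    # panel columns assumed: "county", "year", "pm", "permanency"
    panel = panel.sort_values(["county", "year"]).copy()
    panel["pm_lag1"] = panel.groupby("county")["pm"].shift(1)  # prior-year PM
    return panel.dropna(subset=["pm_lag1"])
```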


V. DISCUSSION AND CONCLUSION

Having presented the empirical findings of the dissertation, the final chapter of this dissertation will discuss the implications of the results and suggest directions for further research. Toward that end, the conclusion has three subsections: Discussion, Study Limitations, and Next Steps.

Discussion

This dissertation was guided by two primary research questions. The first asked to what extent systemic constraints impact performance management in child welfare agencies, while the second inquired as to the effect of performance management on organizational outcome achievement. Without measured certainty from the quantitative methods, the questions did not enjoy empirical support. More specifically, the dissertation could not conclude analytically that performance management was affected by systemic factors or that it had a demonstrable impact on organizational outcomes. Instead, the core issue of performance management as a mediating influence remains uncertain, as evidenced by the presence of correlational, but not predictive, association.

Why might the discrepancy between correlational association and causal mechanisms persist in this dissertation? One possibility is that variables were connected to one another in a relational sense, but not a predictive one. To explain, consider this illustrative example. In several fiscal years, an association was determined to exist between income (a systemic constraint) and performance management. The presence of this association suggests that agencies achieving higher performance management standards are also situated in counties with greater economic stability. Therefore, a correlational relationship may exist between these two constructs. The lack of predictive significance, though, implies that income is not a driver of performance management, meaning that, while theoretically and practically important, income in and of itself does not determine performance.


Despite its predictive limitations, however, the dissertation's statistical analysis yielded some notable patterns across variables, summarized in the numbered list below.

1. Counties with smaller eligible populations had better performance management records
2. Counties with stronger economic profiles had better performance management records
3. Counties with fewer service providers had better performance management records
4. An inconsistent relationship existed between performance management and permanency

As seen in the list above, several key takeaways may be gathered from the empirical analysis. Consistent with theoretic expectations, counties with smaller populations generally had stronger performance management achievement. Greater population size was projected to impose a larger strain on organizational capacity, thereby limiting performance management, and its descriptive association with performance management suggests that child welfare agencies may fare better when there is a smaller pool of clients for which service needs arise. Similarly, counties with stronger economic profiles, particularly those with higher average per capita income and lower dependence on free and reduced lunch, also performed better with respect to performance management. There was a direct relationship between population size and income and an inverse relationship between population and school lunch eligibility. In other words, counties with larger populations also generally had higher per capita income and lower eligibility for free and reduced lunch. It is interesting, then, that smaller counties with less economic stability achieved performance management rather consistently with larger counties that had greater economic stability. Such an inconsistent observation indicates that it is not entirely clear whether population size or economic health has a greater influence on achieving performance management standards. Further research should study this disconnect in greater detail in order to recommend meaningful strategies to organizations that are constrained by factors outside their immediate control.


The third observed pattern involved the availability of service providers. In general, counties with fewer resource opportunities outperformed county agencies with greater resource availability. One possible explanation is that the quantity of services available does not equate to the quality of service provision. Smaller counties with fewer agencies may have higher quality providers that have stayed operational because they are able to meet contract expectations and outlive potential competition. Alternatively, if quality of service provision is not the driver, it may be that the quantity of providers in larger areas is disproportionate to the number of children in care. That is, even when larger urban counties have a greater number of providers, there may not be enough providers or breadth of services to attend to all children.

Finally, the inconsistent relationship between performance management and permanency outcomes should not be understated. Of all the noted patterns, this is arguably the most important because of its implications for theory and practice. If achieving performance management standards does not equate to achieving better long-term outcomes, our theoretic expectations may need to be reconsidered, or agencies may be reinforcing behaviors and organizational goals that ultimately do not facilitate success. Thus, while the lack of empirical support was understandably disappointing, it does not negate the dissertation's contributions. The ambiguity surrounding systemic constraints, performance management, and permanency outcomes underscores the need for further examination of these constructs from a theoretic vantage point and careful consideration of their application in organizational settings. To illustrate the urgency of conceptually rich, empirically sound, and practically meaningful constructs of performance management, consider the following example.


In a 2014 workload study submitted to The Office of the State Auditor, CDHS declared the gravity of the resource scarcity within county child welfare offices (CDHS Workload, 2014). The study found that meeting expected workload standards (the amount of time that should be spent on a case per month) would require between 18 and 157 percent more time per month for each service than the actual amount of time child welfare workers spent on each service during the study period. Perhaps more astonishing, to meet the expected time requirements, CDHS articulated a need for an additional 574 caseworkers and 122 supervisor positions (CDHS Workload, 2014, p. v). In the face of such tremendous restriction, it is critical to understand what adjustments can be made internally to improve performance management and to find solutions for working under conditions of environmental uncertainty. If the theoretic constructs are misaligned with performance management data in practice, researchers may be missing a valuable opportunity to define, measure, and improve organizational effectiveness. Perhaps more immediately, if organizations such as child welfare agencies in Colorado are fundamentally assessing performance adherence against unreasonable standards, employees already under tight resource constraints may be working towards objectives that do not enhance agency performance or facilitate better long-term outcomes for the children in their care.

Study Limitations

As with any research study, this project was not immune to limitations. There are two main types of study limitations: threats to validity and reliability (Singleton, Jr., & Straits, 2010). Threats to validity include two forms: internal and external. Internal validity reflects the extent to which the operationalized measures are consistent with the concepts being studied.


A misalignment between the conceptual and empirical variables can diminish internal validity. External validity refers to generalizability; that is, the extent to which the dissertation's conclusions are applicable across a variety of contexts. Finally, reliability is a measure of consistency used to determine the likelihood that measures are recorded in a uniform manner (Singleton & Straits, 2010).

Perhaps the greatest limitation in the dissertation was the threat to internal validity. Internal validity can be assessed in a myriad of ways, but two types were especially relevant in this research: predictive validity and construct validity. According to Tashakkori and Teddlie (1998), a measure is thought to have high predictive validity if it can accurately predict an outcome of interest and high construct validity if it is appropriately correlated with measures of the same construct and not correlated with theoretically unrelated concepts. In this dissertation, predictive validity was low, as demonstrated by the lack of statistical inference able to be generated from the predictive models. Construct validity fared better, as similar measures of both performance management and permanency outcomes had high levels of correlation in each of the study years.

Beyond internal validity, threats to external validity were also considered. While a lack of generalizability beyond the context of child welfare may be perceived as a threat to external validity, it is important to note that many elements of the dissertation were not intended to be applicable beyond their current context, as measures were drawn directly from the study setting. Despite this empirical limitation, it is anticipated that some of the broader theoretic constructions are of import to other areas of inquiry.


Issues of reliability were presumably not a threat, as counties input the same metrics into the case management system (TRAILS) and the measures have been consistent over time. Unless there was an internal change to the metric calculations that was not stated, reliability should be high. Finally, the issue of replicability deserves mention. This dissertation provided a detailed overview of the data collection, variable measurement, and analysis procedures so that future researchers should be able to replicate the dissertation exactly and derive similar findings to those presented here.

There are, at minimum, four plausible explanations accounting for the dissertation's limitations. The first explanation is that the conceptual framework was misspecified. While the theoretic footing was grounded in extant research, the supposition that performance management is a mediating influence may have been overly simplified. Still, this dissertation's aim was not to be exhaustive in its treatment of public management, but rather to elucidate how systemic constraints may impact performance, which may in turn affect outcomes. In that pursuit, the conceptual framework appeared appropriate. The second explanation is that the dissertation's measures of performance management and permanency outcomes are inconsistent with our theoretic expectations. These variables, in both name and data, were taken directly from a governmental agency that has identified them as the metrics capturing performance and outcomes. It is possible that such measures are not truly reflective of performance and outcomes, despite their naming conventions. If this is the case, caution should be exerted when reinforcing programmatic, policy, and budgetary expectations aligned with the achievement of these objectives. The third, and simplest, explanation lies in the small population size. A quantitative approach was appropriate given the numeric nature of the data, but it is well known that statistical power is limited in especially small sample sizes (Cohen, 1992).


While the modeling techniques sought to be as responsive to this limitation as possible, by limiting the number of predictor variables and specifying a binary outcome appropriate for a rare events logistic regression designed to be robust against sample size limitations, these efforts may not have been enough to overcome the limits. Finally, and most likely, is that the dissertation's limitations stem from some combination of the above-mentioned explanations. It is not inconceivable that further theoretic refinement would lead to the inclusion and analysis of variables outside of the original dissertation's scope. Concomitantly, the measures of performance management and permanency may be theoretically inconsistent, and child welfare agencies continue to unknowingly incentivize ineffective objectives. And, of course, the small sample size is not discounted for its inherent limitations. With these limitations considered, the following subsection proposes a series of steps to be undertaken that may further address these areas of concern.

Next Steps

The purpose of this dissertation was threefold: first, to have made a contribution to the conceptual understanding of performance management; second, to make a meaningful impact in practice for child welfare agencies in Colorado; and third, to serve as a starting point for further study. In continued pursuit of those objectives, four additional steps are proposed.

First, a revisitation of the dissertation's conceptual framework is proposed. The core premise of this research embraced the conceptual foundation that performance management is best understood as a mediating influence between systemic constraints and organizational outcome attainment.


While theoretic argument reasoned that examining the concept in its mediating form was an underexplored facet of performance management, the lack of corresponding empirical support suggests that perhaps the conceptual framework was oversimplified and did not adequately capture the dynamic nuances in sufficient detail to garner statistical support. Further refinement of this conceptual frame to explicitly examine endogenous influences such as management practice, organizational culture, and public service motivation may be required to enhance the theoretic contribution.

The second step involves outreach to practitioners within Colorado child welfare agencies. As Moynihan and Pandey (2010) argued, understanding how and why government agencies use performance management data is critical to improve public management both in theory and practice. More specifically, in the context of child welfare, accurate reporting is essential for compliance with federal and state regulations and, more importantly, to enable greater proficiency amongst agency employees to help improve long-term outcomes. Simply put, frontline workers cannot consistently report performance information if they do not understand performance measures, or do not believe they warrant engagement in performance reporting. Because Colorado has long required consistent, systematic electronic data entry under its case management system (TRAILS), it is unlikely that inaccuracy persists in data collection, but the competency/engagement connection among frontline workers described by Jolles and colleagues (2017) should not be underestimated. Therefore, engaging with the practitioner community is an important step in refining and furthering this research.


The third step seeks to extend the current dissertation using an alternative research design: qualitative methods. As broad patterns could not be ascertained from the quantitative analyses conducted here, perhaps qualitative methods can provide additional insight not gleaned from a quantitatively informed, systems-level perspective. Indeed, in engaging child welfare stakeholders to provide them with thorough, longitudinal data specific to their jurisdictions for the purpose of understanding and improving performance, information sharing may be leveraged. Using such an opportunity to develop and execute a supplementary qualitative study would be of considerable import to the child welfare agencies responsible for service provision and may illuminate additional theoretic insights.

Finally, the last step aims to articulate policy alternatives that may be considered when examining performance management in practice. In both child welfare research and practice, performance management is often defined as timeliness, a measure that is also utilized by the Colorado Department of Human Services. However, this definition may overemphasize the importance of efficiency, rather than quality, of services provided and may therefore fail to capture the essence of performance management. Instead, metrics that emphasize the quality of agency actions should be considered. As one example, in 2013, CDHS developed C-Stat, a performance-based analysis strategy that allows every CDHS program to better focus on and improve performance outcomes (CDHS C-Stat, n.d.). While still accounting for timeliness as an indicator of performance management, C-Stat also assesses agencies on a more comprehensive assortment of quality metrics, including the recurrence of child maltreatment, the use of out-of-home emancipation transition plans, and recidivism into the child welfare system (CDHS, 2019). Although these data are not currently publicly available at a county level, they may offer a more nuanced approach to the study of performance management than an emphasis on timeliness alone.


If agencies can refine their data collection and assessment procedures to gather more holistic information on performance management practices, then theoretic research may be able to better align with empirically derived conceptualizations. Currently, the child welfare literature overwhelmingly relies on agency-determined indicators of performance management instead of advancing theoretic alternatives. Greater cooperation between scholars and ground-level experts should both strengthen the conceptual understanding of performance management and reinforce agency behaviors towards a system of performance management that truly impacts organizational outcomes. In short, this dissertation has sought to make both theoretic and practical contributions. With theoretic refinement, community outreach, and further study, a trajectory is envisioned to continue exploring the important role performance management has within governmental organizations and to improve the experiences of children placed in out-of-home care.


REFERENCES

Alach, Z. (2017). Towards a standard conceptual typology of public sector performance measurement. Kotuitui: New Zealand Journal of Social Sciences Online, 12(1), 56-69. doi: 10.1080/1177083X.2016.1225579

Allison, P. (2015). Lagged dependent variables [blog post]. Statistical Horizons. Retrieved from https://statisticalhorizons.com/lagged-dependent-variables

Amirkhanyan, A. A., Kim, H. J., & Lambright, K. T. (2014). The performance puzzle: Understanding the factors influencing alternative dimensions and views of performance. Journal of Public Administration Research and Theory, 24(1), 1-34. doi: 10.1093/jopart/mut021

Andrews, R., Boyne, G. A., Moon, M. J., & Walker, R. M. (2010). Assessing organizational performance: Exploring differences between internal and external measures. International Public Management Journal, 13(2), 105-129. doi: 10.1080/10967491003766533

Arnaboldi, M., Lapsley, I., & Steccolini, I. (2015). Performance management in the public sector: The ultimate challenge. Financial Accountability & Management, 31(1), 1-22. doi: 10.1111/faam.12049

Arsneault, S. (2006). Implementing welfare reform in urban and rural communities: Why place matters. American Review of Public Administration, 36(2), 173-188.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182.

Barth, R. P., Wildfire, J., & Green, R. L. (2006). Placement into foster care and the interplay of urbanicity, child behavior problems, and poverty. American Journal of Orthopsychiatry, 76(3), 358-366. doi: 10.1037/0002-9432.76.3.358

Barzelay, M. (1992). Breaking through bureaucracy: A new vision for managing in government. Berkeley, CA: University of California Press.

Behn, R. D. (1995). The big questions of public management. Public Administration Review, 55(4), 313-324. doi: 10.2307/977122

Belanger, K., & Stone, W. (2008). The social service divide: Service availability and accessibility in rural versus urban counties and impact on child welfare outcomes. Child Welfare, 87(4), 101-124.


Bergtold, J. S., Yeager, E. A., & Featherstone, A. M. (2018). Inferences from logistic regression models in the presence of small samples, rare events, nonlinearity, and multicollinearity with observational data. Journal of Applied Statistics, 45(3), 528-546. doi: 10.1080/02664763.2017.1282441

Bingham, L. B., Nabatchi, T., & O'Leary, R. (2005). The new governance: Practices and processes for stakeholder and citizen participation in the work of government. Public Administration Review, 65(5), 547-558.

Bollen, K. A., & Stine, R. (1990). Direct and indirect effects: Classical and bootstrap estimates of variability. Sociological Methodology, 20, 115-140.

Boyne, G. A., Meier, K. J., O'Toole, L. J., Jr., & Walker, R. M. (2005). Where next? Research directions on performance in public organizations. Journal of Public Administration Research and Theory, 15(4), 633-639.

Bozeman, B., & Feeney, M. K. (2011). Rules and red tape: A prism for public administration theory and research. Armonk, NY: M. E. Sharpe.

Carnochan, S., Lee, C., & Austin, M. J. (2013). Achieving exits to permanency for children in long term care. Journal of Evidence-Based Social Work, 10(3), 220-234. doi: 10.1080/15433714.2013.788952

Cheung, G. W., & Lau, R. S. (2008). Testing mediation and suppression effects of latent variables: Bootstrapping with structural equation models. Organizational Research Methods, 11(2), 296-325. doi: 10.1177/1094428107300343

Child Welfare League of America (2017). Retrieved from https://www.cwla.org/wp-content/uploads/2017/03/COLORADO.pdf

The AFCARS report FY 2015 (No. 23). Washington, D.C.: U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/cb/research-data-technology/statistics-research/afcars

Cohen, J. (1992). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98-101.

Collins-Camargo, C., & McBeath, B. (2017). Child welfare practice within the context of public-private partnerships. Social Work, 62(2), 130-138. doi: 10.1093/sw/swx004

Colorado Department of Education (n.d.). Retrieved from https://www.cde.state.co.us/

Colorado Department of Human Services (n.d.). C-Stat. Retrieved from https://www.colorado.gov/pacific/cdhs/c-stat


Colorado Department of Human Services (n.d.). Child welfare. Retrieved from https://www.colorado.gov/pacific/cdhs/child-welfare-0

Colorado Department of Human Services (n.d.). Child welfare data and accountability. Retrieved from https://www.colorado.gov/pacific/cdhs/child-welfare-policies-data-accountability

Colorado Department of Human Services (n.d.). Community performance center. Retrieved from https://www.cdhsdatamatters.org/

Colorado Department of Human Services (n.d.). Permanency outcomes. Retrieved from https://www.cdhsdatamatters.org/data-by-topic.html

Colorado Department of Human Services (n.d.). Service providers and partners. Retrieved from https://www.colorado.gov/pacific/cdhs/news/service-providers-partners

Colorado Department of Human Services (2014). Colorado child welfare county workload study. Retrieved from https://leg.colorado.gov/sites/default/files/documents/audits/1354s__colorado_childrens_welfare_workload_study_report_august_2014.pdf

Colorado Department of Human Services (2019). C-Stat: Summary report. Retrieved from https://drive.google.com/drive/folders/1gieKH0MlZxa4EIGhIfANDEHkVjxx6dfn

Colorado Department of Local Affairs, State Demography Office (n.d.). Retrieved from https://demography.dola.colorado.gov/

Colorado Office of Children, Families, and Youth (2017). 2018 annual progress and services report. Retrieved from file:///C:/Users/carri/Downloads/Combined%202018%20APSR%20&%20Appendices.pdf

Colorado Rural Health Center (n.d.). Retrieved from https://coruralhealth.org/

Colorado Rural Health Center (2014). Colorado: Federally certified rural health clinics within county designations, 2014. Retrieved from https://coruralhealth-wpengine.netdna-ssl.com/wp-content/uploads/2013/10/2014.RHCs_.Free_.Provider.pdf

Colorado Rural Health Center (2017). Colorado: County designations, 2017. Retrieved from http://coruralhealth.wpengine.netdna-cdn.com/wp-content/uploads/2017/07/2017-Rural-County-Designation.pdf

D'Andrade, A., Osterling, K. L., & Austin, M. J. (2008). Understanding and measuring child welfare outcomes. Journal of Evidence-Based Social Work, 5(1-2), 135-156. doi: 10.1300/J394v05n0106


Damoe, F. M. A., Hamid, K., & Sharif, M. (2017). The mediating effect of organizational climate on the relationship between HRM practices and HR outcomes in the Libyan public sector. Journal of Management Development, 36(5), 626-643. doi: 10.1108/JMD-04-2015-0055

Denhardt, J. V., & Denhardt, R. B. (2011). The new public service: Serving, not steering. Armonk, NY: M. E. Sharpe.

Ehrle, J., & Geen, R. (2002). Kin and non-kin foster care: Findings from a national survey. Children and Youth Services Review, 24(1), 15-35.

Elgin, D. J., & Carter, D. P. (2019). Administrative (de)centralization, performance equity, and outcome achievement in rural contexts: An empirical study of U.S. child welfare systems. Governance, 32, 23-43. doi: 10.1111/gove.12343

Everitt, B. S., & Hothorn, T. (2010). A handbook of statistical analyses using R (2nd ed.). New York, NY: CRC Press.

Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27-38.

Flango, V. E., Gatowski, S., & Sydow, N. E. (2015). Using outcome measurement to promote continuous quality improvement for children in foster care. Juvenile and Family Court Journal, 66(3), 19-32. doi: 10.1111/jfcj.12030

Fluke, J. D., Corwin, T. W., Hollinshead, D. M., & Maher, E. J. (2016). Family preservation or child safety? Associations perspectives. Children and Youth Services Review, 69, 210-218. doi: 10.1016/j.childyouth.2016.08.012

Fowler, P. J., Marcal, K. E., Zhang, J., Day, O., & Landsverk, J. (2017). Homeless and aging out of foster care: A national comparison of child welfare-involved adolescents. Children and Youth Services Review, 77, 27-33. doi: 10.1016/j.childyouth.2017.03.017

Fryer, K., Antony, J., & Ogden, S. (2009). Performance management in the public sector. International Journal of Public Sector Management, 22(6), 478-498. doi: 10.1108/09513550910982850

Gainsborough, J. F. (2010). Scandalous politics: Child welfare policy in the states. Washington, DC: Georgetown University Press.

Harwell, M., & LeBeau, B. (2010). Student eligibility for a free lunch as an SES measure in education research. Educational Researcher, 39(2), 120-131. doi: 10.3102/0013189X10362578

Hauke, J., & Kossowski, T. (2011). Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae, 30(2), 87-93. doi: 10.2478/v10117-011-0021-1


Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76(4), 408-420. doi: 10.1080/03637750903310360

Heinrich, C. J. (1999). Do government bureaucrats make effective use of performance management information? Journal of Public Administration Research and Theory, 9(3), 363-393.

Heinrich, C. J. (2002). Outcomes-based performance management in the public sector: Implications for government accountability and effectiveness. Public Administration Review, 62(6), 712-725. doi: 10.1111/1540-6210.00253

Ho, A. D., & Yu, C. C. (2015). Descriptive statistics for modern test score distributions: Skewness, kurtosis, discreteness, and ceiling effects. Educational and Psychological Measurement, 75(3), 365-388. doi: 10.1177/001316441458576

Hood, C., & Dixon, R. (2015). What we have to show for 30 years of new public management: Higher costs, more complaints. Governance, 28(3), 265-267. doi: 10.1111/gove.12150

Howard Moroney, M. (2016). Exploring the linkages between collaboration and innovation using faith-based partnerships in the child welfare system. In L. Julnes & E. Gibson (Eds.), Innovation in the public and nonprofit sectors: A public solutions handbook (pp. 205-241). New York, NY: Routledge.

Hox, J. J., & Boeije, H. R. (2005). Data collection, primary vs. secondary. In K. Kempf-Leonard (Ed.), The encyclopedia of social measurement (pp. 593-599). New York, NY: Elsevier.

Hwang, K. (2016). Accountability practices in public child welfare services. International Journal of Public Administration, 39(8), 587-596. doi: 10.1080/01900692.2015.1028644

IBM (n.d.). TwoStep cluster analysis. Retrieved from https://www.ibm.com/support/knowledgecenter/en/SSLVMB_24.0.0/spss/base/idh_twostep_main.html

Jolles, M. P., Collins-Camargo, C., McBeath, B., Bunger, A. C., & Chuang, E. (2017). Managerial strategies to influence frontline worker understanding of performance measures in nonprofit child welfare agencies. Nonprofit and Voluntary Sector Quarterly, 46(6), 1166-1188. doi: 10.1177/0899764017728366

Katz, L. Y., Au, W., Singal, D., et al. (2011). Suicide and suicide attempts in children and adolescents in the child welfare system. Canadian Medical Association Journal, 183(17), 1977-1981. doi: 10.1503/cmaj.111008

Kaufman, H. (1985). Time, chance, and organizations: Natural selection in a perilous environment. Chatham, NJ: Chatham House Publishers, Inc.


King, G., & Zeng, L. (2001). Logistic regression in rare events data. Political Analysis, 9(2), 137-163.

Koo, N., Leite, W. L., & Algina, J. (2016). Mediated effects with the parallel process latent growth model: An evaluation of methods for testing mediation in the presence of nonnormal data. Structural Equation Modeling: A Multidisciplinary Journal, 23(1), 32-44. doi: 10.1080/10705511.2014.959419

Kroll, A., & Moynihan, D. P. (2017). The design and practice of integrating evidence: Connecting performance management with program evaluation. Public Administration Review, 78(2), 183-194. doi: 10.1111/puar.12865

Latzman, N. E., Lokey, C., Lesesne, C. A., Klevens, J., Cheung, K., Condron, S., & Garraza, L. G. (2019). An evaluation of welfare and child welfare system integration on rates of child maltreatment in Colorado. Children and Youth Services Review, 96, 386-395. doi: 10.1016/j.childyouth.2018.12.009

Lee, H.-W. (2019). How does sustainability-oriented human resource management work? Examining mediators on organizational performance. International Journal of Public Administration, 1-11. doi: 10.1080/01900692.2019.1568459

Lockwood, K. K., Friedman, S., & Christian, C. W. (2015). Permanency and the foster care system. Current Problems in Pediatric and Adolescent Health Care, 45(10), 306-315. doi: 10.1016/j.cppeds.2015.08.005

Lynn, L. E., Jr. (2006). Public management: Old and new. New York, NY: Routledge.

McBeath, B., & Meezan, W. (2010). Governance in motion: Service provision and child welfare outcomes in a performance-based, managed care contracting environment. Journal of Public Administration Research and Theory, 20(Supplement 1: The State of Agents), i101-i123.

McGuinness, T. M., & Schneider, K. (2007). Poverty, maltreatment, and foster care. Journal of the American Psychiatric Nurses Association, 13(5), 296-303. doi: 10.1177/1078390307308421

McHugh, M. L. (2013). The Chi-square test of independence. Biochemia Medica, 23(2), 143-149. doi: 10.11613/BM.2013.018

Moffitt, R. (1993). Identification and estimation of dynamic models with a time series of repeated cross-sections. Journal of Econometrics, 59(1), 99-123.

Mooi, E., & Sarstedt, M. (2011). A concise guide to market research: The process, data, and methods using IBM SPSS statistics. Berlin, Germany: Springer-Verlag. doi: 10.1007/978-3-642-12541-6_9

PAGE 106

96 Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform . Washington, D.C.: Georgetown University Press. Moynihan, D. P., & Kroll, A. (2016). Performance management routines that work? An early assessment of the GPRA Modernization Act. Public Administration Review , 76 (2), 314 323. doi: 10.1111/puar.12434 Moy nihan, D. P., & Pandey, S. K. (2010). The big que stion for performance management: Why do managers use performance information? Journal of Public Administration Research and Theory , 20 , 849 866. doi: 10.1093/jopart/muq004 National Conference of State Le gislatures (2015). Child welfare information systems . Retrieved from: http://www.ncsl.org/research/human services/child welfare information systems.aspx . Nemes, S., Jonasson, J. M., Genell, A., & Steineck, G. (2009). Bias in odds ratios by logistic reg ression modelling and sample size. BMC Medical Research Methodology , 9 (1), 1 5. outcomes for children and youth involved in child welfare and youth corrections. Youth Violence and Juvenile Justice , 16 (1), 3 17. doi: 10.1177/1541204017721614 Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector . Reading, MA: Addison Wesley. Pandre, A. (2012). Cluster analysis: See it 1 st . Retrieved from https://apandre.wordpress.com/visible data/cluster analysis/ . Peters , B. G. (1996). Models of governance for the 1990s. In D. F. Kettl and H. B. Milward (Eds.), The state of public management (pp. 15 44). B altimore, MD: The John Hopkins University Press. Pollitt, C. (2013). The logics of performance management. Evaluation , 19 (4), 346 363. doi: 10.1177/1356389013505040 Puhr, R., Heinze, Nold, M., Lusa, L., & Geroldinger sion with rare events: accurate effect estimates and predictions? Statistics in Medicine , 36 (14), 2302 2317. doi: 10.1002/sim.7273 Radnor, Z., and Barnes, D. (2007). Historical analysis of performance measurement and management in operations management . International Journal of Productivity and Performance Management , 56 (5/6), p. 245 260. Rine, C. M., Morales, J., Vanyukevych , A. B., Durand, E. G., & Schroeder, K. A. (2012). Using GIS mapping to assess foster care: A picture is worth a thousand words. Journal of Family Social Work , 15 (5), 375 388. doi: 10.1080/10522158.2012.719184

PAGE 107

97 Schiefele, U. (2017). Classroom management a nd mastery oriented instruction as mediators of the effects of teacher motivation on student motivation. Teaching and Teacher Ed ucation , 64 , 115 126. doi: 10.1016/j.tate.2017.02.004 Shook, J., Goodkind, S., Pohlig, R. T., Schelbe, L., Herring, D., & K im, K. H. (2011). Patterns of mental health, substance abuse, and justice system involvement among youth aging out of child welf are. American Journal of Orthopsychiatry , 81 (3), 420 432. doi: 10.1111/j.1939 0025.2011.01110.x Singleton, R. A. Jr., & Strait s, B. C. (2010). Approaches to social research (5 th ed.). New York, NY: Oxford University Press. Spieker , S. J., Oxford, M. L. , & Fleming, C. B. (2014). Permanency outcomes for toddlers in child welfare two years after a randomized trial of a parenting intervention. Children and Youth Services Review , 44 , 201 206. doi: 10.1016/j.childyouth.2014.06.017 Tash akkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches . Thousand Oaks, CA: Sage Publications, Inc. Thompson, J. D. (2003). Organizations in action: Social science bases of administrative theory . New Brunswick, NJ: Transaction Publishers. University of Kansas (n.d.). Child population in out of home placement by type of placement. Retrieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx?RVID=773 . University of Kansas (n.d.). (Federal) p ermanency in 12 months for chil dren in care. Retrieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx?RVID=698 . University of Kansas (n.d.). (Federal) permanency in 12 months for children in care 12 23 months. Retrieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx? RVID=701 . University of Kansas (n.d.). (Federal) permanency in 12 months for children in c are over 24 months. Retrieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx?RVID=704 . University of Kansas (n.d.). Timeliness of assessment closure. Ret rieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx?RVID=53 . University of Kans as (n.d.). Timeliness of initial response. Retrieved from https://rom.socwel.ku.edu/CO_Public/AllViews.aspx?RVID=652 . U.S. Bureau of Economic Analysis (n.d.). CA INC30 definitions. Retrieved from https://apps.bea.gov/iTable/iTable.cfm?reqid=70&step=1&isuri=1 . U.S. Bureau of Economic Analysis (n.d.). Retrieved from https://www.bea.gov/ .

PAGE 108

98 U.S. Census Bureau (n.d.). Definitions and explanations. Retrieved from ht tps://www.census.gov/housing/hvs/definitions.pdf . U.S. Census Bureau (n.d.). Retrieved from https://www.census.gov/history/www/programs/geography/urban_and_rural_areas.html . U. S. Department of Hea l th and Human Services, Administration for Children and Families (2016). Comprehensive child welfare information system. Federal Register , 81 (106), 35450 35482. U.S. Department of Health and Human Services, Child Welfare Information Gateway. (2018). State vs. county administration of child welfare services. Retrieved from https://www.childwelfare.gov/pubs/factsheets/services/ Van Dooren, W. (2008). Nothing new under the sun? Change and continuity in twentieth century performance movements. In S. Van de Walle and W. Van Dooren (eds.), Performance informati on in the public sector: How it is used (pp. 11 23). Houndmills, UK: Palgrave. Walker, R. M., & Andrews, R. (2013). Local government management and performance: A review of evidence. Journal of Public Administration Research and Theory , 25 , 101 133. doi: 10.1093/jopart/mut038 Wastell, D., White, S., Broadhurst, K., Peckover, S., & in the iron cage of performance management: Street level bureaucracy and the spectre of Svejkism. International Journal of Social Welf are , 19 , 310 320. doi: 10.1111/j.1468 2397.2009.00716.x Willis, N., Chavkin, N. sized turnover: Application of seminal management principles for administration and research in U.S. public child welfare agencies. Advances in Social Work , 17 (2), 116 133. doi: 10.18060/20856 Winokur, M., Holtan, A., & Batchelder, K. E. (2014). Kinship care for the safety, permanency, and well being of children removed from the home for maltreatment. Cochrane Data base of Systematic Reviews , 1 , 1 240. doi: 10.1002/14651858.CD006546.pub3 Wulczyn, F., Chen, L., & Courtney, M. (2011). Family reunification in a social structural context. Children and Youth Services Review , 33 , 424 430. doi: 10.1016/j.childyouth.2010.0 6.021 Wulczyn , F., & Halloran, J. (2017). Foster care dynamics and system science: Implications for research and policy. International Journal of Environmental Research and Public Health , 14 , 1181 1193. doi: 10.3390/ijerph14101181

PAGE 109

99 Yampolskaya, S., Chua ng, E., & Walk er, C. (2019). Trajectories of substance use among child welfare involved youth: Longitudinal associations with child maltreatment history and emotional/behavior problems. Substance Use & Misuse , 1 12. doi: 10.1080/10826084 Yampolskaya, S. , Sharrock, P., Armstrong, M. I., Strozier, A., & Swanke, J. (2014). Pro file of children placed in out of home care: Association with permanency outcomes. Children and Youth Services Review , 36 , 195 200. doi: 10.1016/j.childyouth.2013.11.018 Zhang, H., K ang, F., & Hu, S. (2018). Senior leadership, customer orientation, and s ervice firm performance: The mediator role of process management . Total Quality Management & Business Excellence , p. 1 16 . doi: 10.1080/14783363.2018.1492873 Zmuk, B., Lutilsky, I. D., & Dragija , M. (2016). The choice of a sampling procedure for a (too) small target population: The case of Croatian public hospitals. Zbornik Ekonomskog fakulteta u Zagrebu , 14 (2), 19 44.

PAGE 110

APPENDIX A. Variable Labels

The following table presents a key to the variable abbreviations contained in Appendices B through F. These appendices provide the county child welfare system with information not otherwise readily synthesized, in order to facilitate better knowledge of the counties.

Variable Abbreviation   Variable Name and Description
HU     Housing Units: number of housing units per county
PCI    Per Capita Income: average income per county
PCU    Per Capita Unemployment Insurance Compensation: average unemployment compensation per county
FRL    Free and Reduced Lunch: percentage of free/reduced-lunch-eligible students per county (all school districts aggregated)
GEO    Geography: 1 = urban; 2 = rural; 3 = frontier
AC_J   Assessment Closures, June: percentage of assessments closed within 60-day guidelines as of June 30 in a given fiscal year, per county
90_J   90% Assessment Closures, June: 1 = achieved the 90% threshold for successful assessment closures, 0 = did not achieve the threshold, as of June 30 in a given fiscal year, per county
AC_4   Assessment Closures, Fiscal Year: percentage of assessments closed within 60-day guidelines across all 4 quarters of the fiscal year, per county
90_4   90% Assessment Closures, Fiscal Year: 1 = achieved the 90% threshold for successful assessment closures, 0 = did not achieve the threshold, across all 4 quarters of the fiscal year, per county
TK     Total Kids: total number of children in out-of-home placements as of June 30 in a given fiscal year, per county
FC     Foster Care: of those children in out-of-home placements, the percentage in foster care settings
CC     Congregate Care: of those children in out-of-home placements, the percentage in congregate care settings
KC     Kinship Care: of those children in out-of-home placements, the percentage in kinship care settings
P      Providers: the total number of placement providers per county
S      Services: the total number of services offered across placement providers per county
SPM    Success in Permanency: percentage of children who achieved permanency placements in 12 months or less in a given fiscal year, per county
TKP    Total Kids with Permanency Placements: the total number of children in out-of-home care who achieved permanency in 12 months or less
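As a minimal illustration of how this coding scheme can be operationalized, the sketch below represents a single county-fiscal-year record and derives the binary 90_J and 90_4 indicators from the continuous closure percentages. The record class, field names, and all values are hypothetical conveniences for illustration only; they are not drawn from the appendix tables or from the study's analysis files.

```python
from dataclasses import dataclass

@dataclass
class CountyYearRecord:
    """One county-fiscal-year observation, labeled with the Appendix A abbreviations.

    All values used below are illustrative and are not taken from the appendix tables.
    """
    county: str
    fiscal_year: int
    hu: int       # HU: housing units
    pci: float    # PCI: per capita income
    geo: int      # GEO: 1 = urban, 2 = rural, 3 = frontier
    ac_j: float   # AC_J: share of assessments closed within 60 days, as of June 30
    ac_4: float   # AC_4: share closed within 60 days across all 4 quarters
    spm: float    # SPM: share of children achieving permanency within 12 months

def meets_90_threshold(closure_rate: float) -> int:
    """Derive the binary 90_J / 90_4 indicator from a closure rate on a 0-1 scale."""
    return 1 if closure_rate >= 0.90 else 0

# Illustrative usage with made-up numbers
rec = CountyYearRecord("Example County", 2013, hu=10000, pci=35000.0,
                       geo=2, ac_j=0.93, ac_4=0.88, spm=0.61)
print(meets_90_threshold(rec.ac_j))  # 1 -> met the 90% standard as of June 30
print(meets_90_threshold(rec.ac_4))  # 0 -> did not meet it across the full fiscal year
```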


101 APPENDIX B . All Variab les by County for Fiscal Year 2013 All Variables by County: FY13 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Adams 165921 34845 190 0.48 1.00 0.86 0.00 0.81 0.00 407.00 0.58 0.20 0.14 15 22 0.52 216 Alamosa 6702 30643 218 0.73 2.00 0.51 0.00 0.52 0.00 32.00 0.47 0.25 0.28 10 12 0.57 16 Arapahoe 241942 49731 174 0.41 1.00 0.83 0.00 0.65 0.00 440.00 0.53 0.29 0.07 25 33 0.59 232 Archuleta 8932 34224 193 0.53 2.00 1.00 1.00 0.86 0.00 24.00 0.71 0.08 0.21 10 12 0.67 4 Baca 2252 42446 121 0.56 3.00 1.00 1.00 1.00 1.00 2.00 0.00 0.50 0.00 9 11 1.00 2 Bent 2249 24974 168 0.71 3.00 0.96 1.00 0.95 1.00 11.00 0.73 0.18 0.00 9 11 0.88 14 Boulder 129890 56515 154 0.26 1.00 0.91 1.00 0.82 0.00 104.00 0.49 0.17 0.28 17 19 0.63 60 Broomf ield 22908 55051 163 1.00 0.91 1.00 0.94 1.00 25.00 0.32 0.32 0.32 12 14 0.80 16 Chaffee 10303 35516 145 0.41 2.00 0.93 1.00 0.88 0.00 9.00 0.78 0.11 0.00 11 14 1.00 4 Cheyenne 985 55697 114 0.36 3.00 1.00 1.00 2.00 0.00 0.00 1.00 11 14 Clear Creek 5711 50202 198 0.25 1.00 1.00 1.00 0.96 1.00 19.00 0.84 0.16 0.00 12 14 0.88 7 Conejos 4335 29114 225 0.67 2.00 0.86 0.00 0.55 0.00 8.00 0.50 0.13 0.00 10 12 0.71 5 Costilla 2696 28915 244 0.88 3.00 0.56 0.00 0.61 0.00 6.00 0.67 0.17 0.00 10 12 1.0 0 2 Crowley 1565 18493 139 0.70 2.00 1.00 1.00 0.92 1.00 7.00 0.57 0.43 0.00 9 11 0.83 5 Custer 4153 37161 138 0.45 3.00 0.67 0.00 1.00 0.00 1.00 0.00 10 12 0.25 1 Delta 14624 32012 190 0.47 2.00 0.91 1.00 0.64 0.00 32.00 0.66 0.19 0.09 13 21 0.63 1 7 Denver 294752 61761 191 0.73 1.00 0.89 0.00 0.87 0.00 889.00 0.33 0.20 0.37 37 57 0.60 510 Dolores 1469 31875 193 0.40 3.00 1.00 1.00 1.00 1.00 0.00 9 11 1.00 1 Douglas 111653 62493 141 0.12 1.00 0.89 0.00 0.48 0.00 114.00 0.39 0.41 0.08 14 17 0 .63 42 Eagle 31523 55274 178 0.43 2.00 0.98 1.00 0.97 1.00 12.00 0.25 0.50 0.25 10 12 0.75 3 El Paso 258772 41272 189 0.37 1.00 0.91 1.00 0.76 0.00 742.00 0.47 0.21 0.21 31 48 0.61 432 Elbert 9041 47155 154 0.24 1.00 1.00 1.00 0.98 1.00 22.00 0.41 0.32 0.18 11 14 0.79 19 Fremont 19439 28726 184 0.53 2.00 0.98 1.00 0.83 0.00 124.00 0.42 0.15 0.23 12 16 0.43 26 Garfield 23489 44957 190 0.47 2.00 0.95 1.00 0.85 0.00 24.00 0.17 0.54 0.17 10 12 0.50 12 Gilpin 3592 41037 179 0.30 1.00 0.77 0.00 0.82 0.00 9. 00 0.44 0.44 0.11 10 12 0.50 5


102 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Grand 16407 38618 175 0.33 2.00 1.00 1.00 0.92 1.00 3.00 0.67 0.00 0.00 9 11 1.00 2 Gunnison 11599 37569 165 0.25 3.00 0.83 0.00 0.93 1.00 11.00 0.45 0. 09 0.27 10 12 0.67 8 Hinsdale 1407 39935 113 0.23 3.00 0.00 9 11 Huerfano 5165 34571 268 0.71 3.00 0.73 0.00 0.61 0.00 13.00 0.69 0.15 0.08 9 11 0.69 18 Jackson 1297 38264 171 0.53 3.00 1.00 1.00 1.00 0.00 1.00 0.00 9 11 0.00 0 Je fferson 232758 51313 178 0.34 1.00 0.61 0.00 0.52 0.00 550.00 0.34 0.22 0.34 16 23 0.64 261 Kiowa 816 57635 122 0.52 3.00 1.00 1.00 0.88 0.00 2.00 0.50 0.50 0.00 9 11 0.67 2 Kit Carson 3529 34628 110 0.58 3.00 0.88 0.00 0.90 1.00 3.00 0.67 0.00 0.00 11 1 4 0.75 3 La Plata 26428 54558 153 0.35 2.00 0.96 1.00 0.88 0.00 18.00 0.67 0.11 0.22 12 14 0.48 11 Lake 4326 30297 215 0.73 2.00 0.93 1.00 0.98 1.00 3.00 1.00 0.00 0.00 11 14 0.44 4 Larimer 137669 42560 156 0.32 1.00 0.81 0.00 0.74 0.00 184.00 0.51 0.11 0.28 21 27 0.68 133 Las Animas 8385 34623 261 0.52 3.00 0.92 1.00 0.75 0.00 25.00 0.48 0.04 0.28 9 11 0.57 13 Lincoln 2437 23963 134 0.40 3.00 1.00 1.00 0.86 0.00 9.00 0.67 0.33 0.00 11 14 0.70 7 Logan 9023 47072 147 0.47 2.00 0.87 0.00 0.57 0.00 48.00 0.60 0.23 0.08 10 12 0.55 21 Mesa 64097 37697 215 0.45 1.00 0.62 0.00 0.65 0.00 257.00 0.58 0.18 0.17 18 26 0.47 54 Mineral 1225 56514 163 0.56 3.00 1.00 1.00 1.00 1.00 0.00 10 12 Moffat 6230 39350 190 0.44 3.00 0.92 1.00 0.92 1.00 8.00 0.63 0.38 0.00 9 11 0.44 7 Montezuma 12126 35587 187 0.55 2.00 0.69 0.00 0.52 0.00 29.00 0.41 0.17 0.24 11 13 0.40 12 Montrose 18479 32290 218 0.54 2.00 0.64 0.00 0.61 0.00 74.00 0.55 0.22 0.11 12 14 0.56 39 Morgan 11559 36721 171 0.64 2.00 1.00 1.00 0.99 1. 00 61.00 0.41 0.41 0.13 12 14 0.60 12 Otero 9817 32204 222 0.68 2.00 0.98 1.00 0.73 0.00 38.00 0.58 0.03 0.32 9 11 0.62 18 Ouray 3149 47756 158 0.36 2.00 1.00 1.00 1.00 1.00 1.00 0.00 0.00 0.00 9 11 Park 14169 36595 172 0.40 1.00 1.00 1.00 0.98 1.00 6.00 0.83 0.17 0.00 12 15 0.67 4 Phillips 2108 42011 120 0.45 2.00 1.00 1.00 0.89 0.00 6.00 0.00 0.17 0.83 10 12 1.00 7 Pitkin 13059 102940 197 0.07 2.00 0.93 1.00 0.92 1.00 1.00 0.00 1.00 0.00 10 12 Prowers 5957 38329 153 0.65 2.00 1.00 1.00 0.85 0.00 8.00 0.50 0.25 0.13 9 11 1.00 3 Pueblo 70322 32044 224 0.61 1.00 0.80 0.00 0.76 0.00 264.00 0.47 0.13 0.32 15 20 0.81 179 Rio Blanco 3331 44008 176 0.27 3.00 1.00 1.00 1.00 1.00 9.00 0.33 0.22 0.44 9 11 0.86 12 Rio Grande 6675 36377 232 0.62 2.00 0 .95 1.00 0.77 0.00 15.00 0.47 0.53 0.00 10 12 1.00 6


103 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Routt 16420 60065 176 0.23 2.00 0.93 1.00 0.84 0.00 2.00 0.00 1.00 0.00 9 11 0.75 3 Saguache 3939 26428 257 0.81 3.00 1.00 1.00 0. 64 0.00 1.00 1.00 0.00 0.00 10 12 0.75 3 San Juan 765 55784 263 0.69 3.00 0.00 9 11 San Miguel 6702 62156 203 0.32 3.00 0.91 1.00 0.76 0.00 0.00 9 11 Sedgwick 1416 51646 142 0.46 3.00 1.00 1.00 1.00 1.00 1.00 0.00 0.00 0.00 1 0 12 0.00 0 Summit 30262 51446 163 0.32 2.00 1.00 1.00 0.99 1.00 2.00 0.50 0.50 0.00 10 12 0.67 6 Teller 12806 42313 196 0.37 1.00 0.58 0.00 0.52 0.00 28.00 0.46 0.32 0.07 11 14 0.32 10 Washington 2443 41453 132 0.48 3.00 1.00 1.00 0.88 0.00 6.00 0.33 0 .50 0.00 10 12 0.79 11 Weld 99730 38820 170 0.52 1.00 0.95 1.00 0.95 1.00 232.00 0.51 0.37 0.04 15 19 0.65 120 Yuma 4481 50286 115 0.59 3.00 1.00 1.00 0.95 1.00 10.00 0.40 0.60 0.00 10 12 0.75 3


104 APPENDIX C . All Variables by County for Fiscal Year 20 14 All Variables by County: FY14 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Adams 167569 36827 107 0.49 1.00 0.84 0.00 0.81 0.00 591.00 0.42 0.15 0.38 15 22 0.54 163 Alamosa 6746 32195 122 0.71 2.00 0.64 0.00 0.66 0.00 54.00 0.30 0.13 0.54 10 12 0.76 25 Arapahoe 244965 53297 96 0.42 1.00 0.82 0.00 0.81 0.00 385.00 0.58 0.23 0.05 25 33 0.56 206 Archuleta 9019 37854 101 0.50 2.00 0.86 0.00 0.58 0.00 19.00 0.79 0.00 0.21 10 12 0.50 1 Baca 2252 39606 64 0.64 3.00 0.63 0.00 3.00 0.00 0.67 0.33 9 11 0.80 4 Bent 2249 25549 74 0.72 3.00 1.00 1.00 0.96 1.00 10.00 0.50 0.40 0.10 9 11 1.00 12 Boulder 131441 60467 87 0.28 1.00 0.52 0.00 0.64 0.00 115.00 0.46 0.21 0.31 17 19 0.50 48 Broomfield 23792 57660 86 1.00 1.00 1.00 0.99 1.00 20.00 0.55 0.30 0.10 12 14 0.76 22 Chaffee 10404 38968 76 0.41 2.00 0.70 0.00 0.84 0.00 17.00 0.41 0.18 0.24 11 14 0.67 4 Cheyenne 986 53353 59 0.44 3.00 0.86 0.00 0.90 1.00 0.00 11 14 Clear Creek 5718 52129 107 0.28 1.00 0.94 1.00 0.94 1 .00 31.00 0.74 0.13 0.06 12 14 0.27 4 Conejos 4350 29733 126 0.61 2.00 0.67 0.00 0.38 0.00 9.00 0.56 0.11 0.22 10 12 0.57 4 Costilla 2723 29738 146 0.86 3.00 1.00 1.00 0.66 0.00 3.00 0.00 0.67 0.33 10 12 0.25 1 Crowley 1568 18541 57 0.70 2.00 1.00 1.00 0.96 1.00 10.00 0.40 0.20 0.40 9 11 0.67 6 Custer 4199 40041 75 0.51 3.00 1.00 1.00 0.83 0.00 0.00 10 12 0.75 3 Delta 14634 33206 116 0.50 2.00 0.90 1.00 0.91 1.00 43.00 0.33 0.28 0.16 13 21 0.63 24 Denver 300694 68147 105 0.73 1.00 0.87 0.00 0.86 0.00 805.00 0.30 0.19 0.38 37 57 0.59 412 Dolores 1469 33162 92 0.46 3.00 1.00 1.00 0.94 1.00 0.00 9 11 1.00 3 Douglas 114288 66088 76 0.12 1.00 0.99 1.00 0.92 1.00 83.00 0.39 0.42 0.04 14 17 0.51 54 Eagle 31629 60948 92 0.42 2.00 1.00 1.00 0.97 1.00 6.00 0.17 0.50 0.17 10 12 0.63 5 El Paso 261819 43284 104 0.38 1.00 0.97 1.00 0.95 1.00 690.00 0.45 0.19 0.26 31 48 0.65 459 Elbert 9117 49075 83 0.25 1.00 0.54 0.00 0.76 0.00 14.00 0.36 0.64 0.00 11 14 0.80 12 Fremont 19490 30315 94 0.54 2.00 0.98 1.00 0.97 1.00 112.00 0.43 0.10 0.29 12 16 0.60 44 Garfield 23564 50311 102 0.47 2.00 0.86 0.00 0.88 0.00 38.00 0.39 0.26 0.32 10 12 0.85 22 Gilpin 3606 41678 92 0.29 1.00 0.92 1.00 0.87 0.00 12.00 0.33 0.50 0.17 10 12 0.71 5


105 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Grand 16430 41381 91 0.33 2.00 1.00 1.00 0.93 1.00 7.00 0.86 0.14 0.00 9 11 1.00 4 Gunnison 11663 40400 91 0.26 3.00 0.93 1.00 0.97 1.00 6.00 0.33 0.17 0.33 10 12 0.25 1 Hinsdale 1411 43872 76 0.29 3.00 0.00 9 11 Huerfano 5179 36553 140 0.74 3.00 0.94 1.00 0.96 1.00 11.00 0.82 0.00 0.09 9 11 0.90 9 Jackson 1302 40845 96 0.57 3.00 0.00 9 11 Jefferson 234058 54593 97 0.34 1.00 0.89 0.00 0.71 0.00 539.00 0.37 0.16 0.38 16 23 0.69 356 Kiowa 820 52473 70 0.49 3.00 1.00 1.00 0.67 0.00 0.00 9 11 0.00 0 Kit Carson 3553 35194 63 0.52 3.00 1.00 1.00 0.91 1.00 6.00 0.33 0.17 0.50 11 14 0.00 0 La Plata 26686 58706 85 0.32 2.00 0.96 1.00 0.97 1.00 18.00 0.44 0.22 0.22 12 14 0.6 3 12 Lake 4326 31774 116 0.73 2.00 0.94 1.00 0.95 1.00 5.00 0.40 0.20 0.40 11 14 0.67 2 Larimer 140268 45189 87 0.32 1.00 0.90 0.00 0.83 0.00 159.00 0.42 0.12 0.33 21 27 0.60 99 Las Animas 8421 37040 130 0.50 3.00 0.92 1.00 0.80 0.00 30.00 0.63 0.03 0.2 0 9 11 0.67 10 Lincoln 2444 27161 61 0.48 3.00 0.86 0.00 0.94 1.00 6.00 0.17 0.67 0.00 11 14 0.54 7 Logan 9044 49188 75 0.49 2.00 0.93 1.00 0.90 0.00 45.00 0.51 0.27 0.16 10 12 0.52 24 Mesa 64213 39991 112 0.42 1.00 0.97 1.00 0.91 1.00 290.00 0.54 0.13 0.26 18 26 0.52 98 Mineral 1236 62422 110 0.61 3.00 1.00 1.00 0.00 10 12 Moffat 6239 40272 117 0.44 3.00 0.75 0.00 0.56 0.00 9.00 0.33 0.33 0.33 9 11 0.50 2 Montezuma 12132 39534 109 0.56 2.00 0.66 0.00 0.59 0.00 31.00 0.45 0.26 0.19 11 13 0.60 12 Montrose 18545 34164 118 0.54 2.00 0.84 0.00 0.84 0.00 63.00 0.52 0.25 0.06 12 14 0.74 32 Morgan 11597 40384 91 0.62 2.00 0.98 1.00 0.99 1.00 67.00 0.49 0.34 0.12 12 14 0.65 28 Otero 10136 33650 130 0.69 2.00 0.97 1.00 0.93 1.00 51.00 0.49 0.06 0.39 9 11 0.73 16 Ouray 3178 51545 89 0.35 2.00 1.00 1.00 0.83 0.00 1.00 0.00 1.00 0.00 9 11 1.00 1 Park 14224 37817 94 0.39 1.00 1.00 1.00 1.00 1.00 12.00 0.83 0.08 0.08 12 15 0.33 2 Phillips 2111 42799 68 0.41 2.00 0.83 0.00 0.94 1.00 5.00 0.00 0.20 0.80 10 12 1.00 12 Pitkin 13104 126741 111 0.06 2.00 0.94 1.00 0.90 0.00 3.00 0.33 0.67 0.00 10 12 1.00 1 Prowers 5960 38362 87 0.67 2.00 0.96 1.00 0.95 1.00 17.00 0.82 0.18 0.00 9 11 0.86 6 Pueblo 70490 33937 123 0.60 1.00 0.98 1.00 0.86 0.00 282.00 0. 39 0.09 0.47 15 20 0.77 173 Rio Blanco 3342 46105 97 0.31 3.00 1.00 1.00 1.00 1.00 8.00 0.00 0.25 0.75 9 11 0.60 3 Rio Grande 6686 38617 140 0.62 2.00 0.96 1.00 0.93 1.00 8.00 0.50 0.50 0.00 10 12 0.64 9


106 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Routt 16476 65353 92 0.22 2.00 0.93 1.00 0.83 0.00 2.00 0.50 0.00 0.00 9 11 1.00 3 Saguache 3974 27995 152 0.85 3.00 0.67 0.00 0.51 0.00 2.00 0.00 0.00 1.00 10 12 0.80 4 San Juan 765 51091 122 0.58 3.00 1.00 1.00 0.00 9 11 San Miguel 6714 73709 109 0.33 3.00 0.50 0.00 0.43 0.00 2.00 1.00 0.00 0.00 9 11 0.00 0 Sedgwick 1419 54204 81 0.54 3.00 1.00 1.00 1.00 1.00 2.00 0.00 0.50 0.00 10 12 0.00 0 Summit 30534 55761 83 0.37 2.00 0.95 1.00 0.97 1.00 5.00 0.40 0.60 0.00 10 12 0.50 2 Teller 12969 43955 104 0.40 1.00 0.97 1.00 0.92 1.00 12.00 0.33 0.33 0.08 11 14 0.66 21 Washington 2445 43604 70 0.46 3.00 0.43 0.00 0.70 0.00 10.00 0.50 0.20 0.20 10 12 0.80 4 Weld 101335 42374 88 0.53 1.00 1.00 1.00 0.97 1.00 171.00 0.60 0. 33 0.04 15 19 0.59 100 Yuma 4488 52608 62 0.61 3.00 1.00 1.00 1.00 1.00 10.00 0.50 0.30 0.20 10 12 0.78 7


107 APPENDIX D . All Variables by County for Fiscal Year 2015 All Variables by County: FY15 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Adams 169295 38451 103 0.48 1.00 0.94 1.00 0.89 0.00 605.00 0.41 0.14 0.40 15 22 0.40 101 Alamosa 6782 34676 115 0.67 2.00 0.98 1.00 0.74 0.00 65.00 0.22 0.15 0.60 10 12 0.58 11 Arapahoe 246782 54476 92 0.44 1.00 0.82 0.00 0.82 0.00 355.00 0.58 0.21 0.09 25 33 0.56 191 Archuleta 9100 39457 91 0.52 2.00 0.86 0.00 0.86 0.00 2.00 1.00 0.00 0.00 10 12 0.36 8 Baca 2252 37496 62 0.65 3.00 1.00 1.00 0.88 0.00 4.00 0.00 0.50 0.50 9 11 1.00 1 Bent 2249 27446 68 0.73 3.00 1.00 1.00 0.89 0.00 5.00 0 .40 0.00 0.40 9 11 0.75 9 Boulder 132789 64197 84 0.25 1.00 0.96 1.00 0.89 0.00 115.00 0.44 0.21 0.34 17 19 0.51 54 Broomfield 24845 58710 84 1.00 0.92 1.00 0.95 1.00 19.00 0.26 0.37 0.32 12 14 0.69 18 Chaffee 10544 41150 75 0.38 2.00 0.85 0.00 0.86 0 .00 13.00 0.77 0.00 0.08 11 14 0.53 9 Cheyenne 993 52857 73 0.43 3.00 1.00 1.00 1.00 1.00 2.00 0.50 0.00 0.50 11 14 1.00 2 Clear Creek 5726 55988 106 0.25 1.00 0.92 1.00 0.89 0.00 24.00 0.58 0.29 0.04 12 14 0.50 6 Conejos 4364 33621 121 0.66 2.00 0.67 0 .00 0.63 0.00 19.00 0.42 0.16 0.37 10 12 0.40 2 Costilla 2753 32495 134 0.85 3.00 0.91 1.00 0.81 0.00 13.00 0.15 0.15 0.69 10 12 1.00 4 Crowley 1575 22220 54 0.70 2.00 0.67 0.00 0.82 0.00 12.00 0.33 0.08 0.50 9 11 0.00 0 Custer 4241 42056 72 0.49 3.00 1.00 1.00 2.00 0.50 0.00 0.00 10 12 1.00 1 Delta 14676 33982 119 0.52 2.00 0.97 1.00 0.84 0.00 43.00 0.40 0.33 0.16 13 21 0.69 22 Denver 306478 67048 100 0.70 1.00 0.91 1.00 0.84 0.00 790.00 0.33 0.17 0.36 37 57 0.57 377 Dolores 1478 35774 95 0.51 3. 00 1.00 1.00 0.71 0.00 1.00 0.00 1.00 0.00 9 11 1.00 1 Douglas 117597 67576 75 0.11 1.00 0.96 1.00 0.93 1.00 74.00 0.41 0.46 0.01 14 17 0.60 53 Eagle 31876 66442 87 0.41 2.00 1.00 1.00 1.00 1.00 10.00 0.60 0.20 0.00 10 12 0.57 8 El Paso 264853 44758 100 0.39 1.00 0.86 0.00 0.92 1.00 583.00 0.46 0.21 0.24 31 48 0.62 437 Elbert 9229 51247 78 0.22 1.00 0.88 0.00 0.88 0.00 17.00 0.24 0.41 0.24 11 14 0.89 16 Fremont 19543 32185 91 0.56 2.00 0.97 1.00 0.96 1.00 95.00 0.51 0.13 0.29 12 16 0.71 63 Garfield 23 642 55978 100 0.48 2.00 0.94 1.00 0.91 1.00 26.00 0.31 0.15 0.54 10 12 0.70 21 Gilpin 3619 43869 85 0.27 1.00 0.89 0.00 0.90 1.00 12.00 0.58 0.33 0.08 10 12 0.50 4


108 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Grand 16506 43964 9 2 0.34 2.00 1.00 1.00 0.96 1.00 6.00 0.50 0.17 0.33 9 11 0.33 1 Gunnison 11721 41888 83 0.25 3.00 1.00 1.00 0.87 0.00 0.00 10 12 0.92 11 Hinsdale 1415 46243 76 0.33 3.00 0.00 9 11 Huerfano 5201 38493 135 0.76 3.00 0.93 1.00 0.87 0.00 19.00 0.74 0.16 0.05 9 11 0.70 7 Jackson 1303 45264 121 0.54 3.00 0.50 0.00 1.00 0.00 1.00 0.00 9 11 1.00 1 Jefferson 236639 57009 94 0.32 1.00 0.84 0.00 0.77 0.00 593.00 0.35 0.17 0.38 16 23 0.66 326 Kiowa 820 51397 71 0.43 3.00 0.50 0.00 0.79 0.00 6.00 0.33 0.50 0.17 9 11 Kit Carson 3555 33595 62 0.60 3.00 0.25 0.00 0.51 0.00 1.00 0.00 1.00 0.00 11 14 La Plata 26951 54980 88 0.31 2.00 0.97 1.00 0.95 1.00 16.00 0.63 0.19 0.13 12 14 0.48 10 Lake 4326 33325 116 0.75 2.00 1.00 1.00 0.98 1.00 2.00 1.00 0.00 0.00 11 14 0.33 1 Larimer 142980 47005 83 0.33 1.00 0.94 1.00 0.93 1.00 146.00 0.44 0.14 0.23 21 27 0.68 114 Las Animas 8426 36920 117 0.52 3.00 0.50 0.00 0.64 0.00 44.00 0.66 0.09 0.20 9 11 0.73 19 Lincoln 2446 26868 65 0.44 3.00 1 .00 1.00 0.98 1.00 12.00 0.67 0.25 0.00 11 14 1.00 4 Logan 9064 50746 78 0.44 2.00 0.98 1.00 0.97 1.00 48.00 0.42 0.29 0.15 10 12 0.58 21 Mesa 65192 40335 125 0.42 1.00 0.85 0.00 0.89 0.00 276.00 0.49 0.14 0.31 18 26 0.50 106 Mineral 1242 64312 97 0.60 3.00 1.00 0.00 1.00 0.00 10 12 Moffat 6248 40697 115 0.43 3.00 0.50 0.00 0.75 0.00 15.00 0.33 0.27 0.27 9 11 0.50 2 Montezuma 12186 40374 126 0.57 2.00 0.47 0.00 0.61 0.00 21.00 0.43 0.24 0.19 11 13 0.75 18 Montrose 18635 35755 113 0.51 2.00 0.93 1.00 0.70 0.00 71.00 0.49 0.31 0.13 12 14 0.47 30 Morgan 11647 41782 95 0.62 2.00 0.95 1.00 0.98 1.00 56.00 0.43 0.30 0.21 12 14 0.40 19 Otero 10190 36100 126 0.69 2.00 0.70 0.00 0.74 0.00 54.00 0.30 0.04 0.56 9 11 0.47 14 Ouray 3211 54355 91 0.34 2.00 1.00 1.00 1.00 1.00 2.00 1.00 0.00 0.00 9 11 Park 14311 39885 88 0.39 1.00 0.94 1.00 0.98 1.00 10.00 1.00 0.00 0.00 12 15 0.67 4 Phillips 2123 41546 66 0.42 2.00 1.00 1.00 1.00 1.00 7.00 0.43 0.14 0.43 10 12 0.70 7 Pitkin 13201 131562 108 0.05 2.00 1.00 1.00 0.96 1.00 1.00 1.00 0.00 0.00 10 12 0.67 2 Prowers 5964 41313 96 0.66 2.00 0.96 1.00 0.96 1.00 6.00 0.33 0.00 0.67 9 11 0.42 5 Pueblo 70596 35508 119 0.63 1.00 0.97 1.00 0.92 1.00 307.00 0.39 0.08 0.50 15 20 0.67 202 Rio Blanco 3340 4554 8 109 0.36 3.00 1.00 1.00 1.00 1.00 8.00 0.00 0.00 1.00 9 11 0.73 8 Rio Grande 6698 43563 129 0.62 2.00 0.92 1.00 0.94 1.00 9.00 0.22 0.22 0.56 10 12 0.57 8


109 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Routt 16603 69988 89 0.22 2.00 1.00 1.00 0.96 1.00 1.00 0.00 1.00 0.00 9 11 0.50 1 Saguache 4005 31366 141 0.87 3.00 0.00 0.00 0.14 0.00 3.00 0.00 0.67 0.00 10 12 0.00 0 San Juan 765 58158 131 0.56 3.00 0.67 0.00 0.00 9 11 San Miguel 6796 79945 114 0.29 3.00 1.00 1 .00 0.90 1.00 3.00 1.00 0.00 0.00 9 11 1.00 2 Sedgwick 1420 51568 83 0.44 3.00 1.00 1.00 1.00 1.00 0.00 10 12 0.00 0 Summit 30742 59569 79 0.34 2.00 0.50 0.00 0.73 0.00 5.00 0.40 0.40 0.20 10 12 1.00 3 Teller 13040 46062 100 0.36 1.00 0.89 0.00 0. 89 0.00 20.00 0.65 0.20 0.10 11 14 0.26 5 Washington 2450 41262 72 0.49 3.00 0.67 0.00 0.79 0.00 7.00 0.29 0.14 0.43 10 12 0.50 1 Weld 104190 43867 92 0.51 1.00 0.96 1.00 0.98 1.00 169.00 0.61 0.29 0.02 15 19 0.71 102 Yuma 4500 54543 66 0.56 3.00 1.00 1 .00 1.00 1.00 7.00 0.57 0.29 0.14 10 12 0.46 5


110 APPENDIX E . All Variables by County for Fiscal Year 2016 All Variables by County: FY16 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Adams 170976 39618 99 0.50 1.00 0.94 1.00 0.88 0.00 548.00 0.43 0.14 0.36 15 22 0.44 202 Alamosa 6808 34553 107 0.64 2.00 0.71 0.00 0.80 0.00 59.00 0.49 0.15 0.31 10 12 0.50 21 Arapahoe 248864 55116 89 0.43 1.00 0.85 0.00 0.88 0.00 336.00 0.60 0.19 0.08 25 33 0.54 159 Archuleta 9211 40153 86 0.52 2 .00 0.97 1.00 0.95 1.00 11.00 0.64 0.00 0.36 10 12 0.25 1 Baca 2252 38757 61 0.66 3.00 1.00 1.00 0.88 0.00 4.00 0.00 0.00 1.00 9 11 0.33 1 Bent 2251 27336 67 0.71 3.00 0.95 1.00 0.91 1.00 3.00 0.00 0.33 0.67 9 11 0.56 5 Boulder 134154 65150 82 0.27 1.00 0.88 0.00 0.88 0.00 138.00 0.38 0.14 0.45 17 19 0.51 51 Broomfield 25545 60316 86 1.00 0.95 1.00 0.92 1.00 28.00 0.61 0.18 0.18 12 14 0.53 9 Chaffee 10704 42670 67 0.39 2.00 0.87 0.00 0.86 0.00 17.00 0.29 0.06 0.59 11 14 0.64 9 Cheyenne 998 49553 73 0.56 3.00 1.00 1.00 1.00 1.00 0.00 11 14 1.00 3 Clear Creek 5742 56172 104 0.23 1.00 1.00 1.00 0.93 1.00 20.00 0.65 0.05 0.10 12 14 0.07 1 Conejos 4385 33831 122 0.69 2.00 0.50 0.00 0.49 0.00 16.00 0.31 0.31 0.38 10 12 0.60 6 Costilla 2788 31540 1 17 0.86 3.00 0.82 0.00 0.73 0.00 17.00 0.47 0.29 0.18 10 12 0.50 2 Crowley 1580 22566 54 0.72 2.00 0.75 0.00 0.89 0.00 15.00 0.20 0.07 0.60 9 11 0.50 3 Custer 4284 40105 70 0.51 3.00 1.00 1.00 0.93 1.00 2.00 1.00 0.00 0.00 10 12 Delta 14670 33581 12 0 0.56 2.00 1.00 1.00 0.94 1.00 28.00 0.36 0.25 0.14 13 21 0.53 20 Denver 314631 64004 97 0.69 1.00 0.91 1.00 0.79 0.00 960.00 0.29 0.19 0.42 37 57 0.52 321 Dolores 1478 31995 105 0.52 3.00 1.00 1.00 0.81 0.00 0.00 9 11 Douglas 119485 68560 75 0.12 1.00 0.92 1.00 0.92 1.00 89.00 0.51 0.31 0.04 14 17 0.64 30 Eagle 32049 68105 89 0.41 2.00 1.00 1.00 0.99 1.00 10.00 0.30 0.30 0.30 10 12 0.75 3 El Paso 267888 45026 94 0.40 1.00 0.95 1.00 0.93 1.00 659.00 0.43 0.20 0.25 31 48 0.66 431 Elbert 9378 52989 76 0.23 1.00 1.00 1.00 0.89 0.00 12.00 0.42 0.42 0.00 11 14 0.63 5 Fremont 19592 32780 87 0.57 2.00 0.99 1.00 0.98 1.00 109.00 0.32 0.16 0.39 12 16 0.62 55 Garfield 23780 53172 97 0.48 2.00 0.83 0.00 0.84 0.00 32.00 0.34 0.16 0.47 10 12 0.56 14 G ilpin 3635 44702 75 0.28 1.00 1.00 1.00 0.96 1.00 17.00 0.53 0.18 0.29 10 12 0.50 4


111 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Grand 16629 44391 87 0.38 2.00 1.00 1.00 0.90 1.00 2.00 0.00 0.50 0.50 9 11 0.44 4 Gunnison 11817 4 3294 75 0.27 3.00 1.00 1.00 0.96 1.00 5.00 0.00 0.20 0.80 10 12 0.83 5 Hinsdale 1421 47130 65 0.32 3.00 1.00 1.00 1.00 1.00 0.00 9 11 Huerfano 5230 39851 131 0.73 3.00 0.88 0.00 0.75 0.00 23.00 0.61 0.09 0.22 9 11 0.46 5 Jackson 1307 44564 104 0.53 3.00 1.00 1.00 0.00 9 11 Jefferson 238450 58053 90 0.31 1.00 0.92 1.00 0.88 0.00 600.00 0.36 0.14 0.36 16 23 0.56 226 Kiowa 820 49355 72 0.45 3.00 0.33 0.00 0.29 0.00 3.00 0.67 0.00 0.33 9 11 1.00 4 Kit Carson 3555 35299 68 0.61 3.00 0.07 0.00 0.22 0.00 8.00 0.38 0.38 0.25 11 14 0.00 0 La Plata 27277 52588 85 0.34 2.00 0.98 1.00 0.98 1.00 21.00 0.62 0.05 0.29 12 14 0.59 13 Lake 4365 33569 96 0.74 2.00 1.00 1.00 0.98 1.00 1.00 1.00 0.00 0.00 11 14 0.67 4 Larimer 145258 48289 83 0.33 1.00 0.88 0.00 0.91 1.00 145.00 0.47 0.12 0.23 21 27 0.69 102 Las Animas 8429 37959 120 0.57 3.00 0.82 0.00 0.71 0.00 34.00 0.71 0.06 0.24 9 11 0.61 20 Lincoln 2461 25177 59 0.40 3.00 0.81 0.00 0.81 0.00 15.00 0.47 0.33 0.20 11 14 0.50 2 Logan 9091 493 37 85 0.48 2.00 1.00 1.00 0.99 1.00 29.00 0.34 0.24 0.31 10 12 0.69 25 Mesa 65661 39920 137 0.49 1.00 0.97 1.00 0.92 1.00 277.00 0.48 0.16 0.29 18 26 0.42 80 Mineral 1254 64321 68 0.63 3.00 1.00 0.00 1.00 0.00 10 12 Moffat 6255 39190 115 0.4 2 3.00 0.34 0.00 0.70 0.00 17.00 0.41 0.35 0.06 9 11 0.69 11 Montezuma 12218 39590 129 0.55 2.00 0.94 1.00 0.81 0.00 16.00 0.63 0.25 0.06 11 13 0.48 10 Montrose 18757 36339 108 0.51 2.00 0.70 0.00 0.66 0.00 66.00 0.56 0.29 0.05 12 14 0.60 18 Morgan 1172 3 41929 102 0.63 2.00 0.90 1.00 0.93 1.00 57.00 0.40 0.30 0.19 12 14 0.59 31 Otero 10213 35875 120 0.70 2.00 0.78 0.00 0.71 0.00 40.00 0.35 0.15 0.48 9 11 0.38 13 Ouray 3245 55612 88 0.29 2.00 0.75 0.00 0.67 0.00 0.00 9 11 1.00 1 Park 14432 41289 81 0.37 1.00 1.00 1.00 1.00 1.00 6.00 0.17 0.33 0.00 12 15 0.14 1 Phillips 2132 44069 66 0.43 2.00 1.00 1.00 0.83 0.00 6.00 0.17 0.17 0.67 10 12 0.00 0 Pitkin 13307 136025 109 0.04 2.00 1.00 1.00 0.97 1.00 1.00 1.00 0.00 0.00 10 12 0.67 2 Prowers 5969 4 0752 90 0.64 2.00 1.00 1.00 1.00 1.00 10.00 0.30 0.20 0.40 9 11 0.85 11 Pueblo 70875 36250 117 0.60 1.00 0.89 0.00 0.92 1.00 315.00 0.36 0.11 0.50 15 20 0.62 203 Rio Blanco 3392 42740 116 0.35 3.00 1.00 1.00 0.94 1.00 10.00 0.60 0.00 0.40 9 11 0.60 3 Ri o Grande 6712 43749 123 0.63 2.00 0.94 1.00 0.96 1.00 10.00 0.30 0.40 0.10 10 12 1.00 2


112 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Routt 16698 69602 86 0.24 2.00 0.97 1.00 0.97 1.00 3.00 0.00 0.67 0.33 9 11 1.00 2 Saguache 403 2 30284 130 0.87 3.00 0.27 0.00 0.35 0.00 6.00 0.00 0.50 0.17 10 12 0.33 1 San Juan 771 46019 135 0.64 3.00 0.00 9 11 San Miguel 6828 75793 112 0.32 3.00 0.67 0.00 0.88 0.00 0.00 9 11 0.33 1 Sedgwick 1420 46489 72 0.56 3.00 1.00 1.00 1.00 1.00 0.00 10 12 1.00 1 Summit 30883 61765 76 0.37 2.00 0.94 1.00 0.93 1.00 5.00 0.20 0.80 0.00 10 12 0.80 4 Teller 13124 46208 96 0.36 1.00 0.95 1.00 0.78 0.00 26.00 0.54 0.19 0.19 11 14 0.71 5 Washington 2455 40284 71 0.52 3.00 0.33 0.0 0 0.42 0.00 5.00 0.20 0.60 0.00 10 12 0.50 4 Weld 107210 43757 92 0.53 1.00 0.96 1.00 0.97 1.00 165.00 0.61 0.27 0.07 15 19 0.54 61 Yuma 4501 52271 64 0.60 3.00 1.00 1.00 1.00 1.00 10.00 0.50 0.20 0.30 10 12 0.88 7


113 APPENDIX F . All Variables by County for Fiscal Year 2017 All Variables by County: FY17 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Adams 173649 41215 79 0.49 1.00 0.93 1.00 0.90 1.00 496.00 0.48 0.17 0.30 15 22 0.60 285 Alamosa 6839 35721 86 0.63 2.00 0.82 0.00 0 .73 0.00 43.00 0.60 0.12 0.28 10 12 0.52 27 Arapahoe 253290 56642 76 0.43 1.00 0.91 1.00 0.86 0.00 375.00 0.61 0.19 0.10 25 33 0.58 175 Archuleta 9327 39944 72 0.46 2.00 1.00 1.00 0.96 1.00 7.00 0.29 0.00 0.57 10 12 1.00 1 Baca 2253 42019 53 0.64 3.00 0 .67 0.00 0.67 0.00 3.00 0.33 0.33 0.33 9 11 1.00 3 Bent 2254 26417 54 0.75 3.00 1.00 1.00 0.98 1.00 6.00 0.33 0.00 0.50 9 11 1.00 8 Boulder 135968 68027 71 0.26 1.00 0.96 1.00 0.95 1.00 153.00 0.44 0.14 0.38 17 19 0.56 66 Broomfield 26319 63596 73 1.0 0 0.79 0.00 0.89 0.00 22.00 0.73 0.09 0.18 12 14 0.73 19 Chaffee 10869 43773 55 0.34 2.00 0.67 0.00 0.77 0.00 17.00 0.35 0.12 0.53 11 14 0.67 8 Cheyenne 1001 57212 55 0.48 3.00 1.00 1.00 1.00 1.00 0.00 11 14 0.67 2 Clear Creek 5754 56672 80 0.27 1 .00 0.92 1.00 0.94 1.00 16.00 0.56 0.13 0.25 12 14 0.14 1 Conejos 4405 34726 99 0.61 2.00 0.33 0.00 0.70 0.00 15.00 0.20 0.33 0.47 10 12 0.64 9 Costilla 2831 32753 83 0.85 3.00 0.70 0.00 0.89 0.00 27.00 0.48 0.04 0.41 10 12 0.63 10 Crowley 1584 19443 41 0.73 2.00 1.00 1.00 0.91 1.00 9.00 0.22 0.11 0.67 9 11 0.67 8 Custer 4343 38008 60 0.50 3.00 1.00 1.00 0.93 1.00 4.00 0.75 0.00 0.00 10 12 0.50 1 Delta 14681 37475 89 0.57 2.00 0.96 1.00 0.97 1.00 55.00 0.29 0.24 0.29 13 21 0.46 13 Denver 321513 69862 80 0.68 1.00 0.95 1.00 0.90 1.00 954.00 0.28 0.19 0.45 37 57 0.56 403 Dolores 1478 34214 79 0.59 3.00 0.75 0.00 0.90 1.00 0.00 9 11 1.00 5 Douglas 122818 71208 65 0.12 1.00 0.93 1.00 0.93 1.00 93.00 0.47 0.32 0.17 14 17 0.71 53 Eagle 32465 70384 7 2 0.41 2.00 0.98 1.00 0.95 1.00 9.00 0.22 0.33 0.44 10 12 0.64 7 El Paso 271183 46511 76 0.40 1.00 0.99 1.00 0.93 1.00 772.00 0.40 0.21 0.32 31 48 0.66 422 Elbert 9484 55187 67 0.22 1.00 0.95 1.00 0.97 1.00 7.00 0.14 0.71 0.00 11 14 0.77 10 Fremont 1968 9 33422 67 0.52 2.00 0.99 1.00 0.99 1.00 115.00 0.37 0.12 0.36 12 16 0.53 41 Garfield 23930 55305 75 0.48 2.00 0.89 0.00 0.86 0.00 45.00 0.60 0.04 0.33 10 12 0.50 16 Gilpin 3676 45531 59 0.27 1.00 1.00 1.00 0.97 1.00 6.00 0.50 0.00 0.50 10 12 0.43 3


114 COU NTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Grand 16782 46013 69 0.37 2.00 0.85 0.00 0.86 0.00 5.00 0.20 0.20 0.60 9 11 0.67 2 Gunnison 11917 44365 62 0.26 3.00 1.00 1.00 0.98 1.00 7.00 0.43 0.14 0.14 10 12 1.00 6 Hinsdale 1428 49 480 67 0.39 3.00 0.00 9 11 Huerfano 5266 41138 111 0.72 3.00 0.76 0.00 0.87 0.00 18.00 0.56 0.17 0.28 9 11 0.52 11 Jackson 1314 46248 68 0.48 3.00 0.00 0.00 0.00 0.00 0.00 9 11 0.00 0 Jefferson 240989 60398 75 0.33 1.00 0.96 1.00 0.94 1.00 526.00 0.37 0.14 0.35 16 23 0.56 263 Kiowa 821 53959 62 0.44 3.00 1.00 1.00 0.78 0.00 11.00 0.27 0.00 0.73 9 11 0.40 4 Kit Carson 3555 39016 59 0.59 3.00 0.35 0.00 0.46 0.00 11.00 0.64 0.18 0.18 11 14 1.00 2 La Plata 27527 52759 68 0.35 2.00 0.96 1.00 0.98 1.00 29.00 0.38 0.03 0.45 12 14 0.52 12 Lake 4397 34668 68 0.67 2.00 1.00 1.00 0.97 1.00 0.00 11 14 Larimer 147811 50539 69 0.33 1.00 0.95 1.00 0.86 0.00 128.00 0.52 0.10 0.28 21 27 0.57 75 Las Animas 8434 38119 88 0.55 3.00 0.9 7 1.00 0.94 1.00 18.00 0.61 0.11 0.28 9 11 0.41 17 Lincoln 2470 28330 47 0.42 3.00 1.00 1.00 0.81 0.00 13.00 0.31 0.00 0.62 11 14 0.46 5 Logan 9105 48521 64 0.47 2.00 1.00 1.00 0.98 1.00 43.00 0.47 0.12 0.30 10 12 0.42 17 Mesa 66285 41503 96 0.51 1.00 0 .98 1.00 0.95 1.00 274.00 0.50 0.14 0.27 18 26 0.35 57 Mineral 1268 66047 56 0.63 3.00 1.00 1.00 1.00 1.00 2.00 0.50 0.50 0.00 10 12 0.00 0 Moffat 6265 39007 91 0.43 3.00 1.00 1.00 0.94 1.00 17.00 0.41 0.41 0.06 9 11 0.39 5 Montezuma 12240 40336 97 0.61 2.00 0.75 0.00 0.85 0.00 16.00 0.44 0.19 0.31 11 13 0.50 8 Montrose 18881 37658 81 0.48 2.00 0.85 0.00 0.80 0.00 69.00 0.58 0.28 0.09 12 14 0.57 26 Morgan 11756 43632 69 0.64 2.00 0.87 0.00 0.88 0.00 33.00 0.55 0.27 0.09 12 14 0.61 27 Otero 10270 37214 99 0.69 2.00 0.59 0.00 0.62 0.00 55.00 0.36 0.11 0.49 9 11 0.80 32 Ouray 3294 56335 66 0.31 2.00 1.00 1.00 0.82 0.00 0.00 9 11 1.00 3 Park 14567 42703 66 0.38 1.00 1.00 1.00 1.00 1.00 6.00 0.00 0.17 0.83 12 15 0.00 0 Phillips 2133 47637 52 0.40 2 .00 1.00 1.00 1.00 1.00 2.00 0.00 1.00 0.00 10 12 0.20 1 Pitkin 13397 143812 93 0.05 2.00 1.00 1.00 0.94 1.00 1.00 1.00 0.00 0.00 10 12 Prowers 5968 41247 68 0.64 2.00 1.00 1.00 1.00 1.00 25.00 0.16 0.12 0.68 9 11 0.57 4 Pueblo 71153 37231 97 0.65 1 .00 0.93 1.00 0.93 1.00 293.00 0.33 0.08 0.54 15 20 0.51 147 Rio Blanco 3472 44189 85 0.33 3.00 0.73 0.00 0.84 0.00 15.00 0.47 0.00 0.53 9 11 0.67 6 Rio Grande 6717 45544 104 0.59 2.00 0.93 1.00 0.95 1.00 13.00 0.46 0.46 0.08 10 12 0.69 11


115 COUNTY HU PCI PCU FRL GEO AC_J 90_J AC_4 90_4 TK FC CC KC P S SPM TKP Routt 16857 73200 73 0.22 2.00 0.83 0.00 0.90 0.00 7.00 0.43 0.29 0.29 9 11 1.00 3 Saguache 4084 30134 111 0.90 3.00 1.00 1.00 0.57 0.00 7.00 0.00 0.14 0.57 10 12 0.50 3 San Juan 779 47138 80 0.60 3.00 1.00 0.00 0.00 1.00 9 11 San Miguel 6862 77106 94 0.28 3.00 0.83 0.00 0.75 0.00 2.00 0.50 0.00 0.00 9 11 0.50 1 Sedgwick 1423 52363 66 0.56 3.00 1.00 1.00 0.95 1.00 0.00 10 12 Summit 31185 64446 65 0.32 2.00 0.93 1.00 0.90 1. 00 4.00 0.25 0.75 0.00 10 12 0.00 0 Teller 13231 47030 77 0.36 1.00 1.00 1.00 0.99 1.00 24.00 0.54 0.21 0.21 11 14 0.48 12 Washington 2461 41019 60 0.49 3.00 0.67 0.00 0.45 0.00 4.00 0.00 0.00 1.00 10 12 0.00 0 Weld 110496 44080 68 0.50 1.00 0.96 1.00 0 .94 1.00 188.00 0.58 0.27 0.10 15 19 0.51 70 Yuma 4505 55197 47 0.58 3.00 1.00 1.00 1.00 1.00 9.00 0.33 0.33 0.33 10 12 0.50 3


116 APPENDIX G . Percent Change Across Variables by County from Fiscal Year 2013 to Fiscal Year 2017 Percent Change Across Variab les by County: FY2013 FY2017 County Name %Change Housing %Change Income %Change Unemp %Change FRL Eligible %Change PerMgmt 4 Qrts %Change Foster %Change Cong %Change Kinship %Change Perm. Success Align PM and Perm Adams 4.66 18.28 58.42 2.11 11.63 17.5 9 14.90 118.33 15.38 Yes Alamosa 2.04 16.57 60.55 13.71 39.48 28.99 53.49 0.78 9.11 No Arapahoe 4.69 13.90 56.32 3.93 32.20 15.30 32.95 35.67 1.36 No Archuleta 4.42 16.71 62.69 13.88 11.52 59.66 100.00 174.29 49.93 Yes Baca 0.04 1.01 56. 20 13.84 33.33 33.33 0.00 No Bent 0.22 5.78 67.86 6.01 3.27 54.17 100.00 14.29 Yes Boulder 4.68 20.37 53.90 1.39 15.32 9.37 20.70 35.95 11.20 No Broomfield 14.89 15.52 55.21 5.09 127.27 71.59 43.18 8.63 Yes Chaffee 5.49 23.25 62. 07 16.13 13.09 54.62 5.88 33.30 Yes Cheyenne 1.62 2.72 51.75 31.89 0.00 100.00 Clear Creek 0.75 12.89 59.60 9.77 1.85 33.20 20.83 83.66 Yes Conejos 1.61 19.28 56.00 9.80 27.54 60.00 166.67 9.94 No Costilla 5.01 13.27 65.98 4.13 47.07 27.78 77.78 37.50 No Crowley 1.21 5.14 70.50 4.02 1.09 61.11 74.07 19.93 Yes Custer 4.58 2.28 56.52 11.02 40.00 100.00 100.00 Yes Delta 0.39 17.07 53.16 20.36 52.02 55.67 26.06 210.30 26.35 No Denver 9.08 13.12 58.12 6.56 3.98 16.35 4.68 21.31 5.37 No Dolores 0.61 7.34 59.07 47.53 10.00 0.00 No Douglas 10.00 13.95 53.90 2.19 93.94 19.86 21.76 117.92 12.76 Yes Eagle 2.99 27.34 59.55 4.34 1.74 11.11 33.33 77.78 15.20 Yes El Paso 4.80 12.69 59.79 8.27 23.03 14.81 0.86 53.41 7.86 Yes Elbert 4.90 17.03 56.49 6.53 0.66 65.08 124.49 100.00 2.90 Yes


117 County Name %Change Housing %Change Income %Change Unemp %Change FRL Eligible %Change PerMgmt 4 Qrts %Change Foster %Change Cong %Change Kinship % Change Perm. Success Align PM and Perm Fremont 1.29 16.35 63.59 0.97 19.15 10.84 20.55 57.89 22.86 Yes Garfield 1.88 23.02 60.53 0.43 1.09 260.00 91.79 100.00 0.00 No Gilpin 2.34 10.95 67.04 10.84 18.15 12.50 100.00 350.00 14.20 Yes Grand 2.2 9 19.15 60.57 13.34 6.29 70.00 33.30 Yes Gunnison 2.74 18.09 62.42 5.82 5.38 5.71 57.14 47.62 49.93 Yes Hinsdale 1.49 23.90 40.71 71.03 Huerfano 1.96 19.00 58.58 1.48 42.68 19.75 8.33 261.11 24.28 No Jackson 1.31 20.87 60.23 8.94 100.00 100.00 Jefferson 3.54 17.71 57.87 5.08 83.11 10.21 35.52 2.33 11.91 No Kiowa 0.61 6.38 49.18 16.42 11.11 45.45 100.00 40.03 Yes Kit Carson 0.74 12.67 46.36 1.81 48.85 4.55 33.33 No La Plata 4.16 3.30 55.56 0.87 12.24 43.10 68.97 101.72 9.21 Yes Lake 1.64 14.43 68.37 8.34 0.83 100.00 100.00 Yes Larimer 7.37 18.75 55.77 1.50 16.38 3.56 6.56 0.48 16.72 No Las Animas 0.58 10.10 66.28 6.29 25.15 27.31 177.78 0.79 28.32 No Lincoln 1.35 18.22 64.93 5.78 6.21 53.85 100.00 35.00 Yes Logan 0.91 3.08 56.46 0.64 72.52 23.02 49.26 262.79 24.95 No Mesa 3.41 10.10 55.35 13.21 47.63 14.39 22.17 61.42 27.22 No Mineral 3.51 16.87 65.64 11.62 0.00 Moffat 0.56 0.87 52.11 1 .66 2.69 34.12 9.80 12.10 No Montezuma 0.94 13.34 48.13 10.02 64.88 5.73 8.75 29.46 25.00 Yes Montrose 2.18 16.62 62.84 12.02 29.60 4.63 27.36 19.57 1.44 Yes Morgan 1.70 18.82 59.65 0.11 10.43 33.09 33.45 30.68 2.33 No Otero 4.61 15.56 55. 41 1.11 16.01 37.19 314.55 55.45 28.82 No Ouray 4.60 17.96 58.23 15.61 18.18


118 County Name %Change Housing %Change Income %Change Unemp %Change FRL Eligible %Change PerMgmt 4 Qrts %Change Foster %Change Cong %Change Kinship %Change Perm. Suc cess Align PM and Perm Park 2.81 16.69 61.63 4.26 2.30 100.00 0.00 100.00 No Phillips 1.19 13.39 56.67 9.89 11.76 500.00 100.00 80.00 No Pitkin 2.59 39.70 52.79 29.93 2.42 100.00 Prowers 0.18 7.61 55.56 1.48 18.00 68.00 52.00 444.00 42.90 No Pueblo 1.18 16.19 56.70 6.07 23.06 28.21 39.05 68.54 37.96 No Rio Blanco 4.23 0.41 51.70 22.42 16.39 40.00 100.00 20.00 22.17 Yes Rio Grande 0.63 25.20 55.17 3.94 23.45 1.10 13.46 31.20 No Routt 2.66 21.87 58.52 5.76 7.18 71.43 33.33 No Saguache 3.68 14.02 56.81 11.57 11.85 100.00 33.33 Yes San Juan 1.83 15.50 69.58 13.09 San Miguel 2.39 24.05 53.69 11.10 1.32 Sedgwick 0.49 1.39 53.52 23.13 5.26 Summit 3.05 25.2 7 60.12 0.18 8.81 50.00 50.00 100.00 Yes Teller 3.32 11.15 60.71 3.18 91.55 16.67 35.19 191.67 48.61 Yes Washington 0.74 1.05 54.55 3.55 48.57 100.00 100.00 100.00 Yes Weld 10.80 13.55 60.00 2.41 0.38 13.99 27.41 134.47 21.63 Yes Yuma 0.54 9.77 59.13 1.29 5.36 16.67 44.44 33.33 No
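Appendix G reports the percent change in each measure from FY2013 to FY2017 together with an "Align PM and Perm" flag. The sketch below shows the standard percent-change computation and one plausible alignment rule, namely that the change in the four-quarter performance measure and the change in permanency success move in the same direction. Both the alignment rule and the input values are assumptions for illustration only; the dissertation's own definition of the alignment column governs the table above.

```python
def percent_change(old: float, new: float) -> float:
    """Percent change from FY2013 to FY2017: (new - old) / old * 100."""
    return (new - old) / old * 100.0

def aligned(pm_change: float, perm_change: float) -> str:
    """Hypothetical alignment rule: 'Yes' when the performance-management change and
    the permanency-success change share the same direction (both up or both down).
    This rule is an assumption for illustration, not the study's stated definition."""
    return "Yes" if (pm_change >= 0) == (perm_change >= 0) else "No"

# Illustrative values only (not taken from Appendix G)
pm = percent_change(0.81, 0.90)    # change in the four-quarter assessment-closure rate
perm = percent_change(0.52, 0.60)  # change in permanency success (SPM)
print(round(pm, 2), round(perm, 2), aligned(pm, perm))
```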


APPENDIX H. Selected Histograms Demonstrating Normality Approximations
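Appendix H presents histograms used to check whether variable distributions approximate normality. As a rough sketch of how such histograms and the accompanying shape statistics (skewness and kurtosis) can be generated, the example below uses simulated values; the data, bin count, and labels are assumptions rather than the study's measures.

```python
import numpy as np
from scipy.stats import skew, kurtosis
import matplotlib.pyplot as plt

# Illustrative data only: a simulated county-level measure, not the study's data.
rng = np.random.default_rng(42)
values = rng.normal(loc=0.7, scale=0.15, size=64)  # e.g., 64 county closure rates

# Shape statistics commonly inspected alongside histograms
print(f"skewness = {skew(values):.3f}, excess kurtosis = {kurtosis(values):.3f}")

# Histogram comparable in spirit to the figures referenced in Appendix H
plt.hist(values, bins=12, edgecolor="black")
plt.xlabel("Measure value")
plt.ylabel("Number of counties")
plt.title("Histogram for normality approximation check")
plt.show()
```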


APPENDIX I. Mann-Whitney U Means Comparisons for Fiscal Years 2016 and 2017

Mann-Whitney U Means Comparison for FY2016


Mann-Whitney U Means Comparison for FY2017
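Appendix I summarizes Mann-Whitney U comparisons between groups of counties for FY2016 and FY2017. A minimal sketch of how such a nonparametric comparison might be run appears below; the grouping (counties meeting versus missing the 90% assessment-closure standard) and the values are assumptions for illustration, not the study's data or its exact grouping.

```python
from scipy.stats import mannwhitneyu

# Illustrative only: permanency-success rates (SPM) for two hypothetical groups of
# counties, e.g., those that did and did not meet the 90% assessment-closure standard.
met_standard = [0.65, 0.71, 0.58, 0.80, 0.62, 0.69]
missed_standard = [0.50, 0.47, 0.55, 0.61, 0.44]

# Two-sided Mann-Whitney U test comparing the two groups' distributions
u_stat, p_value = mannwhitneyu(met_standard, missed_standard, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```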


APPENDIX J
