THE SYSTEMIC EFFECTS OF SCHOOL CHOICE INDUCED COMPETITION: DEFINING COMPETITION AND EVALUATING ITS EFFECTS ON THE OUTCOMES OF ALL STUDENTS

By

Benjamin M. Creed

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Educational Policy - Doctor of Philosophy

2016

ABSTRACT

THE SYSTEMIC EFFECTS OF SCHOOL CHOICE INDUCED COMPETITION: DEFINING COMPETITION AND EVALUATING ITS EFFECTS ON THE OUTCOMES OF ALL STUDENTS

By

Benjamin M. Creed

Using the three-paper format, this dissertation contributes to the literature evaluating school choice and school competition. This study highlights important gaps in our collective understanding of the impact of school choice policy and contributes in multiple ways to closing gaps related to the effect of school choice induced competition on student outcomes: 1) developing a consistent measure of competition grounded in theory and empirical evidence, and 2) evaluating the systemic effects of competition, the effects of school competition on all students within an educational market regardless of the school attended.

I address the first gap in three ways. First, I highlight the existing variation in competition measures and demonstrate that the presence of multiple competition measures in the extant literature is cause for concern. I do this by showing that one can infer positive, negative, or null impacts of competition on student outcomes simply by substituting the various measures of competition into the same regression model. Second, I lay out a process for selecting between measures of competition, with the goal of a continued conversation around improving our measures of competition. Third, I suggest a theoretically grounded, empirically refined measure of competition. These efforts contribute to the current school choice policy conversation by focusing attention on the definition and measurement of competition, a key avenue through which choice is expected to improve the educational system. Improving the measurement of competition should appeal to all interested in school choice, as it is essential to any evaluation.

I address the second gap in two ways. I first highlight the importance of bringing all students in an educational market, whether they attend traditional public schools or charter schools, in addition to other public schools through inter-district choice, under one framework: that of systemic effects. While evaluating the systemic effects of school choice induced competition allows us to address questions of policy relevance, such as the average impact of competition on educational outcomes for all students residing within an educational market, there are no domestic studies of the systemic effects of competition. I then produce the first such domestic study for the state of Michigan. I create a unique five-year panel dataset covering the school years 2008-09 through 2012-13, drawing on data from the Center for Educational Performance and Information and the Common Core of Data, to evaluate the systemic effects of competition on the average and the variation of student test score outcomes. The evidence suggests that there is not a single systemic effect of competition for all districts or contexts. The average impact and the impact on the variation of test scores differ across district context. Competition is associated with negative impacts in some cases; it may not be a tide that lifts all boats, nor does it lead to a narrowing of the gaps between the boats. In sum, this dissertation makes multiple contributions to the school competition literature.
I underscore the need for more careful attention to be paid to the measurement of competition. I also demonstrate the value added by evaluating the impact of competition on the entire system rather than just on its component parts. In this work I demonstrate that competition does not operate in a simple manner. There is evidence that competitive pressure has mixed impacts depending on context. This has implications both for the design of policy and for whether school choice will yield improvements for all students.

Copyright by BENJAMIN M. CREED 2016

To my partners on this journey, Mary Pat at my hip and Carly Grace slightly ahead.

ACKNOWLEDGEMENTS

It is a rare thing that you get to stand up to say, sit down to write, or casually share just how much you appreciate another person. What a joy it is to reflect on how others have helped you get where you are. Certainly humbling. Certainly encouraging. Certainly overwhelming. It is not without more than a hint of irony that the following effort for me to say thank you, to say I appreciate what others have done for me as a scholar and a person, will probably accompany a document that maybe 5 people will ever read! Nevertheless, I hope I do justice to those that have graciously done more than could be expected to help me pursue my goals. There is no denying my dissertation is stronger because of all those who have affected me. However, there is little doubt that any errors, missteps of thought, or remaining questions are due to me and me alone. With that said, to start a constant refrain, thank you.

Thank you to those who have invested themselves in my apprenticeship as a budding scholar. In a field where the lessons are built through firsthand experiences, the observation of the approaches of those more senior, and built on relationships, I have had the amazing fortune to have a series of amazing mentors. One of the great privileges of my life has been to learn from and work under my advisor and mentor, Amita Chudgar. From our first meeting at a hotel under renovations outside of Ann Arbor through the direct, meaningful, and needed critiques in the past years, I have always been excited to hear what you have to say. The way you approach your work and relationships, with compassion, care, and a consistent grounding in the why of it all, is an example I hope to follow. Thank you. Josh Cowen, who took me under his wing these past two years, thank you for your investment in my training, my professional development, and for challenging me. I appreciate the trust you showed me in our work. I learned much from you about navigating professional relationships and about the context surrounding this work. You laid out a map of southeast Michigan and provided an impromptu history lesson. This helped me stay grounded, gave me a sense of place, and has fostered a keen interest in better understanding the complexities of Detroit and Michigan. Thank you. Rebecca Jacobsen, hearing the echoes of your kind words through the voice of others, your encouragement and belief in me manifested, amongst other things, by inviting me to share my work with your students, your perspective and your feedback, I appreciate them all. Thank you. To those who believed in me when they had little reason to, I appreciate your investment. I hope I have lived up to the trust you showed in an unproven entity. Michael Sedlak, you were instrumental in bringing me to Michigan State University. The support you provided to me as I pursued my degree and as I saw opportunities to turn my brainchildren into reality cannot be overstated.
I have felt honored, fortunate, and lucky to have studied in the Educational Policy program. Thank you. Bob Floden and Jeff Wooldridge, I am grateful for your support and decision to include me in the Economics of Education fellowship. The training and experiences provided are unmatched. Thank you. Joni Burns, Jodi Potter, and Gretchen Ewart, thank you for making the program hum. To all my friends and colleagues who have contributed to my personal and intellectual growth, so many of you have provided me with constant encouragement, excitement, challenge, and depth. I thank the following, to name a few amongst the many, for their contribution to my intellectual growth: Walt Cook, Dan Fitzpatrick, Alan Hastings, Laura Holden, Justina Judy, John Lane, Alyssa Morley, Dave Reid, Andy Saultz, Guan Saw, Rachel White. Thank you.

To my family, I hope to show my thanks through my actions. To my parents, how do you say thanks to those that have sown the seeds and encouraged their growth? I have taken so much from both of you. Your commitment to education, to fostering my curiosity, to the continued encouragement and support as I wend my way through life, I hope to provide Carly with the same grounding. While I look forward and know the path ahead of me is of my (well, my family's) choosing, I know the two of you will be there. You've always generously followed my path, helping me when I needed it, encouraging me when my pace lagged, and cheering me on when I found something of joy. I see fingerprints of you both in all that I do, this included. No bigger supporters could ever be hoped for. Thank you. Jacob, my brother and my friend, I hope you know the indelible impact you have had on my life. Again, what words capture how it feels to have you as a big brother? Your intellect, your humor, and your sensitivity are examples for all. From my earliest memories to sharpening my thoughts as this dissertation took shape, I am forever grateful. Thank you. To Tim and Rosemary Caragher, in-laws is too harsh sounding of a term. You have taken me in as a son. Your support for me, for my wife, and for our family has been undeniable as we have pursued this goal. I appreciate our conversations about the PhD pursuit and your shared experiences, Tim. Rosemary, I always looked forward to our passionate conversations before others would awake about the importance and role of education; they always reminded me of the importance of my work. Thank you.

Finally, to my wife and daughter. This is the easiest and hardest of all. The easiest, Mary, because I am thankful for everything. The hardest because of the depth of support, love, kindness, belief, and encouragement provided to me. Your sacrifices, your strength, your commitment, your grace through it all humble me. I am who I have become because of you. To Carly Grace, you have already changed me. Your sweet innocence, joy, and incredibly cute smile motivate me to grow, to challenge the world to improve, and to follow your lead as long as you let me. Thank you.

TABLE OF CONTENTS

LIST OF TABLES .......... xii
LIST OF FIGURES .......... xiv

Paper 1: The three policy logics of school choice: What do we know and where are the gaps? .......... 1
Introduction .......... 1
Policy logics .......... 3
  Policy logic - Competition .......... 5
  Policy logic - Innovation .......... 5
  Policy logic - Expansion of choice .......... 8
Reviewing the literature on key components of the policy logics .......... 11
  Competition - multiple measures and mixed findings .......... 12
  Innovation - under-conceptualization and mixed evidence .......... 14
  Expansion of choice - more students have access but still uneven .......... 16
Discussion .......... 18
REFERENCES .......... 20

Paper 2: Defining and evaluating the measurement of school competition: towards a theoretically grounded, empirically refined measure of school competition .......... 27
Literature review .......... 29
  History of school choice and competition .......... 29
  Measures of competition in the literature .......... 32
    Competitive effects on test score outcomes .......... 33
    Competitive effects on non-test score studies .......... 34
  Conceptual framework for school choice induced competition .......... 40
    School choice policy context .......... 42
    Characteristics of the school choice market .......... 43
    Perceptions of TPS administrators .......... 47
    Local contextual factors .......... 48
    Fitting the pieces together .......... 49
  The school choice context in Michigan - an ideal study location .......... 53
Data and Methods .......... 55
  Data and methods - assessing the correlation between measures .......... 56
    Operationalizing measures of competition .......... 56
      Presence variables .......... 58
      Market share variables .......... 58
      Function of market share and presence variables .......... 62
      Regression variables .......... 63
    Correlational analysis of school competition measures .......... 63
    Regression analysis of school competition measures .......... 64
  Data and methods - assessing the conceptual coverage of the measures .......... 67
    Interviews .......... 67
  Analyzing interview data .......... 71
  Data and Methods - Evaluating the measures of competition using a rubric .......... 72
    Developing an evaluation rubric .......... 72
Results .......... 74
  Correlational results .......... 74
  Fixed effects results .......... 77
    Contextualizing the conceptual framework for MI through interviews .......... 79
    Systematically evaluating the measures of competition .......... 94
Discussion .......... 99
Conclusion .......... 106
APPENDIX .......... 107
REFERENCES .......... 111

Paper 3: Evaluating the systemic effects of school choice induced competition: Student outcomes in Michigan .......... 119
Introduction .......... 119
Literature review .......... 121
  Review of school choice induced competition literature .......... 121
  Conceptual framework for evaluating the systemic effects of competition .......... 126
Data & Variables .......... 136
  Data .......... 136
  Variables .......... 138
    Outcomes of interest .......... 138
    Measures of competition .......... 142
    Controls .......... 147
Methods .......... 150
Results .......... 156
Discussion and Limitations .......... 168
Conclusion .......... 172
APPENDIX .......... 173
REFERENCES .......... 180

LIST OF TABLES

Table 1. Studies of the impact of choice based competition on non-test score outcomes .......... 35
Table 2. Summary of the measures of competition in the literature reviewed in this paper .......... 39
Table 3. Definition of competition variables and data sources used to create each measure .......... 57
Table 4. Descriptives of the measures of competition in the extant literature .......... 59
Table 5. Sampling strategy with pseudonyms .......... 69
Table 6. Within category comparisons for fourth grade .......... 75
Table 7. Within category comparisons for seventh grade .......... 76
Table 8. Between category correlation on select measures for grades 4 and 7 .......... 77
Table 9. Fixed effects regression estimates of the competitive effects on standardized average MEAP scores .......... 78
Table 10. Descriptive characteristics of the districts for the superintendents interviewed .......... 79
Table 11. Evaluation rubric for measures of competition .......... 96
Table 12. Possible interpretations of results with no changes in traditional public schools .......... 135
Table 13. Description of each of the outcome measures .......... 139
Table 14. Description of each of the measures of competition used .......... 140
Table 15. Control variable definitions and data sources .......... 148
Table 16. Descriptives .......... 157
Table 17. POLS regression of competition measures on system level average outcomes .......... 158
Table 18. POLS, FE, and Random Trends for systemic effects on average MEAP math scores .......... 159
Table 19. POLS, FE, and Random Trends for systemic effects on average MEAP reading scores .......... 161
Table 20. Systemic effects of school competition on the variation of MEAP outcomes, measured by three gaps .......... 163
Table 21. Subgroup analysis of the systemic effects on average student test scores by enrollment and fund trends .......... 164
Table 22. Subgroup analysis of the systemic effects on student test score gaps by enrollment and fund trends .......... 165
Table 23. Summary results for all systemic effects on average outcomes .......... 166
Table 24. Summary results for all systemic effects on outcome gaps .......... 167
Table 25. Subgroup analysis of the systemic effects on average student test scores - District MEAP quartiles .......... 174
Table 26. Subgroup analysis of the systemic effects on average student test scores - % Black quartiles .......... 175
Table 27. Subgroup analysis of the systemic effects on average student test scores - % FRL quartiles .......... 176
Table 28. Subgroup analysis of the systemic effects on test score gaps - District MEAP quartiles .......... 177
Table 29. Subgroup analysis of the systemic effects on test score gaps - District % Black/African American quartiles .......... 178
Table 30. Subgroup analysis of the systemic effects on test score gaps - % FRL quartiles .......... 179

LIST OF FIGURES

Figure 1. Competitive effects of school choice studies are comprised of two main components: systemic effects and TPS effects studies .......... 4
Figure 2. Policy logic diagram for competition. This diagram lays out a general logic model for how school choice policy can create competition which improves the efficiency of the system and ultimately produces an improved educational system for all students. It is not meant to capture all avenues by which competition leads to improved outcomes but instead to provide a useful heuristic. School choice policies will induce schools to compete for students. This competition for students will lead to new practices and options as well as the removal of low performing schools. The responses to competition will lead to a more efficient school system and, ultimately, to improved outcomes for all students .......... 6
Figure 3. Policy logic diagram for innovation. This diagram lays out a general logic model for how school choice policies can spur schooling innovations and ultimately an improved educational system for all students. The introduction of schooling options free of some of the rules and regulations of the traditional public schools will allow schools to innovate.
These schools outside of the standard rules and regulations will experiment with new programs, teaching methodologies, and governance structures. The innovations which work will then filter through the system, increasing the diversity and effectiveness of schooling practices, and will lead to an improved educational system for all students .......... 7
Figure 4. Policy logic diagram for expansion of choice. This diagram lays out a general logic model for how the expansion of school choice to students who traditionally have not had choice leads to improving the match of services to needs and ultimately an improved educational system for all students. The introduction of a publicly funded school choice policy will provide all families access to school choice. This expansion of choice will lead to more students exiting lower quality schools, more students attending better schools, and more students attending schools perceived as better matches. This leads to a better match of services to needs and more students receiving a better education, and will lead to an improved educational system for all students .......... 8
Figure 5a. Conceptual framework for what factors influence the extent of competition felt by districts .......... 44
Figure 5b. Examples of the types of factors which contribute to each of the overarching categories in Figure 5a .......... 45
Figure 6. Possible responses on index cards .......... 80
Figure 7. Response to the index cards - Low A .......... 82
Figure 8. Response to the index cards - Middle A .......... 83
Figure 9. Response to the index cards - Low B .......... 84
Figure 10. Response to the index cards - High B .......... 85
Figure 11. Response to the index cards - Low C .......... 86
Figure 12. Response to the index cards - Middle R-C .......... 87
Figure 13. Response to the index cards - Low R-D .......... 88
Figure 14. Response to the index cards - High D .......... 89
Figure 15. Response to the index cards - Low E .......... 90
Figure 16. Response to the index cards - Middle E .......... 91
Figure 17. Competitive effects of school choice studies are comprised of two main components: systemic effects and TPS effects studies .......... 122
Figure 18. Models of the systemic effect of competition on student outcomes .......... 130
Figure 19. Systemic effect of competition on variation in student outcomes .......... 133
Figure 20.
Systemic effect of different policy options on average quality of schooling and variation .......... 134
Figure 21. Gaps between deciles for district average math and reading MEAP scores .......... 162

Paper 1: The three policy logics of school choice: What do we know and where are the gaps?

Introduction

School choice, broadly defined, has long been a part of the educational landscape in the United States. Families could, ostensibly, choose where to live, whether to attend private secular or religious schools, or to homeschool their children. These forms of de facto school choice rested on the freedom for families to choose where to live, choose a religious education, and choose what to learn. Importantly, these choices relied primarily on private resources and decisions. No public provisions were made to address whether a family could exercise its ability to choose. While these private impulses remain a part of the American educational system, the past two and a half decades have seen the introduction and widespread adoption of a variety of publicly funded school choice policies.

Public school choice policies, which use public funds to provide schooling options to families, have primarily taken the form of three different policy options: charter schools, vouchers, and inter-district open enrollment plans. Since the passage of the first voucher law (Milwaukee, Wisconsin, in 1990) and the first charter school law (Minnesota, in 1991), school choice legislation and enrollment have rapidly increased. In the 2013-14 school year, there were 43 states with charter school legislation (Ziebarth, 2016), enrolling over 5% of students (NAPCS dashboard, n.d.). Currently, thirteen states and Washington D.C. have publicly funded voucher systems covering at least a portion of students (NCSL, n.d.), and fourteen states subsidize private school tuition through tax credits or deductions (Cowen & Toma, 2015). Open enrollment, both inter- and intra-district choice, has also seen a marked increase in both the number of districts and states participating and the overall use of the policies, with all but two states having some form of open enrollment as of 2015 (Education Commission of the States, n.d.).

The three main arguments undergirding publicly funded school choice are a) school choice leads to competition, which leads to a better educational system for all students (e.g. Chubb & Moe, 1990; Friedman, 1955); b) creating choice schools outside of the traditional public system framework encourages innovations which traditional public schools (TPS) will adopt, in turn improving the educational system for all students (e.g. AFT, n.d.; Preston, Goldring, Berends, & Cannata, 2012); and c) expanding school choice to families that traditionally had no options will improve the match between student and school, which will improve the quality of education for all students (e.g. Friedman, 1955; Lauen, 2007; Smith Richards & Perez Jr., 2016). Each of these underpinning logics aspires to a similar goal of improved education for all students served by the public education system. While they are presented for the most part as distinct logical frameworks, there are points of overlap in conceptualization as well as in practice. Concurrent to states adopting school choice policies, researchers have produced a number of studies exploring various aspects of these relatively new policies.
While there are exceptions, typically studies have explored four aspects of these policies: a) how the policies operate, b) the impact of choice on those students who use choice, c) whether they result in innovative practices, and d) whether they induce competitive effects in traditional public schools. Studies pursuing these questions focus on informative aspects of school choice policy: how does the policy work, and how do parts of the system respond? However, there are no studies, to my knowledge, which evaluate whether school choice policies produce the outcome described by each of the three policy logics: an improved educational system for all students.

This paper describes each of the three policy logics mentioned above, reviews the current research on the primary factor for generating improved educational outcomes for all students in each policy logic (competition, innovation, and expansion of choice), and concludes by suggesting three fruitful areas of research: a) developing a consistent measure of competition, b) evaluating the systemic effects of school choice policies (the effects on all students within the publicly funded educational system), and c) improving the conceptualization of innovation and the factors which may promote or constrain innovation.

Policy logics

Before delving further into the literature, I introduce three terms which are central to this and the subsequent papers: a) competitive effects of school choice studies, b) systemic effects of competition studies, and c) TPS effects of competition studies (see Figure 1).

Figure 1. Competitive effects of school choice studies are comprised of two main components: systemic effects and TPS effects studies.

Competitive effects of school choice studies are any empirical work which assesses the role that competition plays in the educational system. This term is an overarching term and envelops the other two. Systemic effects of competition studies are a subset of the competitive effects literature. Systemic effects studies set the unit of analysis as all students residing within a given educational system.1 Systemic effects studies address questions related to the impact of competition on all students, regardless of school attended, whether the overall efficiency in the system changes, or whether innovation occurs among the schools in the system. TPS effects of competition studies are those studies which set the unit of analysis to the traditional public schools. TPS effects studies examine the effect of competition on TPS student outcomes, on the efficiency of TPS schools, or whether TPS innovate.

1 For the purposes of the subsequent papers, I set the educational system equal to all students residing within a given catchment area. I discuss this decision in more detail below in the methods sections of Papers 2 and 3.

In order to directly assess the impact of school choice policies on the outcomes of all students, defining the educational market, or system, is essential. The arguments undergirding each of the school choice logics do not limit the gains to those students that leave their home traditional public school (TPS). Nor do the gains accrue only to those students who remain in their TPS. Each logic argues that educational outcomes will improve on balance. As such, I argue that all students that attend schools which receive public funds fall into the educational system and ought to garner consideration when evaluating the effects of competition.
Therefore, I define the systemic effects as the effects of competition on the educational outcomes of all students within a given educational market,2 whether they attend TPS, magnet schools, alternative public schools, or charters. By defining the systemic effects this way, policy relevant questions can all be assessed: whether introducing school choice would improve overall student outcomes within the system, how the variation between subgroups and schools changes with school choice policies, and whether school choice policies lead to improvements in the overall efficiency of the system.

2 This concept can be operationalized with different boundaries and definitions. The specifics do not influence the underlying intuition: the systemic effect is the impact on all students within a given market, regardless of school attended.

Policy logic - Competition

Arguments for school choice legislation often appeal to marketization forces in order to improve the educational system. The central argument of this policy logic is that the traditional public schools' (TPS) virtual monopoly on publicly funded schooling options stymies innovation, productivity, and quality. Specifically, the school system is not incentivized to meet the needs of individual students and families, which leads to poor matching between services and needs, limits the relevancy of the education to the context, and creates inefficiencies. The introduction of school choice into the public school system will break the educational monopoly, leading to improvements in the quality of the school system (see Figure 2).

Figure 2. Policy logic diagram for competition. This diagram lays out a general logic model for how school choice policy can create competition which improves the efficiency of the system and ultimately produces an improved educational system for all students. It is not meant to capture all avenues by which competition leads to improved outcomes but instead to provide a useful heuristic. School choice policies will induce schools to compete for students. This competition for students will lead to new practices and options as well as the removal of low performing schools. The responses to competition will lead to a more efficient school system and, ultimately, to improved outcomes for all students.

These improvements will occur due to the competitive pressure to attract students. Forced to compete for students, schools will innovate in delivery methods and content, provide services better matched to the demands of families, and/or improve operations. Schools which cannot attract enough students will close, and new schools will open to fill needs or replicate successful models. Ultimately, the overall increase in efficiency, found through changes in practices and a better match of services to student needs, will lead to an increase in the quality of the schooling system (e.g. Chubb & Moe, 1990; Friedman, 1955; Hoxby, 2001).

Policy logic - Innovation

While competition induced by school choice policy may induce innovative practices (e.g. Chubb & Moe, 1990), charter school policy is also suggested as leading to innovation through reduced bureaucracy and more autonomy (e.g. Chubb & Moe, 1990; Geske, Davis, & Hingle, 1997). The rhetoric supporting this policy logic has found its way into the discourse at both the federal and state levels of policy making.
At the federal level, President Obama noted that the independence of charter schools allows them to develop practices that encourage academic excellence and set students on a path to success, describing them as laboratories of innovation. Nearly all state charter laws (more than 90%) have language related to charter schools as innovators (Wohlstetter, Smith, & Farrell, 2013), with some states requiring charter school applicants to demonstrate their innovativeness (e.g. Ausbrooks, Barrett, & Daniel, 2005).

The underlying policy logic of how school choice policy encourages innovation and ultimately improves the overall system can be seen in Figure 3. Providing for independent schools, often charter schools, will lead to new programs, practices, and new governance structures. These new schools will produce a diversity of schooling options, and a new set of best practices will proliferate through the system. This proliferation would occur either through altruistic mechanisms (e.g. schools wanting to adopt new best practices to improve the education of all students) or through competitive mechanisms (e.g. the need to adopt best practices or diversify offerings to attract students to the school or district). Whatever the mechanism of adoption, the diversity of options and new practices in the education system will lead to improved outcomes for all students.

Figure 3. Policy logic diagram for innovation. This diagram lays out a general logic model for how school choice policies can spur schooling innovations and ultimately an improved educational system for all students. The introduction of schooling options free of some of the rules and regulations of the traditional public schools will allow schools to innovate. These schools outside of the standard rules and regulations will experiment with new programs, teaching methodologies, and governance structures. The innovations which work will then filter through the system, increasing the diversity and effectiveness of schooling practices, and will lead to an improved educational system for all students.

Policy logic - Expansion of choice

The expansion of choice to all families has grown in acceptance, finding purchase in federal policies such as NCLB and Race to the Top (Berends, Cannata, & Goldring, 2011), local media outlets (e.g. Smith Richards & Perez Jr., 2016), and even petitions on Change.org, as a means to improve the educational outcomes for all students. Figure 4 demonstrates the logic behind how expansion of choice would lead to improved outcomes for all students.

Figure 4. Policy logic diagram for expansion of choice. This diagram lays out a general logic model for how the expansion of school choice to students who traditionally have not had choice leads to improving the match of services to needs and ultimately an improved educational system for all students. The introduction of a publicly funded school choice policy will provide all families access to school choice, facilitating attendance at schools matching their preferences. This expansion of choice will lead to more students exiting lower quality schools, more students attending better schools, and more students attending schools perceived as better matches. This leads to a better match of services to needs and more students receiving a better education, and will lead to an improved educational system for all students.

Absent publicly funded school choice options, school choice was limited to those families willing and able to bear the cost of selecting a residence based on the schools or sending their child to a private school.
The expansion of school choice to all families would remove this reliance on private resources. As popularly argued in the media, expanding choice to all would ensure no child would be stuck in a failing school. Families would be able to exit failing schools, attend better schools, and attend schools which better match their preferences. This would lead to an increase in the quality of schools attended and a better fit, which would improve the schooling outcomes for all students.

*****

It is important to note the interplay of the three policy logics as well. School competition is thought both to spur innovation and to provide a mechanism for innovations, wherever they arise, to proliferate throughout the system. School competition and publicly funded expansion of choice are both argued to improve the match between students and schools, which leads to efficiency gains. Innovations and the expansion of choice also go hand in hand, as school choice allows families to select which innovations best serve their needs while also allowing schools to provide a subset of practices rather than a broad spectrum of options.

The largest overlap of the three logics is that they all lead to positive systemic effects: improved educational outcomes for all students regardless of school attended. The competition policy logic does not suggest that only traditional public schools will improve. Innovations are not for the sake of innovation alone; the argument is that new best practices will emerge and be taken up by the system to better serve all students. The expansion of school choice to more students is not intended just to give choice but to provide a way out of failing schools; importantly, leaving failing schools for better options raises the quality of schooling accessed for all students. However, the empirical evidence is relatively thin: there are only two domestic studies (Hoxby, 2000; Rothstein, 2005)3 and three international studies that take into consideration the performance of all students within a given market, regardless of the type of school attended.4

3 I have chosen to only briefly touch on these two studies as the debate between Hoxby and Rothstein calls into question the methods of each.

The two domestic papers explore the effect of Tiebout choice generated competition on all public school student outcomes using an instrumental variables approach based on rivers and streams (Hoxby, 2000; Rothstein, 2005). These two papers are in conversation with each other, as Rothstein (2005) attempts to reproduce the results in Hoxby (2000). Hoxby (2000) found a positive impact of Tiebout choice on public school productivity, while Rothstein (2005) found a null effect. Given the mixed results from these studies (and subsequent responses), there is little consistency in the domestic literature.

In the Netherlands, Dijkgraaf, Gradus, and DeJong (2013) examine the systemic effects of competition, measured by the HHI, on central exam scores, graduation rates, and the percent of students graduating on time. Hsieh and Urquiola (2006) look at the systemic effect of competition in Chile, measured as the share of private school enrollment at the commune level, on the average years of school completed and math and reading test scores. West and Woessmann (2010) use the national share of private enrollment as the measure of competition, examining students' math, reading, and science scores as well as per pupil expenditures. Results from the Netherlands and Chile suggest that there is either no systemic effect or a negative systemic effect of competition.
West and Woessmann (2010), on the other hand, find a significant positive effect on quality and efficiency.

4 Studies which examine portfolio districts could be thought of as providing estimates of the systemic effects of school competition, such as the recent studies of the New Orleans Recovery District (Harris & Larsen, 2016). However, it is not clear that we should interpret studies of districts adopting portfolio models as equivalent to studies of school choice policies. Portfolio districts likely do draw on competition between schools but also introduce new governance structures, new initiatives, and other confounding changes. Therefore, research on portfolio districts can be thought of as systemic effect studies (what is the effect of a policy change on all students within a given market?) but not as studies of the systemic effects of school competition.

While these three international studies represent the only research which has explored the systemic effect of competition, their results do not speak directly to the U.S. context. Given the centrality of systemic effects in each of the three main policy logics, there is a need to produce research which empirically evaluates the systemic effects of school choice policies.

The dearth of systemic effects studies can likely be attributed to a number of factors. First, in order to examine the systemic effects of school choice policies, sufficient time must pass for the effects to emerge. Second, the implementation and operation of the school choice policies received much of the initial research attention. Questions related to, among others, the quality of choice schools, who utilizes choice options, and the effect of competition on TPS were likely more pertinent to policy design conversations. Finally, the historic lack of statewide longitudinal student level data sets has probably also contributed to the gap due to the momentum of existing lines of research.

Reviewing the literature on key components of the policy logics

I now turn to what the literature says about the three key components which, according to the above policy logics, will induce improved educational outcomes for all students: competition, innovation, and the expansion of choice. As these represent the initial vectors for each logic, I summarize the strengths and limitations of the current literature for each of the components. For competition, I briefly review the evidence on how competition impacts the educational system, highlight the wide variation in the conceptualization and measurement of competition, and argue that a consistent measure of competition is needed before further evaluating whether competition leads to improved outcomes for all students, or positive systemic effects. In the section on innovation, I survey the literature on whether the introduction of school choice policies has a) led to innovations by freeing charter schools from the standard rules and regulations and b) led to innovations through competitive pressures. Finally, I synthesize the literature on whether the expansion of school choice has led to more families of all backgrounds utilizing school choice to exit low performing schools for better schools. The following three sections lay out our current understanding of each of these three components, identify gaps in the literature, and highlight the absence of domestic systemic effects studies.
Competition - multiple measures and mixed findings

In contrast to research on the systemic effects of school choice, dozens of studies have examined how TPS have responded to the competitive pressure of school choice: the TPS effects of competition. Studies have looked at the impact of competition on student test scores in TPS (e.g. Bifulco & Ladd, 2006; Bohte, 2004; Figlio & Hart, 2014; Sass, 2006; Zimmer & Buddin, 2009; Zimmer, Gill, Booker, Lavertu, & Witte, 2012), non-test score student behavior and outcomes (e.g. Dee, 1998; Falck & Woessmann, 2013; Hoxby, 1994; Hsieh & Urquiola, 2006; Imberman, 2011; Lavy, 2010; Misra, Grimes, & Rogers, 2012; Sobel & King, 2008), and TPS district level financial responses (e.g. Arsen & Ni, 2012; Bifulco & Reback, 2014; Bradley et al., 2001; Linick, 2016; Maranto, Milliman, & Stevens, 2000; Misra et al., 2012; Ni, 2009). These studies have found positive effects of competition (e.g. Bohte, 2004; Booker, Gilpatric, Gronberg, & Jansen, 2008; Carr & Ritter, 2007; Figlio & Hart, 2014; Holmes, DeSimone, & Rupp, 2003; Hoxby, 2003; Sass, 2006; West & Woessmann, 2010), negative effects (Bifulco & Ladd, 2006; Dijkgraaf et al., 2013; Hsieh & Urquiola, 2006; Ni, 2009), or no/mixed effects (Buddin & Zimmer, 2005; Imberman, 2007, 2011; Linick, 2016; Zimmer & Buddin, 2009).

Within this mixed literature base, there are nearly as many measures of competition as there are studies. Broadly defined, the competition measures fall into three general categories based on how the authors conceptualize competition: measures of the presence of school choice options, measures of the market share of the school choice sector, and measures which are a function of both presence and market share (Linick, 2014; Ni & Arsen, 2013). Studies using the presence of choice schools have operationalized this measure in a number of ways: a) the presence of a choice policy (e.g. Lavy, 2010; Linick, 2016; Loeb, Valant, & Kasman, 2011); b) the existence of at least one choice school in a given area or within a given distance (e.g. Abernathy, 2008; Gresham, Hess, Maranto, & Milliman, 2000; Holmes et al., 2003; Jackson, 2012; Sobel & King, 2008); and c) the number of choices available (e.g. Bifulco & Ladd, 2006; Zimmer & Buddin, 2009). Similarly, studies have operationalized the market share of choice schools as the percentage of students within various geographical boundaries attending charter schools (e.g. Hoxby, 2003; Imberman, 2007, 2011; Ni, 2009; Zimmer & Buddin, 2009) or private schools (e.g. Hoxby, 1994; Hsieh & Urquiola, 2006; West & Woessmann, 2010). A handful of studies have refined market share measures by including the ability of families to utilize choice options (Maranto et al., 2000), the duration of exposure (Ni, 2009), or a quality of competition component (e.g. Cremata & Raymond, 2014).

The final set of studies of school competition conceptualized the measure as a function of both presence and market share, or the potential and realized loss of students. Within these studies, competition is defined at the school (e.g. Booker et al., 2008), district (e.g. Booker et al., 2008; Sass, 2006), or county level (e.g. Bohte, 2004). Some studies employed the Herfindahl-Hirschman Index, a measure of competition based on each school's share of total enrollment for all schools in a given geographic region (e.g. Dijkgraaf, Gradus, & DeJong, 2013; Greene & Kang, 2004; Hanushek & Rivkin, 2003), or the Gravity Access Model (Misra et al., 2012), which includes distance between schools along with enrollment share.
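To make the three categories concrete, the sketch below constructs one illustrative measure of each type from hypothetical enrollment counts. It is a minimal illustration under assumed inputs: the function names, the nearby-school count, and the example numbers are choices made for this sketch, not the operationalizations used in any of the studies cited above. The HHI follows its standard definition as the sum of squared enrollment shares.

```python
# Illustrative sketch: the three categories of competition measures found in
# the literature, computed from hypothetical enrollment counts. Variable
# names, thresholds, and example numbers are assumptions for illustration.

def presence_measure(n_choice_schools_nearby):
    """Presence-type measure: does at least one choice school operate nearby?"""
    return 1 if n_choice_schools_nearby > 0 else 0

def market_share_measure(choice_enrollment, total_enrollment):
    """Market-share-type measure: fraction of the market's students
    enrolled in choice schools."""
    return choice_enrollment / total_enrollment if total_enrollment else 0.0

def hhi(school_enrollments):
    """Herfindahl-Hirschman Index, a function of enrollment concentration:
    HHI = sum_i s_i**2, where s_i is school i's share of total enrollment.
    A value of 1.0 means a single school enrolls everyone (a monopoly);
    values near zero mean many small competitors (intense competition)."""
    total = sum(school_enrollments)
    if not total:
        return 0.0
    return sum((e / total) ** 2 for e in school_enrollments)

# Example: a market with one large TPS (900 students) and two small
# charter schools (60 and 40 students).
print(presence_measure(2))              # 1     -> choice is present
print(market_share_measure(100, 1000))  # 0.1   -> 10% charter market share
print(round(hhi([900, 60, 40]), 3))     # 0.815 -> highly concentrated
```

Note that the three categories need not agree: opening a second small charter school changes the presence count and nudges the market share while leaving the HHI nearly unchanged, which illustrates why substituting one measure for another in the same regression model can alter the estimated effect of competition.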
Evaluating the school competition policy logic for school choice relies on two key factors: a) accurately measuring the extent of competition in the corresponding system and b) assessing the outcomes for all students in a given system regardless of school attended. In other words, understanding whether school competition improves the educational system for all students hinges on having an accurate way of measuring the amount of competition facing schools. Without either of these two factors, the school competition policy logic for school choice policy cannot be fully evaluated. While not exhaustive, the above review underscores the current variation in the conceptualization and measurement of competition. Without a consistent conceptualization or measure of competition, it is unclear how to synthesize the literature. Further, studies which focus on TPS students provide insights into the responses of TPS and can help test parts of the policy logic. However, they stop short of answering the final question related to the policy logic: does competition improve the outcomes of all students?

Innovation - under-conceptualization and mixed evidence

The current literature on the relationship between school choice policy and innovation generally responds to two questions: a) do charter schools develop practices which differ from TPS practices, and b) do TPS produce innovative practices when faced with school competition? The exception to these two general questions comes from a study of New Orleans which looks at the level of innovation amongst all schools within the educational system (Arce-Trigatti, Harris, Jabbar, & Lincove, 2015). The authors find evidence of high levels of market differentiation; both public schools and charter schools fill niches in the educational market which may not have otherwise been filled. However, this study does not provide a counterfactual for what the level of differentiation was prior to the creation of the New Orleans Recovery School District, a portfolio-district type model of school governance. Without this counterfactual scenario, the level of innovation associated with the introduction of school choice policy remains unknown. As suggested above, I know of no studies linking choice induced innovation to improvements in the educational system. I focus below primarily on studies related to the first question before briefly summarizing the evidence for the second question.

Charter school legislation has garnered support as a way to allow innovative educational practices to develop (e.g. AFT, n.d.; Chubb & Moe, 1990; Preston et al., 2012). However, there are only a handful of studies which explore whether choice schools produce innovative practices. Early studies relied on limited data and found little or mixed evidence of charter school innovations. Two studies from Michigan found relatively small differences between TPS and charter schools (Arsen, Plank & Sykes, 1999; Horn & Miron, 2000), with many of the practices labeled innovative in the charter schools also being found in TPS (Horn & Miron, 2000). A study from Texas, using publicly available data from 159 Texas charter schools in 2001-02, found relatively few charter schools creating innovative practices (Ausbrooks et al., 2005). A multi-state study compared charter school practices to a set of matched TPS and found charter schools did not adopt practices more associated with flexibility, as would be predicted (Goldring & Cravens, 2008). Finally, Preston et al.
(2012) utilized the Schools and Staffing Survey to create a dataset consisting of 203 charter schools matched to 739 TPS from the district boundaries the charter schools are located in. They examine whether charter schools use innovative practices (defined as a practice existing in the charter school but not in the matched public schools) in staffing policies, academic support services, organizational structures, and governance through a comparison of reported practices. Overall, they find charter schools are not more innovative than TPS other than in teacher tenure policies. While there are particular charter schools which utilize innovative practices, i.e. no-excuses schools like KIPP, the current empirical evidence suggests charter schools are not systematically more innovative than TPS.

Studies which examine whether or not TPS innovate in response to competition are more numerous. Overall, the most systematic response of TPS to competitive pressure is to increase marketing and outreach (e.g. Gresham et al., 2000; Hess, 2002; Hess, Maranto, & Milliman, 2001; Loeb et al., 2011; Lubienski, 2005, 2007; Maranto, Hess, & Milliman, 2001), leaving services and programs typically untouched (e.g. Kim & Youngs, 2013; Zimmer & Buddin, 2009). While some TPS teachers and administrators responded by altering their practices, the majority did not. In a study of California principals, only about twenty percent of those facing competitive pressures reported changing at least one practice in response (Zimmer & Buddin, 2009). In a study of 30 schools in 7 districts, over two-thirds of teachers and administrators reported that they did not change practices when faced with competitive pressure (Kim & Youngs, 2013).

The evidence above suggests school choice policies, whether through the autonomy of charter schools or through competition, do not systematically produce innovative practices. This does not necessarily mean innovation has not occurred. Instead, this could reflect issues with the self-report nature of some data, as actors may not properly attribute why they changed practices, and the need for more nuanced definitions of innovative practices (Berends, 2015). What is evident from the literature is that there are no studies which look at the systemic effects of school choice induced innovation. The study from New Orleans (Arce-Trigatti et al., 2015) represents a first step towards this sort of study.

Expansion of choice - more students have access but still uneven

Simply looking at the large increase in the number of students who have used school choice over the past 25 years demonstrates choice has proliferated. As the expansion of choice is intended to reach families who could not
Lubienski, Gulosino, and Weitzel (2009) mapped the location decisions of charter and private schools in Detroit, New Orleans, and Washington, D.C. They found that rather than expanding options for all students, competitive incentives may also cause schools to arrange themselves in ways that avoid the areas of highest need (2009, p. 642). They note that market, policy, and contextual conditions all factor in to the establishment patterns as well. A study of New Jersey charter school locations found that charters opened near, but not in, predominately African-American neighborhoods (Gulosino & d'Entremont, 2011). Saultz, Fitzpatrick, and Jacobsen (2015) add to the evidence on where charter schools establish by examining elementary schools between 2009 and 2013 in New York City. They found that schools do not randomly open. Charter schools do not appear to open in areas of low parental satisfaction and are only slightly sensitive to poverty levels. Encouragingly, and somewhat in contrast to Lubienski et al. (2009), the opening of charter schools was positively associated with areas with low-performing schools. However, a study of the location decisions of charter schools in Chicago demonstrated that charter schools did open in higher need areas but were less likely to open in the highest need regions (LaFleur, 2016). In sum, there is growing evidence about the supply side decisions related to the establishment of charter schools which may impact the availability of choice even in contexts where publicly funded school choice is ostensibly present.

Discussion

The reviewed literature above demonstrates that central aspects of each of the three policy logics have received attention from researchers. Much has been learned about portions of each logic. However, significant gaps remain in our overall evaluation of school choice policies. As each underlying logic of school choice ends with the improvement of the educational outcomes for all students, the dearth of studies examining the systemic effects of school choice policies on student outcomes is surprising. As more data become available, and school choice policies continue to expand, the underlying goals of school choice policy deserve careful attention from researchers. Evaluating the systemic effects of school choice policies represents an important but under-conceptualized and understudied component of the school choice literature. Without system-wide evaluations, policy-relevant questions, such as whether choice improves outcomes for all students and whether gains accrue to all students within the system, remain unanswered. This applies to each of the three logics.

The second key gap emerges in the school competition literature. There is wide variation in the measurement and conceptualization of school competition. This is not a new concern for school competition research. Belfield and Levin (2001) first flagged this as an issue, with both Linick (2014) and Ni and Arsen (2013) making the point more recently. Before we can evaluate the systemic effects of school choice induced competition, we need a theoretically and empirically grounded measure of competition. Without this it will remain unclear exactly what is being measured. Similarly, the under-conceptualization of innovation in the literature (Berends, 2015) represents another area for further research. The relationship between innovation and systemic improvements is unknown. Finally, the emerging literature on the location and the quality of available options is beginning to move our understanding from the demand side to the supply side of the market.
Studying the supply side as well as the matching of supply and demand will contribute to the overall evaluation of the effect of school choice on the educational system. In summary, much is known about school choice policy, but the largest questions remain unanswered. In an area of study which has received substantial attention over the past two and a half decades, there is still much work to be done. There are contributions to be made from theoretical and conceptual efforts. A number of important empirical studies, drawing on the theoretical and conceptual work, are still ahead of researchers interested in this topic.

REFERENCES

Abernathy, S. F. (2008). School choice and the future of American democracy. University of Michigan Press.

American Federation of Teachers. (n.d.). AFT - A Union of Professionals - Charter Schools. Retrieved May 4, 2014, from https://www.aft.org/issues/schoolchoice/charters/

Arce-Trigatti, P., Harris, D. N., Jabbar, H., & Lincove, J. A. (2015). Many options in New Orleans' choice system. Education Next, 15(4).

Arsen, D., & Ni, Y. (2012). Is administration leaner in charter schools? Resource allocation in charter and traditional public schools. Education Policy Analysis Archives, 20(31).

Arsen, D., Plank, D., & Sykes, G. (1999). School choice policies in Michigan: The rules matter. ERIC. Retrieved from http://files.eric.ed.gov/fulltext/ED439492.pdf

Ausbrooks, C. Y. B., Barrett, E. J., & Daniel, T. (2005). Texas charter school legislation and the evolution of open-enrollment charter schools. Education Policy Analysis Archives, 13(21).

Belfield, C. R., & Levin, H. M. (2002). The effects of competition between schools on educational outcomes: A review for the United States. Review of Educational Research, 72(2), 279-341.

Berends, M. (2015). Sociology and school choice: What we know after two decades of charter schools. Annual Review of Sociology, 41(1), 159-180. http://doi.org/10.1146/annurev-soc-073014-112340

Berends, M., Cannata, M., & Goldring, E. (2011). School choice debates, research, and context. In M. Berends, M. Cannata, & E. Goldring (Eds.), School choice and school improvement (pp. 3-14). Cambridge, MA: Harvard Education Press.

Bifulco, R., & Ladd, H. F. (2006). The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy, 1(1), 50-90.

Bifulco, R., Ladd, H. F., & Ross, S. L. (2009). Public school choice and integration evidence from Durham, North Carolina. Social Science Research, 38(1), 71-85.

Bifulco, R., & Reback, R. (2014). Fiscal impacts of charter schools: Lessons from New York. Education Finance and Policy, 9(1), 86-107.

Bohte, J. (2004). Examining the impact of charter schools on performance in traditional public schools. Policy Studies Journal, 32(4), 501-520.

Booker, K., Gilpatric, S. M., Gronberg, T., & Jansen, D. (2008). The effect of charter schools on traditional public school students in Texas: Are children who stay behind left behind? Journal of Urban Economics, 64(1), 123-145.

Bradley, S., Johnes, G., & Millington, J. (2001). The effect of competition on the efficiency of secondary schools in England. European Journal of Operational Research, 135(3), 545-568.

Buddin, R., & Zimmer, R. (2005). Is charter school competition in California improving the performance of traditional public schools? Paper No. 146, National Center for the Study of Privatization in Education, New York.

Carr, M., & Ritter, G. (2007).
Measuring the competitive effect of charter schools on student achievement. National Center for the Study of Privatization in Education (Columbia University) Research Paper, 146.

Chubb, J. E., & Moe, T. M. (1990). Politics, markets, and America's schools. Washington, DC: Brookings Institution.

Cowen, J. M., & Toma, E. F. (2015). Emerging alternatives to neighborhood-based public schooling. In H. Ladd & M. Goertz (Eds.), Handbook of education finance and policy.

Cremata, E., & Raymond, M. E. (2014). The competitive effects of charter schools: Evidence from the District of Columbia. Paper presented at the annual conference of the Association for Education Finance and Policy, March 13-15, 2014, San Antonio, TX. Retrieved from http://www.aefpweb.org/annualconference/download-39th

Dee, T. S. (1998). Competition and the quality of public schools. Economics of Education Review, 17(4), 419-427.

Dijkgraaf, E., Gradus, R. H., & de Jong, J. M. (2013). Competition and educational quality: Evidence from the Netherlands. Empirica, 40(4), 607-634.

Education Commission of the States (ECS). State Policy Database. Retrieved March 12, 2016.

Falck, O., & Woessmann, L. (2013). School competition and students' entrepreneurial intentions: International evidence using historical Catholic roots of private schooling. Small Business Economics, 40(2), 459-478.

Figlio, D., & Hart, C. (2014). Competitive effects of means-tested school vouchers. American Economic Journal: Applied Economics, 6(1), 133-156.

Friedman, M. (1955). The role of government in education. In R. A. Solo (Ed.), Economics and the public interest. Rutgers University Press.

Geske, T. G., Davis, D. R., & Hingle, P. L. (1997). Charter schools: A viable public school choice option? Economics of Education Review, 16(1), 15-23.

Goldring, E., & Cravens, X. (2008). Teachers' academic focus on learning in charter and non-charter schools. In M. Berends, M. G. Springer, & H. J. Walberg (Eds.), Charter school outcomes (pp. 39-60). New York: Taylor and Francis.

Greene, K. V., & Kang, B.-G. (2004). The effect of public and private competition on high school outputs in New York State. Economics of Education Review, 23(5), 497-506.

Gresham, A., Hess, F., Maranto, R., & Milliman, S. (2000). Desert bloom: Arizona's free market in education. Phi Delta Kappan, 81(10), 751-757.

Gulosino, C., & d'Entremont, C. (2011). Circles of influence: An analysis of charter school location and racial patterns at varying geographic scales. Education Policy Analysis Archives, 19(8). http://doi.org/10.14507/epaa.v19n8.2011

Hanushek, E. A., & Rivkin, S. G. (2003). Does public school competition affect teacher quality? In The economics of school choice (pp. 23-48). University of Chicago Press.

Harris, D. N., & Larsen, M. (2016). The effects of the New Orleans post-Katrina school reforms on student academic outcomes. Retrieved from http://educationresearchalliancenola.org/files/publications/The-Effects-of-the-New-Orleans-Post-Katrina-School-Reforms-on-Student-Academic-Outcomes.pdf

Hess, F. M. (2002). Revolution at the margins: The impact of competition on urban school systems. Brookings Institution Press.

Hess, F. M., Maranto, R. A., & Milliman, S. (2001). Coping with competition: The impact of charter schooling on public school outreach in Arizona. Policy Studies Journal, 29(3), 388-404.

Holme, J. J., & Richards, M. P. (2009). School choice and stratification in a regional context: Examining the role of inter-district choice. Peabody Journal of Education, 84(2), 150-171.

Holmes, G. M., DeSimone, J., & Rupp, N. G. (2003). Does school choice increase school quality? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w9683

Horn, J. G., & Miron, G. (2000). An evaluation of the Michigan charter school initiative: Performance, accountability, and impact.

Hoxby, C. M. (1994). Do private schools provide competition for public schools? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w4978

Hoxby, C. M. (2000). Does competition among public schools benefit students and taxpayers?
The American Economic Review, 90(5), 1209-1238.

Hoxby, C. M. (2001). Rising tide. Education Next, 1(4).

Hoxby, C. M. (2003). School choice and school productivity. Could school choice be a tide that lifts all boats? In The economics of school choice (pp. 287-342). University of Chicago Press.

Hsieh, C.-T., & Urquiola, M. (2006). The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program. Journal of Public Economics, 90(8-9), 1477-1503.

Imberman, S. A. (2007). The effect of charter schools on non-charter students: An instrumental variables approach. University of Houston. Retrieved from http://www.ncspe.org/publications_files/OP149.pdf

Imberman, S. A. (2011). The effect of charter schools on achievement and behavior of public school students. Journal of Public Economics, 95(7-8), 850-863.

Jackson, C. K. (2012). School competition and teacher labor markets: Evidence from charter school entry in North Carolina. Journal of Public Economics, 96(5-6), 431-448.

Kim, W. J., & Youngs, P. (2013). The impact of competition associated with charter schools and interdistrict school choice policies on educators and schools. International Journal of Quantitative Research in Education, 1(3), 316-340.

LaFleur, J. C. (2016). Locating Chicago's charter schools: A socio-spatial analysis. Education Policy Analysis Archives, 24, 33.

Lauen, L. L. (2007). Contextual explanations of school choice. Sociology of Education, 80(3), 179-209.

Lavy, V. (2010). Effects of free choice among public schools. Review of Economic Studies, 77(3), 1164-1191.

Linick, M. A. (2014). Measuring competition: Inconsistent definitions, inconsistent results. Education Policy Analysis Archives, 22(16).

Linick, M. A. (2016). Examining charter school policy and public school district resource allocation in Ohio. Education Policy Analysis Archives, 24(19). http://doi.org/10.14507/epaa.24.2178

Loeb, S., Valant, J., & Kasman, M. (2011). Increasing choice in the market for schools: Recent reforms and their effects on student achievement. National Tax Journal, 64(1), 141-164.

Lubienski, C. (2005). Public schools in marketized environments: Shifting incentives and unintended consequences of competition-based educational reforms. American Journal of Education, 111(4), 464-486.

Lubienski, C. (2007). Marketing schools: Consumer goods and competitive incentives for consumer information. Education and Urban Society, 40(1), 118-141.

Lubienski, C., Gulosino, C., & Weitzel, P. (2009). School choice and competitive incentives: Mapping the distribution of educational opportunities across local education markets. American Journal of Education, 115(4), 601-647.

Maranto, R., Hess, F., & Milliman, S. (2001). Small districts in big trouble: How four Arizona school systems responded to charter competition. The Teachers College Record, 103(6), 1102-1124.

Maranto, R., Milliman, S., & Stevens, S. (2000). Does private school competition harm public schools? Revisiting Smith and Meier's The Case Against School Choice. Political Research Quarterly, 53(1), 177-192.

Misra, K., Grimes, P. W., & Rogers, K. E. (2012). Does competition improve public school efficiency? A spatial analysis. Economics of Education Review, 31(6), 1177-1190.

National Alliance for Public Charter Schools (NAPCS). (2016). Measuring up to the model: A ranking of state charter school laws. Seventh edition. Washington, D.C.: NAPCS.

National Alliance for Public Charter Schools (NAPCS). (n.d.). The public charter schools: Dashboard. Retrieved from http://dashboard.publiccharters.org/dashboard/home

National Conference of State Legislators (NCSL). (n.d.). School voucher laws: State-by-state comparison. Retrieved from http://www.ncsl.org/research/education/voucher-law-comparison.aspx

Ni, Y. (2009).
The impact of charter schools on the efficiency of traditional public schools: Evidence from Michigan. Economics of Education Review, 28(5), 571-584.

Ni, Y., & Arsen, D. (2013). The competitive effects of charter schools on public school districts. In C. A. Lubienski & P. C. Weitzel (Eds.), The charter school experiment: Expectations, evidence, and implications (pp. 93-120). Cambridge, MA: Harvard Education Press.

Presidential Proclamation -- National Charter Schools Week, 2015. (2015, May 4). Retrieved January 19, 2016, from https://www.whitehouse.gov/the-press-office/2015/05/04/presidential-proclamation-national-charter-schools-week-2015

Preston, C., Goldring, E., Berends, M., & Cannata, M. (2012). School innovation in district context: Comparing traditional public schools and charter schools. Economics of Education Review, 31(2), 318-330.

Rothstein, J. (2005). Does competition among public schools benefit students and taxpayers? A comment on Hoxby (2000) (No. w11215). National Bureau of Economic Research.

Sass, T. R. (2006). Charter schools and student achievement in Florida. Education Finance and Policy, 1(1), 91-122.

Saultz, A., Fitzpatrick, D., & Jacobsen, R. (2015). Exploring the supply side: Factors related to charter school openings in NYC. Journal of School Choice, 9(3), 446-466. http://doi.org/10.1080/15582159.2015.1028829

choice abounds. The Chicago Tribune. Retrieved from http://www.chicagotribune.com/news/ct-chicago-schools-choice-neighborhood-enrollment-met-20160108-story.html

Sobel, R. S., & King, K. A. (2008). Does school choice increase the rate of youth entrepreneurship? Economics of Education Review, 27(4), 429-438.

West, M. R., & Woessmann, L. (2010). "Every Catholic child in a Catholic school": Historical resistance to state schooling, contemporary private competition and student achievement across countries. The Economic Journal, 120(546), F229-F255.

Wohlstetter, P., Smith, J., & Farrell, C. C. (2013). Choices and challenges: Charter school performance in perspective. Cambridge, MA: Harvard Education Press.

Ziebarth, T. (2016). Measuring up to the model: A ranking of state charter school laws. Seventh edition. National Alliance for Public Charter Schools.

Zimmer, R., & Buddin, R. (2009). Is charter school competition in California improving the performance of traditional public schools? Public Administration Review, 69(5), 831-845.

Zimmer, R., Gill, B., Booker, K., Lavertu, S., & Witte, J. (2012). Examining charter student achievement effects across seven states. Economics of Education Review, 31(2), 213-224.

Paper 2: Defining and evaluating the measurement of school competition: towards a theoretically grounded, empirically refined measure of school competition

The past two and a half decades have seen many states and municipalities adopt various school choice policies as a means to improve their educational systems. During this same time, there has been a corresponding increase in the research literature on school choice. One of the most contested areas of school choice research is the competitive effects literature: studies of how the educational system responds to school choice induced competition. The introduction of publicly funded school choice policies has often been argued to improve the efficiency of traditional public schools (TPS) and improve the quality of schooling for all students through the introduction of competition. There have been a number of studies which have found a positive competitive impact of choice (e.g. Figlio & Hart, 2014; Sass, 2006), a negative association (e.g. Bifulco & Ladd, 2006; Hsieh & Urquiola, 2006; Ni, 2009), and no effects (e.g. Zimmer & Buddin, 2009).
Despite competition being the lynchpin in the competitive effects framework, there is no unified measure of competition in the literature. In fact, there are more than two dozen different measures of competition in the over 50 competitive effects studies. Blumer (1969) argues that the empirical assessment of a research instrument, in this case a measure of competition, should rely on how well the instrument captures the entirety of the concept or proposition it intends to measure. The empirical and theoretical evidence suggests that other factors beyond structural components play a role in determining the competitive pressure facing schools and districts, such as the perceptions of administrators (e.g. Abernathy, 2008; Loeb, Valant, & Kasman, 2011; Zimmer & Buddin, 2009), the policy context (Arsen, Plank, & Sykes, 1999; Hess, 2002), and local contextual factors (e.g. Maranto, Milliman, & Stevens, 2000). However, a systematic review of the competitive effects of school choice on student test scores (Linick, 2014) suggests that our measures of competition may not fully capture competition. This is not a new concern. Over a decade earlier, Belfield and Levin (2001) raised the issue of construct validity in regards to the measures of competition. Rather than measuring the extent of competition, the existing measures may capture something closer to the availability or presence of choice (Belfield & Levin, 2001; Linick, 2014; Ni & Arsen, 2013). It is surprising given the centrality of competition in the school choice literature that there are such varied, differently conceived, and potentially incomplete proxies of competition.

This paper adds to our current understanding of the competitive effects of school choice policy by producing the first systematic evaluation of the existing measures of school competition and suggesting a promising measure of competition that is theoretically grounded and empirically informed. I do this by addressing the following research questions:

1. What are the currently used measures of competition and how do they correlate when operationalized for the specific state context of Michigan? How do statistical inferences change depending on which measure is used?

2. Based on the theoretical and empirical evidence and a systematic ranking of the competition measures, which measure covers the concept of school competition best? What, if any, measures of competition show promise?

The state of Michigan represents a unique and informative setting to conduct this work as it was an early adopter of choice legislation, provides two publicly funded choice options, has a funding mechanism which ties operational money to student enrollment, and has a relatively high proportion of families which utilize choice compared to other states.5 As such, I focus on the Michigan context when operationalizing measures and performing analyses. Together, the development and application of a theoretically grounded, empirically refined measure of competition will help us better understand how competition impacts the educational system. The creation of a measure of competition, grounded in both the theoretical and empirical literature, will facilitate further refinement of how competition is measured and, thus, bring further clarity to a literature base with mixed results.

The paper proceeds as follows. I first review the relevant competitive effects literature. This is followed by the development of a conceptual framework for competition based on a directed review of theoretical and empirical literature.
I then discuss the various data sources and methodologies used to answer the research questions. Next the analytic results are presented. Finally, I discuss the results and conclude.

Literature review

History of school choice and competition

The saliency and use of school choice reforms and policies, in the form of vouchers, charter schools, and inter-district choice, has increased steadily since Milton Friedman first argued for market based reforms to improve the public school system in 1955. School choice enables parents to vote with their feet. Attaching school funding to student enrollment puts competitive pressure on the public school system to improve the educational quality for all students.

5 Michigan first enacted charter legislation in 1993. In 1994, legislation made the funding of school district current operations a state responsibility, tying operation funds (approximately $7,000 per student in 2014) directly to student enrollment. The Michigan legislature created a system of inter-district choice in 1996. In 2011, Michigan lifted the cap on the number of charters that the governing body of state public universities could issue. More than 10% of students participated in choice programs in 2013.

While de jure school choice is relatively new, de facto school choice has existed as long as public systems of education. In the U.S., de facto school choice has taken on many forms such as home schooling, private secular and religious schools, and the choice of where to live. Economists have long suggested that families decide where to live based upon the basket of public services provided and the taxes assessed, through a mechanism called Tiebout choice (Tiebout, 1956). However, not all families have the resources to take advantage of these forms of de facto school choice: not all families can afford to have one parent remain out of the labor market or have the necessary human capital to teach their child(ren); not all families can afford the tuitions charged by private schools; not all families can find or afford to move to a community with their preferred levels of educational quality. Thus, some advocates have furthered school choice legislation as a way to increase equity by allowing all families access to schooling options.

Albert Shanker, the former president of the American Federation of Teachers, is an often overlooked but quite important early voice in the push for charter school legislation. He argued for the establishment of charter schools as a form of instructional laboratory in which educators could seek to innovate and inform the public school system (AFT, n.d.). Finally, other school choice advocates argue that the monopolistic system of traditional public schools (TPS) stymies the productivity of the educational system. The introduction of choice into the public school system will increase the overall quality of the school system through the mechanism of competition. By breaking the virtual monopoly that public schools have on the provision of schooling, schools will compete with one another for students and families as well as the resources associated with enrollment. Forced to compete, all schools will need to innovate, provide new or targeted services matched to family/student needs, cut costs, and so on, which will not only improve overall efficiency but also the quality of the education provided (e.g. Friedman, 1955; Chubb & Moe, 1990).
This paper focuses primarily on this underlying logic of school choice policy: increased choice will lead to increased competitive pressure which, in turn, will lead to improved educational quality across the system. The central role of school competition in impacting student outcomes demonstrates the importance of identifying a promising measure of competition.

Since Minnesota passed the first charter school legislation in 1991 (House File 700/Senate File 467, Laws of Minnesota 1991, chapter 265, article 9, section 3), 42 states and the District of Columbia have enacted charter legislation (NAPCS, 2014). The total number of charter schools has concurrently risen as well, from the first charter school in 1992 to 1,542 schools nationally (340,000 students) in 1999-00 to over 6,000 charter schools (1.787 million students in 2010) in 2013-14 (NAPCS Dashboard, n.d.; Snyder & Dillow, 2013). Within this rapid growth, inter and intra state variation exists (Jochim & DeArmond, 2014; Snyder & Dillow, 2013). Voucher systems and inter-district choice represent two other mechanisms that have been introduced to increase school choice in states, districts, and cities across the country. In 1989, Wisconsin passed legislation creating a publicly funded voucher system in Milwaukee targeted to low income families. In the subsequent years, Arizona, Indiana, Louisiana, North Carolina, Ohio, Wisconsin, and the District of Columbia have all enacted some version of a statewide, typically income based, targeted voucher system. Florida, Georgia, Mississippi, Oklahoma, and Utah all have voucher systems for students with certain disabilities or IEPs while Maine and Vermont have voucher systems targeted at rural students (NCSL, n.d.). Inter-district school choice has been adopted by many states, metropolitan areas, and districts. The design of inter-district choice policy varies by state. However, many inter-district choice policies allow each district to decide whether to receive students from other districts (e.g. California, Michigan, New Jersey).

As can be seen, school choice has become a fixture of the current educational landscape. Similarly, studies of school choice have become a fixture of educational policy research. Within the field of school choice, several subfields have received considerable attention: studies examining the passage and implementation of school choice policies, impact studies examining the effect of using choice on the choosers, comparative studies which focus on the different levels of productivity or efficiency in different sectors, studies of whether innovative practices occur when choice is present, and those that explore the competitive effects of school choice. This research focuses on the last subset, the competitive effects studies which look at how the educational system responds to school choice induced competition, in order to identify measures of competition for further review.

Measures of competition in the literature

The introduction of a publicly funded school choice policy has been argued to improve the efficiency of traditional public schools (TPS) and improve the quality of schooling for all students. Despite competition serving as the lynchpin in this framework, significant variations in the definition of competition exist in the literature.
Measures of school competition generally fall within one of three categories: measures of a) the presence of choice schools or policies, b) the market share of choice schools, or c) some function of presence and market share of choice schools (Linick, 2014; Ni & Arsen, 2013). These measures are structural in nature, capturing readily observable aspects of schools and districts. Within each of these broad categories there is significant variation in how the measures are operationalized, how markets are defined, and in what factors are considered. There is no standard method by which to judge which measure best approximates the extent of competition present in the schooling context. The measurement of school competition and the competitive effects of school choice policies on student test score outcomes was first systematically explored by Belfield and Levin (2001) and has been updated by Ni and Arsen (2013) and Linick (2014). I draw on Ni and Arsen's (2013) review of the competition measures in the test-score competitive effects papers, adding a few recent papers to demonstrate that the variation in measures has yet to be addressed. I then extend the conversation to include non-test score competitive effects. The non-test score papers were not covered by previous efforts; therefore, I devote more space to discussing those studies.

Competitive effects on test score outcomes. An 11-study analysis conducted by Ni and Arsen (2013) of the competitive effects of charter schools on student test scores from a variety of states and assessments demonstrates the variety of competition measures the studies employed. Three of the 11 studies reviewed used a competition measure based primarily upon the presence of school choice (Bettinger, 2005; Bifulco & Ladd, 2006; Holmes, DeSimone, & Rupp, 2003), 3 studies used a measure that looked at the market share of charter schools (Hoxby, 2003; Imberman, 2007; Ni, 2009), 2 studies looked at charter presence and market share separately (Buddin & Zimmer, 2005; Carr & Ritter, 2007), and 3 studies employed a combined measure of both presence and market share (Bohte, 2004; Booker, Gilpatric, Gronberg, & Jansen, 2008;6 Sass, 2006). Variation existed within each of these general definitions of competition. A recent paper by Figlio and Hart (2014) explored the competitive effects of a means-tested voucher program in Florida. They utilized measures of competition based on the presence of private school options, such as miles to the nearest private competitor, number of local private schools within 5 miles, and number of private school seats available. Harrison and Rouse (2014) used a Herfindahl-Hirschman Index (HHI) to evaluate the competitive effect of the abolishment of school zoning in New Zealand. The HHI provides a measure of the concentration of schooling providers within a given geographic region.7 Finally, Linick (2014) finds that other competitive effects studies looking at student test score outcomes fall into the three categories laid out before: presence, market share, or a function of both. Egalite (2016) recently released a research paper exploring the competitive effects of the Louisiana Scholarship Program, a targeted voucher program, on TPS student test scores. She used three presence measures of competition (the distance to the nearest private school, the number of private schools within a given radius, and the number of different types of private schools within a given radius) and an HHI measure.
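To make these three categories concrete, the following sketch shows how a presence measure and a market share measure might be operationalized in Python. All names and values here are hypothetical illustrations, not data or code from the studies reviewed above; distances are simple straight-line distances between a district centroid and charter school locations.

from dataclasses import dataclass
from math import hypot

@dataclass
class School:
    x: float          # location coordinates, e.g. in projected miles
    y: float
    enrollment: int   # students drawn from the district's residents

# Hypothetical district: centroid at the origin, 5,000 resident students.
district_xy = (0.0, 0.0)
resident_students = 5000
charters = [School(1.2, 0.8, 150), School(3.5, -2.0, 300), School(6.0, 4.0, 90)]

def distance(a, b):
    return hypot(a[0] - b[0], a[1] - b[1])

# Presence measures: distance to the nearest charter and count within 5 miles.
distances = [distance(district_xy, (s.x, s.y)) for s in charters]
nearest_charter = min(distances)
charters_within_5_miles = sum(d <= 5.0 for d in distances)

# Market share measure: share of resident students enrolled in charters.
charter_market_share = sum(s.enrollment for s in charters) / resident_students

print(nearest_charter, charters_within_5_miles, charter_market_share)

A combined measure, such as the HHI discussed below, folds both kinds of information into a single index.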
The non-test score studies are presented in Table 1. The presence of choice has been operationalized as a measure of competition in a number of ways: the presence of a charter, voucher, or other choice policy (e.g. Lavy, 2010; Loeb et al., 2011); the existence of at least one charter school in the district (e.g. Abernathy, 2008; Gresham, Hess, Maranto, & Milliman, 2000), the county (Sobel & King, 2008), or a set distance from the TPS (e.g. Jackson, 2012); and the number of choices nearby to the TPS (e.g. Zimmer & Buddin, 2009).

6 Ni & Arsen (2013) use a working paper from 2005. I substitute the 2008 version published in the Journal of Urban Economics.

7 HHI is defined as the sum of the squares of the market shares (s) of each school provider i that serves students residing in district k at time t: HHI_kt = sum_i (s_ikt)^2. Market share (s_ikt) is defined as the enrollment in provider i (which can be the home district, a neighboring district via IDC, or a charter school) divided by the total number of students that reside in district k at time t. The HHI ranges from near 0, many small schools in competition, to 1, a single monopolistic provider. As such, a number closer to 0 is typically associated with more competition and a number closer to 1 with less competition.

Table 1. Studies of the impact of choice based competition on non-test score outcomes

Several non-test score studies have operationalized the market share of choice schools as the percentage of students within various geographical boundaries (i.e. distance, district, county, country) attending charter schools (e.g. Imberman, 2011; Zimmer & Buddin, 2009) or private schools (e.g. Hoxby, 1994; Hsieh & Urquiola, 2006; West & Woessmann, 2010). A handful of studies have made substantive refinements to market share measures by including the ability of families to afford to utilize choice options (Maranto et al., 2000), the duration of exposure to competition (Ni, 2009), and a measure of the quality of the competition, based on test scores (Cremata & Raymond, 2014). The final set of studies of the impact of school competition on non-test score outcomes conceptualized the measure as a function of both presence and market share, or potential and realized loss of students. The studies used either the Herfindahl-Hirschman Index, which provides a measure of the concentration of schools in a given geographic region (Dijkgraaf, Gradus, & de Jong, 2013; Greene & Kang, 2004), or the Gravity Access Model (Misra, Grimes, & Rogers, 2012), which includes distance between schools along with enrollment share.

Some studies account for the lack of a consistent measure by applying multiple measures of competition. For example, Zimmer and Buddin (2009) include a host of competition measures from the presence and market share categories in their study of California schools. The results are decidedly mixed which, they argue, indicates no systematic competitive effect. This study implicitly shows the importance of which measure of competition is used: the competitive effects associated with the various measures were positive, negative, or null. An alternative interpretation is that the various measures do not capture the same information and, thus, the regression inferences are sensitive to the measure of competition selected. Rather than evidence of no systematic competitive effects, this study provides evidence of the need for a way to evaluate multiple measures of competition against the underlying construct of school competition.
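Footnote 7's definition lends itself to direct computation. The sketch below, again in Python with hypothetical enrollment counts, computes HHI_kt = sum_i (s_ikt)^2 for a single district-year; values near 0 indicate many small competing providers and values near 1 a single monopolistic provider.

def herfindahl(enrollments):
    # HHI_kt = sum_i (s_ikt)^2, where s_ikt is provider i's share of the
    # students residing in district k at time t (footnote 7's definition).
    total = sum(enrollments)
    return sum((e / total) ** 2 for e in enrollments)

# Hypothetical district k: the home district retains 4,000 resident students,
# two neighboring districts enroll 500 each via IDC, and a charter enrolls 1,000.
providers = [4000, 500, 500, 1000]
print(round(herfindahl(providers), 3))  # 0.486: one dominant provider

# If all residents stayed in the home district, the HHI would reach its
# maximum of 1.0, the monopoly case.
print(herfindahl([6000]))  # 1.0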
Figlio and Hart (2014) utilized 5 measures of competition to explore the effects of a voucher program in Florida. They leveraged the passage and lagged implementation of voucher legislation as a compelling identification strategy for the effect of competition: there was a year lag between when the law passed and when students could use a voucher to attend private schools. This allowed Figlio and Hart to examine the competitive effect of vouchers, i.e. the pressure schools felt to improve to retain students, net of financial or compositional effects. They found that all measures were associated with statistically significant competitive effects. Each had a different magnitude but the effect was in the same direction. The measures were presence type measures interacted with whether or not the voucher policy was in effect. They use the consistency of estimates across measures to argue that their findings are on solid ground. However, all measures were variations on a theme: competitive effects will emerge with the threat of potential loss. Further, the identification strategy necessarily limits the generalizability of the results as the inferences were for schools facing a new source of competitive pressure, in communities with established private schools, and without the associated loss of funding and compositional effects. In other words, the findings of Figlio and Hart (2014) may apply to the onset of new competition but not to the effects of school competition once school choice policies have matured.

The preceding review of the competitive effects literature is not meant to be exhaustive. Instead it highlights the overarching patterns of how competition has been measured in the 31 papers reviewed above as well as highlighting the wide variety of measures used. Table 2 displays the number of studies using each of the three categories of measures. Some papers included more than one measure of competition (e.g. Figlio & Hart, 2014 included 5 measures categorized under the presence measure; Zimmer & Buddin, 2009 used 20 presence based measures of competition and 5 market share measures). I only count the number of different categories of variables used in the paper; therefore, Figlio and Hart (2014) is only represented once while Zimmer and Buddin (2009) contributes a tally to both the presence and the market share rows in Table 2.

Table 2. Summary of the measures of competition in the literature reviewed in this paper.

The sheer variation in the measures of competition employed is readily apparent. The 31 papers reviewed utilize one or more presence based measures 14 times, at least one market share variable 14 times, and a function of market share and presence 8 times. Four of the papers use measures from more than one of the three categories. This understates the number of different individual measures used as some papers used multiple measures within a given category or operationalized the measure type differently (i.e. different boundary definitions). In each case, the authors have provided compelling arguments for why the particular measure of competition should be used. The studies which used various iterations of the presence of choice, measured by the density and proximity of choice schools, capture an important aspect of competition (availability or potential loss) but do not account for the actual loss of students to choice.
The studies which use measures of market share argue that the competitive pressure exerted on TPS may not depend on the threat of loss but on the realized loss of students to choice schools. However, these measures do not typically account for the number of competitors or the different types of schooling provided. Studies using a combination of presence and market share draw on both of these insights, partially addressing the concerns of using presence or market share alone. The following section reviews theoretical and empirical literature to develop a conceptual framework grounded in theory and updated by empirical findings.

Conceptual framework for school choice induced competition

Assessing an empirical research measure, in this case a measure of competition, relies on how well the measure captures the entirety of the concept or proposition it intends to measure (Blumer, 1969). In order to evaluate the extent to which various measures of school competition cover the concept, a systematically developed conceptual framework is needed.8 This is not a new concern; Woods (2000) and Belfield and Levin (2001), amongst others, have discussed the need for a carefully constructed measure of competition. Yet the concern has not been sufficiently addressed (e.g. Linick, 2014). To develop the conceptual framework I conducted a systematic review of the theoretical and empirical evidence across multiple disciplines, including education, economics (particularly Industrial Organization), and sociology (specifically Economic Sociology).

8 In the psychometric literature, an analogous concern is called construct validity.

Within the education discipline I reviewed the competitive effects of school choice literature. From Industrial Organization (IO), I included general treatments of the field as well as work focused on education and healthcare. Two key insights which emerged from the IO literature were that the enrollment trends of the district could play a mediating role in the amount of competition a particular market faced and that competition may not be linear in its effects. Similarly, I reviewed recent work from Economic Sociology (ES), paying specific attention to applications to education. The main insights from ES were the importance of the perceptions of the key actors and the degree of interconnectedness in the market (in my interviews this emerged as explaining part of the difference in competitive pressure exerted by loss to charter schools and loss to TPS). Across these literature bases, I included pieces which addressed empirical or conceptual questions related to defining or measuring competition, with a focus on studies which dealt with public sectors. I coded the articles for themes: disciplinary lens, sector of competition, theory, conceptual framework, measurement of competition, and factors related to competitive pressure. From this, I developed a general conceptual framework for the factors related to the extent of competition felt by districts.

I begin with the evidence of which factors are associated with the extent of competition felt by school districts from the educational literature base. I use this as the entry point for two main reasons. The first is their direct application to the educational context. The second is that understanding the specific industry or system where competition is being studied has roots within both the IO and ES literatures.
The New Empirical Industrial Organization (NEIO) suggests that industries are sufficiently unique that cross-industry comparisons may be flawed (e.g. Einav & Levin, 2010). Similarly for ES, Bagley (2006) argues that the wider structures and contexts need to be accounted for when trying to understand the complex process of determining competition and the competitive effects. For each category of factors listed below I begin with the evidence from the educational literature before turning to IO and ES for additional insights. The IO papers come primarily from the Structure-Conduct-Performance (SCP) and New Empirical Industrial Organization (NEIO) perspectives.9

9 The Structure-Conduct-Performance literature relies on a conceptual model which posits that the market structure is causally linked to the conduct taken by firms which in turn is causally linked to the performance of the industry (e.g. Gaynor, 2006). Researchers have applied this framework to a number of sectors, often through the use of market structure variables like the HHI. In other words, knowing the structure of the market allows researchers to make inferences about the overall performance of the sector. New Empirical Industrial Organization recognizes that industries differ sufficiently from each other to warrant industry specific models and theories of competition; this perspective ushered in industry specific studies which used economic modeling based on the specifics of an industry. For the educational sector, using measures or models developed for other industries may not fully capture the uniqueness of the education system, leading to potentially flawed inferences.

Overall, there are four categories of factors identified in the literature as theoretically or empirically impacting the extent of competition felt by a district. These categories of factors are the school choice policy context, the characteristics of the school choice market, the local contextual factors, and the perceptions of the traditional public school administrators. See Figures 1a and 1b for a diagram of the conceptual framework. I discuss each of the categories of factors in detail below, followed by an explanation of how they fit together.

School choice policy context. The specific rules and regulations contained within a school choice policy shape the educational market in a given context. The educational markets are influenced by the various rules and regulations related to the funding of choice schools, admissions policies, who can issue a charter, who can teach in a charter school, who provides transportation, and so on (e.g. Arsen et al., 1999; Carnoy, Mishel, Jacobsen, & Rothstein, 2005; Epple, Figlio, & Romano, 2003; Hastings, Kane, & Staiger, 2005; Hess, 2002), which likely influence the extent of competition felt by the district. A school choice policy context that provides different levels of funding based on the characteristics of the students enrolled, i.e. one that differentiates between elementary and high school students or between students with different needs, differs from one that sets the level of per-pupil funding based only on enrollment. In the former, competition might be expected to occur for students across the grade and needs spectrum. In the latter, schools have an incentive to compete for less expensive to educate students. Theoretical and empirical work have shown the potential implications of various choice policy designs on the average outcomes, on the variation of outcomes, and on how the school system responds.
Theoretical work examining the general equilibrium effects of voucher policy design has shown that the amount of the voucher, if and how vouchers are targeted, which private schools are included in the voucher system, and what the local school policies are all likely influence the response of the educational system (e.g. Epple & Romano, 1998, 2003; Epple, Newlon, & Romano, 2002; Ferreyra, 2007; Nechyba, 2000, 2003). Work in Michigan also shows the importance of understanding the school finance system and how it may impact other policies, including choice-based reforms (Epple & Ferreyra, 2008). As Arsen et al. (1999) succinctly put it, the rules matter. Thus, understanding the rules and incentives of a given school choice policy and the surrounding policy context represents the first key determinative factor in how school choice manifests itself in competitive pressure. This suggests the need to account for variations in policies and incentive structures across state or policy contexts.

Characteristics of the school choice market. The characteristics of the school choice market represent the quantifiable, observable measures which provide basic information about the school choice market. The competition that school choice policy is intended to create functions primarily through the loss, or threat of loss, of students from TPS to choice schools, and vice versa.

Figure 1a. Conceptual framework for what factors influence the extent of competition felt by districts. Note: Each blue box represents a category of factors found in the theoretical and empirical literature to potentially influence the extent of competition. The arrows represent the direction of influence. For example, the school choice policy context directly influences the characteristics of the school choice market as well as the perceptions of TPS administrators. While there is no direct impact on the local contextual factors from the school choice policy context, the design of the school choice policy can influence which local contextual factors impact TPS perceptions and the extent of competition felt. For instance, the provision (or lack) of transportation would dampen (or heighten) the relevancy of the SES of families in the district. The local contextual factors directly influence the perceptions of TPS administrators and the extent of competition. The characteristics of the school choice market are influenced by the policy context but also influence the perceptions of TPS and the extent of competition. The red box represents the extent of competition felt by a district. The orange box at the bottom suggests that the extent of competition is related to the competitive effects. However, for purposes of this conceptual diagram the responses and processes by which the extent of competition begets a competitive effect remain a black box.

In the educational literature, there is near universal agreement that these factors are important, as evidenced by the fact that every study reviewed in Table 1, by Linick (2014), and by Ni and Arsen (2013) discussed above uses some measure of the characteristics of the school choice market. These measures include, but are not limited to, the number and proximity of choice schools (e.g. Zimmer & Buddin, 2009), the market share of the choice sector (e.g. Hoxby, 2003; Ni, 2009), and the use of the HHI (see Belfield & Levin, 2001 and Linick, 2014).

Figure 1b. Examples of the types of factors which contribute to each of the overarching categories in Figure 1a.

Other
characteristics of the school choice market influence the level of competition felt by a district. For instance, fees or tuition associated with choice schools (e.g. Arsen et al., 1999; Maranto et al., 2000), how long the choice sector has existed (e.g. Ni, 2009), the average test scores (Cremata & Raymond, 2014), and other educational outcomes of choice schools all play some role in determining the extent of competition felt by a district.

The IO literature, specifically the Structure-Conduct-Performance literature, suggests that knowing the structure, or concentration, of a market is a sufficient proxy for the extent of competition in a given market, which in turn determines the conduct and performance of a given firm (Davis, 2011). Given the number of competitors, distances between them, and market share of each, the extent of competition can be calculated, often as an HHI. However, the relationship between the number of competitors and the extent of competition may not be linear. Bresnahan and Reiss (1991) find that the marginal competitive effect of an additional entrant decreases with the number of competitors already in the market. Further, the mere presence of options may not be enough; families must be taking advantage of the options by enrolling their children in choice schools. In a study of driving schools in Sweden, Asplund and Sandin (1999) suggest the importance not only of the distances and concentration within a market but also the distances to the nearest markets. For the educational setting, the importance of this insight likely depends on the school choice policies and markets. An IO study of civil aviation finds that the quality of close substitutes factors into the extent of competition (Lijesen, 2004). Einav and Levin (2010) suggest the need to develop industry specific, in this case educational sector specific, models and theories for how the extent of competition is determined. This section is a step towards this.

The ES literature suggests that the interconnectedness of the market plays a role in determining the extent of competition (Fligstein & Dauter, 2007). If schools are highly connected, having strong relationships with other nearby schools, the extent of competition may be lower than if the interconnectedness were weaker. Using network analysis on over 600 industries, Braha, Stacey, and Bar-Yam (2011) suggest that the size and distance of individual firms is associated with the extent of competition felt.

In sum, the theoretical and empirical evidence suggests several key factors within this category influence the extent of competition. These factors include those currently used in the educational literature: the number of competitors, the distance between the competitors, the market share, and the concentration of a market. However, other characteristics of the school choice market appear to help determine the extent of competition: the enrollment size of a school, the quality of substitutes (nearby choice schools), the potentially non-linear relationship between the number of competitors in the market and competition, and the existing interconnectedness of the market.

Perceptions of TPS administrators. Education researchers are paying increasing attention to the role the perceptions of district and school administrators play in determining the extent of competition felt (e.g. Abernathy, 2008; Jabbar, 2015; Joshi, 2014; Kim & Youngs, 2013; Loeb et al., 2011; Zimmer & Buddin, 2009). Most studies focus on understanding whether TPS principals perceive competition (e.g.
Abernathy, 2008; Jabbar, 2015; Kim & Youngs, 2013; Zimmer & Buddin, 2009), what structural aspects of the choice sector are related to principal perceptions of competition (e.g. Jabbar, 2015; Loeb et al., 2011), and what changes at the school principals associate with competition (Jabbar, 2015; Kim & Youngs, 2013; Zimmer & Buddin, 2009).

Insights from Economic Sociology come from seeing economic life as embedded in social life (Granovetter, 1985). Perceptions of competition are important to determining the extent of competition and the responses to the pressures (Braha et al., 2011), i.e. whom a school views as its competitors. As mentioned above, the perception of being interconnected may limit the impact of competition (Fligstein & Dauter, 2007). The perceptions of TPS administrators in England about competition were found to correlate with student outcomes but not necessarily with the characteristics of the choice sector (Levacic, 2004), suggesting that the perceptions of TPS administrators are influenced by factors other than the characteristics of the school choice market described above. Drawing on the theoretical framework of economic sociology, Jabbar (2015) suggests that the extent of competition felt by TPS has a component which is based on the perceptions and experiences of TPS administrators. Joshi (2014) argues that the perceptions of TPS principals are important to understanding the extent of competition facing schools in Nepal.

The theoretical framework of Economic Sociology and the empirical findings suggest that the perceptions of TPS administrators, at the district and school level, likely play an important mediating role in how school choice puts competitive pressure on districts and schools, as is reflected in their placement in Figure 1. For example, if administrators in District A perceive that the nearby charter school gains enrollment from District B, they are unlikely to feel pressure, regardless of the veracity of that perception compared to the reality of the situation. The perceptions of administrators and principals about the relative strengths and/or weaknesses of other districts, private schools, or charter schools all likely mediate and influence the competitive pressure felt. For example, if TPS administrators believe that charter schools are attracting students away through targeted marketing efforts, the competitive pressure felt by the district will differ from the pressure felt if the TPS administrators perceived that the charter schools provide better academic services. Taken together, the theoretical and empirical evidence suggest that perceptions play a direct role in determining the levels of competition felt by a TPS, as well as a mediating role in how the charter sector characteristics influence competition.

Local contextual factors. The local contextual factors box in Figure 1 refers to the demographic characteristics (i.e. wealth, socio-economic status, diversity, population trends) of the district. To note, the local contextual factors in Figure 1 are not directly altered by the school choice policy context, but the relative importance of various local contextual factors is potentially influenced. This is indicated in the dashed arrow running from the School Choice Policy Context box to the Local Contextual Factors box in Figure 1. For example, if transportation is provided in the school choice policy, the relationship between the ability of families to utilize choice and SES will potentially diminish, all else equal.
Conversely, if transportation is not provided for in the school choice policy design, the ability of families to provide transportation to alternative schools will increase in saliency. However, the local contextual factors have the potential to directly influence both the perceptions of TPS administrators and the overall extent of competitive pressure felt. Accounting for the enrollment trends in the district provides potentially determinative information regarding the competitive pressure felt by TPS, i.e. a district gaining students may face different levels of competitive pressure than a district that has flat or declining levels of enrollment. Similarly, the ability of families to provide the resources necessary to utilize choice options, such as providing transportation to and from a charter school or paying for tuition/fees in private schools, plays into the amount of competitive pressure felt (e.g. Maranto et al., 2000). Drawing on the theory of New Empirical Industrial Organization, Bresnahan and Reiss (1991) argue that population trends matter in determining the extent of competition in a given market. Firms rely on having a large enough consumer base to remain profitable. Therefore, the competitive pressure can increase or decrease based on population trends even with no new entries or exits of firms. These are a few examples of how accounting for contextual factors leads to a clearer measure of the levels of competitive pressure.

Fitting the pieces together. Figure 1a represents a conceptual framework for how the factors identified above influence the extent of competition facing school districts. In Figure 1a, the four boxes in blue are the overarching categories discussed above. Each blue category is comprised of the different factors contained within (see Figure 1b). The primary impact of the school choice policy context runs through its influence on the characteristics of the school choice market, the perceptions of TPS administrators, and to some degree which local contextual factors are relevant. The characteristics of the school choice market (i.e. the structure of the market, the quality of the schools within the market, the number of students leaving the district) are influenced directly by the school choice policy design and context. For example, which schools are included in the competitive market depends on the policy(ies) chosen, i.e. open enrollment, means-tested vouchers, charters. The selection and design of a given choice policy can influence whether the majority of student movement is at the elementary or at the high school level, the number of new schools entering the market (via charter school establishment or vouchers for privates), and the relative role that distance plays (based on whether or not transportation is included). The perceptions of TPS administrators are also influenced by the policy context in which they operate. Both rhetorically and through policy pressures, the perceptions of TPS administrators can be influenced by the structure of the choice policy. Finally, which of the local contextual factors influence the perceptions of TPS administrators and the overall extent of competition is partially determined by the school choice policy context. In a system like Michigan, where operational funding is tied to student enrollment, the overall district enrollment patterns may influence the extent of competition.
Conversely, a school choice policy operating in a system where operational funding is tied to property taxes may be less influenced by enrollment patterns and more influenced by overall population trends. The arrows running out from the characteristics of the school choice market (Figure 1) represent the theoretical and empirical evidence suggesting the direct influence of these characteristics on the perceptions of TPS administrators (e.g. Braha et al., 2011; Jabbar, 2015; Kim & Youngs, 2013; Levacic, 2004; Loeb et al., 2011; Woods, 2000) and on the extent of competition (e.g. Abraham, Gaynor, & Vogt, 2007; Andritsos & Tang, 2014; Bresnahan & Reiss, 1991; Katz, 2013; Lijesen, 2004). Overall, the characteristics of the school choice market are theoretically and empirically associated directly with the perceptions of TPS administrators and the extent of competition. The theoretical and empirical evidence also suggest local contextual factors impact both the perceptions of TPS administrators and the overall extent of competition. As mentioned above, the IO literature suggests that population and enrollment trends (e.g. Bresnahan & Reiss, 1991) can influence the extent of competition directly, and the ES literature suggests a role for the local context in the development of perceptions (e.g. Bagley, 2006; Woods, 2000). The two arrows running out from the local contextual factors box in Figure 1 represent these insights. The final category of factors to directly influence the extent of competition is the perceptions of TPS administrators. Figure 1 indicates that perceptions are influenced by the policy context, the characteristics of the school choice market, and the local contextual factors. As discussed above, this is supported by both theoretical and empirical evidence. However, the only arrow outward from the perceptions of TPS administrators box is to the extent of competition. It is unclear if and how the characteristics of the school choice market or the local contextual factors could be significantly impacted by the perceptions of TPS administrators; further, there is no empirical or theoretical evidence for this influence. As discussed above, there is empirical and theoretical evidence from the education literature (e.g. Jabbar, 2015; Levacic, 2004; Loeb et al., 2011) and the ES literature (e.g. Braha et al., 2011; Fligstein & Dauter, 2007; Podolny, 1993) arguing for a direct relationship between perceptions and the extent of competition facing a district. Put succinctly, Figure 1 represents the theoretical and empirical evidence and shows that the design of a school choice policy directly influences two main categories of factors (the characteristics of the school choice market and the perceptions of TPS administrators) and partially influences the third (the local contextual factors). The perceptions of TPS administrators are influenced by the market characteristics and the local context. Finally, each of these factors is interrelated and plays a potentially important role in determining the extent of competition. The extent of competition is then directly related to the competitive effects. This conceptual framework provides a coherent, nuanced understanding of the various factors which influence the extent of competition. The existing measures of competition primarily capture an important subsection of the characteristics of the school choice market, but omit other factors highlighted by theoretical and empirical work.
Rules regarding the funding of choice schools, admissions policies, who can issue a charter, who can teach in a charter school, and so on all shape the educational market (e.g. Arsen et al., 1999; Carnoy et al., 2005; Hess, 2002) and, thus, factor into the extent of competition felt. Theoretical and empirical evidence from other sectors suggest other characteristics of the school choice market that need to be accounted for: potential non-linearities in the impact of competition (Bresnahan & Reiss, 1991); how interconnected the various schools are in the educational market (Fligstein & Dauter, 2007); and the relative size of providers (Braha et al., 2011). Researchers have suggested that the perceptions of district and school administrators play a role in determining the extent of competition felt and, thus, the responses taken (e.g. Braha et al., 2011; Granovetter, 1985; Fligstein & Dauter, 2007; Jabbar, 2015; Levacic, 2004; Kim & Youngs, 2013; Loeb et al., 2011). Local contextual factors, such as changes in the potential enrollment pool or whether families can afford to take advantage of choice options, have also been flagged as influencing the extent of competition (e.g. Bresnahan & Reiss, 1991; Maranto et al., 2000). These all represent key extensions to how competition has been conceptualized and, thus, measured in the literature. If the existing measures of competition provide reasonably good coverage of the underlying construct of competition, there is less cause for concern, i.e. the measures of competition currently used correlate highly with each other and with the construct of the extent of competition. On the other hand, if the various measures of competition each capture only a single contributing factor (of varying weight), studies with differing measures may not add to the same conversation. Our ability to draw synthetic inferences from the current body of competitive effects literature depends on how well the current measures of competition correlate with one another.

The school choice context in Michigan: an ideal study location

The state of Michigan represents a unique and informative case study as it was an early adopter of choice legislation, provides two publicly funded choice options (charter and inter-district choice), has a funding mechanism which ties operational money to student enrollment, and has a relatively high proportion of families who opt out of their TPS. Michigan first enacted charter legislation in 1993, which enabled public universities, community colleges, K-12 local education authorities, and intermediate school districts to grant a charter. Under the 1993 charter legislation, students are allowed to attend any charter school, regardless of geographic location, given the availability of seats. Further, the legislation restricted charter schools from using race or other characteristics to screen students, prohibited charging tuition, and set up a lottery system for schools that were oversubscribed (Michigan Department of Education, 2012). Two subsequent pieces of legislation, passed in 1994 and 1996, added nuance to the Michigan education system. In 1994, the funding of school district current operations became the responsibility of the state through the passage of Proposal A. Local districts were no longer able to raise money through taxes to support current operations expenses. The operational funds were tied directly to the enrollment of students, meaning that a district would receive or lose $7,187 based on gaining or losing a student.
This shift in funding meant that charter schools were in direct competition with TPS for student enrollment and the attached current operational funds. The Michigan legislature created a system of inter-district choice in 1996, enabling students to transfer to schools outside of their district. A school district can decide whether or not to allow students to transfer in but cannot determine whether students residing in the district can transfer out. Finally, in 2011 Michigan passed legislation raising the existing cap on the number of charters that the governing bodies of state public universities could issue; under this legislation (PA 277; 2011), the cap on the maximum number of charter contracts issued by state public universities was removed entirely in 2015. These state laws, passed in 1993, 1994, 1996, and 2011, have created multiple forms of choice as well as direct fiscal consequences for losing, retaining, and recruiting students. Choice has existed within Michigan at varying levels since 1993, providing over 20 years of data. Michigan also has wide variation between regions and cities that has developed over time, with three cities ranking in the top twenty nationally in charter school enrollment share: Detroit, Grand Rapids, and Flint (NAPCS, 2014). The multiple forms of choice in Michigan allow for the comparison of the competitive effects of different choice mechanisms. Over half of Michigan charter schools have existed for 10 or more years (more than 75% have existed for at least 4 years) and Michigan has the 8th highest share of charter school enrollment in the country (NAPCS Dashboard, n.d.; Snyder & Dillow, 2013). More than half of the counties in Michigan, 43 out of the 83, have 6% or more of students living in the county attending a charter school (author's calculations using CEPI enrollment data). All of this together underscores how Michigan provides a meaningful location in which to understand the impact of school choice induced competition on the public education system. This dissertation provides the first direct empirical comparison of the existing measures of competition and evaluates how well the various measures of competition capture the underlying construct of competition through the application of a rubric. This is done by addressing the following research questions: RQ1. What are the currently used measures of competition and how do they correlate when operationalized for MI? How do statistical inferences change depending on which measure is used? RQ2. Based on the theoretical and empirical evidence and a systematic ranking of the competition measures, which current measure covers the concept of school competition best? What, if any, measures of competition show promise?

Data and Methods

To answer the above research questions I use a number of data sources and methods. I first briefly summarize the data and methods in a narrative format in order to clearly demonstrate the interconnection of each separate part; I then spend considerably more time explaining the data and methods used. When discussing this in more detail below, I group the data and methods for each of the lines of inquiry together in order to maintain continuity. As alluded to above, multiple measures of competition are not inherently an issue. If all measures correlate highly with each other and with the construct of competition, there would be little difference between which measure was used.
However, if the measures do not correlate either with one another or with the construct of competition, the body of school competitive effects literature becomes significantly more fragmented. I test each of these concerns in turn. To test the first concern, I operationalize the existing measures of competition for the Michigan context using a variety of secondary data sources. I then perform correlational analyses to test whether or not the measures are correlated with one another. Following this, I explore if, and how, the inferences of a fixed effects regression change when substituting the extant measures into the same model using the same data. The results of both of these quantitative tests suggest that the measures are not uniformly correlated with one another, nor do they yield the same inference for the competitive effect of school choice. I test the second concern, how well the measures cover the underlying construct of school competition, by developing and applying a systematic rubric to the measures. The rubric is based on the conceptual framework (Figure 1) and more pragmatic concerns such as cost to construct, missing data issues, and feasibility of implementation. I draw on interviews with 10 Michigan district superintendents to further empirically ground the conceptual framework while also contextualizing the framework for the Michigan context. I then systematically and transparently evaluate the existing measures of competition via the rubric.

Data and methods assessing the correlation between measures

Operationalizing measures of competition. The data used come from multiple sources: the Center for Educational Performance and Information (CEPI) website, the MI School Data website, and the Common Core of Data (CCD). The CEPI and MI School Data websites provide downloadable Excel files, including the academic performance of TPS and charter schools. Finally, the CCD for Michigan is available online at the NCES website for download. The data used cover the school years of 2008-09 through 2012-13. Together, these data sources provide information on a wide variety of variables related to the measurement of competitive pressure facing schools due to school choice. Data on enrollment numbers and patterns, school and grade level student outcomes, district financial data, and the characteristics of the student body are all available through CEPI and the MI School Data website. The physical locations of schools are available through the CCD. Table 3 provides the definitions and data sources used to create the following measures of competition from the literature.

Table 3. Definition of competition variables and data sources used to create each measure.

Presence variables. I used the CCD to operationalize a subset of the school competition measures, labeled presence measures, drawn from Jackson (2012) and Zimmer and Buddin (2009). These measures include: a) 12 indicator variables for the presence of at least one school of any type, one TPS, one charter, or one magnet school within three different radii of a given traditional public school (2, 10, and 20 miles); b) 4 continuous measures of the distance in miles from each TPS to the nearest option (any, a TPS, a charter, or a magnet school); and finally c) 8 count variables for the total number of any, TPS, charter, or magnet schools within set radii of a given traditional public school. The CCD provides the longitude and latitude of all TPS, charter, and magnet schools in the state of Michigan; a sketch of how such measures can be built from these coordinates follows.
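The sketch below illustrates, in Python, how presence measures of this type could be constructed from school coordinates. It is a minimal illustration only: the DataFrame and column names (tps, options, lat, lon) are hypothetical stand-ins, and the measures used in this chapter were produced with Stata's nearstat command (described below) rather than with this code.

import numpy as np
import pandas as pd

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def haversine_miles(lat1, lon1, lat2, lon2):
    """Geodesic ('as the crow flies') distance in miles between coordinate pairs."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * np.arcsin(np.sqrt(a))

def presence_measures(tps: pd.DataFrame, options: pd.DataFrame, radii=(2, 10, 20)):
    """For each TPS, compute the distance to the nearest choice option, the
    count of options within each radius, and an indicator for at least one
    option within each radius (the three presence variable types above)."""
    out = tps.copy()
    # Pairwise distance matrix: rows are TPS, columns are choice options.
    d = haversine_miles(tps['lat'].values[:, None], tps['lon'].values[:, None],
                        options['lat'].values[None, :], options['lon'].values[None, :])
    out['dist_nearest'] = d.min(axis=1)                 # continuous distance measure
    for r in radii:
        out[f'n_within_{r}mi'] = (d <= r).sum(axis=1)   # count measure
        out[f'any_within_{r}mi'] = (out[f'n_within_{r}mi'] > 0).astype(int)  # indicator
    return out

Once the pairwise distance matrix is in hand, each presence variable reduces to a one-line aggregation, mirroring the calculations nearstat performs.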
The CCD also provides, along with these coordinates, a categorical variable for whether a school serves the elementary, middle, or high school grades. Using the nearstat command in Stata, I created the above series of variables for traditional public elementary, middle, and high schools. The nearstat command uses the latitudinal and longitudinal coordinates of observations to calculate a variety of distance related measures: using geodesic (or as the crow flies) distances in miles (or kilometers if specified) between two coordinate pairs, it can calculate the distance to the nearest coordinates, the number of coordinates within a defined radius, and so on. Since the CCD provides the latitude and longitude for every public school (charter or TPS) in Michigan, the nearstat command provides a straightforward way to leverage this information. Summary statistics for these and the following measures of competition are in Table 4.

Table 4. Descriptives of the measures of competition in the extant literature.

Market share variables. I used the CEPI data to create the market share measures of school competition. The CEPI data provide district level data on the overall number of students enrolled in public schools or a public school academy (charter school). Further, the data provide information on the number of students attending a district who do not reside in the district. This covers the overall enrollment for all traditional public schools and charter schools, the number of students attending a given district through a choice mechanism, and the resident district of students attending through choice. The number of students residing in a given district, regardless of school attended, can be backed out of these files. Using these data, I operationalized a series of market share variables found in the existing literature. The first measure I created was the proportion of students leaving a district as a share of district enrollment, proportionenrollment: the number of resident students leaving the district divided by the total current enrollment in the district. The second measure was the proportion of students leaving a district as a share of the total number of students residing in the district, proportionresidents: the number of resident students leaving the district divided by the total number of students who were assigned by residency to the district, regardless of the school attended. While these two measures share the same numerator, they differ in the denominator used. The proportionresidents measure only considers the impact of losing resident students through school choice policies. On the other hand, the proportionenrollment measure implicitly places weight on overall enrollment: a district seeing an increase in overall enrollment, perhaps through an open-enrollment program, while losing x number of students would receive a lower value on the proportionenrollment measure of competition than a net losing district with the same number of resident students leaving. A third way to account for student loss, particularly salient in contexts with inter-district choice, is to use the net student flow due to inter-district choice in the numerator (the number of students entering the district through inter-district choice minus the number of students leaving the district via inter-district choice) divided by the number of students residentially assigned to the district (Arsen, DeLuca, Ni, & Bates, 2015). A sketch of how these three measures can be computed appears below.
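As a minimal sketch of how the three proportional loss measures relate, the following computes each from district-level counts. The column names (residents, enrollment, leavers, idc_in, idc_out) are hypothetical labels for the CEPI quantities described above, not fields from the actual files.

import pandas as pd

def proportional_loss_measures(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical columns: residents (students residentially assigned to
    the district), enrollment (students actually enrolled in the district),
    leavers (resident students attending a charter or another district),
    idc_in / idc_out (students entering/leaving via inter-district choice)."""
    out = df.copy()
    # Share of a district's residents lost to choice, ignoring inflows.
    out['proportion_residents'] = out['leavers'] / out['residents']
    # Same numerator, but the denominator reflects actual enrollment, so
    # districts that replace leavers via choice register less pressure.
    out['proportion_enrollment'] = out['leavers'] / out['enrollment']
    # Net inter-district flow relative to the resident population
    # (Arsen, DeLuca, Ni, & Bates, 2015).
    out['proportion_netchange'] = (out['idc_in'] - out['idc_out']) / out['residents']
    return out

The side-by-side construction makes the key design choice visible: the three measures differ only in whether, and how, student inflows enter the calculation.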
As this measure, proportionnetchange, has not been used in the competitive effects literature, I do not include it in the following analyses. However, I briefly discuss the implications of each of these three measures of proportional student loss in the following two paragraphs, as the selection of the measure impacts the interpretation of the results. Each of the three proportional loss measures represents a different way to conceptualize the measurement of enrollment loss, each with a different way of accounting for student inflows via policies like inter-district choice. Using the number of residentially assigned students in the denominator, proportionresidents highlights the impact of resident students leaving a district but does not account for any non-resident student inflows through inter-district choice policies. The proportionenrollment measure indirectly accounts for student inflows by using the overall district enrollment in the denominator. Using the district enrollment rather than the residency based population either magnifies or dampens the loss of resident students, based on whether or not the district is able to attract students into the district via inter-district choice. In other words, the proportionenrollment measure allows a district that is losing resident students to charters or inter-district choice but is able to replace those students via school choice mechanisms to face a different amount of pressure than a district losing an equivalent number of students that is unable to attract students to the district. Finally, the proportionnetchange variable directly accounts for the net flows of students due to any choice mechanism. The interpretation is different from that of the other two measures: rather than the proportion of students lost via choice, this variable is interpreted as the proportional change in enrollment due to choice. The former two measures vary from 0 to 1, while the latter varies from -1 to 1 (for all intents and purposes). In contexts with only charter school choice, proportionresidents and proportionnetchange are equivalent in their absolute values and are likely preferred. Proportionenrollment differs from the other two measures as the loss of students via choice enters in both the numerator and denominator. In contexts with both inter-district choice policies and charter schools, the preference amongst the measures is less clear. Using proportionresidents in this context does not account for whether a district is able to offset the student loss by attracting students to the district. The other two measures, proportionenrollment and proportionnetchange, do allow the inflows of students to impact the pressure felt. As emerges from the interviews discussed below, the loss of students to inter-district choice may apply different pressure than loss to charter schools. If this is the case, having separate loss terms for each type of loss is likely to be the preferred approach. The above does not lay out the preferred measure amongst these, but it helps to demonstrate the differences between the types. Function of market share and presence variables. The competition measures constructed from both market share and presence represent the final set of variables I operationalized from the literature. As the Hirschman-Herfindahl Index (HHI) represents the most common measure of this type, I chose to create the HHI measures to compare against the above measures.
The HHI was originally conceived of as a measure of market concentration for private firms. The HHI is defined as the sum of the squares of the market shares ($s_i$) of each school provider $i$ in the market, where market share is defined as the proportional enrollment in provider $i$:

$HHI = \sum_{i=1}^{N} s_i^2$   (1)

The HHI ranges from near 0, many small schools in competition, to 1, a single monopolistic provider. As such, a number closer to 0 is typically associated with more competition and a number closer to 1 with less competition. The intuition behind this measure is that knowing the number of competitors within a given market alone is not enough to determine the extent of competition present. The value of the HHI is sensitive to how the market is defined. Using the CCD, I created a series of HHIs based on various geographical definitions. First, I used a county level definition of the market at the elementary, middle, and high school levels: hhicountyelementary, hhicountymiddle, and hhicountyhigh, respectively. Next, I created HHIs for each public school in Michigan based on all schools that serve the equivalent grades within given radii (5, 10, and 15 miles). In order to make the subsequent correlational and regression analyses more straightforward, I multiplied the various HHI variables by -1. This gives the interpretation that increases in the HHI variables correspond to increases in the amount of competition. In other words, moving from -.5 to -.1 indicates more competition. Regression variables. The regression analyses focus on the school average 4th and 7th grade MEAP test scores in Mathematics and Reading. I limit the analysis to these two grades and subjects to compare the findings to previous work in Michigan by Ni (2009). I standardized the student test scores for a given year at the state level. This accounts for any changes in the MEAP tests year to year by comparing a student only to other students in the same grade in the same year. Further, this allows for more intuitive interpretations of the coefficients. I then aggregate the student test scores to the school level to create school level averages for Math and Reading MEAP scores. The focus of this work is to highlight the various measures of competition and their impact on our regression based statistical inferences. As such, I include the standard school and district level controls: the percentage of free-reduced price lunch students in the school, the percentage of students reported as Black, Hispanic, or Asian, the log of district expenditures, and the log of district enrollment. Correlational analysis of school competition measures. After operationalizing the measures described above, I used two correlational measures to answer the question of how the various measures of competition compare to one another: the Pearson correlation coefficient and the Spearman rank correlation coefficient. These coefficients provide a method to empirically test how each of the previously used measures relates to the others. The Pearson correlation coefficient assesses any linear association between two measures, and the Spearman coefficient assesses whether both variables move together in the same direction, regardless of whether that relationship is linear. This correlational analysis provides an assessment of the relationship each measure has with the others. Some measures will likely be highly correlated as they are constructed using similar information and methods. However, I focus primarily on how well they co-vary across types of measures, i.e. presence, market share, and a function of presence and market share.
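To make these constructions concrete, the following is a minimal sketch of the negated HHI of Equation 1 and of the two correlation coefficients just described. The names market, enrollment, and measures are hypothetical, and the market definition (county or radius based) is assumed to have been assigned already.

import pandas as pd

def negated_hhi(providers: pd.DataFrame) -> pd.Series:
    """providers has one row per school with 'market' and 'enrollment' columns.
    Returns -sum(s_i^2) per market (equation 1, multiplied by -1 as described
    above), so that larger values indicate more competition."""
    shares = providers['enrollment'] / providers.groupby('market')['enrollment'].transform('sum')
    return -(shares ** 2).groupby(providers['market']).sum()

# With 'measures' as a school-by-measure DataFrame, pandas computes both
# pairwise coefficient matrices directly:
#   pearson = measures.corr(method='pearson')
#   spearman = measures.corr(method='spearman')

Negating the index up front means every competition variable in the subsequent correlations and regressions points in the same direction, which simplifies reading the sign of each coefficient.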
If the measures are highly correlated with each other, the estimates in the extant literature would not differ greatly if another measure were substituted in. However, if the correlations are low, or negative, the estimates obtained may be sensitive to how competition was conceptualized and ultimately measured. An alternative approach to assessing how the different measures group together, rather than relying on the conceptual underpinnings of each measure, would be Factor Analysis: after creating each of the competition measures, factor analysis would show whether the measures collect into the three categories identified in the literature. There is some appeal to this. However, I prefer the correlational approach, as each of the measures was argued by its authors to proxy competition based on a particular aspect. Therefore, if the presence measures do not correlate with one another, that is an interesting finding in and of itself. Similarly, if the market share variables group together and the function of presence and market share variables do not, then there are potential implications. For example, if all proxies of market share are highly correlated, it matters less how market share was operationalized, as each measure (assuming it faithfully adheres to the concept of market share) captures similar information. This being said, factor analysis has the potential to reveal important underlying groupings. Regression analysis of school competition measures. The correlational analyses show how the measures co-vary. Alone, they cannot tell us explicitly how the various measures of school competition would impact the estimates of the regression models in which the measures are employed. To highlight what, if any, impact using different measures has on our statistical inferences, I adopt a basic model of the competitive effects of school choice on traditional public schools. I follow the modelling of the competitive effects used in Ni (2009), as I am using later waves of the same data. The impact of school competition on school $i$'s average student test scores on the MEAP at time $t$, $Y_{it}$, is a function of the level of competition ($C_{it}$), the school level student body characteristics mentioned above ($S_{it}$), and district characteristics ($D_{it}$). The pooled OLS regression model, pooling across years, is presented in Equation 2:

$Y_{it} = \beta_0 + \beta_1 C_{it} + S_{it}\gamma + D_{it}\delta + \varepsilon_{it}$   (2)

However, if there are any unobserved factors that are associated with both the extent of competition and the school level student outcomes, the estimates may be biased. I use a fixed effects (FE) approach to account for any time invariant attributes associated with competition and student outcomes. This approach is attractive for two key reasons: a) it allows the measure of competition to be correlated with any stable characteristics of the school, including the likelihood that a given context is more amenable to choice, and b) the FE inferences are identified off of changes in the extent of competition over time for each school. To formalize the FE model, the error term is decomposed into a time variant part and a time invariant part in Equation 3:

$Y_{it} = \beta_0 + \beta_1 C_{it} + S_{it}\gamma + D_{it}\delta + T_t + \alpha_i + u_{it}$   (3)

Here $T_t$ represents a series of year fixed effects and $\alpha_i$ is the time invariant portion of the error term. The robust standard errors are clustered at the school level. Applying the FE transformation to Equation 3, subtracting the mean of each variable from each observation, eliminates the time invariant attributes.
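The sketch below estimates a model in the spirit of Equation 3 using the linearmodels package. It is an illustration of the estimator, not the code behind the results reported here: the panel index, variable names, and package choice are all assumptions.

import pandas as pd
from linearmodels.panel import PanelOLS

def fe_competitive_effect(panel: pd.DataFrame, measure: str):
    """panel is a hypothetical DataFrame with a (school_id, year) MultiIndex,
    a standardized school-average MEAP outcome, one competition measure, and
    the school/district controls named below."""
    y = panel['meap_std']
    X = panel[[measure, 'pct_frl', 'pct_black', 'pct_hispanic', 'pct_asian',
               'log_expenditure', 'log_enrollment']]
    # entity_effects absorbs the time invariant alpha_i; time_effects adds the
    # year fixed effects T_t; standard errors are clustered at the school level.
    model = PanelOLS(y, X, entity_effects=True, time_effects=True)
    return model.fit(cov_type='clustered', cluster_entity=True)

# Usage: substitute each competition measure in turn and compare beta_1, e.g.
# results = {m: fe_competitive_effect(panel, m) for m in competition_measures}

Holding everything else in the specification fixed while swapping the competition measure is exactly the comparison carried out in Table 9 below.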
Within the fixed effects framework, the competitive effect estimates represent the effect of changes in the measure of competition within a school on the outcomes of interest. Schools with invariant measures of competition from 2009 to 2012 are treated as having faced no change in competition. This is precisely one of the arguments for using fixed effects instead of a Pooled Ordinary Least Squares (POLS) framework: the POLS framework treats each school-year observation as independent, whereas FE accounts for the fact that schools are observed in each of the 4 years. Thus, using different measures of competition which vary for different proportions of districts over time influences the number of district-year observations the effect is identified off of. For example, the existence of at least one charter school within 2 miles and within 10 miles is relatively stable over the four-year panel: 53.55% and 80.01% of schools have no change in these variables, respectively. On the other hand, 14.25% of schools have a stable HHI for a 2-mile radius and 7.55% of districts have a stable HHI for a 10-mile radius over the four-year panel. However, there are certainly drawbacks to the FE approach. Using the above example of having at least one charter school within a 2-mile radius, around 50% of schools have a stable value on this measure. Since the FE approach identifies off of within-unit variation in the measure, it only provides estimates of the effect of changes in competitive pressure. This means that the FE approach, mechanically, treats the contribution to the competitive effect estimate from a district with at least one charter school within 2 miles for the duration of the panel as equivalent to that from a district with no charter schools within 2 miles over the four-year panel. Therefore, the FE coefficients do not provide an estimate of how having a high time-invariant level of competition differs from having a low time-invariant level of competition or from those districts with variation in the level of competition over time. If having a consistent level of competitive pressure is associated with (positive or negative) competitive effects while an unstable pattern of competitive pressure is associated with null effects, then the FE approach would produce estimates which run contrary to the true competitive effects. In sum, relying only on the FE approach assumes that variation in competitive pressure is associated with competitive effects, while POLS assumes that the level of competitive pressure is associated with competitive effects. Given this caveat, I follow the FE approach below, as most competitive effects papers utilize this approach regardless of the competition measure chosen. This further nuance underscores the influence that the choice of measure, and potentially modeling decisions, has on the inferences made. The estimates of $\beta_1$ associated with the various measures can be compared. Examining the point estimates and statistical significance, coupled with the correlational analyses, will help further our understanding of a) whether the measures are capturing the same information and b) whether the inferences of competitive effects are robust to different measures of competition.

Data and methods assessing the conceptual coverage of the measures

Interviews. The purpose of the interviews was to contextualize and update the conceptual framework for Michigan related to the extent of competition a district faces due to school choice policy.
I chose to interview district superintendents for two primary reasons. First, there is little empirical evidence about how superintendents perceive competition from school choice policies. Second, the district superintendent plays a key role in district level responses, possesses direct knowledge of funding levels, determines the allocation of funds, helps decide the extent of district participation in the Schools of Choice program, and has a unique vantage point from which to understand how school choice and competition impact all schools within the district. Principals, school boards, and other district administrators likely influence the impact of school competition on district responses. For example, the empirical evidence suggests that principals perceive school competition (e.g. Jabbar, 2015; Kim & Youngs, 2013; Loeb et al., 2011), which may have some impact on decision making at the school level (Kim & Youngs, 2013; Loeb et al., 2011). While the roles of these other actors are fruitful areas for further research, I chose to focus on the superintendent due to their unique role in district decision making. There were two important types of variation I wanted to probe based on the empirical literature: variation in the extent of competition and variation between contexts. To ensure variation along these vectors, I employed a stratified purposive sampling technique (Teddlie & Yu, 2007). I purposively selected five Intermediate School Districts (ISDs) in Michigan based on overall enrollment, a significant proportion of families using school choice policies to attend non-residency assigned schools, and relevance to the Michigan context. The five ISDs either contained a major urban center (4 out of 5) or had high population density and were adjacent to a major urban center (1 out of 5). This meant that no primarily rural ISDs were selected, and my findings are not generalizable to all contexts; however, these constraints allow for a better understanding of a particular choice context: primarily urban and suburban use of school choice mechanisms. Including rural districts may have allowed me to understand how school choice operates in contexts with relatively few choices, choices which may be more related to parental job locations, or the nearest school being across district lines (this was discussed by one superintendent whose district abutted a primarily rural district). Within these five ISDs, I stratified the districts by the proportion of students residing in the district but attending another school, via inter-district choice or charter school enrollment, in each school year between 2009-10 and 2012-13. Districts were placed into three strata: low (<10% of residents leaving), medium (>=10% and <20%), and high (>=20%). Within each ISD, I selected a target and a backup district within each stratum, giving me a list of 15 targeted districts and 15 backup districts. The first criterion for selection for both the targeted and the replacement districts was that the district had remained within one stratum for the four years. To note, as discussed above and throughout, the market share of students does not necessarily represent competitive pressure. However, I used this measure to stratify the districts as I believed students utilizing school choice policies to leave the district is a necessary, but possibly not sufficient, factor.
Further, by stratifying districts through the use of the proportion of resident students leaving the district, I am able to test this assumption by ensuring I talked to districts losing very few students and districts losing many students via choice options. Alternative ways of stratifying the districts were considered; one example is whether a district saw increasing, decreasing, or stable trends in the proportion of resident students attending another school. I ultimately settled on competition stability for the following reasons: a) this enabled a stable, systematic way of holding one key variable constant across contexts, b) the existing literature suggested that certain levels of student loss were associated with competitive pressure, not trends in student loss, and c) the chosen strata represent a clean way to separate districts into groups. I then purposively selected amongst the districts which remained within a single stratum for four years based on size, geographic proximity to other target districts, and context. I selected the best fitting district as the target and the second best as the replacement. Primarily rural districts were not represented in my sample, but districts which included rural locations were. Table 5 presents a visual representation of the sampling strategy. The columns represent the five ISDs and the rows represent the three strata. The pseudonyms of the target districts for each ISD and stratum are presented in each cell.

Table 5. Sampling strategy with pseudonyms.

I chose the naming convention to clearly indicate the stratum and ISD the district came from: districts from the low stratum all begin with Low, districts from the middle stratum begin with Middle, districts from the high stratum begin with High, and the letter indicates the ISD portion of the name. When I was unable to secure an interview in the target district, I contacted the replacement district. The replacement districts are in parentheses. A bolded district name indicates an interview was obtained. Five of the 15 cells do not contain a bolded district, as I was unable to obtain access to both the target and replacement districts. Further, my final sample contains fewer Middle (3 out of the target 5) and High (2 out of the target 5) districts. This response rate poses a concern due to potential response bias. However, I am not trying to make inferences about the population of superintendents in Michigan; I am trying to understand what patterns might exist across contexts and levels of student loss. The strategy employed is not intended to generalize to the population under study, but instead allows for generalizing to a theory. This sampling strategy allows me to compare superintendents within an ISD, or context, who differ on the levels of student usage of choice policies. I am also able to compare across contexts holding the stratum constant. This allows me to probe the role that context plays (across ISDs at a given stratum) and that student loss plays (within an ISD across strata) in determining how superintendents perceive the extent of competition facing their district. I conducted one 45-60 minute semi-structured interview with each superintendent. I piloted this interview with one district superintendent, receiving feedback on the questions and length. The final interview protocol is included in Appendix B. To understand how superintendents perceive the extent of competition felt in their district, I used three techniques. I first asked open-ended questions about the pressures associated with school choice policies, following up with probing questions.
This allowed the respondents to share any pressures they associated with school choice policy while allowing me to ask specific follow up questions based on their responses. Second, I presented a stack of index cards (Spradley, 1979) to the superintendents with factors that the literature and previous interviews associated with the extent of competitive pressure. I then instructed the superintendents to read through the cards, sorting them into two piles: a) those that did factor into the extent of competitive pressure facing the district and b) those that did not. After this sorting exercise, I asked superintendents if there were any factors missing. If so, I would add the factors to a new index card, and any new index card was retained for future interviews. I then instructed the superintendents to organize the factors associated with competitive pressure from the most important to the least important, and we then discussed the order they placed them in. The final orderings of the cards are presented in Figures 7 through 16; the green boxes represent factors from the literature and the blue boxes represent factors added through interviews. Finally, I asked superintendents to rate the school choice related level of competition their district faced from 0 to 10. The 45-60 minute interviews represent a relatively short period of time in which to fully gauge perceptions. However, using these three different methods within the interview allowed me to (re)assess my inferences based on different types of data and develop a consistent story across the responses to the semi-structured interview questions, the index card sorting exercise, and the overall assessment of the level of competition. Analyzing interview data. To analyze the interview data, I transcribed the interviews in their entirety. I then imported the text files into a qualitative data analysis software package, NVivo, for coding and analysis. I read through each of the interviews and created a one to two page memo for each interview capturing my initial reactions. As the primary aim was understanding what factors contributed to the extent of competition felt at the district level, I focused most of my subsequent analytic efforts on responses which dealt with this topic. My initial codes drew on the factors the existing literature suggested were related to the extent of competition. For example, I started with Extent of Competition Factors as a master code and a series of subcodes, i.e. District Enrollment Trends, location of charter school, distance to nearest district, and number of students leaving the district. (While I focused on a particular section of my interviews, I coded the rest of each interview using more inductive methods following Glaser and Strauss (1967): I reviewed the transcripts line by line, created codes for each line, revisited the emergent codes, and developed more abstracted categories covering the initial codes. While this coding work is not germane to determining the factors associated with the extent of competition felt by superintendents, the initial coding efforts provided a better understanding of my data as a whole, raised new research questions, and will provide fruitful data for future study.) I then coded the data from this initial set of master and subcodes, adding additional subcodes when superintendents introduced a new factor, such as the safety of schooling options (safety). From these codes, I looked for emerging patterns and wrote short analytic memos based on what I noted.
I then went back to the data to look for both confirming and disconfirming evidence for my conclusions. If I found disconfirming evidence, I would update my conclusions accordingly. This proceeded in an iterative fashion until the explanation of patterns accounted for the data.

Data and methods evaluating the measures of competition using a rubric

Developing an evaluation rubric. The literature-grounded conceptual framework (Figure 1) was updated using the interview findings to inform the evaluation rubric. As Blumer (1969) argued, empirical measures should be evaluated based on how well they cover the underlying construct of interest. Systematic, transparent evaluations of how well measures relate to underlying constructs have their roots in several fields. I model my evaluation on efforts in education (e.g. Harwell & LeBeau, 2010), healthcare (e.g. The National Quality Forum, n.d.), linguistics (Budanitsky & Hirst, 2006), and psychometrics (e.g. Haynes, Richard, & Kubany, 1995). Content validity, or the coverage of the underlying construct by a measure, is assessed in nearly all evaluation efforts. Thus, one of the key aspects of my systematic evaluation of the various measures of competition is the coverage of the conceptual framework I developed. The literature also suggests more pragmatic concerns when evaluating measures. Harwell and LeBeau (2010) suggest that measures of SES should be evaluated based on four criteria: reliability and validity, applicability to all students, low non-response rates, and minimal costs associated with collection. The National Quality Forum, a Washington D.C. non-profit in the field of healthcare, similarly suggests that any measure must meet a set of conditions before being considered. These include the importance of the measure to report, the scientific acceptability of the properties of the measure, feasibility, usability and use, and related or competing measures (NQF, n.d.). I include these concerns in my evaluation. To make the process as transparent and systematic as possible, I developed a rubric which evaluates the measures based on the coverage of the concept, the cost of data collection, potential data issues, feasibility of measurement, and the sensitivity of the measure (i.e. can it measure various degrees of competitive pressure or is it a binary variable). In the evaluation rubric I create subcategories within each of the overarching categories (presence, market share, and function of market share & presence) which group the extant measures based on their assumptions about what drives competition. For example, I evaluate all measures which assume that competitive pressure is a function of the number of options within a set boundary at once, as the logic remains the same regardless of the distances used in the particular measure (i.e. 2 miles, 5 miles, district boundaries). I give a similar treatment to all measures which equate competitive pressure with the presence of at least one choice option within various boundary definitions, as little in the application of the rubric changes with different boundaries. For measures of market share, I include the basic conceptual measure of the proportion of students lost and two key extensions. In the rubric I only evaluate the HHI, as it is the most commonly used function of presence and market share in the literature. An evaluation of other measures in this category (i.e. GAM) would proceed similarly.
I then apply the rubric to each of the three subcategories mentioned, as well as to other potentially promising measures of competition. By making the rubric as transparent as possible, further refinements to the criteria and to the application of the criteria are possible.

Results

The results section follows the flow of the data and methods: I first present the correlational results, followed by the FE regression findings; then I present the results from the interviews and, finally, the rubric driven evaluation of how well the existing measures of competition cover the construct and meet pragmatic concerns. Correlational results. The correlational results are presented in Tables 6, 7, and 8. Tables 6 and 7 present the within category correlations (i.e. within measures defined as presence, within market share measures, and within measures which are a function of the two), using the Pearson and Spearman coefficients, for 4th grade and 7th grade, respectively. Each panel represents a category of measures: the top three panels are presence measures, the fourth panel is HHI, and the final panel is market share. Above the diagonal are the Spearman correlation coefficients; below the diagonal are the Pearson correlation coefficients. Using both measures of correlation, the market share and HHI measures are highly correlated within category, including the relationship between the HHIs with the market defined as the county and those with the market defined by a five-mile radius. This is true at both the 4th and 7th grade levels. When looking at the presence measures, the story told by the correlations is much more varied. There are some measures which are highly correlated, linearly and monotonically, such as the different number of neighbors measures. Others, such as the distance to the nearest neighbor, appear only weakly related to the rest. Overall, the key takeaway of Tables 6 and 7 is that the within category correlations are quite high for HHI and market share while the presence measures are more mixed.

Table 6. Within category comparisons for fourth grade.

Table 7. Within category comparisons for seventh grade.

Table 8 presents the cross category correlations for a subset of measures. I present both market share variables (the first two), the HHI for 5 and 10 mile radii from a given public school (third and fourth), and the charter based presence measures (last 5). Like Tables 6 and 7, the Spearman coefficients are above the diagonal and the Pearson coefficients below. The cross category correlations are relatively low, with the exception of the HHIs compared with the charter presence variables. This is perhaps not surprising given that both sets of measures rely on the presence of nearby schools; the correlation between the HHI and presence measures is to be expected since, after the transformation described above, HHI values closer to 0 indicate less concentration in the market and values closer to -1 indicate more. On the other hand, the market share measures are weakly correlated with the presence and the HHI measures.

Table 8. Between category correlation on select measures for grades 4 and 7.

Fixed effects results. The fixed effects regression results with the full suite of covariates are presented in Table 9. Each cell in Table 9 represents a separate regression estimate of $\beta_1$, the coefficient associated with the measure of competition. Every regression includes school and district controls and year dummies, and the robust standard errors are clustered at the school level. Each column presents the results for a different grade and subject outcome: Grade 4 Math MEAP scores, Grade 7 Math MEAP scores, Grade 4 Reading MEAP scores, and Grade 7 Reading MEAP scores.
Appendix A presents the fixed effects results when adding the controls in a stepwise manner: starting with just a measure of competition, then adding the log of enrollment, then including the racial/ethnic covariates, then adding FRL percentage and the log of expenditures, and finally the full suite of covariates. The results are qualitatively the same.

Table 9. Fixed effects regression estimates of the competitive effects on standardized average MEAP scores.

Overall, the analyses show inconsistent results for the competitive effect on test scores. Several measures of competition are statistically significant, while a majority are not. Within those that are statistically significant, the effect is not always in the same direction. Having a greater proportion of students leaving the district (propofenrollmentleaving, propresidentleaving) is associated with higher scores, as is a higher concentration of charters or magnets within 2 miles (magnetnbnei2, charternbnei2). Conversely, a higher HHI (a lower concentration) is associated with higher scores, having many neighbors in general (allnbnei10) is associated with lower scores, and having a charter within 10 miles is associated with lower 4th grade reading scores. Contextualizing the conceptual framework for MI through interviews. The interviews serve as the primary way to update the conceptual framework for the Michigan context. Table 10 presents basic descriptive characteristics of the districts in which the interviews took place.

Table 10. Descriptive characteristics of the districts for the superintendents interviewed.

In addition to the information in Table 10, it is important to remember that the participating districts were near urban centers in the lower peninsula of Michigan. No primarily rural districts were included in the sample, and there is reason to believe that superintendents in rural contexts may have responded differently. The districts in the low stratum, names beginning with Low, have similar descriptive characteristics across ISDs. The middle and high strata tend to have greater concentrations of FRL students but vary in their racial/ethnic makeup. One interesting point that emerges from the comparison of the first two columns is how the choice of denominator produces different proportions of students leaving. In what follows, I organize the findings around the index card sorting exercise, supplementing my discussion with other evidence from the interviews as needed. Figure 6 presents all of the different terms on the index cards: green terms were the initial cards I created and blue terms represent ones that superintendents added during the interviews.

Figure 6. Possible responses on index cards.

Figures 7 through 16 recreate the ordering and patterning of how district superintendents sorted the index cards. In each figure, the factors are sorted from top to bottom in terms of most influence to least influence. The absence of a factor indicates that the given superintendent did not perceive that factor as being highly relevant to the extent of competition facing their district. From the interviews and card sorting exercise, four major factors emerged. The first was the near unanimity from district superintendents on the importance of student flows. The number of students leaving the district and the overall district enrollment trends emerged as consistently top influences on the extent of competition felt at the district level.
Seven of the ten superintendents had the number of students leaving the district on their lists, with six of the ten placing that factor at or near the most influential. Similarly, the overall district enrollment trends appeared at or near the top of many lists; superintendents from the low, middle, and high strata as well as from different ISDs mentioned this. The superintendent from district Low C said that a large enough loss of students would "catch our notice." This was echoed by the superintendent in Middle E: "In [a former district] because we were either staying neutral or shrinking a little we did feel the pressure to go out and try to get schools of choice kids but because we're filled at the brim here..." These responses suggest that enrollment trends and student flows impact decision making even in locations with relatively few students leaving.

Figure 7. Response to the index cards - Low A
Figure 8. Response to the index cards - Middle A
Figure 9. Response to the index cards - Low B
Figure 10. Response to the index cards - High B
Figure 11. Response to the index cards - Low C
Figure 12. Response to the index cards - Middle R-C
Figure 13. Response to the index cards - Low R-D
Figure 14. Response to the index cards - High D
Figure 15. Response to the index cards - Low E
Figure 16. Response to the index cards - Middle E

In high strata districts, superintendents were more acutely aware of which grades students were leaving from and why. The superintendent in High B, for example, discussed how the district had changed the grade configuration in its schools to respond to the loss of students at the traditional transition grades, between 5th and 6th and between 8th and 9th. The second factor highlighted by superintendents was the quality of other schooling options. Seven listed the quality of nearby districts as influencing the level of competition felt. In follow up probing, superintendents indicated that they were thinking about school test scores and parental perceptions of test scores when they considered this index card. The superintendent from Low D spoke "of quality, and you know just overall quality and how can we match that or beat that so we think about that." In follow up, the superintendent indicated that parental perceptions of quality mattered, with families moving to schools that are perceived as being higher quality based on test scores, the SES of the student body, and the programs offered. They debated the idea that other schools are necessarily better, but held that the perception of quality runs along those lines. This is an important point to note, as parental perceptions typically are absent from the school competition literature. This may play a key role in explaining why school districts respond to competition by increasing marketing efforts and parental outreach (e.g. Hess, 2002; Hess, Maranto, & Milliman, 2001; Loeb et al., 2011; Lubienski, 2005, 2007; Maranto, Hess, & Milliman, 2001). Separately, seven superintendents discussed the programs available at nearby public school districts. The superintendent from Low C indicated that "programs offered by other entities" impact the competition the district feels. The programs the superintendents associated with this card included band, sports, language programs, and drama, amongst others. Superintendents also mentioned the influence of facility quality and parental perceptions of safety. Interestingly, charter quality and program offerings were mentioned by fewer district superintendents. This may be in part due to the specific districts where I conducted my research.
Due to the relatively higher response rate of low and medium competition districts compared with districts facing high student loss, my sample may have missed districts which would have ranked charter school quality and programs higher. The third main finding was the perception that districts facing declining general fund balances face higher levels of competitive pressure. This was discussed by districts losing students about themselves, by net gaining districts about districts losing students, by deficit school districts about other deficit school districts, and by non-deficit school districts about deficit school districts. The superintendent of District Low A, which saw relatively few students leaving but faced a declining general fund balance, noted that "financially strapped districts are looking for ways to increase their out of district students and we would be one of them because this year for instance..." This was echoed by the superintendent of a high loss, declining general fund balance district: "...into October and November. At that point then you have your count and you know..." (Superintendent High D). In other words, there was agreement across ISDs and levels of student loss that districts with a declining general fund balance were under substantively more competitive pressure than those districts with stable general funds. In total, seven out of ten district superintendents discussed this without prompting. Finally, superintendents suggested that they felt different pressures from the loss of a student via Schools of Choice (inter-district choice) than from the loss of a student to a charter school. SoC choosers tended to leave for good: the districts they moved to were typically K-12, had extra-curriculars, and had most of the programs and services of the district they left. On the other hand, charter choosers would often come back within a year or two, at some point during their K-12 careers: "...kids coming in and out and many times those choices are made based on a, harumph, to a charter. But when they go to a neighboring district we..." (Superintendent High B). Superintendents suggested that families would return from charter schools for a number of reasons: the charters typically only offered grades K-8, and the support services were lower. Systematically evaluating the measures of competition. The following presents a rubric based on the above conceptual framework and the interviews with Michigan district superintendents. I then apply the rubric to various measures of competition from the literature as well as to alternative measures not currently used in the school choice literature. The evaluation rubric is displayed in Table 11, with a subset of the measures of competition evaluated. The first column displays a brief description of the measure of competition. The next four columns evaluate the given measure based on the applicability of the measure and the coverage of the conceptual framework. The final four columns evaluate each measure based on practical considerations. I provide the evaluation of the various measures of competition based on the corresponding criterion within each cell. The first two measures I evaluate fall within the presence category; the same logic applies for all measures of presence, regardless of how the boundaries are defined (i.e. 2 miles, 5 miles, district borders). Neither of these measures accounts directly for either the perceptions of TPS administrators or the local context. While the number of choice options likely does influence the perceptions of TPS administrators somewhat, the interviews suggest a limited role for a simple count of options. The measures do not account for any variation in the school choice policy context; therefore, comparisons between contexts should be done carefully.
These two general types of presence measures do capture an important characteristic of the school choice market: the potential loss of students through school choice mechanisms. However, the theoretical, empirical, and interview evidence suggests other factors play roles in determining the extent of competition facing a district. On the other hand, these measures are highly feasible, free to construct, and likely to have few data issues, as the necessary data are freely available through the Common Core of Data for all states across multiple years. The gradation of the two presence measures is medium for the number of charter or TPS schools and low for the indicator variable: the number of schools is a count that can take on many values, while the indicator variable is only 0 or 1.
16 The same logic applies for all measures of presence, regardless of how the boundaries are defined (i.e. 2 miles, 5 miles, district borders).
Table 11. Evaluation rubric for measures of competition.
I use the general version of the market share measure of competition, as a similar discussion applies to all variations of the measure. Again, the market share categories do not explicitly account for any variation in the school choice policy context or local contexts, making cross-context comparisons less straightforward. While the perceptions of TPS administrators are not directly accounted for, the interviews and conceptual framework suggest the proportion of students leaving the district influences the perceived extent of competition. The characteristic of the school choice market captured by this measure, the number of students leaving as a proportion of enrollment, is more central to the conceptual framework and was mentioned by a majority of the superintendents interviewed. This measure also does relatively well on the practical concerns. The main feasibility concern stems from the lack of publicly available administrative data, at sufficient disaggregation for all states, on the number of students attending a non-residentially assigned district. This is likely to become less of a problem in the coming years as more states develop their data capacity and transparency; for now, measures of market share may only be possible for a subset of states. This is not a current concern for the Michigan context, as the necessary data are publicly available. The gradation of the market share measures is high, ranging from 0 to 100 percent. In Michigan, the potential data issues and costs are low as the data are free and well maintained. The two refinements of including the duration of exposure to student loss (Ni, 2009) and the quality of the nearby schools (Cremata & Raymond, 2014) improve the coverage of the characteristics of the school choice market. For the Michigan context they do not negatively impact the practicality concerns, though this likely differs state to state.
The next measure is the HHI. Like the above measures, the HHI does not account for the school choice policy context or the local context. The HHI incorporates a measure of concentration, based on the number of competitors and the market shares of those competitors, to account for the characteristics of the school choice sector. This provides coverage of one core factor according to the literature and interviews, the number of options nearby. For Michigan, the HHI does as well as the market share measures. Most ways of defining the boundaries of the choice market (by geographic distance, by whether a student from District 1 has ever attended the school, etc.)
can be created with the Michigan data. The measure of competition can vary continuously between 0 and 1, the data are freely accessible through a combination of the CCD and Michigan data sources, and the data issues are minimal. A potential improvement of the HHI measure would be to incorporate a measure of school quality into the HHI calculation, similar to what Lijesen (2004) calculated for the civil aviation sector.
The next three measures are either not used in the literature or used sparingly. Using a direct measure of the perceptions of TPS administrators would provide indirect coverage of each of the non-perception categories: the perceptions of TPS administrators are influenced by the school choice policy context, the local context, and the characteristics of the school choice market. Depending on the data collection instrument, i.e. surveys or interviews, the data could be used to create finely differentiated measures of the extent of competition. However, the feasibility of gathering this information for all, or even a majority of, districts in a state is low. Further, the costs in time and resources associated with this type of measure are significant in comparison with other measures of competition, especially if the longitudinal impact of competition is to be explored. There are also important data issues with survey- or interview-collected perceptions of TPS administrators. First, the interviews will likely have to be retrospective in nature due to the lag in the availability of secondary data for analysis. This raises two potential issues: the need for superintendents to have served a minimum number of years and the possibility that current decisions or contexts may influence the recollection of the past (Becker, 2007). Second, the perceptions of district superintendents do not account for the direct influence of the school choice policy context, the characteristics of the school choice market, or the local context on the extent of competition a district faces; the quality of a perceptions measure relies on the accuracy of those perceptions. The next measure attempts to address the incomplete nature of collecting perception data by directly including the characteristics of the school choice market. While improving on the use of superintendent perceptions alone, it faces the same practical concerns just mentioned: cost, feasibility, and data issues.
Discussion
Taken together, Tables 6 through 8 suggest that different measures of competition are either capturing different aspects of competitive pressure or that some are not proxying what they are intended to cover. If they all were proxying the same fundamental construct, we would expect higher correlations across the board rather than primarily amongst those constructed using similar aspects, i.e. market share or presence. Ideally, all of the measures would correlate with each other even when using different proxies of competition. Perhaps more worrisome are the results of the fixed effects regressions presented in Table 9, which are similar in part to the results found in Zimmer and Buddin (2009). Depending on the measure used, a different statistical inference may emerge. In other words, given the same underlying dataset, finding positive, negative, or no competitive effects depends in part on the measure used. (A stylized version of this check is sketched below.) The above correlational analyses and regressions cannot indicate which measure best captures school competition.
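The following sketch shows the kind of check just described: the same two-way fixed effects specification re-estimated with each candidate competition measure swapped in. The file name, outcome, and measure columns are hypothetical stand-ins for a district-by-year panel, not the dissertation's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical district-by-year panel with an outcome, identifiers,
# and alternative proxies for competition.
df = pd.read_csv("district_panel.csv")

measures = ["charter_count_5mi", "charter_presence", "market_share_lost"]

for m in measures:
    # Identical specification each time; only the competition proxy changes.
    fit = smf.ols(f"test_score ~ {m} + C(district) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["district"]}
    )
    print(f"{m}: coef={fit.params[m]:.3f}, p={fit.pvalues[m]:.3f}")

# If all three proxied the same construct, the sign and significance of the
# competition coefficient should agree across rows; the concern raised above
# is that, in practice, they need not.
```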
Given the empirical evidence that the measures of competition are not universally highly correlated and that different measures may yield different statistical inferences, these analyses still do not tell us how to bring together different studies which employ different measures of competition. Each of the measures used in the literature and in the above analysis likely captures an important aspect of the school choice environment. But how should one choose amongst the various measures of competition?
The development of a conceptual framework is a first step towards answering that question. Above, I reviewed the pertinent education literature, supplementing it with theory and empirical findings from Industrial Organization and Economic Sociology. These three literature bases were brought together to create a conceptual framework from which to evaluate the extant measures of competition. Interviews of district superintendents were conducted in order to bring new empirical evidence to bear and to contextualize the rubric for the Michigan context. Finally, existing measures and several new measures were evaluated using the rubric.
The use of the rubric suggests a relative ordering of the various measures. The presence measures are easily created for most contexts and across many years. However, they are limited in their coverage of the concept of competition in the educational setting based on the conceptual framework. According to the application of the rubric, the market share variables are preferred over the presence variables as they account for a characteristic of the school choice market which influences the perceptions of TPS administrators. Further, the extensions of Ni (2009) and Cremata and Raymond (2014) add additional coverage of key factors without negatively impacting the pragmatic portion of the rubric. Whether to use the HHI over the market share extensions is not entirely clear from the rubric: both account for important factors which likely influence perceptions, and in the Michigan context there is no difference in the feasibility, gradation, data issues, or costs associated with either. The rubric highlights the value of obtaining perception data directly but also indicates the tradeoff in practicality: perceptions have a high degree of coverage but are costly, with potential data problems.
In the last row of the rubric, I suggest a measure which is not currently in use but is based on the conceptual framework developed above and the interviews with Michigan superintendents. The measure is made of two constituent parts: the extent of competition due to inter-district choice (TPS to TPS movement) and the extent of competition due to charter school choice (TPS to charter movement). Each part consists of two components. The first component (below labeled L) is the proportion of students leaving the district through a school choice mechanism (either inter-district choice or charter school choice), weighted by the relative quality of the choice schools based on test scores. The second component (below labeled C) is this weighted loss term interacted with two mediating variables included given the district superintendent interviews: a) the year-to-year proportional change in the general fund balance and b) the year-to-year proportional change in district enrollment.
The proportion-of-students-leaving and weighting components of the measure are created based on Equation 4. The weighted leaving measure, L, is calculated for each district i at time t from the number of students, D, from district i attending district j in sector s (traditional public schools or charter schools) in time t-1, weighted by the term w:
\[ L_{ist} = \frac{\sum_{j} w_{ijs,t-1}\, D_{ijs,t-1}}{E_{i,t-1}}, \quad \text{for each } j \text{ enrolling students from district } i \text{ in time } t-1 \tag{4} \]
The weighting term, w, allows for differential weights to be placed on the loss of students to each sector, s, and to each district j. The weighting term can take on a variety of formulations depending on the assumptions made. For example, w could be designed such that a proportional relationship is assumed between the average MEAP test scores and the amount of weight placed on the loss of a student, as in Equation 5:
\[ w_{ijs,t-1} = \frac{A_{j,t-1}}{A_{i,t-1}} \tag{5} \]
Equation 5 sets a proportional relationship between the average MEAP test scores (A) at t-1 of district j and district i. This produces a weight which varies around 1 based on the distance between district average MEAP test scores: if district j performs better on average than district i, the weight is greater than 1 and more weight is given to students lost to district j; if district j performs worse on average than district i, the loss of students to district j is down-weighted to less than 1. This weights the loss of a student to another district by the relative difference in test scores. For example, consider a school district called Washington that loses 200 students, split evenly between two neighboring districts, Franklin and Monroe. If Franklin and Monroe have higher average MEAP scores than Washington, Equations 4 and 5 imply more competitive pressure on Washington than if Franklin and Monroe had equal or lower average scores. This measure accounts for the theoretical and empirical evidence as well as the findings from the Michigan interviews.
Another method of constructing w is to assume that the relationship between the difference in average MEAP test scores and the competitive pressure of student loss varies according to whether a student leaves district i for a lower, similar, or higher performing district j. This is demonstrated in Equation 6, where the weighting term is separated into three indicator components, one each for whether the student attends a district which performs more than a quarter of a standard deviation below the assigned district, a similarly performing district (within .25 standard deviations), or a higher performing district (scoring more than .25 standard deviations above). Letting \( \Delta_{ij,t-1} = A_{j,t-1} - A_{i,t-1} \), measured in standard deviations:
\[ w^{low}_{ij,t-1} = \mathbf{1}[\Delta_{ij,t-1} < -0.25], \quad w^{sim}_{ij,t-1} = \mathbf{1}[-0.25 \le \Delta_{ij,t-1} \le 0.25], \quad w^{high}_{ij,t-1} = \mathbf{1}[\Delta_{ij,t-1} > 0.25] \tag{6} \]
This yields a leaving term with three sub-components, as in Equation 7:
\[ L_{ist} = \frac{\sum_{j} w^{low}_{ij,t-1} D_{ijs,t-1}}{E_{i,t-1}} + \frac{\sum_{j} w^{sim}_{ij,t-1} D_{ijs,t-1}}{E_{i,t-1}} + \frac{\sum_{j} w^{high}_{ij,t-1} D_{ijs,t-1}}{E_{i,t-1}} \tag{7} \]
Regardless of the definition of w, the numerator in Equation 4 is divided by the number of students, E, residing in district i at time t-1. The components which are summed are from the previous year, to account for a lagged response to competition. The weighted leaving measure, L, is then interacted with the proportional change from the previous year in district i's enrollment at time t-1, P, and the proportional change from the previous year in the general fund balance at time t-1, G, to calculate the additional component of competition, C, for district i at time t from sector s, as in Equation 8:
\[ C_{ist} = \left( L_{ist} \times P_{i,t-1},\; L_{ist} \times G_{i,t-1} \right) \tag{8} \]
(A computational sketch of Equations 4 through 8 follows below.)
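As a computational sketch of Equations 4 through 8 under the ratio weighting of Equation 5: the record layout, field names, and example numbers below are illustrative assumptions about how the flow, MEAP, and enrollment data might be organized, not the actual Michigan files.

```python
def weighted_leaving(flows, meap, enrollment, district, year, sector):
    """Equation 4 with the Equation 5 ratio weight.

    flows: records {"origin", "dest", "sector", "year", "students"} giving D,
    the students from an origin district attending a destination in a sector.
    meap: {(district, year): average MEAP score, A}.
    enrollment: {(district, year): resident students, E}.
    Returns L for the given district, year, and sector.
    """
    total = 0.0
    for f in flows:
        if (f["origin"], f["sector"], f["year"]) == (district, sector, year - 1):
            w = meap[(f["dest"], year - 1)] / meap[(district, year - 1)]  # Eq. 5
            total += w * f["students"]                                    # Eq. 4 numerator
    return total / enrollment[(district, year - 1)]                      # divide by E

def competition_terms(L, enroll_change, fund_change):
    """Equation 8: the weighted loss term interacted with the two mediators,
    P (proportional enrollment change) and G (proportional general fund
    balance change), both at t-1, returned separately since they enter the
    regression models separately."""
    return {"L_x_P": L * enroll_change, "L_x_G": L * fund_change}

# Hypothetical usage echoing the Washington example: 200 students lost,
# split evenly between two higher-scoring neighbors.
flows = [
    {"origin": "Washington", "dest": "Franklin", "sector": "TPS", "year": 2011, "students": 100},
    {"origin": "Washington", "dest": "Monroe", "sector": "TPS", "year": 2011, "students": 100},
]
meap = {("Washington", 2011): 70.0, ("Franklin", 2011): 77.0, ("Monroe", 2011): 77.0}
enrollment = {("Washington", 2011): 2000}

L = weighted_leaving(flows, meap, enrollment, "Washington", 2012, "TPS")
print(L)  # 0.11 rather than the unweighted 0.10
print(competition_terms(L, enroll_change=-0.02, fund_change=-0.05))
```

Because both destinations score above Washington, the weighted share (0.11) exceeds the raw share of leavers (200/2,000 = 0.10), so the measure registers more competitive pressure, which is exactly the behavior Equation 5 is meant to produce.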
The two terms interacted with L in Equation 8, P and G, are also to be included separately in the regression models to account for their individual contributions to the outcomes examined. As this promising measure of competition (Equation 9) is based in part on interviews with Michigan superintendents, it is intended to be comparable only to school choice policy contexts similar to the Michigan context (systems with funding following the student, for-profit and not-for-profit charter schools, and the existence of inter-district choice). This measure captures two key aspects of the school choice market: the number of students leaving the district and the MEAP test scores of the choice options. It also includes observable variables district superintendents indicated were important in determining the extent of competition facing their district, and it allows different relationships to exist between the pressure exerted by losing students to charter schools and the pressure from losses to neighboring districts, resonating with what Michigan superintendents discussed. Finally, it includes two proxies for the local context: shrinking enrollment and the general fund balance.
In terms of feasibility, it is straightforward to create this measure from the publicly available data, and its gradation is high. While the initial collection of interviews and analyses would be resource intensive, once completed the cost is comparable to the above measures which rely on secondary data analysis. This measure receives a middle rating for potential data issues because it relies on interview data from a subset of superintendents in Michigan. The applicability of this measure of competition to Michigan rests partially on the interview data, heightening any potential issues with the data collection and analysis. However, the current study draws from a variety of contexts within Michigan and actively pursued disconfirming evidence.
The evaluation of the final measure suggests that it may be a promising measure of competition. The measure provides a high degree of coverage of the conceptual framework and raises little concern from a practical standpoint. It not only covers the conceptual framework well but also takes into account several key insights from the superintendent interviews.
Beyond the application of this conceptual framework to the development of a measure of competition, the framework may also have implications for the design of school choice policy by better reflecting the reality of the educational sector. For example, Michigan superintendents responded that they are more sensitive to the quality of their competitors than to the number of nearby options. Therefore, policies encouraging the development of high quality charters may not only provide new, higher quality options but may also generate higher levels of competition with fewer charter schools.
The conceptual framework and evaluation of the measures of competition also directly inform the educational policy research literature. Based on the evaluation of the measures of competition using the above rubric, the most commonly employed measures may provide only a minimal amount of coverage of the underlying construct. While they meet the practical criteria, the evidence shows the need for the school choice literature to be interpreted based strictly on what is being measured.
Using conservative interpretations, studies which utilize presence measures are in fact studies of the presence of options rather than of competitive effects. Similarly, studies which use market share measures are studies of the effect of losing students through choice rather than competitive effect studies. These are subtle but important differences which may explain why there is such variation in the competitive effects literature.
The above is the first systematic effort to bring together the literature to develop a conceptual framework for the extent of competition facing school districts which is grounded in theory and updated with empirical evidence. It focuses attention on the measurement and conceptualization of school competition and its competitive effects. Further, it makes clear the underlying assumptions and methods of operationalizing the measures, which will enable continued refinement in the conceptualization and measurement of school choice induced competition. The analysis using the evaluation rubric was framed around the context of Michigan, but nothing about the rubric itself is Michigan specific. It may prove informative to apply it to other states or contexts, with variations based on the context, enabling cross-context comparison. Finally, it will improve the comparability of studies across contexts and datasets. Competition amongst schools is the key mechanism by which school choice legislation is supposed to improve the educational system. It is therefore important to better understand the measurement of competition in order to evaluate the impact of school choice policies on the educational system.
Conclusion
The competition and quality relationship is complex in other sectors (e.g. Gaynor & Vogt, 2000; Katz, 2013), and education is unlikely to be any different (e.g. MacLeod & Urquiola, 2012). Even a perfect measure of competition would not guarantee finding a systematic impact of competition on quality, but that is not the point of improving our measure. Instead, an improved measure of competition allows us to assess whether there is a systematic impact of competition, whether varying levels of competition have different competitive effects on outcomes, and whether changes in competition levels are associated with attainment gaps in the system. Without a theoretically grounded, empirically refined measure of competition, the answers to these and other questions will remain unclear, as they fundamentally depend on the measure employed.
This paper suggests that the wide variation in our current measures of competition is cause for concern: using different measures of competition in the same regression model with the same dataset produces differing inferences. The second section of this paper takes up the important question of which measure(s) are the most promising for the educational sector. I have suggested a potential measure that is grounded in theory and updated with empirical evidence, based on the loss of students through choice, the quality of the available options, and measures of the local district context. In sum, this paper makes the case for a renewed effort to continue refining our understanding of the extent of competition facing schools and districts. The results of doing so are a better understanding of the role competition can play in the education system as well as a clearer literature base.
APPENDIX
Appendix A. Interview Protocol
IRB application ID#: i046324
Interview protocol: School district superintendents
Thank you for taking the time to talk with me about your role as district superintendent.
To be sure I am collecting consistent and accurate information, I may ask some questions that seem obvious or straightforward to you. I will primarily ask you questions about how different school choice policies impact the work you do. The interview should take between 45 minutes and an hour. Participation in this interview is completely voluntary. You have the right to say no to being interviewed, and you may change your mind at any time and withdraw. With your permission, I will record the interview to ensure accuracy. After the tape has been transcribed, it will be destroyed. In the transcript, all identifying information will be deleted.
Background
I am going to start off asking you some basic questions about your background and your job. This is for me to get a better understanding of who you are and what you do.
1. How many years have you been a superintendent?
2. What is your educational background? (Formal, professional, informal, etc.)
3. What did you do before becoming a superintendent in _____________?
Research question 1: How does school choice impact your district?
As you know, students can opt out of the district schools and enroll in another district or a charter school. I am particularly interested in how the multiple schooling options for students and families impact your district.
1. Do families in your district use these options to attend schools outside of your district?
a. About how many students do you think use this option? (Follow up if not sure: 10%, 50%, a few, a lot)
b. What types of schools do families choose to leave for?
i. Probe if not mentioned: do they leave for charter schools, for higher quality schools, a different SES makeup, etc.?
c. Which students typically leave the district?
i. Probe if not mentioned: Are they from a certain grade? A certain achievement score?
2. What sort of pressures does your district face due to school choice? (If not brought up by the superintendent: Within this environment, some districts face pressure to retain students, provide different services or programs, attract students from other districts, and compete. Do you think that your district faces these pressures?)
3. Which of these pressures (list), if any, do you worry about the most?
4. Do you view losing students to charter schools the same as losing students to another district?
Research question 2: How do the measures of competition used in the academic literature compare to how superintendents perceive competition?
Below is a list of various factors that you and others have suggested feed into the amount of competitive pressure a school district faces due to school choice. Please take a moment to look over the cards. Is there anything that you think should be removed?
Location of charter schools
The number of nearby charter schools
The number of students that leave the district schools
Number of years that students have been leaving the district
Quality of charter schools
Quality of neighboring districts
Programs offered by charter schools
Programs offered by neighboring districts
The decline in overall enrollment in district schools
Of those remaining, could you please place them in order of how much they influence the level of competition that your district faces, from the most important to the least important?
Research question 3: What are the school district responses to competition?
Now I would like to talk briefly about how your district has responded to these pressures. What, if anything, have you done in response to the pressures you associate with school choice?
a. What strategies have you used to retain students in your district?
i. (probe: programs, AP classes, marketing, etc.)
b. What strategies have you used to attract students to your district?
i. (probe: programs, AP classes, marketing, etc.)
c. Have these strategies been effective?
i. How do you measure the effectiveness of the strategies?
REFERENCES
Abernathy, S. F. (2008). School choice and the future of American democracy. University of Michigan Press.
Abraham, J., Gaynor, M., & Vogt, W. B. (2007). Entry and competition in local hospital markets. The Journal of Industrial Economics, 55(2), 265-288.
American Federation of Teachers (n.d.). AFT - A Union of Professionals - Charter Schools. Retrieved May 4, 2014, from https://www.aft.org/issues/schoolchoice/charters/
Andritsos, D. A., & Tang, C. S. (2014). Introducing competition in healthcare services: The role of private care and increased patient mobility. European Journal of Operational Research, 234(3), 898-909.
Arsen, D., Plank, D., & Sykes, G. (1999). School choice policies in Michigan: The rules matter. ERIC. Retrieved from http://files.eric.ed.gov/fulltext/ED439492.pdf
Asplund, M., & Sandin, R. (1999). Competition in interrelated markets: An empirical study. International Journal of Industrial Organization, 17(3), 353-369.
Bagley, C. (2006). School choice and competition: A public-market in education revisited. Oxford Review of Education, 32(3), 347-362.
Becker, H. S. (2007). Telling about society. University of Chicago Press.
Belfield, C. R., & Levin, H. M. (2002). The effects of competition between schools on educational outcomes: A review for the United States. Review of Educational Research, 72(2), 279-341.
Bettinger, E. P. (2005). The effect of charter schools on charter students and public schools. Economics of Education Review, 24(2), 133-147.
Bifulco, R., & Ladd, H. F. (2006). The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy, 1(1), 50-90.
Blumer, H. (1986). Symbolic interactionism: Perspective and method. University of California Press.
Bohte, J. (2004). Examining the impact of charter schools on performance in traditional public schools. Policy Studies Journal, 32(4), 501-520.
Booker, K., Gilpatric, S. M., Gronberg, T., & Jansen, D. (2008). The effect of charter schools on traditional public school students in Texas: Are children who stay behind left behind? Journal of Urban Economics, 64(1), 123-145.
Braha, D., Stacey, B., & Bar-Yam, Y. (2011). Corporate competition: A self-organized network. Social Networks, 33(3), 219-230.
Bresnahan, T. F. (1989). Empirical studies of industries with market power. Handbook of Industrial Organization, 2, 1011-1057.
Bresnahan, T. F., & Reiss, P. C. (1991). Entry and competition in concentrated markets. Journal of Political Economy, 977-1009.
Budanitsky, A., & Hirst, G. (2006). Evaluating WordNet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1), 13-47. doi:10.1162/coli.2006.32.1.13
Buddin, R., & Zimmer, R. (2005). Is charter school competition in California improving the performance of traditional public schools? Paper no. 146, National Center for the Study of Privatization in Education, New York, 2007.
Carnoy, M., Jacobsen, R., Mishel, L., & Rothstein, R. (2005). The charter school dust-up. Economic Policy Institute, Washington DC.
Carr, M., & Ritter, G. (2007). Measuring the competitive effect of charter schools on student achievement. National Center for the Study of Privatization in Education (Columbia University) Research Paper, 146.
Chubb, J. E., & Moe, T. M. (1990). Politics, markets, and America's schools.
Washington, DC: Brookings Institution.
Cremata, E., & Raymond, M. E. (2014). Paper presented at AEFP March 13-15, 2014 in San Antonio, TX. Retrieved from http://www.aefpweb.org/annualconference/download-39th
Davis, P. (2011). On the role of empirical industrial organization in competition policy. International Journal of Industrial Organization, 29(3), 323-328.
Dijkgraaf, E., Gradus, R. H., & de Jong, J. M. (2013). Competition and educational quality: Evidence from the Netherlands. Empirica, 40(4), 607-634.
Egalite, A. J. (2016, February 24). The competitive effects of the Louisiana Scholarship Program on public school performance. Available at SSRN: http://ssrn.com/abstract=2739783
Einav, L., & Levin, J. (2010). Empirical industrial organization: A progress report. Journal of Economic Perspectives, 24(2), 145-162.
Epple, D., & Romano, R. E. (1998). Competition between private and public schools, vouchers, and peer-group effects. American Economic Review, 33-62.
Epple, D. N., & Romano, R. (2003). Neighborhood schools, choice, and the distribution of educational benefits. In The economics of school choice (pp. 227-286). University of Chicago Press.
Epple, D., Figlio, D., & Romano, R. (2004). Competition between private and public schools: Testing stratification and pricing predictions. Journal of Public Economics, 88(7), 1215-1245.
Epple, D., Newlon, E., & Romano, R. (2002). Ability tracking, school competition, and the distribution of educational benefits. Journal of Public Economics, 83(1), 1-48.
Ferreyra, M. M. (2007). Estimating the effects of private school vouchers in multidistrict economies. The American Economic Review, 789-817.
Figlio, D., & Hart, C. (2014). Competitive effects of means-tested school vouchers. American Economic Journal: Applied Economics, 6(1), 133-156.
Fligstein, N., & Dauter, L. (2007). The sociology of markets. Annual Review of Sociology, 33, 105-128.
Friedman, M. (1955). The role of government in education. Rutgers University Press.
Gaynor, M. (2006). What do we know about competition and quality in health care markets? (No. w12301). National Bureau of Economic Research.
Gaynor, M., & Vogt, W. B. (2000). Antitrust and competition in health care markets. Handbook of Health Economics, 1, 1405-1487.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. London: Weidenfeld and Nicolson.
Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 481-510.
Greene, K. V., & Kang, B. G. (2004). The effect of public and private competition on high school outputs in New York State. Economics of Education Review, 23(5), 497-506.
Gresham, A., Hess, F., Maranto, R., & Milliman, S. (2000). Desert bloom: Arizona's free market in education. Phi Delta Kappan, 81(10), 751-757.
Harrison, J., & Rouse, P. (2014). Competition and public high school performance. Socio-Economic Planning Sciences, 48(1), 10-19. doi:10.1016/j.seps.2013.11.002
Harwell, M., & LeBeau, B. (2010). Student eligibility for a free lunch as an SES measure in education research. Educational Researcher, 39(2), 120-131.
Hastings, J. S., Kane, T. J., & Staiger, D. O. (2005). Parental preferences and school competition: Evidence from a public school choice program. No. w11805. National Bureau of Economic Research.
Haynes, S. N., Richard, D., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7(3), 238.
Hess, F. M. (2002). Revolution at the margins: The impact of competition on urban school systems.
Brookings Institution Press.
Hess, F. M., Maranto, R. A., & Milliman, S. (2001). Coping with competition: The impact of charter schooling on public school outreach in Arizona. Policy Studies Journal, 29(3), 388-404.
Holmes, G. M., DeSimone, J., & Rupp, N. G. (2003). Does school choice increase school quality? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w9683
House File 700/Senate File 467, Laws of Minnesota 1991, Chapter 265, Article 9, Section 3. Retrieved from https://www.revisor.mn.gov/laws/?id=265&year=1991&type=0
Hoxby, C. M. (1994). Do private schools provide competition for public schools? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w4978
Hoxby, C. M. (2003). School choice and school productivity. Could school choice be a tide that lifts all boats? In The economics of school choice (pp. 287-342). University of Chicago Press.
Hsieh, C.-T., & Urquiola, M. (2006). The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program. Journal of Public Economics, 90(8), 1477-1503.
Imberman, S. A. (2007). The effect of charter schools on non-charter students: An instrumental variables approach. University of Houston.
Imberman, S. A. (2011). The effect of charter schools on achievement and behavior of public school students. Journal of Public Economics, 95(7-8), 850-863.
Jabbar, H. (2015). Education marketplace in post-Katrina New Orleans. American Educational Research Journal, 0002831215604046.
Jackson, C. K. (2012). School competition and teacher labor markets: Evidence from charter school entry in North Carolina. Journal of Public Economics, 96(5-6), 431-448.
Joshi, P. (2014). Paper presented at AEFP March 13-15, 2014 in San Antonio, TX. Retrieved from http://www.aefpweb.org/annualconference/download-39th
Joshi, P. (2014). Paper presented at AERA April 3-7, 2014 in Philadelphia, PA. Retrieved from http://works.bepress.com/pjoshi/
Katz, M. L. (2013). Provider competition and healthcare quality: More bang for the buck? International Journal of Industrial Organization, 31(5), 612-625.
Kim, W. J., & Youngs, P. (2013). The impact of competition associated with charter schools and interdistrict school choice policies on educators and schools. International Journal of Quantitative Research in Education, 1(3), 316-340.
Lavy, V. (2010). Effects of free choice among public schools. Review of Economic Studies, 77(3), 1164-1191.
Levacic, R. (2004). Competition and the performance of English secondary schools: Further evidence. Education Economics, 12(2), 177-193.
Lijesen, M. G. (2004). Adjusting the Herfindahl index for close substitutes: An application to pricing in civil aviation. Transportation Research Part E: Logistics and Transportation Review, 40(2), 123-134.
Linick, M. A. (2014). Measuring competition: Inconsistent definitions, inconsistent results. Education Policy Analysis Archives, 22(16).
Loeb, S., Valant, J., & Kasman, M. (2011). Increasing choice in the market for schools: Recent reforms and their effects on student achievement. National Tax Journal, 64(1), 141-164.
Lubienski, C. (2005). Public schools in marketized environments: Shifting incentives and unintended consequences of competition-based educational reforms. American Journal of Education, 111(4), 464-486.
Lubienski, C. (2007). Marketing schools: Consumer goods and competitive incentives for consumer information. Education and Urban Society, 40(1), 118-141.
MacLeod, W. B., & Urquiola, M. (2012). Competition and educational productivity: Incentives writ large. Retrieved from www.econstor.eu/bitstream/10419/69385/1/732555515.pdf
Maranto, R., Hess, F., & Milliman, S. (2001).
Small districts in big trouble: How four Arizona school systems responded to charter competition. Teachers College Record, 103(6), 1102-1124.
Maranto, R., Milliman, S., & Stevens, S. (2000). Does private school competition harm public schools? Political Research Quarterly, 53(1), 177-192.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Sage.
Misra, K., Grimes, P. W., & Rogers, K. E. (2012). Does competition improve public school efficiency? A spatial analysis. Economics of Education Review, 31(6), 1177-1190.
National Alliance for Public Charter Schools (NAPCS). (2014). Measuring up to the model: A ranking of state charter school laws. Fifth Edition. Washington D.C.: NAPCS.
National Alliance for Public Charter Schools (NAPCS). (n.d.). The public charter schools: Dashboard. Retrieved from http://dashboard.publiccharters.org/dashboard/home
National Conference of State Legislators (NCSL). (n.d.). Charter schools. Retrieved from http://www.ncsl.org/research/education/charter-schools-overview.aspx
National Quality Forum. (n.d.). Measure evaluation criteria. Retrieved from https://www.qualityforum.org/docs/measure_evaluation_criteria.aspx
Nechyba, T. J. (2003). Introducing school choice into multidistrict public school systems. In The economics of school choice (pp. 145-194). University of Chicago Press.
Ni, Y. (2009). The impact of charter schools on the efficiency of traditional public schools: Evidence from Michigan. Economics of Education Review, 28(5), 571-584.
Ni, Y., & Arsen, D. (2013). The competitive effects of charter schools on public school districts. In C. A. Lubienski & P. C. Weitzel (Eds.), The charter school experiment: Expectations, evidence, and implications (pp. 93-120). Cambridge, MA: Harvard Education Press.
Podolny, J. M. (1993). A status-based model of market competition. American Journal of Sociology, 829-872.
Sass, T. R. (2006). Charter schools and student achievement in Florida. Education Finance and Policy, 1(1), 91-122.
Snyder, T. D., & Dillow, S. A. (2013). Digest of education statistics, 2012. NCES 2014-015. National Center for Education Statistics. Retrieved from http://eric.ed.gov/?id=ED544576
Sobel, R. S., & King, K. A. (2008). Does school choice increase the rate of youth entrepreneurship? Economics of Education Review, 27(4), 429-438.
Spradley, J. P. (1979). The ethnographic interview. New York: Holt, Rinehart and Winston.
Teddlie, C., & Yu, F. (2007). Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research, 1(1), 77-100.
Tiebout, C. M. (1956). A pure theory of local expenditures. The Journal of Political Economy, 416-424.
West, M. R., & Woessmann, L. (2010). "Every Catholic child in a Catholic school": Historical resistance to state schooling, contemporary private competition and student achievement across countries. The Economic Journal, 120(546), F229-F255.
Woods, P. A. (2000). Varieties and themes in producer engagement: Structure and agency in the schools public-market. British Journal of Sociology of Education, 21(2), 219-242.
Zimmer, R., & Buddin, R. (2009). Is charter school competition in California improving the performance of traditional public schools? Public Administration Review, 69(5), 831-845.
Paper 3: Evaluating the systemic effects of school choice induced competition: Student outcomes in Michigan
Introduction
Over the past two decades, many states and municipalities adopted various school choice policies as a means to improve educational systems.
As discussed in the first paper, the various policy logics undergirding school choice suggest that the benefits accrue to all students, not just those remaining in traditional public schools (TPS) or attending a school of choice. This suggests the importance of evaluating the systemic effects of competition: the effect of competition on all students regardless of the school attended. However, the existing research focuses primarily on the response of TPS to competition (e.g. Figlio & Hart, 2014; Sass, 2006; Zimmer, Gill, Booker, Lavertu, & Witte, 2012) and the comparative performance of TPS and choice schools (e.g. Angrist, Bettinger, & Kremer, 2006; Bifulco & Ladd, 2006; CREDO, 2013). There are only three papers I am aware of which look at the systemic effects of competition, two coming from other countries and one cross-national analysis.17 This paper produces the first evidence on the systemic effects of competition on student test scores in Michigan and in the domestic literature base.
17 Muralidharan and Sundararaman (2015) recently published experimental evidence from Andhra Pradesh on the impact of vouchers on student outcomes and the spillover effects. Through the use of a two-stage lottery-based design in the provision of vouchers, where vouchers were distributed via lottery to villages and then administered via a lottery within a village to students, they compared winners and losers of the lottery within a village and then leveraged differences in outcomes between villages assigned to participate in the voucher program and those without the voucher system. This study evaluates the systemic effects of introducing a voucher system (a net positive effect on all students, driven by students winning the lottery) rather than the systemic effects of competition, as they primarily evaluate the comparative performance of the private and public sectors.
This study operationalizes the promising measure of competition developed in the second paper and applies it to the Michigan context to evaluate the systemic effects of competition. I address two policy relevant questions: 1) what impact does competition have on the average student outcomes for students within an educational market? and 2) what impact does competition have on the variation of student outcomes within an educational market? Further, this paper adds to the dialogue surrounding choice and competition by suggesting a move from comparing public and charter schools to understanding the system as a whole. The use of the promising measure of competition shows the importance of accounting not just for the loss of students via school choice mechanisms, but also for where those students go and for the context of their district of residence. The results suggest that the systemic effect of competition on average MEAP scores is not universally positive: there is a null, or potentially negative, impact on the overall average, with mixed impacts on sub-contexts and subgroups. The systemic effect of competition on the spread of test scores varies by context and by the schools students leave for. Importantly, the mixed results do not only show a closing of the gaps or a null impact: there is evidence that competition is associated with increases in test score gaps for students. While this study focuses only on test score outcomes, these findings should give everyone interested in school choice policy pause.
They demonstrate that the story is not as simple as school choice improves student test scores, nor is it as simple as school choice harms the educational system. It appears that there are important contextual factors which are associated with changes in the average and variation of test scores. Further, the use of the promising measure of competition makes several key contributions to the literature. First, the measure allows losses to different sectors to have different impacts on the system. Second, the relative quality of the schools attended by exiting students is included in the measure, accounting for a key factor district superintendents reported responding to. Finally, the measure accounts for contextual aspects which emerged from the literature and the interviews with district superintendents. In sum, this paper highlights a number of important avenues for research evaluating school choice policies: a focus on generating systemic effects studies, which ought to include non-test score outcomes; probing the contextual factors associated with positive impacts of competition to open the black box of district responses to competition; and the need for further application and refinement of the measure of competition.
Literature review
The following literature review proceeds in two stages. The first section explores the extant school competition literature. It explores in detail the few systemic effects studies and then briefly summarizes the literature addressing other questions related to school choice induced competition. The second section uses the literature to develop the conceptual framework for understanding the systemic effects of school choice induced competition.
Review of school choice induced competition literature
As discussed in Paper 1, and reintroduced in Figure 1, studies of the competitive effects of school choice are made up of two distinct types: 1) systemic effects of competition and 2) TPS effects of competition. Studies of the systemic effects of competition examine the effects of competition on the educational outcomes of all students within a given educational market regardless of the school attended. Studies of the TPS effects of competition evaluate the effects of competition on just those students who remain in traditional public schools. Studies which assess the systemic effects of school competition are rare in the literature, domestic or international. While competition is argued to improve the schooling options for all students, a majority of the school choice literature, whether on vouchers (e.g. Chakrabarti, 2013; Angrist et al., 2006), charter schools (e.g. Bifulco & Ladd, 2006), or intra/inter-district transfers (e.g. Holme & Richards, 2009), focuses on comparisons of effectiveness between public schools and schools of choice.
Figure 17. Competitive effects of school choice studies are comprised of two main components: systemic effects studies and TPS effects studies.
The remaining domestic school competition literature explores the TPS effects of competition (e.g. Arsen & Ni, 2012; Lubienski, 2005; Sass, 2006; Zimmer et al., 2012). TPS effects studies seek to understand the changes in productivity of TPS, whether new innovations occur within the TPS, who leaves or remains in the TPS system, and so on. These studies contribute to our understanding and address important questions. However, focusing solely on comparative questions, i.e.
the relative effectiveness of schools, or on the TPS effects of competition limits the types of policy relevant questions which can be asked. As mentioned in Paper 2, there are relatively few empirical studies I am aware of which directly assess the systemic effects of competition. The best evidence comes from three international studies: a cross-national study (West & Woessmann, 2010), a study from Chile (Hsieh & Urquiola, 2006), and a study from the Netherlands (Dijkgraaf, Gradus, & de Jong, 2013).
West and Woessmann (2010) use data from the 2003 Program for International Student Assessment (PISA) to provide comparative evidence from 29 countries about how the share of students attending private schools in a country relates to the math, reading, and science test scores of students. West and Woessmann (2010) employed an instrumental variable approach, due to the likely endogeneity of private school enrollment to public school quality and the potential for omitted variables related to the share of private school enrollment. They used the population share of Catholics in 1900 as an instrument for the extent of private schooling in a country, arguing that for historical reasons it is related to the number of private schools but should be unrelated to student achievement other than through the effect of competition. They found that a higher share of private school enrollment increased the national average in all three subjects. For math, they found an increase of 9.1 percent of a standard deviation with a 10% increase in the share of national enrollment in private schools, with smaller but statistically significant increases for reading and science. Further, the beneficial effect of private school competition accrued similarly to public and private school students. However, the measure of competition in this study, private school enrollment share, represents a more stable, less intrusive type of competition than most choice policies would introduce.
The Chilean voucher program provides evidence from a single country with regional variation in competition levels over time. The Chilean voucher system began in 1981 and extended a flat rate voucher to all students in the country. This is a unique system, as all schools (public, private secular, and private religious) are eligible to receive the government funded voucher, but private schools can charge more than the amount of the voucher. Hsieh and Urquiola (2006) constructed panel data for 300 communes in Chile from 1982-1996, including measures of the average commune levels of student achievement on a national test, grade repetition rates, and years of schooling, as well as the communes' demographic makeup. The communes served as proxies for the local educational market, and the averages included information from all students in the commune. The average commune had 27 schools (18 public, 7 private voucher, and 2 tuition charging), was 55 square kilometers, and had a population of 39,000 people. Hsieh and Urquiola (2006) argue that by looking at all students in a commune, their estimates are not biased by the sorting of students between schools. Exploiting differences between communes in the growth rates of private schools and enrollment, they employed a fixed effects approach to examine how changes within a commune in private share were related to changes in outcomes, controlling for previous and concurrent trends. As private enrollment, as a share of commune enrollment, went up by one standard deviation, math scores decreased by nearly a quarter of a standard deviation. The point estimates for reading scores were primarily negative, although never statistically significant.
Over time, a discernible negative impact on median TIMSS scores emerged, despite the relatively large economic growth of Chile during this period, adding to the evidence that competition harmed the regional and national education systems (Hsieh & Urquiola, 2006).
Dijkgraaf, Gradus, and de Jong (2013) looked at the effects of competition on secondary school quality in the Netherlands, within geographically defined educational markets, measured by central exam scores, the share graduating on time, and the share graduating. The Netherlands has had free parental choice since 1917. Public and private schools must adhere to similar rules and are fully financed by the government based on the number of students enrolled. They argue that it can be thought of as being as close to the ideal test of a free market, voucher-style system as is possible. Using a micro-level panel data set covering the period from 2002-2006, they employed pooled OLS including year and school dummies. Competition was measured using the Herfindahl-Hirschman Index (HHI), which captures the concentration of enrollment in a given geographic region as the sum of the squared enrollment shares of the schools in the market (HHI = Σk sk², where sk is school k's share of the market's enrollment). The HHI approaches 0 when many schools of equal size operate in a region (high competition) and equals 1 when a single school enrolls all students (no competition at all). Dijkgraaf et al. (2013) found small negative impacts, or no impacts, but never positive impacts of competition on quality measured by scores on a central exam.
The Netherlands and Chile studies suggest that there is either no systemic effect or a negative systemic effect of competition (Dijkgraaf et al., 2013; Hsieh & Urquiola, 2006). West and Woessmann (2010) find a significant positive effect of competition on quality and efficiency. The results of these three studies are far from conclusive, particularly for the U.S. context, and each uses a different measure of competition, further limiting our ability to synthesize the results across studies.
As discussed in the previous dissertation papers, the remaining research addresses whether or not TPS or charter schools innovate, compares the TPS sector with the choice sector, or explores the TPS effects of competition. With some notable exceptions such as KIPP schools, the limited literature on innovations suggests relatively few innovative practices emerging in charter schools (e.g. Ausbrooks et al., 2005; Goldring & Cravens, 2008; Horn & Miron, 2000; Preston, Goldring, Berends, & Cannata, 2012). The response of the TPS sector to competition also yields no systematic evidence that competition spurs innovative practices beyond increased outreach and marketing efforts by TPS (e.g. Gresham, Hess, Maranto, & Milliman, 2000; Hess, 2002; Hess, Maranto, & Milliman, 2001; Loeb, Valant, & Kasman, 2011; Lubienski, 2005, 2007; Maranto, Hess, & Milliman, 2001). Studies which compare charter school performance to that of the TPS sector find that charter schools on average perform similarly to, if not slightly better than, public schools, with significant variation amongst charter schools (e.g. Betts & Tang, 2014; CREDO, 2013). The second paper delves deeply into the current literature on the TPS effects of competition; in sum, the evidence on how TPS respond is mixed, likely due in part to different measures, different policy designs, and different methods.
Conceptual framework for evaluating the systemic effects of competition
I draw from the economic literature, specifically production function models, to conceptualize the systemic impact of competition on educational outcomes.
While most educational production functions are applied at the school or student level, I apply the framework to the students assigned to a district. Under this framework, the average outcomes of all students that reside within the district boundaries are a function of family, school, and district inputs as well as the extent of competition at the district level (a stylized version is sketched at the end of this passage). I choose the group of students that reside within the district as the primary unit of analysis for a number of reasons. Defining the unit of analysis as the group of students which reside in the district has appeal conceptually as well as analytically. First, the system thus defined allows for a consistent comparison across contexts with different levels of student movement. Second, student assignment to districts represents the traditional educational boundaries which still undergird the system of education in Michigan. Children in Michigan are assigned a home district based upon their home residence. The district still represents the local educational authority and makes decisions regarding the educational system. Districts also represent distinct geographical boundaries, marking clear distinctions between the assignment of children to groups.
Analytically, focusing on systemic effects solves a key problem for evaluating the impact of school choice on the educational system. In their 2003 NBER working paper, Hsieh and Urquiola demonstrate that if school choice leads to student sorting, and the quality of a school depends in part on the students it enrolls, then the impact of school choice cannot be assessed by looking at the response of TPS alone. Muralidharan and Sundararaman (2015) suggest that experimental studies of school choice based on lotteries have not sufficiently accounted for two core limitations: a) the effects of changes to peer composition, class size and other per-student resources, and changes in the actions of school staff caused by students exiting a school; and b) experimental evidence covers only those students who applied to the lottery and cannot address students who do not apply or who are already in the choice school. We do have sufficient evidence in the U.S. context to believe that school choice can be associated with sorting (e.g. Bifulco, Ladd, & Ross, 2009; Chakrabarti, 2006; Ni, 2012) and that peers do affect student outcomes (e.g. Angrist & Lang, 2004; Carrell & Hoekstra, 2010; Carrell, Sacerdote, & West, 2013; Imberman, Kugler, & Sacerdote, 2012). While these studies do not imply that all school choice policies or environments lead to sorting, or that peer effects always exist, they do imply that the limitations Muralidharan and Sundararaman (2015) identify matter for drawing inferences about student outcomes.
TPS effects studies, those focused on the response of TPS to competition, only evaluate the impact of school choice on one portion of the educational system: those students still in TPS. This approach allows the researcher to focus on the response of what is typically the primary educational provider in any system, traditional public schools. Examining the response of TPS to competition provides an empirical test of a set of theoretical hypotheses: a) public schools are inefficient due to having a virtual monopoly on the provision of education; b) public schools are not responsive to the needs of children and families due to that monopoly; and so on. The TPS effects studies directly test these hypotheses and others, helping to improve our understanding of the educational system.
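A stylized statement of the production function framework described above; the notation and functional form are illustrative assumptions rather than the estimating equation used later in the paper.

```latex
% Systemic outcome: the average outcome of all students residing in
% district i at time t, regardless of the school they attend.
\begin{equation}
  \bar{Y}_{it} = f\left(F_{it},\, S_{it},\, D_{it},\, \mathit{Comp}_{it}\right)
\end{equation}
% F_{it}: family inputs; S_{it}: school inputs; D_{it}: district inputs;
% Comp_{it}: the extent of school choice induced competition facing district i.
```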
However, as Hsieh and Urquiola (2003, 2006) effectively argue, if sorting occurs and peer effects exist, it is hard to disentangle how much of the perceived response of TPS is due to changes in practices and how much is due to sorting. While research has noted the importance of accounting for sorting and peer effects, this remains a limitation of the TPS effects literature, given that it relies on instrumental variable approaches (e.g. Imberman, 2011) or on school choice policy implementations which likely operate concurrently with other policies (e.g. incentives also likely play a role in Figlio & Hart, 2014). Using the systemic effects approach provides another means to solve the analytic problem of sorting and peer effects by subsuming them into the overall effect of school choice. Further, the systemic effects approach allows us to understand the overall impact on all students regardless of the reason they remain in a TPS, switch to a charter school or other district, or remain in their school of choice. The systemic effects approach, applied to school competition, produces an estimate of the impact of competition on all students in an educational market regardless of the school attended. However, the systemic effects approach obscures whether changes in outcomes, efficiency, or innovation come from TPS responses, a better match of services to students, the creation of new programs, peer effects, and so on. The benefits and limitations of both the TPS effects and the systemic effects approaches demonstrate the value of using both to evaluate the competitive effects of school choice. Currently, the school choice literature has a preponderance of TPS effects studies; contributing the perspective of the systemic effects approach will further our understanding of the overall competitive effects.
A further distinction needs to be made between a systemic effects approach and a general equilibrium approach. While the two have conceptual overlap, as both explore the expected outcome changes for all students in the system, there are important distinctions between the two literatures. General equilibrium studies typically assume large policy shifts, such as the introduction of vouchers or of charters, and attempt to model what will occur over the long run (e.g. Epple & Romano, 2003; Ferreyra, 2007; Nechyba, 2000, 2003). There is empirical work (e.g. Epple, Figlio, & Romano, 2004; Hastings, Kane, & Staiger, 2005, 2009) which tests the theoretical predictions of the general equilibrium literature. These studies look at the responses of public and private schools to voucher systems, typically separately (e.g. Epple et al., 2004; Hastings et al., 2005, 2009). Systemic effects studies look at shifts in system level outcomes as marginal changes occur in the use of choice and the competitive pressure associated with it. The systemic effects approach is more applicable to the current Michigan context, and to most state contexts, given that some level of choice-based reform already exists and much of the debate currently centers on expanding, contracting, or regulating schools of choice rather than eliminating or establishing them.
In the production function framework, the effectiveness and efficiency of the school system is measured in terms of student outcomes. Typically, these outcomes are examined in relationship to the quality or quantity of the specific inputs of interest.
In the case of competition, the hypothesized effects on outcomes vary. The five panels of Figure 2 present empirical and theoretical conceptualizations of how competition may impact the quality of schooling for all students that reside in the district, regardless of the location and type of school attended. Each panel represents a general equilibrium scenario: the potential long run impact of a given level of competition. The y-axis in panels A through E represents a measure of the overall quality of a given school system in the long run. The x-axis represents the continuum from no competition in a district to full competition. The line indicates the hypothetical pattern of quality for a school system with varying degrees of choice. Quality here is thought of in broad terms for the sake of exposition.

Figure 18. Models of the systemic effect of competition on student outcomes

Panel A represents the argument that competition has an essentially positive, monotonic effect on school quality: greater levels of competition lead to higher quality of schooling for all students. This panel echoes the findings of studies of the impact of competition on traditional public school students (e.g. Bohte, 2004; Carr & Ritter, 2007; Holmes, DeSimone, & Rupp, 2003; Hoxby, 1994, 2003; Sass, 2006) and one systemic effect study (West & Woessmann, 2010) which find positive effects of competition on varied outcomes such as test scores, graduation rates, wages, and so on. Panel A implies that the introduction of any amount of choice will increase the quality of the school system, though there are likely different marginal returns to increasing competition at various points along the spectrum.

There is also evidence supporting a counter argument to that expressed in Panel A: the introduction of any amount of competition will harm the overall school system. Empirical evidence suggesting Panel B may in fact represent the systemic effects of competition exists both from systemic effects studies (Dijkgraaf et al., 2013; Hsieh & Urquiola, 2006) and from studies of the impact of competition on TPS (e.g. Arsen & Ni, 2012b; Bifulco & Ladd, 2006; Maranto et al., 2000; Ni, 2009). Here, the evidence suggests that increases in competition have a monotonically negative impact on outcomes. The arguments presented in the studies which form the empirical basis for Panels A and B are more nuanced than the figures suggest, but each set of findings presents evidence suggesting the potential monotonicity of competition, be they systemic effects studies (Dijkgraaf et al., 2013; Hsieh & Urquiola, 2006; West & Woessmann, 2010) or studies of the responses of TPS to competition.

Panel C suggests that there is no systematic association between levels of competition and the quality of schooling. The systemic effects of competition may be positive in some locations, negative in others, and non-existent in others still. The mixed nature of the literature on the response of TPS to competition may echo the underlying story for systemic effects: the mixed evidence suggests that this panel may best represent reality, as context and policy design likely matter (e.g. Arsen, Plank, & Sykes, 1999; Carnoy, Jacobsen, Mishel, & Rothstein, 2005; Hess, 2002). Further, recent research provides evidence that charter schools do not allocate resources differently than TPS (Arsen & Ni, 2012a) nor are they more efficient (Gronberg, Jansen, & Taylor, 2012), that TPS respond to competition through increased marketing and outreach efforts (e.g.
Gresham et al., 2000; Hess et al., 2001; Lubienski, 2005, 2007; Maranto et al., 2001), and that the quality of charter schools varies from school to school (e.g. CREDO, 2013). Together, these behaviors of both charters and TPS point to the possibility that the benefits and losses due to competition are either idiosyncratic or do not affect the quality of schooling provided. Taken together, Panel C provides an empirically defensible model of there being no systemic effect of competition.

Panels D and E represent two further models which are not systematically explored in the literature. They differ from Panels A through C by suggesting that the systemic effects of competition may not be unidirectional. Panel D shows that the introduction of a limited amount of competition could be harmful to the overall system, potentially through student sorting or funds being shifted away from academic programs. However, once a critical mass of competition exists, the efficiency gains will outweigh any negative impacts. Panel E assumes that a shift to high levels of competition will have deleterious effects on overall quality. However, the model suggests a minimal extent of competition can provide a boost to the quality of a school system, potentially by encouraging instructional innovation, efficiency, and careful attention to the needs of students and families. In contrast to Panel D, once a critical mass of competition exists, the negative systemic effects of competition outweigh the positive aspects. Thus, the quality of the school system begins to decline as more and more choice is added.

The panels in Figure 2 represent the average systemic effect of competition on school quality. However, Figure 2 may mask important tradeoffs between efficiency and equity. The panels in Figure 3 illustrate the mathematical fact that a variety of distributions can have equivalent means. Panel A shows what would happen if competition had no effect on average outcomes but narrowed the variation in student outcomes within the district (e.g. Epple & Romano, 1998; Nechyba, 2000, 2003). Panel B models an increase in variation associated with increasing competition but no impact on average outcomes at the district level (e.g. Epple & Romano, 1998; Hastings et al., 2005, 2009). Finally, Panel C represents the case where competition has no effect on average outcomes or on variation in the district (e.g. Muralidharan & Sundararaman, 2015).

Figure 19. Systemic effect of competition on variation in student outcomes

Taken together, Figures 2 and 3 suggest that in order to understand the systemic effects of competition on educational outcomes, attention ought to be paid both to the overall pattern of outcomes and to the associated variation. Deciding between different policy options (Figure 4) which differ in variation and average effect is a political decision, not an empirical one. However, generating an empirical understanding of the potential tradeoffs, i.e. an increase in average outcome coupled with an increase in variation, ideally enables more informed political decision making.

Figure 20. Systemic effect of different policy options on average quality of schooling and variation

The above conversation implies that productivity changes in the school system occur relative to the amount of competitive pressure. However, it is possible to see changes in both the average outcomes and the gaps in outcomes without any underlying changes in the
productivity of schools; these effects would be driven by either sorting or peer effects. It could be that some students attend schools which improve their outcomes, all else equal, while others remain in their home district and are unaffected by the student loss, leading to an increase in the system average. Students could, conversely, attend lower performing schools and see a drop in their scores, while those remaining in their home TPS stay the same, leading to a decrease in average scores. Or it could be that the losing district changes its programs, which leads to an increase or decrease in average outcome. Each of the above can be applied to interpreting the gaps as well. In fact, one might expect them to work together. Figure 5 demonstrates this by providing potential non-productivity-altering explanations for nine possible outcome patterns. Across the top of Figure 5 are the three possible results for average outcomes, and the rows are the three possible results for gap outcomes. Each cell offers a brief potential explanation for whatever pattern of results is seen in systemic effects studies which look at both the average and the variation in outcomes.

Table 12. Possible interpretations of results with no changes in traditional public schools

The top left cell, a decrease in the average but an increase in the gaps, could indicate that lower performing students are using choice but are seeing negative impacts on their educational outcomes in the choice school; all students remaining in their home school are unaffected. If there is an increase in the average and an increase in the gaps, it may be that higher performing students are leaving and the benefits are accruing to them. A stable average may indicate that any impact on higher performing students is offset by an equal in magnitude but opposite in direction impact on lower performing students. Stable gaps indicate that all students are impacted positively, negatively, or not at all equally. A decrease in average coupled with a narrowing of the gap may indicate that higher scoring students are leaving and seeing negative impacts of choice. If the average increases while the gaps decrease, students are using choice to improve their schooling while other students remain unaffected. These all represent ways that a systemic approach, which focuses not just on the average but also on the gaps, can help us better understand the effect of school choice on the educational system.

The above discussion highlights the limited literature on systemic effects. It also makes clear the importance of assessing the systemic effects of competition on both the average and the variation in outcomes, as well as the monotonicity of those effects. This paper directly addresses the lack of systemic effects studies in the literature and each of these concerns by answering the following two questions:

1. What are the systemic effects of competition on the average MEAP scores for all students residing in a given district? Are the average systemic effects on test scores monotonic?

2. What are the systemic effects of competition on the variation of MEAP scores for all students residing in a given district? Are these systemic effects on variation monotonic?

These questions are focused on MEAP scores. They do not address whether there are systemic effects of competition on non-test score outcomes. However, as this is the first systemic effects study I am aware of, in Michigan and domestically, MEAP scores are a reasonable starting place.
Data & Variables

Data

The data for this paper come from three secondary data sources: a) student level data from the Michigan Department of Education (MDED), b) district financial data from the Center for Educational Performance and Information (CEPI), and c) the Common Core of Data (CCD). I constructed a four-year panel data set which covers all students in Michigan from the 2009-10 school year to 2012-13. MDED covers the universe of students attending publicly funded schools in Michigan; a unique identifier is assigned to each student upon entry to the education system, allowing the enrollment history, the Michigan Educational Assessment Program (MEAP) standardized test scores for students in grades 3 through 8 in math and reading, and the standard suite of student level variables to be captured for the entirety of the state over this time (I follow the process used by Cowen, Creed, and Keesler (2015) to address duplicate entries). I focus only on those students who attend either a TPS or a charter school (5,738,460 student-year observations), leaving out alternative educational programs, students in private schools, homeschooled children, and so on. This restriction is necessary given the limited coverage of these students in the MDED data set and the outcomes explored in this paper (students attending private schools or who are homeschooled are not required to take the MEAP tests). However, omitting these students may influence the following analyses if the level of competition is related both to enrollment in a non-TPS or charter school and to the outcomes. Any inducement of students to either re-enter or exit the publicly funded general education system will be subsumed into the overall impact on the system. The MDED data set also identifies each student's residentially assigned district, though the Michigan Department of Education only provided this information for the years 2009-2012; while I have access to more years of data (2005-2012), the analyses in this paper focus on the four years with this key piece of information.

Financial information for each district in Michigan came from CEPI data sets publicly available from www.MISchoolData.org for the school years 2008-09 through 2012-13. While I am able to construct the number of students assigned to a particular district from 2009 to 2012 using MDED data, I use CEPI data from 2008 to 2012 for district enrollment numbers so I can use enrollment data for 2008. The CCD provides information on the location of TPS and charter schools used in the creation of the various measures of competition. The CCD also provided the data used to create pupil teacher ratios for each school and district.

The district of residency proxies the educational market for this study of systemic effects: the effect of competition on all students within a given educational market regardless of school attended. As such, the subsequent analysis takes place at the district of residence level for the years 2009-10 to 2012-13 (2,190 district-year observations). I have also run the analyses at the student level, clustering the errors at the resident district; the results are similar.

Variables. Below I discuss each variable in turn, describing how it was created and the data used. Three tables display the variables discussed below, with Table 1 presenting the outcomes, Table 2 containing information related to the measures of competition, and Table 3 displaying the control variables.

Outcomes of interest. As this study focuses primarily on the systemic effects of competition on student test score outcomes, I utilize the student MEAP scores in 3rd through 8th grade on the Math and Reading tests at the student and district of residence level.
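As an illustrative sketch of this assembly, assuming hypothetical file and column names (the restricted MDED, CEPI, and CCD extracts are not public, so everything below stands in for the actual fields), the district-of-residence panel might be built as follows:

```python
import pandas as pd

# Hypothetical file and column names; this only sketches the merge logic
# described above, not the actual restricted-use extracts.
students = pd.read_csv("mded_students.csv")  # one row per student-year
finance = pd.read_csv("cepi_finance.csv")    # district-year general fund data
schools = pd.read_csv("ccd_schools.csv")     # school-year pupil-teacher ratios

# Keep only TPS and charter enrollees, mirroring the sample restriction above.
students = students[students["school_type"].isin(["TPS", "charter"])]

# Collapse to the unit of analysis: the district of residence, by year.
panel = (students
         .groupby(["resident_district", "year"])
         .agg(n_residents=("student_id", "size"),
              avg_math=("std_meap_math", "mean"),
              avg_reading=("std_meap_reading", "mean"))
         .reset_index())

# Attach CEPI finance data and a CCD-derived district attribute. The paper
# weights district attributes by resident enrollment; the simple district-year
# mean below is only an approximation for illustration.
ptr = (schools.groupby(["district", "year"])["pupil_teacher_ratio"]
       .mean().reset_index()
       .rename(columns={"district": "resident_district"}))
panel = (panel.merge(finance, on=["resident_district", "year"], how="left")
              .merge(ptr, on=["resident_district", "year"], how="left"))
```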
The decision to use only state standardized tests stems from the following reasons. First, choosing one set of outcomes rather than a variety allows the study to remain focused on the argument for systemic effects. Second, using standardized test scores enables me to examine the distributional effects of school competition within a district by examining gaps in the outcome in a way that other measures, such as dropout or graduation rates, could not. Third, MEAP scores allow me to look at the impact of school competition on the outcomes of six grades as opposed to only the end of secondary school (graduation and dropout rates) or a particular grade (ACT, Michigan Merit Exam). Finally, a large number of school competition studies have focused primarily on standardized test scores, so this effort extends that tradition. I do recognize the need to understand other outcomes that parents, policy makers, and researchers care about, such as good study habits and self-discipline, critical thinking, and preparation for college. The focus on MEAP scores represents a key limitation of the study. Future work exploring the factors listed above, parental involvement, teachers, and other factors will further contribute to our understanding of the systemic effects of competition.

Table 13. Description of each of the outcome measures

Table 14. Description of each of the measures of competition used

Creating the main outcomes of interest was a two-step process. First, I standardized student MEAP scores within a subject by grade and year at the state level to create student level standardized test scores (state_standardizedMEAPmath and state_standardizedMEAPreading). Standardizing in this way enables me to compare across grades and years but obscures the levels obtained by each student. These scores represent the outcomes of interest for the competitive effects models as well as the student level systemic effects specifications. To create the average standardized MEAP scores at the district of residence level, I took the mean of the student level standardized test scores for all students residing in a given district. This produced the variables averagedistrictmeapmath and averagedistrictmeapreading for each district. These are my main outcomes of interest for questions related to the systemic effects on average test scores.

In order to examine the systemic effect of competition on the variation of outcomes, not just the average outcomes above, I created a series of MEAP standardized score gap measures within each district. Again, this was a multi-step process. Using the student level standardized test scores above, I created achievement deciles within each district. I then generated the average standardized score for each decile within each district (e.g. decile1districtmeapmathscore, decile9districtmeapreadingscore). These deciles allowed me to explore whether there were different relationships between school competition and the 10 deciles. I discuss this further in the methods and results below. Finally, I created gap measures comparing the 9th and 1st decile (districtmeapmathgap9to1; districtmeapreadinggap9to1), the 9th and 5th decile (districtmeapmathgap9to5; districtmeapreadinggap9to5), and the 5th and 1st decile (districtmeapmathgap5to1; districtmeapreadinggap5to1) by subtracting the lower average decile score from the higher decile score.
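A compact sketch of this two-step construction, with illustrative column names standing in for the actual MDED fields:

```python
import pandas as pd

def standardize_meap(students: pd.DataFrame) -> pd.DataFrame:
    """Step 1: standardize MEAP scores within subject, grade, and year statewide."""
    df = students.copy()
    for subj in ["math", "reading"]:
        grp = df.groupby(["grade", "year"])[f"meap_{subj}"]
        df[f"std_meap_{subj}"] = ((df[f"meap_{subj}"] - grp.transform("mean"))
                                  / grp.transform("std"))
    return df

def district_outcomes(df: pd.DataFrame, subj: str) -> pd.DataFrame:
    """Step 2: district-of-residence averages plus the 9/1, 9/5, and 5/1 decile gaps."""
    df = df.copy()
    # Deciles are formed within each resident district and year; small districts
    # may require duplicates="drop", which can yield fewer than 10 bins.
    df["decile"] = (df.groupby(["resident_district", "year"])[f"std_meap_{subj}"]
                      .transform(lambda s: pd.qcut(s, 10, labels=False,
                                                   duplicates="drop")) + 1)
    dec = (df.groupby(["resident_district", "year", "decile"])[f"std_meap_{subj}"]
             .mean().unstack("decile"))
    avg = df.groupby(["resident_district", "year"])[f"std_meap_{subj}"].mean()
    out = pd.DataFrame({"average": avg,
                        "gap9to1": dec[9] - dec[1],
                        "gap9to5": dec[9] - dec[5],
                        "gap5to1": dec[5] - dec[1]})
    return out.reset_index()
```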
These gap measures enabled me to explore whether school competition narrowed or widened the test score distance between the top performers, middle performers, and lower performers. (The raw MEAP score can be recovered by multiplying the state_standardizedMEAPmath or state_standardizedMEAPreading score by the standard deviation of the observation year MEAP scores for the particular grade and adding that to the mean score for the year and grade.) Examining these gaps parallels work tracing the evolution of the Black/White and socio-economic achievement gaps over time. Table 1 presents the outcomes discussed above.

Measures of competition. The key variables of interest are the measures of competition used. In general, I use two types of competition variables: a) the proportion of students assigned to a district by residence who attend another TPS outside of the district or a charter school (propresidentsleaving) and b) variations on the promising measure of competition developed in Dissertation Paper #2. I discuss each in turn. Table 2 provides a summary, including the conceptual underpinning, basic formula, and data sources for each measure of competition.

Drawing on the rich MDED data set, I created propresidentsleaving by first creating a count of the number of students residing in a given district's catchment area. I then counted the number of students enrolled in a district which differed from their residentially assigned district. Finally, I divided the number of students attending school in a different district by the total number of residentially assigned students. This variable measures the number of students who are utilizing a school choice mechanism, including those attending a charter school and those utilizing interdistrict choice. The purpose of this measure is to get an overall sense of how student movement via publicly funded school choice options relates to the average and variation of test scores. This measure treats the pressure of each student lost as constant regardless of context, which is a limitation of the measure.

I draw upon the promising measure of school competition from my second paper to operationalize a set of competition measures which allow for differential impacts based on whether a student leaves for a charter school or another district. Further, the promising measure also allows the relative test scores of the district attended and the residentially assigned district to shape the pressure exerted. I control for the overall enrollment trends of the resident district and trends in the overall general fund balance in the following regression models. I allow the enrollment and general fund balance trends to influence the amount of competitive pressure each student lost places on a given district by running each regression separately for all districts, net growing enrollment districts, net declining enrollment districts, net growing general fund balance districts, and net declining general fund balance districts. Ideally, I would employ the measure developed in paper 2 as it conceptually allows for the potentially cyclical effect of competition: past responses and decisions do impact the current extent of pressure felt. However, the measure as developed in paper 2 likely introduces endogeneity concerns which outweigh the benefits.
The policy context of Michigan, particularly the funding of operational accounts being tied directly to student enrollment, leads districts which face higher levels of charter school availability to also face declining enrollment and declining general fund balances (Arsen, DeLuca, Ni, & Bates, 2015). The full competition measures developed in paper 2 are highly correlated with the measures used below (correlation coefficient = .74), which adds further reason to use the pared back measure, as the value added by including the contextual term does not appear to be great. Together, these choices account for the four key insights from the superintendent interviews in paper 2: a) the influence of student flows and enrollment trends, b) school quality differences, c) the general fund balance impact, and d) the difference between loss to inter-district choice and loss to charter schools. I discuss this further below.

The various iterations of the competition measures below recognize that districts respond to the type of school students leave for and to the relative quality of the schools (captured below through the weighting term $w_{ijst}$). For example, students leaving for a lower performing school likely exert less competitive pressure on test scores for the district of residence than students leaving for a higher performing district. These insights come both from the literature (e.g. Cremata & Raymond, 2014; Lijesen, 2004) and from the interviews (see Paper 2). Superintendents reported that they are sensitive both to the quality of the schools students attend through choice and to the sector attended (TPS vs. charter). The potentially heterogeneous effects of the district enrollment and general fund trends are accounted for by grouping districts based on these factors. A district may see students leaving for other schools as a sort of relief valve for a growing district (e.g. Cardon, 2003), while a district with declining enrollment may feel the loss of a single student more acutely. Finally, districts which face declining general fund balances may be more sensitive to the loss of a student than those which see stable or growing balances. Both of these factors also emerged in the literature (Cardon, 2003; Bresnahan & Reiss, 1991) and in the interviews discussed in paper 2.

I operationalize three conceptualizations of the promising measure of competition, $C_{its}$, for each district $i$, at time $t$, from sector $s$ (charter schools or other TPS). In general, $C_{its}$ is made up of one type of term: $L_{its}$, a weighted leaving measure for district $i$ at time $t-1$ from sector $s$, calculated in equation (1). The components of this term are calculated at $t-1$ to allow for a lagged response to competition. The number of students, $D_{ijst}$, from district $i$ attending district $j$ in sector $s$ (traditional public schools or charter schools) at time $t$ is weighted by the term $w_{ijst}$:

$$L_{its} = \sum_{j} w_{ijs,t-1}\, D_{ijs,t-1} \qquad (1)$$

The weighting term $w_{ijst}$ can be constructed in a number of ways but is based on the differences in average achievement between district $i$ and district $j$ of sector $s$. For this paper, I set $w_{ijst} = 1$ for all $i$, $j$, $s$, and $t$ so as not to assert a weighting relationship. While this assumes no impact of test score differentials, it still allows competitive pressure to differ based on whether a student leaves district $i$ through interdistrict choice (lits_noweights_soc) or for a charter school (lits_noweights_psa). Instead of creating a weight, I test whether there is a differential impact of school competition based on test score differentials between sending and receiving districts.
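Before turning to those decompositions, a sketch of how propresidentsleaving and the unweighted, lagged leaving terms could be computed from student-year records (column names are again illustrative):

```python
import pandas as pd

def competition_measures(students: pd.DataFrame) -> pd.DataFrame:
    """propresidentsleaving plus lits_noweights_psa / lits_noweights_soc, lagged."""
    df = students.copy()
    left = df["attend_district"] != df["resident_district"]
    df["left_psa"] = left & (df["sector"] == "charter")
    df["left_soc"] = left & (df["sector"] == "TPS")
    df["left_any"] = left

    out = (df.groupby(["resident_district", "year"])
             .agg(propresidentsleaving=("left_any", "mean"),
                  lits_noweights_psa=("left_psa", "mean"),
                  lits_noweights_soc=("left_soc", "mean"))
             .reset_index()
             .sort_values(["resident_district", "year"]))

    # Lag the leaving terms one year: districts respond to loss at t-1, as in
    # equation (1) with w = 1 for all i, j, s, and t.
    lag_cols = ["lits_noweights_psa", "lits_noweights_soc"]
    out[lag_cols] = out.groupby("resident_district")[lag_cols].shift(1)
    return out
```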
To do this, I decompose the term in two ways. The first allows the competitive pressure to differ when students leave for lower performing districts versus similar or higher performing districts. I model $L_{its}$ as equation (2), with two components on the right hand side, one for students lost to lower performing districts and one for students lost to similar or higher performing districts:

$$L_{its} = \sum_{j} 1\left[A_{js,t-1} - A_{i,t-1} < 0\right] D_{ijs,t-1} + \sum_{j} 1\left[A_{js,t-1} - A_{i,t-1} \geq 0\right] D_{ijs,t-1} \qquad (2)$$

where $A_{js,t-1}$ is the average achievement for district $j$ in sector $s$ at time $t-1$ (calculated as a simple average of the district's standardized math and reading scores) and $A_{i,t-1}$ is the average achievement for district $i$ at time $t-1$. This allows the systemic effects to vary not just by sector but also by quality of districts. The second decomposition follows the same logic (equation 3) but creates three categories: students lost to districts that score on average .25 units lower (lits_bottom_psa, lits_bottom_soc), students lost to districts that score between .25 units lower and .25 units higher (lits_middle_psa, lits_middle_soc), and students lost to districts scoring more than .25 units higher (lits_upper_psa, lits_upper_soc):

$$L_{its} = \sum_{j} 1\left[\Delta_{ijst} < -.25\right] D_{ijs,t-1} + \sum_{j} 1\left[-.25 \leq \Delta_{ijst} \leq .25\right] D_{ijs,t-1} + \sum_{j} 1\left[\Delta_{ijst} > .25\right] D_{ijs,t-1} \qquad (3)$$

where $\Delta_{ijst} = A_{js,t-1} - A_{i,t-1}$. Each summation term in equations (2) and (3) corresponds to one of the italicized variable names; for example, the first term in equation (3) corresponds to lits_bottom_psa or lits_bottom_soc depending on the sector summed. To simplify the notation, I let $w_{ijst} = 1$, define $s$ so that $s = 1$ represents loss to a charter school and $s = 2$ loss to another district, and introduce $k$ to delineate the subgroupings based on achievement discussed above (n = no scores; l = lower and h = higher; b = bottom, m = middle, u = upper). For example, $L^{k=b}_{i,t,s=1}$ corresponds to lits_bottom_psa.

To summarize, I use four approaches to measure competition. The first measures the proportion of resident students leaving the district for any type of choice and comes from the literature. The second splits the competition measure into a charter school component (lits_noweights_psa) and an interdistrict choice component (lits_noweights_soc). The third splits the competition measure into two charter school terms allowing relative achievement to play a role (lits_twocategorieslower_psa, lits_twocategorieshigher_psa) and two interdistrict choice components (lits_twocategorieslower_soc, lits_twocategorieshigher_soc). Finally, I split the competition measure into six components, three for charter school loss (lits_bottomthird_psa, lits_middlethird_psa, lits_upperthird_psa) and three for interdistrict loss (lits_bottomthird_soc, lits_middlethird_soc, lits_upperthird_soc). The use of these more nuanced measures of competition represents a substantive addition to the literature. These measures allow competitive pressure to be related to more than just the loss of a certain number of students: reflecting the literature and the interviews with superintendents, they let the amount of competitive pressure, and the effect of competition, be related to where students use choice to attend.

As discussed above, using a systemic effects approach shapes our interpretation of the relationship between the competition variables and the outcomes. The coefficients should be interpreted as what happens to the system as a whole. Any positive, negative, or null finding can represent the impact of sorting, of peer effects, or of changes to the efficiency/productivity of the schools in the system. The systemic effects of competition leave the mechanism of change as a black box.
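For concreteness, the sector-by-achievement terms of equation (3) lend themselves to a small sketch. Assuming a hypothetical flows table of leavers with the lagged average achievement of the sending district (A_i) and the receiving district (A_j) attached:

```python
import numpy as np
import pandas as pd

def categorized_loss(flows: pd.DataFrame) -> pd.DataFrame:
    """Counts of students lost by sector and achievement category (equation 3).

    `flows` holds one row per leaver with columns: resident_district, year,
    sector ('charter' or 'TPS'), A_i, A_j (achievement, both lagged one year).
    """
    df = flows.copy()
    delta = df["A_j"] - df["A_i"]  # achievement differential, in SD units
    df["cat"] = np.select([delta < -0.25, delta > 0.25],
                          ["bottom", "upper"], default="middle")
    df["term"] = ("lits_" + df["cat"] + "_"
                  + df["sector"].map({"charter": "psa", "TPS": "soc"}))
    # One column per sector-category term, e.g. lits_bottom_psa.
    return (df.groupby(["resident_district", "year", "term"]).size()
              .unstack("term", fill_value=0)
              .reset_index())
```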
Further work is needed to parse out what drives the systemic effects. Future work can explore whether or not student sorting is occurring and the relationship between this sorting and the outcomes. In depth case studies which examine locations with positive and negative systemic effects can help tease out what, if any, responses improve the system.

Controls. The remaining variables fall into two categories: student characteristics and district attributes. Each of the following variables is aggregated to the system level, the district of residence, as this is the unit of analysis. I summarize the variables in Table 3. The student characteristics are for all students who reside in a given district regardless of school attended. The district attributes are weighted averages for each characteristic included. I provide examples below of how the variables were created.

Table 15. Control variable definitions and data sources

The creation of student variables consisted of aggregating the student level data from MDED up to the system level. Systemlevelfemale measures the percentage of female students that reside within the district. Systemlevelfrl indicates the proportion of students qualifying for free or reduced price lunch (FRL) status at the system level. I also include controls for the proportion of students who report being Black/African American, Hispanic, and Asian American in each residentially assigned district (the proportion of White students is the omitted category), as well as the proportion of students flagged as Limited English Proficiency and Special Needs.

For the measures of district attributes, I created system level weighted averages of each measure. In other words, each of the district variables represents the system average, weighted by the proportional enrollment of all students residing in the district. This allows me to observe the average level of educational resources or context accessed by all students residing in a given system. Using pupil teacher ratio as an example (ptrdistrict), I first determined the school level pupil teacher ratio experienced by every student residing in a given district by bringing in data from CCD. I then summed all of these ratios and divided by the number of students assigned to the district. I similarly created the average log of per pupil expenditures (systemlogppe) and the system level district size (systemdistrictenrollment). Finally, I include the proportional change in district enrollment and the proportional change in the general fund balance. (Since some general fund balances went from negative to positive, or vice versa, the sign on the general fund trend, constructed by dividing the general fund balance at time t-1 by the balance at time t, could be negative (n = 70); the other two components were always positive by construction, and a negative sign on the amount of competitive pressure placed on a district due to student loss does not make intuitive sense. When the general fund moved from a deficit at time t-1 to a positive balance at time t, I treated the district as a gaining district, dampening the pressure felt per student lost; conversely, when a district had a deficit at time t but money in the account at time t-1, I treated it as a losing district and assigned it the mean value for losing districts (1.67), simulating an increase in the magnitude of losing a student.)

Methods

If competition and students were randomly assigned to educational markets, a simple linear regression would recover the causal estimate of competition on the outcomes of interest. However, this is not the case. There are three core concerns for identifying the effects of
competition on student outcomes: 1) competition does not appear randomly across districts, 2) students do not choose to attend schools randomly, and 3) the influence of peer effects on student outcomes. The first identification concern represents the most important one for the analyses discussed below. As such, I spend more time addressing this concern before turning briefly to the second and third concerns.

Much like the establishment of charter schools is non-random, competition likely does not appear randomly. As a result, the presence and extent of competition may be associated with concurrent trends or pre-existing trends, or competition may appear only where it is most likely to have a positive impact. Any single one of these would lead to biased estimates of the systemic effect of competition. The concerns about concurrent trends are addressed by including controls for overall changes in the number of students in the district, changes in the racial/ethnic composition, and shifts in the percentage of students qualifying for free and reduced price lunch between the years 2009 to 2012. The concerns about preexisting trends, and about the location of competition being associated with the likelihood of it being impactful, are harder to directly control away. However, as long as these factors are relatively stable, a fixed effects or first differencing approach can at least partially address them. Both fixed effects and first differencing account for any historical context or time-invariant aspects of the district that would be associated with both the extent of competition and student outcomes. If the factors associated with the extent of competition vary over time, these approaches will be unable to adequately address this source of bias. However, the controls included in the following analyses, as well as the ability to account for any time invariant characteristics, reduce the likelihood that unobserved, time-variant endogenous variables will significantly bias the findings.

The second identification issue, associated with student sorting or the non-random matching of students to schools, stems from the fact that choice can arguably influence student outcomes through both student sorting and changes in the productivity of schools due to competitive pressures. As such, isolating the impact of either on student outcomes becomes problematic. If high performing students systematically leave TPS for charter schools, the efficiency of public schools will appear to drop. Conversely, if low performing students tend to opt out of TPS for charter schools, then TPS would see an apparent increase in efficiency even if no such change took place. However, measuring changes in the average outcome for all students residing in the district, i.e. at the system level, allows the average productivity of all schools to be assessed, accounting for the changes in student composition in each sector and thus for this identification concern (see Hsieh & Urquiola, 2003, 2006). However, the system level analysis may be biased in that it includes any changes in peer effects, driven by sorting, in the measure of average school productivity. As Hsieh and Urquiola (2003) argue, there is not much that can be done unless we can accurately account for the peer effects.
The change in peer effects ultimately gets wrapped up in the treatment effect (for a formal explanation of how the second concern is addressed through a system level analysis, see Hsieh & Urquiola, 2003, 2006).

With the above in mind, I conducted a series of analyses to examine the systemic effects of competition on average test scores for all students who reside in the catchment area of a given district. As suggested above, I rely on insights from Hsieh and Urquiola (2006) as well as Ni (2009) to drive the majority of my analyses. Below, I discuss the methods in detail. I start with a basic bivariate analysis of how the system level outcomes, the average outcomes for all students residing in each district, move with the measures of competition. I do this by running a simple OLS on Equation 4:

$$Y_{it} = \beta_0 + \beta_1 C_{it} + \varepsilon_{it} \qquad (4)$$

where $Y_{it}$ represents the outcome for district $i$ at time $t$, $C_{it}$ represents the various proxies for competition described above for district $i$ at time $t$, and $\varepsilon_{it}$ is an idiosyncratic error term clustered at the district level. As discussed above, the relationship between competition and outcomes may not be linear. To account for this potential non-linearity, after running the above and subsequent regression models with the standard competition measure, I ran models which included quadratics of the competition measures. I do not report these results below as the quadratic terms were non-significant for all models.

After this basic regression I conduct a pooled ordinary least squares regression on Equation 5:

$$Y_{it} = \beta_0 + \beta_1 C_{it} + S_{it}\gamma + D_{it}\delta + \tau_t + \varepsilon_{it} \qquad (5)$$

where $S_{it}$ represents the matrix of average student characteristics in district $i$ at time $t$, $D_{it}$ is the matrix of weighted averages of the district characteristics for district $i$ at time $t$, and $\tau_t$ represents a series of year dummies; I again include an idiosyncratic error term clustered at the district level. For this and subsequent regressions, I estimate the relationship separately for all districts in Michigan, districts experiencing a net loss of total enrollment from 2009 to 2012, districts experiencing a net growth in total enrollment from 2009 to 2012, districts seeing a declining general fund balance from 2009 to 2012, and districts seeing an increase in general fund balance from 2009 to 2012. The results are reported below.

The pooled OLS will be biased if the measure of competition is endogenous. Such would be the case if a pre-existing trend influenced both outcomes and the extent of competition, or if there was an underlying, unobserved attribute associated with both competition and student outcomes. I account for any time invariant attributes (observed or unobserved) that are associated with competition and student outcomes through fixed effects. I decompose the error term into a time variant part $u_{it}$ and a time invariant part $\alpha_i$, as in Equation 6:

$$Y_{it} = \beta_0 + \beta_1 C_{it} + S_{it}\gamma + D_{it}\delta + \tau_t + \alpha_i + u_{it} \qquad (6)$$

I then apply a fixed effects (FE) transformation which subtracts the district mean of each variable from each observation, eliminating any time invariant attributes whether they are observed or not. This can also be called the within district transformation, as the analyses can be interpreted as changes in student outcomes within a system as competition levels change over time. When using a fixed effects approach, it is appropriate to test whether a random effects model better fits the data. I ran a random effects model and performed the recommended Hausman test (Wooldridge, 2010, p. 328). For all of the FE models run, the Hausman test strongly rejects the null hypothesis that the random effects approach is appropriate. Thus, I report only the FE results.
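To make the estimation strategy concrete, here is a minimal sketch of the within (fixed effects) transformation with district-clustered standard errors; the column names are illustrative rather than the actual variable names:

```python
import pandas as pd
import statsmodels.api as sm

def fe_estimates(panel: pd.DataFrame, y: str, xcols: list):
    """Demean each variable within district, then run OLS with clustered SEs."""
    df = panel.dropna(subset=[y] + xcols).copy()
    for c in [y] + xcols:
        # Subtracting the district mean removes all time-invariant district factors.
        df[c + "_w"] = df[c] - df.groupby("resident_district")[c].transform("mean")
    X = sm.add_constant(df[[c + "_w" for c in xcols]])
    return sm.OLS(df[y + "_w"], X).fit(
        cov_type="cluster", cov_kwds={"groups": df["resident_district"]})
```

The degrees-of-freedom correction here differs slightly from a canned panel estimator, so this is a sketch rather than a drop-in replacement. The Hausman comparison can likewise be sketched, assuming fitted FE and RE result objects that expose .params (a Series) and .cov (a DataFrame), as the panel estimators in the linearmodels package do:

```python
import numpy as np
from scipy import stats

def hausman(fe_res, re_res):
    """Hausman statistic contrasting FE and RE estimates on shared coefficients.

    Under the null that random effects is consistent, the statistic is
    asymptotically chi-squared with one degree of freedom per compared term.
    """
    common = fe_res.params.index.intersection(re_res.params.index)
    b_diff = (fe_res.params[common] - re_res.params[common]).values
    v_diff = (fe_res.cov.loc[common, common] - re_res.cov.loc[common, common]).values
    stat = float(b_diff @ np.linalg.pinv(v_diff) @ b_diff)
    dof = len(common)
    return stat, dof, 1 - stats.chi2.cdf(stat, dof)
```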
As Ni (2009) argued, the speed of response to the extent of competition faced likely varies across districts. Therefore, I follow her suggestion of estimating a random trend model (Equation 7), which adds a district-specific linear trend $g_i t$:

$$Y_{it} = \beta_0 + \beta_1 C_{it} + S_{it}\gamma + D_{it}\delta + \tau_t + \alpha_i + g_i t + u_{it} \qquad (7)$$

This is estimated by taking the first difference of Equation 7 and then applying a fixed effects transformation or another first differencing (Wooldridge, 2010, p. 375). I opt for the fixed effects transformation to follow the FE approach. The first differencing eliminates $\alpha_i$, the time invariant part of the error, and the subsequent fixed effects transformation eliminates any system level trend.

I first explore the systemic effects of competition on average student test scores by using the average test scores for all students residing in a given district as the outcome. In order to explore the systemic effects of competition on the variation of outcomes, I first produce graphs, disaggregated by within district deciles, of the average MEAP score on the y-axis and the measure of competition on the x-axis. This enables a visual inspection of whether gaps appear to widen or close between any of the deciles. I then set the gap measures as the outcomes of interest and run the suite of analyses described above.

As discussed above, it is possible systemic effects do not follow a linear pattern. By including quadratics in each of the average outcome regressions above, I am able to test whether the patterns are non-linear. It is also possible that the systemic effects are heterogeneous: higher performing districts may respond differently from lower performing districts. To test this possibility, I separate the districts into four quartiles based on their 2009 average MEAP scores. I then run the above regressions on each of these four groups for the impact on average scores and variation. Finally, I test whether districts with higher concentrations of FRL or Black/African American students are impacted by competitive pressures differently by producing quartiles on these characteristics using the 2009 year, again for both the average and the variation in test scores. These results are included in the appendices. In total, I run 24 regressions exploring the systemic effects of competition on the average outcomes and 36 regressions looking at the systemic effects on the variation of Math and Reading scores.

Results

Definitions and data sources for each of the variables are included in Tables 1, 2, and 3. The summary statistics for all of the main variables used in the following analyses are presented in Table 4. To help scaffold the results, I present an overarching summary of the systemic effects of competition on average outcomes (Table 11) and on the variation of outcomes (Table 12).

The results of applying POLS to Equation 4, exploring the relationship between the average system test score and the measures of competition without any controls, are presented in Table 5. This provides a first look at how the average test score moves with changes in competitive pressure. However, these are simple POLS results without controls, so the relationships should be interpreted as simple correlations. Note that from here onwards the coefficients on the lits terms have been standardized to allow for easier interpretation: a one unit change in the standardized variables is equal to a one standard deviation change in the variable.
Looking at the first row, we see the proportion of resident students leaving their residentially assigned district for another school is negatively associated with system level average test scores in both Math and Reading. In other words, systems with more students leaving are systems with lower average MEAP Math and/or Reading scores.

Table 16. Descriptives

The promising measures of competition require a little more explanation to interpret. The various terms represent the proportion of students lost, separated out by the sector students are lost to and/or the relative performance of the districts. For example, the terms for the percent of students attending other TPS (litsnoweight_soc) and the percent attending charters (litsnoweight_psa) represent the relationship between the proportion of students lost to Schools of Choice and to charter schools, respectively. As there is no weighting applied to the loss of students, we can interpret these coefficients as indicating that systems with higher proportions of students leaving the assigned district for other schools also have lower Math and Reading scores on average. In both Math and Reading, nearly every relationship is negative and statistically significant, indicating that student loss to nearly any type of school is associated with lower system average scores. Again, these results are only descriptive of the association between competitive pressure and MEAP score outcomes. I now turn to the next sets of analyses, which introduce controls and more sophisticated estimation strategies.

Table 17. POLS regression of competition measures on system level average outcomes

Table 18. POLS, FE, and Random Trends for systemic effects on average MEAP math scores

Tables 6 and 7 present the main results for the systemic effects of competition on average MEAP Math and Reading test scores, respectively. In each of these tables, Columns 1, 4, 7, and 10 contain the results of the POLS regressions. Columns 2, 5, 8, and 11 have the results of the FE regressions, while Columns 3, 6, 9, and 12 present the Random Trends model. Interestingly, once controls are introduced, the proportion of residents leaving the district no longer appears to be associated with the system average Math or Reading MEAP score. Table 6 shows a consistent pattern across the three versions of the promising measure of competition: the POLS specification indicates that as competitive pressure increases, system average Math scores either remain the same or decrease, depending on where students go when they leave the residential district. However, when using the FE and Random Trends approaches there are no significant results, except in the three categories FE specification (column 11), where there is a negative impact on Math as more students leave for higher performing TPS. Taken as a whole, there is little evidence for an overall systemic effect of competition on average math test scores.

The results for MEAP Reading scores are strikingly similar (Table 7). The POLS results again consistently indicate a negative relationship between the outcome and the measures of competition when the measures of competition are significant. Overall, the evidence suggests little in the way of a net positive systemic effect of competition on reading outcomes once a FE approach is used. The only exception is in the three categories FE approach, where there is a positive relationship between reading scores and students leaving for lower performing charter schools.
The random trends models show either no impact or a negative impact of student loss on the system. Losing a higher proportion of students to lower or similarly performing schools through SoC is associated with a negative systemic effect.

Table 19. POLS, FE, and Random Trends for systemic effects on average MEAP reading scores

Together, Tables 6 and 7 suggest no positive systemic effect of school competition on the average MEAP score in math or reading when using iterations of the promising measure of competition. The loss of students (lits) is primarily associated with a negative systemic effect when statistically significant. This demonstrates the importance of including more than just the number of students lost when measuring competitive pressure.

The systemic effects of competition can potentially operate on the variation of test score outcomes as well. Figure 6 presents linear best fit lines for the association between competition and the average MEAP scores for Math and Reading, broken out by the within district deciles. While there is a pattern of a downward sloping line, signifying that as competitive pressure increases the average score for each decile decreases, there is little clear trend in the size of the gaps between any two deciles.

Figure 21. Gaps between deciles for district average math and reading MEAP scores

Table 8 presents the results for a series of gap analyses, the gap between the 9th decile and the 1st (lowest), the 5th and the lowest, and the 9th and the 5th deciles, for both Math and Reading MEAP scores. I report only the FE specification for the three category competition measure in Table 8, as the FE approach accounts for any time invariant factors, uses each district as a control for itself, and retains a larger n than the Random Trends model. (I have also run the Random Trends models and the results are quite similar: all significant coefficients in the Random Trends model are similar in direction and magnitude to the FE model, though there are fewer statistically significant coefficients, likely in part due to the smaller n.) A positive coefficient represents a growing gap while a negative coefficient indicates a closing gap.

Table 20. Systemic effects of school competition on the variation of MEAP outcomes, measured by three gaps

For Math in Table 8, losing students is associated with a widening of each gap when significant. When students leave for lower performing TPS, there is an associated growth in the 9th/1st gap; loss to lower performing charters is associated with widening between the 5th and 1st deciles; and loss to higher performing charters is associated with a widening of the 9th/5th gap. These effects on the math MEAP score gaps show the value of accounting for where students go when they leave a district, as using proportion loss alone masks these impacts. Further exploration of what drives these results is left to future work, but a set of possible explanations is as follows: the 9th/1st and 5th/1st results could indicate that lower performing students are attending districts which are lower performing and thus are seeing lower outcomes; higher performing students attending higher performing charter schools and benefitting from this choice could explain the widening of the 9th/5th gap. In sum, there is no indication that competition results in the narrowing of the math gaps; rather, there is some evidence that math gaps widen with more competition.
Table 8 also shows that reading scores appear to respond in a less uniform manner across the three gaps. Loss of students to similarly performing schools via SoC yields a widening of the gaps, while losing students to lower performing charters is associated with a closing of the test score gap between the 9th and 1st deciles. There is no impact on the gap between the 5th and 1st deciles. The gap between the 9th and 5th deciles widens as more students attend similarly performing TPS.

The next set of tables explores two of the insights from the superintendent interviews: the impact of student loss may vary based on the enrollment and general fund trends of the district. Table 9 explores the systemic effects on the average MEAP scores for districts with varying trends, and Table 10 explores the systemic effects of competition on the test score gaps of districts with varying trends. Again, the tables present the FE regression results using the three category measure of competition. Tables 9 and 10 separate the districts by their trends over the panel, from the 2009-10 to 2012-13 school years, in enrollment and in general fund balance.

Table 21. Subgroup analysis of the systemic effects on average student test scores by enrollment and fund trends

Table 22. Subgroup analysis of the systemic effects on student test score gaps by enrollment and fund trends

Given the superintendent interviews, one would expect the results to show that districts with shrinking enrollment and/or budgets would be associated with declining averages, and potentially increasing gaps, if there was any impact. However, Table 9 indicates that the trend in overall district enrollment may yield results which are opposite in direction to what would be expected: average math score gains associated with increased loss in declining enrollment districts, and dropping math scores in districts with increasing enrollment. The math results for trends in general fund balance are consistent with the interviews, as are all the results for reading. Table 10 explores whether different district trends impact the test score gaps. When significant, student loss is associated with growing 9th/1st decile gaps. In sum, it appears that the trends are more related to the average outcomes than to the gaps. I have also run the above regressions for districts facing declines in both enrollment and general fund balance. For these districts, the systemic effect of competition on average Math scores is mixed, the effect on Math gaps is a widening of the 9th/1st decile gap, and there are no impacts on reading. Tables 11 and 12 summarize the above results.

Table 23. Summary results for all systemic effects on average outcomes

Table 24. Summary results for all systemic effects on outcome gaps

The appendices explore the systemic effects of competition on test score averages (Tables A1-A3) and gaps (Tables A4-A6) for districts disaggregated by MEAP performance, proportion Black/African American enrollment, and proportion FRL. For each of these three traits, I separated the districts into quartiles and present the FE results with the three categories of loss measure. The overarching takeaway from these tables is that the systemic effects on the average test scores vary by trait and quartile, but the systemic effects on the gaps consistently show a widening of the test score gaps for the 9th/1st deciles. The widening of test score gaps exists for both subjects and across district characteristics and quartiles.
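These subgroup analyses follow a simple pattern that can be sketched by reusing the fe_estimates helper from the earlier sketch; the trait column names are illustrative:

```python
import pandas as pd

def subgroup_fe(panel: pd.DataFrame, trait: str, y: str, xcols: list):
    """Split districts into quartiles on a 2009 baseline trait, run FE per quartile."""
    base = (panel.loc[panel["year"] == 2009]
                 .set_index("resident_district")[trait])
    quartile = pd.qcut(base, 4, labels=[1, 2, 3, 4])
    results = {}
    for q in [1, 2, 3, 4]:
        districts = quartile[quartile == q].index
        sub = panel[panel["resident_district"].isin(districts)]
        results[q] = fe_estimates(sub, y, xcols)  # defined in the earlier sketch
    return results
```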
Discussion and Limitations

Overall, this paper makes several important contributions to the school choice literature. The first is that it argues for and produces the first systemic effects study in the United States. By doing so, this study helps address a question at the heart of school choice policy conversations: does school choice improve the educational system on average and for all students? Prior research has focused on other important questions, such as whether traditional public schools improve their student outcomes when facing student loss, whether charter schools produce innovative practices, whether the introduction of choice alters the efficiency of TPS, whether one sector outperforms the other, and what the stratification/segregation implications of choice policies are. However, studies looking at the impact of publicly funded school choice on all students residing in a given system are conspicuously absent from the US literature. The second key contribution is that the paper looks beyond the impact of competition on average outcomes and examines how school competition impacts variation in the outcomes of interest. Finally, the application of a measure of competition accounting for the loss of students via school choice options and for the contextual differences amongst districts represents a promising way forward in understanding how the systemic effects of competition operate.

The evidence presented above suggests that the systemic effect of competition from student loss on the average MEAP Math and Reading scores is either null or negative. This is evidence that using a measure of competition which accounts for student loss to either charter schools or inter-district choice, and for the relative performance of the schools, yields more nuanced information about how school choice induced competition operates. Corollary to the minimal systemic effects on the average outcomes are the impacts on test score gaps. The gaps, when they move, appear to widen, particularly between the 9th and 1st deciles; the other gaps appear resistant to increased competitive pressure. Together, the overall systemic effects of competition appear on the whole to be either non-existent or slightly counter to the arguments of school choice policy advocates: increased competitive pressure does not seem to improve outcomes for all students within the educational system, nor does it appear to close test score gaps. Referring back to Figure 5, it appears that the results align with either the top left cell (gaps widen, average decreases) or the top middle cell (gaps widen, average remains stable). This provides areas for future work as well as testable hypotheses: are low performing students more likely to utilize choice in Michigan? Is the impact of using choice for these students negative? Are the effects of using choice homogeneous for students across test scores?

Even if we interpret the above results as an absence of systemic effects on the average and variation of standardized test scores, this should not be interpreted as school choice having had no impact on the educational system. As noted earlier, the use of test scores as the outcome of interest comes with benefits and limitations. Other outcomes that children, parents, practitioners, and policymakers care as much or more about may be impacted differently.
For example, the focus on test score averages and gaps says nothing about the systemic effect of competition on the economic stratification or racial/ethnic segregation of the students residing in a district. The effects of competition on graduation rates, college preparedness, and parental satisfaction, amongst other potentially important outcomes, remain open questions.

By exploring heterogeneous systemic effects, this paper furthers our understanding of how school choice induced competition impacts students in various contexts. The use of quartiles demonstrates that the relationship between competitive pressure and test score outcomes varies by context. The goal of this paper is not to determine why the systemic effects appear the way they do; rather, one of the goals was to test whether there were differential patterns in the systemic effects based on system makeup. The fact that this paper finds variation in how competition impacts educational systems opens up multiple avenues for future research. Understanding the determinants of whether increased competitive pressure leads to system wide improvements, harms, or no impact represents a key future extension of this work. Are there aspects of policy design which are related to the systemic effect of competition?

Perhaps the lack of evidence for a uniform systemic effect of competition on test score averages and variation should come as little surprise. The mixed findings in the extant literature on how TPS respond to competition presage the findings of this paper. However, drawing on the district of residence as the unit of analysis, we can do more than hypothesize about the effect for all students. Systemic effects studies can also serve as a backdrop of sorts to other types of research related to school choice and school competition. They provide a picture of the proverbial forest which gives context to the trees of TPS competitive response studies, TPS/charter comparative studies, work on choice induced innovation, and so on. Each of these studies, or trees, is important to understand more fully in and of itself. So too is understanding how the studies relate to one another and what part of the whole they represent.

This paper is not without limitations. As discussed, the focus solely on test scores represents a key limitation, as test scores are only one aspect of school quality. It is possible that schools respond to competition by offering more or fewer services, broadening or narrowing the curriculum, making investments aimed at improving graduation and college going rates, improving parental satisfaction and collaboration, and so on. If these responses are not related to Math and Reading MEAP scores, this study misses them. However, the examination of test scores represents an important first step in understanding the overall phenomenon of the systemic effects of competition and an important component of the overall story. The data used for this study cover only four school years, 2009-10 through 2012-13, and the analysis includes lagged terms, which limits the analysis to three time periods. A longer panel may produce more precise estimates and reveal trends which currently remain unobserved. This study also does not include students who attend private schools, are homeschooled, or are enrolled in alternative schools. If school choice policy induces movement into or out of these groups in ways related to both competition and outcomes, omitting these groups would bias the findings. It is unclear, given the lack of a voucher program in Michigan, how many families would be induced to enroll in a private school, or to switch out of one, but this area may yield important patterns.
This paper also does not differentiate between student losses to different types of charter schools (e.g., schools in a national charter school network, for-profit vs. non-profit operators) or by chartering body (e.g., local district, local university, non-local university). As Muralidharan and Sundararaman (2015) argued in their study of Andhra Pradesh, India, differentiating student loss by comparing schools based on time use in schools, resources available, and programs provided may yield further insights into our understanding of competitive pressure. These represent interesting ways to improve upon the measure of competition. Given all of these caveats, I believe this paper still contributes to our collective understanding of school choice policies and the effects of these policies on the students in the educational system.

Conclusion

This study compiles a rich panel dataset consisting of the universe of Michigan students attending traditional public and charter schools from the 2009-10 school year through the 2012-13 school year. To these student data, I add information on all schools and districts in the state of Michigan from 2008-2012. I use this unique dataset to explore the systemic effects of competition, operationalized as the effect of competitive pressure on all students residing within a district's catchment area regardless of the publicly funded school attended, on the system-level averages and variation of MEAP test scores. This represents a contribution to the sparse set of domestic studies of the systemic effects of competition. In general, I find no consistent systemic effect of competition on the average test scores or the variation of test scores. What evidence there is suggests a decrease in the average and an increase in the 9th and 1st decile gaps as competitive pressure rises. This study finds that the systemic effects are heterogeneous when disaggregating systems based on quartiles of achievement, proportion Black/African American, and proportion FRL. In sum, competition does not appear to be a tide that lifts all boats, nor does competition produce the same systemic effects in all contexts. Further study of the interaction of context and competitive pressure through the lens of systemic effects would contribute to our understanding of the phenomenon of school competition and school choice policy design.

APPENDIX

Table 25. Subgroup analysis of the systemic effects on average student test scores: District MEAP quartiles
Table 26. Subgroup analysis of the systemic effects on average student test scores: % Black quartiles
Table 27. Subgroup analysis of the systemic effects on average student test scores: % FRL quartiles
Table 28. Subgroup analysis of the systemic effects on test score gaps: District MEAP quartiles
Table 29. Subgroup analysis of the systemic effects on test score gaps: District % Black/African American quartiles
Table 30. Subgroup analysis of the systemic effects on test score gaps: % FRL quartiles

REFERENCES

Angrist, J., Bettinger, E., & Kremer, M. (2006). Long-term educational consequences of secondary school vouchers: Evidence from administrative records in Colombia. American Economic Review, 96(3), 847-862.
Angrist, J. D., & Lang, K. (2004). Does school integration generate peer effects? Evidence from Boston's Metco Program. American Economic Review, 94(5), 1613-1634.
Arsen, D., DeLuca, T., Ni, Y., & Bates, M. (2015). Which districts get into financial trouble and why: Michigan's story. East Lansing, MI: Michigan State University Education Policy Center.
Arsen, D., & Ni, Y. (2012). Is administration leaner in charter schools? Resource allocation in charter and traditional public schools. Education Policy Analysis Archives, 20(31).
Arsen, D., & Ni, Y. (2012). The effects of charter school competition on school district resource allocation. Educational Administration Quarterly, 48(1), 3-38.
Arsen, D., Plank, D., & Sykes, G. (1999). School choice policies in Michigan: The rules matter. ERIC. Retrieved from http://files.eric.ed.gov/fulltext/ED439492.pdf
Ausbrooks, C. Y. B., Barrett, E. J., & Daniel, T. (2005). Texas charter school legislation and the evolution of open-enrollment charter schools. Education Policy Analysis Archives, 13(21).
Betts, J. R., & Tang, Y. E. (2014). A meta-analysis of the literature on the effect of charter schools on student achievement. Seattle, WA: Center on Reinventing Public Education, University of Washington. Retrieved March 13, 2015.
Bifulco, R., & Ladd, H. F. (2006). The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy, 1(1), 50-90.
Bifulco, R., Ladd, H. F., & Ross, S. L. (2009). Public school choice and integration: Evidence from Durham, North Carolina. Social Science Research, 38(1), 71-85.
Bohte, J. (2004). Examining the impact of charter schools on performance in traditional public schools. Policy Studies Journal, 32(4), 501-520.
Carnoy, M., Jacobsen, R., Mishel, L., & Rothstein, R. (2005). The charter school dust-up. Washington, DC: Economic Policy Institute.
Carr, M., & Ritter, G. (2007). Measuring the competitive effect of charter schools on student achievement in Ohio's traditional public schools. National Center for the Study of Privatization in Education (Columbia University) Research Paper, 146.
Carrell, S. E., & Hoekstra, M. L. (2010). Externalities in the classroom: How children exposed to domestic violence affect everyone's kids. American Economic Journal: Applied Economics, 2(1), 211-228.
Carrell, S. E., Sacerdote, B. I., & West, J. E. (2013). From natural variation to optimal policy? The importance of endogenous peer group formation. Econometrica, 81(3), 855-882.
Center for Research on Education Outcomes. (2013). National Charter School Study 2013. Stanford, CA: Center for Research on Education Outcomes (CREDO).
Chakrabarti, R. (2013). Vouchers, public school response, and the role of incentives: Evidence from Florida. Economic Inquiry, 51(1), 500-526.
-State University Education Policy Center
Dijkgraaf, E., Gradus, R. H., & de Jong, J. M. (2013). Competition and educational quality: Evidence from the Netherlands. Empirica, 40(4), 607-634.
Epple, D., & Romano, R. E. (1998). Competition between private and public schools, vouchers, and peer-group effects. American Economic Review, 88(1), 33-62.
Epple, D., & Romano, R. (2003). Neighborhood schools, choice, and the distribution of educational benefits. In The economics of school choice (pp. 227-286). University of Chicago Press.
Epple, D., Figlio, D., & Romano, R. (2004). Competition between private and public schools: Testing stratification and pricing predictions. Journal of Public Economics, 88(7), 1215-1245.
Epple, D., Newlon, E., & Romano, R. (2002). Ability tracking, school competition, and the distribution of educational benefits. Journal of Public Economics, 83(1), 1-48.
Ferreyra, M. M. (2007). Estimating the effects of private school vouchers in multidistrict economies. American Economic Review, 97(3), 789-817.
Figlio, D., & Hart, C. (2014). Competitive effects of means-tested school vouchers. American Economic Journal: Applied Economics, 6(1), 133-156.
In M. Berends, M. G. Springer, & H. J. Walberg (Eds.), Charter school outcomes (pp. 39-60). New York: Taylor and Francis.
Market in education. Phi Delta Kappan, 81(10), 751-757.
Gronberg, T. J., Jansen, D. W., & Taylor, L. L. (2012). The relative efficiency of charter schools: A cost frontier approach. Economics of Education Review, 31(2), 302-317.
Hastings, J. S., Kane, T. J., & Staiger, D. O. (2005). Parental preferences and school competition: Evidence from a public school choice program (Working Paper No. 11805). National Bureau of Economic Research.
Hess, F. M. (2002). Revolution at the margins: The impact of competition on urban school systems. Brookings Institution Press.
Hess, F. M., Maranto, R. A., & Milliman, S. (2001). Coping with competition: The impact of charter schooling on public school outreach in Arizona. Policy Studies Journal, 29(3), 388-404.
Holme, J. J., & Richards, M. P. (2009). School choice and stratification in a regional context: Examining the role of inter-district choice. Peabody Journal of Education, 84(2), 150-171.
Holmes, G. M., DeSimone, J., & Rupp, N. G. (2003). Does school choice increase school quality? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w9683
Horn, J. G., & Miron, G. (2000). An evaluation of the Michigan charter school initiative: Performance, accountability, and impact.
Hoxby, C. M. (1994). Do private schools provide competition for public schools? National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w4978
Hoxby, C. M. (2003). School choice and school productivity: Could school choice be a tide that lifts all boats? In The economics of school choice (pp. 287-342). University of Chicago Press.
Hsieh, C.-T., & Urquiola, M. (2003). When schools compete, how do they compete? An assessment of Chile's nationwide school voucher program. National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w10008
Hsieh, C.-T., & Urquiola, M. (2006). The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program. Journal of Public Economics, 90(8), 1477-1503.
Imberman, S. A., Kugler, A. D., & Sacerdote, B. I. (2012). Katrina's children: Evidence on the structure of peer effects from hurricane evacuees. American Economic Review, 102(5), 2048-2082.
Loeb, S., Valant, J., & Kasman, M. (2011). Increasing choice in the market for schools: Recent reforms and their effects on student achievement. National Tax Journal, 64(1), 141-164.
Lubienski, C. (2005). Public schools in marketized environments: Shifting incentives and unintended consequences of competition-based educational reforms. American Journal of Education, 111(4), 464-486.
Lubienski, C. (2007). Marketing schools: Consumer goods and competitive incentives for consumer information. Education and Urban Society, 40(1), 118-141.
Maranto, R., Hess, F., & Milliman, S. (2001). Small districts in big trouble: How four Arizona school systems responded to charter competition. The Teachers College Record, 103(6), 1102-1124.
Muralidharan, K., & Sundararaman, V. (2015). The aggregate effect of school choice: Evidence from a two-stage experiment in India. The Quarterly Journal of Economics, 130(3), 1011-1066.
Nechyba, T. J. (2003). Introducing school choice into multidistrict public school systems. In The economics of school choice (pp. 145-194). University of Chicago Press.
Ni, Y. (2009). The impact of charter schools on the efficiency of traditional public schools: Evidence from Michigan. Economics of Education Review, 28(5), 571-584.
Ni, Y. (2012). The sorting effect of charter schools on student composition in traditional public schools. Educational Policy, 26(2), 215-242.
Preston, C., Goldring, E., Berends, M., & Cannata, M. (2012). School innovation in district context: Comparing traditional public schools and charter schools. Economics of Education Review, 31(2), 318-330.
Reardon, S. F. (2011). The widening academic achievement gap between the rich and the poor: New evidence and possible explanations. In Whither opportunity? (pp. 91-116).
Sass, T. R. (2006). Charter schools and student achievement in Florida. Education Finance and Policy, 1(1), 91-122.
West, M. R., & Woessmann, L. (2010). "Every Catholic child in a Catholic school": Historical resistance to state schooling, contemporary private competition and student achievement across countries. The Economic Journal, 120(546), F229-F255.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data (2nd ed.). Cambridge, MA: The MIT Press.
Zimmer, R., Gill, B., Booker, K., Lavertu, S., & Witte, J. (2012). Examining charter student achievement effects across seven states. Economics of Education Review, 31(2), 213-224.