WHOSE PERFORMANCE COUNTS? EQUITY CONCERNS IN PERFORMANCE FUNDING POLICIES

By

Renata Opoczynski

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Higher, Adult, and Lifelong Education-Doctor of Philosophy

2017

ABSTRACT

WHOSE PERFORMANCE COUNTS? EQUITY CONCERNS IN PERFORMANCE FUNDING POLICIES

By

Renata Opoczynski

While accountability in higher education has been a topic of debate for decades, in recent years the discussion has shifted to emphasize efficiency and economic measures of success. A prominent example of this accountability movement is the rising popularity of performance funding policies, which connect specific outcomes on state selected metrics to increased state funding (Goldstein, 2012). Performance funding policies purport to increase efficiency by rewarding reductions in cost and increases in specific economic outcomes. However, many of these policies neglect a similar emphasis on maintaining access, which may lead to undesirable consequences, including reduced enrollment of traditionally underserved students (students from low socioeconomic status (SES) families and historically underserved students of color) (Dougherty et al., 2014). Therefore, this study explored whether performance funding policies have an effect on underrepresented students' enrollment. Through a fixed effects panel analysis covering the years 2000 to 2014, this study examined changes in the enrollment of underserved minority students and Pell Grant receiving students at public four-year institutions.

Findings from this study demonstrate that performance funding does have the potential to influence enrollment profiles at U.S. public four-year institutions; specifically, it changed the enrollment of underserved minority students. Further, these influences may not be equitable across all institutions and may affect lower status institutions in a different manner than higher status institutions. In particular, institutions with more flexibility in their enrollment profiles may be more likely to change their enrollment of both Pell Grant students and underserved minority students. These findings have profound implications for higher education institutions, policy formation, and social equity.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

Chapter 1
    Problem Statement
        Push for Efficiency
        Performance Funding
        Change in Access
    Purpose of Study
    Research Questions
    Conceptual Framework

Chapter 2
    Higher Education Funding
    State Funding
        State Appropriations
        Variance Among States
        Price of Higher Education
        Financial Aid
    Accountability in Higher Education
        Defining Accountability
        History of Accountability
        Forms of Accountability
        Push for External Accountability
    Performance Funding
        Defining Performance Funding
        Spread of Performance Funding
        Concerns with Performance Funding
        Best Practices in Performance Funding
        Effectiveness of Performance Funding
            Immediate impacts
            Intermediate institutional change
            Ultimate student outcomes
            Unintended outcomes
    Enrollment and Retention Literature
        General Trend Across Race
        General Trend Across SES
        Institutional Factors
            Student demographics
            Structural characteristics
            Financial characteristics
        State Factors
            Financial
            Population data
    Conclusion

Chapter 3
    Sample/Population
    Data Sources
        Parent-Child Issues
    Variables
        Dependent Variables
        Institutional Variables
        State Variables
    Analytical Method
        Models
    Limitations

Chapter 4
    Descriptive Statistics
    Performance Funding and Underserved Student Enrollment
        Pell Grant Variables
            Pell Grant percent
            Pell Grant amount
        Underserved Minority Student Variables
            Total underserved minority students
            Percent underserved minority students
    Institutional Type and Performance Funding
        Carnegie Classification
            Carnegie classification and Pell Grant variables
            Carnegie classification and underserved minority student variables
        Selectivity
            Selectivity and Pell Grant variables
            Selectivity and underserved minority student variables
    Conclusion

Chapter 5
    Summary of Study
    Discussion of Findings
        Underserved Student Enrollment
        Institutional Type
    Implications
        Implications for Policy
        Implications for Research
    Conclusion

APPENDIX

BIBLIOGRAPHY

LIST OF TABLES

Table 1     List of States with Performance Funding and Years
Table 2     Variable Definitions and Sources
Table 3     Summary Statistics for All Institutions Across All Years
Table 4     Descriptive Statistics by Performance Funding
Table 5     Results for the Effects of Performance Funding on Pell Grant Percent
Table 6     Results for the Effects of Binary Performance Funding on Pell Grant Amount
Table 7     Results for the Effects of Binary Performance Funding on USM Total
Table 8     Results for the Effects of Binary Performance Funding on USM Percent
Table 9     Results for the Effects of Performance Funding Count on USM Percent
Table 10    Transition Rates Across Carnegie Categorical Variable
Table 11    Transition Rates Across Selectivity Categorical Variable
Table 12    Results of Performance Funding Interacted with Carnegie on Pell Grant Variables
Table 13    Results of Performance Funding Interacted with Carnegie on USM Variables
Table 14    Results of Performance Funding Interacted with Selectivity on Pell Grant Variables
Table 15    Results of Performance Funding Interacted with Selectivity on USM Variables
Table 16    Results for the Effects of Performance Funding Count on Pell Grant Percent
Table 17    Results for the Effects of Performance Funding Count on Pell Grant Amount
Table 18    Results for the Effects of Performance Funding Count on USM Total
Table 19    Tobit Results for the Effects of Binary Performance Funding on USM Percent
Table 20    Tobit Results for the Effects of Performance Funding Count on USM Percent

LIST OF FIGURES

Figure 1    States with Performance Funding Policies Active and in Transition as of July 2015
Figure 2    Six Year Graduation Rates for Four Year Institutions by Percent of Pell Recipients

Chapter 1

Over the last two decades, policymakers and the public have increasingly sounded an alarm about the lack of accountability in higher education. While discussions regarding accountability in higher education have been commonplace since the 1970s, the intensity of these calls has increased significantly in recent years and taken a marked shift toward efficiency and demands for the production of specific results (Burke, 2005). Further, Fuller, Lebo, and Muffo (2012) noted that higher education institutions should expect these demands to continue and possibly even increase in future years.
Those sounding the alarm usually claim that higher education has strayed from its core mission of educating students, citing the fact that despite students attending higher education at the highest rates ever, the six-year graduation rate is at an all-time low (Kelly & Schneider, 2012). At the same time, the price of participating in higher education is consistently increasing, leading policymakers and the public to question the value of higher education (McMahon, 2009).

Problem Statement

Underpinning this accountability movement is a shift in society's view of its social contract with higher education. Throughout history, the relationship between the government and higher education institutions has changed, as has society's view on the purpose of higher education (Thelin, 2004). The changing nature and expectations inherent in the social contract can influence the values and missions of higher education institutions, expectations on and from both parties, funding for higher education at the state and federal level, power relationships between the parties, and views on the purpose of higher education (Edgell, 2009; Gumport, 2001; Kezar, 2004). Historically, the social contract regarding public higher education involved state leaders adequately funding autonomous institutions in exchange for those institutions serving the common good of the state (Thelin, 2004). However, beginning in the 1980s, state legislators shifted the conditions of the social contract by imposing a greater degree of control over higher education institutions while at the same time decreasing state financial support to public colleges and universities (Edgell, 2009; Gumport, 2001). Often, the new expectations for institutions require a market orientation, which shifts away from the premise that higher education is a public good focused on meeting the diverse social needs of a state and toward viewing higher education as a private good focused strictly on private economic outcomes and efficiency (Gumport, 2001; McMahon, 2009; Slaughter & Rhoades, 2004). As a result, state officials decreased financial support for higher education, believing that those who directly benefit from the education (the students) should pay a greater share of its costs, as opposed to the state significantly subsidizing the cost as a public investment (Slaughter & Rhoades, 2004). Performance funding policies are a prominent development within this broader policy shift.

Push for Efficiency

When policymakers adopt a market orientation of higher education, success is measured through a production function lens, which emphasizes efficiency. Simply defined, efficiency is the measure of outputs over inputs. State leaders expecting greater efficiency from higher education pressure institutions to increase outputs, decrease costs, or both. Performance funding policies are an example of this push for efficiency. In higher education policy, efficiency models such as performance funding emphasize a narrow view of outputs with a focus on market-based benefits (e.g., graduation rates, salary post-graduation, faculty research productivity) (McMahon, 2009; Slaughter & Rhoades, 2004).
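To ground the production function framing, the ratio can be written out explicitly; the figures below are purely illustrative assumptions, not data from this study.

\[
\text{efficiency} \;=\; \frac{\text{outputs}}{\text{inputs}},
\qquad\text{e.g.,}\qquad
\frac{5{,}000 \text{ degrees awarded}}{\$250 \text{ million in appropriations}} \;=\; 20 \text{ degrees per million dollars}.
\]

Under this lens, an institution improves either by raising the numerator (more graduates, more research output) or by shrinking the denominator (lower cost per student), a logic that recurs throughout the policies discussed below.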
This heightened push for economically focused accountability and efficiency is the result of a complex set of factors, including shifts in state funding models, economic constraints, a growing federal investment in higher education, the public's demand for higher education, labor market expectations, and the public's perception of the cost and value of higher education (Eaton, 2012; Keller, 2012; NASBO, 2013). The ability of higher education leaders to respond to efficiency requests is complicated by the simultaneous decrease in state financial support. Despite three years of increases in per student appropriations, the State Higher Education Executive Officers (SHEEO) found that nationally, 2015 per Full Time Equivalent (FTE) funding was still 15.3% lower than levels before the 2008 Great Recession (SHEEO, 2016). In the last year most states increased their per FTE funding; however, direct dollar appropriations remain below 2008 levels, and 12 states continued to cut their funding per FTE (Mitchell, Leachman, & Masterson, 2016). The availability of alternative revenue streams, including tuition and fees, makes higher education an easy target for funding cuts (National Association of State Budget Officers [NASBO], 2013). Further, the growing pressure from other state needs (e.g., Medicaid, K-12 education, corrections) limits the ability of state leaders to increase funding for higher education in the near future (NASBO, 2013; SHEEO, 2013).

The push for efficiency is further complicated by the lack of agreement on what output measures should be used to gauge efficiency (Powell, Gilleland, & Pearson, 2012). Graduation rates, research output, time to degree, degrees awarded in specific fields, job placement rates, and graduate salaries are often cited as metrics for assessing institutional efficiency. Yet institutional leaders often lack the necessary data and benchmarks to know whether there is a relationship between their expenditures and outcomes (Powell et al., 2012). As a result, higher education administrators tend to increase their expenditures on programs intended to improve outcomes (e.g., developmental education, living-learning communities, first-year seminars) and hope the increase will also improve efficiency (Powell et al., 2012). While it is difficult to define efficiency in higher education, the term is nonetheless used to motivate many policy discussions. As McMahon (2009) explains, the term "efficiency" is "thrown around with wild abandon" (p. 12) by policy creators when advocating for higher education accountability. Performance funding policies help alleviate the challenges associated with this ambiguity by providing specific definitions of efficiency and outlining which outputs state legislatures value and use to measure quality.

Performance Funding

Performance funding policies connect specific performance outcomes on state selected measures to increased state funding (Goldstein, 2012). By tying funding to state specific metrics, legislators believe institutions will be incentivized to improve efficiency. Through performance funding policies, institutions are held accountable for meeting outcome targets as part of the state appropriations process. Performance funding policies allow state officials to emphasize metrics, such as graduation rates, financial stewardship, credit hours earned, wages of graduates, and research grants, that they believe will better incentivize efficiency. The institutions that perform well on the metrics are financially rewarded through increases in their operating budgets. Further, proponents, including the Lumina Foundation, Complete College America (CCA), NASBO, and the National Governors Association (NGA), highlight that performance funding shifts incentives away from inputs such as enrollments toward outcomes and results and helps align higher education with state priorities (NASBO, 2013). Skeptics, however, argue that performance funding might encourage institutions to either relax graduation standards or turn away underrepresented students by tightening their admission criteria (Dougherty et al., 2014). As of July 2015, performance funding policies existed in 32 states, and five more states were in the process of transitioning to them (National Conference of State Legislatures [NCSL], 2015).

The belief that performance funding policies will increase efficiency assumes that higher education operates within a competitive marketplace where the rules and assumptions of market economics (such as rational choice theory or perfect knowledge) can be met (Gumport, 2001). However, many higher education researchers believe standard economic market assumptions do not apply to institutions of higher education (see the discussion of quasi markets below) and believe policies and decisions based on these assumptions are flawed (e.g., Gumport, 2001; Marginson, 1997; McMahon, 2009; Slaughter & Rhoades, 2004). If economic assumptions cannot be met and higher education does not function as a traditional market, then instituting performance funding policies would create quasi markets. Market behavior in quasi markets arises from the priorities of the policy makers, as opposed to true economic efficiency and the supply and demand of resources (Taylor, Cantwell & Slaughter, 2013).

Change in Access

If performance funding policies value efficiency (increased economic outputs at lower costs) and quasi markets reward institutions that engage in behavior valued by the policy, then institutions may engage in behaviors to increase their quality and decrease their costs in order to reap the benefits of the performance funding policy. While there may be advantages to these behaviors, they may also have unintended consequences. For example, it is possible that performance funding policies create unintended consequences for access among underrepresented students. Immerwahr, Johnson, and Gasbarra (2008) noted that many higher education leaders believe cost, quality, and access are engaged in an "Iron Triangle," where a change in one inevitably will affect at least one of the other two. For instance, if an institution sees an increase in its enrollment numbers through a large first-year class, it will either have to increase its costs or its quality may decrease through changes such as larger classes or more occupants per room in its residence halls. In their study, Immerwahr et al. (2008) interviewed a diverse group of 30 institutional presidents and found that almost all agreed that the growing pressures for accountability and decreasing state support increase the price of higher education, making it less accessible. While legislators and the public may disagree with this perspective, it demonstrates how college presidents view decision making related to the elements of the iron triangle.
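To see concretely how a performance funding quasi market can interact with the iron triangle, it helps to make the funding mechanism explicit. The Python sketch below is a hypothetical illustration only; the metric names, weights, dollar pool, and institutions are invented for illustration and do not correspond to any actual state's formula.

```python
# Hypothetical sketch of an outcomes-based funding formula. All metrics,
# weights, and figures are invented assumptions, not any state's policy.
POOL = 100_000_000  # performance-tied appropriation pool, in dollars (assumed)

# State-selected outcome metrics and their assumed weights; note that
# enrollment of underserved students appears nowhere in the formula.
WEIGHTS = {"degrees_awarded": 0.5, "credit_hours_completed": 0.3, "research_grants": 0.2}

institutions = {
    "Institution A": {"degrees_awarded": 5_000, "credit_hours_completed": 400_000, "research_grants": 120},
    "Institution B": {"degrees_awarded": 2_000, "credit_hours_completed": 150_000, "research_grants": 15},
}

def allocate(pool, weights, outcomes):
    """Split each metric's share of the pool in proportion to each
    institution's share of the statewide total on that metric."""
    totals = {m: sum(inst[m] for inst in outcomes.values()) for m in weights}
    return {
        name: sum(weights[m] * pool * inst[m] / totals[m] for m in weights)
        for name, inst in outcomes.items()
    }

for name, dollars in allocate(POOL, WEIGHTS, institutions).items():
    print(f"{name}: ${dollars:,.0f}")
```

Because access appears nowhere in such a formula, an institution can raise its allocation simply by enrolling students it expects to complete; that is precisely the unintended incentive taken up next.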
As previously stated, the belief that cost, quality, and access are fundamentally interwoven may lead performance funding policies to create unintended outcomes related to access for underserved students. Specifically, the quasi markets created by performance funding policies may create incentives for institutions to attend to the metrics measured by policymakers (efficiency) and give less attention to the metrics for which institutions are not held accountable (underrepresented student enrollment). In other words, changes to quality and cost may lead institutional leaders to feel they must change enrollment profiles (Immerwahr et al., 2008). Research supports this concern; a commonly mentioned possible unintended impact of performance funding is restricting the admission of less prepared students (Dougherty et al., 2014). In fact, many institutional administrators acknowledged at least thinking about changing the makeup of their incoming classes in order to perform better on performance funding metrics (Lahr et al., 2014). Therefore, in this study it is hypothesized that performance funding policies may lead to unintended consequences regarding access for underrepresented students.

Institutions changing enrollment profiles, in particular enrolling fewer underrepresented students of color and Pell-eligible students, in order to perform better on performance funding metrics could have serious repercussions for the whole higher education system. Educational attainment in the U.S. for adults aged 25-34 currently lags behind 15 other OECD nations, and those nations continue to make progress while the U.S. stagnates (Heller, 2013). In order to match the level of the highest performing nation, the U.S. would need to increase the number of students aged 25-64 with college degrees by 7.9% each year until 2020 (Perna, 2013). However, the high school graduate population (the available pool of students for enrolling in higher education) is becoming more racially diverse and financially needy (National Center for Education Statistics [NCES], 2014; Western Interstate Commission for Higher Education [WICHE], 2012). With these population changes, limiting the enrollment of underrepresented students of color and Pell-eligible students could decrease the number of students eligible to enroll in and complete higher education. Therefore, if performance funding policies limit enrollment of underrepresented students, these policies may be antagonistic to the goal of increasing U.S. higher education participation and completion and may decrease the global competitiveness of the U.S. higher education system.

Purpose of Study

Performance funding policies are increasing in prevalence as states push for greater efficiency and specific outcomes from higher education institutions. These policies connect specific outcomes on state selected metrics to increased state funding (Goldstein, 2012). Performance funding policies purport to increase efficiency by rewarding reductions in cost and increases in specific economic outcomes. However, many of these policies neglect a similar emphasis on maintaining access, which may lead to undesirable consequences, including reducing the enrollment of traditionally underserved students (students from low socioeconomic status (SES) families and historically underserved students of color) (Dougherty et al., 2014).

College enrollment and graduation rates differ significantly based on race and family income (Heller, 2013; NCES, 2016). In 2012, 68% of White students enrolled in college directly after high school compared to 63% of Black students and 62% of Hispanic students (NCES, 2016). Discrepancies in high school graduation rates and lower completion rates in higher education for underrepresented students of color lead to even greater disparities in bachelor's degree attainment across race. In 2015, only 21% of Black and 16% of Hispanic 25- to 29-year-olds had a bachelor's degree, compared to 43% of Whites in the same age group (NCES, 2014). The educational gap is more pronounced when looking at SES. Heller (2013) found students from families in the top quintile of the income distribution were significantly more likely to be enrolled full time at four-year institutions than students from the bottom quintile (55% versus 33%). The enrollment divide foretells a continuing and systemic issue, as Heller (2013) concluded that the difference between students from the top and bottom quintiles who attained a bachelor's degree eight years after high school was 40 percentage points. Further limiting enrollment of underrepresented students as a response to performance funding would widen these gaps, which could stand in the way of national policy priorities. Researchers found that in order to meet national college completion goals, the U.S. must not only acknowledge the attainment gap but also actively work to improve the outcomes for underrepresented students (Heller, 2013; Perna, 2013). Therefore, this study explored whether performance funding policies have an effect on underrepresented students' enrollment.

Research Questions

The current educational inequality in the U.S. has led to increased economic and social stratification over time and needs to be addressed (Perna, 2013). However, limited research has explored how higher education institutions are balancing the competing needs of demonstrating efficient economic outcomes via performance funding while maintaining access and success for all students, particularly in the four-year sector. As such, this study explored how performance funding influences the enrollment of underserved students and focused on the following research questions:

1) Are there unintended consequences from performance funding policies related to underserved students' enrollment at public four-year institutions?
   a. Does performance funding have an influence on the enrollment of Pell Grant eligible students?
   b. Does performance funding have an influence on the enrollment of underserved minority students?

2) Does institutional type influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   a. Does Carnegie classification influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   b. Does selectivity influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
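The abstract frames the analysis behind these questions as a fixed effects panel analysis of institution-year data from 2000 to 2014. As a hedged sketch of what such a specification can look like, the example below uses Python's linearmodels package; the file name, variable names, and covariates are placeholders assumed for illustration, not the study's actual data or model.

```python
# Illustrative two-way fixed effects panel model relating a performance
# funding indicator to underserved student enrollment. All file and
# variable names are hypothetical placeholders.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("ipeds_panel_2000_2014.csv")   # hypothetical IPEDS-style panel
df = df.set_index(["unitid", "year"])           # institution and year indices

# Outcome: percent of students receiving Pell Grants; treatment: a 0/1
# indicator for an active performance funding policy in the state.
model = PanelOLS.from_formula(
    "pell_pct ~ perf_funding + tuition + state_approp_fte"
    " + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)  # SEs clustered by institution
print(result.summary)
```

In a design of this kind, the entity effects absorb stable institutional differences (mission, location, selectivity) and the time effects absorb national shocks such as the Great Recession, so the performance funding coefficient is identified from within-institution change over time.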
Conceptual Framework

In order to understand how performance funding policies influence underrepresented students' enrollment, this study utilizes the Iron Triangle concept introduced by Immerwahr et al. (2008) and weaves in the concept of quasi markets. These theories suggest that the higher education market does not follow traditional economic principles and that institutions may instead make decisions striving to increase prestige or policy benefits. Further, these decisions may have an intended or unintended impact on access due to the relationship between access, cost, and quality.

As noted above, institutional administrators believe that cost, quality, and access are fundamentally interwoven in an Iron Triangle and that a change in one will lead to a change in at least one of the other two. If performance funding policies value efficiency (increased economic outputs at lower costs), then institutions may engage in behaviors to increase their quality and decrease their costs in order to reap the benefits of the performance funding policy. Specifically, performance funding policies may create incentives for institutions to attend to the metrics measured by policymakers (efficiency) and give less attention to the metrics for which institutions are not held accountable (underrepresented student enrollment). In other words, changes to quality and cost may lead institutional leaders to feel they must change enrollment profiles (Immerwahr et al., 2008).

The conceptual framework for this study is also informed by quasi markets. Historically, higher education institutions were autonomous from the market, but during the late twentieth century institutions began to engage in market-like behaviors (Slaughter & Leslie, 1997). Institutions engage in market-like behaviors when they compete for funds from external stakeholders with a focus on financial efficiency (Slaughter & Leslie, 1997). The U.S. now has a well developed higher education market (Marginson, 1997), and performance funding policies assume that higher education operates within a competitive marketplace where the rules and assumptions of market economics (such as rational choice theory or perfect knowledge) can be met (Gumport, 2001). However, numerous scholars identified five significant restrictions of the standard economic viewpoint for higher education, which help explain the behavior of quasi markets. First, Gumport (2001) explained that the market for higher education is rendered irrelevant due to regulation by state and federal policies and by admission decisions. In other words, forces and decisions that arise from state and federal policies and institutional decisions supersede standard market principles. Second, governmental subsidies cause the price of higher education to be below the actual market price (Marginson, 1997). Therefore, standard market pricing behavior cannot be applied to higher education. Third, the market incorrectly assumes that students are capable of informed choice with perfect knowledge (Gumport, 2001; McMahon, 2009; Slaughter & Rhoades, 2004). While many students research higher education extensively, it is difficult to account for all the unknown factors related to higher education (e.g., how much value added the student will receive from their degree) and to make decisions with the true perfect knowledge required for market principles to apply. Fourth, perfectly competitive markets would assume homogeneity of product; however, it is clear that not all colleges provide the same rate of return on a student's investment (Slaughter & Rhoades, 2004). Finally, higher education institutions are not necessarily privately owned and most are not focused on maximizing profit (Le Grand, 1991). Since quasi markets arise out of policy and are not influenced by market price, informed choice by consumers, or a focus on maximizing profits, they account for the unique characteristics listed above. In other words, rewards in quasi markets are based on policy priorities rather than economic efficiency and are structured by decisions made by politicians with the hope of furthering the public interest (Le Grand, 1991; Taylor et al., 2013).

Weaving the two theories together helps us understand how performance funding may influence institutional enrollment patterns. If certain outcomes are coveted by state legislators through performance funding, then administrators may strive to increase their prestige and financial appropriation by responding to the pressures of the performance funding policy instead of standard economic market pressures, thereby creating a quasi market. In other words, administrators may feel the need to increase quality or decrease costs in order to perform better in the quasi market created by the performance funding policy. However, since access, quality, and costs are locked in an iron triangle, changes to cost and quality would also affect enrollment decisions.

Chapter 2

The new accountability movement shifts the expectations of the social contract with higher education to one focused on efficiency and limited financial support from the state (Edgell, 2009; Gumport, 2001; McMahon, 2009; Slaughter & Rhoades, 2004). Performance funding policies are an outgrowth of this push for efficiency. The quasi markets created by performance funding policies reward institutions that engage in behavior favored by the policy and may create incentives for institutions to attend to the metrics favored by policymakers (efficiency) and give less attention to the metrics for which institutions are not held accountable (such as underrepresented student enrollment). In other words, performance funding policies may have unintended consequences related to access and persistence for underrepresented students due to the iron triangle of cost, quality, and access (Immerwahr et al., 2008). Specifically, changes to quality and cost may lead institutional leaders to feel they must change access (Immerwahr et al., 2008).

Limited research has explored how higher education institutions are balancing the competing needs of demonstrating efficient economic outcomes via performance funding while maintaining access and success for all students, particularly in the four-year sector. As such, this study will explore how performance funding influences the enrollment and completion of underrepresented students. This chapter presents an overview of the literature related to higher education funding (with a specific focus on state funding), the accountability movement, and performance funding specifically. It then explores the literature on enrollment and retention from an institutional and state perspective with a focus on underrepresented students of color and low SES students. It concludes by presenting how this study will enhance and expand the literature on performance funding and underrepresented student enrollment and retention.

Higher Education Funding

Higher education institutions, specifically four-year public institutions, collect revenue from several sources, including tuition and fees, state and local appropriations, contracts and grants, donations, and interest from their endowments. Of these, tuition and fees and state and local appropriations make up the vast majority of institutional revenue. Public institutions collected $66.9 billion in net tuition revenue in 2015 and $90.9 billion in state and local funding (SHEEO, 2016).
On average, state and local appropriations make up 54% of the revenue of public higher education institutions (Mitchell et al., 2016). Tuition revenue covers 46.5% of all operating expenses for institutions on average, up from 23.8% in 1988, though there is great variance in funding among the states (SHEEO, 2016). The makeup of higher education revenue has shifted significantly as higher education takes on a market orientation and perspectives shift to view the sector as more of a private good. In 1998, institutions received 3.2 times as much revenue from state and local government as they did in tuition; now they receive only 1.2 times as much from state and local appropriations as they do from tuition (Mitchell et al., 2016). Changes in the funding of higher education have led to a 33% increase (adjusted for inflation) in the net cost of attendance from 2008 to 2015 (Mitchell et al., 2016). At public four-year institutions, students now cover around half of the costs of their education (Desrochers & Hurlburt, 2014). As this research focuses on performance funding, which is a form of state appropriations, this literature review focuses on that area as well.

State Funding

States fund higher education institutions via two primary methods: direct appropriations and funding students through financial aid. These two methods benefit different numbers of students as well as different groups of students (Toutkoushian & Shafiq, 2010). State appropriations benefit all in-state students across all income levels by lowering the price of higher education (Toutkoushian & Shafiq, 2010). In contrast, state financial aid benefits a smaller number of students, as only those who meet eligibility criteria (be it merit or financial based) receive the funds, which in turn lowers the price they pay for higher education (Toutkoushian & Shafiq, 2010). Additionally, need based financial aid targets lower income students, as opposed to state appropriations, which benefit all income groups. Both state appropriations and state financial aid are discussed below.

State Appropriations

The most frequently used option to provide funding to higher education institutions is direct state appropriations. More than 90% of state funding for higher education is in the form of state appropriations (Hauptman, 2011). Policymakers hope that these funds will be used to reduce the price charged to in-state students, allowing more students to attend higher education (Toutkoushian & Shafiq, 2010). In 2015, state and local funding for higher education totaled $90.9 billion (SHEEO, 2016).

After the Great Recession hit, many states cut higher education funding significantly. The last few years have seen a return to funding higher education; however, funding is still 18% lower than it was in 2008, the start of the Great Recession (Mitchell et al., 2016). When looking at funding per full-time equivalent student (FTE), the trend remains the same. In 2015, average state and local funding per FTE was $6,966, an increase of 5.2% (in constant dollars) from 2014 (SHEEO, 2016). However, this increase in funding per FTE is due mostly to declining enrollment numbers as opposed to increases in higher education funding (SHEEO, 2016). Additionally, despite three years of increased appropriations per FTE, 2015 funding is still 15.3% lower than prerecession funding (SHEEO, 2016). In fact, in 2016 all states except Montana, Wisconsin, Wyoming, and North Dakota had lower funding per student than they did before the recession (Mitchell et al., 2016).

In response to the Great Recession, the federal government passed the American Recovery and Reinvestment Act (ARRA) in 2009, which included funding to help stabilize state support for higher education (SHEEO, 2016). The relative stability of state support for higher education directly after the Great Recession is a result of ARRA funds (SHEEO, 2016). When the ARRA funds ran out in 2012, higher education support decreased by 7% (SHEEO, 2014). The availability of alternative revenue streams, such as tuition and fees, makes higher education an easy target for funding cuts (NASBO, 2013). Further, the growing pressure from other state needs (e.g., Medicaid, K-12 education, corrections) limits the ability of state leaders to increase funding for higher education in the near future (Mitchell et al., 2014; NASBO, 2013; SHEEO, 2013). However, state funding still accounts for a large share of the revenue higher education institutions receive for general operating expenses (SHEEO, 2011).

Variance Among States

There is significant variance in state funding for higher education across states (Weerts & Ronca, 2012). For instance, in 2016, 38 states increased per FTE funding for higher education while 12 states decreased funding (Mitchell et al., 2016). Research has found several factors can influence the amount of money appropriated to higher education, including other state budgetary demands, general economic health, the business cycle, state higher education governance structures, state fiscal health, state politics, demographic factors, population changes, party affiliation of the governor, party control in the legislature, legislative professionalism, and local funding (Delaney & Doyle, 2011; McLendon, Hearn & Mokher, 2009; SHEEO, 2014; Tandberg, 2010; Tandberg & Griffith, 2013; Weerts & Ronca, 2012).

Price of Higher Education

Aligned with this decline in state revenue, there has been an increase in the price of tuition. Published tuition (often referred to as sticker price) has increased by $2,333 (after adjusting for inflation) nationally since the 2007-08 school year, a 33% increase (Mitchell et al., 2016). It should be noted that as states began to restore funding to higher education, the increases in tuition became more modest. In the 2015-16 school year, the average tuition increase at public four-year institutions across all states (after adjusting for inflation) was only $254, or 2.8% (Mitchell et al., 2016). From an FTE perspective, net tuition (adjusted for inflation via the Higher Education Cost Adjustment) per FTE increased 5.0% annually from 2009 to 2011, then by 7.1% in 2012 and 4.7% in 2013 (SHEEO, 2014). Despite the increases in published tuition, the revenue gained from tuition only offset about 60% of state and local funding cuts to master's and doctoral institutions and 30% of funding cuts to bachelor's institutions (Mitchell et al., 2014).

This increase in tuition has resulted in students taking on more debt to pay for their higher education degrees. Sixty percent of the students graduating in 2014 from public four-year institutions had taken on student loan debt (Mitchell et al., 2016). In the 2014-15 school year, students graduating from public four-year institutions left with an average of $25,500 in debt, up from $14,300 just three years prior (Mitchell et al., 2014; Mitchell et al., 2016). Federal financial aid (primarily through the Pell Grant program) has increased to counter this rising tuition level. The amount of aid distributed through the Pell Grant program increased by 80% between 2007-08 and 2014-15, resulting in 2.7 million more students receiving Pell Grants as well as an increase in the average amount awarded from $3,028 to $3,673 (Mitchell et al., 2016). On the other hand, state financial aid has decreased by about 4% since the 2007-08 school year (Mitchell et al., 2016).

Financial Aid

The second main funding method state governments provide is state financial aid. These funds are given directly to students and can take the form of either need based aid or merit based aid. Need based aid is awarded to students based on financial need as calculated through the family's expected family contribution. Merit based aid is state financial aid awarded based on factors not related to a student's financial need, for instance performance in high school or on standardized admission tests such as the ACT or SAT. Historically, the vast majority of state financial aid was in the form of need based aid, and merit aid programs were very small, specialized programs targeting groups such as widows or children of police officers killed in the line of duty (Heller, 2011). In 1993, Georgia created the Helping Outstanding Pupils Educationally (HOPE) program, and its popularity inspired many other states to create similar merit aid programs (Heller, 2011). The differing foci of merit and need based aid lead to significantly different funding distributions across income brackets. For instance, Heller (2011) found that 85% of state need based aid went to students in the lower or lower middle income quartiles, versus only 44% of merit based aid. This raises questions regarding the relationship between income and the measures used to assess academic merit (Heller & Marin, 2004).

Total state financial aid in 2014-15 was $12.4 billion (National Association of State Student Grant and Aid Programs [NASSGAP], 2016). In 2014-15, state aid constituted 13% of total state financing of higher education (The College Board, 2016). Once again, there was great variance among the states, ranging from 0% in New Hampshire to 38% in South Carolina (The College Board, 2016). The shift toward merit aid has been increasing over the years, and in 2014-15, 24% of all state aid was in the form of merit aid (The College Board, 2016). Yet again, there is great variance among the states, with 26 states allocating 95% or more of their aid in a need based manner and 14 states allocating more than half of their aid via merit based formulas (The College Board, 2016).

Accountability in Higher Education

Discussions regarding accountability have occurred in higher education since the 1970s. However, in recent years they have taken a significant shift toward discussions of efficiency and demands for the production of specific results (Burke-1, 2005). Several factors have driven the emphasis on efficiency. First, the massification of higher education has turned the college degree into a necessity for most Americans and brought the higher education sector into the spotlight (Burke-1, 2005).
Second, economic competitiveness and the belief that the United States is falling behind in the knowledge economy have increased the attention on higher education's ability to produce graduates with skills and abilities for today's labor market (Burke-1, 2005). Third, the recent economic downturn has strengthened this call for greater accountability due to the financial pressure placed on states and legislators' focus on the rising price of higher education (Powell et al., 2012). These three factors, combined with the shifting social contract, have led to a strong focus on accountability in higher education and a demand for quantified proof of the economic returns on the public investment in higher education.

Defining Accountability

Burke (2005) stated that accountability is "the most advocated and least analyzed word in higher education" (p. 1) and that the term is frequently used but with multiple and often unclear meanings. Burke (2005) outlined a framework for defining accountability, which states that accountability imposes six demands on higher education institutions. First, institutions must exhibit proper use of their powers. Second, they must demonstrate work toward achieving their mission. Third, they must have some method of reporting their performance to the public. Fourth, they must prove efficient use of the resources they receive and demonstrate effective outcomes from these resources. Fifth, they must demonstrate quality control. Finally, they must exhibit a commitment to meeting public needs.

Additionally, several forms of accountability exist, which further complicates the ability to reach agreement on what accountability should entail. First, upward accountability is the traditional reporting structure to a superior and covers what is commonly considered procedural, bureaucratic, or legal accountability (Burke-1, 2005). Examples for higher education institutions include reporting to a Board of Trustees or reporting for state appropriations hearings. Second, downward accountability centers on a manager's responsibility to his or her subordinates (Burke-1, 2005). An example in higher education institutions is a president responding to requests from the academic senate. Third, inward accountability emphasizes professional or ethical standards, and in higher education it is demonstrated through professional accountability such as accreditation (Burke-1, 2005). Finally, outward accountability focuses on responding to external constituents, stakeholders, and the public at large (Burke-1, 2005). In higher education institutions this includes performance reporting, report cards, and assessments. This literature review focuses on this last form of accountability and specifically on performance funding policies.

Higher education accountability is a double-edged sword. If it is too strict and regulatory, higher education leaders view it as bureaucratic oversight and an infringement of academic freedom (Burke-1, 2005; Burke & Minassians, 2002; Toman, 2008). On the other hand, if the goals are overly broad or there is not enough "teeth" to the policies, legislators fear it will be ineffective (Toman, 2008). Rutherford and Rabovsky (2014) noted that a frequent reason accountability measures struggle is that policy makers fail to grant freedom for experimentation and innovation in the policy; instead, they impose accountability without removing any of the red tape. To understand this dichotomy, it is helpful to explore the history of accountability in higher education.

History of Accountability

How states hold their higher education institutions accountable has changed over the years, from system efficiency, to educational quality, to productivity, to response to market demands (Burke-1, 2005). The following is a brief summary of the accountability movement in the United States since the 1970s.

The emphasis on accountability in higher education is buttressed by society's shifting view of its social contract with higher education. Historically, the social contract involved state leaders adequately funding autonomous institutions in exchange for those institutions serving the common good of the state (Thelin, 2004). However, this began to shift in the 1970s with state revenue declines due to a falling economy. To create more efficient higher education systems, states created centralized higher education governing boards and increased regulation on previously autonomous institutions (Burke, 2005). Additionally, the public began to question the benefits of higher education as the price of education increased (Toman, 2008).

In the 1980s, states continued to increase regulation over higher education institutions, this time with a focus on student learning assessment (Burke, 2005). More than 30 states mandated that higher education institutions create plans to assess student learning; however, they allowed institutions the autonomy to determine the knowledge students should possess, develop how best to assess student learning, and decide how to use the results to improve institutional performance (Burke, 2005). The 1980s also saw the beginnings of a larger shift in the social contract, with the start of significant discussions over the purpose of higher education (Ewell, 1994). These discussions led to viewing higher education less as a public utility and more as a strategic investment, and in turn, accountability began to focus on the return from that investment (Ewell, 1994).

Despite policymakers' desire for demonstrated return on their investment, most institutions focused their assessment programs on internal institutional improvement and neglected the external demands of demonstrated results from accountability (Burke & Minassians, 2002). Due to this neglect, the 1990s began a shift beyond just increased regulation, with legislators connecting increased performance expectations to financial support from the state (Burke, 2005). Further, state officials perceived the inability to compare institutions as a significant flaw in previous accountability models and felt that comparing institutions was the best way to assess performance (Burke & Minassians, 2002). From this desire to benchmark institutions against each other came the rise in prominence of annual rankings of institutions, such as the U.S. News and World Report rankings, which are still prevalent today (Toman, 2008). This time period also saw the creation of performance funding policies in many states as legislators sought to press for demonstrated results and the ability to compare institutions (Burke & Minassians, 2002).

The turn of the century again saw declining state revenues from another recession, which in turn led to further cuts to higher education (Burke, 2005). Accountability policies shifted to prioritizing workforce needs and other state priorities over academic concerns, with state policymakers seeking to influence the behavior of institutions in order to align institutional performance with state goals (Burke, 2005; McLendon, Hearn & Deaton, 2006).
Within a period of three years, reports by the NGA, NCSL, and SHEEO all advocated for higher education policies focused on state needs and economic competitiveness (NGA, 2007; NCSL, 2006; SHEEO, 2005). The renewed push for accountability was spurred by the public's frustration with the quality of student learning, higher education's focus on research and graduate education, the expanding number of administrative staff on campus, low completion rates, and the limited employability of graduates (Burke, 2005). Further, legislators went beyond a push to compare institutions and began to advocate for the standardization of higher education, believing this would aid the comparability of institutions and enhance the measurement of quality (Eaton, 2012).

The rising price of higher education coupled with the erosion of public trust has renewed the accountability movement in recent years (Volkwein & Tandberg, 2008). Within this resurgence, organizations such as the Gates Foundation and the Lumina Foundation have emphasized student completion and success (often defined as employment and salary) after graduation. To this end, accountability has taken a market orientation focused on economic outcomes and efficiency (Burke, 2005; McMahon, 2009; Toman, 2008). Simply defined, efficiency is the ratio of outputs to inputs; for example, degrees produced per dollar of state appropriations. State leaders desire greater efficiency from higher education and therefore advocate for institutions to increase outputs, decrease costs, or both. In higher education policy, efficiency models emphasize a narrow view of outputs with a focus on market-based benefits (e.g., graduation rates, post-graduation salary, faculty research productivity) (McMahon, 2009; Slaughter & Rhoades, 2004). Performance funding policies are an example of this push for efficiency, and legislators are once again advocating for and implementing these policies. The renewed interest in performance funding is to be expected within the current focus on efficiency, as many metrics used in these policies measure efficiency (Burke & Serban, 1998). In line with a market orientation of higher education, state legislators have also decreased financial support for higher education on the premise that higher education is a private good and that those who directly benefit from the education (the students) should therefore pay a greater share of its costs (Slaughter & Rhoades, 2004).

Forms of Accountability

Forms and measures of accountability have progressed and expanded as the accountability movement has evolved. However, no clear agreement on the best accountability measures exists. The following have all been mentioned as primary accountability measures or key principles of accountability: accreditation, licensure, alumni studies, academic program review, outcomes assessment, faculty productivity measures, student unit data systems, college spending measures, rankings such as U.S. News and World Report, end-of-program assessments such as the Collegiate Learning Assessment, voluntary assessment measures such as the College Portrait or the Access to Success initiative, and state report cards (Bogue & Hall, 2002; Carey & Schneider, 2010; Toman, 2008).

Accreditation is the oldest and best-known form of higher education accountability. Its central role of providing quality assurance for higher education has existed for over a century.
Accreditation is the primary method of internal accountability; it is a process based on self-review and peer assessment that covers an institution's financial health, governance structure, faculty and staff relations, achievements, student services, and student learning outcomes (Volkwein, 2009). Mirroring trends in accountability as a whole, since 1990 accreditation has seen a growing focus on outputs and outcomes, including graduation rates (Volkwein, 2009). However, such a focus on end-of-the-line results draws attention away from the whole educational process and may lead institutions to miss key information that could improve student learning (Volkwein, 2009). Some accreditation organizations have recently taken steps to alleviate this concern and encouraged institutions to focus on data across the whole institutional timeline (Volkwein, 2009).

Push for External Accountability

Despite accreditation's long history, policymakers are pushing for a decreased focus on internal monitoring and for greater external accountability. Volkwein (2009) noted the struggle accreditation faces between its internal role of self-assessment for improvement and its external role of accountability to taxpayers, legislators, and students. Legislators fear that institutions are unwilling to highlight their areas of weakness publicly and therefore believe external oversight is needed. Two main reasons are given for the need for increased external accountability.

First, institutions are not viewed as credible judges of their own quality, so external assessment is believed to be necessary. The changing social contract has led to an attack on the current system of accreditation, with calls for its overhaul or even complete removal (Eaton, 2012). Accrediting organizations have existed since the early 1900s, but only since the 1950s have they taken on the gatekeeping role of determining academic quality for the federal government in order for institutions to be eligible for federal funds (Eaton, 2012). In recent years a belief has emerged that higher education institutions should be held to greater accountability by the government, and this new process has taken a more regulatory and market focus than the traditional collegial model of accreditation (Eaton, 2012). For instance, the 2008 reauthorization of the Higher Education Act included governmental control in areas previously left entirely to institutions, such as transfer of credit, student achievement, and distance learning (Eaton, 2012). The results of the accountability movement's attacks on accreditation can already be seen in the increased focus throughout the process on detailed outcome measures and a high level of concern about costs and productivity (Bardo, 2009). Part of this lack of trust in accreditation arises from a perceived secrecy in the process. Regional accreditation groups are criticized for keeping their discussions private and limiting the amount of information shared with external stakeholders such as parents and students (Bardo, 2009). Critics of the current system feel these conversations should be public and that stakeholders should know the results and changes that arise from an accreditation cycle so that customers (students and their parents) can make informed decisions (Eaton, 2012).
A second reason given for the need for external monitoring is a discrepancy between how institutions measure and view quality and how external stakeholders, in particular state legislators, measure quality and impose accountability. Institutional leaders tend to measure success through a resource/reputation model that represents professional accountability (Burke, 2005; Volkwein, 2009). In this structure, an institution's quality is measured through its rankings, level of financial resources, faculty credentials, student achievement, and fundraising, using methods such as peer review, internal institutional improvement, reputation, and rankings (Burke, 2005; Volkwein, 2009). Policymakers, on the other hand, focus on a strategic investment model of success characterized by political accountability (Burke, 2005; Volkwein, 2009). In this model, quality is measured through return on investment, efficiency, and productivity of outputs, using methods such as external regulation and performance reporting, budgeting, and funding (Burke, 2005; Volkwein, 2009). Within these competing frameworks, state legislators often feel that higher education institutions focus on the wrong things and waste taxpayers' money, while institutions feel that legislators do not understand higher education's structure and values.

Performance Funding

The push for external accountability and a focus on efficiency has led many states to adopt performance funding policies. As states pass performance funding policies, the literature on these policies has also grown. That literature has focused on several areas: defining performance funding, analyzing its spread, identifying concerns with it, recognizing best practices for policy design, and exploring the policies' effectiveness. Each of these topics is explained in detail below.

Defining Performance Funding

Performance funding is part of a growing trend in accountability that also includes performance-based budgeting and performance reporting. It is important to understand the similarities and differences among these three accountability measures. All three are implemented with the purpose of demonstrating external accountability, improving institutional performance, and responding to state needs (Burke & Minassians, 2002). However, the three differ in how closely, if at all, institutional performance is tied to state funding.

Performance reporting requires institutions to report their performance on various metrics but does not tie any funding to how institutions perform on those metrics. It stems from the belief that institutions will perform better when they know their outcomes will be made public (Burke, 2005). Legislators specify a list of metrics that all public institutions must report annually (Burke & Minassians, 2002). These reports go to governors, legislators, and higher education coordinating boards, and are also posted on institutional or system-wide websites (Burke, 2005). Performance budgeting allows funding entities to consider performance on various metrics as one factor when deciding funding amounts, but there is no specific formula connecting funding to performance. In other words, an institution receives its determined state funding allocation, and any additional funds for performance are determined solely by the judgment and discretion of state legislators (Burke, 2005).
Finally, performance funding policies connect specific performance outcomes on state-selected measures to increased state funding (Goldstein, 2012). The key factor that separates performance funding from performance budgeting is that the financial relationship is predetermined and specific: a particular score on a metric equates to a specific funding amount (for example, a policy might award a fixed dollar amount per degree completed). Through performance funding policies, institutions are held accountable for meeting outcome targets as part of the state appropriations process. Performance funding policies allow state officials to emphasize metrics, such as graduation rates, financial stewardship, credit hours earned, wages of graduates, and research grants, that they believe will better incentivize efficiency.

Performance funding stems from a resource dependence framework: legislators believe that institutions will strive to improve their scores on the various metrics in order to maximize their funding. Beyond maximizing funding, legislators promote numerous benefits of performance funding for institutions. State leaders suggest that performance funding policies can provide administrators with information about their performance as well as how they compare to their peers (Dougherty & Reddy, 2013). Additionally, state leaders emphasize that performance funding focuses on outcomes instead of prescribing behaviors, thereby allowing institutions to maintain autonomy and decide how they will respond to the policy mandates (Burke & Minassians, 2002; Rutherford & Rabovsky, 2014). Finally, performance funding policies, with their formalized and explicitly defined performance metrics, make clear to higher education institutions the values and preferences state legislators will reward (Rutherford & Rabovsky, 2014; Sanford & Hunter, 2011).

Metrics used in performance funding are connected to state priorities and therefore vary from state to state and even over time within the same state. However, some metrics are commonly seen across performance funding policies. Burke and Serban (1998) examined metrics across policies in 1998 and found the following to be most common: graduation and retention rates, job placement, student transfer rate, faculty workload, and time to degree. Performance funding policies claim to shift funding (and therefore higher education's focus) from input measures (resources received) and process measures (methods used to deliver services) to state-focused outcome measures (the quantity or quality of the products produced) (Burke, 2005). However, Burke and Serban's (1998) study also indicated that only 12% of all metrics focused on outcomes and only 19% on outputs, while 48% focused on process. It should be noted that this study examined older performance funding policies, and new research should explore whether this pattern persists in current policies.

Performance funding is often thought of as occurring in two waves, frequently referred to as performance funding 1.0 and performance funding 2.0. The adoption of performance funding 1.0 took place between 1979 (when Tennessee implemented the first policy) and 2000 (Dougherty et al., 2014b). In this version, performance funding is a bonus over regular state funding and involves small amounts of funding, usually between 1 and 6 percent (Dougherty et al., 2014a). In performance funding 2.0, starting around 2007 and continuing to the present, the funding is embedded into the normal state funding for higher education (Dougherty et al., 2014b).
About two-thirds of the programs created in the second wave are readoptions of previously discontinued policies (Dougherty et al., 2014a). The amount of funding in 2.0 versions is also usually higher, with between 5 and 25 percent of all state higher education funding coming from performance funding formulas (Dougherty et al., 2014a).

Spread of Performance Funding

As of July 2015, 32 states have active performance funding policies and five more have programs in the process of being implemented (NCSL, 2015). Figure 1 shows the states with performance funding and those in transition. Authors attribute the spread of performance funding and related accountability measures to diverse factors, including globalization, pressure to maximize productivity and efficiency, a shift toward marketization of higher education and governance, financial pressures from the Great Recession, changes in political leadership, and growing frustration with the voluntary self-assessment of higher education (McLendon et al., 2006; Dougherty et al., 2014b).

Several studies have empirically explored the spread of performance funding. McLendon et al. (2006) used an event history analysis to explore which sociopolitical factors influence a state's adoption of performance funding, performance-based budgeting, or performance reporting policies (a generic sketch of this modeling approach follows below). Of the ten areas they explored, legislative party strength and higher education governance arrangements were the primary drivers. For performance funding specifically, a higher percentage of Republican state legislators and the absence of a consolidated governing board increased the probability of a state adopting a performance funding policy. The authors believe that Republicans favor performance funding over performance budgeting or performance reporting because it grants legislatures the strongest leverage for accountability (McLendon et al., 2006). Beyond Republican state legislators, governors who identify as Republican also seem to favor performance funding policies: since 2007, nine states with Republican governors have implemented performance funding policies (Dougherty et al., 2014a). Of note, despite the regional pattern in the spread of performance funding, McLendon et al. (2006) found no evidence that having neighboring states with performance funding increased the likelihood of a state adopting a performance funding policy.

[Figure 1. States with performance funding policies active and in transition as of July 2015. Source: National Conference of State Legislatures.]

Dougherty et al. (2014b) examined how political forces influence the adoption of performance funding through a qualitative analysis in three states (Indiana, Ohio, and Tennessee). Through interviews with state legislators, higher education administrators, state governing board members, and business leaders, the authors determined that the main supporters of the adoption of performance funding policies were the governor and the state higher education coordinating board, and to a lesser extent legislative leaders and business leaders (Dougherty et al., 2014b). This advocacy coalition arose from the shared beliefs that states needed to increase the number of residents with college degrees in order to meet economic demands, that the recession required higher education to become more efficient, and that a performance funding policy would allow the previous two goals to be achieved (Dougherty et al., 2014b).
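As a methodological note on the approach McLendon et al. (2006) employed: event history analysis models each state's annual probability of adopting a policy, given that it has not yet done so. A generic discrete-time specification, offered here only as a sketch of the approach and not as the authors' exact model, is:

Pr(A_{st} = 1 | A_{s,t-1} = 0) = logit^{-1}(\beta_0 + \beta_1 RepLeg_{st} + \beta_2 Board_{st} + X_{st}' \gamma)

where A_{st} indicates adoption by state s in year t, RepLeg and Board stand in for the legislative party strength and governance arrangement measures, and X_{st} collects the remaining sociopolitical covariates. States leave the risk set once they adopt a policy.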
The role of foundations in policy spread and formation is well documented, and foundations have played a significant role in the spread of performance funding. The Lumina Foundation is a strong supporter of performance funding (Dougherty et al., 2014b) and has funded many initiatives focused on enhancing or creating performance funding and other efficiency-oriented reforms. Complete College America (CCA) is another strong supporter of performance funding (Tandberg & Hillman, 2014). In 2009 CCA recommended a series of reforms for higher education, one of which was implementing performance funding (CCA, 2012). Thirty-five states had signed on to these reforms (though not necessarily implemented performance funding) by 2013 (CCA, 2016).

While performance funding is increasing in prevalence, it has a tumultuous past. Two-thirds of the performance funding policies that are implemented are later discontinued for at least some period of time (Dougherty et al., 2014a). In fact, of the 18 states that adopted performance funding between 1979 and 2012, only a few maintained the policy for more than four consecutive years (Rutherford & Rabovsky, 2014). Researchers have discussed three main forces contributing to the closure of performance funding policies: opposition from higher education institutions that felt they were not consulted in the process and had significant concerns with the policies, economic recessions, and the loss of political champions (mainly through election losses or term limits) (Dougherty, Natow & Vega, 2012; Rabovsky, 2012; Rutherford & Rabovsky, 2014).

Despite performance funding's spread across states, the policy seems to struggle to be disseminated within institutions. Senior administrators are familiar with the policy; however, lower-level administrators and faculty seem unaware of its specifics (Burke, 2002; Burke, 2005; Dougherty et al., 2014a; Freeman, 2000; Hall, 2000; Schaller, 2004). Relevant research has attributed this knowledge stratification to two causes. First, there is a perception of role responsibility: high-level administrators see themselves as responsible for institutional implementation of and accountability for performance funding and do not feel it needs to concern those below them in the campus hierarchy (Freeman, 2000; Hall, 2000). Second, since performance funding usually goes into the university general fund, there is a perception that only those connected to and responsible for the general fund (usually high-level administrators) should be concerned with performance funding metrics (Freeman, 2000; Hall, 2000).

Concerns with Performance Funding

Despite performance funding's spread, several concerns exist, including which data are used in the policies, how to account for institutional differences, and whether the funding can incentivize institutional behavior. First, higher education leaders hold significant reservations regarding the appropriateness of various metrics used in performance funding policies (Dougherty & Reddy, 2013; O'Neal, 2007; Opoczynski, 2014; Sanford & Hunter, 2011). Most notably, there is great concern regarding the measures of graduation and retention, which, due to the way IPEDS measures these metrics, capture only a limited slice of the students attending four-year institutions and completely ignore transfer and part-time students (O'Neal, 2007; Opoczynski, 2014). Further, there is debate about how the data should be measured and analyzed (Opoczynski, 2014; Sanford & Hunter, 2011).
For instance, should the metric count the raw number of students graduating or use graduation percentages, and should institutions be compared to other institutions in the state, to their peers, or to their own historical performance (Opoczynski, 2014; Sanford & Hunter, 2011)? (A hypothetical comparison at the end of this section illustrates how much this choice can matter.) Additionally, concerns exist regarding which measures (graduation rates, retention, course completion, etc.) best capture student success (Sanford & Hunter, 2011). As indicated above, defining success in higher education is difficult, and different performance funding policies utilize different definitions and metrics to measure student success.

Another concern regarding performance funding policies is how to account for institutional differences (Hall, 2000; O'Neal, 2007). Higher education administrators often feel that performance funding fails to account for the varied missions of higher education institutions and their varying student demographics (Hillman, Tandberg & Fryer, 2014). During implementation, higher education institutions, especially open access institutions, frequently express reservations about performance funding policies, including worry that the policies do not reflect their mission (Dougherty et al., 2014b). Further, Rutherford and Rabovsky (2014) argue that performance funding policies apply a one-size-fits-all mentality to institutions and ignore many contextual factors that may influence an institution's ability to respond to the performance funding mandate.

A final concern with performance funding is whether the limited funding in many states can actually incentivize institutions to change their behavior (CCA, 2012; Dougherty & Hong, 2006; Dougherty & Reddy, 2013; Jenkins et al., 2012; Opoczynski, 2014; Rutherford & Rabovsky, 2014; Sanford & Hunter, 2011; Shin, 2010). In many states, the portion of funding tied to performance metrics is less than 5% (Sanford & Hunter, 2011). Such small amounts may not be viewed as meaningful incentives, especially in relation to other possible revenues (CCA, 2012; Sanford & Hunter, 2011). In a recent analysis of Michigan, where performance funding accounts for just 2% of state funding, many administrators commented that they followed the performance funding metrics to ensure they were in compliance but that the possible additional funds did not significantly alter their behavior (Opoczynski, 2014). While there is agreement that small amounts of funding may not change institutional behavior, there is no clear consensus on an amount that would be viewed as "enough." In fact, when Tennessee doubled the money allocated via performance funding in order to increase the incentive for institutions, there was no change in institutional retention rates (Sanford & Hunter, 2011). Further, performance funding policies assume that institutions are intentionally not performing at optimal efficiency or have misaligned priorities and are simply waiting for financial motivation to change their performance (Rutherford & Rabovsky, 2014). However, administrators in Opoczynski's (2014) study stated that the policy did not alter their behavior substantially because the behaviors it promoted, such as increased graduation rates, were values the institution already held and was already striving to improve.
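To illustrate the raw-count versus rate debate noted above, consider two hypothetical institutions (the numbers are invented purely for illustration). Institution A graduates 3,000 of its 6,000 entering students, while Institution B graduates 1,200 of its 2,000:

A: 3,000 / 6,000 = 50%        B: 1,200 / 2,000 = 60%

A raw-completions metric ranks A well ahead of B (3,000 versus 1,200 graduates), while a graduation-rate metric ranks B ahead of A (60% versus 50%). Each definition rewards a different institutional profile: counts favor large institutions, while rates favor smaller or more selective ones, which is precisely why the choice of metric is contested.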
Best Practices in Performance Funding

Various researchers and policy organizations have promoted several principles believed to lead to successful implementation of performance funding, including ways to overcome the concerns indicated above. Several are outlined in this section.

The first key practice leading to successful enactment of a performance funding policy is agreement and consultation on goals before implementing the policy. CCA (2012) suggested creating a clear and concise public agenda that articulates the needs of the state and how performance funding will address those goals. CCA (2012) also promoted including institutional representatives throughout the process, both for their expert knowledge and because support for the final policy is more likely if representatives are involved from the start. Engaging institutional leaders can also ensure that campus priorities are included in the policy and metrics (Burke, 2005). Dougherty and Reddy (2013) noted that in performance funding 2.0 programs (which are largely influenced by CCA), higher education administrators are consulted more frequently as the policy is crafted, and that this has proven useful for smooth implementation.

Another beneficial practice is to allow institutions to familiarize themselves with the metrics and how they would fare under performance funding with limited or no financial impact (CCA, 2012; Dougherty et al., 2014b). Two possible structures accomplish this: a free year or phasing in the policy over a period of a few years. A free year allows institutions to learn where they fall on the metrics without any financial impact; institutions can use this year to ask questions about the policy and to make changes in preparation for performance funding. Alternatively, the policy can be phased in, starting with a small amount (such as 2%) and increasing each year until the final desired amount is reached (CCA, 2012).

It has also been recommended to take institutional mission into account (CCA, 2012; Dougherty et al., 2014b). While Burke (2005) believes that performance funding will never be able to fully account for diverse institutional needs, several options could take steps toward including institutional mission. First, the performance funding policy could specify different metrics for different institutional types (CCA, 2012). For instance, Ohio has separate formulas for its flagship institutions and its regional campuses (Dougherty et al., 2014b). Similarly, institutions could be compared to their peer institutions instead of against other state institutions (Dougherty et al., 2014a). Michigan's performance funding policy is structured this way: institutions score points based on how they compare to institutions in their Carnegie classification (Opoczynski, 2014). Additionally, an institution's performance could be compared to its own performance over time, which would allow performance funding to reward any positive growth or improvement. Finally, institutions could be allowed to create a "point of pride" metric, giving each institution a personalized metric related to its mission (Burke, 2005).

Another recommendation researchers and policy organizations endorse is including language or policies that protect academic standards and the admission of underserved students (CCA, 2013; Dougherty et al., 2014a; Kelly & Lautzenheiser, 2013).
Specifically, one option is a weighted formula in which "at risk" students are weighted more heavily (Dougherty et al., 2014b). States with weighted metrics often assign these weights significant values, ranging from 40% (Tennessee) to 100% (Texas) (CCA, 2012); under a 100% weight, for example, each at-risk completion would count roughly twice as much toward funding as another completion. Adding this extra weight for at-risk students reduces the likelihood that institutions will simply reduce the enrollment of at-risk students to improve their performance on the funding metrics.

It is further recommended that legislators include only a small number (four or five) of explicit and easy-to-understand metrics that are clearly connected to state priorities and labor market needs (Burke, 2005; CCA, 2012; Kelly & Lautzenheiser, 2013). Limiting the number of metrics helps ensure that performance funding aids institutions in aligning their goals with state priorities. With only a few clearly defined metrics, policymakers can articulate to institutions what the state priorities are and what is expected of institutions (CCA, 2012). Too many metrics can muddle the message regarding state priorities and can also create instances where improving performance on one metric hurts performance on another (Burke, 2005).

Because a large concern with performance funding is the volatility it can create in institutional budgeting, several best practices are suggested to offset these concerns. First, policies could include a stop-loss provision, which would place a limit on how much funding an institution could lose, establishing a maximum loss that an institution could face in any one year (CCA, 2012; Dougherty & Reddy, 2013). Another option is to calculate performance data on a three-year rolling average in order to avoid sharp shifts in funding; each year's allocation would then reflect the mean of the three most recent years' performance (Dougherty et al., 2014b). Likewise, institutions should be rewarded for any improvement, instead of being rewarded only if they reach a specific level of performance (CCA, 2012).

Effectiveness of Performance Funding

Performance funding advocates believe that following the best practices outlined above will increase the effectiveness of performance funding policies. Dougherty and Reddy (2013) describe three levels of impacts through which performance funding must progress to have a positive effect. First, immediate impacts occur when institutions become aware of state priorities and of their own performance relative to those priorities. Next, these immediate impacts must translate into intermediate institutional change, such as modifications of institutional policies, programs, and practices designed to increase performance on state metrics. Finally, these intermediate changes must result in changes to ultimate student outcomes, such as increased graduation rates. It is these ultimate outcomes that performance funding policies seek to change. Each of these levels of impact is explored in this section.

Immediate impacts. In their meta-analysis, Dougherty and Reddy (2013) found that several studies (mostly in the two-year sector) demonstrated that performance funding policies did make higher education administrators more aware of state priorities and led to greater awareness of institutional performance in relation to those priorities. Further, they found evidence that performance funding policies increase competition among institutions (Dougherty & Reddy, 2013).
Limited research has explored these immediate changes; most studies of the effectiveness of performance funding focus on the next two levels of impact.

Intermediate institutional change. Some research on performance funding shows that higher education institutions change their academic and student service policies, programs, and practices when performance funding policies are put in place. For instance, some colleges reported closing programs with low graduation rates or poor job placement outcomes (Dougherty et al., 2014a). Institutions also reported changing course content, instruction, and course sequences to try to increase course completion rates (Dougherty & Reddy, 2013). Further, some institutions reported increasing advising and changing retention programs, registration procedures, and student aid in order to increase retention and graduation rates (Dougherty & Reddy, 2013). These included changes to the reregistration process, changes in financial aid, improvements in academic advising, and changes to job placement supports (Dougherty & Reddy, 2013). Rabovsky (2012) explored intermediate institutional change through a national quantitative analysis and demonstrated that performance funding policies were positively related to expenditures on instruction. However, the effects were minimal: the difference between states with performance funding and those without was less than one percent. When separated into research universities and non-research universities, the effect was more than double for research universities (1.34 percent versus 0.59 percent), yet still small for both (Rabovsky, 2012). This finding is nonetheless noteworthy, as it highlights that performance funding policies may have differential impacts based on institutional classification.

Ultimate student outcomes. When exploring whether performance funding policies increase outcomes such as retention and on-time graduation rates, the results are more mixed. Some studies have shown that the number of graduates increases after the implementation of performance funding (Bell, 2005; Phillips, 2002). However, these studies do not control for the numerous factors besides performance funding (e.g., increased enrollments, changes to financial aid) that could lead to an increase in graduation rates (Dougherty et al., 2014a). Most multivariate studies with extensive controls for other potential causes find little evidence that performance funding policies increase graduation or retention rates at four-year institutions (Rabovsky, 2012; Rutherford & Rabovsky, 2014; Sanford & Hunter, 2011; Shin, 2010; Shin & Milton, 2004; Volkwein & Tandberg, 2008).

Two possible exceptions are difference-in-difference studies by Hillman, Tandberg, and Gross (2014) and Tandberg and Hillman (2014), which found a positive effect of performance funding on graduation rates in some of their models. In an in-depth analysis of Pennsylvania's performance funding policy, Hillman et al. (2014) found performance funding to have a positive impact in three of their ten models. These effects appeared in the models comparing performance funding in Pennsylvania to neighboring states; however, when compared to similar states using coarsened exact matching, the authors found no effect of performance funding (Hillman et al., 2014).
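For readers unfamiliar with the designs used in these studies, a generic two-group, two-period difference-in-difference specification (a sketch of the general design, not the exact model any of these authors estimated) is:

Y_{it} = \beta_0 + \beta_1 PF_i + \beta_2 Post_t + \beta_3 (PF_i \times Post_t) + \varepsilon_{it}

where PF_i indicates institutions (or states) subject to performance funding, Post_t indicates years after implementation, and \beta_3 is the difference-in-difference estimate of the policy's effect. The fixed effects panel models discussed below follow a related logic, absorbing stable institutional differences and common year shocks:

Y_{it} = \beta PF_{it} + X_{it}' \gamma + \alpha_i + \tau_t + \varepsilon_{it}

where \alpha_i and \tau_t are institution and year fixed effects and X_{it} collects time-varying controls.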
In a national study comparing states with performance funding to control states without it, Tandberg and Hillman (2014) demonstrated that performance funding policies had a small effect on graduation rates seven, eight, and eleven years after implementation. The magnitude of these impacts is small but suggests that performance funding may have delayed effects or may become more effective over time. However, Rutherford and Rabovsky (2014) conducted a fixed effects time series analysis with panel-corrected standard errors and found that graduation rates may actually decline over the length of time a performance funding policy is in place. When performance funding 1.0 and 2.0 policies are separated, there is a negative relationship between policy implementation and graduation rates for performance funding 1.0 policies. For performance funding 2.0 policies, the effect was positive, although not statistically significant. Other authors have found no difference based on length of policy implementation (Shin, 2010). More research is needed to explore the relationship between length of policy implementation and student outcomes.

In fact, not only has much research failed to demonstrate significant results from performance funding policies, but some research has shown that variance in higher education performance may be best accounted for by variables outside the control of institutions or even states. Volkwein and Tandberg (2008) demonstrated that in all categories of Measuring Up (state report cards that measure the performance of higher education in the state) data from 2000 to 2006, state characteristics over which policymakers have little control, such as demographics and economic conditions, accounted for the greatest share of variance in scores. In other words, things the state can control, such as higher education governance structures or accountability policies, have limited effect, while things beyond its direct control, such as economic conditions and demographic trends, are the strongest predictors of higher education performance. In fact, one study found that institutional characteristics account for 76% of the variance in graduation rates, while state characteristics (among which the authors included performance funding) account for only 15% (Shin, 2010). This finding suggests that institutional fixed effects may account for differences in graduation rates.

A few studies (e.g., Rabovsky, 2012; Shin, 2010) explore the effect of performance funding on research productivity, such as grant funding received or faculty research produced. Since the scope of this study does not address research productivity, those studies are not discussed in this literature review. It is worth noting, however, that the literature shows limited success of performance funding policies on this outcome as well.

Unintended outcomes. Performance funding policies focus on efficiency, which creates a quasi-market in which keeping costs low while improving outcomes such as graduation rates is valued over other principles. Due to the iron triangle of cost, quality, and access (Immerwahr et al., 2008), this could create unintended outcomes related to access.
Many authors have noted a concern that performance funding policies could create unintended outcomes, in particular institutions favoring better prepared students and closing access to other students (Bell, 2005; Colbeck, 2002; Dougherty et al., 2014b; Lahr et al., 2014; Naughton, 2004; Opoczynski, 2014; Rutherford & Rabovsky, 2014). For instance, higher education administrators are concerned that performance funding policies could result in "creaming" or that open access institutions could lose funding (Dougherty et al., 2014b). Other higher education administrators suggested the possibility of weakened academic standards and restricted admission of underprepared students as a result of performance funding (Lahr et al., 2014). However, this was mentioned only as a possibility; no administrators stated it had been an intentional decision at their institution.

Research exploring whether performance funding policies actually produce these unintended outcomes is limited. Kelchen and Stedrak (2016) explored whether performance funding affects colleges' financial priorities and found that institutions subject to performance funding may change their enrollment patterns. Specifically, institutions subject to performance funding received about $30 less per FTE in Pell Grant revenue than colleges in states without performance funding. Similarly, Umbricht, Fernandez, and Ortagus (2016) examined unintended outcomes of performance funding in Indiana. Utilizing a difference-in-difference technique, they found that performance funding changed enrollment patterns and increased selectivity. Specifically, institutions in Indiana admitted students with higher ACT scores after performance funding implementation when compared with both similar states and Indiana's private institutions. Additionally, the total number of minority students enrolled after performance funding was lower than at both institutions in similar states and private institutions in Indiana.

Enrollment and Retention Literature

In order to address the question of whether performance funding changes the enrollment profile of underrepresented students over time, it is important to understand what other factors may account for changes in the enrollment and retention of students. The vast majority of research exploring the enrollment and retention of students in higher education takes a student-level perspective, exploring factors such as institutional fit, academic readiness, or ability to pay. While the literature from the student perspective is vast, this study explores enrollment and retention from the institutional and state perspectives, and therefore this literature review focuses on those levels.

General Trend Across Race

Higher education enrollment and graduation rates vary considerably across race. Gaps in college enrollment after high school graduation across race have narrowed over time, yet significant differences in college attendance still exist (Baum, Ma, & Payea, 2013). In 2014, 68% of White students enrolled in college directly after high school, compared to 63% of Black students and 62% of Hispanic students (NCES, 2016). While these numbers appear similar, they mask different enrollment patterns by race. Specifically, when limited to enrollment in bachelor's degree programs, 39% of White students enrolled, compared to 24% of Black students and 18% of Hispanic students.
Further, significant differences in high school graduation rates across race result in 44% of all White 18- to 24-year-olds being enrolled in higher education, compared to 36% of Black and 31% of Hispanic 18- to 24-year-olds (Baum et al., 2013).

Though racial discrepancies in preparation for higher education are well documented, they do not explain the growing racial stratification in higher education (Carnevale & Strohl, 2013). Black and Hispanic students still earn degrees at lower rates than their equally performing peers. For instance, 57% of Black students and 56% of Hispanic students who scored at least a 1200 on the SAT earned certificates, associate degrees, or bachelor's degrees, compared to 77% of similarly performing White students (Carnevale & Strohl, 2013). In the next performance group (those who scored between 1000 and 1200 on the SAT), 47% of Black students and 48% of Hispanic students earned some form of degree, compared to 68% of similarly performing White students (Carnevale & Strohl, 2013).

The racial gap continues throughout the higher education pipeline, with large gaps in persistence and completion rates across race. Persistence rates are much higher for Asian American students and White students (69% and 59%, respectively) than for Black students and Hispanic students (46% and 49%) (Chen & St. John, 2011). Graduation rates show similar results, with National Education Longitudinal Study (NELS) data demonstrating that White students had a graduation rate of 70%, compared to 52% for Black students and 49% for Hispanic students (Carnevale & Strohl, 2013). Odds ratios show the same discrepancies: the odds of graduating for Black students were 70% of those for White students, and for Hispanic students 74% of those for White students (Titus, 2006). Similarly, the odds of dropping out were 1.23 times greater for minority students than for White students (Rhee, 2008).

These inequities result in large gaps in bachelor's degree attainment among 25- to 29-year-olds. In 2015, only 21% of Black and 16% of Hispanic 25- to 29-year-olds had a bachelor's degree, compared to 43% of Whites in the same age group (NCES, 2016). There is further stratification in degree attainment by gender, with males having lower rates of degree attainment across all races. Specifically, 11% of Hispanic men, 17% of Hispanic women, 16% of Black men, and 24% of Black women held a bachelor's degree, compared with 35% of White men and 43% of White women (Baum et al., 2013).

General Trend Across SES

Higher education enrollment and graduation rates also vary significantly across income and SES. Recently, enrollment rates for middle- and upper-income students have increased while enrollment rates for low-income students have stagnated, widening higher education participation gaps by income (Baum et al., 2013). In 2014, 81% of high school graduates from higher-income families went straight to college after high school, compared to 64% from middle-income families and 52% from low-income families (NCES, 2016). Higher education completion rates also vary by income quartile. Among students who began higher education in 2003, 58% of students from the highest income quartile had earned a bachelor's degree six years later, compared to 26% of those from the lowest income quartile (with the second and third quartiles at 35% and 44%, respectively) (Baum et al., 2013).
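Because several of the studies cited in this section report odds ratios rather than percentage-point differences, a brief reminder of the measure may be helpful. For two groups with success probabilities p_1 and p_2, the odds ratio is

OR = \frac{p_1/(1-p_1)}{p_2/(1-p_2)}

An odds ratio of 0.70 for Black students relative to White students therefore means the odds of graduating (not the probability itself) are 30% lower, while ratios above 1 indicate higher odds for the first group.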
Many factors may be at play in creating this stratification by income; however, several studies have demonstrated that, after controlling for individual and institutional effects, SES has a statistically significant impact on retention and graduation (Chen, 2011; Chen & St. John, 2011; Sparks & Nunez, 2014). For instance, the odds of dropping out for high-SES students were 62% of those for low-SES students, and the odds for mid-SES students were 81% of those for low-SES students (Chen, 2011). Similarly, high-SES students had significantly higher persistence rates (71%) than low-SES students (44%) and mid-SES students (57%) (Chen & St. John, 2011).

Institutional Factors

Research has shown that a wide variety of institutional factors may influence enrollment and retention rates. The three main categories of institutional factors that influence retention and enrollment rates are student demographics, structural characteristics, and financial characteristics. Each is described in turn below.

Student demographics. Several aspects of overall student demographics are believed to influence institutional graduation rates, including the socioeconomic and racial diversity of the campus and the percentage of students with non-traditional attributes. The percentage of students who receive Pell Grants (a measure of socioeconomic diversity) at an institution is a strong predictor of institutional graduation rates (The Advisory Committee on Student Financial Assistance [ACSFA], 2013; Pike & Graunke, 2014; Titus, 2006). As the percentage of students who receive Pell Grants increases, the six-year graduation rate decreases, from a high of 80% for four-year institutions with fewer than 20% Pell Grant recipients to a low of 25% for four-year institutions with more than 80% of their students receiving Pell Grants (ACSFA, 2013). In fact, the percentage of students who receive Pell Grants accounted for 55% of the variance in graduation rates (ACSFA, 2013). Figure 2 shows six-year graduation rates by percentage of Pell recipients for four-year public institutions in 2010. Additionally, institutional average SES had a significant positive effect on graduation rates beyond the individual effects of student SES (Titus, 2006).

Research has also suggested that the percentage of students who identify as minorities on campus negatively influences retention and persistence rates (Heck, Lam, & Thomas, 2014; Pike, 2013; Rhee, 2008; Ryan, 2004; Scott, Bailey & Kienzl, 2006; Sparks & Nunez, 2014; Titus, 2006). When students who left an institution were separated into those who stopped out, those who transferred to a different institution, and those who dropped out of higher education completely, structural diversity was significantly related to stopping out, but not to transferring to another institution or leaving higher education completely (Rhee, 2008). Yet this finding is not universal; some studies have found that the effect of the proportion of underrepresented minority students on retention or graduation rates was not statistically significant (Chen, 2011; Pike & Graunke, 2014).
[Figure 2. Six-year graduation rates for four-year institutions by percentage of Pell Grant recipients (x-axis: percent of students with Pell Grants, from 0-20% to 80% and above; y-axis: six-year graduation rate). Source: Advisory Committee on Student Financial Assistance.]

Finally, research also shows that other student demographics significantly influence graduation rates. The percentage of students attending part time versus full time is related to retention and graduation rates. Several studies have found that the proportion of full-time students had a positive effect on retention and graduation rates (Pike, 2013; Scott et al., 2006). Pike and Graunke (2014) also found that the percentage of part-time students was negatively related to retention rates; however, once they accounted for unobserved heterogeneity through a fixed effects model, the relationship was no longer statistically significant. Additionally, the proportion of students living on campus is positively related to retention and graduation (Ryan, 2004; Scott et al., 2006). The proportion of non-traditional-aged students had a negative effect on retention and graduation rates (Pike & Graunke, 2014; Scott et al., 2006). However, Pike and Graunke (2014) indicated that the proportion of non-traditional-aged students was significant in one of their models but not the other; this may stem from their use of very conservative t-values, and with more conventional thresholds age would have been statistically significant in both models (Pike & Graunke, 2014).

Structural characteristics. Research also shows that institutional structural characteristics such as size, classification, and selectivity affect institutional retention and graduation rates. The research on size is inconclusive, with several studies finding a significant positive effect on retention and graduation rates (Pike, 2013; Pike & Graunke, 2014; Ryan, 2004) and several others finding no effect (Chen, 2011; Rhee, 2008; Titus, 2006). Research has also consistently suggested that, after controlling for student characteristics, private institutions have higher retention and graduation rates than public institutions (Ryan, 2004; Pike, 2013). Additionally, several studies have noted that classification or mission influences retention and graduation rates (Heck et al., 2014; Pike, 2013; Shin, 2010). Specifically, doctoral universities have higher retention and graduation rates than baccalaureate institutions (Heck et al., 2014; Pike, 2013), while master's institutions have lower rates than baccalaureate institutions (Pike, 2013).

Studies have also demonstrated that institutional selectivity (usually measured by students' average SAT/ACT scores or through Barron's selectivity index) is a significant predictor of retention and graduation rates. Several studies noted that selectivity had a positive and significant effect on graduation and retention rates (Chen & St. John, 2011; Gansemer-Topf & Schuh, 2006; Heck et al., 2014; Pike, 2013; Pike & Graunke, 2014; Scott et al., 2006; Shin, 2010; Sparks & Nunez, 2014; Titus, 2006). However, this finding was not unanimous, with Chen (2011) finding that selectivity had no statistically significant effect.
Specifically, the likelihood of retention, persistence, and completion increases with institutional selectivity: students enrolling in a highly selective institution had a retention odds ratio of 1.81 (Sparks & Nunez, 2014), a persistence odds ratio of 2.1 (Chen & St. John, 2011), and a completion odds ratio of 1.281 (Titus, 2006) compared to students at other institutions. When leaving behavior was separated into stopping out, transferring out, and dropping out, selectivity had a significant effect on the odds of stopping out and of transferring to another institution, but not on the odds of dropping out of higher education completely (Rhee, 2008).

Financial characteristics. Financial characteristics such as expenditures on instruction, academic support, and student services are also related to retention and graduation rates, although the findings have been inconsistent. Some studies have found that the amount of money an institution spends on instruction-related activities contributes positively to retention and graduation rates (Gansemer-Topf & Schuh, 2006; Ryan, 2004; Scott et al., 2006), while others found no significant relationship (Chen, 2011; Titus, 2006) and one indicated a negative relationship (Heck et al., 2014). Similarly, money spent on academic support improved graduation rates in some studies (Gansemer-Topf & Schuh, 2006; Ryan, 2004) and had no significant relationship in others (Chen, 2011; Titus, 2006). Further, expenditures on student services had a statistically significant effect on graduation rates in one study (Chen, 2011), no effect in others (Ryan, 2004; Titus, 2006), and a negative effect on retention rates in one study (Gansemer-Topf & Schuh, 2006). Some of these variations could arise from differences in how the data were computed, the samples used, or the analytical methods. For instance, some studies used straight dollar amounts for expenditures, while others used percentages across all expenditures, and still others calculated expenditures per FTE student. Additionally, methods varied across the studies, including grouped logit regression, structural equation modeling, OLS longitudinal regression, and event history analysis.

State Factors

Beyond institutional-level factors, several state-level factors have been noted to influence institutional retention rates. While the research in this area is limited, recent studies have identified financial factors that influence enrollment and graduation rates, as well as a few state demographic variables. Specifically, the level and type of financial support a state provides for higher education, the size of the college-eligible population, and unemployment rates all influence retention and completion rates.

Financial. The amount of money a state spends on higher education overall, through appropriations and student financial aid, seems to have a positive effect on enrollment and graduation rates. Toutkoushian and Hillman (2012) demonstrated that total state financial assistance per capita had a significant positive effect on students attending higher education; however, the effect was no longer statistically significant when four-year institutions were examined separately. Most research exploring the effect of state appropriations on higher education performance across all institutional types has found a strong positive relationship (Shin, 2010; Titus, 2009; Toutkoushian & Hillman, 2012). This includes research using both direct funding amounts and appropriations per capita.
Specifically, for every 10% increase in state appropriations for higher education, bachelor's degree production in the state would increase by 3% (Titus, 2009). However, other research that disaggregates by institutional type suggests possible interaction effects between state appropriations and institutional type. For instance, the amount of state appropriations per capita decreased the likelihood of enrollment in both two-year institutions and competitive four-year institutions, but had no effect on non-competitive four-year institutions (Kim, 2012). Further, the effect of state appropriations varies with a state's political culture (Heck et al., 2014). In states classified as Individualist (focused on limited government, social and economic self-improvement, and a marketplace orientation), state appropriations were negatively related to graduation rates (Heck et al., 2014). In Traditionalist states (focused on maintaining established values and structures and limited spending on social programs), state appropriations were positively related to graduation rates (Heck et al., 2014).

The amount of money a state provides in merit- or need-based grants also has a positive effect on enrollment and retention rates (Chen & St. John, 2011; Kim, 2012; Titus, 2009; Toutkoushian & Hillman, 2012; Zhang & Ness, 2010). Simply having a state merit aid program increased college going on average by 4.1%, though this is only slightly higher than the comparison groups, which saw increases of 3.5%, 2.8%, and 2.1% (Zhang & Ness, 2010). Further, most of the increase in enrollment rates occurred at research and doctoral institutions, which saw an average increase of 7.6%, while the change at other institutions was not statistically significant (Zhang & Ness, 2010). The amount of merit-based aid also increases college enrollment rates. Specifically, for each $1,000-per-capita increase in merit-based aid, college-going rates would increase by 14.45% across all institutions and 6.54% at four-year institutions (Toutkoushian & Hillman, 2012). Additionally, merit-based aid was positively related to persistence, with a one-standard-deviation increase in merit-based aid resulting in a 4% increase in the odds of persistence (Chen & St. John, 2011). However, when Georgia was excluded (as an outlier in merit-based aid), the effect was no longer significant. The amount of tuition the grants covered was also related to persistence. In particular, for each 1% increase in tuition covered by merit-based aid, the odds of persisting increased by 1% (Chen & St. John, 2011). The relationship between persistence and the percentage of tuition covered by the aid remained significant even when Georgia was excluded.

The results for need-based aid are more complicated. Some research shows that the amount of state need-based aid per FTE student was significantly and positively related to the number of degrees produced in the state (Titus, 2009). Additionally, state need-based aid was positively associated with enrollment rates at two-year institutions (Kim, 2012). However, the same research found no statistically significant relationship between state need-based aid and enrollment at four-year institutions (Kim, 2012). Further, Chen and St. John (2011) found no statistically significant relationship between need-based aid and the odds of persisting; they did, however, find a statistically significant relationship between the amount of tuition covered by need-based aid and the odds of persisting.
Specifically, for each 1% increase in tuition covered by need-based aid, the odds of persisting increased by 2% (Chen & St. John, 2011).

Population data. A state's demographic makeup also influences enrollment and completion rates in the state. Research has shown that the size of a state's traditional college-age population negatively influences higher education completion rates (Bound & Turner, 2007; Bound, Lovenheim, & Turner, 2010). Specifically, a 10% increase in the traditional college-age population leads to a 4% decrease in the bachelor's degree completion rate (Bound & Turner, 2007). Similarly, increases in the high school graduate population were negatively associated with a state's score on the completion category in Measuring Up data (Volkwein & Tandberg, 2008). While this may seem surprising, Bound and Turner (2007) hypothesize that a crowding-out effect occurs: states are not able to fully support their college-age population, and completion rates therefore decrease. Relatedly, a higher percentage of residents with a bachelor's degree leads to a larger number of students attending higher education (Toutkoushian & Hillman, 2012).

Some research has also found that state or county unemployment rates can influence persistence and graduation (Sparks & Nunez, 2014; Shin, 2010), but this finding is not unanimous, as other research has found that unemployment rates do not influence degree production (Titus, 2009). County unemployment rates can influence persistence, with students more likely to persist if their institution was located in a county with higher than average unemployment (Sparks & Nunez, 2014). More broadly, state unemployment rates had a negative effect on graduation, meaning that as unemployment rose, graduation rates fell (Shin, 2010).

Conclusion

This chapter has outlined the literature on higher education funding (with a specific focus on state funding), the accountability movement, and performance funding specifically, as well as the literature on enrollment and retention from institutional and state perspectives, with a focus on underrepresented students of color and low SES students. While the research in these areas is vast, limited research has explored how higher education institutions balance the competing demands of demonstrating efficient economic outcomes via performance funding while maintaining access and success for all students, particularly in the four-year sector. Specifically, no research has yet quantitatively explored whether performance funding policies influence the enrollment rates of underrepresented students at public four-year institutions. This study seeks to fill this gap in the research on performance funding and underrepresented student enrollment.

Chapter 3

This study explored how performance funding influences the enrollment of underrepresented students through the following research questions:
1) Are there unintended consequences from performance funding policies related to underserved students' enrollment at public four-year institutions?
   a. Does performance funding have an influence on the enrollment of Pell Grant eligible students?
   b. Does performance funding have an influence on the enrollment of underserved minority students?
2) Does institutional type influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   a. Does Carnegie classification influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   b. Does selectivity influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
This chapter outlines the population studied, the data sources, the variables used, and the analytical method used to answer these questions.

Sample/Population

This study focused exclusively on public four-year institutions in U.S. states during the 2000-2014 time period. Criteria for inclusion in the study population were based on the following considerations. The study included only public institutions, as performance funding policies do not apply to the private sector. Institutions were selected in a three-step process. First, for each year, all four-year public institutions in the U.S. that were degree granting were identified. This was done in the IPEDS database by selecting the "Sector of Institution" variable and identifying all "Public, 4 year or above" institutions, as well as the "Degree Granting Status" variable and selecting "Degree-granting." Next, the list was narrowed to ensure that only four-year, degree-granting institutions in U.S. states were included. Using the "Sector of Institution" variable, all institutions listed as "2-year" or "Private nonprofit" were excluded. Additionally, the Carnegie variable was used to drop any institutions listed as Associate's schools or Medical Schools, Health Schools, or Schools of Law. For the years in which it is available, the Carnegie Enrollment Profile was used and all institutions listed as two-year were dropped. Further, all institutions located in Guam, the Northern Marianas, Puerto Rico, the Virgin Islands, or the District of Columbia were excluded. Finally, as performance funding and this study focus on undergraduate students, any institutions that did not grant undergraduate degrees were dropped. Using the "Institutional Category" variable, all institutions listed as "Degree-granting, graduate with no undergraduate degrees," "Degree-granting, not primarily baccalaureate or above," or "Degree-granting, associate's and certificates" were excluded from the study. Additionally, the Carnegie undergraduate profile was used and all institutions listed as "exclusively graduate/professional" or "majority graduate/professional" were dropped. As the last step, all institutions were merged across the years and only institutions that met the above criteria for all years were retained. This resulted in a final dataset of 513 institutions. Because all institutions that met these criteria were included, this study examines the whole population rather than a sample.

Data Sources

Data came primarily from the Integrated Postsecondary Education Data System (IPEDS), a dataset collected annually by the Department of Education's National Center for Education Statistics (NCES). All institutions that receive federal funds from the United States government are required to submit information to the NCES, and all the institutions analyzed in this study submit IPEDS data annually. Data were gathered from the IPEDS data center for the years 2000 to 2014. The IPEDS database was selected for five main reasons: 1) it includes all the institutions in the population and most of the variables of interest, since all institutions selected for this study submit data to the NCES annually.
Additionally, most of the variables of interest are collected by the NCES in IPEDS annually or semiannually, which limits missing data issues; 2) it has specific definitions for all the variables, guaranteeing that each institution uses the same criteria in reporting and thereby increasing validity; 3) it is available for all the years of the study, as IPEDS data have been collected annually since the 1980s, with yearly reports covering the 2000-2014 timeframe of this study; 4) IPEDS is commonly used in higher education research, including numerous studies analyzing performance funding (e.g., Rabovsky, 2012; Rutherford & Rabovsky, 2014; Tandberg & Hillman, 2014), and it is specifically noted as beneficial for research questions related to the external environment, such as a policy change (Jaquette & Parra, 2014); and 5) IPEDS measures and reports data at the institutional level, whereas other databases (e.g., the Delta Cost Project) condense multi-campus systems into one measure, which does not allow proper attention to "parent-child" issues (Jaquette & Parra, 2014).

A few state level variables came from other databases. First, the amount states spend on higher education appropriations was retrieved from the Grapevine database. Additionally, the amounts states spend on need-based and merit-based aid came from the National Association of State Student Grant and Aid Programs (NASSGAP). Finally, the college-age population in a state and the underrepresented minority population came from the U.S. Census database.

Parent-Child Issues

A parent-child issue arises in states with branch campuses or higher education systems. In a parent-child relationship, data from the "parent" institution (e.g., the main campus) is used to report for the "child" (e.g., a branch campus) for at least one IPEDS variable (Jaquette & Parra, 2014). These issues occur in particular in IPEDS Finance data and are prevalent before 2003-2004 (Jaquette & Parra, 2014). As this study includes states with branch campuses and systems and includes years before 2003-2004, it is important to discuss how this concern was addressed. Since this study seeks to explore differences between institutional types (research question #2), it is important to maintain as many institutions at the child level as possible. Therefore, to the extent possible, each institution was treated autonomously in this study. To decrease the likelihood of parent-child issues, the number of IPEDS surveys used was limited; in particular, the IPEDS finance survey was used for only one variable (state appropriations as a percent of total revenue). Using multiple IPEDS surveys increases the likelihood of encountering parent-child issues, and the finance survey is the survey in which such issues are most prevalent (Jaquette & Parra, 2014). Jaquette and Parra (2014) note specifically that analyses that use revenue data but do not use asset/liability measures typically do not have to be collapsed to the system level, because revenue measures are usually reported at the child level. By limiting the number of IPEDS surveys used and using only a revenue variable from the finance survey, the likelihood of parent-child issues should be small. Where data were affected by parent-child issues, variables were collapsed to the parent level through the process outlined in Jaquette and Parra (2014).

Variables

Dependent Variables

Four different dependent variables were used throughout this study.
For the research questions related to Pell Grants, both the percent of First Time Full Time (FTFT) students who received Pell Grants and the average amount of Pell Grant received were used. These present complementary views of the enrollment of Pell Grant students: the first captures any changes an institution may make in the number of Pell Grant students it enrolls, while the latter captures the type of Pell Grant student enrolled. For the research questions related to underserved minority students, both the total number of enrolled underserved minority students and the percent of underserved minority students were used. To create the total underserved minority variable, the numbers of FTFT students who identified as Black, Hispanic, Native American, or Alaskan Native were added together. This number was then divided by the total FTFT enrollment to create the underserved minority percent variable.

Institutional Variables

Several institutional level variables were included in the models. The rationale for including each is explained below, and each variable is defined in Table 2. Institutional factors may influence enrollment rates for underrepresented students of color and Pell eligible students and need to be controlled for. For instance, research consistently finds that receiving financial aid has a significant positive effect on student enrollment behavior (McPherson & Schapiro, 1991; Van der Klaauw, 2002). To control for this, the average amount of institutional aid granted was added as a control variable. Similarly, the cost of attendance can influence enrollment behavior, particularly for students with limited financial resources (Bergerson, 2009). Students frequently make enrollment decisions before tuition and fees are set for the upcoming year; students deciding whether to enroll for the 2015-2016 academic year, for example, will base their decision on the listed 2014-2015 tuition and fees. Therefore, the published tuition and fees for in-state residents in the year prior to starting was included as a control. While some may argue that net price is a more accurate measure of cost of attendance, research has shown that underrepresented students often do not have the cultural capital to understand net price calculations and instead make enrollment decisions based on sticker price (Bergerson, 2009).

Institutional factors may also influence how an institution responds to performance funding. First, an institution's level of selectivity may determine whether the institution is able to make changes to its enrollment profile. For example, an institution that accepts only 50% of its applicants is able to make greater changes to the types of students it admits than an institution that accepts 98% of its applicants. To this end, an institution's admission rate was included as a control. Additionally, institutions may react differently to performance funding depending on how reliant they are on funding from the state. In other words, an institution that is highly reliant on state appropriations may be more likely to make behavioral changes (such as changing its enrollment profile) that could lead to increases in the funding it receives. To account for this, the percent of institutional revenue that comes from state appropriations was included as a control variable. For years in which this variable was not available, it was calculated by dividing total institutional appropriations by the institution's total revenue and multiplying by 100.
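To make these derivations concrete, the following is a minimal sketch of how the underserved minority variables and the calculated appropriations percentage might be computed with Python's pandas library; the file and column names are hypothetical stand-ins rather than actual IPEDS variable codes.

```python
import pandas as pd

# Hypothetical institution-year extract of the IPEDS variables described above
df = pd.read_csv("ipeds_panel.csv")

# USM total: sum of FTFT counts for the underserved minority groups
usm_cols = ["ftft_black", "ftft_hispanic", "ftft_native_american", "ftft_alaska_native"]
df["usm_total"] = df[usm_cols].sum(axis=1)

# USM percent: USM total divided by total FTFT enrollment, as a percentage
df["usm_percent"] = df["usm_total"] / df["ftft_total"] * 100

# Appropriations as a percent of total revenue, for years IPEDS did not report it
df["approp_pct_revenue"] = df["state_appropriations"] / df["total_revenue"] * 100
```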
Results were compared for the years in which both the calculated and reported variables were available, and no differences were noted. Therefore, for the years in which appropriations as a percent of total revenue was not available, the calculated value was used to decrease the number of missing values.

Research questions 2a and 2b ask whether institutional type influences the effect of performance funding on the enrollment of underserved students. To answer these questions, categorical variables were created for Carnegie classification and admission rate. For Carnegie classification, 10 categories were created (three for doctoral institutions, three for master's institutions, three for baccalaureate institutions, and one for special focus institutions). For admission rate, five categories were created: open access (admitting 85-100% of applicants), moderately selective (admitting 75-84.99% of applicants), selective (admitting 50-74.99% of applicants), highly selective (admitting 30-49.99% of applicants), and very highly selective (admitting fewer than 30% of applicants).

State Variables

To distinguish between states with and without performance funding, a dummy variable was created indicating whether the state had performance funding in each year. States that had a performance funding policy in place during the time frame studied were identified by reviewing research articles and meta-analyses of performance funding research and compiling a list of states with performance funding. Where there was unanimous agreement, the state was included with the dates provided. Where there was disagreement, the researcher explored further through state statutes and codes or discussion with state experts and determined whether the state had performance funding and in what years the policy was in place. When only some institutions within a state were subject to performance funding, only those institutions were coded as having performance funding; the other institutions were coded as not having performance funding. A performance funding count variable was also created, counting the number of years the policy had been in place. If a policy was in place before the years of the study, the count continued from the policy's actual start year; for instance, if a policy went into effect in 1995, then the year 2000 would be coded as "6." Table 1 lists the states with performance funding and the years in which they had an active performance funding policy.

Table 1
List of States with Performance Funding and Years

State             Years with Performance Funding
Arizona           2011-2014
Arkansas          2013-2014
Colorado          2000-2004
Idaho             2000-2005
Indiana           2007-2014
Kansas            2002; 2005-2010; 2013-2014
Louisiana         2009-2014
Maine             2013-2014
Michigan          2012-2014
Minnesota         2012-2014
Mississippi       2013-2014
Missouri          1993-2002; 2012-2014
New Jersey        1999-2002
New Mexico        2007-2014
New York          1998-2007 (SUNY); 2000-2014 (CUNY)
Ohio              1998-2007; 2009-2014
Oklahoma          1998-2000; 2002-2010; 2013-2014
Oregon            2000-2001; 2007-2014
Pennsylvania      2000-2014
South Carolina    1996-2004
South Dakota      1998-2002; 2005-2013
Tennessee         1979-2014
Texas             2009-2011
Virginia          2007-2014

Several state level variables were also used as controls. The rationale for including each variable is explained below, and each is defined in Table 2.
Table 2
Variable Definitions and Sources

Percent Pell Grants: The percentage of FTFT degree-seeking undergraduate students who received Pell Grants in the incoming class. (Source: IPEDS)
Pell Grant Amount: The average amount of Pell Grant received by FTFT students. (Source: IPEDS)
USM Total: The total number of FTFT degree-seeking undergraduate students who identify as at least one of the following: Black, Hispanic or Latino, American Indian or Alaskan Native, or Native Hawaiian or other Pacific Islander. (Source: IPEDS)
USM Percent: The percentage of FTFT degree-seeking undergraduate students who identify as at least one of the following: Black, Hispanic or Latino, American Indian or Alaskan Native, or Native Hawaiian or other Pacific Islander. (Source: IPEDS)
Carnegie Classification: An institutional classification coding structure developed by the Carnegie Foundation for the Advancement of Teaching. (Source: IPEDS)
Institutional Aid Amount: The average amount of institutional grants (scholarships/fellowships) received by FTFT students. (Source: IPEDS)
Tuition: In-state published tuition and required fees for the prior academic year. (Source: IPEDS)
Admission Rate: The number of total students admitted divided by the total applicants; ratios are converted to percentages and rounded to the nearest whole number. (Source: IPEDS)
Appropriations as Percent of Revenue: State appropriations divided by total core revenues. (Source: IPEDS)
Appropriations per FTE: The total amount a state spends on tax appropriations for higher education divided by the total FTE enrollment minus medical students. (Source: Grapevine)
State Need Based Aid: The total amount a state spends on need-based aid for higher education divided by the total FTE enrollment in the state minus medical students. (Source: NASSGAP)
College Age Population: The number of residents in the state between the ages of 18 and 24. (Source: US Census)
College Age USM Population: The number of residents within a state aged 18-24 who identify as at least one of the following: Black or African American, American Indian and Alaska Native, Native Hawaiian and other Pacific Islander, or Hispanic. (Source: US Census)
College Age USM Percent: The percent of residents within a state aged 18-24 who identify as at least one of the following: Black or African American, American Indian and Alaska Native, Native Hawaiian and other Pacific Islander, or Hispanic. (Source: US Census)
Affirmative Action Ban: Whether a state had an affirmative action ban in place at any point during the study years. (Source: various sources)

Financial characteristics have been shown to influence student enrollment in higher education. Significant research has found a relationship between state appropriation amounts and students attending and succeeding in higher education (Kim, 2012; Shin, 2010; Titus, 2009; Toutkoushian & Hillman, 2012). Additionally, the amount of money a state provides in merit- or need-based grants has a positive effect on enrollment and degree production rates (Kim, 2012; Titus, 2009; Toutkoushian & Hillman, 2012; Zhang & Ness, 2010). Both are believed to represent the state's overall commitment to higher education. To account for this, total state appropriations for higher education was included as a control variable. Additionally, this study controlled for the amount of money the state spends on higher education need-based aid. Both of these variables were divided by total FTE enrollment (net of medical students) in the state to account for differences in population.
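As an illustration of this normalization (together with the inflation adjustment described below), the following sketch uses hypothetical column names; the CPI ratios are illustrative placeholders rather than the published index values.

```python
import pandas as pd

# Hypothetical state-year table merging the Grapevine and NASSGAP measures
states = pd.read_csv("state_panel.csv")

# Express appropriations in constant 2014 dollars; illustrative ratios for a
# subset of years only (a full table would cover 2000-2014)
cpi_to_2014 = {2000: 1.38, 2007: 1.15, 2014: 1.00}  # placeholder values
states["approp_real"] = states["appropriations"] * states["year"].map(cpi_to_2014)

# Divide by statewide FTE enrollment net of medical students
fte = states["fte_enrollment"] - states["medical_fte"]
states["approp_per_fte"] = states["approp_real"] / fte
states["need_aid_per_fte"] = states["need_based_aid"] / fte
```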
As research has shown that the number of college-age residents in a state can influence enrollment and completion rates in higher education (Bound & Turner, 2007; Bound et al., 2010; Volkwein & Tandberg, 2008), this study included a control variable for the total college-age population in the state. A few variables were controlled for only in select models. It can be assumed that the size of a state's underserved minority population would influence the number of underserved minority students enrolled. Therefore, for the models using the percent of students who identify as underserved minorities as the dependent variable, the state underserved minority population size was used as a control.

In addition to the transformation noted above, variables were transformed or created in the following ways. First, state appropriations was adjusted for inflation using the Consumer Price Index for the 2014 year before being divided by FTE enrollment. Second, for the years in which admission rate was not available but applicant and admission numbers were, admission rate was calculated by dividing the total number of students admitted by the total number of applicants and multiplying by 100. This calculation was also performed for the years in which admission rate was available to confirm its accuracy, and no difference was noted for the years in which both the calculated and reported admission rates were available. Therefore, for the years missing admission rates, the calculated admission rate was used to decrease the number of missing values.

Analytical Method

This study examined the effect of performance funding policies on the enrollment of underserved students through a linear regression analysis utilizing OLS-based panel techniques. Panel data refers to repeated observations of the same research unit (in this case, institutions) over a period of time. Panel methods are applicable to questions about systematic change over time. Specifically, panel analysis allows a researcher to explore temporal patterns and identify whether any abrupt shifts occurred (Singer & Willett, 2003). Utilizing panel analysis therefore allowed the researcher to directly explore whether performance funding policies influence the enrollment of underrepresented students over time.

Utilizing a panel dataset provides numerous benefits and is more reliable than cross-sectional data analysis. First, panel data controls for individual heterogeneity by allowing the researcher to control for state and time invariant variables (Baltagi, 1995). Second, panel data reduces collinearity among the variables, since the researcher is able to differentiate between variability between institutions and variability within institutions (Baltagi, 1995). Finally, panel data is beneficial for policy analysis, as it allows for the study of lags in behavior following a policy change (Wooldridge, 2013).

While this study controlled for the variables mentioned above, it is possible that unobserved state factors also contributed to the enrollment and completion of underrepresented students; for instance, a state's historic culture could promote participation in higher education. In order to account for this in the analysis, state fixed effects were included to account for unobserved heterogeneity within states. Additionally, national policies or events could affect all institutions at specific points in time yet be unobserved in the study; for instance, the Great Recession could influence higher education attendance.
In order to account for this in the analysis, year fixed effects were also included. Finally, unobserved institutional factors, such as an institution being located in a desirable place to live, could also influence the analysis; therefore, institutional fixed effects were included as well. As previous research has shown that the number of years since implementation can influence the effect of performance funding (Tandberg & Hillman, 2014), state-specific policy time variables were created to account for the number of years since policy implementation, with the first year of implementation being year one. For instance, since New Mexico began its performance funding policy in 2007, all institutions in New Mexico have a policy time value of zero (0) for the years 2000-2006, then a one (1) for 2007, a two (2) for 2008, a three (3) for 2009, and so on. By comparison, since Pennsylvania began its performance funding policy in 2000, all institutions in that state have a policy time value of one (1) for 2000, a two (2) for 2001, a three (3) for 2002, and so on. This allows the researcher to explore whether the effect of performance funding varies over time.

Models

Several models were estimated to answer the research questions. First, to answer research question #1, "Over time, are there unintended consequences from performance funding policies related to underrepresented students' enrollment at public four-year institutions?" models were run with four different dependent variables. For research question 1a, models were run with the average amount of Pell Grant and the percent of FTFT students who receive Pell Grants as the dependent variables. For research question 1b, models were run with both the total number of underserved minority students and the percent of FTFT students who identify as underserved minority students as the dependent variables. Since the effects of a policy change may not be seen until a few years after the policy goes into effect (Hillman, Tandberg, & Fryar, 2014), models were calculated with one-, two-, and three-year lags. Further, models with both a binary performance funding variable and a performance funding count variable were used. As outlined above, the models included controls for institutional and state variables that may influence the dependent variable, as well as institution, state, and year fixed effects. A Hausman test was run to determine whether a fixed effects or random effects model was most appropriate, and a fixed effects model was selected for all models.

Next, to answer research question #2, "Over time, does institutional type influence the effect of performance funding on underrepresented students' enrollment at public four-year institutions?" models were run that included interaction effects for both Carnegie classification and admission rate with performance funding. Models were run for four dependent variables: average Pell Grant awarded, percent of FTFT students who receive Pell Grants, total FTFT underserved minority enrollment, and percent of FTFT students who identify as underserved minorities. These models were run with both a binary performance funding variable and a performance funding count variable. As outlined above, the models included controls for institutional and state variables that may influence the dependent variable, as well as institution, state, and year fixed effects.
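For readers who want to see the estimation strategy in code, the following is a minimal sketch using Python's linearmodels package; the data file and variable names are hypothetical stand-ins, and only a subset of the controls described above is shown.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical long-format panel indexed by institution (entity) and year (time)
df = pd.read_csv("analysis_panel.csv").set_index(["unitid", "year"])

# Fixed effects model for research question 1a: EntityEffects and TimeEffects
# correspond to the institution and year fixed effects described above
mod = PanelOLS.from_formula(
    "pell_pct ~ 1 + pf_binary + inst_aid + tuition_lag1"
    " + approp_pct_revenue + need_aid_per_fte"
    " + EntityEffects + TimeEffects",
    data=df,
)
res = mod.fit(cov_type="robust")  # robust standard errors
print(res.summary)
```

Note that in this sketch the institution effects also absorb time-invariant state differences, which is why separate state effects are not listed in the formula.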
A Hausman test was run to determine whether a fixed effects or random effects model was most appropriate, and a fixed effects model was selected for all models. Where significant findings occurred, tests of robustness were run and are included in the findings below or in an appendix.

When using panel data, a few issues may arise and need to be addressed. First, panel data can present serial correlation, where one observation for an institution is dependent on the preceding year's observation at that institution. Serial correlation can bias results, producing artificially small standard errors and an inflated R-squared. To account for this, a Wooldridge (2002) test was conducted, and no serial correlation was found.

Limitations

Several limitations of this study deserve mention. First, this study focuses exclusively on four-year institutions. Although many performance funding policies affect two-year institutions, they are excluded from this analysis because of the variance between the two sectors. Future research should explore whether performance funding influences the enrollment of underrepresented students at public two-year institutions. Second, this study focuses on FTFT students for the dependent variables, which excludes students who enroll part time or transfer into the institution. As many underrepresented students attend part time, this could bias the results. However, enrollment figures for all undergraduates by Pell Grant receipt and underserved students of color are not available for all years of the study.

Another limitation is the change in racial categories in IPEDS. In particular, the category "two or more races," which was added in 2008-2009 (and was optional until 2010-2011), presents a challenge. Students who identify as two or more races under the new categories may previously have identified themselves in an underrepresented student of color category, which could artificially lower the dependent variables for research question 1b, concerning the influence of performance funding on the enrollment of underserved minority students. As data are available without this category until 2010, students who identify as "two or more races" were excluded, with recognition of the aforementioned bias.

A further limitation is the inability to account for variance across performance funding models, specifically their varying strengths. As noted above, performance funding models account for anywhere from 5% to 100% of total state appropriations. It can be assumed that institutions that receive 100% of their state appropriations through performance funding may respond differently to the policy than those that receive only a small percentage of their state appropriations through performance funding. However, no reliable data source tracks this information, and many state websites and legislative histories did not include it. Therefore, this study was not able to control for the strength of performance funding. Similarly, in recent years a few states have included metrics within their performance funding policies that focus on underserved student enrollment; however, as many of these metrics went into effect only recently, they were not included in this study. Finally, the performance funding policies analyzed in this study are largely performance funding 1.0 policies, with limited performance funding 2.0 policies.
As performance funding 2.0 policies are still relatively new, not enough years of data exist to include most of these policies. As data become available, research should explore whether performance funding 2.0 policies have effects on the enrollment of underrepresented students similar to those of performance funding 1.0 policies.

Chapter 4

This chapter presents descriptive statistics and quantitative fixed effects panel analysis findings for the following research questions:
1) Are there unintended consequences from performance funding policies related to underrepresented students' enrollment at public four-year institutions?
   a. Does performance funding have an influence on the enrollment of Pell Grant eligible students?
   b. Does performance funding have an influence on the enrollment of underserved minority students?
2) Does institutional type influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   a. Does Carnegie classification influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   b. Does selectivity influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
First, the descriptive statistics for the variables in the study are presented, including any differences between institutions in performance funding states and institutions without performance funding. Then the panel analysis results for each of the research questions are presented in turn. Discussion of the results and their implications is presented in Chapter 5.

Descriptive Statistics

Institutions were included in the sample if they were listed as public, four-year, degree-granting institutions for all 15 years of the study. However, as performance funding policies focus on undergraduate education, institutions were removed if they were majority graduate-serving institutions. This included institutions listed as medical schools or schools of law under the Carnegie classification, as well as institutions listed as majority graduate/professional under the Carnegie enrollment variable. The final result is a sample of 513 institutions across all 50 states.

Table 3 provides a summary of all variables included in this study for all institutions over the 15 years. The mean, standard deviation, minimum value, and maximum value are included for all variables. As some variables are skewed (see specifics below), a median is presented for all variables as well. Skewness is a measure of the symmetry of a data set: if a variable is highly skewed, its data points are unbalanced, with a majority falling on one side, and a few large outliers or clumping of the data can pull the mean away from the center of the distribution. The median is therefore the preferred measure of central tendency for highly skewed variables. For the purposes of this study, a skewness greater than 1 or smaller than -1 was considered highly skewed.

Table 3 demonstrates notable differences across the institutions and years in this study. For instance, states funded higher education at significantly different rates. State appropriations per FTE (inflation adjusted to 2014 dollars) is measured in thousands of dollars and ranged from 1.892 to 17.152, with a mean of 7.260. Additionally, state need-based financial aid per FTE (measured in thousands of dollars) ranged from 0 to 1.867, with a mean of 0.574. Population variables also demonstrated diversity across all institutions and years.
To start, the state college-age population (presented in ten-thousands) was positively skewed (skewness of 1.90) and ranged from 5.632 to 749.205, with a median of 107.777. The number of underserved minority residents aged 18-24 (calculated in thousands) was also strongly positively skewed (skewness of 2.06) and ranged from 1.771 to 2,165.958, with a median of 183.687. Additionally, this variable had a large variance, with a standard deviation of 503.742, demonstrating wide variety in the underserved minority population aged 18-24 within the dataset. Similarly, the percent of residents aged 18-24 who identify as underserved minorities was positively skewed (skewness of 1.13) and ranged from 1.51% to 67.52%, with a median of 18.18%.

Table 3
Summary Statistics for All Institutions Across All Years

Variable                                 Median       Mean         SD          Min         Max
State Need Based Aid                      0.464        0.575       0.461        0.00        1.867
Appropriations per FTE                    7.151        7.260       1.833        1.892      17.152
College Age Population                  107.777      161.592     160.083        5.632     749.205
College USM Population                  183.687      375.063     503.743        1.771   2,165.958
College USM Percent                      18.18        20.29       13.11         1.51       67.52
Tuition                                   5.300        5.755       2.630        1.466      22.997
Institutional Aid Amount                  2.846        3.257       1.935        0.00       15.362
Percent Pell Grants                      36.00        38.08       16.33         8.00       92.00
Pell Grant Amount                     4,078.00     3,974.78      587.51     1,146.00    6,955.00
USM Percent                              15.48        24.79       24.67         0.00       99.68
USM Total                               230.00       371.87      420.56         0.00    3,481.00
Appropriations as Percent of Revenue     34.00        34.25       10.78         0.00       81.41
Admission Rate                           71.00        69.25       17.29         7.00      100.00

The institutional level financial variables also differed across the institutions and years in this study. Published in-district tuition (lagged one year and calculated in thousands of dollars) was slightly positively skewed (skewness of 1.08) and ranged from 1.466 to 22.997, with a median of 5.300. To offset increasing tuition rates, institutions often provide institutional financial aid. The average amount of institutional financial aid provided (measured in thousands of dollars) was positively skewed (skewness of 1.51) and ranged from 0 to 15.362, with a median of 2.846. Pell Grant aid presented similar patterns. The percent of students who received Pell Grants ranged from 8% to 92%, with a mean of 38.08%. The average amount of Pell Grant awarded to these students ranged from 1,146 to 6,955, with a mean of 3,974.78. For underserved student enrollment, the percent of underserved minority FTFT students enrolled was positively skewed (skewness of 1.76) and ranged from 0% to 99.68%, with a median of 15.48%. The total number of underserved minority students enrolled FTFT was also positively skewed (skewness of 2.34), with a range from 0 to 3,481 and a median of 230.

The extent to which institutions relied on state appropriations for their revenue varied greatly. State appropriations as a percent of total institutional revenue ranged from 0% to 81.41%, with a mean of 34.25%. Institutional admission rates in this study spanned open access institutions and highly selective institutions. Overall, the mean admission rate was 69.25%, with the most selective institution admitting 7% of applicants and several institutions admitting all applicants (a 100% admission rate).

While the variety in values across all institutions is important to note, of particular interest for this study is whether any of the variables are notably different between institutions subject to performance funding and those without performance funding.
Table 4 compares institutions with performance funding and institutions without performance funding on the variables used in this study. If an institution was subject to performance funding for only a portion of the study period (for instance, 2000-2008), it was included in the performance funding group for the years it was under performance funding (2000-2008) and in the non-performance funding group for the years it was not (2009-2014).

Some important distinctions in the state level variables should be noted. First, with regard to institutional funding, two complementary trends exist. Institutions with performance funding received slightly more need-based state financial aid per FTE (median of 0.622, compared to 0.426 for institutions without performance funding). Additionally, institutions with performance funding had lower levels of state appropriations per FTE (mean of 6.690 for institutions subject to performance funding versus 7.502 for institutions without). Together, these figures suggest that states with performance funding directed more funding to need-based financial aid, whereas states without performance funding relied more on direct appropriations to institutions.

Table 4
Descriptive Statistics by Performance Funding

                                          With Performance Funding        Without Performance Funding
Variable                                  Median      Mean       SD       Median      Mean       SD
State Need Based Aid                       0.622      0.764     0.533      0.426      0.494     0.400
Appropriations per FTE                     6.609      6.690     1.591      7.395      7.502     1.875
College Age Population                   114.557    152.795   110.156    101.736    165.322   176.927
College USM Population                   197.343    284.772   305.264    173.522    413.342   562.972
College USM Percent                       18.16      19.69     12.64      18.20      20.54     13.29
Tuition                                    5.919      6.484     2.806      5.010      5.439     2.484
Institutional Aid Amount                   2.966      3.374     1.999      2.803      3.207     1.904
Percent Pell Grants                       38.00      39.56     16.24      34.00      37.32     16.33
Pell Grant Amount                      4,096.00   4,008.57    559.38   4,071.00   3,957.29    600.92
USM Percent                               14.81      23.12     22.80      15.81      25.50     25.40
USM Total                                214.00     316.35    342.84     237.00     395.77    447.81
Appropriations as Percent of Revenue      32.76      32.59      9.68      35.00      34.88     11.10
Admission Rate                            73.00      70.17     17.77      70.41      68.86     17.07

Significant differences in state population also existed between institutions with and without performance funding. Institutions with performance funding resided in states with larger college-age populations and underserved minority college-age populations (medians of 114.557 and 197.343, respectively) compared to institutions without performance funding (medians of 101.736 and 173.522, respectively). Because of this difference in overall numbers, the percent of 18-24 year olds who identify as underserved minorities was also calculated. This variable varied little between institutions with and without performance funding (median of 18.16 for institutions subject to performance funding and 18.20 for institutions without). Together, these measures imply that performance funding is more prevalent in states with larger 18-24 populations; however, racial diversity is similar across performance funding and non-performance funding institutions.

Some noteworthy differences in institutional level variables emerged when institutions were separated into those with and without performance funding. Published in-district tuition rates were significantly higher at institutions with performance funding (mean of 6.484) than at institutions without performance funding (mean of 5.439).
There were no significant differences in the average amount of institutional aid between institutions subject to performance funding and those without: institutions with performance funding provided an average of 3.374 in financial aid to students, while institutions without performance funding provided an average of 3.207. The percent of students who received Pell Grants was slightly higher at institutions with performance funding (39.56%) than at institutions without performance funding (37.32%). The average amount of Pell Grant awarded showed a negligible difference between institutions with and without performance funding (4,008.57 and 3,957.29, respectively).

Overall, variables related to institutional cost differed little apart from published in-district tuition rate, which was notably higher at institutions with performance funding. This difference in tuition price without a corresponding difference in financial aid implies that students may pay a higher net price at institutions in performance funding states. Enrollment variables had some variance but were broadly similar. Underserved minority FTFT student enrollment was slightly lower at institutions with performance funding (316.35) than at institutions without (395.77). However, the percent of FTFT students who identified as underserved minorities was more similar, with a mean of 23.12% for institutions with performance funding and 25.50% for institutions without. Institutions without performance funding were modestly more reliant on state appropriations: institutions with performance funding received 32.59% of their revenue from state appropriations, whereas institutions without performance funding received 34.88% of their revenue from state appropriations on average.

In sum, the descriptive statistics established meaningful variance across institutions and years for the variables included in this study. These distinctions demonstrate the importance of including the variables beyond their already established theoretical significance. Additionally, significant differences between institutions with and without performance funding are present, including in some of the variables of interest, signifying the importance of exploring whether performance funding exerts an influence on the dependent variables.

Performance Funding and Underserved Student Enrollment

This section presents the major results for research question one. In order to examine the influence of performance funding on underserved student enrollment, a panel analysis using OLS regression was employed. A Hausman test was performed, and the null hypothesis was rejected for all models. Rejection of the null hypothesis indicates that the fixed effects model is consistent and therefore preferred. Robust standard errors were used for all models to address the potential of correlated standard errors. Institutional and year fixed effects were included in all models. For each dependent variable, the analysis was performed using performance funding as a binary variable (1 if the state had performance funding, 0 if it did not) and as a count variable in which each year of performance funding was counted (i.e., 1 the first year, 2 the second year, and so on). If performance funding was in place when the study began, the count continued from the first year of the policy; for instance, if performance funding started in 1998, then the count variable for 2000 would be 3. As the effects of policy change can often take time to appear, models with one-, two-, and three-year lags were calculated.
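A minimal sketch of how the binary, count, and lagged performance funding variables might be constructed in pandas is shown below. The policy start years are drawn from Table 1, while the file and column names are hypothetical; for simplicity, the sketch assumes a single uninterrupted policy spell per state, so multi-spell states such as Kansas would need explicit year sets instead.

```python
import pandas as pd

# Hypothetical institution-year panel and an illustrative subset of start years
df = pd.read_csv("analysis_panel.csv")
policy_start = {"New Mexico": 2007, "Pennsylvania": 2000}  # from Table 1
start = df["state"].map(policy_start)

# Binary indicator: 1 in years the policy is active, 0 otherwise
df["pf_binary"] = (df["year"] >= start).astype(int)

# Count variable: years since implementation (a 1995 policy is coded 6 in 2000)
df["pf_count"] = (df["year"] - start + 1).clip(lower=0).fillna(0).astype(int)

# Lagged versions shift each institution's indicator forward in time
df = df.sort_values(["unitid", "year"])
for lag in (1, 2, 3):
    df[f"pf_binary_lag{lag}"] = df.groupby("unitid")["pf_binary"].shift(lag)
```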
Pell Grant Variables

Tables 5 and 6 present the results for research question 1a: Does performance funding have an influence on the enrollment of Pell Grant eligible students? This study explored the question through two dependent variables: 1) the percent of FTFT students who receive Pell Grants, and 2) the average amount of Pell Grant awarded to students. These present complementary views of the enrollment of Pell Grant students: the first captures any changes an institution may make in the number of Pell Grant students it enrolls, while the latter captures the type of Pell Grant student enrolled. Pell Grants are awarded at various levels based on financial need, with larger Pell Grants signaling greater need. In 2011 (the middle year of available data), Pell Grants were awarded in amounts between $555 and $5,550. Therefore, changes in the amount of Pell Grant awarded can signal a change in the level of need among enrolled Pell Grant students (Kelchen & Stedrak, 2016). For example, an increase in the average amount awarded would indicate that the Pell Grant students enrolled that year had greater financial need and therefore received more Pell Grant aid. As Pell Grant data are not available in IPEDS until 2008, these results cover only the years 2008-2014.

Pell Grant percent. Table 5 presents the findings for the percent of FTFT students who receive Pell Grants, using a binary performance funding variable. Across all models, no statistically significant relationship was present between performance funding and the percent of Pell Grant students enrolled FTFT. Additionally, the 95% confidence intervals spanned positive and negative values, meaning the direction of any effect that might occur cannot be determined. The only variables found to affect the percent of Pell Grant students enrolled were institutional financial aid amount and tuition rate. Institutional financial aid amount was significant at the p<.001 level across all models; however, the effect was small. For every $1,000 increase in institutional aid, there is an expected decrease of 0.56% in FTFT Pell Grant recipients. Using the overall mean number of FTFT students (1,742), this translates to an expected decrease of about 10 Pell Grant students for every $1,000 increase in institutional aid. Published in-district tuition rate for the prior year was significant at the p<.05 level across all models, and again the effect was small. For every $1,000 increase in tuition, there is an expected 0.64% increase in FTFT Pell Grant students enrolled, which, using the study mean, translates to an expected increase of 11 Pell Grant students enrolled FTFT. In sum, the variables that presented the strongest effects were cost-based variables (tuition rate and institutional financial aid amount), implying that the cost of higher education has the strongest effect on enrolling Pell Grant eligible students. This is unsurprising, as Pell Grant receipt is a proxy for low-income status, and low-income students may not be able to afford high-cost institutions. Of note, no state variables were statistically significant predictors of the percent of FTFT students who receive Pell Grants.

Little difference was found across models with different lags. The R-squared value shows that about 62% of the variance in the percent of Pell Grant students enrolled FTFT can be explained by the variables in the model. Models were also estimated utilizing a performance funding count variable, which produced results similar to those of the binary performance funding models.
Results for the performance funding count models are included in the appendix.

Table 5
Results for the Effects of Performance Funding on the Pell Grant Percent

Variable                        No lag        1 year lag    2 year lag    3 year lag
Performance Funding (Binary)     0.233         0.300         0.203         0.294
                                (0.292)       (0.316)       (0.356)       (0.452)
Institutional Aid Amount        -0.562***     -0.563***     -0.562***     -0.564***
                                (0.111)       (0.110)       (0.110)       (0.110)
Tuition                          0.643*        0.643*        0.638*        0.637*
                                (0.295)       (0.295)       (0.295)       (0.296)
Appropriations as Percent       -0.046        -0.046        -0.046        -0.046
of Revenue                      (0.031)       (0.031)       (0.031)       (0.031)
State Need Based Aid             0.505         0.521         0.493         0.508
                                (0.686)       (0.691)       (0.687)       (0.687)
College Age Population          -0.003        -0.003        -0.003        -0.003
                                (0.003)       (0.003)       (0.003)       (0.003)
Appropriations per FTE          -0.104        -0.093        -0.093        -0.089
                                (0.229)       (0.229)       (0.229)       (0.229)
Affirmative Action Ban          -1.604*       -1.508        -1.52         -1.52
                                (0.806)       (0.808)       (0.803)       (0.808)
R-squared (within)               .6216         .6216         .6215         .6216
n                                3324          3324          3324          3324

Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Pell Grant amount. As no effect was found on the percent of FTFT students who received Pell Grants, this study next explored the average amount of Pell Grant students received. Table 6 presents the results of this analysis for a binary performance funding variable. Performance funding was found to have no statistically significant effect on the average amount of Pell Grant awarded to FTFT students. Additionally, all of the 95% confidence intervals spanned positive and negative values, meaning the direction of any effect that might occur cannot be determined.

Three variables influenced the average amount of Pell Grant awarded at a statistically significant level: tuition rate, the percent of revenue that came from state appropriations, and the college-age population in the state. Published in-district tuition rate for the prior year was significant at the p<.05 level for all models. While statistically significant, the effect was small, with every $1,000 increase in tuition leading to a predicted decrease of $23 in average Pell Grant awarded. State appropriations as a percent of total institutional revenue was significant at the p<.05 level. For every 1% increase in the percent of total revenue coming from state appropriations, there was an expected decrease in the average amount of Pell Grant awarded of $2.58 to $2.72, depending on the model. Put another way, institutions that relied more heavily on state appropriations as a source of revenue had lower average Pell Grant amounts. The state college-age population was a statistically significant predictor at the p<.05 level for all models. While statistically significant, the effect was small, with limited practical significance: for each 10,000-person increase in the college-age population, there was an expected decrease of $0.30 in the average amount of Pell Grant awarded. In other words, the college-age population in a state would have to increase by over 32,000 people for an expected $1 decrease in the average Pell Grant awarded. The R-squared value shows that about 85% of the variance in the average Pell Grant awarded to students can be explained by the variables in the model. One notable difference can be seen between the models with different lags: the performance funding coefficient varied from 32.411 to -2.649, with the highest value occurring in the one-year lag model and the lowest in the three-year lag model. However, as noted above, none of these were statistically significant.
Models utilizing a performance funding count variable demonstrated similar results, with two exceptions in statistical significance. Published in-district tuition rate was significant at higher levels for the performance funding count models, with the one- and two-year lag models significant at the p<.01 level and the models with no lag or a three-year lag significant at the p<.05 level. Additionally, the state college-age population was significant at the p<.01 level for the two-year lag model with the performance funding count variable. Full results for the performance funding count variable models are available in the appendix.

Table 6
Results for the Effects of Binary Performance Funding on Pell Grant Amount

Variable                        No lag        1 year lag    2 year lag    3 year lag
Performance Funding (Binary)    26.674        32.411        27.799        -2.649
                               (18.503)      (18.034)      (16.603)      (20.525)
Institutional Aid Amount         3.564         3.403         3.381         3.932
                                (7.532)       (7.534)       (7.585)       (7.621)
Tuition                        -22.697*      -22.751*      -23.366*      -22.722*
                                (9.153)       (9.194)       (9.253)       (9.213)
Appropriations as Percent       -2.680*       -2.577*       -2.611*       -2.722*
of Revenue                      (1.251)       (1.245)       (1.259)       (1.269)
State Need Based Aid            77.622        79.193        76.708        73.248
                               (52.578)      (53.402)      (52.983)      (52.711)
College Age Population          -0.298*       -0.280*       -0.295*       -0.294*
                                (0.119)       (0.116)       (0.117)       (0.118)
Appropriations per FTE           8.308         9.615         9.847         8.031
                                (7.854)       (8.073)       (7.943)       (7.770)
Affirmative Action Ban           7.944        18.688        19.609         7.865
                               (46.583)      (46.84)       (47.669)      (47.418)
R-squared (within)               .8546         .8547         .8546         .8544
n                                3324          3324          3324          3324

Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Taken together, the models described above demonstrate that performance funding had no statistically significant effect on Pell Grant enrollment across all models, variables, and lags. This analysis showed that performance funding had no statistically significant effect on the percent of FTFT students who receive Pell Grants or on the average amount of Pell Grant awarded when not differentiating by institutional type. This finding held true for both the performance funding count variable and the binary performance funding variable, and it was consistent across models with no lag and models with one- to three-year lags. Further, tests of robustness were conducted utilizing the percent of students who received any federal aid and the average amount of federal aid received, to account for the years of missing data. These models also found no statistically significant relationship between performance funding and Pell Grant enrollment.

Underserved Minority Student Variables

Tables 7 through 9 present the results for research question 1b: Does performance funding have an influence on the enrollment of underserved minority students? This study explored the question through two dependent variables: 1) the total number of FTFT students who identify as underserved minority students, and 2) the percent of FTFT students who identify as underserved minority students. To create the underserved minority variables, students who identified as Black/African American, Hispanic/Latino, or Native American/Alaska Native were added together. As data were available across the whole time frame of this study, a fixed effects panel analysis was conducted over the 2000-2014 period.

Total underserved minority students. Table 7 presents the results using a binary performance funding variable.
Through all models, performance funding had no statistically significant effect on FTFT enrollment of underserved minority students. Four variables consistently influenced total FTFT underserved minority enrollment: tuition rate, appropriations as a percent of total revenue, state need-based aid, and the state underserved minority college-age population. Published in-state tuition for the prior year was a statistically significant predictor in all models (p<.01): for every $1,000 increase in tuition, the institution had an expected increase of 14 underserved minority students enrolled FTFT. Appropriations as a percent of revenue was a statistically significant predictor across all models at the p<.001 level: for every 1% increase in the share of revenue that came from state appropriations, the institution had an expected decrease of almost two underserved minority students enrolled FTFT.

State variables also contributed to the enrollment of underserved minority students. The amount of state need-based aid provided per FTE was a statistically significant predictor at the p<.05 level for all but the no-lag model. For every $1,000 increase in state need-based aid per FTE, the institution had an expected increase of 31 underserved minority students enrolled FTFT. The state college-age underserved minority population was a statistically significant predictor of underserved minority enrollment at the p<.001 level across all models. For every 1,000-person increase in underserved minority residents aged 18-24 in the state, there was a corresponding expected increase of 1.3 underserved minority students enrolled at the institution.

The models indicate that the strongest predictors of underserved minority enrollment were financial, for both students and institutions. Tuition rate and need-based aid both increased the number of underserved minority students, indicating that cost is a significant factor in underserved minority enrollment. Institutional reliance on state appropriations (a measure of institutional financial need) was a strong negative predictor of underserved minority enrollment. In other words, institutions that relied heavily on state appropriations had lower predicted rates of underserved minority enrollment, implying that institutional revenue and financial need affect enrollment demographics. The models' R-squared values show that 40% of the variance in the total number of underserved minority students enrolled FTFT is explained by the variables included in the model.

Models utilizing a performance funding count variable produced similar results, with two notable exceptions. First, the performance funding count variable produced negative betas for the effect of performance funding, whereas the models with a binary variable estimated positive betas. Second, tuition rate was significant at the p<.05 level for all performance funding count models. Full results for the performance funding count variable can be found in the appendix.
Table 7
Results for the Effects of Binary Performance Funding on USM Total

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Binary)          7.787 (8.432)      3.900 (8.296)      2.765 (8.666)      2.497 (8.786)
Institutional Aid Amount              -2.564 (3.060)     -2.489 (3.067)     -2.469 (3.068)     -2.464 (3.075)
Tuition                               13.824** (5.193)   13.795** (5.204)   13.782** (5.197)   13.797** (5.208)
Appropriations as Percent of Revenue  -1.744*** (0.498)  -1.756*** (0.498)  -1.761*** (0.498)  -1.762*** (0.498)
State Need Based Aid                  29.570 (15.199)    30.722* (14.922)   30.877* (14.877)   30.914* (14.865)
College Age USM Population            1.297*** (0.139)   1.293*** (0.138)   1.292*** (0.137)   1.291*** (0.137)
Appropriations per FTE                -1.349 (4.183)     -1.470 (4.172)     -1.523 (4.148)     -1.583 (4.086)
Affirmative Action Ban                3.647 (23.053)     4.413 (23.385)     4.430 (23.397)     4.309 (23.333)
R-squared (within)                    .3969              .3967              .3966              .3966
n                                     7016               7016               7016               7016
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Percent underserved minority students. Table 8 presents the results of a binary performance funding variable on the percent of FTFT students enrolled who identify as underserved minorities. Performance funding had a statistically significant negative result at the p<.05 level for the model with no lag and at the p<.01 level for the one year lag model. With no lag, an institution with performance funding had an expected FTFT underserved minority enrollment level 0.64% lower than without performance funding. Using the mean FTFT student enrollment across all institutions in this study (1,742), this translates to about 11 fewer underserved minority students. The one year lag model had an expected decrease of 0.70%, or an expected decrease of 12 students. Models with two and three year lags found no statistically significant relationship between performance funding and percent enrollment of underserved minority students.

Table 8
Results for the Effects of Binary Performance Funding on USM Percent

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Binary)          -0.636* (0.257)    -0.698** (0.267)   -0.477 (0.278)     -0.332 (0.282)
Institutional Aid Amount              -0.318** (0.102)   -0.320** (0.102)   -0.324** (0.102)   -0.325** (0.102)
Tuition                               0.413* (0.181)     0.408* (0.180)     0.412* (0.180)     0.414* (0.180)
Appropriations as Percent of Revenue  -0.076*** (0.020)  -0.076*** (0.020)  -0.076*** (0.020)  -0.076*** (0.020)
State Need Based Aid                  2.444*** (0.555)   2.372*** (0.555)   2.362*** (0.556)   2.364*** (0.555)
College Age USM Percent               0.157*** (0.029)   0.161*** (0.029)   0.161*** (0.029)   0.161*** (0.029)
State Appropriations per FTE          -0.147 (0.171)     -0.146 (0.171)     -0.135 (0.171)     -0.126 (0.171)
Affirmative Action Ban                -0.834 (0.630)     -0.922 (0.635)     -0.931 (0.631)     -0.067 (0.176)
R-squared (within)                    .2895              .2899              .2886              .2880
n                                     7016               7016               7016               7016
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Five variables were statistically significant predictors of the percent of underserved minority students enrolled: institutional aid amount, tuition rate, state appropriations as a percent of revenue, state need based aid, and the state percent 18-24 underserved minority population. The amount of institutional aid provided was a statistically significant predictor at the p<.01 level across all models. For every $1,000 increase in institutional aid, there was an expected decrease of about 0.32% in the percent of underserved minorities enrolled.
Utilizing the mean enrollment in this study, this translates to about six fewer underserved minority students. Published in-state tuition rate was a significant predictor at the p<.05 level across all models. For every $1,000 increase in tuition, the percent of students who identified as underserved minorities would increase by an expected 0.41%, or about seven students. Appropriations as a percent of revenue was significant at the p<.001 level. For every 1% increase in reliance on state appropriations, the institution had an expected decrease in the percent of underserved minorities enrolled of 0.076%, which translates to about one student. Two state variables were statistically significant predictors at the p<.001 level: state need based aid and state percent college age underserved minority population. For every $1,000 increase in state need based aid provided per FTE, there was an expected increase of 2.4% in the percent of underserved minority students enrolled. Converted to students, for every $1,000 increase in state need based aid, there was an expected increase of 35 students. For every 1% increase in the percent of underserved minority college age residents in the state, there was an expected increase of 0.16% in the percent of underserved minority students enrolled, or about three students. The R-squared values demonstrate that about 29% of the variance in the percent of underserved minority students enrolled is explained by the variables included in the models.

Conducting the models with a performance funding count variable reveals similar findings. Table 9 presents the results of a performance funding count variable on the percent of FTFT students enrolled who identify as underserved minorities. Performance funding is found to have a statistically significant negative result across all models at the p<.01 level, except for the three year lag model, which was significant at the p<.05 level. With no lag, for each year of performance funding an institution had an expected decrease in underserved minority enrollment of 0.17%. Using the mean FTFT student numbers across all institutions in this study, this results in an expected decrease of about three students a year. With performance funding lasting an average of five years across all institutions in this study, that translates to an expected decrease of 15 students in the last year. Models with one and two year lags led to similar decreases. The model with a three year lag had an expected decrease in underserved minority enrollment of 0.14% per year, or 2.4 fewer students a year and about 12 students by the end of the average five year length of performance funding.
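The percent-to-headcount conversions used throughout this chapter follow a single pattern. The following is a minimal sketch of that arithmetic; the helper name is hypothetical, and the constants are taken from the text above.

# Minimal sketch of the headcount conversions reported above: a coefficient
# on a percent outcome is scaled by the mean FTFT class size, and count-model
# effects accumulate over the average five-year life of a policy.
MEAN_FTFT = 1742        # mean FTFT enrollment across institutions in this study
AVG_PF_YEARS = 5        # average length of performance funding in the sample

def students_from_pct(coef_pct, mean_enrollment=MEAN_FTFT):
    """Convert a percentage-point effect into an expected student count."""
    return coef_pct / 100 * mean_enrollment

# Binary model, no lag: -0.636 percentage points -> about 11 fewer students
print(round(students_from_pct(-0.636)))                  # -11

# Count model, no lag: -0.175 points per year -> ~3 students a year, ~15 by year 5
per_year = students_from_pct(-0.175)
print(round(per_year), round(per_year * AVG_PF_YEARS))   # -3 -15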
Table 9
Results for the Effects of Performance Funding Count on USM Percent

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Count)           -0.175** (0.058)   -0.173** (0.059)   -0.165** (0.060)   -0.139* (0.059)
Institutional Aid Amount              -0.308** (0.100)   -0.311** (0.100)   -0.315** (0.100)   -0.320** (0.101)
Tuition                               0.389* (0.176)     0.392* (0.176)     0.392* (0.176)     0.397* (0.177)
Appropriations as Percent of Revenue  -0.079*** (0.019)  -0.078*** (0.019)  -0.077*** (0.019)  -0.076*** (0.019)
State Need Based Aid                  2.455*** (0.557)   2.414*** (0.558)   2.394*** (0.556)   2.350*** (0.554)
College Age USM Percent               0.153*** (0.029)   0.155*** (0.029)   0.157*** (0.029)   0.158*** (0.029)
State Appropriations per FTE          -0.155 (0.170)     -0.145 (0.170)     -0.123 (0.169)     -0.107 (0.169)
Affirmative Action Ban                -1.082 (0.635)     -1.122 (0.639)     -0.991 (0.629)     -0.875 (0.621)
R-squared (within)                    .2937              .2932              .2925              .2908
n                                     7016               7016               7016               7016
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Similar to the binary models, the performance funding count models had five variables that were statistically significant predictors of the percent of FTFT students who identified as underserved minorities: institutional aid amount, tuition rate, state appropriations as a percent of revenue, state need based aid, and state percent 18-24 underserved minority population. All had similar significance levels and betas as the binary performance funding models.

To verify the findings, a Tobit analysis was performed as a check of robustness. Since a percent variable is limited to values between 0 and 100, the data in this study is considered left and right censored. Therefore, a double bounded Tobit analysis was performed with a lower limit of 0 and an upper limit of 100 (a sketch of the two-limit Tobit likelihood appears below). Performance funding was a significant predictor at the p<.001 level for most models, with the exception of the binary two year lag model, which was significant at the p<.01 level, and the binary three year lag model, which was significant at the p<.05 level. All models had similar coefficients for both the variables of interest and the control variables; however, the control variables were significant at higher confidence levels. All control variables were significant at the p<.001 level except for state appropriations, which was significant at the p<.05 level for the binary no lag, one year lag, and two year lag models as well as the performance funding count no lag and one year lag models; state appropriations was not a statistically significant predictor in the other models. The Tobit analysis showed only 3 censored observations and 7,013 uncensored observations for all models, implying limited concern with censoring. Since the Tobit analysis does not allow the inclusion of state or year fixed effects (though state and year were included as variables in the Tobit models estimated) or robust standard errors, and since it found so few censored observations, the panel analysis is preferred. Results of the Tobit models are included in the appendix.
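The dissertation does not show the Tobit estimator it used, so the following is only a minimal sketch of a double bounded (two-limit) Tobit log-likelihood under assumed variable names, written to make the censoring logic concrete rather than to reproduce the author's implementation.

# Sketch of a two-limit Tobit negative log-likelihood for a percent outcome
# censored at 0 and 100; y is the outcome vector and X a design matrix that
# includes an intercept, the performance funding measure, and the controls.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X, lower=0.0, upper=100.0):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                # keeps the scale positive
    xb = X @ beta
    at_lo, at_hi = y <= lower, y >= upper
    mid = ~(at_lo | at_hi)
    ll = np.empty(len(y))
    # Left-censored: P(latent y* <= lower); right-censored: P(y* >= upper)
    ll[at_lo] = norm.logcdf((lower - xb[at_lo]) / sigma)
    ll[at_hi] = norm.logsf((upper - xb[at_hi]) / sigma)
    # Uncensored: normal density of the latent index
    ll[mid] = norm.logpdf((y[mid] - xb[mid]) / sigma) - np.log(sigma)
    return -ll.sum()

# fit = minimize(tobit_negloglik, x0=np.zeros(X.shape[1] + 1), args=(y, X))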
In summary, performance funding was not a statistically significant predictor of total underserved minority FTFT enrollment. However, performance funding was found to be a statistically significant predictor of the percent enrollment of underserved minority students in almost all models executed. Performance funding was a consistent negative predictor in all four models calculated with a performance funding count variable and was a negative predictor in both the no lag and one year lag models with the binary performance funding variable; the two and three year lag binary models had negative betas but were not statistically significant. Additionally, using a double bounded Tobit model, performance funding was again a statistically significant negative predictor in all models calculated. These findings provide strong evidence that performance funding influences the enrollment of underserved minority students.

Institutional Type and Performance Funding

This section addresses research question 2: Does institutional type influence the effect of performance funding on underserved students' enrollment at public four-year institutions? To answer this research question, two categorical variables were created: one for Carnegie classification and one for selectivity. As the time period of this study spans three different Carnegie classification codes (2000, 2005, 2010), recoding the variables allows for analysis across all years in the study. The IPEDS Carnegie variable was recoded into a 10-category Carnegie categorical variable (a sketch of this recoding follows Table 10). Three codes cover doctoral institutions: Doctoral A (those listed as "very high research" or "extensive"), Doctoral B (those listed as "high research" or "intensive"), and Doctoral C (those listed as "doctoral/research universities"). Similarly, masters institutions are included in three codes: Masters A (those listed as "Masters I" or "larger programs"), Masters B (those listed as "Masters II" or "medium programs"), and Masters C (those listed as "smaller programs"). Baccalaureate institutions also span three codes: Baccalaureate A (those listed as "general" or "diverse fields"), Baccalaureate B (those listed as "liberal arts" or "arts and sciences"), and Baccalaureate C (those listed as "baccalaureate/associates"). The final category, Special Focus Institutions, includes "schools of art, music and design," "schools of engineering," and "teachers colleges."

Table 10 shows the transition rates for the Carnegie categorical variable. Most institutions did not move across Carnegie classification codes, with between 80.00% and 96.88% staying in their Carnegie classification across all 15 years of this study. In fact, outside of special focus institutions (90.41% staying) and Baccalaureate/Associates Colleges (80.00% staying), most institutions had a staying rate above 93%.

Table 10
Transition Rates Across Carnegie Categorical Variable
[10 x 10 matrix of transition percentages between the ten Carnegie categories (Doctoral A-C, Masters A-C, Baccalaureate A-C, Special Focus); the full matrix could not be cleanly recovered from the source layout. Diagonal (staying) rates range from 80.00% (Baccalaureate C) to 96.88%, with special focus institutions at 90.41%.]
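The following is an illustrative Python sketch of the 10-category recoding described above. The label strings are stand-ins, since the actual IPEDS Carnegie variables are numeric codes that differ across the 2000, 2005, and 2010 editions.

# Hypothetical mapping from Carnegie descriptors to the study's 10 categories;
# real IPEDS data would first translate each edition's numeric codes to labels.
CARNEGIE_MAP = {
    "very high research": "Doctoral A",   "extensive": "Doctoral A",
    "high research": "Doctoral B",        "intensive": "Doctoral B",
    "doctoral/research universities": "Doctoral C",
    "masters i": "Masters A",             "larger programs": "Masters A",
    "masters ii": "Masters B",            "medium programs": "Masters B",
    "smaller programs": "Masters C",
    "general": "Baccalaureate A",         "diverse fields": "Baccalaureate A",
    "liberal arts": "Baccalaureate B",    "arts and sciences": "Baccalaureate B",
    "baccalaureate/associates": "Baccalaureate C",
    "schools of art, music and design": "Special Focus",
    "schools of engineering": "Special Focus",
    "teachers colleges": "Special Focus",
}

def recode_carnegie(label: str) -> str:
    """Collapse an edition-specific Carnegie label into one of the 10 categories."""
    return CARNEGIE_MAP[label.strip().lower()]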
To create the selectivity categorical variable, the IPEDS admission rate variable was transformed into five categories (a sketch follows Table 11). Institutions that admit less than 30% of the students who apply were coded as very highly selective. Institutions with admission rates between 30% and 49.99% were coded as highly selective. Institutions that admit between 50% and 74.99% of the students who apply were coded as selective. Institutions with admission rates between 75% and 84.99% were coded as moderately selective, and institutions that admitted at least 85% of the students who applied were coded as open access.

As can be seen in Table 11, the selectivity categorical variable had higher rates of transfer between categories. Between 62.39% and 79.75% of institutions remained in their selectivity category across all years. Moderately selective and very highly selective institutions remained at the lowest rates, staying 63.21% and 62.39% of the time, respectively. Selective institutions had the highest rate of staying, at 79.75%.

Table 11
Transition Rates Across Selectivity Categorical Variable

                       Open Access  Moderately Selective  Selective  Highly Selective  Very Highly Selective
Open Access            73.64        17.71                 7.03       1.10              .51
Moderately Selective   13.62        63.21                 22.28      .89               0.00
Selective              2.97         9.92                  79.75      6.95              .41
Highly Selective       2.01         2.55                  20.00      71.41             4.03
Very Highly Selective  2.56         0.00                  11.97      23.08             62.39
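As an illustration of the binning just described, the following sketch recodes a hypothetical admission-rate column (in percent) with pandas; the column and function names are assumptions.

# Sketch of the five-category selectivity recoding described above.
import pandas as pd

def code_selectivity(admit_rate_pct: pd.Series) -> pd.Series:
    """Bin admission rates (0-100) into the study's five selectivity levels."""
    bins = [0, 30, 50, 75, 85, float("inf")]
    labels = ["Very Highly Selective", "Highly Selective", "Selective",
              "Moderately Selective", "Open Access"]
    # right=False makes each bin left-closed, so a 30.0% admit rate falls in
    # "Highly Selective" (30-49.99%) and an 85.0% rate in "Open Access".
    return pd.cut(admit_rate_pct, bins=bins, labels=labels, right=False)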
Carnegie Classification

To address research question 2a (Does Carnegie classification influence the effect of performance funding on underserved students' enrollment at public four-year institutions?), eight models were estimated. To start, models exploring how Carnegie classification may influence the effect of performance funding on the Pell Grant dependent variables were calculated with both performance funding binary and count variables. Then models were executed to explore how Carnegie classification may influence the effect of performance funding on the underserved minority enrollment variables. Each is presented in turn below. All models included the same controls and fixed effects as the main models. As the control variables in these models had similar betas and significance levels as the main models, only the interaction effects are presented below.

Carnegie classification and Pell Grant variables. To explore if the effect of performance funding on Pell Grant variables varies by Carnegie classification, models were estimated with an interaction effect between Carnegie classification and performance funding, in both binary and count form (an illustrative sketch of this interaction setup appears at the end of this subsection). Models were estimated with both the average Pell Grant amount and the percent of FTFT students who receive Pell Grants as the dependent variable. The results are presented in Table 12, with Doctoral A institutions (those classified as very high research or research extensive) without performance funding as the comparison group.

Utilizing a performance funding binary variable, only one Carnegie classification was a statistically significant predictor of the average amount of Pell Grant awarded. Doctoral A institutions that were subject to performance funding had an expected average Pell Grant $75.01 lower (significant at the p<.05 level). Utilizing a count variable for performance funding, only special focus institutions with performance funding were a statistically significant predictor, at the p<.001 level. These institutions had a predicted average Pell Grant amount that was $64.41 lower.

For the percent of FTFT students who received Pell Grants, two Carnegie categories were statistically significant predictors with the binary performance funding variable, and three were statistically significant predictors with the performance funding count variable. For the binary performance funding variable, Doctoral A institutions were a statistically significant predictor at the p<.05 level and special focus institutions at the p<.001 level. Doctoral A institutions subject to performance funding had a predicted enrollment of Pell Grant students that was 1.47% lower; utilizing the mean enrollment for Doctoral A institutions (4,084), this translates to about 60 fewer students. Special focus institutions with performance funding had a predicted enrollment of Pell Grant students 4.85% higher, or, utilizing their mean enrollment (305), about 15 students.

Table 12
Results of Performance Funding Interacted with Carnegie on Pell Grant Variables

                            Performance Funding Binary              Performance Funding Count
                            Pell Grant Amount   Percent FTFT Pell   Pell Grant Amount   Percent FTFT Pell
Doctoral A                  -75.014* (29.614)   -1.472* (0.718)     -3.367 (5.727)      -0.481* (0.203)
Doctoral B                  22.879 (93.265)     0.174 (1.164)       13.187 (7.629)      -0.104 (0.139)
Doctoral C                  -47.094 (105.941)   -0.443 (2.506)      6.642 (4.515)       0.304** (0.107)
Masters A                   -199.416 (121.230)  -3.245 (2.698)      -0.939 (4.509)      -0.024 (0.099)
Masters B                   -189.393 (128.129)  -3.200 (2.893)      1.769 (5.658)       -0.162 (0.143)
Masters C                   -276.822 (140.783)  -4.694 (3.069)      -11.923 (7.501)     -0.221 (0.125)
Baccalaureate A             -198.222 (147.792)  -3.438 (3.104)      -4.739 (13.360)     0.215 (0.151)
Baccalaureate B             -33.013 (162.335)   -3.495 (3.304)      56.631 (46.774)     0.115 (0.366)
Baccalaureate C             n/a                 n/a                 n/a                 n/a
Special Focus Institutions  104.311 (95.876)    4.849*** (1.225)    -64.405*** (4.867)  -1.439*** (0.149)
R-squared                   .8565               .6247               .8558               .6255
n                           3322                3322                3322                3322
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001
Note: no institutions classified as Baccalaureate C are located in performance funding states; therefore those results are blank.

Utilizing a performance funding count variable, three Carnegie categories were statistically significant predictors of the percent of FTFT students who receive Pell Grants. Doctoral A institutions were significant at the p<.05 level and had predicted enrollments of Pell Grant students 0.48% lower for each year of performance funding; utilizing mean enrollment, this translates to 20 fewer students each year, or about 98 fewer students in the last year of the average five year length of performance funding. Doctoral C institutions were significant at the p<.01 level and had predicted Pell Grant enrollment 0.30% higher for each year of performance funding, or, utilizing their enrollment mean (1,879), about 6 students per year and 28 students by the end of the average five year length of performance funding. Finally, special focus institutions were significant at the p<.001 level and had expected Pell Grant enrollment 1.44% lower per year, or, utilizing their mean enrollment, about 4 fewer students per year and 23 by the end of the average length of performance funding.
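The dissertation does not show its estimation code, so the following is only an illustrative sketch of how a performance funding by Carnegie interaction like the one above could be set up, continuing the hypothetical panel dataframe and column names from the earlier sketch.

# Sketch: interact performance funding with the 10-category Carnegie variable
# so each category gets its own performance funding coefficient, relative to
# institutions without performance funding, with two-way fixed effects.
import pandas as pd
from linearmodels.panel import PanelOLS

CONTROL_COLS = ["tuition", "inst_aid", "approp_pct_rev", "state_need_aid",
                "college_age_pop", "approp_per_fte", "aa_ban"]  # assumed names

dummies = pd.get_dummies(panel["carnegie10"], prefix="pf_x")   # one per category
interactions = dummies.mul(panel["pf_binary"], axis=0)         # PF x category
X = pd.concat([interactions, panel[CONTROL_COLS]], axis=1)

fit = PanelOLS(panel["pell_pct"], X, entity_effects=True,
               time_effects=True).fit(cov_type="robust")
print(fit.params.filter(like="pf_x"))   # interaction terms only, as in Table 12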
Carnegie classification and underserved minority student variables. To explore if the effect of performance funding on underserved minority student enrollment varies by Carnegie classification, models were estimated with an interaction effect between Carnegie classification and performance funding, in both binary and count form. Models included both the total underserved minority population and the percent of the FTFT class that identified as underserved minority students as dependent variables. Table 13 presents the results from these models, with Doctoral A institutions without performance funding as the comparison group.

Utilizing an interaction between a binary performance funding variable and Carnegie classification, two Carnegie categories were statistically significant predictors of total underserved minority enrollment. Doctoral A institutions with performance funding had a predicted enrollment of about 64 more underserved minority students (significant at the p<.05 level). Doctoral B institutions (those classified as high research activity or research intensive) with performance funding had a predicted enrollment of underserved minority students almost 99 students higher (significant at the p<.001 level).

Table 13
Results of Performance Funding Interacted with Carnegie on USM Variables

                            Performance Funding Binary           Performance Funding Count
                            USM Total           USM Percent      USM Total          USM Percent
Doctoral A                  63.624* (28.551)    -0.213 (0.644)   4.406 (3.819)      -0.061 (0.094)
Doctoral B                  99.098*** (26.704)  0.829 (0.604)    4.597 (2.807)      -0.024 (0.069)
Doctoral C                  53.460 (52.291)     1.513 (1.157)    4.142* (2.022)     -0.015 (0.082)
Masters A                   62.860 (60.697)     1.132 (0.943)    -2.909 (1.861)     -0.221** (0.081)
Masters B                   16.620 (63.440)     0.054 (1.083)    -4.785** (1.645)   -0.287*** (0.082)
Masters C                   -0.249 (64.913)     -0.870 (1.644)   -6.785** (2.546)   -0.369 (0.206)
Baccalaureate A             34.455 (63.761)     1.464 (1.282)    -1.489 (1.887)     -0.158 (0.141)
Baccalaureate B             16.090 (71.082)     0.350 (1.725)    -4.818 (4.531)     -0.058 (0.195)
Baccalaureate C             n/a                 n/a              n/a                n/a
Special Focus Institutions  100.568 (60.809)    -0.806 (4.558)   -2.841 (4.851)     -0.839 (0.841)
R-squared                   .4119               .2996            .4009              .2969
n                           6992                6992             6992               6992
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001
Note: no institutions classified as Baccalaureate C are located in performance funding states; therefore those results are blank.

Utilizing an interaction between a performance funding count variable and Carnegie classification resulted in three statistically significant predictors of total underserved minority enrollment: Doctoral C, Masters B, and Masters C institutions. Doctoral C institutions were significant at the p<.05 level and had a predicted underserved minority enrollment about 4 students higher per year of performance funding, or about 20 students by the end of the average length of performance funding. Masters B institutions (those classified as medium programs or Masters II) with performance funding had a predicted enrollment of underserved minority students almost five students lower for each year of performance funding, or 25 students by the end of the average length of the policy (significant at the p<.01 level). Masters C institutions had a predicted enrollment of underserved minority students almost seven students lower for each year of performance funding, or 35 students in the last year of the average length of the policy (significant at the p<.01 level).
Exploring the effect of the interaction between performance funding count and Carnegie classification on underserved minority percent produced similar results, with two statistically significant predictors: Masters A institutions (p<.01) and Masters B institutions (p<.001). Masters A institutions (those classified as larger programs or Masters I) with performance funding had predicted enrollment of underserved minority students 0.22% lower for each year of performance funding, or, utilizing mean enrollment (1,433), about three fewer students a year and 15 fewer in the last year of the average length of performance funding. Masters B institutions with performance funding had predicted enrollment of underserved minority students 0.29% lower for each year of performance funding, or, using mean enrollment (880), about 2.5 fewer students a year and 12 fewer students in the last year of the average length of performance funding. The interaction between binary performance funding and Carnegie classification produced no statistically significant predictors of underserved minority percent.

Overall, the models above demonstrated that Carnegie classification might influence the effect of performance funding on underserved student enrollment. The percent of Pell Grant students enrolled varied by institutional type, with two Carnegie categories statistically significant for the binary performance funding interaction and three for the performance funding count interaction. Utilizing the binary performance funding interaction term, Doctoral A institutions with performance funding had predicted enrollment of Pell Grant students lower than the reference group, and special focus institutions with performance funding had higher predicted enrollment of Pell Grant students. Utilizing the performance funding count interaction, Doctoral A institutions again had lower predicted enrollment of Pell Grant students per year, while Doctoral C institutions had higher, and special focus institutions lower, predicted enrollment of Pell Grant students per year of performance funding.

Institutions again demonstrated divergent responses to performance funding in regards to enrollment of underserved minority students. The binary performance funding interaction resulted in two Carnegie classifications with statistically significant enrollment patterns: both Doctoral A and Doctoral B institutions had higher predicted total enrollment of underserved minority students. Utilizing the performance funding count interaction, three Carnegie classifications had enrollment patterns statistically different from the reference group: Doctoral C institutions had higher predicted total underserved minority enrollment per year, while Masters B and Masters C institutions both had lower predicted total underserved minority enrollment per year. Similarly, Masters A and Masters B institutions with performance funding had a lower predicted percent of underserved minority students enrolled for each year of performance funding. Taken together, these results revealed that institutions may respond differently to performance funding based on Carnegie classification.

Selectivity

To answer research question 2b (Does selectivity influence the effect of performance funding on underserved students' enrollment at public four-year institutions?), eight models were estimated. First, models exploring how selectivity may influence the effect of performance funding on the Pell Grant dependent variables were calculated with performance funding binary and count variables.
Then models were executed to explore how selectivity may influence the effect of performance funding on the underserved minority enrollment variables, again with both binary and count performance funding variables.

Selectivity and Pell Grant variables. To explore if the effect of performance funding on Pell Grant variables varies by selectivity, models were estimated with an interaction effect between a categorical selectivity variable and performance funding, in both binary and count form. Models were estimated with both Pell Grant amount and Pell Grant percent as the dependent variable. The results are presented in Table 14, with open access institutions without performance funding as the comparison group.

Table 14
Results of Performance Funding Interacted with Selectivity on Pell Grant Variables

                       Performance Funding Binary             Performance Funding Count
                       Pell Grant Amount   Percent FTFT Pell  Pell Grant Amount  Percent FTFT Pell
Open Access            50.060 (35.005)     -0.325 (0.462)     -0.311 (4.405)     -0.063 (0.080)
Moderately Selective   82.640** (29.846)   -0.559 (0.552)     2.119 (4.058)      -0.034 (0.090)
Selective              81.434* (32.322)    -0.792 (0.543)     2.372 (4.568)      -0.072 (0.094)
Highly Selective       61.304 (56.668)     -0.184 (0.831)     0.249 (4.883)      -0.114 (0.116)
Very Highly Selective  233.543* (94.591)   1.035 (1.800)      12.950 (8.082)     -0.363* (0.151)
R-squared              .8556               .6163              .8546              .6158
n                      3111                3111               3111               3111
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

The interaction between binary performance funding and selectivity was a statistically significant predictor of the average amount of Pell Grant awarded for three of the five selectivity levels. Moderately selective institutions with performance funding had a predicted average Pell Grant $82.64 higher (significant at the p<.01 level). Selective institutions with performance funding had a predicted average Pell Grant $81.43 higher (significant at the p<.05 level). Very highly selective institutions with performance funding had a predicted average Pell Grant $233.54 higher (significant at the p<.05 level). The interaction between the performance funding count variable and selectivity did not produce any statistically significant predictors of the average Pell Grant amount awarded.

The model with an interaction between a binary performance funding variable and selectivity did not produce any statistically significant predictors of the percent of FTFT students with Pell Grants. Utilizing a performance funding count variable resulted in one statistically significant predictor, at the p<.05 level, of the percent of FTFT Pell Grant awardees: very highly selective institutions with performance funding had a predicted enrollment of Pell Grant students 0.36% lower for each year of performance funding. Utilizing the mean enrollment for very highly selective institutions (1,840), this translates to about seven fewer students a year, or about 33 students by the last year of the average length of the policy.

Selectivity and underserved minority student variables. To explore if the effect of performance funding on underserved minority student enrollment varies by admission rate, models were estimated with an interaction effect between a categorical selectivity variable and performance funding, in both binary and count form. Models included both the total underserved minority population and the percent of the FTFT class that identified as underserved minority students as dependent variables.
Table 15 presents the results from these models, with open access institutions without performance funding as the comparison group. Calculating a model with the interaction of binary performance funding and selectivity resulted in only one statistically significant predictor of total underserved minority student enrollment: very highly selective institutions with performance funding had predicted enrollment of underserved minority students 140 students lower (significant at the p<.01 level). Utilizing the same interaction on underserved minority percent enrollment produced similar results, with both highly selective and very highly selective institutions being statistically significant predictors. Highly selective institutions with performance funding had predicted enrollments of underserved minority students 1.10% lower, with results significant at the p<.05 level; utilizing the enrollment mean (1,749), this translates to about 19 fewer students. Very highly selective institutions with performance funding had predicted enrollments of underserved minority students 5.66% lower, with results significant at the p<.001 level; utilizing the mean enrollment for very highly selective institutions, this translates to about 104 fewer students.

Table 15
Results of Performance Funding Interacted with Selectivity on USM Variables

                       Performance Funding Binary            Performance Funding Count
                       USM Total            USM Percent      USM Total           USM Percent
Open Access            11.137 (12.075)      0.042 (0.330)    0.885 (1.278)       -0.050 (0.056)
Moderately Selective   15.090 (14.456)      -0.342 (0.405)   0.935 (1.377)       -0.093 (0.073)
Selective              -1.303 (10.767)      -0.475 (0.384)   1.347 (1.467)       -0.073 (0.056)
Highly Selective       -25.195 (16.582)     -1.101* (0.518)  -0.789 (1.615)      -0.221* (0.087)
Very Highly Selective  -140.280** (46.965)  -5.662*** (1.311) -17.118*** (3.899) -0.732*** (0.080)
R-squared              .4255                .3266            .4240               .3308
n                      6156                 6156             6156                6156
Robust standard errors in parentheses.
*Significant at p<.05; **significant at p<.01; ***significant at p<.001

Estimating the models with an interaction between performance funding count and selectivity produced similar results. Very highly selective institutions with performance funding had predicted enrollment of underserved minority students 17 students lower for each year with performance funding (significant at the p<.001 level); extrapolated to the average length of performance funding, this results in about 86 fewer students by the last year of the policy. Utilizing the same interaction on the percent of underserved minority students, both highly selective and very highly selective institutions were statistically significant predictors. Highly selective institutions with performance funding had expected underserved minority enrollment 0.22% lower for each year of performance funding (significant at the p<.05 level); utilizing mean enrollment, this translates to about four students a year, or 19 by the end of the average length of performance funding. Very highly selective institutions had predicted enrollment of underserved minority students 0.73% lower for each year of performance funding, or about 13 fewer students a year and 67 fewer students in the last year of the average length of performance funding (significant at the p<.001 level).

Taken together, the models above demonstrated clear evidence that selectivity influences the effect of performance funding on underserved student enrollment, for both Pell Grant eligible students and underserved minority students.
A binary performance funding interaction showed that three of five selectivity categories were statistically significant predictors of the average amount awarded in Pell Grants. Additionally, the very highly selective category was a statistically significant negative predictor of the percent of FTFT students who receive Pell Grants. Further, the very highly selective category was consistently a statistically significant negative predictor of both total underserved minority enrollment and the percent of underserved minority students enrolled; this finding held true for both the binary performance funding interaction and the performance funding count interaction. For the percent of underserved minority students enrolled, the highly selective category was also a statistically significant negative predictor. Together these results demonstrated that more selective institutions respond to performance funding differently than less selective institutions.

Conclusion

These findings demonstrate that performance funding does have the potential to influence enrollment profiles at U.S. public four-year institutions. Specifically, this study found it changed the enrollment of underserved minority students. Further, these influences may not be equitable across all institutions, and instead may affect lower status institutions in a different manner than higher status institutions. Specifically, those with more flexibility in their enrollment profile may be more likely to change their enrollment of both Pell Grant students and underserved minority students. These findings have profound implications for higher education institutions, policy formation, and social equity. The next chapter discusses these findings in the context of previous research, considers how they may affect the goals of performance funding, higher education institutions, and social equity writ large, and presents implications for both policy creators and policy researchers.

Chapter 5

Summary of Study

Performance funding policies are increasing in prevalence as state legislators push for greater efficiency and specific quantifiable outcomes from higher education institutions. Advocates of these policies claim they increase efficiency by rewarding reductions in cost and promoting desired outcomes. However, this claim assumes that higher education operates within a competitive marketplace where the rules and assumptions of market economics (such as rational choice theory or perfect knowledge) can be met (Gumport, 2001). Yet many higher education researchers believe standard economic market assumptions do not apply to institutions of higher education and that policies and decisions based on these assumptions are flawed (e.g., Gumport, 2001; Marginson, 1997; McMahon, 2009; Slaughter & Rhoades, 2004). If higher education does not function as a traditional market, then instituting performance funding policies could create quasi-markets, where market behavior arises from the priorities of the policy makers rather than from true economic efficiency and the supply and demand of resources (Taylor, Cantwell & Slaughter, 2013). If performance funding policies value efficiency (increased economic outputs at lower costs) and quasi-markets reward institutions that engage in behavior valued by the policy, then institutions may engage in behaviors to increase their quality and decrease their costs in order to reap the benefits of the performance funding policy.
However, due to the iron triangle of cost, quality, and access (Immerwahr et al., 2008), these behaviors may have unintended outcomes. Specifically, the quasi-markets created by performance funding policies may create incentives for institutions to attend to the metrics measured by policymakers (efficiency) and give less attention to the metrics for which institutions are not held accountable (underserved student enrollment). In other words, performance funding may cause institutional leaders to feel they must change enrollment profiles (Immerwahr et al., 2008). Limited research has addressed this phenomenon and explored how higher education institutions are balancing the competing needs of demonstrating efficient economic outcomes via performance funding while maintaining access and success for all students, particularly in the four-year sector. As such, this study explored how performance funding influences the enrollment of underserved students with a focus on the following research questions:

1) Are there unintended consequences from performance funding policies related to underserved students' enrollment at public four-year institutions?
   a. Does performance funding have an influence on the enrollment of Pell Grant eligible students?
   b. Does performance funding have an influence on the enrollment of underserved minority students?
2) Does institutional type influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   a. Does Carnegie classification influence the effect of performance funding on underserved students' enrollment at public four-year institutions?
   b. Does selectivity influence the effect of performance funding on underserved students' enrollment at public four-year institutions?

To examine the influence of performance funding on underserved student enrollment, this study utilized OLS-based panel analysis for the years 2000-2014. While most performance funding research addresses state level questions, this study specifically focused on institutional level behavior. To answer the research questions, models with four different dependent variables were calculated: average amount of Pell Grant awarded, percent of FTFT students who receive Pell Grants, total underserved minority enrollment, and percent of FTFT students who identify as underserved minority students. Models with zero, one, two, and three year lags were calculated, as were models with both a binary and a count performance funding variable. All models included controls for institutional and state variables as well as robust standard errors. Further, institutional and year fixed effects were included in all models. Models were calculated without institutional differentiation and with institutions categorized by Carnegie classification and selectivity. A total of 48 main models were utilized for this study; the general specification is sketched below.
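The dissertation states the model components in prose rather than as an equation; the following LaTeX rendering is an illustrative reconstruction of that specification, with notation chosen here rather than taken from the text.

% Illustrative two-way fixed effects specification for the 48 main models:
% Y_{it} is one of the four outcomes for institution i in year t, and
% PF_{i,t-k} is the binary or count performance funding measure at lag k = 0,...,3.
\begin{equation}
Y_{it} = \beta \, PF_{i,t-k} + \mathbf{X}_{it}'\boldsymbol{\gamma}
       + \mathbf{Z}_{s(i),t}'\boldsymbol{\delta} + \alpha_i + \tau_t + \varepsilon_{it}
\end{equation}
% X_{it}: institutional controls (tuition, institutional aid, appropriations as a
% percent of revenue, appropriations per FTE); Z_{s(i),t}: state controls (need
% based aid, college age population, affirmative action ban); \alpha_i and \tau_t:
% institution and year fixed effects; heteroskedasticity-robust standard errors.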
Discussion of Findings

Underserved Student Enrollment

Previous research has raised concerns that performance funding may change enrollment patterns, particularly that institutions may decrease the enrollment of underserved students (Bell, 2005; Colbeck, 2002; Dougherty et al., 2014b; Lahr et al., 2014; Naughton, 2004; Rutherford & Rabovsky, 2014). However, previous research either raised this concern without confirming whether the change occurred or was qualitative in nature. For instance, higher education administrators were concerned that performance funding policies would result in "creaming," or enrolling only students who would perform well on performance funding metrics (Dougherty et al., 2014b; Lahr et al., 2014). However, this notion was only mentioned as a possibility, with no administrators stating it had been an intentional decision at their institution. This study expands on the current literature by quantitatively exploring whether performance funding does change enrollment patterns, in particular for underserved students.

Findings from this study justify the concern, as results show that institutions do change their enrollment profiles when subject to performance funding. Institutions with performance funding enrolled underserved minority students at a lower percent than institutions not subject to the policy, suggesting that performance funding may indeed have unintended consequences. This finding supports previous qualitative research suggesting that institutions may engage in "creaming," or admitting students who are more likely to succeed in higher education, as a response to performance funding (Colbeck, 2002; Dougherty et al., 2014; Lahr et al., 2014). However, previous qualitative research did not investigate whether performance funding does in fact lead to changes in enrollment profiles. Recently, one quantitative study did explore this specific question. Umbricht, Fernandez, and Ortagus (2015) studied a variety of unintended outcomes in relation to performance funding in Indiana. Specifically, through a difference in difference analysis, they found that performance funding decreased the enrollment of minority students in Indiana. The findings of the current study expand on Umbricht et al.'s (2015) study and suggest that Indiana is not the only state where these unintended outcomes are occurring.

Decreasing the enrollment of underserved minority students has several implications for the objectives of performance funding policies, for higher education institutions specifically, and for social equity broadly. Most performance funding policies include increasing the number of residents with postsecondary credentials as one of their goals. This requires increasing higher education completion rates, but it also often necessitates increasing the number of students enrolling in higher education. If fewer underserved minority students are enrolling in higher education institutions, this can be detrimental to the state's goal of increasing higher education access and completion. Further, decreasing racial diversity in higher education is not only antithetical to the purpose of higher education but also goes against the desires of many performance funding policy creators. In fact, a few states have specifically tried to prevent changes in underserved student enrollment by creating metrics to counteract this creaming effect. Up to 13 states have included a metric or measure within the formula specifically addressing underserved student enrollment (HCM, 2015). For instance, in recent years Michigan added a metric that measures the enrollment of Pell Grant students. The increasing prevalence of these metrics in recent years shows that policy creators are cognizant of the creaming effect and are taking steps to ensure performance funding does not decrease the diversity of students enrolled in higher education.
The potential decrease in underserved minority student enrollment is also detrimental to higher education institutions and to the goal of exposing students to diverse experiences while in higher education in order to create beneficial learning experiences for all students. Racial segregation has increased in the U.S., and most students now live in segregated communities before entering higher education. Often, higher education is the first time students are exposed to the opportunity to learn from peers with different cultures, values, and experiences (Gurin, Dey, Hurtado, & Gurin, 2011). Research shows that students' exposure to diversity on their campus is beneficial for a wide variety of student outcomes across all student racial identities (Astin, 1993; Gurin et al., 2011; Hurtado, 2001). By decreasing the enrollment of underserved minority students, performance funding may take away this beneficial learning experience for many students.

Finally, this finding may have impacts on social equity. With performance funding spreading across the country (NCSL, 2016), institutions responding by closing access to underserved minority students could have serious repercussions for the whole higher education system and for educational attainment across the U.S. Educational attainment in the U.S. for adults 25-34 lags behind other nations, and in order to match the level of higher performing nations the U.S. would need to increase the number of adults with college degrees by almost 8% a year until 2020 (Heller, 2013; Perna, 2013). To accomplish this, however, institutions need to enroll a more diverse group of students, as the high school graduate population is becoming more racially diverse (NCES, 2014; WICHE, 2012). Excluding underserved minority populations from enrolling in higher education limits the pool of students eligible to enroll in and complete higher education. Therefore, if performance funding policies limit enrollment of underserved students, these policies may be antagonistic to the goal of increasing U.S. higher education participation and completion. Additionally, research shows that a wide variety of societal benefits (e.g., monetary returns to the state, working in underserved communities, cultural engagement) are linked with students having diverse experiences in higher education (Gurin et al., 2011). Decreasing racial diversity in higher education would limit these beneficial diverse experiences and therefore decrease the societal benefits associated with higher education, such as increased monetary returns.

The fact that institutions with performance funding decreased their enrollment of underserved students provides supporting evidence for the hypothesis that, when faced with pressure from performance funding to increase efficiency (i.e., increase quality and decrease cost), institutions responded by changing the enrollment of underserved minority students, as predicted by Immerwahr et al.'s (2008) iron triangle of cost, quality, and access. While this study is not causal in nature, and the results therefore cannot be interpreted to mean that performance funding caused this change in enrollment patterns, the finding does present further support and evidence that institutional leaders feel that cost, quality, and access are interrelated. As institutional leaders feel the pressure of performance funding metrics, which ask for higher quality without increases in costs, at least some institutions look to decisions around access for solutions.
One possible explanation for this phenomenon is that as institutions feel the financial pressure of decreased state funding (SHEEO, 2015), they may feel forced to make decisions to improve efficiency in order to perform better on performance funding measures. However, since institutional leaders often lack the data to know what relationship, if any, exists between institutional expenditure and outcomes (Powell et al., 2012), they may look elsewhere to determine how to improve their efficiency. Performance funding policies have explicit metrics that outline how the state views higher education performance and how it assesses improvement in efficiency (i.e., higher graduation rates). This pressure could therefore lead institutional leaders to feel the need to enroll students who historically have performed better on the metrics measured by performance funding (such as retention and graduation rates) in the hopes of improving their score on performance funding metrics and thereby increasing their state funding. As research has shown that underserved minority students have lower retention and graduation rates overall (NCES, 2014), this may result in institutions changing their enrollment to lower rates of underserved minority students in order to perform well on the metrics in the performance funding policy.

If institutions are responding to performance funding due to financial pressure, institutions that rely more heavily on state appropriations may feel more pressure to perform better on performance funding metrics in order to avoid a decrease in their state funding. Results from this study give credibility to this theory. In all models of this study, state appropriations as a percent of institutional revenue is a strong negative predictor of the enrollment of underserved minority students. The fact that appropriations as a percent of institutional revenue was a strong and consistent negative predictor of enrollment gives credence to Immerwahr et al.'s (2008) belief that cost, quality, and access are linked in an iron triangle. As institutions became more reliant on state appropriations (regardless of whether they were subject to performance funding), they decreased their enrollment of underserved students, with most models significant at the p<.001 level.

This study also explored whether performance funding influences the enrollment of Pell Grant students. Results from the analysis consistently indicate that performance funding is not a statistically significant predictor of the percent of Pell Grant students enrolled or the average amount of Pell Grant awarded by institutions. However, null results do not necessarily indicate that there is no relationship between Pell Grant variables and performance funding. While no statistically significant results were found for the models used, it is important to note that several states included in the model had performance funding for only three years. It may take more than three years for change to occur in response to performance funding policy (Hillman, Tandberg, & Gross, 2014). For instance, for an institution to respond to its score on a performance funding model, several years may need to pass in which the institution is not only subject to performance funding but also receives its score and allocation. Without several years of data, an institution may not have a clear view of the trend occurring at its institution.
Complicating the results further, a few states implement performance funding with a soft implementation, where for a year or two institutions receive their scores on the metrics but no changes to monetary allocations occur. Therefore, the results of this study may not fully incorporate effects that may occur five, six, or seven years after performance funding goes into effect. Hillman, Tandberg, and Gross' (2014) study found no relationship between institutional performance and performance funding until seven years post implementation. Therefore, there still exists the possibility that a change in Pell Grant enrollment could be seen over a longer time frame. Future analysis should continue to examine this phenomenon once institutions have been subject to performance funding for longer time periods. In fact, other studies that covered different years, with different states included and excluded, found that performance funding may lead to enrolling students from higher income families (Kelchen & Stedrak, 2016).

These results seem to give credence to the belief that higher education institutions respond to policy initiatives by creating quasi-markets. If enrollment were strictly based on supply and demand, then performance funding should have no effect on enrollment patterns; further, state population factors would have a stronger effect. However, this study found that state population factors either had no statistically significant results or, when statistically significant, had marginal effects.

Institutional Type

This study also explored whether the effects of performance funding on underserved student enrollment varied by Carnegie classification and institutional selectivity. Findings revealed that institutions do respond differently to performance funding based on Carnegie classification and selectivity. When interacted with Carnegie classification, performance funding was found to be a statistically significant predictor of underserved minority enrollment for five institutional types out of a total of 10 classifications. Similarly, when interacted with Carnegie classification, performance funding was found to be a statistically significant predictor of Pell Grant enrollment and funding amount for three institutional types out of a total of 10 classifications. Specifically, Doctoral A institutions with performance funding decreased both the percent of Pell Grant students they enrolled and the average amount of Pell Grant awarded (implying they were enrolling less financially needy students), while Doctoral C institutions with performance funding increased the percent of Pell Grant students they enrolled. In regards to underserved minority students, Masters A and Masters B institutions with performance funding both decreased their enrollment of underserved minority students.

Separating institutions into five selectivity categories produced similar results. When interacted with selectivity level, performance funding was found to be a statistically significant predictor of Pell Grant enrollment or funding for three selectivity levels. Additionally, performance funding interacted with selectivity was found to be a statistically significant predictor of underserved minority enrollment for two levels. Specifically, moderately selective, selective, and very highly selective institutions with performance funding increased the average amount of Pell Grant awarded.
Moderately selective and selective institutions with performance funding did not change the percent of Pell Grant students they enrolled at statistically significant levels; however, very highly selective institutions with performance funding decreased the percent of Pell Grant students they enrolled at the same time. This implies that very highly selective institutions with performance funding decreased the number of Pell Grant students they enrolled, but that those they did enroll had greater need than previous enrollment classes. Taken together, these findings present compelling evidence that not all institutions respond to, or are affected by, performance funding in similar ways.

While limited quantitative research on performance funding has explored effects at the institutional level, this finding is similar to results from Rabovsky's (2012) study, which found that institutions responded to performance funding differently when separated by institutional type, with expenditures on instruction as the dependent variable. In particular, the current study's finding of diverse policy effects is noteworthy because it could help explain the mixed results in research on the effectiveness of performance funding. If institutions are responding to performance funding differently, then exploring questions only at the state level may not fully capture institutional policy response and effectiveness. For instance, if a state has 14 public four-year institutions, five of which respond to performance funding by meeting the goals and increasing graduation rates while nine show no statistically significant change in graduation rates, the state level results could potentially show no statistically significant effect. Further, the preponderance of studies utilizing a difference in difference design exacerbates this problem by matching only on state level variables, without consideration for the institutional variation that may be occurring within and between states.

Results show that institutions that are more selective (and therefore have more flexibility with their enrollment profile) are more likely to decrease the percent of underserved minority students enrolled. This finding gives credibility to the PAT framework perspective that institutions will focus on maximizing their own utility (performing well on performance funding) as opposed to maximizing state utility (enrolling a diverse population). While performance funding seeks to decrease this shirking, it seems to have continued the process for those institutions that are able to alter their enrollment profile. This finding is particularly troubling, as it demonstrates that performance funding may shift underserved students to lower status institutions.

This finding has implications for performance funding policies, for higher education institutions specifically, and for social equity writ large. Most performance funding policies have a goal of increasing student completion in order to reap the benefits that arise from a highly educated populace. By further segregating students by institutional type, performance funding is having an unintended outcome counter to the goals of the policy. While this study did not explore overall student completion, the findings show that underserved student access is changing with the implementation of performance funding. Shifting underserved student access to lower status institutions may have a detrimental effect on student completion.
Results show that institutions that are more selective (and therefore have more flexibility in their enrollment profile) are more likely to decrease the percent of underserved minority students enrolled. This finding gives credibility to the PAT framework perspective that institutions will focus on maximizing their own utility (performing well on performance funding) rather than maximizing state utility (enrolling a diverse population). While performance funding seeks to decrease this shirking, it appears to have continued the process for those institutions that are able to edit their enrollment profile. This finding is particularly troubling as it demonstrates that performance funding may shift underserved students to lower status institutions.

This finding has implications for performance funding policies, for higher education institutions specifically, and for social equity writ large. Most performance funding policies aim to increase student completion in order to reap the benefits that arise from a highly educated populace. By further segregating students by institutional type, performance funding is producing an unintended outcome counter to the goals of the policy. While this study did not explore overall student completion, the findings show that underserved student access is changing with the implementation of performance funding. Shifting underserved student access to lower status institutions may have a detrimental effect on student completion. Less selective institutions often have fewer resources and lower graduation rates (Schudde & Goldrick-Rab, 2016). In fact, college completion rates at less selective institutions are almost 25 percentage points lower than rates at very highly selective institutions (Schudde & Goldrick-Rab, 2016). Shifting underserved students to these institutions may therefore be counterproductive to performance funding's goal of increasing student completion.

This finding may also affect institutions and their administrations. If underserved students are shifted to lower status institutions, this creates a segmentation of higher education by racial diversity that is antithetical to the purpose of higher education and to creating beneficial learning experiences for enrolled students. For instance, if underserved minority students are shifted to less selective institutions, then more selective institutions will lose diversity, which would negatively affect all the students enrolled at those institutions (Gurin et al., 2011). Additionally, less selective institutions, including open access institutions, may then see an increase in underserved students. While the increase in diversity will benefit these institutions and their students, these institutions are often more limited in their financial resources and may not be able to provide all the supports required for a larger student body.

Finally, this finding has repercussions for social equity. Access to the most selective institutions is still unequal in the U.S. and has created a system in which the institutions with the greatest access provide the least social benefit, whereas the institutions with the potential to provide the greatest social benefit remain inaccessible to many (Schudde & Goldrick-Rab, 2016). If performance funding decreases enrollment of underserved students in higher status institutions, then underserved students may be pushed into lower status institutions, potentially including 2-year institutions. While these institutions can provide an excellent education (often at a lower cost), they also often offer fewer opportunities for students. Further, significant variation in earnings and labor market outcomes occurs based on college type (Schudde & Goldrick-Rab, 2016). Institutional selectivity positively affects earnings for bachelor's degree recipients, and the relationship strengthens over time (Long, 2010). Even when controlling for factors that predict enrollment in selective institutions, the positive effect remains for underserved minority students. Therefore, shifting underserved students to lower status institutions may decrease these societal benefits.

This study also supports the concern institutional leaders have raised that performance funding policies do not take into consideration the diverse missions of institutions in their state (Dougherty et al., 2014b). Rutherford and Rabovsky (2014) argue that performance funding policies try to apply a one-size-fits-all mentality to diverse institutions and ignore many contextual factors that may influence an institution's ability to respond to the requirements or goals of performance funding. The diverse effects of performance funding found in this study demonstrate that these policies influence institutions differently and may create significant bifurcation in a state's higher education system, as those whom the policy benefits receive more funds while those who are not able to respond to the metrics of the policy receive lower appropriations.
This study therefore further supports CCA's (2012) recommendation that performance funding policies take into account different institutional types or compare institutions only to their peers, as is done in Ohio and Michigan.

Implications

Implications for Policy

This study suggests several implications for policy makers. First, it is clear that policy effects are not one size fits all. While future research should explore whether other policies also have diverse effects based on institutional type, it is important in the immediate term to at least consider the diverse effects of performance funding based on Carnegie classification and selectivity. Institutions are not homogenous; they have varying institutional missions and goals, and may therefore choose to respond differently to performance funding based on those missions and goals, or may have differing abilities to respond to performance funding mandates. These diverse missions and goals are not always represented in performance funding policies and may create unequal repercussions from the policy.

Further, institutions have different abilities to respond to policy mandates. This study demonstrated that more selective institutions appear to have greater ability to change their enrollment profile in response to performance funding. However, enrollment and student performance make up only one area of performance funding metrics. It can be assumed that institutions may also respond to performance funding by altering other business practices, such as research funding, in order to perform well on the metrics associated with those measures. It can likewise be assumed that different institutional types are able to make these changes to different degrees or with different amounts of ease. Therefore, policy makers should acknowledge these differing abilities to respond to performance mandates and provide supports and timelines that allow institutions to make the necessary changes.

The diverging responses to performance funding policies also suggest the need not to compare diverse institutions across a state, but to compare institutions within institutional type. Creating a performance funding policy that compares institutions with their institutional peers helps ensure institutions are compared to those with similar missions and similar abilities to respond to performance funding mandates.

Another policy implication is that institutional change can be slow and the effects of policy may not be seen for several years. The slow nature of change at institutions can complicate performance funding policies, which often push to see results quickly. Policies with financial implications for institutions (such as performance funding policies) should allow for a few years in which institutions are aware of the changes they need to make and are given the chance to make them without financial repercussions. In fact, the CCA (2012) argues for a period in which institutions can familiarize themselves with the metrics and see how they would perform, with limited financial impacts. Likewise, it is important to note that policy effects are often not seen until many years post implementation. Therefore, policy makers should take great care not to make decisions based on information and results seen in the first few years after policy implementation.

Finally, as performance funding may affect enrollment profiles, it is important to consider including metrics related to underserved student enrollment.
In recent years, several states have implemented metrics intended to counteract the incentive for higher education institutions to engage in creaming. These metrics often include weighting underserved students differently in progression and completion metrics or adding a separate metric specifically focused on underserved student enrollment. Findings from this study demonstrate the importance of metrics such as these, but also raise questions regarding the diverse responses based on institutional type. As states implement these types of metrics, it is important to keep in mind that different institutions may respond differently, and the metrics should account for this.

Implications for Research

The findings from this study present several areas for potential future research. First, this study demonstrated the need to separate by institutional type when conducting policy evaluation research. This study presents evidence that performance funding policy effects are not monolithic, and researchers do their studies a disservice if they do not explore policy effects by institutional type. Additionally, as most performance funding research explores outcomes at the state level, future research should continue to explore performance funding at an institutional level. For instance, most research that explores the effectiveness of performance funding does so at the state level; it would be beneficial to explore similar research questions at an institutional level. In particular, the mixed results arising from research on the effectiveness of performance funding present compelling research questions related to whether there are diverse effects within a state across institutional types.

Second, state appropriations as a percent of institutional revenue was a strong and negative predictor of underserved student enrollment in almost all models. Yet limited research exploring institutional financial decisions utilizes this control variable, despite literature demonstrating that an institution's reliance on state appropriations influences institutional decision making. Future research should consider this relationship carefully and, where appropriate, include a measure of reliance on state appropriations as a variable.

Third, as noted above, as more data become available and institutions maintain performance funding policies for longer periods, research should continue to explore whether performance funding influences Pell Grant enrollment. Many performance funding policies did not go into effect until after 2008, so limited data are available on their effectiveness and any unintended outcomes that may occur. This study found no evidence that performance funding changed enrollment of Pell Grant students; however, several states in the dataset had performance funding for only three years or less, leaving the possibility of long term effects unknown.

This study found evidence that institutions change their enrollment profiles by decreasing overall underserved minority enrollment. Future research should explore this further to determine whether this decrease occurs across all races at random or whether certain races are seeing larger decreases than others. Given the changing demographics of high school graduation cohorts and the need to increase higher education enrollment for the U.S. to stay competitive, it is vitally important to determine specifically where the enrollment changes are occurring.
A final area for future research is to explore whether these changes to enrollment profiles, assumed to occur to improve institutional performance on performance funding metrics, actually lead to higher scores on those metrics. If institutions are responding to the pressures of performance funding by narrowing enrollment profiles in the hope of increasing their performance and therefore their funding from state appropriations, it is important to explore whether these enrollment changes do lead to the end goal of increased funding. Further, as most policy leaders would state that they do not want to see enrollment decrease due to performance funding, research should explore whether the steps states have taken to curb this unintended outcome actually prevent institutions from changing their enrollment profiles. For instance, in recent years, a few states have included metrics that focus on maintaining or increasing the admission of underserved students. Future research should conduct rigorously controlled quantitative studies to explore whether the inclusion of such a measure does prevent creaming or other changes to enrollment profiles.

Conclusion

As performance funding spreads, researchers have expressed concern regarding potential unintended outcomes, including institutions changing their enrollment profiles. This study therefore examined whether performance funding influences underserved students' enrollment at public four-year institutions in the U.S. through a fixed effects panel analysis for the years 2000-2014. The findings revealed that performance funding does decrease the percent enrollment of underserved minority students and that performance funding has diverse effects on underserved student enrollment by institutional type. These findings have significant implications for performance funding policies, for higher education institutions, and for social equity broadly. The findings of this study should not be taken as an attack on performance funding policies, but as a call to think through the unintended outcomes that may occur and to create more robust policies that consider the enrollment of underserved students and the diverse policy effects on institutions based on mission and ability to respond to performance funding mandates.
APPENDIX

Table 16
Results for the Effects of Performance Funding Count on Pell Grant Percent

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Count)           -0.133 (0.079)     0.043 (0.080)      0.065 (0.078)      0.078 (0.069)
Institutional Aid Amount              -0.555*** (0.110)  -0.562*** (0.110)  -0.566*** (0.110)  -0.567*** (0.110)
Tuition                               0.658* (0.295)     0.626* (0.296)     0.620* (0.296)     0.623* (0.296)
Appropriations as Percent of Revenue  -0.049 (0.031)     -0.046 (0.031)     -0.046 (0.031)     -0.046 (0.031)
State Need Based Aid                  0.361 (0.669)      0.460 (0.691)      0.394 (0.695)      0.406 (0.688)
College Age Population                -0.003 (0.003)     -0.003 (0.003)     -0.004 (0.003)     -0.003 (0.003)
State Appropriations per FTE          -0.126 (0.227)     -0.091 (0.229)     -0.087 (0.228)     -0.089 (0.228)
Affirmative Action Ban                -2.034 (0.835)     -1.394 (0.886)     -1.410 (0.828)     -1.549 (0.811)
R-squared (within)                    .6221              .6216              .6217              .6219
n                                     3324               3324               3324               3324

Robust standard errors in parentheses. *Significant at p<.05; **significant at p<.01; ***significant at p<.001

Table 17
Results for the Effects of Performance Funding Count on Pell Grant Amount

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Count)           2.118 (3.796)      4.505 (3.195)      4.702 (3.130)      1.543 (2.621)
Institutional Aid Amount              3.830 (7.568)      3.540 (7.560)      3.369 (7.586)      3.713 (7.594)
Tuition                               -23.250* (9.298)   -24.454** (9.431)  -24.462** (9.307)  -23.315* (9.208)
Appropriations as Percent of Revenue  -2.675* (1.256)    -2.655* (1.259)    -2.674* (1.256)    -2.695* (1.253)
State Need Based Aid                  75.345 (53.408)    72.488 (52.280)    68.042 (52.070)    72.319 (52.313)
College Age Population                -0.295* (0.118)    -0.312** (0.119)   -0.327** (0.119)   -0.301* (0.118)
State Appropriations per FTE          8.511 (7.945)      9.654 (7.976)      9.551 (7.911)      8.496 (7.844)
Affirmative Action Ban                15.520 (48.554)    29.837 (49.362)    22.302 (48.172)    9.569 (46.800)
R-squared (within)                    .8544              .8545              .8546              .8544
n                                     3324               3324               3324               3324

Robust standard errors in parentheses. *Significant at p<.05; **significant at p<.01; ***significant at p<.001

Table 18
Results for the Effects of Performance Funding Count on USM Total

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Count)           -1.085 (1.278)     -1.319 (1.275)     -1.314 (1.235)     -1.403 (1.268)
Institutional Aid Amount              -2.304 (3.069)     -2.291 (3.071)     -2.317 (3.074)     -2.337 (3.081)
Tuition                               13.450** (5.164)   13.457** (5.163)   13.440** (5.156)   13.408* (5.158)
Appropriations as Percent of Revenue  -1.791*** (0.498)  -1.788*** (0.498)  -1.778*** (0.497)  -1.766*** (0.497)
State Need Based Aid                  31.716* (15.012)   31.495* (14.946)   31.254* (14.911)   30.739* (14.870)
College Age USM                       1.288*** (0.138)   1.290*** (0.138)   1.291*** (0.138)   1.294*** (0.138)
State Appropriations per FTE          -1.714 (4.139)     -1.681 (4.122)     -1.513 (4.091)     -1.349 (4.071)
Affirmative Action Ban                2.861 (23.619)     2.289 (23.624)     3.299 (23.380)     4.317 (23.352)
R-squared (within)                    .3968              .3969              .3969              .3969
n                                     7016               7016               7016               7016

Robust standard errors in parentheses. *Significant at p<.05; **significant at p<.01; ***significant at p<.001

Table 19
Tobit Results for the Effects of Binary Performance Funding on USM Percent

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Binary)          -0.635*** (0.149)  -0.699*** (0.150)  -0.477** (0.153)   -0.337* (0.155)
Institutional Aid Amount              -0.316*** (0.049)  -0.317*** (0.049)  -0.321** (0.049)   -0.322*** (0.049)
Tuition                               0.377*** (0.067)   0.371*** (0.067)   0.375*** (0.067)   0.376*** (0.067)
Appropriations as Percent of Revenue  -0.073*** (0.010)  -0.073*** (0.010)  -0.073*** (0.010)  -0.073*** (0.010)
State Need Based Aid                  2.449*** (0.316)   2.378*** (0.315)   2.368*** (0.315)   2.369*** (0.315)
College Age USM Percent               0.157*** (0.013)   0.162*** (0.013)   0.162*** (0.013)   0.161*** (0.013)
State Appropriations per FTE          -0.159* (0.068)    -0.158* (0.067)    -0.148* (0.067)    -0.138* (0.068)
Affirmative Action Ban                -0.819* (0.336)    -0.907** (0.336)   -0.916** (0.336)   -0.894** (0.336)

Standard errors in parentheses. *Significant at p<.05; **significant at p<.01; ***significant at p<.001

Table 20
Tobit Results for the Effects of Performance Funding Count on USM Percent

Variable                              No lag             1 year lag         2 year lag         3 year lag
Performance Funding (Count)           -0.175*** (0.023)  -0.172*** (0.024)  -0.166*** (0.025)  -0.140*** (0.025)
Institutional Aid Amount              -0.305*** (0.049)  -0.308*** (0.049)  -0.312*** (0.049)  -0.317*** (0.049)
Tuition                               0.352*** (0.067)   0.355*** (0.067)   0.355*** (0.067)   0.360*** (0.067)
Appropriations as Percent of Revenue  -0.076*** (0.010)  -0.075*** (0.010)  -0.074*** (0.010)  -0.073*** (0.010)
State Need Based Aid                  2.460*** (0.314)   2.420*** (0.314)   2.399*** (0.315)   2.356*** (0.315)
College Age USM Percent               0.153*** (0.013)   0.156*** (0.013)   0.158*** (0.013)   0.159*** (0.013)
State Appropriations per FTE          -0.167* (0.067)    -0.157* (0.067)    -0.135* (0.067)    -0.120 (0.067)
Affirmative Action Ban                -1.067** (0.336)   -1.107** (0.336)   -0.977** (0.335)   -0.860* (0.335)

Standard errors in parentheses. *Significant at p<.05; **significant at p<.01; ***significant at p<.001
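For readers unfamiliar with the censored regression reported in Tables 19 and 20, the sketch below shows a generic left-censored Tobit log-likelihood estimated by maximum likelihood. It is a simplified, self-contained illustration with simulated data; it omits the panel structure and covariates of the actual models and is not the code used to produce these tables.

```python
# Generic Tobit (left-censored at 0) via maximum likelihood with scipy.
# A simplified illustration of the censored-regression idea behind
# Tables 19-20; it omits the panel structure of the actual models.
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, y, X, lower=0.0):
    """Negative log-likelihood for a left-censored Tobit model."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keep sigma positive
    xb = X @ beta
    cens = y <= lower
    ll = np.empty_like(y)
    # Censored observations contribute P(y* <= lower).
    ll[cens] = stats.norm.logcdf((lower - xb[cens]) / sigma)
    # Uncensored observations contribute the normal density.
    ll[~cens] = stats.norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)
    return -ll.sum()

# Toy data: 500 observations, intercept plus 3 regressors.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 3))])
y_star = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=2.0, size=500)
y = np.clip(y_star, 0.0, None)                 # left-censor at zero

start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], 0.0)  # OLS start values
res = optimize.minimize(tobit_negloglik, start, args=(y, X), method="BFGS")
print("beta:", res.x[:-1], "sigma:", np.exp(res.x[-1]))
```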
BIBLIOGRAPHY

Advisory Committee on Student Financial Assistance. (2013). Measure twice: The impact on graduation rates of serving Pell Grant recipients. Washington, DC: ACSFA.

Baltagi, B. H. (1995). Econometric analysis of panel data. Chichester, England: Wiley.

Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145, 47-58.

Baum, S., Ma, J., & Payea, K. (2013). Education pays 2013: The benefits of higher education for individuals and society. New York, NY: The College Board.

Bell, D. A. (2005). Changing organizational stories: The effects of performance-based funding on three community colleges in Florida (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

Bound, J. & Turner, S. (2007). Cohort crowding: How resources affect collegiate attainment. Journal of Public Economics, 91, 877-899.

Bound, J., Lovenheim, M. F., & Turner, S. (2010). Why have college completion rates declined? An analysis of changing student preparation and collegiate resources. American Economic Journal: Applied Economics, 2, 129-157.

Burke, J. C. (2005). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco, CA: Jossey-Bass.

Burke, J. C. & Minassians, H. P. (2002). The new accountability: From regulation to results. New Directions for Institutional Research, 116, 5-19.

Burke, J. C., & Serban, A. (1998). Performance funding for public higher education: Fad or trend? New Directions for Institutional Research, 97.

Carnevale, A. P. & Strohl, J. (2013). Separate and unequal: How higher education reinforces the intergenerational reproduction of white racial privilege. Georgetown Public Policy Institute.

Chen, R. (2011). Institutional characteristics and college student dropout risks: A multilevel event history analysis. Research in Higher Education, 53, 487-505.

Chen, R. & St. John, E. P. (2011). State financial policies and college student persistence: A national study. The Journal of Higher Education, 82(5), 629-660.
Complete College America (CCA). (2012). Performance funding: From idea to action. Indianapolis, IN: Jones, D.

Complete College America (CCA). (2013). The game changers: Are states implementing the best reforms to get more college graduates? Indianapolis, IN: CCA.

CCA. (2016). Retrieved November 7, 2016, from http://completecollege.org/the-alliance-of-states/

The College Board. (2016). Trends in student aid 2015. New York, NY: The College Board.

Colbeck, C. L. (2002). State policies to improve undergraduate teaching: Administrator and faculty responses. Journal of Higher Education, 73(1), 3–25.

Davis, J. H., Schoorman, F. D., & Donaldson, L. (1997). Toward a stewardship theory of management. Academy of Management Review, 22(1), 20-47.

Delaney, J. A., & Doyle, W. R. (2011). State spending on higher education: Testing the balance wheel over time. Journal of Education Finance, 36(4), 343–368.

Desrochers, D. M. & Hurlburt, S. (2014). Trends in college spending: 2001-2011. Washington, DC: Delta Cost Project.

Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea: The community college experience. In T. Bailey & V. S. Morest (Eds.), Defending the community college equity agenda (pp. 51–86). Baltimore, MD: Johns Hopkins University Press.

Dougherty, K. J., Jones, S. M., Lahr, H., Natow, R. S., Pheatt, L., & Reddy, V. (2014). Performance funding for higher education: Forms, origins, impacts, and futures. The ANNALS of the American Academy of Political and Social Science, 655, 163-184.

Dougherty, K. J., Natow, R. S., Jones, S. M., Lahr, H., Pheatt, L., & Reddy, V. (2014). The political origins of performance funding 2.0 in Indiana, Ohio, and Tennessee: Theoretical perspectives and comparisons with performance funding 1.0 (CCRC Working Paper No. 68). New York, NY: Community College Research Center, Teachers College, Columbia University.

Dougherty, K. J., & Reddy, V. (2013). Performance funding for higher education: What are the mechanisms? What are the impacts? ASHE Higher Education Report, 39(2), 1-134.

Dougherty, K. J., Natow, R. S., & Vega, B. E. (2012). Popular but unstable: Explaining why state performance funding systems in the United States often do not persist. Teachers College Record, 114(3). Retrieved from http://www.tcrecord.org/Content.asp?ContentId=16313

Eaton, J. S. (2012). The future of accreditation. Planning for Higher Education.

Edgell, M. S. (2009). Higher education finance policy: A comparative analysis of two dynamics of social contract in three Bologna Process countries (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3395449)

Ewell, P. T. (1994). A matter of integrity: Accountability and the future of self-regulation. Change, 26(6), 24-29.

Freeman, M. S. (2000). The experience of performance funding on higher education at the campus level in the past 20 years (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

Fuller, C., Lebo, C. & Muffo, J. (2012). Challenges in meeting demands for accountability. In R. D. Howard, G. W. McLaughlin, W. E. Knight & Associates (Eds.), The handbook of institutional research (pp. 299-309). San Francisco, CA: Jossey-Bass.

Gansemer-Topf, A. M. & Schuh, J. H. (2006). Institutional selectivity and institutional expenditures: Examining organizational factors that contribute to retention and graduation. Research in Higher Education, 47(6), 613-642.

Goldstein, L. (2012). A guide to college and university budgeting: Foundations for institutional effectiveness (4th ed.). Washington, DC: NACUBO.
Gumport, P. J. (2001). Built to serve: The enduring legacy of public higher education. In P. G. Altbach, P. J. Gumport & D. B. Johnstone (Eds.), In defense of American higher education (pp. 85-109). Baltimore, MD: Johns Hopkins University Press.

Hall, K. B. (2000). Tennessee performance funding and the University of Tennessee, Knoxville: A case study (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

Hauptman, A. M. (2011). Reforming how states finance higher education. In D. E. Heller (Ed.), The states and public higher education policy. Baltimore, MD: Johns Hopkins University Press.

Heck, R. H., Lam, W. S., & Thomas, S. L. (2014). State political culture, higher education spending indicators, and undergraduate graduation outcomes. Educational Policy, 28(1), 3-39.

Heller, D. E. (2011). The financial aid picture: Realism, surrealism, or cubism? Higher Education: Handbook of Theory and Research, 26, 125-160.

Heller, D. E. (2013). The role of finances in postsecondary access and success. In L. W. Perna & A. P. Jones (Eds.), The state of college access and completion (pp. 96-114). New York, NY: Routledge.

Heller, D. E. & Marin, P. (2004). State merit scholarship programs and racial inequality. Cambridge, MA: The Civil Rights Project at Harvard University.

Hillman, N. W., Tandberg, D. A. & Fryar, A. H. (2014). Evaluating the impacts of “new” performance funding in higher education. Educational Evaluation and Policy Analysis. Retrieved from http://epa.sagepub.com/content/early/2014/12/09/0162373714560224.abstract

Hillman, N., Tandberg, D., & Gross, J. P. K. (2014). Performance funding in higher education: Do financial incentives impact college completions? The Journal of Higher Education, 85(6), 826-857.

Immerwahr, J., Johnson, J., & Gasbarra, P. (2008). The iron triangle: College presidents talk about costs, access, and quality. The National Center for Public Policy and Higher Education.

Jaquette, O. & Parra, E. (2014). Using IPEDS for panel analyses: Core concepts, data challenges and empirical applications. Higher Education: Handbook of Theory and Research, 29, 467-533.

Keller, C. M. (2012). Collective responses to a new era of accountability in higher education. In R. D. Howard, G. W. McLaughlin, W. E. Knight & Associates (Eds.), The handbook of institutional research (pp. 371-385). San Francisco, CA: Jossey-Bass.

Kelly, A. P. & Lautzenheiser, D. K. (2013). Taking charge: A state-level agenda for higher education reform. Washington, DC: American Enterprise Institute.

Kelly, A. P. & Schneider, M. (2012). Getting to graduation: The completion agenda in higher education. Baltimore, MD: Johns Hopkins University Press.

Kezar, A. (2004). Obtaining integrity? Reviewing and examining the charter between higher education and society. Review of Higher Education, 27(4), 429-459.

Kim, J. (2012). Exploring the relationship between state financial aid policy and postsecondary enrollment choices: A focus on income and race differences. Research in Higher Education, 53, 123-151.

Lahr, H., Pheatt, L., Dougherty, K. J., Jones, S. M., Natow, R. S., & Reddy, V. (2014). Unintended impacts of performance funding on community colleges in three states. New York, NY: Teachers College, Columbia University.

Lane, J. E. (2012). Agency theory in higher education organizations. In M. N. Bastedo (Ed.), The organization of higher education: Managing colleges for a new era (pp. 278-303). Baltimore, MD: The Johns Hopkins University Press.
Lane, J. E. & Kivisto, J. A. (2008). Interests, information, and incentives in higher education: Principal-agent theory and its potential applications to the study of higher education governance. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 23, pp. 141-180). New York, NY: Springer.

LeGrand, J. (1991). Quasi-markets and social policy. The Economic Journal, 101, 1256-1267.

Marginson, S. (1997). Markets in education. St. Leonards, Australia: Allen and Unwin.

McPherson, M. S. & Schapiro, M. O. (1991). Does student aid affect college enrollment? New evidence on a persistent controversy. American Economic Review, 81(1), 309–318.

McMahon, W. (2009). Higher learning, greater good. Baltimore, MD: Johns Hopkins University Press.

McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1–24.

McLendon, M. K., Hearn, J. C., & Mokher, C. G. (2009). Partisans, professionals, and power: The role of political factors in state higher education funding. The Journal of Higher Education, 80(6), 686-713.

Mitchell, M., Leachman, M. & Masterson, K. (2016). Funding down, tuition up: State cuts to higher education threaten quality and affordability at public colleges. Washington, DC: Center on Budget and Policy Priorities.

Mitchell, M., Palacios, V., & Leachman, M. (2014). States are still funding higher education below pre-recession levels. Washington, DC: Center on Budget and Policy Priorities.

Naughton, B. A. (2004). The efficacy of state higher education accountability programs (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

National Association of State Budget Officers (NASBO). (2013). Improving postsecondary education through the budget process: Challenges and opportunities. Washington, DC: Pattison.

National Association of State Student Grant and Aid Programs (NASSGAP). (2016). 46th annual survey report on state-sponsored student financial aid.

National Center for Education Statistics. (2016). The condition of education 2016. Washington, DC: Kena et al.

National Student Clearinghouse. (2014). Completing college: A national view of student attainment rates - fall 2008 cohort. Herndon, VA: Shapiro et al.

National Conference of State Legislatures. (2006). Transforming higher education: National imperative-state responsibility. Washington, DC: NCSL.

National Conference of State Legislatures. (2015). Retrieved from http://www.ncsl.org/research/education/performance-funding.aspx

National Governors Association. (2007). Innovation America: A compact for postsecondary education. Washington, DC: NGA.

O’Neal, L. M. (2007). Performance funding in Ohio’s four-year institutions of higher education: A case study (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

Opoczynski, R. (2014). Quantifying the future of Michigan: Implementation and short term impacts of a performance funding policy. (Unpublished research).

Perna, L. W. (2013). Conclusions: Improving college access, persistence, and completion: Lessons learned. In L. W. Perna & A. P. Jones (Eds.), The state of college access and completion (pp. 208-224). New York, NY: Routledge.

Phillips, M. A. (2002). The effectiveness of performance-based funding: Does Florida’s system of outcome measurements improve community college performance? (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.
Pike, G. R. (2013). NSSE benchmarks and institutional outcomes: A note on the importance of considering the intended uses of a measure in validity studies. Research in Higher Education, 54, 149-170.

Pike, G. R. & Graunke, S. S. (2014). Examining the effects of institutional and cohort characteristics on retention rates. Research in Higher Education, 56, 146-165.

Powell, B. A., Gilleland, D. S. & Pearson, L. C. (2012). Expenditures, efficiency, and effectiveness in U.S. undergraduate higher education: A national benchmark model. The Journal of Higher Education, 83(1), 102-127.

Rabovsky, T. (2012). Accountability in higher education: Exploring impacts on state budgets and institutional spending patterns. Journal of Public Administration Research and Theory, 22(4), 675–700.

Rhee, B. (2008). Institutional climate and student departure: A multinomial multilevel modeling approach. The Review of Higher Education, 31(2), 161-183.

Rutherford, A. & Rabovsky, T. (2014). Evaluating impacts of performance funding policies on student outcomes in higher education. The ANNALS of the American Academy of Political and Social Science, 655, 185-208.

Ryan, J. F. (2004). The relationship between institutional expenditures and degree attainment at baccalaureate colleges. Research in Higher Education, 45(2), 97-114.

Sanford, T., & Hunter, J. M. (2011). Impact of performance funding on retention and graduation rates. Education Policy Analysis Archives, 19(33).

Schaller, J. Y. (2004). Performance funding in Ohio: Differences in awareness of Success Challenge between student affairs administrators and academic affairs administrators at Ohio’s public universities (Doctoral dissertation). Available from ProQuest Dissertations and Theses database.

Scott, M., Bailey, T., & Kienzl, G. (2006). Relative success? Determinants of college graduation rates in public and private colleges in the U.S. Research in Higher Education, 47(3), 249–279.

Shin, J. C. (2010). Impacts of performance-based accountability on institutional performance in the U.S. Higher Education, 60(1), 47–68.

Shin, J. C., & Milton, M. (2004). The effects of performance budgeting and funding programs on graduation rate in public four-year colleges and universities. Education Policy Analysis Archives, 12(22), 1–26.

State Higher Education Executive Officers (SHEEO). (2005). Accountability for better results: A national imperative for higher education. Denver, CO: SHEEO.

State Higher Education Executive Officers (SHEEO). (2013). State higher education finance: FY 2012. Denver, CO: SHEEO.

State Higher Education Executive Officers (SHEEO). (2014). State higher education finance: FY 2013. Denver, CO: SHEEO.

State Higher Education Executive Officers (SHEEO). (2016). State higher education finance: FY 2015. Denver, CO: SHEEO.

Singer, J. D. & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York, NY: Oxford University Press.

Slaughter, S. & Leslie, L. (1997). Academic capitalism: Politics, policies, and the entrepreneurial university. Baltimore, MD: Johns Hopkins University Press.

Slaughter, S. & Rhoades, G. (2004). Academic capitalism and the new economy: Markets, states and higher education. Baltimore, MD: Johns Hopkins University Press.

Sparks, P. J. & Nunez, A. (2014). The role of postsecondary institutional urbanicity in college persistence. Journal of Research in Rural Education, 29(6), 1-19.

Tandberg, D. A. (2010). Politics, interest groups and state funding of public higher education. Research in Higher Education, 51(5), 416–450.
Tandberg, D. A. & Griffith, C. (2013). State support of higher education: Data, measures, findings and directions for future research. Higher Education: Handbook of Theory and Research, 28, 613-685.

Tandberg, D. A. & Hillman, N. W. (2013). State higher education performance funding: Data, outcomes and policy implications. Paper presented at the Association for the Study of Higher Education.

Taylor, B. J., Cantwell, B. & Slaughter, S. (2013). Quasi-markets in U.S. higher education: The humanities and institutional revenues. The Journal of Higher Education, 84(5), 675-707.

Thelin, J. (2004). A history of American higher education. Baltimore, MD: The Johns Hopkins University Press.

Titus, M. A. (2006). Understanding college degree completion of students with low socioeconomic status: The influence of the institutional financial context. Research in Higher Education, 47(4), 371–398.

Titus, M. A. (2009). The production of bachelor’s degrees and financial aspects of state higher education policy: A dynamic analysis. The Journal of Higher Education, 80(4), 439–468.

Toman, J. K. (2008). Partners or adversaries: A comparative case study of the accountability relationship between higher education and state government in three states (Unpublished doctoral dissertation). University of South Dakota, Vermillion, SD.

Toutkoushian, R. K. & Hillman, N. W. (2012). The impact of state appropriations and grants on access to higher education and outmigration. The Review of Higher Education, 36(1), 51-90.

Toutkoushian, R. K. & Shafiq, M. N. (2010). A conceptual analysis of state support for higher education: Appropriations versus need-based financial aid. Research in Higher Education, 51, 40-64.

Van Der Klaauw, W. (2002). Estimating the effects of financial aid offers on college enrollment: A regression-discontinuity approach. International Economic Review, 43(4), 1249-1287.

Volkwein, J. F. (2009). The assessment context: Accreditation, accountability, and performance. New Directions for Institutional Research, 3-12.

Volkwein, J. F., & Tandberg, D. A. (2008). Measuring up: Examining the connections among state structural characteristics, regulatory practices, and performance. Research in Higher Education, 49(2), 180–197.

Weerts, D. J. & Ronca, J. M. (2012). Understanding differences in state support for higher education across states, sectors, and institutions: A longitudinal study. The Journal of Higher Education, 83(2), 155-185.

Western Interstate Commission for Higher Education. (2012). Knocking at the college door: Projections of high school graduates. Boulder, CO: Prescott, B. T. & Bransberger, P.

Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: The MIT Press.

Wooldridge, J. M. (2013). Introductory econometrics: A modern approach (5th ed.). Mason, OH: South-Western.

Zhang, L. (2010). The use of panel data models in higher education policy studies. Higher Education: Handbook of Theory and Research, 25, 307-349.

Zhang, L. & Ness, E. C. (2010). Does state merit-based aid stem brain drain? Educational Evaluation and Policy Analysis, 32(2), 143-165.