RETHINKING DATA-DRIVEN NATIONAL EDUCATIONAL GOVERNANCE THROUGH THE CASE STUDY OF THE NATIONAL ACHIEVEMENT SURVEY (NAS) IN INDIA

By

Jainisha Chavda

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Education Policy – Doctor of Philosophy

2022

ABSTRACT

There is an emerging body of literature theorizing the role of data as a key instrument for national educational governance in a globalized and neoliberal world. These insights and theorizations are predominantly derived from the experiences of Anglo-American countries with test-based accountability (TBA) policies. TBA policies hold schools accountable based on their performance in standardized tests. To expand the discussion of how data can serve as an instrument of national educational governance, in this dissertation I present an instrumental case study of the National Achievement Survey (NAS) in India. Using data from different types of policy literature and interviews with key officials, I argue that since 2017, NAS, a nationwide standardized assessment data collection effort in India, has been used both as an instrument to continue, promote, and support established bureaucratic governance practices (standard governance) and as a means to introduce governance by cross-state comparisons that extends the central government's "soft power" (comparative governance). In this manner, I argue that NAS as an instrument of data-driven national educational governance differs significantly from the commonly understood models of test-based accountability in Anglo-American countries in its form, motivations, purposes, and possible implications. While the adoption of TBA models in Anglo-American countries has indicated a shift to a "post-bureaucratic governance" approach that more closely adheres to neoliberal principles (see Maroy 2008; 2009; 2012), NAS continues to maintain the traditional bureaucratic approach. I explore the policy, organizational, and technical context that explains the rise and importance of NAS since 2017 in India. I trace India's education policy and data developments since the 1990s to show the clear demand for data like NAS 2017 in India. Once the demand gained credibility and urgency, I show how NAS 2017 emerged as the only feasible instrument to address this need, owing to the Indian bureaucracy's technical capacity constraints at the central and state levels. In the absence of any alternative, NAS has been targeted at many purposes and many users. Lastly, I argue that the rising importance of NAS since 2017, reflected in how it is being employed, can be attributed to path dependency, or its ability to enhance existing organizational practices without creating many disruptions. I call this the "new wine in an old bottle" approach. I discuss the key implications of understanding the NAS model for the literature on data-driven national educational governance. I also discuss how understanding India's experience with NAS illuminates the complexities of global education reform for developing countries and provides a distinctly different model of data-driven national educational governance compared to the TBA approach prevalent in Anglo-American countries.

Copyright by JAINISHA CHAVDA 2022

ACKNOWLEDGEMENTS

I have been waiting to write this for a long time because I am extremely overwhelmed by all the warmth, support, and love I have received on this journey. It is hard for me to put down everything in words without being emotional. I hope I can do justice to everyone's contribution.
I want to first begin by thanking all the experts and officials who gave their generous time to participate in this study, despite the various challenges of the COVID pandemic. This dissertation wouldn't have been possible without their rich inputs.

I want to thank the best advisor and mentor ever- Dr. Amita Chudgar. There can be no one like her! It is very difficult to write this acknowledgment for her because I don't know where to begin. Dr. Chudgar is one of the most empathetic people I know. She is extremely compassionate towards her students' circumstances and stands by them no matter what. Dr. Chudgar has left no stone unturned in guiding and supporting me. She was available to help me 24×7, be it professionally or personally. I cannot imagine how I would have managed without her relentless feedback and guidance on this work and countless other write-ups. I have learned the true meaning of 'research rigor' by working with her as an RA. I am so thankful and proud to have a person like Dr. Chudgar in my life. One of the biggest joys for me in my Ph.D. has been speaking to her in my mother tongue Gujarati and reminiscing about our shared love for India. Thank you Dr. Chudgar for giving me this homeliness away from home. You're the best!

I want to thank Dr. Rebecca Jacobsen for standing by me like a rock and just being the large-hearted person that she is. She has always encouraged me and reached out to help me, even without my asking. I have learned so much from her excellent feedback on my interview protocols and dissertation draft. I have had the privilege of taking the maximum number of classes with her during this Ph.D., either through coursework or various workshops. In all those interactions, I have been highly captivated by her extraordinary passion to teach and help students. Watching her teach is always the most thrilling and enlightening experience for me! I am honored to know and have the support of an incredibly generous human being like her.

I want to thank Dr. Bethany Wilinski for her invaluable suggestions on my work right from the beginning of my Ph.D. I have learned a lot from her about doing qualitative research in the international context. Taking her course helped me boost my confidence as a qualitative researcher, and in fact, led me to win an award! I am also grateful to Dr. Wilinski for always looking after me by regularly sharing various professional opportunities.

I am grateful to Dr. Lynn Paine for helping me see my work from unique perspectives and connect dots I couldn't imagine. Her incredible efforts through OISE have helped me expand my knowledge, resources, and connections in the field of international education. Participating in those sessions and events has truly made me feel part of an international community with like-minded people.

I want to thank professors outside my committee who have influenced and supported me. I want to thank Dr. Soma Chaudhuri for so enjoyably teaching me the science and art of doing gender-based research, and also for giving me a lot of useful personal advice. Working with her was a deeply enriching experience for me. I want to thank Dr. Michael Sedlak for going out of his way to encourage me when I joined this program. His handwritten appreciations on my class assignments are some of my most treasured possessions. I hope he knows how much self-confidence these gestures have given me. I want to acknowledge the faculty whose classes or projects have shaped me- Dr. David Arsen, Dr. Madeline Mavrogordato, Dr. Amy Parks,
Dr. Vaughn Watson, Dr. Josh Cowen, Dr. Scott Imberman, Dr. Steven Gold, Dr. Lucero Radonic, Dr. Yi-Ling Cheng, and Dr. Marilyn Amey.

I want to credit MSU's Ed Policy program, the College of Education, and the Graduate School for financial and various other forms of support throughout these five years, without which pursuing this Ph.D. wouldn't have been possible for me. I also want to convey my thanks to the administrative and support staff in the program who made my life easy by helping me complete various requirements and formalities. Outside of MSU, I want to thank Dr. Ayesha Khurshid at Florida State University and Dr. Payal Shah at the University of South Carolina for always being very generous in guiding me. I am grateful to Dr. Matt Witenstein at the University of Dayton for his incredible positivity, encouragement, and help in my research participant recruitment process.

I feel blessed for the friends I have made on this journey. Thank you Vanika and Shota for being great research partners; I have learned so much from both of you. Thank you for your feedback on my work and for always helping me do better. My special thanks to Vanika for regularly checking on me and driving me around in her car so that I wouldn't miss out on social events. I am very grateful to Youngran for being a lovely friend and for all the memorable evenings we have spent together. I give my sincere gratitude to Sandy, Andrea, Pauline, Alyssa, Michelle, Jessica, Danielle, Amy, Aliya, Daman, and other great colleagues in the Ed Policy program for always being extremely warm, helpful, and supportive. I am so proud to know all of you and learn from you. I convey my thanks to Adam and Vivek, my APU project colleagues, and Adrienne, my CIES NSC colleague, for a wonderful time working together. I want to thank my lovely friends in different parts of the US- Dipen, Himani, Vimlesh, Anuj, Gayatri, Sarvesh, and Harshal- for the great conversations and memories that have given me cheerfulness in difficult times. I am indebted to Himanshu and Bella Samantaray and their lovely daughters for constantly looking after me all these years and hosting my wedding festivities in the US. I am thankful to Apoorva, my flatmate, for always being accommodating and bearing my characteristic monologues. I also want to mention Brijen and his gujju gang for some great company during the Garba season.

Turning to my family, I want to begin by thanking my mom Raksha for persevering day and night to fulfill all my petty needs and making my well-being her mission in life. Thank you mammi for being my rock of Gibraltar and sheltering me in your utmost pure and unconditional love. I want to thank my dad Amit for being my confidante, spiritual guide, music guru, and intellectual critic. From Krishna and Kabir to Nanak and Mahavir, from Gurbani and Qawwali to Prabhatiya and Abhang- thank you pappa for bringing so many colors to my life! This is what gave me light on dark days. Thank you for being the man of the greatest character, morals, and integrity. Of all things in my life, I am most proud to be born your daughter. I am grateful to my brother Yug, the most spiritual soul in this family, for radiating exceptional divinity and positivity in our lives just by being himself. Thanks Yugi for looking after our family when I was away all these years. Not to forget- many thanks to you and dad, my in-house comedians, for the regular dose of slapstick humor that always makes me laugh.
And last but not least, thank you my lovely little sisters Bhakti and Shraddha for singing and dancing with me and transporting me to a completely different world of fantasy. I miss you both so much! I owe my deepest love and gratitude to my most selfless and caring husband Harsh for being the wind beneath my wings. What would I do without you? Thank you for always putting my needs above yours and earnestly looking after every tiny aspect of my life. Thank you for always bearing with me and never complaining. I want to thank my in-laws Rajeshbhai and Chandrikaben for being my devout supporters and gossip partners. Thank you both for putting all your work aside and driving me around to various field sites and everywhere else I wanted.

I also want to thank and acknowledge the remaining members of my large but very close family whose contribution is immeasurable- Zaver dada, Rama dadi, Bhavna fui, Ambica nani, Hitesh pappa, Indu mammi, Ashish, Luv, Alpa mami and Akshu- thank you all for spoiling me with your gifts and unconditional love all these years. I am extremely sad that my grandfather Zaverbhai, the most towering personality I have ever seen, who often declared his pride in my achievements, left this world some months ago without witnessing this accomplishment. This is an irreversible loss for me that may never heal. I miss you dada! I am indebted to you and dadi for setting the best examples for me and giving me a privileged childhood with the fondest memories.

Lastly, I want to thank and bow my head to the most important of all, whose mention itself brings tears to my eyes- my spiritual masters, my Gurudev- my Maa and Bapu. Using words to convey my love for you is the shallowest thing I can do. My hands shiver in finding means to express my gratitude for you. Thank you for everything! I dedicate my thesis and my everything to you!

My Gurudev, flowers of devotion are offered at your feet; whatever is here is your own gift, and that very gift is dedicated back to you; this mind of mine, this body of mine, my every particle is dedicated to you.
- Krishna Das

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF ABBREVIATIONS
CHAPTER 1: INTRODUCTION
1.1 Research Questions
1.2 Study Contribution
1.3 Dissertation Outline
CHAPTER 2: THE TEST-BASED ACCOUNTABILITY MODEL
2.1 Test-Based Accountability (TBA)
2.2 TBA Pioneering Countries
2.3 Global Adaptors of TBA
2.4 Emerging Consensus on Effects of TBA
2.5 Conclusion
CHAPTER 3: BACKGROUND ON INDIA
3.1 General Country Information
3.2 Federal System of Education
3.3 Centralized Educational Governance: Schools with limited power at the bottom of a vast bureaucracy
3.4 Capacity Gap Between Top and Lower Levels of Bureaucracy- The "Flailing State"
CHAPTER 4: DATA AND METHODS
4.1 Background on Study Conceptualization and Case-Study Design
4.2 Journey of Conducting this Study
4.3 Methodological Scope and Limitations
CHAPTER 5: NATIONAL ACHIEVEMENT SURVEY AND INDIAN BUREAUCRATIC GOVERNANCE
5.1 Unified District Information System for Education (U-DISE)
5.2 National Achievement Survey (2017 and beyond)
5.3 Use of NAS (2017 and Beyond)
5.4 Conclusion
CHAPTER 6: DEMAND FOR NAS 2017 IN INDIAN EDUCATION POLICY
6.1 Phase 1 (1990 to early 2000s): Towards Universalization of Primary Education: Focus on Inputs and Schooling
6.2 Phase 2 (early 2000s to 2015): Universalization of Elementary and Secondary Education: Primarily focused on schooling and inputs with emerging attention to learning assessments
6.3 Data for Phase 2: Focus on Universalization of Elementary and Secondary Education and Launch of Pre-2017 NAS
6.4 Increasing concerns about the poor quality of education and institutionalization of NAS
6.5 Phase 3 (2015 to present): Greater Attention to Learning Outcomes and Commencement of Integrated Universalization
6.6 Conclusion: Clear Demand for National Assessment Data (NAS 2017) After Decades of Inputs and Schooling-Focused Policy Priorities
CHAPTER 7: NAS 2017 AS THE ONLY FEASIBLE INSTRUMENT
7.1 Central Level Capacity Issues
7.2 State Level Capacity Issues
7.3 Conclusion
CHAPTER 8: USING NAS FROM 2017 AS AN INSTRUMENT OF NATIONAL EDUCATIONAL GOVERNANCE IN INDIA: "NEW WINE IN AN OLD BOTTLE"?
8.1 Standard Bureaucratic Governance
8.2 Comparative Bureaucratic Governance
8.3 Conclusion
CHAPTER 9: CONCLUSION AND IMPLICATIONS
9.1 Implications for Literature
9.2 Implications for India
9.3 Implications for Developing Countries
9.4 Limitations of Study
REFERENCES
APPENDIX A – GUJARAT BACKGROUND
APPENDIX B – INTERVIEW GUIDE (SEMI-STRUCTURED)

LIST OF FIGURES

Figure 4.1: Data sources for Phase 1
Figure 4.2: Document review protocol
Figure 4.3: Example of code application
Figure 4.4: Example of coding scheme
Figure 4.5: Process followed in Phase 1
Figure 4.6: Example of selection of interviewees based on preliminary findings in Phase 1
Figure 4.7: List of respondents with key interview topics
Figure 4.8: Process followed in Phase 2
Figure 4.9: Example of descriptive and conceptual interview coding
Figure 4.10: Process followed in Phase 3
Figure 6.1: Timeline of key policy, capacity, and data developments
LIST OF ABBREVIATIONS

ABL Activity Based Learning Methodology
ACER Australian Council for Educational Research
AI Artificial Intelligence
APPA Australian Primary Principals Association
APPEP Andhra Pradesh Primary Education Programme
APST Australian Professional Standards for Teachers
ASER Annual Status of Education Report
AWPB Annual Work Plan and Budget
BAS Baseline Achievement Survey
BRC Block Resource Center
CABE Central Advisory Board of Education
CBSE Central Board of Secondary Education
CCE Continuous and Comprehensive Evaluation
COVID Coronavirus Disease 2019
CPR Centre for Policy Research
CRC Cluster Resource Center
CSS Centrally Sponsored Scheme
CWSN Children with Special Needs
DCF Data Capture Format
DFID Department for International Development
DIET District Institute for Education and Training
DPEP District Primary Education Program
DPO District Project Office
DSEL Department of School Education and Literacy
EMIS Educational Management and Information System
ENLACE Evaluación Nacional del Logro Académico en Centros Escolares
ERA Education Reform Act
ESD Educational Survey Division
GERM Global Education Reform Movement
GMR Global Monitoring Report
GOI Government of India
IANS Indo-Asian News Service
IAS Indian Administrative Services
ICT Information and Communication Technology
IRB Institutional Review Board
JRM Joint Review Mission
LEA Local Education Authority
LEP Limited English Proficient
MAS Midterm Achievement Survey
MDG Millennium Development Goal
MHRD Ministry of Human Resource Development
MIS Management and Information System
MLL Minimum Learning Level
NAEP National Assessment of Educational Progress
NAPLAN National Assessment Program – Literacy and Numeracy
NAS National Achievement Survey
NCERT National Council of Educational Research and Training
NCLB No Child Left Behind
NIEPA National Institute of Educational Planning and Administration
NPE National Policy on Education
OBC Other Backward Class
ODA Official Development Assistance
OECD The Organization for Economic Cooperation and Development
PAB Project Approval Board
PAISA Planning, Allocations and Expenditures, Institutions: Studies in Accountability
PGI Performance Grading Index
PIRLS Progress in International Reading Literacy Study
PISA Program for International Student Assessment
POA Programme of Action
PQ Pupil Questionnaire
PROBE Public Report on Basic Education
PTR Pupil Teacher Ratio
QSA Queensland Studies Authority
RISE Research on Improving Systems of Education
RMSA Rashtriya Madhyamik Shiksha Abhiyan
RQ1 Research Question 1
RQ2 Research Question 2
RTE Right to Education
SAT Standard Assessment Test
SC Scheduled Caste
SCERT State Council of Educational Research & Training
SDG Sustainable Development Goal
SDI Sustainable Development Index
SEMIS Secondary Education Management and Information System
SEP Preferential School Subsidy Law
SIDA Swedish International Development Agency
SIMCE The Sistema de Medición de la Calidad de la Educación
SIS State Implementation Society
SLAS State Level Achievement Survey
SLMT State Level Master Trainer
SMC School Management Committee
SOP Standard Operation Procedure
SPO State Project Office
SQ School Questionnaire
SRC School Report Card
SS Samagra Shiksha
SSA Sarva Shiksha Abhiyan
ST Scheduled Tribe
TBA Test Based Accountability
TIMSS Trends in International Mathematics and Science Study
TQ Teacher Questionnaire
UDISE Unified District Information System for Education
UEE Universalization of Elementary Education
UNICEF United Nations Children's Fund
UT Union Territory
VOI Vibes of India

CHAPTER 1: INTRODUCTION

Numerical data[1] are being acknowledged as a key instrument or technology for national governments to govern education in a globalized and neoliberal world. In increasingly neoliberal times, data are allowing governments to introduce new processes in educational governance by involving different stakeholders without relinquishing government control (Ozga, 2011). In these new governance forms, data act as a powerful tool for social regulation by substantially reconstituting the lives of their subjects (see Lawn, 2013; Hardy, 2015; Williamson, 2014, 2016, 2017; Lewis and Hardy, 2017, etc.). Data, especially in the form of comparisons, are helpful in legitimizing political actions by creating an illusion of urgency (Nóvoa and Yariv-Mashal, 2003, p. 427).

[1] Please note that in this dissertation, wherever I use the term data, it refers to numerical data. I consider numerical data in plain figures as well as its various representations such as graphs, metrics, indices, charts, etc.

The recognition of data as an indispensable, growing, and pressing instrument for national educational governance has predominantly emerged from Anglo-American countries due to the adoption of test-based accountability (TBA) policies. TBA is an arrangement for holding public schools responsible and accountable based on their performance in standardized tests. Schools are made liable to perform through policy reforms that grant incentives, rewards, remedial measures, and/or sanctions to schools based on their test results. Apart from this direct form of accountability, TBA also creates indirect forms of accountability and pressure through the publication of school performance data, which builds scrutiny from the wider public and alters school competition and choice dynamics. Studies on the effects of TBA on schools, teachers, parents, markets, social equity, local politics, etc. across these countries have revealed some common findings: for example, alteration of school competition and choice dynamics, immense pressure on teachers and students, narrowing of classroom instruction, schooling equated with test performance, the emergence of new ways to game the system, and increases in educational inequality (e.g., Figlio and Lucas, 2004; Klenowski, 2010; Hutchings, 2015; Getzler and Figlio, 2002; Peters and Oliver, 2009, etc.). These effects make (test-based accountability) data a powerful means of national educational governance in these countries.

The literature on data as an instrument of national educational governance is dominated by the use of data for TBA models and policies, which largely reflect the Anglo-American experience. There is a need to explore diverse ways in which countries are designing, developing, and employing data as an instrument of national governance based on their requirements and context. This is particularly important because current literature on this topic problematically assumes the global spread of test-based accountability (see Diaz Rios, 2020; Takayama and Lingard, 2019), an approach that even today (almost 30 years after it began in the US and UK) has not been adopted by most countries around the world.
My dissertation contributes to filling this gap in the literature with an instrumental case study of the National Achievement Survey (NAS) in India, which I argue has been used since 2017 by the central government in India as a tool for continuing, promoting, and supporting existing bureaucratic governance practices (standard bureaucratic governance) and for softly steering state governments' actions through the power of cross-state comparisons (comparative bureaucratic governance).

Since 2017, NAS has been a nationwide, standardized, district-level sample survey measuring learning competencies in grades 3, 5, 8, and 10. The data are generally collected every 3-4 years. NAS explicitly does not gather school-level data and is not designed for school-level governance or accountability. Rather, NAS is designed to support the "higher levels" of the educational bureaucracy at the central, state, and district levels. At these levels, NAS's standardized assessment data are crucial as they form the basis of practically all major educational decisions, including educational planning, policymaking, curriculum design, pedagogical development, and teacher training. In this way, NAS is designed to shape and inform standard bureaucratic governance practices. Secondly, the central government has prepared an innovative educational index called the Performance Grading Index (PGI) to grade and compare the performance of states and districts in a PISA-like manner by using NAS and other types of input data. In this way, NAS is used for soft comparative governance purposes.

The NAS approach from 2017 is different from the TBA model that is more common in the West. It is not designed as an alternative to TBA in India but rather aims to serve different purposes. However, one interesting aspect to note about NAS from 2017 is that it targets many diverse users, from the central government to school teachers, for the direct and core governance processes described above, despite being a district-level sample survey conducted every 3-4 years (i.e., the data represent what is happening at the level of districts, which are large administrative units in India, yet they are meant to be used by a classroom teacher in one of the thousands or more schools in the district to guide their teaching-learning practices). In TBA countries, there is greater reliance on annual standardized national tests or census assessments/surveys (i.e., data that provide information at a specific student, classroom, or school level, and thus data that are much closer to the context the teacher is teaching in) for such core governance purposes.

Having established the unique NAS model from 2017 and its importance in Indian educational governance, I investigate the reasons that explain this model and its central role in educational governance. Particularly, I explain this from three overlapping yet distinct perspectives: India's policy priorities, India's current technical capacity for data collection (or lack thereof) at the national and state levels, and the Indian organizational and bureaucratic perspective. My findings are derived from analyses of various documents (e.g., statistical reports, government websites, policy frameworks, media reports, academic literature, etc.) and interviews with officials in charge of India's key education data initiatives, including NAS.

1.1 Research Questions

Below I provide the specific research questions I will answer in this study:

1) What is NAS, and how has NAS been used for national educational governance in India since 2017?
2) What policy, technical capacity, and organizational factors explain NAS's central role in educational governance in India from 2017?

1.2 Study Contribution

The primary contribution of this study is in showing the dynamic and context-dependent nature of data as an instrument of national educational governance, thereby revealing and questioning critical assumptions that have not been carefully evaluated in the literature. For example, while the literature assumes that relying on instruments like data is an indication of countries' transitioning to post-bureaucratic governance (see Maroy 2008; 2009; 2012), this case study of NAS in India shows that this may not necessarily be the case. Rather, reliance on data as a governance instrument is quite possible without significantly transforming existing bureaucratic structures and processes.

Without studies like this, it becomes easy to obscure the fact that approaches other than test-based accountability are becoming important to educational governance in other parts of the world. This could constrain our ability to think about data in different ways, both as a social construct and in terms of its effects. It is necessary to ask different sets of questions that are perhaps more relevant to other national and regional contexts (Takayama and Lingard, 2019, p. 451) and to acknowledge that data can take different meanings in different contexts. Studies like this dissertation are helpful "to prevent understanding of distinctive Anglo-American policy problematics from passing off as 'universally relevant' to the rest of the world" (Takayama and Lingard, 2019, p. 451).

There have been some prior studies that have tried to question the assumed universality of test-based accountability models (e.g., Diaz Rios, 2020; Takayama and Lingard, 2019; Benveniste, 2002, etc.). However, these studies are about countries that collected test data similar to Anglo-American countries, i.e., standardized school-level test data, but did not pursue TBA due to factors such as a lack of political or socio-cultural compatibility. My study goes beyond this binary of TBA vs. non-TBA and discusses a newer dimension and scope of data for national governance.

This study also has strong relevance for low- and middle-income countries, or countries that have adopted approaches other than TBA due to lack of capacity. Since the practice of standardized testing or assessments is rapidly traveling across the globe, this case study demonstrates how a developing country with limited capacity and a range of other constraints tries to respond or react to this global trend, what technical complexities arise in this process, and ultimately in what ways this global reform becomes localized or recontextualized, exhibiting a blend of global and local characteristics. The Indian context entails many characteristics, such as centralized governance, poor capacity at lower levels of the bureaucracy, and limited school autonomy, that shape India's adoption of a data instrument like NAS 2017 for national educational governance. Such contextual factors and related discussions may be relevant to many low- and middle-income countries but are difficult to find in the current Anglo-American-focused literature.
1.3 Dissertation Outline

The remaining document is organized as follows. Chapter 2 describes test-based accountability models from key countries represented in the literature with a brief background on their origins, explains how they indicate a transition to post-bureaucratic governance, and summarizes some common findings on their effects noted across these countries. This discussion helps to frame the need for alternate ways to conceptualize data as an instrument of governance and provides the primary justification for this study. Chapter 3 provides a background on India that is relevant to my investigation. Chapter 4 describes the data and methods adopted across different phases of this study to understand and explain India's NAS approach. Chapter 5 answers RQ1 and explains how, since 2017, NAS has been used for standard and comparative bureaucratic governance in India. Chapters 6, 7, and 8 answer RQ2 about why NAS has gained a central role in Indian educational governance from 2017. Chapter 6 explains India's policy context necessitating the demand for data like NAS 2017. Chapter 7 explains the technical capacity challenges for collecting data like NAS 2017 in India and shows how NAS 2017 was the only feasible instrument to govern India nationwide. Chapter 8 explains the Indian education bureaucracy's organizational context that makes this new NAS user-friendly for standard and comparative bureaucratic governance. Chapter 9 concludes the study with a discussion of the potential implications of this work for the literature, India, and other developing countries.

CHAPTER 2: THE TEST-BASED ACCOUNTABILITY MODEL

Since the purpose of my dissertation is to expand the understanding of data as an instrument of national educational governance beyond test-based accountability (TBA) models, I use this chapter to explain what TBA is, how it emerged, how it has spread around the world, and what its common effects are. First, I briefly describe the key features of TBA, its roots in the neoliberal policies of the US and UK, and its role in shaping the post-bureaucratic governance approach. Second, I provide more specific details about TBA in the data pioneering countries of the United States and the United Kingdom and its spread to other countries like Australia. Third, I discuss the common effects of TBA found across different countries. I recognize that there are vast differences in the education policies and contexts of all countries, and that each country adopts its own unique TBA approach in line with its context. However, due to certain common features in the models across these countries, there has been increasing consensus around the effects of TBA.

2.1 Test-Based Accountability (TBA)

As described in Chapter 1, TBA is an approach to transforming national educational governance by using standardized testing to generate direct and indirect forms of school accountability. Direct accountability, which is also referred to as 'consequential accountability', is the use of test data in a manner where schools and/or school-level staff are held accountable through rewards, incentives, remedial measures, and/or sanctions attached to the data. Indirect accountability, also called 'market-based accountability', is where school performance and other types of data are made openly available, either online or offline, to steer accountability from wider public stakeholders such as parents.
TBA is recognized as a global phenomenon with origins in the neoliberal policies of the US and UK, and it has traveled across the world creating certain common effects. As the upcoming sections will discuss, the origins of TBA lie in the emergence of the New Right movement promoting neoliberal economies in the United States and the United Kingdom in the 1980s and 1990s. TBA was employed by the federal/central governments in these countries as an instrument for greater regulatory oversight and pressure on schools to perform. The model has been adopted across countries in different ways and with different intensities. In some countries there is a greater emphasis on indirect accountability over direct accountability, and vice-versa. In countries like the United States, the emphasis on direct and indirect accountability is equally strong. Despite differences in the education policy and socio-economic contexts of these countries, due to fundamental similarities in their TBA models (i.e., the attachment of school performance data to rewards/incentives/sanctions/remedial measures, and the open sharing of school performance data for indirect accountability), major similarities have been observed across these countries regarding the effects of TBA, which I will discuss later in this chapter.

2.2 TBA Pioneering Countries

The United States and the United Kingdom are generally considered the pioneer countries in the global trend of TBA. Guthrie and Pierce (1990) argue that the neoliberal international political economy, and its influence on national economic policies and politics, has been the major driver behind this. Even though the United States has emphasized developing standards and England has focused on improving curriculum and inspections as their main educational reform strategies, their TBA approaches share similarities, which will be discussed ahead.

The adoption of neoliberal and post-welfare state policies to improve the post-war economy is one of the important reasons for the increasing focus on testing in these countries. Global events such as the Cold War, changing technology, and later international competition from Asian countries and markets forced these countries to adopt policies to maintain their competitiveness and economic edge over the world (Guthrie and Pierce, 1990; Hursh, 2007). This resulted in the adoption of neoliberal and post-welfare state policies, a substantial shift from the earlier Keynesian welfare approach. In the Keynesian approach, the government shared some responsibility for safeguarding the conditions that could enable people to flourish. But under neoliberalism, the government was seen as an interventionist force that threatened individual liberty through taxes and other regulations (Hursh, 2007). The focus shifted to promoting personal responsibility through individual choice within markets, and the individual was seen as the maker of his/her own destiny (Hursh, 2007). Greater emphasis was put on 'the deregulation of the economy, trade liberalization, the dismantling of the public sector [such as education, health, and social welfare], and the predominance of the financial sector of the economy over production and commerce' (Vilas, 1996). Given that neoliberalism was seen as the solution to improving the economy (Guthrie and Pierce, 1990), much of the blame for economic issues in the US and England was put on their respective education systems, and schools were seen as targets in need of reform.
Under the leadership of President Reagan in the United States and Prime Minister Thatcher in the United Kingdom, there were greater calls for improved schooling as part of the New Right movement (Carl, 1994). These calls were a response to a major ideological shift in public consciousness driven by increasing dissatisfaction with and distrust of the public management of education (Dorn, 2007). There was greater anxiety around the ability of schools in the US and UK to equip students with the skills they need to excel in a globalized economy (Fitz, 2003). Under the neoliberal influence, public schools were seen as rigid and difficult to transform due to their bureaucratic nature (Carl, 1994). Therefore, opening private schools and creating markets and competition between public and private schools were seen as important solutions for improving education quality (Hursh, 2007). The availability of international comparative assessments played a major role in fueling this sentiment (Ravitch, 2002). These assessments started in the 1960s with the First International Mathematics Study in 1964, which saw participation from the United States, Japan, and countries in Europe. In the upcoming sections, I explain in greater detail how both the US and the UK went about adopting TBA.

Maroy (2008) argues that the adoption of TBA is one indication of a country's transition to a post-bureaucratic governance approach, which is significantly different from the bureaucratic governance approach. This description seems apt for these countries. Below I summarize the key distinctions he makes between both approaches.

Bureaucratic governance is a system where the state takes it upon itself to implement educational services. This was seen as necessary to spread mass education, or equal access to education for all citizens in the country. States also adopted this system in mimicry of practices found in other countries. Having standard norms and practices to govern education was seen as crucial for economic growth and social mobility. Under bureaucratic governance, there are standardized and identical norms for all components of the system. There is a clear division of work for everyone through written and precise rules. A hierarchy is set up by the state, through which it controls all agents and ensures their compliance with rules and procedures. The entire mode of governance is based on conformity to general rules. Teachers have significant autonomy and control in classrooms, but in most other areas the state defines or dictates practices. There is joint regulation by state administrative bodies and teachers, with parents or other actors having practically no say in educational governance.

Post-bureaucratic governance has two key traits. Firstly, it promotes school competition to let the market govern educational affairs without removing the influence of the state. The state recognizes that the bureaucratic nature of the education system makes it inefficient and that competitive pressure from users is therefore required to improve it. The state does not disappear in this arrangement. It defines the key objectives of the system but leaves it to the schools or local entities to work out the means of carrying out these objectives. It also provides free choice of schools to users by providing public financing to schools based on enrollments. Schools compete to carry out the work of education while adhering to centrally defined frameworks and objectives.
Secondly, the post-bureaucratic governance approach entails an "evaluation culture" or "governance by results," where an external school performance evaluation is set up and incentives and/or sanctions are introduced to improve performance as defined by the state.

2.2.1 United States

In the United States, neoliberal reforms treated testing or standardized assessments as instruments to solve the problem of poor educational quality by associating them with school accountability reforms introduced by federal acts such as NCLB. During the 1970s and 1980s, the racial achievement gap in standardized assessments was perceived as a public crisis (Tyack & Cuban, 1995). The 1983 report A Nation at Risk highlighted these concerns and led to shifting the blame for poor performance onto schools (Lee, 2008). In order to "fix" the schools by introducing greater school accountability measures, states started developing accountability systems. Testing was an integral part of these systems because it allowed competition between schools to be created by comparing their assessment results. It fit directly with the New Right movement. The pressure from parental decision-making and school choice processes was believed to transform educational quality across the board. Texas was the first state to start testing for this purpose, in the 1980s (Yarema, 2010), and the effort to tie student test scores to school performance was endorsed by President Bush and state governors in 1989. Standards were seen as necessary to ensure equity (Stotsky, 2000) and to model school tests upon. All these ideas and developments culminated in a concrete policy called No Child Left Behind (NCLB) in 2001, which became the first national policy framework where standards, assessments, and school accountability were linked with each other (Datnow & Park, 2012). The current and more refined TBA model of the United States has its origins in this NCLB reform.

NCLB created an interesting scenario in the United States where, although the federal government reduced its social welfare orientation, it increased its regulatory oversight over education through the power of funding. In the United States, education is primarily a state and local responsibility. The states bear the primary responsibility for the maintenance and operation of public schools and are also involved in the selection and regulation of curriculum, teaching methods, and instructional materials. Under the state governments, the school districts or local education authorities are responsible for meeting the district's education objectives. They prepare plans and execute various tasks to achieve these objectives. Apart from this, they are generally involved in selecting curriculum materials, recruiting and retaining school staff, monitoring finances, ensuring compliance with various rules and standards, maintaining school buildings, etc. Historically, the federal government has played a minimal role in educational governance. However, under the influence of neoliberalism over the years, the federal government's oversight of K-12 education has increased through conditional funding provision to schools (McGuinn, 2006; Baltodano, 2012, etc.).

Direct/consequential school accountability was enforced in the United States through the requirements of federal acts such as No Child Left Behind (2001), Race to the Top (2009), and the Every Student Succeeds Act (2015). The states in the U.S. hold schools accountable for the achievement of all groups of students.
The states assign ratings to schools largely based on student performance in state assessments and on graduation rates, for all students and within different student subgroups (The Education Trust, 2014). If a school is consistently underperforming for any group of students, the ratings reflect that. Based on the school ratings, three types of schools are identified: schools that are very low-performing (in the bottom 5 percent) for all students or have low graduation rates; schools that are consistently underperforming for any group of students; and schools that are very low-performing (in the bottom 5 percent) for one or more groups of students (The Education Trust, 2014). Each of these schools is subject to different improvement actions of different intensities. In the case of consistently poor performance, these could also include measures like school takeover and closure (The Education Trust, 2014).

Similarly, different measures have been taken by state governments to foster indirect school accountability. For example, school report cards are prepared and shared on government websites, giving detailed information on school performance across a range of indicators for all students and different subgroups (Figlio & Lucas, 2004). School ratings are also made public with the report cards in compelling visual graphics. School performance data and ratings can be easily compared with other schools in the state as well as with the state average. Nearly all states prepare and disseminate this school information to inform parents and the community about school quality (Figlio & Lucas, 2004). For example, in the state of California, various types of school data, including scores from standardized assessments, are made publicly available on the government website (see California Department of Education, 2022). The performance of a school in English and Mathematics is compared with state average scores and clearly indicated on a visually attractive scale. An equity report is provided about the number of student groups that fall under different performance categories. The performance of English learners is also provided across different scales. A lot more information is provided for each school on the website, such as conditions of learning, teacher preparation and placement, class assignments, availability of resources, school facilities, planned improvements, pupil outcomes across groups, parental involvement, pupil engagement, school climate, etc.
This was further strengthened by the Parent Charter in 1991 and establishment of Office of Standards in Education (Ofsted) in 1992. During the Blair government in early 2000s too attempts were made to promote school competition, and strict measures were laid out for schools that failed inspection (DiGaetano, 2015). In order to efficiently govern schools with these neoliberal reforms, testing and school inspection processes were centralized and strengthened to foster the TBA approach. The education sector in the UK was significantly decentralized till 1988 with Local Education Authorities (LEAs) shouldering the maximum responsibility for education delivery including curriculum, modes of instruction, planning, recruitment, training etc. However, from 1988 onwards with the national Education Reform Act (ERA) a significant amount of centralization has taken place in the education sector with the central state determining many key aspects of education such as 11 curriculum, assessments, inspection etc. Currently students are tested in Standard Assessment Test (SAT) to measure their achievements compared to nationally set learning targets. There is a heavy emphasis on improving direct and indirect school accountability through testing and inspection data. Schools and their governing bodies are held accountable to the local authorities and the national inspection agency Ofsted for their national performance on these tests and other important parameters such as spending of school resources etc. A negative school inspection by Ofsted can result in serious consequences for the viability of a school. Consistently low performing schools can face consequences such as warning notices, replacement of governing body, and public naming and shaming of schools (Acquah, 2013). Different types of school information are published by the national Department of Education (2022a) such as performance in national tests, reports of school inspection, graduation rates etc. School league tables are prepared in England which rank schools in the country across different measures. These rankings play an important role in choosing schools for the parents, creating intense competition among schools. 2.3 Global Adaptors of TBA The TBA model from US and UK has travelled to other countries as a powerful trend. In recent years some measures have been taken in the US and UK to mellow down performance based school accountability movement. However, data based school competitions and evaluations still exist in both countries with significant implications for schools (Smith, 2014). Irrespective of the relative slowdown in US and UK, there has been a major global movement towards TBA models, involving testing for direct accountability and open school data for indirect accountability, with many countries adopting these practices from US and UK (Hanushek & Raymond, 2004, p.407). Sahlberg (2010) calls this the Global Education Reform Movement (GERM) [as illustrated in Butland (2008), Lemke et al. (2004), and Figlio and Loeb (2011)]. Volante (2007) calls this one of the most powerful trends in education policy in last 20 years. Despite many countries adopting TBA, Australia is the most widely documented adaptor of TBA, inspired by the neoliberal policy frameworks of the US and UK. Even though there are some case studies from other countries like Chile, Mexico, South Korea, etc., Australia is often included in the main discussions around this topic along with scholars from the US and the UK. 
These three countries dominate the literature on TBA, and case studies from other countries remain on the margins. Australia caught up with this trend slightly later in 2007 with the Rudd 12 administration’s initiation of NAPLAN (Australian Curriculum, National Assessment Program to assess young people’s literacy and numeracy achievements) and My School Website, borrowing from the United States and United Kingdom models (Lingard, 2010). Lingard (2010) describes that in Australia the emergence of the new school accountability regime is reflective of neoliberal influences on national educational policymaking, and international policy borrowing occurring due to a contemporary policyscape facilitated by flows of politicians and policymakers. Similar to the pioneering countries, the neoliberal test-based accountability movement in Australia has aligned with greater central oversight and regularization in education. Australia is a federation where state governments maintain responsibility for schooling. However, due to the neoliberal wave, in the past decade, there have been unprecedented attempts to align policies, processes, and goals at the national level (Savage and O’Connor, 2019). There have been many national reform initiatives such as NAPLAN, Australian Professional Standards for Teachers (APST), etc. to align subnational schooling systems, improve outcomes, ensure greater equity in the provision, and tackle issues such as duplication or inconsistencies between jurisdictions (Savage, 2016, 2017). Australia’s TBA approach also has both direct and indirect school accountability measures. The results of high stakes test NAPLAN (Lingard and Sellar, 2013) are published by the Australian government’s website “My School”, and in newspapers. Data are available to compare schools across the country. Federal funding to schools is linked to school performance in NAPLAN. Data shared on the MySchool website (see Department of Education, 2022b) includes the performance of a school in different skill areas for two different years to track school performance across years. It compares the performance of schools with students of a similar background as well as all Australian students. It is interesting to note that earlier this website allowed to directly compare schools against each other as seen in examples from the United States and the United Kingdom. But this practice has been discontinued from 2020 onwards and now the focus is on how schools are progressing in relation to similar students across Australia. It is also possible to track the school’s performance over the years. The results of NAPLAN are used to inform policy development, resource allocation, curriculum planning, and intervention by governments, as well as being measured by education authorities, schools, and the community. Chile is another noted example of TBA, with significant emphasis on school choice (Ahumada, Montecinos, and González, 2012). Since the 1980s, schools’ performance is assessed 13 by the central government through SIMCE, a census-based standardized test that annually appraises students’ learning in all types of schools according to national curriculum standards (Meckes and Carrasco, 2010). There is a preferential school subsidy (SEP) or adjusted voucher for students from disadvantaged families attending state-funded schools. In order to receive this funding schools must design a school improvement plan and be accountable for performance in SIMCE. 
Based on test performance and other criteria, schools are ranked in Chile, and this exercise shapes school choice decisions in the country (Mizala et al, 2007; Hofflinger et al, 2020; etc.). Similarly, Mexico started the high-stakes ENLACE test in 2006, published results per school, and promoted economic incentives for teachers (Rivas and Sanchez, 2022). One can find different versions of TBA in many other countries as well, such as Hungary, Brazil, and South Korea (see Smith, 2014). As this section shows, TBA is increasingly a global phenomenon, as several countries are adopting this approach with certain similarities despite differences in the education policy and socio-economic contexts of the countries. These similarities include, for example, the attachment of school performance data to rewards/incentives/sanctions/remedial measures, and the open sharing of school performance data for indirect accountability. Due to these similarities, significant overlaps have been observed across these countries in the effects of TBA. In the next section, I describe those common effects. 2.4 Emerging Consensus on Effects of TBA After briefly describing what TBA is, how it started and spread, and the different models in different countries, I now provide a summary of some common effects of TBA found in these countries. As described earlier, the consensus is largely emerging from the data pioneering countries and Australia. Scholars in these countries are mainly in dialogue with each other in elevating the research on this phenomenon, a concentration also acknowledged by Takayama and Lingard (2019), who describe this literature as being highly Anglo-American-centric. Therefore, in this section, most of the findings come from scholars in the data pioneering countries and Australia. 2.4.1 TBA creates networked governance and impacts school choice dynamics TBA is seen to transform governance and power structures in these countries. It requires as well as reinforces participation from new actors and stakeholders in education governance (Jarke and Breiter, 2019). Ozga (2009) observes that the use of data for educational governance through inspections in England opens the way for more horizontal and networked forms of governance instead of hierarchical or vertical forms of governance. However, doing so does not eliminate central control over education. Rather, data actually make the education sector much more centralized while giving the illusion of governing from a distance (Ozga, 2009). She observes that data allow the state to work through new forms and processes, without changing its authority (Dale, 1999). This observation has been seconded by scholars in other countries such as the United States, Australia, Canada, and Norway. The increasing amount of performance and evaluation-related data publicized regularly has increased parental scrutiny of the schooling process. Observing this phenomenon, Jarke and Breiter (2019) argue (referring to Anglo-American and some European countries) that schools and classrooms no longer remain confined physical spaces but have transformed into a 'distributed datascape'. They argue that earlier, parents and other entities were able to participate in children's school activities only in a limited manner. But monitoring of schools through data, by the government as well as the wider public, has led to greater scrutiny and involvement of different actors in the work of schools and changed the boundaries of schools as learning places.
Gorur (2015) argues that the sharing of school performance data on the My School website in Australia has created information-empowered citizens who participate in debates and interventions in new ways. Manolev et al (2019) show that regular updates on student behavior through a school social platform called ClassDojo foster strong parental engagement in Australia. TBA has also created new external actors to govern schools. Anagnostopoulos, Rutledge, and Jacobsen (2013) describe that education in the United States is governed not only by formal education departments but also by a range of new actors such as software companies, consulting firms, and research organizations, due to the emergence of data-based accountability policies. Parcerisa et al (2020) observe an intense proliferation of commercial school improvement services in Chile that have created a separate economic sector employing numerous education experts and transformed the process of educational development. Rivas and Sanchez (2022) also observe similar findings in other Latin American countries. TBA's practice of publicizing school performance data has impacted school choice dynamics in several countries. In the case of England, Roberts-Holmes & Bradbury (2016) argue that instead of fostering collegiality between school communities, the sharing of comparative open school data has created competition between 'statistical neighbors', through various data packs and dashboards. Hastings and Weinstein (2008) find that information on school test scores led significantly more parents to choose a better performing school in a natural experiment in North Carolina, United States. Friesen, Javdani, and Woodcock (2009) show that parents in British Columbia, Canada revise their beliefs when information about school quality is provided through report cards and engage in school-changing behavior. There are also indirect ways in which the impact on school choice dynamics has been observed. For example, Black (1999) finds that house prices in districts with better test score schools are higher in Massachusetts, United States. Figlio and Lucas (2004) find that school grades have an impact on house prices and residential location decisions in Florida, United States. Koning and van der Wiel (2013) find that the publication of school rankings in the Netherlands has an effect on school choice, as a positive school-quality score increases the number of students choosing a school. Nunes et al (2015) find that the publication of rankings has clear effects upon families and schools in Portugal, as the number of students attending poorly rated schools decreases, increasing the probability of school closures. In the United States, there is also strong evidence that performance information disseminated via school "report cards" directly shapes voter perceptions about the quality of local schools (e.g., Chingos, Henderson, & West, 2012; Jacobsen, Saultz, & Snyder, 2013). These studies show a considerable effect of sharing open school data on the market. It is important to note that the literature recognizes that this effect on school choice dynamics is not absolute but relative to some factors, or its intensity is dependent upon various conditions. For example, Kane et al (2003) find that real estate prices do not respond to yearly fluctuations in given measures of school quality in North Carolina, United States.
Teske and Schneider (2001) argue that parents from low-income or marginalized communities in the United States may not be engaged with school data in the desired manner compared to better educated and more involved parents. 2.4.2 TBA intensifies and restricts classroom instruction TBA is seen to narrow teachers' instruction and pedagogy in classrooms. In England, studies find that teachers narrow their pedagogy to ensure that children succeed with testing expectations (e.g. Roberts-Holmes & Bradbury, 2016). Teaching is restricted to adjust to the narrow interpretations of literacy and numeracy in the tests. This is detrimental to the process of building relationships with children (Bradbury, 2019). Reay and Wiliam (1999) note a shift from group work and enquiry-based learning in classrooms to increasingly competitive and individualistic attitudes towards learning. In the United States and Australia, too, teachers devote far greater attention to content included in the tests and deemphasize content that is not tested (e.g. Koretz & Hamilton, 2006; Hamilton et al, 2012; Polesel et al, 2012; etc.). For instance, Taylor et al (2001) find that in Colorado, United States, in order to transmit content relevant to tests, there remains a limited range of activities for students in classrooms and few opportunities to experience excursions and field trips. Rivas and Sanchez (2022) also confirm instances of teaching to the test in various Latin American countries. Teachers become less creative and focus more on cramming than instruction (Cunningham and Sanzo, 2002). Klenowski (2010, 2011) argues that culturally responsive teaching techniques are significantly reduced in Australian classrooms, along with trust in teacher professionalism. Hofflinger & von Hippel (2018) observe similar practices of narrow teaching experiences in Chile, and further coaching by schools via outsourced services to prepare children for tests. Hargreaves (1994) observes that in countries like the United States, England, Canada, Australia, and New Zealand, teachers are becoming deskilled and turning into technicians who are mandated to deliver a prescribed and narrow product. Observing these trends across countries, scholars note that the foundational notions of schooling and teaching have transformed as a result of TBA. Lingard et al (2013, p. 541) explain that TBA acts as a meta-policy that pushes traditional pedagogies 'from a distance'. Due to excessive testing, education systems are failing to induce deeper pedagogical change or transform the ways teachers deliver instruction (Diamond, 2007; Firestone, Mayrowetz & Fairman, 1998). The work of teaching, learning, and schooling has become firmly located within the 'meta-narrative of schooling as performance' (Ball et al, 2012, p.515). The idea of school improvement has become 'highly reductionist' through hierarchical rankings of schools, serving as rewards/punishments. 2.4.3 TBA creates pressure on students and teachers to perform TBA is noted for promoting reductionist ideas of learning and taking away the joy of learning from students. Williamson (2014, p.12) argues that it reshapes teachers and children 'into data that can be measured, compared, assessed and acted upon'. Children become 'miniature centers of calculation' (Williamson, 2014, p.12).
Hutchings (2015, p.1) demonstrates the feeling of being reduced to data pieces in England: "it is deeply saddening that some of the pupils interviewed feel reduced to a statistic – jumping through hoops for the benefit of others, and with no space to discover the creative and positive learning that school should provide". An environment is created where children could be reduced to "schools' statistical 'raw materials' that are mined and exploited for their maximum productivity gains" (Roberts-Holmes & Bradbury, 2016). Lobascher (2011, p. 15-16) also cites a range of studies to argue that students' intrinsic motivation in Australia is displaced, from the love of learning to extrinsic rewards and threats that completely diminish the enjoyment of the learning experience (e.g. Anagnostopoulos 2003; Au 2007; Jones 2007; QSA 2009; Williams 2009). TBA also creates severe pressure and anxiety on students to perform in tests. Cohen (1989) argues that children start labelling themselves as failures at very early stages of their learning journey. Reay and Wiliam (1999) demonstrate how test results in England create immense anxiety and negative self-perceptions even among higher achieving students. Children express discomfort about the impact of test results on their future life prospects, and also exhibit jealousy and aggression towards higher achieving students (Reay and Wiliam, 1999). Paris and McEvoy (2000) describe instances of children "freezing" with fear while taking tests, and experiencing fear and anxiety, in Texas, United States. Brown et al (2004) cite instances of students expressing feelings of incompetence, labelling by teachers, problem behaviors, suspensions, etc. in North Carolina, United States. Many other studies also cite instances of emotional, psychological, and physical stress among students in the United States (e.g. Madaus et al, 2009; Flores and Clark, 2003; etc.). TBA overwhelms and overburdens teachers by severely controlling their routines with the power of numbers. Teachers are evaluated based on assessment measures and how much value they have added (Stevenson, 2017). A notion is created that individual contribution is what counts (Roberts-Holmes, 2015). In England, teachers feel constrained and exhausted by demands for the production and analysis of data (Bradbury, 2012). They are overwhelmed and 'burdened with the responsibility to perform' (Ball & Olmedo, 2013, p. 88) due to constant pressure. Ball et al (2012, p.523) argue that in such a high-stakes culture, data production, tracking, and mining become the 'new technical professionalism' driving teachers' lives. Ball (2003, p.216) describes that struggles are highly individualized, where teachers in England find their values challenged or displaced by the 'terrors of performativity'. Rivas and Sanchez (2022) also confirm the loss of teacher autonomy in complex ways in Chile, Mexico, Colombia, Brazil and Peru. Observing similar trends across countries, it is argued that a new form of control has emerged over teachers where the state creates an environment of "regulated self-regulation" (Jessop, 2002, p. 199; Fenwick et al., 2014). TBA has reconstituted, narrowed, and technicized the responsibilities of teachers. Teachers who aspire to survive or succeed in this new environment have to reconstitute themselves as "neoliberal professionals" (Ball, 2003, p. 217).
There is increasing restriction on teachers' capacity to exercise professional discretion, resulting in the weakening of teacher expertise, authority, and professionalism (Bradbury & Roberts-Holmes, 2017; Hardy, 2018; Perryman, 2009). There are increasing pressures on "datafied teachers" to rely on numerical data and evaluative tools to guide their pedagogical decisions and classroom practices (Holloway, 2019; Hardy, 2018; etc.). The quality of teachers becomes narrowly defined by numbers, where improving quality equates to increasing numbers instead of improving practice and cultivating collaborations (Perryman, 2009; Taubman, 2009). Lingard (2009, p.16) decries this "culture of performativity" undermining teachers' sense of professional worth and argues that it affects the "very souls of teachers". 2.4.4 TBA creates incentives to gamify the system To cope with the pressures of TBA, schools adopt strategies such as ability grouping to manipulate or gamify the system, demonstrate performance, and avoid facing negative consequences. In England, TBA facilitates the allocation of children into groups by providing evidence of their different 'abilities' (Bradbury, 2019). Teachers and school staff report increasing their focus on students with better potential to move from below to above the threshold of proficiency (Booher-Jennings, 2005; Hamilton et al., 2007; Pedulla et al., 2003). Getzler and Figlio (2002) and Cullen and Reback (2002) find, in the Florida and Texas states of the United States respectively, that schools classify more students as special needs or limited English proficient (LEP), and therefore remove them from taking tests. Instructional time is reallocated, with more attention to tested items over non-tested items (e.g. Koretz and Barron, 1998; Koretz and Hamilton, 2006; etc.). Hofflinger and von Hippel (2018) find that schools serving disadvantaged students in Chile tend to inflate their accountability ratings by having up to 30% of low-performing students miss their tests. The Australian Primary Principals Association (APPA, 2009) reported that some schools in Australia were required by their line managers to lift results by a certain percentage, and so schools devoted most of their resources to able students, leaving less able students less attended during the first five months of the year, until the completion of the tests. Apart from grouping, there are also other strategies such as the pre-determination of test results, tweaking the classroom population, etc., and in some cases unethical practices of cheating. A range of companies has emerged in the Australian market offering to test children and provide results prior to the NAPLAN test, undermining good teaching. Some schools have even encouraged some students to remain absent on test day. Several large surveys across countries indicate that teachers design their classroom presentations and instructional materials as per test formats, drill students on the same format of questions, and change the sequences in which they present topics to accommodate the testing schedule (Stecher, 2002). Sometimes there are also reports of explicit unethical practices to game the system, such as outright cheating (Hamilton et al, 2012). For example, teachers giving heavy-handed hints to students about correct answers during the tests, test-prep sessions featuring actual test items, administrators erasing incorrect responses on students' answer sheets and substituting them with correct responses, etc. (Popham, 2006).
2.4.5 TBA reinforces existing inequities TBA is found to reinforce existing inequities as it causes standardization of educational practices, and of students themselves, disregarding many differences in the needs, abilities, and achievements of different students (Peters and Oliver, 2009). This is especially true for children from minority communities and those with disabilities and special education needs (Peters and Oliver, 2009). Cunningham and Sanzo (2002) also echo this and point out the additional role played by the unequal support of families available to children from low socio-economic backgrounds. As test scores are routinely used to make important decisions about student placement, children from disadvantaged groups get placed into lower-track programs and classes, perpetuating class and racial inequalities (Froese-Germain, 2001). Results are also used to make decisions about grade promotion and retention, with less privileged students facing greater chances of being held back, with little improvement in their education (Froese-Germain, 2001). TBA also makes it hard to recruit staff in poorly performing schools (Ingersoll et al, 2016). The parental perception of schools is affected by their performance scores, resulting in parental choice or in most cases "white flight", with detrimental effects for low-SES schools working in disadvantaged communities (Ball, 2008; Ho, 2011; Davis et al, 2015; Knaus, 2007; Ryan, 2004; etc.). 2.5 Conclusion As this chapter shows, TBA approaches started in the US and UK due to specific neoliberal policy orientations and spread to different parts of the world, indicating a shift to post-bureaucratic educational governance and resulting in similar effects such as expanding educational governance, shaping school choice, narrowing instruction, putting pressure on teachers and students, creating incentives for system manipulation/gamification, and reinforcing existing inequities. Of course, their specific intensities and relative attributes might be different depending on the context. Despite this rich literature, many issues pertaining to data-driven national educational governance remain unexplored due to the dominance of studies on TBA models. The focus on countries that have pioneered and mimicked a particular form of test-based governance is inadequate to acknowledge the emergence of other forms of data-driven governance in education and study the implications of those emerging models. The primary contribution of my dissertation is to expand this discourse on data-driven national educational governance through a close examination of the Indian model using the National Achievement Survey (NAS) for bureaucratic governance. The next chapters provide a background discussion on India and the data and methods of this study. CHAPTER 3: BACKGROUND ON INDIA In this chapter, I provide some relevant background information on India, specifically the socio-economic characteristics of the country, its federal education system, centralized educational governance, and its "flailing" bureaucratic capacity. This helps to contextualize the findings in later chapters about why NAS is being used for standard and comparative bureaucratic governance in India, from policy, technical capacity, and organizational perspectives.
Standard bureaucratic governance refers to the use of NAS to continue, promote, and support established bureaucratic governance practices, and comparative bureaucratic governance refers to governance by cross-state comparisons to extend the central government's "soft power". These background details on India also underscore what makes it a unique case study country, compared to other countries represented in the literature. Some of these factors, such as centralized governance, are also relevant to many developing countries around the world. 3.1 General Country Information India is a large country with immense geographic and socio-cultural diversity, occupying the greater part of South Asia. With a population of approximately 1.21 billion, roughly one-sixth of the world's total, India is the world's second most populous country after China. As per Census 2011 data, it is comprised of 36 states2 and union territories3, 640 districts4, 7,933 towns, and more than 600,000 villages. Each state and union territory has one or more official languages, and according to the 2001 Census, the people of India speak 122 major languages and 1,599 other languages. India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Baha'i populations, and has the third-largest Muslim population, the largest for a non-Muslim-majority country. Within religions, the people of India are further divided into more than 3,000 castes and 25,000 subcastes. Many studies have shown that the caste system has created socio-economic stratification in the country and contributed to economic inequality. Politics in India are largely driven by caste and religious issues. Government policies and regulations are also influenced by this politics. Educational initiatives include special provisions and reservations for students from specific castes and minority groups. The socio-cultural and political diversity of this scale and proportion is unparalleled in the global south.
2 The states of India are bigger in population size than many countries of the world. For example, Gujarat is the 9th most populous state of India with a population of more than 60 million. This is bigger than the population of countries like South Africa, Kenya, Spain, Canada, and many others. It is also bigger than the population of California (39 million), which is the most populous state of the United States.
3 A union territory is a type of administrative division in India. Unlike the states of India, which have their own governments, union territories are federal territories governed directly by the Central Government of India.
4 India's districts are local administrative units inherited from the British Raj. They generally form the tier of local government immediately below that of India's subnational states and territories. The area of these districts ranges from 3.5 sq miles to 17,600 sq miles, depending upon the area of the state. The population of the districts ranges from 8,000 to 11 million.
3.2 Federal System of Education India has a federal government with three tiers: central, state, and local bodies. This structure is relevant to understanding the governance of education in India. a) Central Government: The central government in India is responsible for setting the general direction of education policy, providing guiding frameworks, laying down governing tenets and principles, granting technical support through apex-level bodies, and implementing centrally sponsored schemes (CSS).
b) State Government: The state governments are in charge of designing specific education-related legislation, policies, and regulations for implementation, within the framework provided by the central government. The responsibility of policy execution rests chiefly with state governments. States have the ultimate responsibility in terms of education planning and decision-making. Therefore, they prioritize interventions and allocate resources. c) Local Bodies: There are different types of local government bodies in India depending on the type of location. Please see IDR (2020) for more information. Rural Areas: There are three nested local bodies. At the apex is the district council (zilla parishad), which is made up of a cluster of block councils (IDR, 2020). These blocks are in turn made up of village councils (IDR, 2020). [*States with a population of less than two million may also choose to have a two-tiered structure, without the intermediate block-level institution (IDR, 2020)]. Urban Areas: There are three types of local bodies based on population: 1) municipal corporations for areas with a population of more than one million, 2) municipal councils/municipalities for areas with less than a million people, and 3) town councils for areas transitioning from rural to urban (IDR, 2020). Individual state governments are responsible for the functioning of their respective local bodies. Hence, the actual powers and functions of these local institutions are highly dependent on the laws of the state in which they operate. Local bodies in general are responsible for preparing district-level education plans, operating public schools in their jurisdiction, and implementing education schemes. They appoint staff, provide equipment, and finance these schools through local taxes and grants from the state government. Government schools are directly under the control of district local bodies. 3.3 Centralized Educational Governance: Schools with limited power at the bottom of a vast bureaucracy India's federal system has many centralizing features in general, and in education in particular, which have been seen as essential for nation-building and maintaining stability in a country with vastly diverse states. Scholars have described India's federalism as 'quasi-federal' or 'holding together federalism' (Stepan, 1999; Wheare, 1964). The constitution identifies several expenditure functions that are 'concurrent' or shared by the center and the states. The center has an overriding veto power in case conflict arises with states on matters related to concurrent subjects. Education is one such concurrent subject. The Planning Commission of India (1950-2015) would often play a major influencing role in planning and finance-related matters in state as well as concurrent subjects, creating a culture of centralization (Rao and Singh, 2006; Srinivasan and Wallack, 2011). Many scholars have argued that such centralization was required for the sake of nation-building post-independence in 1947, given India was a large country with vastly different states. The stronghold of the center has played a crucial role in ensuring India's union of states (e.g. Tillin, 2017). Centrally Sponsored Schemes (CSS) play a major role in maintaining the significant power of the central government in India's federal governance. Post-independence, there have been many changes in India's federal system, with states gaining much greater autonomy and power to negotiate center-state relations (Aiyar and Kapur, 2019).
However, scholars have noted the overall centralized nature of India's fiscal architecture despite greater state autonomy. This is largely attributed to significant central government funding in education and other social sectors at the state level through Centrally Sponsored Schemes (CSS). The rationale behind these schemes is to ensure equalization, i.e. minimum standards of public services to all citizens, irrespective of the state they belong to (Aiyar and Kapur, 2019). CSS in the education sector has not only provided a greater role to the central government but has also led to a separate institutional channel to exercise this role. School education in India is primarily a state responsibility and therefore largely financed by state government revenue through its state departments. However, for more than 20 years now, the central government has been contributing significantly to this sector through the CSS called Sarva Shiksha Abhiyan (SSA; valid 2001 to 2018; focused on elementary education) and Samagra Shiksha (2018-present; renamed SSA and expanded to include preschool to grade 12). Earlier SSA and now Samagra Shiksha policies and frameworks are designed by the central Ministry of Education and implemented by specially created state implementation societies (SIS), which work separately but in coordination with regular state education departments for the purpose of ensuring the universalization of school education and the implementation of the Right to Education in India. The central government's funding through CSS like SSA and Samagra Shiksha is attractive for states since it reduces their financial burden on plan expenditures. In India, expenditure on any activity (economic or social) is viewed as plan or non-plan expenditure. Plan expenditures are the developmental expenditures that result in new initiatives and innovations (Rani, 2007). In the case of elementary education, that could be new school buildings, hostels, teacher improvement, incentives, etc. (Rani, 2007). The non-plan expenditures are the non-developmental or committed expenditures, which in the case of Indian education refer to the salaries of teachers and other staff (Rani, 2007). Samagra Shiksha contributes to plan expenditures, and state governments take full responsibility for non-plan expenditures. Currently, the central-state government share in Samagra Shiksha is 60:40. When considering the total expenditure on school education, state governments contribute the biggest share. But within Samagra Shiksha, i.e. plan expenditure, the share of the central government is greater. A graph by Bordoloi and Kapur (2019) shows how the central government's funding allocations in Samagra Shiksha have consistently increased in recent years. The central government, through this funding power, significantly directs state-level policies and practices in education. Through SSA and now Samagra Shiksha, the central government has determined resource allocation and scheme design, and held the power to approve state-specific plans and budgets (Aiyar and Kapur, 2019). It has become an important means through which the central government directs and influences expenditure at the state level (Aiyar and Kapur, 2019). Due to its functioning through state implementation societies, Samagra Shiksha entails its own planning process at subnational levels (state, district, block, cluster, and village), divorced from the planning tasks associated with the state budget carried out by the state departments (Aiyar, Chaudhuri, and Wallack, 2010).
Since budgets are approved annually on the basis of plans submitted by the state governments to the central government, it gives officials at the central level significant influence over final budgets at the state level (Aiyar and Kapur, 2019). Sanan (2014) argues that this arrangement turns state governments into mere implementing agents, responding to rules and orders from the center. The central government also has the power to withhold funding to state governments if conditionalities are not met (Aiyar and Kapur, 2019). Based on my interview with state-level officials in Samagra Shiksha, some of their projects are initiated by the central government, and every state has to implement them compulsorily as part of the funding.5 In this center vs. state stronghold in education governance, the role of district bodies comes down to planning and district management as per the state government's instructions. In India, the district is the unit of education planning. District offices prepare district education plans with the assistance of Samagra Shiksha by collecting and aggregating data from schools. Districts prepare perspective plans and annual plans, reflecting all investments being made and required (Rani, 2007). The allocation of resources to districts depends upon the appraisal of these plans, the commitment of the state with respect to the state share, reports of supervisory teams regarding the quality of program implementation, and the availability of financial resources in that year (Rani, 2007). States aggregate district-level demands and present them to the central government for approval in order to release their share of funding. Districts are significantly dependent on approvals by state and central governments for their budgets. Within the districts, all key decisions related to sanctions and procurement are taken by the district bodies (Aiyar, 2011). In terms of district planning, they heavily depend on the instructions of the state government. Based on my interview with state-level Samagra Shiksha officials, they provide districts guidance on how to prepare budgets and plans. They decide and communicate the projects to be implemented and their process of execution. Below this vast education bureaucracy, with major educational decision-making already taken up by higher authorities, public schools are left with little power. Schools demand funds from districts but have no decision-making power over the timing of receipt of these funds (Aiyar, 2011). The de facto funds also have to be spent as per the priorities of the state and district administration (Aiyar, 2011). It has been noted that although the educational planning system of India has a bottom-up structure (data collected from schools and aggregated to higher levels), financial decision-making is still quite centralized (Aiyar, 2011). It significantly affects school autonomy and in many cases disempowers schools (Aiyar, 2011). A study by PAISA6 finds that in many districts, even expenditures for school grants are based on formal or informal orders from district and block officials, without adequate consideration of school needs (Aiyar, 2011).
5 Since the interview was not recorded, I cannot provide a specific quote.
6 PAISA (Planning, Allocations and Expenditures, Institutions: Studies in Accountability) is a project by a premier think tank in India called the Center for Policy Research that tracks financial flows from the central government to service delivery points.
In summary, India has a centralized educational governance system, despite being a federal country. Below a vast education bureaucracy, schools are left with little autonomy, especially in financial matters.
This has happened because the central government directs the state governments' policies and practices by providing a significant share of plan expenditure through Centrally Sponsored Schemes (CSS) like Sarva Shiksha Abhiyan (SSA) and now Samagra Shiksha. District bodies play a major role in the planning and district-level implementation process, but under the instructions of the state government. 3.4 Capacity Gap Between Top and Lower Levels of Bureaucracy - The "Flailing State" Lant Pritchett, a renowned development economist and currently Research Director at RISE,7 coined the term "flailing state" for India in 2009, after working as a lead socio-economist for the South Asia region at the World Bank, to describe the vast capacity gap between the top and lower tiers of bureaucracy in India. Pritchett (2009) equated this to a head being disconnected from its limbs. This observation has been echoed by many policy scholars studying India and is often used to explain why policies in India do not translate into practice as they were intended. I explain this concept of "flailing" below by discussing key capacity differences between the top and lower levels of bureaucracy. Top Levels: The top layers of Indian bureaucracy are generally considered quite competent as per global standards, resulting in the adoption of globally legitimate policies. Scholars contend that India produces world-class civil service officers with quite rigorous training and exposure who serve as policymakers and planners at top levels of government administration8 (see Dasgupta, 2020; Kapur, 2020; Rajagopalan & Tabarrok, 2019; Pritchett, 2009, etc.). Therefore, policies in India are well-made and innovative. These top officials tend to be well-informed of global policy trends, as they are closely connected with Anglo-American elites (in some cases even more than with the Indian populace) (Rajagopalan & Tabarrok, 2019). Due to their educational background and significant exposure to the international context, these top bureaucrats sometimes initiate or support global policies, irrespective of their relevance and of the capacity of the Indian administration to execute them (Rajagopalan & Tabarrok, 2019). Such mimicry is often well-intentioned and not necessarily pursued to pacify external or internal actors (Rajagopalan & Tabarrok, 2019). It is also not an attempt to exclude the majority from democratic policy-making processes but simply a result of their educational, intellectual, and professional background (Rajagopalan & Tabarrok, 2019). Andrews et al (2017) also note that top-level officials in India have the tendency to take on tasks inspired by external actors or global trends, which ultimately overwhelm state capacity. Oftentimes, initiatives reveal unrealistic expectations of the range, complexity, scale, and speed with which organizational capability for them can be built (Andrews et al, 2017).
7 Research on Improving Systems of Education (RISE) is a global research center at the University of Oxford for understanding education systems in developing countries to overcome the learning crisis.
8 Pritchett (2009) writes that the Indian Administrative Services' (IAS) selection process makes Harvard admission look like a "walk in the park".
Kapur and Mukhopadhyay (2007) call this tendency, of large-scale programs with little structural change due to weak implementation, a "Sisyphean State". Lower Levels: The capacity of the lower levels of Indian bureaucracy, especially service delivery workers or street-level bureaucrats, is completely mismatched with that of the top levels, creating major discrepancies in the implementation of policies. Three major capacity problems have been noted with the lower levels: an acute shortage of staff, heavy workload, and absenteeism. I explain each of these below. There is an acute shortage of street-level bureaucrats in India (Pritchett, 2009; Rajagopalan & Tabarrok, 2019; Kapur, 2020). For example, the share of local government employees in total employment in India is five times lower than in the United States and China (Kapur, 2020). This could largely be because of local body budgets. The Indian government spends around 3% of total government expenditure on local bodies, compared to 27% in the United States and 51% in China (Ren, 2015). This results in the poor delivery of many basic services in India such as health, education, water, sanitation, etc. (Kapur, 2020). There is an acute shortage of resources, both human and financial, at the lowest levels of government (Kapur, 2020). This is a crucial point because, irrespective of the carefully developed policies and programs by top bureaucrats, their fate ultimately depends on how they are implemented by the local bureaucracy. There is also an immense shortage of teachers in India. For example, in 2016-17 around 92,275 elementary and secondary schools in India were running with a single teacher for the entire school, as per UDISE data (Indian Express, 2022). In the state of Himachal Pradesh, a total of 2,057 government schools are run by just 1 teacher (Lohumi, 2021). This shortage of staff has resulted in a heavy workload. The existing staff are always overburdened, multi-tasked, without specialization, and hence inefficient (Kapur, 2020). Based on my fieldwork in the Gujarat state of India over the years, I have learned that teachers are overburdened with administrative tasks and other government-assigned duties, giving them little time to teach in classrooms. Schools generally have a maximum of 2-3 teachers for students from grades 1 to 8, without a clerk or other staff member who can take up administrative tasks. Plus, they are expected to perform a range of other duties such as participating in large-scale government events, awareness-building campaigns, conducting elections, collecting data for citizens' election cards, celebrating key occasions, etc. Recently, there were reports of the government engaging teachers in water conservation drives in northern areas of Gujarat (IANS, 2018). Similarly, in the state of Rajasthan the government has relied on teachers to motivate couples to adopt family planning methods, monitor women's self-help groups, distribute drought and flood relief supplies, etc. (Ramachandran, 2005). This practice of involving teachers in non-academic duties is not limited to particular states but is a nationwide phenomenon. Absenteeism is a widely noted issue with Indian street-level bureaucrats (especially in rural areas) across different public sectors such as education, health, civic work, etc. Many studies have noted the problem of unauthorized teacher absenteeism across India (e.g. Mehrotra, 2006; PROBE, 1999; etc.).
A World Bank study, through unannounced visits to 3,700 government schools across 20 states and 35,000 attendance observations, found that 1 in 4 teachers was absent in rural areas (Mooij & Narayan, 2010). Instances of absenteeism exist due to poor accountability and governance mechanisms to monitor teachers (Kapur, 2020). Apart from the above observations, an interesting paradox about Indian bureaucratic capacity has been noted by the well-known political scientist Devesh Kapur, a professor at Johns Hopkins University. Kapur (2020) argues that India has a strong record in successfully managing complex tasks on a massive scale but lags behind in delivering basic services. India delivers on macroeconomic outcomes rather than microeconomic outcomes. The bureaucracy works well where tasks are episodic, rather than where delivery and accountability have to remain consistent and reliant on state capacity at local levels (Kapur, 2020). For example, India is effective and efficient in difficult functions like sending satellites to space, preparing a mission to Mars, conducting fair and electronic elections in such a vast country, organizing the world's largest human gathering (more than 10 million people) called the "Kumbha Mela", ensuring countrywide polio vaccination, etc. (Kapur, 2020). But India remains inadequate and inefficient in delivering basic functions like health and education with satisfactory quality (Kapur, 2020). Lastly, all of these discrepancies in capacity exist in India amidst a severe culture of corruption, which manifests in various ways across the system and further strengthens capacity constraints (Gupta, 2017; Quah, 2008; etc.). Overall, India's massive diversity, centralized educational governance, and flailing bureaucratic capacity make it an interesting case study, and quite useful for explaining the rise of instruments like NAS for educational governance. India's demographic, economic, geographic, and cultural diversity is unparalleled in the world. It has centralized educational governance with major powers given to central and state authorities, and some to district authorities, leaving little room for school autonomy, especially in financial matters. It has a "flailing" context which creates a major capacity mismatch between the top and lower levels of bureaucracy, where the top levels strive for global legitimacy and the lower levels struggle with issues such as staff shortage, absenteeism, multitasking, and lack of specialization. All of these factors play a major role in how India has governed education over the years, and the types of data it collects and uses for governance purposes. The story of NAS and why it has gained such prominence in India draws on these background factors. In the next chapter, I discuss the design, data, and methods of my study. CHAPTER 4: DATA AND METHODS After explaining the literature on the TBA approach and setting out the need to study India's approach to using NAS for bureaucratic governance, I will now explain the data and methods adopted for this study. Capturing the nature and context of India's approach to data-driven national educational governance was an iterative process, with considerable back and forth. It involved three main phases: 1) understanding NAS, 2) expanding knowledge of NAS and capturing factors shaping NAS, and 3) explaining the importance of NAS in India. I will explain all these phases in this chapter and also provide a discussion on methodological scope and limitations.
4.1 Background on Study Conceptualization and Case-Study Design This study was originally conceptualized with two fundamental driving questions: 1) What is India's approach to data-driven national educational governance, and in what ways does it differ from the TBA model? 2) What factors explain the origin and presence of the Indian approach? In other words, what type of education data does India have, what is India doing with it, why, and what might be its implications? Right from the beginning, my interest remained in exploring the Indian scenario because prima facie it appeared that India did not have a TBA approach, despite a considerable potential to pursue one. While the literature was brimming with studies on TBA and its different adaptations, I could not find similar discussions in the literature from India. I was aware of India's massive success in collecting timely annual data from its 1.5 million schools since the mid-1990s on a range of parameters through its UDISE system (Unified District Information System for Education) and its use for educational planning purposes. This was a remarkable achievement for India given that it is still challenging to reach some parts of India due to geographical and infrastructural constraints (e.g. electricity). The central and state governments were also announcing many programs to improve learning outcomes in the country. Despite that, I found that insights emerging from Indian literature about challenges faced by schools and teachers did not speak about the TBA model and its effects as documented in the literature discussed in Chapter 2. It appeared that India could be an interesting setting to study data-driven national educational governance. I became curious to explore what India was doing with its data for governance purposes, and how data were associated with upcoming educational reform interventions in India. I wanted to understand the Indian scenario not only for the sake of India, but to understand how the Indian approach can contribute useful insights to the current literature and help in expanding knowledge about data-driven national governance models and their effects. Given these motivations, adopting a qualitative case-study approach seemed most suitable. The fundamental purpose of doing a case study is to explore a phenomenon within its context (Baxter and Jack, 2008). According to Yin (2003), the qualitative case study approach is well suited to understanding and evaluating complex interventions and programs. My motivations for this study can be plainly categorized into what, how, and why questions. According to Yin (2003), these questions can be well answered through the qualitative case-study approach. It is particularly helpful when the boundaries between the phenomenon and context are not clear (Yin, 2003). One major justification for doing a qualitative case study came from the fact that many empirical studies in the literature are also case studies. The larger goal of this study is to expand notions about approaches for data-driven national educational governance. Hence, this case study can be particularly categorized as an Instrumental Case Study. An instrumental case "provides insight into an issue or helps to refine a theory. The case is of secondary interest; it plays a supportive role, facilitating our understanding of something else. The case is often looked at in-depth, its contexts scrutinized, its ordinary activities detailed, and because it helps the researcher pursue the external interest.
The case may or may not be seen as typical of other cases" (Baxter & Jack, 2008, p. 549). 4.2 Journey of Conducting this Study In the subsections below, I describe the three-phase journey of understanding the Indian approach to data-driven national educational governance through NAS and explaining its context. 4.2.1 Phase 1: Understanding NAS The first stage of my study was about understanding NAS. As described in section 4.1, I had begun with the broad question: What is India's approach to data-driven national educational governance, and in what ways is it different from TBA? I conducted a literature review to understand the types of data collected in India, the types of data systems/initiatives, their purposes, and their apparent effects. This process helped me become aware of and acquainted with NAS, a nationwide standardized assessment data collection effort used for national educational governance. Below I explain the key decisions and processes involved in this phase. Particularly, I discuss the decision to focus on one state for a better understanding of the national context, key data sources, the document selection strategy, how I narrowed my focus on NAS and analyzed the documents, and preliminary findings from Phase 1 that guided my steps in Phase 2. a) Focusing on one state for better understanding: I decided to focus on one state to better understand the national context. I realized that in India, central as well as state governments both collect education data. The central databases and systems are applicable nationwide, and hence available to make comparisons across states, districts, and school entities. However, some data collected at the state level are specific to certain state-level interventions. Therefore, I decided to also review literature from one state in order to ensure that I did not miss out on understanding important details relevant to country-wide governance, which is my focus. I selected the state of Gujarat for this purpose because, having lived in Gujarat, I was familiar with the Gujarati language used in policy documents. The focus on Gujarat is also quite appropriate because Gujarat is one of the most economically developed states in India and is known for its proactive efforts to collect education data and modernize the sector with e-governance. However, ironically, Gujarat also performs poorly on some key education indicators. More details on this can be found in Appendix A. b) Data Sources: I studied central (nationwide) as well as Gujarat-specific data initiatives by reviewing a variety of data sources. The review of different sources was essential because there is hardly any literature with a comprehensive compilation on this topic. Oftentimes, information available from one source is either not updated or not detailed enough. Therefore, it became necessary to review a variety of sources. Plus, in case-study research like this, gathering diverse evidence is always required and preferred. The case-study method requires the use of multiple sources of evidence to capture the case in its complexity and entirety (Yin, 2003). In Figure 4.1, I provide all the sources that were used to understand India's nationwide data-driven governance approach, and in Figure 4.2 I provide the protocol I used to collect data from these different sources.
Figure 4.1: Data sources for Phase 1 Figure 4.2: Document review protocol I mainly relied on central and state government websites, reports, statistical publications, and data dashboards as my main data sources, and I referred to them multiple times while I was understanding the Indian approach. All other data sources such as academic literature, media articles, expert blogs, and YouTube videos were mainly used to better interpret and contextualize what I was observing in the main data sources, by identifying confirming and disconfirming evidence across sources. I selected the timeframe from 1990 to the present in shortlisting the literature because during the mid-90s India for the first time developed its capacity to collect computerized education data from the school level. This was also the period when TBA models were emerging in the United States and the United Kingdom, which have been the global influencers or spreaders of TBA. This timeframe of more than 30 years can provide a strong understanding of India's policy and data developments. Most of the materials I reviewed were in the English language, except for some media articles and videos from Gujarat that were in the Gujarati language. c) Document Selection Strategy: I reviewed the websites by searching for all the content across different tabs and web pages that was related to education data, statistics, databases, surveys, indicators, learning goals and achievement, assessments, educational outcomes/performance, education/school accountability, and Educational Management and Information System (EMIS). I found links to separate websites with data dashboards within these websites and reviewed all the information that was available within them. Within official policy documents, I found that the program frameworks and national education policies are the core documents based on which all educational decision-making takes place in India. They were useful for understanding the policy context and intended vision behind current and future data-based governance plans. I reviewed annual reports in order to separate the "proposed" and "visionary" aspects of policy from the "practice" or "implementation" aspects of policy. Annual reports were immensely useful in tracing the timeline of different data initiatives and getting a sense of how important they were amidst many other activities undertaken by the government. I have provided the timeline of these initiatives and discussed them in Chapter 6. Official presentations were useful for checking if there was any policy detail that I was missing that was not covered in program frameworks, national education policies, and annual reports. The consultant reports and presentations were also useful for the same purpose. Finally, the advisory report was reviewed as it was cited in some of these documents for its recommendations regarding improving learning outcomes in the country. The expert blogs I reviewed were from the organizational websites mentioned in table 4.1. These organizations/think tanks are recognized in India for their reportage on the latest education-related news, developments, opinions, etc. I searched throughout these websites to find tabs and web pages with anything related to education data, statistics, databases, surveys, indicators, learning goals, assessments, educational outcomes/performance, education/school accountability, and EMIS. In some cases, I also utilized the search tab to review specific content related to data initiatives like NAS, UDISE, PGI, etc.
For the media articles, I used the advanced search feature of Google to find all articles containing the specific words listed in table 4.1. Regarding YouTube videos, I found them as suggestions on YouTube and reviewed those that were relevant to this topic. I reviewed all these sources to look for confirming and disconfirming evidence compared to the policy document reviews. Through this review, I was also able to learn important insights and findings that were not covered in the policy documents. This way I was able to refine and strengthen my findings. d) Identifying NAS as the official, national-level learning assessment data In the initial stages of this entire review process, it became clear to me that two nationwide official data collection efforts exist in India: UDISE (annual, school level), focusing on schooling inputs, student demographics, and enrollment-related data, and NAS (every 3-4 years, district level), focusing on learning performance. Therefore, my search inevitably remained focused on descriptions and commentaries related to these two initiatives. Based on my notes gathered using the protocol, I wrote memos about what I was seeing about India's data-driven national governance approach. I also coded my notes in some places to capture the big ideas I was observing through them. These codes also helped me in my memo writing. I demonstrate examples of how I applied codes to documents in Figure 4.3 and how I developed a coding scheme in Figure 4.4. Figure 4.3: Example of code application Figure 4.4: Example of coding scheme e) Preliminary findings from Phase 1 Through this exercise, I came up with two preliminary findings. First, India did not seem to be implementing a TBA model currently. I did see TBA-type strategies being proposed for the first time in the National Education Policy (2020) document, but there is no clarity on the pathway through which such a strategy will be realized in the future. I provide more information on this in Chapter 6, section 6.5. At present, India's UDISE system, largely focused on schooling inputs, has been collecting school-level data. Based on these data, India has been releasing school report cards since 2008/9. Interestingly, these school report cards do not report any standardized school performance data. Furthermore, policy documents do not clearly mention the specific goals or purposes behind releasing these report cards. Multiple sources confirmed that there is a lack of awareness among people in India about the very existence of these report cards. As I will discuss in Chapter 5, the school report cards are not as user-friendly as those seen in other countries. This indicated that although India had adopted a strategy similar to TBA, i.e. publicizing school data, it may not be targeted towards TBA, due to the absence of standardized performance data on the cards, a lack of clarity behind their purpose, and a lack of awareness in the wider public about their existence. Second, while there seemed to be no TBA for schools, there appeared to be TBA-type governance for states through NAS data. NAS is India's only nationwide learning performance data, and it is available only at the district and state levels. It does not provide any school-specific information. Particularly interesting was the use of NAS through the Performance Grading Index (PGI)9 and its equivalent (but now discontinued) School Education Quality Index (SEQI), through which states were graded and ranked against each other, respectively.
I noted that there was a far greater policy and media spotlight on state-level data in India than on school-level performance data (since the latter were not collected). I was curious to understand what all of this means as an approach to national data-driven educational governance. Therefore, with these questions in mind, I moved on to the next phase of this study, i.e., expanding knowledge of NAS and capturing the factors shaping it.

9 As I will explain in Chapter 5, PGI was originally made for states. It was only a few months ago, in June 2022, that PGI was released for the first time for districts.

Figure 4.5 is a graphical summary of the Phase 1 procedures I followed. The findings from this phase are explained in detail in Chapter 5.

Figure 4.5: Process followed in Phase 1

4.2.2 Phase 2: Expanding knowledge of NAS and capturing factors shaping NAS

a) Interview Objectives and Selection of Interviewees

In order to confirm and enhance the preliminary findings from Phase 1 and understand the reasons behind them, I conducted 14 interviews with officials currently or previously in charge of the data initiatives at the central and state levels. The central-level interviews were my main data sources. The state-level interviews were conducted to confirm and better understand, from the states' perspectives, what was applicable nationwide in India. Figure 4.6 is an example of how I connected the emerging findings in Phase 1 to the design of my interviews and the selection of interviewees in Phase 2.

Figure 4.6: Example of selection of interviewees based on preliminary findings in Phase 1

b) Interview Design and Approach

My interviews explored questions such as why specific initiatives were conceptualized, how they evolved, what purposes they served in the Indian education system, how they were developed, and what challenges they faced. Since the data initiatives were all different, I asked interviewees questions specific to each initiative to understand the differences in the use of school-level vs. state-level data. The interview guides can be found in Appendix B. All my interview guides differed from each other, as they were designed in reference to specific data initiatives and the position of the interviewee. Countries with TBA models, such as the United States, the United Kingdom, and Australia, are developed countries with much longer experience in collecting and using standardized data. Even in a developing country like Chile, which has adopted school-level TBA, there is greater experience and capacity in this area, as its efforts started in the 1980s. Therefore, my interview questions were prepared with the intent to particularly elicit the role of Indian states' capacity or ability to pursue data-based governance in technical, organizational, or institutional terms. In order to gain some grassroots understanding of the uses and effects of data initiatives, I also interviewed two district officials and three schoolteachers. I interviewed one non-government assessment expert who often works with central and state governments in order to gain a third-party perspective on emerging plans in this area in India and their possible implications.

Because there is little up-to-date literature on India's data initiatives, I kept all my interviews semi-structured. I used a guide to keep the interviews focused, but I relied heavily on the input of the interviewees. Central-level interviews were in English, and state-level interviews were in a mix of English and Gujarati.
c) Impact of COVID

COVID had a major influence on my study and made it far more challenging than usual to arrange interviews and elicit the desired interview responses. Due to COVID-related travel restrictions, my interviews were conducted over Zoom and telephone. I received IRB-exempt approval for my interviews in late June 2021. However, it took many months to complete all the interviews because from May to August 2021 India was severely affected by the second wave of the COVID outbreak. This second wave was far deadlier than the first. A large number of people lost their lives, many were hospitalized, and many suffered severely weakened immunity. Many senior professionals in India took time to resume their official duties as their post-COVID health recovery took a long time. In states like Gujarat, the COVID outbreak coincided with an outbreak of Chikungunya. My respondents were severely affected by these situations: either they themselves or their close family members were suffering from severe illness. Some of my respondents also lost family members to COVID.

Given this scenario, it was very tough to arrange my interviews. Making appointments with interviewees remotely was a challenging process. For the central-level officials, emails and LinkedIn messages were the only means of seeking appointments, and I had to write many reminder emails to get in touch with them. For the state-level officials, cold calling was the only means of securing appointments, as their email addresses were not available online. I made many attempts to call different officials and seek their time. Some of them also turned down my requests due to health challenges of their own or within their families. Given the time difference between India and the United States, I made myself available around the clock to catch the availability of respondents. Given this challenging process, I was only able to finish my interviews in November 2021. I recognize that remote data collection, especially over the telephone, might have affected my interviews, given that there is little scope to build a meaningful connection with interviewees and put them at ease before beginning the interview.

d) Respondent Details

I conducted 14 interviews in total (see Figure 4.7). Respondents were selected through a purposive sampling strategy because only the people in charge of the initiatives, either as part of the core team or as consultants, can best explain the origin and evolution of these initiatives. I identified them through their names on reports or government websites. The teachers I interviewed were selected through purposive as well as convenience sampling. There often tends to be a significant difference between rural and urban schools in India in terms of infrastructure, student demographics, number of school staff, student enrollment, and, most importantly, issues like access to computers, internet connectivity, and the influence of government authorities. Through my previous research work with schools, I learned that schools in urban areas are often under stricter monitoring and influence of higher authorities compared to schools in remote rural areas, because state and district education offices are located in urban areas. Therefore, I decided to interview teachers from rural and urban schools to get a general understanding of the uses and effects of data initiatives. Through existing connections, I was able to approach two teachers in different rural areas and one teacher in an urban area.
I would like to note that although I learned a lot from the teacher interviews about the data initiatives, their implementation, school-level concerns, and so on, the findings of this dissertation are not directly derived from these interviews. However, the teacher interviews were extremely helpful in better understanding the overall context in which NAS is positioned. In Figure 4.7, I provide some key details on the interviewees. To ensure anonymity, I do not mention their specific positions and organizational affiliations. Since I rely heavily on interview data from two consultants involved in the development of NAS in Chapters 5, 6, 7, and 8, I would like to note that I refer to them as Consultant 1 and Consultant 2. Consultant 1 was involved in NAS before 2016 and is currently advising the government on the topic of improving learning outcomes. Consultant 2 was involved in NAS only in the 2017 survey round but was instrumental in changing the design of the survey and its implementation. Both consultants are renowned experts on the topic of learning assessments in India. I do not mention their organizational affiliations or their national/international status in order to protect their identities. Figure 4.8 is a graphical summary of the process followed in Phase 2.

Figure 4.7: List of respondents with key interview topics

Figure 4.8: Process followed in Phase 2

4.2.3 Phase 3: Explaining the importance of NAS in India

The third phase involved analyzing the interview data for a better understanding of NAS and explaining the context behind NAS as the instrument of national educational governance in India. I transcribed all the interviews and analyzed them using the thematic analysis method (Braun and Clarke, 2012). The goal of this method is to systematically identify, organize, and offer insights into meaning across data sets. It is helpful in identifying meaning in relation to a particular topic and research question; the analysis process is targeted toward answering a particular research question by finding patterns and larger meanings in the data. Using this method, I first immersed myself in all the interview data by carefully listening to the audio twice as well as reading the transcripts thoroughly. This helped me become fully familiar with the data and know the types of topics covered in each interview. While I was doing this exercise, I made many jottings and highlighted items that appeared potentially interesting for my research question. Listening to the audio helped me capture the surface-level meaning, and making the jottings helped me think analytically about the meaning behind the data. As I was reading, I would always ask myself: in what ways does this point help explain India's NAS approach? I made notes on each interview and across interviews. I was reflecting on individual data initiatives and also observing patterns across initiatives. The jottings were not polished excerpts but casual notes to help me begin thinking about the analysis. This exercise was extremely helpful in building my memory and recall of the data in later stages.

The next step involved coding the data. I hand-coded the transcripts and applied descriptive and conceptual codes. All of these codes were inductively derived, based on their potential relevance to answering my research objectives. Figure 4.9 provides some examples of how descriptive and conceptual codes were assigned.
Please note that this is just an indicative example and does not include all the codes I applied to the passages.

Figure 4.9: Example of descriptive and conceptual interview coding

Once the codes were ready, I started comparing them in order to classify them into broad categories. I came up with three broad categories to explain India's adoption of NAS: Policy Context, Technical Context, and Organizational Context. These three categories led to Chapters 6, 7, and 8, respectively. Once I had sorted all the codes into the three categories, I read within the categories to develop themes from the codes within them. I wrote many analytical memos on each code and across codes, which helped me come up with themes that explain the factors behind the adoption of NAS. A theme "captures something important about the data in relation to the research question and represents some level of patterned response or meaning within the data set" (Braun & Clarke, 2006, p. 82). My understanding and description of India's NAS approach also became sharper through this process, and I confirmed it as a standard as well as comparative bureaucratic governance approach. The model and its explanations fit well together. Figure 4.10 summarizes my process for Phase 3.

Figure 4.10: Process followed in Phase 3

Based on this exercise, my research questions were significantly refined and are now described as follows: 1) What is NAS, and how is NAS used for national educational governance in India? 2) What policy, technical capacity, and organizational factors explain NAS's central role in educational governance?

4.2.4 Validating the Findings

In order to ensure that I did not miss any pieces of data that explain the model, I read through all my codes again and confirmed that I had classified them correctly. I also read through the transcripts again to ensure that I had not missed any crucial point explaining the model and had not interpreted anything inaccurately. Also, while I was developing codes and writing analytical memos, I read substantial literature on some recurring concepts and keywords: for example, literature on the Indian education finance system, Indian education governance, Sarva Shiksha Abhiyan, the Right to Education Act, the District Primary Education Program, the No Detention Policy, the Annual Status of Education Report, state-level assessments in India, and the New Education Policy. I integrated the findings from this literature review into the narratives of the emerging themes. It significantly helped me refine the narratives and improve my interpretation by identifying confirming and disconfirming evidence between the literature and my data analyses. Lastly, to ensure an appropriate interpretation of my data, I regularly shared my memos with my advisor and presented them at different conferences and gatherings. Having this external oversight significantly helped improve my work.10 All of these steps helped me triangulate the findings and build validity and reliability. In addition to looking within my data and ensuring they were interpreted and analyzed appropriately, I have also provided alternate explanations/insights on my findings that are not covered by the data but might be potentially important. I have noted them in my findings chapters where applicable, and also in the final chapter on implications.

In summary, my dissertation was divided into three phases, from identifying the Indian approach to explaining it.
In the first phase, I focused on reviewing a range of different documents, which helped me identify NAS as the official national-level learning assessment data. The preliminary findings from this phase indicated an absence of a TBA-type approach in India at the school level, but instead a TBA-type use of NAS data for states. These preliminary findings led me to Phase 2, where I designed and conducted interviews to better understand these findings and capture the reasons behind them. In the last phase, I conducted a thematic analysis of the interviews, which led to Chapters 6 to 8 of this study. I validated these interview findings through an extensive literature review, external feedback, and reexamination of my data.

4.3 Methodological Scope and Limitations

My study is bounded in three important ways. Firstly, this study refers to the overarching Indian data-based governance model through NAS because it is applicable nationwide. It will not discuss state-specific nuances and differences. It will also not discuss data initiatives taking place within particular school systems such as the Central Board of Secondary Education, Navodaya Vidyalaya, etc. Secondly, I originally intended to capture the effects of the NAS approach in depth. However, based on the interviews, it is too early to find any concrete effects. I do, however, discuss the possible implications of the approach in the final chapter. Lastly, the latest round of the National Achievement Survey (NAS) was conducted in November 2021, and my interviews with NAS-related officials took place in July-August 2021. Therefore, most of the discussions around NAS have remained focused on the experience of the 2017 survey. This does not have major effects on the validity of my findings, as NAS 2021 is largely similar to NAS 2017 except for a few changes that I have noted in the upcoming chapters.

10 I would like to acknowledge my dissertation committee for their extremely helpful suggestions to improve my research process and emerging findings.

Finally, to better clarify the goal of my study, I would like to emphasize that although the study focuses on India, I aim to broaden the conceptual thinking and conversations around the use of data as an instrument of national governance. Through this study, my intention is to show that countries may, for various reasons, adopt data instruments for national educational governance that differ from TBA. As a result, there might be different issues and topics pertaining to these countries that are not necessarily covered by the TBA-dominant literature. The next chapter describes India's NAS approach, followed by chapters that explain the reasons behind its increasing importance.

CHAPTER 5: NATIONAL ACHIEVEMENT SURVEY AND INDIAN BUREAUCRATIC GOVERNANCE

In Chapter 4, I note that my close review of the education data governance system in India revealed two primary efforts: 1) UDISE: school-level inputs, demographics, and enrollment-related data (annual), and 2) NAS: district- and state-level standardized learning assessment data (every 3-4 years). In this chapter, I begin with a brief discussion of UDISE, as it is a data governance effort that precedes NAS by almost a decade. The background on UDISE is helpful in understanding the story of NAS that I am interested in. After this discussion of UDISE, I explain what NAS is and how it is used for national educational governance in India.
5.1 Unified District Information System for Education (U-DISE)

UDISE is a school-level educational management and information system (EMIS). It primarily collects different types of inputs-related data (but no standardized learning assessment data) from schools in India. This effort started in the mid-1990s and became fully developed around 2012. UDISE collects computerized school-level data from approximately 1.5 million public and private schools in the country. Each school in India is provided with a unique 11-digit UDISE code. The data are collected from schools on various aspects including school profile (management, sources of funding, school type, language of instruction, etc.), enrollment and repeater information (by age, sex, social class, etc.), teacher provision (including availability and qualification of teachers, teacher training), infrastructure and learning facilities, school-level exam pass percentages, and receipts of school grants (NIEPA, 2017; Bordoloi and Kapoor, 2018). UDISE is a centrally designed system; however, states and union territories have full authority to add provisions to capture any additional data/information relevant to their needs (Mehta, 2017). Because of UDISE, school data from across the country have been made available to the government, a remarkable achievement for a country like India, where developing such capacity across a vast geographical scale is immensely challenging.

UDISE is not designed to serve purposes like TBA, as it does not collect any school-level standardized performance data. This has been a major critique of UDISE (e.g., CCS, 2019; Bordoloi and Kapoor, 2018; Pritchett, 2014a). It is also an important reason why NAS, as district- and state-level learning assessment data, has gained so much importance as a national data instrument in India. Various government publications, such as policy implementation frameworks and annual reports, largely refer to the use of UDISE for the purposes of bureaucratic planning, monitoring, and evaluation. More specific details on the purposes of UDISE are covered in Chapter 6, where I provide background on the evolution of UDISE.

The government disseminates UDISE data annually on government websites in the form of performance dashboards, school report cards (SRCs), district report cards, state report cards, and national report cards. These data are used to report key educational indicators such as the dropout rate, enrollment rate, gender parity index, transition rate, schools with toilets and water, etc. Interestingly, the public sharing of school data through school report cards (SRCs) is also significantly different from the TBA approaches discussed in Chapter 2. The UDISE+ Know Your School website (see MoE, 2022a) currently hosts the latest SRCs in India, where one can see that the 2020-21 SRC is filled with data on school management, student demographics, and schooling inputs such as drinking water facilities, libraries, digital facilities, etc. These data may be helpful for government authorities in school-related decision-making; however, it is not clear how they might be relevant for parents making school choice and other decisions. Moreover, these data are presented in a format that might be hard for parents or other grassroots stakeholders to read. The meaning behind many terms/acronyms used in the report card (e.g., CWSN, SMDC, CRC) is hard to understand for those who are not familiar with them.
There is no guide to help laypersons interpret these terms. On the website where these SRCs can be downloaded, there is no feature to compare schools with each other or against state standards on any of these indicators. The only way to compare schools is to download each report card and make the comparison manually. Lastly, the report cards are available only in English on the government website, making them difficult to access for the majority of the population who do not know English.

It appears that SRCs were not intended to promote TBA or create TBA-type effects in India. The lack of standardized learning performance data and the constraints mentioned above are, of course, major indications. But more importantly, there has been no clear policy or objective statement from the government about the reason and purpose behind these SRCs. Scholars have criticized the general lack of awareness about SRCs (mainly due to a lack of effort to publicize them), because of which parents, teachers, and other grassroots stakeholders do not use them (e.g., Singh, 2022; Shah, 2019a; CCS, 2019; Bordoloi and Kapoor, 2018).

5.2 National Achievement Survey (2017 and beyond)

I now turn my attention to the primary focus of my dissertation, the National Achievement Survey (NAS). In the remainder of this chapter, I explain what NAS is and how it is used for national educational governance in India. The importance and use of NAS in the education governance process in India have grown rapidly since 2017 (NCERT, 2019; 2018; MoE, 2021a), coinciding with some significant changes in NAS administration. In my analysis, therefore, I focus particularly on the post-2017 period and the key features of NAS 2017 (which also largely apply to NAS 2021).

5.2.1 Key Features of NAS 2017

The National Achievement Survey (NAS) is a large-scale survey of students' learning, administered periodically since 2001 at the elementary level (Grades 3, 5, and 8) and from 2015 onwards at the secondary level (Grade 10). The subjects tested are maths and language in Grade 3; maths, language, and environmental studies in Grade 5; and maths, language, science, and social sciences in Grade 8. Apart from the learning assessment, the survey also collects background data on teachers, students, and schools to correlate background variables with learning outcomes. The survey is led by the Educational Survey Division (ESD) of the National Council of Educational Research and Training (NCERT), under the aegis of the Department of School Education and Literacy (DSEL), Ministry of Education (MoE). According to the Ministry of Education, the survey "gives a system-level reflection on the effectiveness of school education" (MoE, 2021a).

NAS 2017 was administered in 701 districts of 36 states and union territories. Nearly 2.2 million students studying in 110,000 government and government-aided schools participated in this survey (NCERT, 2019). To connect student performance with background characteristics, a Pupil Questionnaire (PQ), a School Questionnaire (SQ), and a Teacher Questionnaire (TQ) were used to collect data. The Teacher Questionnaire was administered to approximately 0.28 million teachers. As described earlier, these data are collected to correlate students' performance with contextual variables (MoE, 2021a). The survey instrument was developed in English but implemented in 20 regional languages (NCERT, 2019). The 2017 survey included some unique features that marked a departure from previous applications of the NAS.
In the sections below, I describe some key features of NAS since 2017, including attention to district-level sampling, a focus on learning competencies, and a single-day assessment cycle.

a) Expanding the scope from a state-level to a district-level sample survey

Until 2017, NAS was conducted only as a state-level sample survey. However, the central government and the NCERT team in charge deemed that state-level results were not useful for meaningful governance, as Indian states are quite large and there are major variations in educational contexts across districts. (As a reminder of the context, Indian states are larger in population than many countries in the world, and districts are local administrative units that form the tier of local government immediately below the states. More information about the size of states and districts in India is given in Chapter 3, Section 3.1, and in the footnote below.11) This view is reflected in the NAS 2017 national report.

NCERT has been implementing these surveys on a sample basis at the State/UT level. Even though the learning gaps were being identified and shared, it did not percolate to the grass root and since the interventions suggested were generic, it lost its applicability and suitability at the implementation stage. With the passage of time it was felt that these concerns need to be addressed in a much more decentralized manner. Keeping the above in view, it was proposed to conduct the National Achievement Surveys, with districts as the unit of sampling. (NCERT, 2019)

Therefore, for the first time, NAS was conducted at the district level in 2017 and will continue to be conducted at that level. The core NCERT official I interviewed echoed a similar sentiment, as given below. It is worth noting that this official used language strikingly similar to the report above; for example, both mention that the results "did not percolate to the grass root".

Now we were doing state-level studies in NAS right from 2001. But we saw that reports were prepared, but they did not percolate at the grass root level. It did not actually help in improving teaching-learning at the grassroots level. We also realized that within a state there is a lot of heterogeneity among districts. They are not similar at all. For example, you would know about UP (Uttar Pradesh)12. It is such a huge state, there are 76 districts out there. So to get a more granular report, and to make it more applicable at the grass-root level, we started producing district-level reports. (NCERT core staff)

11 Gujarat state is the 9th most populous state of India, with a population of more than 60 million. This is bigger than the population of countries like Canada, Spain, South Africa, and Kenya. The area of Indian districts ranges from 3.5 sq miles to 17,600 sq miles, depending upon the area of the state. The population of districts ranges from 8,000 to 11 million.

12 Uttar Pradesh is the largest state in India in terms of population. It has a population of more than 200 million. This is close to Brazil's population, the 7th most populous country in the world. In terms of land area, Uttar Pradesh is the 4th largest state, with an area of around 93,000 square miles. This is approximately comparable to the state of Minnesota or Michigan in the United States.
b) A shift from measuring curricular knowledge to measuring learning competencies

The second important feature of NAS 2017 was that it measured learning competencies, unlike previous rounds of NAS, which measured curricular knowledge. NAS survey questions were developed using NCERT's learning outcomes framework (2017). Based on my interview with core staff at NCERT, the previous rounds of NAS (before 2017) were similar in design to the NAEP in the United States, but the policy perspective on learning measurement changed in 2017, and this practice was discontinued. Earlier, NCERT compiled common core content across states and measured students' mastery of it. In 2017, however, a major shift happened when the central government decided that education policy must give greater focus to the learning competencies built through content, rather than to the content itself. To this effect, NCERT prepared a subject-wise framework of learning outcomes (2017) for every grade. Given this change, NAS was also transformed to measure competencies and reflect this new shift in policy thinking. This shift is similar to the approach promoted by PISA of measuring learning skills and competencies.

Before 2017, we used to follow the (United States') NAEP design. We used to get the syllabus from different states and look at the syllabus and then call out the core syllabus for each of the subject domains and then prepare the assessment items based on those core syllabus to target. But when India introduced learning outcomes, the entire outlook to pedagogy changed. It was not focused on learning of the content, but content was seen as a vehicle- using it how we can develop different competencies in the child. So this competency-based document in India, we (NCERT) came out in 2016, and these competency based learning outcomes were developed for each grade and each of these different subjects as well. When the competencies were identified grade wise and subject wise, it was but natural that NAS was also required to change. So in NAS 2017 we targeted totally competency-based and that too giving focus on the learning outcomes. (NCERT Core Staff)

The consultant who helped the NCERT staff design the new survey instrument measuring learning competencies mentioned that the problem with measuring content is that it captures rote memorization instead of actual learning. Therefore, a shift to competency-based assessment was made, and because NCERT had already prepared a learning outcomes framework, the shift became easier. This appears to be a response to rising concerns in India about the education system promoting rote learning, memorization, and test-taking instead of critical thinking (e.g., Kumar, 2019; Nayak, 2018).

When we are into learning assessments, the instruments are the heart of any assessment. Because everything depends on what the questionnaire is going to ask. So when we reviewed the items, we found that most of the items are measuring rote memorization. And we wanted to make it competency-based rather than content based. The good point was that NCERT had already defined learning outcomes that students are expected to know. So, we linked the learning assessment items to learning outcomes. (NAS Consultant 2)

c) Move to a single-day assessment from a prolonged yearly cycle of data collection

The third important feature of NAS 2017 was that it was conducted on the same day across the country.
Previously, as shown in NCERT (2019, p. 2), data for a given grade were collected in one yearly cycle, followed by a different grade in another cycle. The time cycles for each grade were also not consistent. It took 3-4 years to complete the implementation and reporting of a round, and each round covered different grades. This was one of the major critiques of NAS, as this design made it difficult to compare learning outcomes effectively (NCERT, 2019).

5.3 Use of NAS (2017 and Beyond)

By 2017, India thus had standardized learning performance data from around 2.2 million students in Grades 3, 5, and 8 from over 100,000 government and government-aided schools. But as I alluded to above, these data were not used for TBA with any of these individual schools, as the data are collected at the district and state levels and provide no school-specific information. In this section, I explain how NAS is instead being targeted for standard as well as comparative bureaucratic governance in India. Standard bureaucratic governance refers to the inclusion of NAS in continuing the existing standard bureaucratic governance practices in India, while comparative governance refers to the reliance on the power of cross-state comparisons to govern education more softly and steer processes from the center, at a distance.

5.3.1 Standard Governance

Since 2017 there has been a substantial focus by the central government on institutionalizing the use of NAS for standard bureaucratic governance purposes. On paper, NAS 2017 formalized the use of these data in governance processes such as state- and district-level education planning, designing district-specific pedagogical interventions and learning materials, teacher training, and curricular reform (NCERT, 2019). Efforts to effectively translate this into practice have been ongoing since the results of NAS 2017 were released, and I provide more information on these efforts in this section. My interviews confirmed that NAS data use in the manner proposed here has already begun. However, I do not have information about the extent to which the data are being used in practice.

As per the report, the objectives regarding the use of NAS data are to support states/UTs, districts, and blocks in interpreting and understanding the findings, improve school-wise attainment of learning outcomes, and ensure academic support for the design and implementation of interventions to improve students' learning levels (NCERT, 2019). Please see NCERT (2019, p. xvi) for a useful summary of the intended uses of NAS for standard bureaucratic governance purposes such as national and state policy making, teacher training, and curriculum design. As per NCERT (2018, p. 7), NAS is aimed at bringing about transformations in governance, such as to "shift the focus of student learning from content to competencies, help teachers divert teacher-learning in the desired manner, make responsible and alert for ensuring quality education of other stakeholders, especially the parents/guardians, School Management Committee (SMC) members, community and the state functionaries". It is believed that "the learning outcomes defined explicitly will help to guide and ensure the responsibility and accountability of different stakeholders" (NCERT, 2019, p. 12). It is important to note that this is the only statement in the entire NAS report (NCERT, 2019) that refers to "accountability". Apart from use within the country, NAS is also expected to serve as a tool for connecting India to global parameters.
NAS Consultant 2 explains below that NAS 2017 introduced reporting in terms of proficiencies to align with UNESCO and report against SDG (Sustainable Development Goal) 4 indicators.

And also, when we tried to develop the 2017 report, you can see the proficiency development for the first time. Proficiency development means we connected learning assessment data with the competency level… which means the basic level, below basic level etc. That was aligned to report the SDG goal 4. Because SDG goal 4 came sometime around 2015. And one of the learning goal is the percentage of children achieving minimum proficiency levels in literacy and numeracy. Grade 3 to grade 5 level….[…]..And with that scale, we developed it so that India could report against the SDG goal 4 indicators. (NAS Consultant 2)

In order to institutionalize NAS data for the standard bureaucratic governance purposes discussed above, many efforts have been undertaken and are still being pursued. These efforts indicate the importance given to NAS by the central government in establishing it as the national currency and instrument of educational governance. The central government has fully owned the data and, through NCERT, is making active efforts to make them part of formal educational governance. Such emphasis is particularly striking given that NAS is a district-level sample survey; it is not a student examination or any other type of school-level data. This is unlike TBA models around the world, which have relied on such school data to transform governance processes.

As per the preface to NAS 2017 by the national coordinator in NCERT (2019), NAS is much more than an exercise to collect and report data to states and districts. The implementation of NAS includes in its ambit the "capacity development" of various state, district, and school-level officials to use these data. This indicates that NAS is not designed simply to share results and let state and district bodies interpret and use the data themselves. Rather, the central authorities may work to help these lower entities make sense of the data and use them for the proposed governance purposes.

It is designed to look behind the scorecard to illuminate how our education policies and practices need to evolve to improve the learning levels of our children. The implementation of NAS includes in its ambit the capacity development of school leaders, teachers and the whole network of officials at blocks, DIETs, SCERT, boards of school education and the Directorate of Education in the different States/UTs. (NCERT, 2019)

As per NCERT (2019, p. 4), one of the major differences between NAS 2017 and previous rounds of NAS is the dissemination strategy. The previous rounds included only Joint Review Missions (meetings between key central and state-level authorities) and the MHRD/NCERT website as part of the dissemination strategy. In 2017, however, the dissemination strategy was expanded to include new entities at the central, state, and district levels. For example, at the central level, the Central Advisory Board of Education (CABE), MHRD, and NCERT were included. At the state level, chief ministers, members of parliament from the lower and upper houses, and state-level education functionaries such as principal secretaries, state project directors, and SCERT (State Council of Educational Research and Training) directors were included. At the district level, district officials, block- and cluster-level personnel, and school leaders were included.
NCERT prepared district report cards (DRCs) to communicate NAS results to district officials to improve district-level interventions, and state report cards for state-level officials to assess the state scenario and provide support to district interventions (NCERT, 2019). The NAS 2017 report (NCERT, 2019) provides snapshots of DRCs on pages 175 and 176, and a snapshot of a state learning report card on page 177. Based on the information provided by Consultant 1 in the interview, the American Institutes for Research (AIR) was involved in designing these report cards. NCERT (2019) reports that the national report on the survey results contains more technical detail and advanced statistical reporting than the state- and district-level reports, which are kept simpler for user-friendliness at those levels. It is evident from these snapshots that the reporting in NAS DRCs is significantly different from the school report cards (SRCs) by UDISE discussed earlier in Section 5.1. The NAS DRCs appear less cluttered with information than the UDISE SRCs. The information is provided in terms of percentages, which makes it a little easier for users to understand. There is a guide on the second page to help interpret the figures on the first page at a granular level in terms of specific learning outcomes. It also provides a performance range that could be helpful in understanding students' performance. Towards the end, it lists the lowest-performing learning outcomes of students in a district, indicating priority areas where district bodies must intervene.

As per the NCERT core staff, the user-friendly reporting of NAS through these DRCs was seen as critical not only to help district officials make decisions but also to help teachers understand district performance. Earlier, when NAS was at the state level and the reporting was not user-friendly, grassroots use of the data suffered, as noted in Section 5.2. However, as I mentioned earlier, it is still not clear how teachers might find these data helpful in classrooms.

Initially when we had NAS in 2000 to 2008-9, it was this thick! Do you think this thick report of data will be used and understood and internalized by grass root teachers? No way! No way! It has never been used. It just lays in the library somewhere. And at best the researchers like you who are kind enough, they make use of it. And also, so much money has been involved in collecting the data and doing the studies. (NCERT Core Staff)

The dissemination of NAS 2017 data, along with guidance on how to use them, was carried out across different levels of the bureaucracy through different means. NCERT is spearheading this dissemination activity, in addition to administering NAS. NAS reports were shared with all chief ministers and chief secretaries of the 36 states/union territories of India (NCERT, 2019). Meetings were organized with members of the upper and lower houses of parliament, and district reports were personally delivered in hard copy to inform them about the NAS outcomes of their districts (NCERT, 2019). Workshops were organized at the state and district levels to communicate the results as well as to guide officials on how to use the results for policy, planning, teacher training, and improving pedagogical interventions (NCERT, 2019). The state workshops focused on how state officials can understand state and district report cards and accordingly perform differential planning at the district and state levels (NCERT, 2019).
As per NCERT (2019), in the district-level workshops, copies of the district report cards were used as exemplars, and district officials were "handheld" to understand and interpret them. The districts shared their activities and interventions with NCERT and received feedback (NCERT, 2019). NCERT also developed a document called Post NAS Interventions: Communication and Understanding of DRCs to explain in detail how to interpret and use district-level report cards (NCERT, 2019; 2018). It also encapsulates the steps to be taken at various levels (national, state, district, block, and school) in implementing these findings through various interventions (NCERT, 2019).

Although many efforts have been made to disseminate NAS data use throughout the bureaucracy, it is not clear how these might connect to classroom practices. As per the NCERT core staff, the district report cards were prepared with schoolteachers in mind. However, since district-level sample data give little specific information about the learning levels of children in a particular classroom, it is not clear how the government will make these data useful for teachers. It appears that NAS data are being asked to achieve too many things at once by also targeting teachers in this dissemination process. NCERT has prepared district master trainers to conduct workshops for district officials as well as teachers to help them understand the district-level NAS results and improve pedagogy accordingly in order to improve learning outcomes.

One major change we made in NAS 2017 was that for different stakeholders we prepared different reports. We prepared district reports especially targeting the schoolteachers. If you see the district report cards, they are only two pagers. One page front and other back. The first page talks about the participation, the different percentage of students performing below levels, and the second page gives us learning outcome wise how students are performing. So we started developing at district level somebody called master trainer. From each district we took two people, and they were trained as master trainer from 701 districts, where NAS was done. In India, we have a chain of people in every district in India. After district we have BRC, then CRC, and the teachers. Some of the teachers are also part of BRCs and CRCs. So these district master trainers trained and communicated to teachers. As a teacher when I see that children in my district are performing lower in competency regarding analyzing situations in math or science, then I as a teacher I can try and focus on that. If you see district report card, we have identified 5 low performing learning outcomes. So for each district we had 10 district report cards. 3 for grade 3,5 and 4 for grade 8. If as a teacher I see what are the areas which are problematic for children in the district, then I start finding out what is wrong. Why is it that students are not able to understand. And then what we do is that we also work with them in developing pedagogy to improve those learning outcomes. So we started with two focus: 1) identifying and understanding the district report card to find out what are the learning outcomes in which the district students are not performing well, and then 2) come out with pedagogy which would likely to improve the learning outcomes. (NCERT Core Staff)

In addition to preparing the master trainers, NCERT also directly participated in some of these local trainings.
If it is a small state like Goa, where there are only two districts -north and south goa, I personally conducted the sessions for their classroom teachers. But again in bigger states perhaps difficult. It is difficult, but it can be done. And we have developed a work schedule on how master trainers will go about and train teachers and how those trainings will be done using the BRCs. (NCERT Core Staff)

NCERT has developed short-, mid-, and long-term interventions that indicate how NAS might be used in India for educational governance (see NCERT, 2019; 2018). The short-term interventions include training state-level master trainers (SLMTs) to communicate NAS results as per the district report cards, assist different stakeholders (state and district education officials, teachers, head teachers, school management committees, parents, etc.) in understanding the results, demonstrate exemplar pedagogical interventions developed by NCERT, and train teachers. The mid-term interventions include the development of an intervention handbook by NCERT; the preparation of a team of teachers as master trainers in different subjects; the development of different indices (e.g., a teacher quality index, an infrastructure index) associating NAS data with other data sources like UDISE to help states in evidence-based planning and budgeting; the development of testing items for the state level; and the development of ICT-based learning resources and materials. The long-term interventions include the development of policy briefs for systemic review and reform, guidelines and suggested practices for curriculum reform and teacher training, and a web-based application to address the needs and concerns of teachers and students. The short-term interventions started in 2018, and given the COVID pandemic, it is not clear what stage the interventions have reached. However, judging from various updates on the Twitter handles of some NAS coordinators and state- and district-level officials in India, there seems to have been considerable progress since NAS 2021 in the use of NAS data for planning and teacher training purposes.

Given these developments, it appears that the National Achievement Survey has become an integral part of educational governance in India, particularly governance by the higher entities in charge of schools. NAS is positioned to bring "systemic" transformation by being used directly in formal governance processes from the central level right down to the school level. The quote below by the NCERT core staff reflects this sentiment.

And if you see that NAS 2017, the results were used in National Education Policy, they were quoted there. And it has to be used at different levels-state and districts, for improving the education system. What the education system used to be before NAS 2017 in India is very much different than now if you study. Because many states have now become aware of how we can change. This is mainly because we have gone down at the district level. After NAS 2017, we also did something called Post NAS to communicate the results of NAS 2017 with the state and district level to ensure that it reaches the teacher and the school. So when we went and did the Post NAS activities, the state personnel themselves came up with and said that we can see the state and district learning reports that which districts are not performing well and in what district which is the type of school, management, that is not doing well. Whether the boys are doing well or the girls.
In India, the states also prepare their proposals, and they also have their PAB with the ministry. So these NAS reports were really helpful for districts in preparing their annual work plan budget. We call it AWPB. (NCERT Core Staff)

5.3.2 Comparative Governance

I use the term 'comparative governance' to refer to the idea of the 'comparative turn' in global educational governance proposed by Martens (2007). Practices such as ratings and rankings, which were traditionally applied to industries like sports and entertainment, are now being applied to entities such as political institutions. Such comparisons have significant power in shaping public discourse by creating an air of competition around performance through the attribution of relative positions (Martens, 2007). Martens (2007) explains that comparisons reduce transaction and information costs, as they become instruments for highlighting certain best practices that can be copied by poorer performers. Comparisons are also useful for establishing normative criteria for appropriate behavior (Martens, 2007). Comparisons allow these shifts to happen because they signal a "scientific" approach to political decision-making (Martens, 2007). They convey that decisions are based on objective criteria, pressuring the parties being evaluated to converge toward the norms, practices, or behaviors regarded as best practices under the comparison framework (Martens, 2007). From the perspective of governance, these comparisons show that power does not need to be wielded only by traditional means of regulation, but also by "soft" practices such as ratings or rankings (Abbott and Snidal, 2000). They are good instruments for indicating a shift from government to governance (Grek, 2009). Grek (2009) argues that there is a "taken for grantedness" about such comparative indicators despite all the commentary seeking contextualization in their interpretation. This is indicative of how ingrained they have become in the contemporary education policy lexicon around the world (Grek, 2009).

Apart from standard bureaucratic governance, NAS data are also used for comparative governance purposes in two ways. The first is through the comparative display of NAS results in the NAS performance dashboards and NAS national reports. The second, and more important, is the use of NAS data in the Performance Grading Index (PGI), which grades states against each other. This practice is similar to PISA comparisons. I discuss both uses below.

a) NAS Reports and Dashboards

Comparative reporting of NAS data through reports and dashboards indicates comparative governance. NAS survey results are made available on websites in the form of documents such as national report cards, state report cards, district report cards, and data dashboards with comparison features. The NAS 2017 dashboard (see MHRD, 2018a) displays average performance in each subject for states and districts by gender, location, management, and social group, and according to specific learning outcomes. It also provides a range for the percentage of students who answered questions correctly. It provides state comparisons of NAS scores using a choropleth map of average state performance in each subject. The NAS 2021 report (see MoE, 2022b) and dashboard (see MoE, 2022c) also come with state comparisons using a choropleth map for each subject. Interestingly, in the 2021 reporting, states are compared with reference to how they perform relative to the national average.
Average state performance is also reported side by side in terms of its significance. All of these reporting practices are visually more sophisticated and easier for a range of stakeholders to understand, and they create a clearer comparative discourse than the UDISE school reporting discussed earlier in Section 5.1. NAS results have also received wide media coverage, with headlines and articles comparing state performances against each other, putting a significant spotlight on state performance. For example, "How Rajasthan was both the best and worst state for education" (Bansal, 2022) or "Chandigarh goes from bottom to top in 3 years" (Behal, 2018). Oftentimes, state performance is directly linked to the policies and politics of the state governments. Such headlines can be found far more often for state performance than for district-level performance. Well-performing states can also be seen using NAS data to take credit for the work done by their governments. NAS results have thus put a significant and often comparative spotlight on state performance, creating a sense of competition between states.

b) Performance Grading Index (PGI)

The central government (Ministry of Education) has introduced an interesting use of NAS data for "softer" and "comparative" governance purposes by publishing the Performance Grading Index (PGI) from 2017-18 onwards. So far, three rounds of PGI have been released. PGI is a state-level13 education index prepared from various indicators, including NAS data. States are given overall and indicator-specific grades based on their performance and are compared against each other.

13 Until 2020, PGI was released only at the state level; however, very recently (i.e., in June 2022), it was also released for the first time at the district level for the years 2018-19 and 2019-20, and called PGI-D. In this section I focus only on PGI at the state level, as it has a longer tradition and more elaborate use compared to the just-released PGI-D.

The purpose behind designing PGI is defined quite broadly, and rather vaguely, by the government, i.e., to "catalyze transformational change in the field of school education" (MoE, 2021b, p. 3). Below I discuss PGI's composition, its comparative reporting practices, and their governance implications.

PGI Composition: PGI is scored out of a total of 1,000 points across 70 indicators, each assigned a weightage of 10 or 20 points. The PGI grading system has 10 levels. Level I indicates top performance and a score between 951 and 1,000 points. Level II, also known as Grade I++, indicates a score between 901 and 950. Those with Grade I+ or Level III have scored between 851 and 900. The lowest is Grade VII or Level X, which means a score between 0 and 550 points. There are also sub-indicators under some indicators, where the total points of an indicator are distributed among its sub-indicators. Counting the sub-indicators, the total number of parameters in PGI is 96. The ministry proposes to review and revise this list of indicators periodically, depending on need (MHRD, 2018b).
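To make the banding scheme just described more concrete, the short sketch below maps a hypothetical PGI total to a level. It is purely illustrative and not part of any official PGI tool: only the top three bands and the lowest band are spelled out in the PGI reports cited here, so the intermediate bands are assumed, for illustration, to descend in 50-point steps so that the ten levels tile the 0-1,000 range.

```python
# Illustrative sketch of the PGI banding described above (not an official tool).
# Only Levels I-III and Level X are stated explicitly in the PGI documents;
# the intermediate bands are an assumption, taken here as 50-point steps.

PGI_LEVELS = [
    (951, 1000, "Level I"),               # top performance
    (901, 950, "Level II (Grade I++)"),
    (851, 900, "Level III (Grade I+)"),
    (801, 850, "Level IV"),               # assumed band
    (751, 800, "Level V"),                # assumed band
    (701, 750, "Level VI"),               # assumed band
    (651, 700, "Level VII"),              # assumed band
    (601, 650, "Level VIII"),             # assumed band
    (551, 600, "Level IX"),               # assumed band
    (0, 550, "Level X (Grade VII)"),      # lowest band
]


def pgi_level(total_score: int) -> str:
    """Return the PGI level label for a total score out of 1,000."""
    for low, high, label in PGI_LEVELS:
        if low <= total_score <= high:
            return label
    raise ValueError("PGI totals must lie between 0 and 1,000")


# Example: a hypothetical state scoring 903 out of 1,000 would sit in Level II.
print(pgi_level(903))
```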
The PGI indicators are grouped under five key domains, derived primarily from UDISE and NAS data: 1) Access (e.g., enrollment ratio, transition rate, and retention rate); 2) Governance Processes (e.g., digital attendance systems, recruitment of teachers); 3) Infrastructure and Facilities (e.g., toilets, libraries, drinking water); 4) Equity (e.g., differences in performance between scheduled caste and general caste students, gender gaps); and 5) Learning Outcomes and Quality (performance in NAS). Domains 1, 2, and 3 are measured using annual data from UDISE and other administrative sources, while the Learning Outcomes domain is measured using NAS, a survey conducted roughly every 3-4 years. This means that even though PGI was released yearly, it contained the same NAS figures from the 2017 survey.14

PGI's indicators reflect how India is occupied with two different policy priorities. For example, performance on indicators such as the percentage of schools having libraries, drinking water, functional toilets, etc. indicates that India still has a long way to go to ensure quality inputs. The governance indicators show that India is still grappling with fundamental governance issues such as teacher absenteeism and staff shortages. At the same time, ensuring improvement in learning levels is also a major concern. The composition of PGI appears to be a one-stop integration of India's two nationwide data systems, UDISE (inputs) and NAS (learning outcomes), which have existed quite independently of each other. PGI therefore appears to be an effort to integrate two different policy priorities (inputs and learning outcomes) and their respective data systems.

14 I would like to acknowledge an effort in India similar to PGI, which is now discontinued. Around the same time as PGI was introduced, India's chief general planning body, Niti Aayog, also released a similar index, called the School Education Quality Index (SEQI), using the same data sources. Like PGI, it used the NAS 2017 data to measure performance. The goals and purpose of both indices were the same, but SEQI focused only on outcomes while PGI focuses on both inputs and outcomes. Other major differences between the two were the approaches to comparison (PGI focuses on performance grading, SEQI focused on performance ranking) and the number of sub-indicators (PGI: 70; SEQI: 33). SEQI was later discontinued in favor of PGI, as PGI was seen as more comprehensive and having two indices could cause confusion and duplication.

PGI's Comparative Reporting: PGI data are released through reports and dashboards on government websites with compelling comparative graphics. These graphics are also widely shared on the Twitter handles of the education minister at the center and other associated handles of the central government. States are sorted into different levels, such as Level I, II, III, etc., based on their PGI performance and placed side by side in a color-coded manner so that every state and its level is clearly apparent [see page 8 in MoE (2021b) and page 5 in MoE (2022d)]. Similar color-coded, state-level comparative reporting is also done using choropleth maps of India [see page 6 in MoE (2022d) and page 12 in MoE (2021b)]. The performance of states in PGI is also tracked over the years in a comparative manner. One can find that within each grade category there are vast socio-economic, demographic, and geographical differences between states. For example, in the 2019-20 report (MoE, 2021b), among the states that have Grade I++ (Level II), Tamil Nadu has a population of 67.86 million, Kerala 34.63 million, and the Andaman and Nicobar Islands only 0.43 million.

PGI's Governance Implications: In some ways, PGI can be considered a PISA-like experiment in India.
Through PGI the central government creates a new dynamic between the center and states in which data in the form of grades, represented in a comparative and visually compelling manner, are used as an instrument for nudging states to demonstrate performance on target indicators set by the central government in consultation with state governments. Through these comparisons, the central government is also relying on the platform of PGI to "officially" stimulate inter-state policy borrowing of best practices. This is similar to how the OECD governs the global discourse on best practices by using PISA. For instance, the 2017-18 PGI report released by the central ministry (MHRD, 2019) describes PGI's purpose as follows:

The exercise, which is the first of its kind at such a scale, envisages that the Index will propel the States and Union Territories (UTs) towards undertaking multi-pronged interventions that will bring about the much-desired optimal education outcomes. The purpose of the PGI, therefore, is to help the States and UTs to pinpoint the gaps and accordingly prioritize areas for intervention to ensure that the school education system is robust at every level. At the same time, it will also act as a good source of information for best practices followed by States and UTs which can be shared. (MHRD, 2019, p. 1)

Similar to NAS results, PGI results have also received wide media coverage, with headlines and articles pitting states against each other. For example, "Kerala tops among large states, UP scores last" (The Hindu, 2019) and "It's Tamil Nadu and Kerala again, plus Punjab, that top Modi govt's school grading index" (Sharma, 2021). The news publications report not only the overall PGI score but also the sub-indicators that show poor and strong performance. For example, see a report in the Times of India (2020) below:

In the domain relating to governance and management, the top score of 279 was that of Gujarat. That was 78% of the maximum points. Governance and management is of the highest importance as insights help in critical structural monitoring from the attendance of teachers to ensuring a transparent recruitment of teachers and principals. (Times of India, 2020)

These media publications often compare two contrasting states against each other. That is, when a generally poor-performing state overtakes another state with a better public image in terms of educational performance, the media publications highlight that prominently. For example, the Times of India (2020) report mentions that "Gujarat has overtaken Kerala and moved from the third rank to second in the Performance Grading Index," and Tribune India (2022) reports, "The school education system in Haryana has been ranked below neighboring Punjab and Chandigarh by the Union Ministry of Education." The reports also capture the reactions of senior officials in the state governments to their performance. For example, Hindustan Times (2022) notes the statement of the state school education director: "We will review the scores and work on the shortcomings to further improve Chandigarh's score in the coming years." Just like PISA, the power of comparative grading and presentation through PGI and the subsequent media coverage seems to have had an impact on the states. As described by NAS Consultant 1, in the initial years of implementing NAS 2017, the use of NAS in the form of PGI was more impactful at the state level in getting states to acknowledge the new priorities and prepare for the next steps.
At the moment I don't think the data is being used much at those levels (referring to NAS data at state and district levels). But one thing also happened parallelly. The government came up with two indicators- PGI and SEQI. For that significant weightage is given to learning outcomes and data for that comes from NAS. So I think because of that states have become conscious about improving the NAS results. Not so much as in this is finding and therefore I need to do this. Now if you talk to states, they will ask you tell us how to improve the NAS performance. So then tell them that you need to look at this report and see the areas where you are lagging behind and then you need to work on it. So independently the NAS report hasn't moved things. But it is the use of this data by the central government for performance grading that seems to have made states look at the performance and ask how we improve our performance. (NAS Consultant 1)

5.4 Conclusion

From 2017 onwards, NAS is conducted as a district-level sample survey of the learning performance of students in public schools of India (government and government-aided), covering different subjects in Grades 3, 5, 8, and 10. NAS does not measure curricular knowledge but focuses on capturing the learning competencies of students. It is the only standardized learning performance data available nationwide in India. NAS results are shared through district, state, and national reports, and the NAS dashboard. It is clear that from 2017 onwards NAS is intended to become an important instrument of nationwide educational governance in India, and significant efforts have been and are being made in this direction. One important indication of the increasing power of NAS in India is that while NAS 2017 only covered public schools, NAS 2021 also covered private schools.

NAS is designed for standard bureaucratic governance practices, or practices that are a regular feature of the Indian bureaucracy such as planning, pedagogical design, curriculum development, teacher training, etc. One important thing to note about NAS use for standard bureaucratic governance is that it targets many users. NAS is expected to be used not just by central, state, and district officials in the Indian bureaucracy but also by schoolteachers. Given that NAS is a district-level sample survey that does not provide information about school- or student-level performance, it is not clear how it might be useful for teachers in classrooms. It appears that, in the absence of any other standardized testing data of good quality, NAS has become the go-to data source for various kinds of educational decision-making, at least at the moment. Another important thing to note is that currently there are no accountability norms attached to improving performance on NAS scores. However, there is a softer way of nudging states and districts to perform in NAS through the power of comparative governance.

Two features make NAS a unique data instrument for standard bureaucratic governance. Firstly, despite being a district-level sample survey conducted roughly every 3-4 years, the uses and effects for which NAS is being employed are core and direct governance processes. This is unlike other global large-scale sample surveys, which tend to influence these processes more indirectly. In TBA countries especially, such core governance processes heavily rely on the use of annual national tests or census surveys/assessments, which provide much more granular information at the student or school level.
Secondly, the ownership of the data by the central government and the various 'centrally led' efforts to disseminate this data and institutionalize its use are striking and show how the government is actively elevating NAS as a currency of national educational governance. This is in contrast to the experiences of some developing countries where the government did not make active efforts to institutionalize data, leaving the data out of key governance processes (see Diaz Rios, 2020). Both these features also indicate that NAS is significantly different from TBA models, where data use has fostered a post-bureaucratic governance approach. Here the use of NAS substantially remains within the bureaucracy, for the purposes of the bureaucracy. It sustains the heavy-handed role of the bureaucracy in the Indian education system (see the background on centralized education governance in Chapter 3, Section 3.3) without relinquishing its functions.

Comparative reporting of NAS data in NAS reports and dashboards and, more importantly, through the Performance Grading Index (PGI), which grades and compares states, appears to have brought comparative governance into the Indian bureaucracy. I propose the term comparative governance to refer to the "comparative turn" (Martens, 2007) generally emerging in global educational governance. By this I mean comparisons in the form of ratings, rankings, etc. that create narratives about high- vs. low-performing entities and initiate the transfer of best practices from high- to low-performing entities. Such comparisons are seen to have great power to nudge practices through softer means compared to the harder means of rules and regulations. PGI is also officially employed for the purpose of inter-state policy borrowing. The wide media coverage received by PGI is steering states to act in different ways. One striking observation about PGI is that the central government is not using terminology like "accountability" as a reason for developing it. At the moment, it also does not come with any set of incentives or sanctions to improve performance. PGI is only carried out as an exercise of sharing the data in a compelling manner through reports and dashboards, which are then given major coverage by the media. However, this might change in the future, as the central government is planning to make future PGIs more outcomes-focused and also attach some incentives, like grants, for improving performance. It would be interesting to see how this competition plays out, incites policy borrowing, and shapes governance processes in the country. The quotes below from an NAS consultant and a core PGI staff member hint at these upcoming shifts.

One thing is that at the central level you shouldn't be focusing on inputs at all. You just need to focus on the outcome. Now I am working closely with them on these things, and we are trying to get rid of those input level indicators. Let the states do whatever they want with inputs. Actually, let the schools decide what they want to do. But we need to focus on outcomes at the central level. So one is to minimize those indicators. (NAS Consultant 1)

There is a 15th Finance Commission that actually is looking after how to allocate some incentives to various states in the school education sector and they have suggested a formula… using not all the parameters. They have shortlisted some 10 parameters from PGI, and they are telling that consider that particular thing and based on that some incentive will be granted to states.
So we are right now in the process of working it out because it will probably come from this financial year. (PGI Core Staff)

Having explained what NAS is and how it has been gaining an important place in national educational governance in India from 2017 onwards, I will now explain the reasons behind its rising importance through Chapters 6, 7, and 8. Chapter 6 shows that there was a major demand for data like NAS 2017 in India due to the lack of such data and rising pressure on the government to improve learning outcomes. Chapter 7 shows that, given this scenario, NAS 2017 was the only feasible instrument that could be developed to govern India considering its many capacity constraints. Chapter 8 shows how this new NAS has blended into existing bureaucratic governance practices, making it conducive for Indian educational governance without causing many disruptions. As a result, NAS has garnered an important place as an instrument of educational governance in India.

CHAPTER 6: DEMAND FOR NAS 2017 IN INDIAN EDUCATION POLICY

In the previous chapter, I explained what NAS is and how it is used for supporting standard, existing bureaucratic governance practices and creating a form of comparative bureaucratic governance in India. Through these uses, I also explained how it has come to assume a central position in nationwide educational governance from 2017 onwards. In this chapter, I explain why NAS has become so important in India as a governance instrument. I show how the changing education policy context in India has created a clear and pressing demand for data like NAS (the 2017 and 2021 versions). To make this argument about a clear and pressing need for NAS 2017-like data, I trace India's policy and data developments15 from the 1990s to the present. I divide this time into three significant phases: Phase 1 (1990s to early 2000s); Phase 2 (early 2000s to 2015); and Phase 3 (2015 to present). In Phase 1 the national focus was on the universalization of primary education, in Phase 2 the attention shifted to the universalization of elementary and secondary education, and Phase 3 is currently focused on improving learning outcomes and competencies in the country along with the universalization of K-12 education. I show how, until Phase 3, India had adopted an inputs- and schooling-focused policy (i.e., a greater focus on quality access, enrollment, retention, and completion with equity over specific learning content or competencies) and governance approach (which aligned with the UDISE data), instead of a learning outcomes-focused approach. Bureaucratic capacities, organizational structures, and systems were built to serve this inputs- and schooling-focused approach. As a result, the government's data collection efforts, their design, purpose, and use until Phase 3 also reflected this orientation. It is mainly in Phase 3 that the government has embraced a clear and more concrete focus on improving learning outcomes, creating a demand for data like NAS (2017, 2021). Tracing this historical process is significant for the story of NAS because it helps to explain the need for NAS-like data (2017, 2021) in India and how previous data initiatives like UDISE, designed for inputs- and schooling-focused purposes in Phases 1 and 2, are not useful for serving the learning-focused goals that NAS is aimed at.
Yet, tracing the policy process also gives a glimpse of how these previous efforts, like UDISE, created a data infrastructure that may have generated a certain path dependence in how the post-2017 NAS was organized and its use orchestrated (more specific details on this follow in Chapters 7 and 8).

15 To remain focused, I only discuss efforts that have culminated in UDISE and NAS, as they are currently the two official sources of education statistics in India available nationwide and at regular intervals. Other fragmented or external efforts to collect education data during these phases can be found in Tilak et al. (2014) and Mehta (2017).

Figure 6.1 below summarizes the key policy, capacity, and data developments across the three phases, which the upcoming sections explain.

Figure 6.1: Timeline of key policy, capacity, and data developments

6.1 Phase 1 (1990 to early 2000s): Towards Universalization of Primary Education: Focus on Inputs and Schooling

Starting with Phase 1, i.e., 1990, is necessary to understand why India did not have any standardized school-level learning performance data for nationwide governance until recently, which ultimately led to the demand for district-level learning data like NAS 2017. Phase 1 shows that from the beginning of the 1990s India had far more fundamental issues to address in education, such as providing universal access to primary education (grades 1 to 5) and ensuring children complete this education. India did not have the bureaucratic capacity before that time to efficiently plan and monitor these universalization efforts. Therefore, from the 1990s to the early 2000s the government focused heavily on providing universal access to primary education (ensuring each child is enrolled in school and does not drop out), while parallelly building the bureaucratic capacities needed to perform this task efficiently and to expand it to higher school grades in the future. Since bureaucratic capacity had to be built from scratch in some areas, the government introduced a range of pioneering measures such as greater central government investment in education, decentralized planning, new offices and institutions, project-mode reform, level-wise incremental expansion, etc. In many ways, this was the period in which the education bureaucracy as it exists today was built. The education data collected in this period focused on the priorities that were urgent at that time, i.e., universal access, universal supply of quality inputs, schooling, and assisting bureaucratic functions of planning, monitoring, evaluation, etc. at the primary education level. In this section, I detail these educational challenges from Phase 1, how the government employed pioneering bureaucratic changes to deliver universal primary education, and how data were developed to address these priorities. This discussion shows that while countries like the US and the UK had been collecting standardized test data since the 1960s, in India systems, infrastructures, and processes were still being built to collect inputs- and enrollment-related data at the primary school level. The Indian government was focused on ensuring that all children were enrolled at the primary level. Concerns about what children were learning, or about testing them, were still far from anyone's mind. Considering that such information is generally easier to collect than standardized assessment data, this also shows how large the capacity gap was between India and developed countries during this timeframe.
6.1.1 Challenge of Universalizing Primary Education in India The Indian government’s promise of providing universal elementary education in the country took a late start in the 1990s. Post India’s independence in 1947, the newly formed Indian constitution in 1950 made a commitment to the education of children in Article 45. It made the state responsible for providing free and compulsory education to all children up to the age of 14 by 1960. However, this goal was not achieved by successive governments due to resource 70 constraints, lack of political commitment, and various other factors (Harris, 2017). It was in the late 1980s and early mid-1990s that significant international funding started getting directed towards elementary education by high-profile donors like World Bank, which led to significant efforts in this direction(Colclough & De, 2010). Apart from the issue of late beginnings, the task of achieving universal elementary education in India was considered the most daunting compared to any other nation in the world due to its massive scale(Colclough and Lewin, 1993). By the 1990s, ensuring universal elementary education in developing countries became a global development agenda, endorsed by major international organizations and agencies(Colclough & De, 2010). Given India’s population size (one-third of the world’s poor), relative poverty, and rate of population growth, India was given significant attention in these global discourses (Colclough & De, 2010). It was treated as a priority country with regard to the achievement of international targets that later culminated in the Millennium Development Goals (MDGs) (Ward, 2011). India’s progress was essential to ensure successful implementation of discussions held at the World Conference on Education for All in Jomtien in 1990 because out of a global total of 145 million out of primary school children, around 30-40 million children were estimated to be in India (Lockheed, Verspoor, and Associates 1990, 28–9). The annual additions to Indian primary age cohort were larger than anywhere else with more than one million new places required every year just to avoid retrogression in the proportion of children enrolled (Colclough and Lewin, 1993). India’s National Education Policy of 1986 (MHRD, 1986) further provides insight into the scale of issues and the need for dedicated efforts in this direction, as described below. As a result, the policy treated universal access, enrollment, retention, and improvement in quality as its main priorities(MHRD, 1986). Even so, an acceptably large number of habitations are still without primary schools and nearly one-third of the schools in rural areas have only one teacher. The emphasis so far has been on enrolment of children - approximately 95% children in 6-11 age-group and 50% children in 11-14 age-group are enrolled in schools, the corresponding figure for girls being 77% and 36% respectively. However, nearly 60% children drop out between classes I-V and 75% between classes I-VIII. In urban areas there is overcrowding in schools and the condition of buildings, furniture facilities and equipment is unsatisfactory in almost all parts of the country. Rapid expansion, which was not accompanied by sufficient investment of resources, has caused a deterioration in academic standards. 
A programme of non-formal education has been started but in terms of spread and quality it is rather unsatisfactory.(MHRD, 1986) 71 6.1.2 Pioneering Interventions for Building Bureaucratic Capacity Access to foreign funding gave a major push to India to design and implement integrated efforts for the universalization of primary education. Post India’s adoption of economic liberalization reforms and subsequent structural adjustments due to economic crisis in 1991, it started accepting foreign funding in education sector in the form of loans. World Bank was the largest creditor, and the funding was viewed as providing a ‘safety net’ due to cuts in social sector spending (Sarangapani & Vasavi, 2003). European Union was also a large donor(Sarangapani & Vasavi, 2003). Prior to that there were few large scale foreign funded projects in the form of aid such as Andhra Pradesh Primary Education Programme (APPEP) funded by the ODA (now called DFID, UK), Siksha Karmi program funded by Netherlands, and Lok Jumbish funded by SIDA(Sarangapani & Vasavi, 2003). Combining funding of 1.62 billion USD from different sources (World Bank, ECU, DFID, UNICEF), the Government of India centrally designed and launched the District Primary Education Program (DPEP) in 1994 to make significant progress in the universalization of primary education (grade 1 to 4 in some states, and 1 to 5 in others), and also parallelly build state capacity in planning, management, and evaluation to achieve the goal (Glinskaya & Jalan, 2003). DPEP was largely focused on ensuring schooling inputs in primary education and maintaining children in schools. The central objectives of the DPEP program included providing access to primary education for all children (grade 1 to 4 in some states and 1 to 5 in others), reducing primary dropout rates to less than 10%, increasing learning achievements of primary school students by at least 25% and reducing gender and social gaps to less than 5% (MHRD, 2005 as cited in Azam & Hang Saing, 2017). The interventions such as provision of textbooks, teacher recruitment & training, building schools and other facilities, repairing existing school facilities, developing & providing learning aids and materials etc. were implemented to achieve these objectives (MHRD, 1997). DPEP’s understanding of educational quality was also input-focused, even though it was the first time an explicit mention was made by a government intervention on improving student learning levels. The idea of improving learning levels in DPEP was related to National Education Policy’s (1992 revision) emphasis on improving essential learning levels and Programme of Action’s (POA) (1992) concept of improving Minimum Learning Levels (MLLs) and providing education of comparable quality irrespective of social backgrounds of students, thus combining 72 quality with equity (MHRD, 2001). The measurement of student learning levels was to be done on a sample basis to evaluate the effectiveness of the program and was not part of any school accountability measures (see MHRD, 1997). Since DPEP was a first-of-a-kind and comprehensive initiative focused on delivering universal primary education, it became a path-defining initiative in India in terms of building the capacity of the bureaucracy at all three levels (national, state, and district) and introducing an implementation approach that was substantially distinctive from previous approaches. 
Below I discuss some of its most unique contributions to Indian bureaucratic capacity building and educational governance. Particularly, bringing greater involvement of central government in educational governance, initiating decentralized planning at the district level, creating new institutions and offices across levels, and starting a culture of policy implementation in a ‘project- mode’ with level-wise expansion. DPEP started the trend of greater involvement of central government in educational governance. Education is a “concurrent” or shared responsibility of the central and state governments in India. The central government’s responsibility is to set a national level policy, stimulate innovation, and create a planning framework (World Bank, 2003). Up until 1986, the states were fully responsible for the financing and provision of primary education. Since the urgency was given to progress towards primary education in the 1986 National Education Policy (MHRD, 1986), related Programme of Action (MHRD, 1992), and the 1990 Jomtien conference, the central government committed to financing a portion of development expenditure in this sub- sector through Centrally Sponsored Schemes (CSS) such as the DPEP (World Bank, 2003). DPEP’s funding was shared between center and state as 85:15 (Azam and Hang Saing, 2017). Therefore, DPEP initiated significant reform in India by bringing greater involvement of central government in primary education, and through that centering the importance of universal primary education as policy priority (World Bank, 2003). DPEP started decentralized education planning in India by making district the unit of education planning (Government of India, 2000). Earlier education plans were only made at state and national level (Varghese, 1994). This was realized to be problematic because there were huge variations in district level education outcomes and such higher level plans obscured allocation of adequate resources to most priority districts (Varghese, 1994). For example, as per 1991 census, female literacy rate varied between 8% (Barmer district, Rajasthan) to 94% (Kottayam district, 73 Kerala) (Azam & Hang Saing, 2017). Similarly, dropout rates varied from 0% in Kerala state to 60% in Bihar state (Azam & Hang Saing, 2017). DPEP project devised central guidelines to ensure “basic degree of uniformity amidst diversity” in district plans but within the guidelines granted states and districts flexibility to design programs to suit their local contexts (World Bank, 2003). Since DPEP involved pioneering interventions from central to state and district levels, new institutions, offices, and capacities at all three levels were initiated. Apex educational institutions like the National Council for Educational Research and Training (NCERT) and National Institute of Educational Planning and Administration (NIEPA) were involved in the process, and specific offices were developed or enhanced at all three levels for program planning, implementation, monitoring, research and evaluation, training, etc. (World Bank, 2003). For example, central DPEP bureau, state registered implementation societies (SIS), state project offices (SPOs), district project offices (DPOs), Block Resource Centers (BRCs), Cluster Resource Centers (CRCs) etc.(World Bank, 2003). DPEP involved many fundamental trainings and interventions at district and state levels such as school mapping exercises, preparation of micro level plans, developing teams of trainers and facilitators etc. (World Bank, 2003). 
Development of BRCs and CRCs filled the institutional vacuum at below district levels and were aimed to provide vital support and resources to schools and communities (World Bank, 2003). DPEP was a central government initiative, and its execution was done separately from the main education departments in a project mode (government also calls it “mission mode”) (MHRD, 1997). The semi-autonomous state implementation societies were created to manage project management and flow of funds to districts(MHRD, 1997). This was done due to the historical weakness and tendency of state education departments to not release funds for use at lower levels in timely manner (World Bank, 2003). DPEP was designed to intentionally bypass state finance departments and not commingle project funds with overall budget transfers to states (Govinda & Matthew, 2018). It was believed that this project-based approach would move things faster in the rigid education bureaucracy. For a fixed time this would give the required push in the primary education sector and bring greater efficiency than traditional approaches(Govinda & Matthew, 2018). It will empower DPEP’s management structure with certain degree of autonomy, and this will result in quicker decisions(Govinda & Matthew, 2018). Working through its own bank account will ensure timely and uninterrupted fund flow (Govinda & Matthew, 2018). Attainment of project goals was 74 set to a time period of five to seven years and this time bound approach was seen as crucial to keep the pressure on to sustain the momentum until the project goals were realized (World Bank, 2003). World Bank (2003) notes that the resultant heightened stature of primary education in the states made it possible to garner state support for the timely release of state shares of the project budget, and placement of effective Project Directors and Education Secretaries. The government, cited in World Bank (2003, p.44), describes that “DPEP was by design not a finance driven program, but one that sought to build systems that are cost-effective, equitable, replicable and sustainable”. Apart from the separate institutional set up and timeline based goals, another major element of DPEP’s project approach was that it was only implemented in selected districts of the states and not across the state (Govinda & Mathew, 2018). The states were selected based on their low female literacy levels, and the number covered by DPEP represented about half the total number of districts in India (World Bank, 2003). Therefore, while central government was pursuing DPEP program in these districts, state governments were parallelly implementing their own supply side interventions in non-DPEP districts and at higher academic levels (Govinda & Mathew, 2018). Related to the point above on DPEP working in a project mode is that DPEP started the trend of level wise project expansion (Govinda & Mathew, 2018). DPEP focused on selected districts in states, at only primary levels (grade 1 to 5 or 1 to 4) (Azam & Hang Saing, 2017). The first stage of the program was introduced in 1994 in 42 districts across 7 states and was completed in September 2001(Azam & Hang Saing, 2017). Stage 2 of the program started across 80 Indian districts in 1996 and was completed in December 2002(Azam & Hang Saing, 2017). Stage 3 started in 1998 in 27 districts and ended in March 2003(Azam & Hang Saing, 2017). Other stages, including stage 4 started in 1999–2000 and added 70 more districts (Azam & Hang Saing, 2017). 
The total number of districts covered by all DPEP phases was 219 (248 with bifurcated districts) covering all 18 states of India (Government of India, 2000). Based on significant progress in this area, it was implemented in non-DPEP districts (World Bank, 2003). This level wise expansion approach has been adopted by government since DPEP to universalize other remaining levels of education (Govinda & Mathew, 2018). The government launched an integrated universalization program for entire elementary education called Sarva Shiksha Abhiyan (SSA) in 2001. After the noticeable success in elementary education sector, government launched separate program for universalization of secondary education called Rashtriya Madhyamik Shiksha Abhiyan (RMSA) in 2008. Later both these missions were 75 integrated into a single integrated mission called Samagra Shiksha in 2018, also incorporating early childhood and higher secondary levels. 6.1.3 Laying Foundations for Schooling and Inputs Focused Data in India To manage the DPEP program or the universalization of primary education, a school-based EMIS system called District Information System for Education (DISE) was created that for the first time in India collected school-level data, that too computerized, albeit in selected DPEP districts. The task of designing and implementing DISE was accomplished by National Institute of Education Planning and Administration (NIEPA) with assistance from UNICEF. The data collection variables were decided based on the priorities of DPEP project, i.e. improving access and retention in primary education with satisfactory quality in selected districts. Therefore, DISE was the key source of data to implement DPEP. Before DISE, it was difficult to measure the real requirements of the country as there was no school and district level data available. It was difficult to get most accurate estimations of basic requirements such as priority areas for opening schools, requirement of teachers in schools etc. Mehta (n.d.) writes that at the project inception stage, a sound information base for planning and monitoring of project intervention was an almost non-negotiable requirement but there were many challenges to establish and sustain such a system. This was particularly so as the prevailing system had completely lost its credibility with the data users. The educational statistics collected by the states under the guidance of the MHRD were not only inadequate to meet the growing needs of the decentralized planning but were characterized by inordinate delays, highly aggregative and were not amenable to validation and reliability tests. Since school statistics formed the core of educational statistics, it was rightly recognized that major reforms in school statistics both in terms of their scope and coverage as well as availability have to be carried out (Mehta, n.d.). Since DISE was the first time data were to be collected from schools for district planning, the setting up of DISE required major bureaucratic capacity building and was not an easy process. Systems and procedures had to be established for data collection, management, computerization, reporting etc. in a such a manner that they could be useful for district plans, budget allocations, monitoring, and evaluation. The openly available YouTube videos on DISE’s official trainings give a glimpse of how setting up DISE was not easy as it required many trainings and workshops across different levels to communicate the system and procedures for quality compliance. 
Making this system fully operational and effective required the creation of new organizational structures and systems as demonstrated by Mehta (2017, pp.8). Data were to be 76 filled at school or village level and reported to CRC (Mehta, 2017). CRC would monitor data quality and aggregate data to report to BRC (Mehta,2017). District project office would compile the district data and report to state project office which would prepare state and district reports and maintain the state database of DISE (Mehta, 2017). At the central level, NIEPA was responsible for managing countrywide DISE database and preparing national, state and district reports (Mehta, 2017). NIEPA designed and developed core Data-Capture Formats in consultation with the experts and states and designed the software in-house for implementation at the district level and provided necessary technical and professional support to all the DPEP districts and states (Mehta, 2017). With DISE, additional mechanisms for data validation and quality control of school statistics were also introduced (Mehta, 2017). Despite all the new capacity building, organizational changes, and challenges, DISE was still not available for levels above primary education and in parts of India that did not implement DPEP project. In its first year, i.e. 1994-95, DISE was implemented in seven DPEP phase-one states in 42 districts, and it reached to about 65,000 primary schools/sections (Mehta, 2017). The first major review of the DISE software was undertaken during 1997-98 (Mehta, 2017). By the end of DPEP in 2000-01, DISE could cover 18 states and 272 districts, but all confined to DPEP project states and covered only the primary education (Mehta, 2017). DISE was remarkable in starting a robust system collecting data directly from schools, but it was not designed to measure or track learning levels within schools as it catered to a schooling and inputs-focused policy. DISE was the first effort in India where schools were registered with detailed information about their available infrastructure, facilities, resources, teachers etc. (Mehta, 2017). But since DPEP only aimed for ensuring minimum learning levels, learning assessment studies were conducted to measure the same and report the overall effectiveness of the program without making learning improvement part of any accountability norms or policy 16 (MHRD, 1997). However, this phase led to beginning a culture of baselines, social assessments, and data based planning in Indian context (see World Bank, 2003). The infrastructure developed in this period ultimately evolved to become UDISE system discussed in Chapter 5 and to be discussed in the next section. It also created support systems for conducting NAS surveys in the future. 16 More information about these studies can be found at: https://www.educationforallinindia.com/page47.html 77 6.2 Phase 2 (early 2000s to 2015): Universalization of Elementary and Secondary Education: Primarily focused on schooling and inputs with emerging attention to learning assessments The Phase 2 of Indian education policy from early 2000s to around 2015 focused on the universalization of elementary and secondary education, after the focus on universalizing primary education (universal access, enrollment, retention, completion etc.) in Phase 1. This was an important and interesting phase where many groundbreaking initiatives were taken by the central government. 
In this phase, the government largely employed an inputs- and schooling-focused approach to universalize elementary and secondary education, following the template created by DPEP in the earlier phase. Similar to Phase 1, the capacities of the bureaucracy had to be built in programmatic as well as data-related areas to support inputs-focused planning, monitoring, and evaluation of programs. Interestingly, in addition to the focus on inputs and schooling, the government also adopted measures that discouraged a testing culture in elementary schools through initiatives like the No Detention Policy and Continuous and Comprehensive Evaluation. India also decided to withdraw from PISA assessments. Therefore, the data collected in this period also largely reflected these developments. It was only towards the end of this phase, when wide criticism of this approach emerged, that the need for a greater focus on learning outcomes was realized. In this section, I describe these influential policy initiatives of this period.

6.2.1 Universalization of Elementary Education with Inputs and Schooling Focused Policy

From 2000 onwards, the work accomplished under DPEP became a template for universalizing elementary education. Based on the achievements made under DPEP through its unique approach, the central government decided to take a more ambitious plunge by starting the National Campaign for Universalization of Elementary Education, or Sarva Shiksha Abhiyan (SSA), which was aimed at the universalization of the entire elementary education cycle (Govinda & Mathew, 2018; World Bank, 2003). The education scenario in the country described in the program framework gives a glimpse of the issues it aimed to address (MHRD, 2000):

But the flip side is that out of the 200 million children in the age group of 6–14 years, 59 million children are not attending school. Of this, 35 million are girls and 24 million are boys. There are problems relating to drop-out rate, low levels of learning achievement and low participation of girls, tribals and other disadvantaged groups. There are still at least one lakh habitations in the country without schooling facility within a kilometer. Coupled with it are various systemic issues like inadequate school infrastructure, poorly functioning schools, high teacher absenteeism, large number of teacher vacancies, poor quality of education and inadequate funds. In short, the country is yet to achieve the elusive goal of Universalization of Elementary education (UEE), which means 100 percent enrolment and retention of children with schooling facilities in all habitations. It is to fill this gap that the Government has launched the Sarva Shiksha Abhiyan. (MHRD, 2000)

Similar to the DPEP program, SSA was tasked with diverse policy priorities, albeit on a much larger scale. The objectives of SSA were (MHRD, 2000):

• All children in school, Education Guarantee Centre, Alternate School, 'Back-to-School' camp by 2003.
• All children complete five years of primary schooling by 2007.
• All children complete eight years of schooling by 2010.
• Focus on elementary education of satisfactory quality with emphasis on education for life.
• Bridge all gender and social category gaps at primary stage by 2007 and at elementary education level by 2010.
• Universal retention by 2010.

SSA's implementation framework (MHRD, 2000) reveals that, similar to DPEP, the interventions to achieve these objectives in SSA were also input-based, depending upon district-specific requirements.
Overall, SSA proposed interventions such as construction of schools and other facilities, teacher recruitment, teacher capacity building, strengthening of academic support structures at the local level such as block and cluster resource centers, special girls' hostels, curriculum development, scholarships, etc. The understanding of 'educational quality' in SSA was also inputs-focused. As per the framework, the list of areas identified to improve quality education included good school buildings and equipment, quality ECCE, classroom attendance and retention, teacher recruitment, pre-service and in-service teacher education, teacher motivation, school supervision, curriculum, teaching and learning materials, learning processes such as child-centered activities, remedial teaching programs, stress-free evaluation and grading systems, reduced course load, and participatory management (see MHRD, 2001). Assessment studies were planned to periodically assess the effectiveness of the overall programme and its interventions. In the section on Quality Issues in Elementary Education in the SSA framework for implementation released in 2001, the ministry's inputs-focused understanding of quality is clearly evident:

Thus ensuring quality in the inputs and processes becomes necessary if the quality achievement is aimed at. Quality issues in elementary education will, therefore, revolve around the quality of infrastructure and support services, opportunity time, teacher characteristics and teacher motivation, pre-service and in-service education of teachers, curriculum and teaching-learning materials, classroom processes, pupil evaluation, monitoring and supervision etc. Indeed improvement of quality in these parameters and its sustenance is a matter of grave concern for the whole system of education. (MHRD, 2001)

SSA continued the DPEP-style project mode, with a much larger vertical and horizontal scope. Based on the SSA implementation frameworks (MHRD, 2000; 2001), it appears that SSA was designed to fill the gaps which DPEP left behind and to build on the template it created. It was organized similarly, in a mission or project mode, and expanded to districts across the country where DPEP was not implemented. In addition, the project was expanded to higher grade levels to cover the whole of elementary education. Therefore, SSA was tasked with significant vertical and horizontal expansions, with capacity building in both these dimensions, to achieve the task of universalization of elementary education.

Given these vertical and horizontal expansions, many efforts were made to strengthen bureaucratic capacity across levels to deliver inputs for universal elementary education. The implementation framework (MHRD, 2000) discusses a range of areas across levels in which systems were to be established, such as district-level perspective planning till 2010, greater community participation in planning, microplanning, school mapping, preparation of village education registers, etc. at the district level. Similarly, at higher levels, processes were to be established for monitoring and inspecting lower levels, determining resource requirements, and financing accordingly (MHRD, 2000). Therefore, it can be said that SSA not only had diverse programmatic goals but also major responsibilities in terms of capacity building to effectively execute the project to completion across the country, from the national to the school/district level.
As per SSA’s implementation framework (MHRD, 2000), carrying on the model of DPEP, SSA established center state partnership in achieving universal elementary education, and bestowed central government with greater powers. The assistance under the programme of Sarva Shiksha Abhiyan started with center-state funding share of 85:15 and then later became 75:25, 50:50 and 60:40 along the way (See MHRD, 2000; Bordoloi and Kapur, 2019). SSA formally established greater role of central government across elementary education sector over the country not just in terms of setting guidelines and frameworks but also funding, monitoring, evaluation, research, capacity building, planning and other forms of support. The center played active role through institutions like NIEPA and NCERT in building capacities from district to state levels. After significant progress under SSA in elementary enrollments, the government enacted an inputs and schooling focused `Right of Children to Free and Compulsory Education Act 2009' (RTE Act). This act granted elementary education as a fundamental right of children between ages 6-14 years of age and made the state liable to provide education without any cost or charges to the 80 child (Harris, 2017). The act specified some minimum norms for elementary schools in following areas. All of these norms were input and schooling-focused and did not contain any guidelines about improving learning levels and ways to improve school accountability. • Provisions for admitting out of school children in age-appropriate classes. • Duties and responsibilities of appropriate Governments, local authority, and parents in providing free and compulsory education, and sharing of financial and other responsibilities between the Central and State Governments. • Norms and standards regarding pupil teacher ratios (PTRs), buildings and infrastructure, school-working days, teacher-working hours, school management, library and learning equipments etc. Teacher deployment norm set at school level rather than district or state to avoid rural-urban imbalances. • Prohibition of deployment of teachers for non-educational work. • Appointment of appropriately trained teachers, i.e. teachers with the requisite entry and academic qualifications. • Prohibition of (a) physical punishment and mental harassment; (b) screening procedures for admission of children; (c) capitation fee; (d) private tuition by teachers and (e) running of schools without recognition. • Development of curriculum in consonance with the values enshrined in the Constitution, and which would ensure the all-round development of the child, building on the child's knowledge, potentiality and talent and making the child free of fear, trauma, and anxiety through a system of child friendly and child centered learning. 6.2.2 Measures to Avoid Testing Culture in Schools Below I explain how critical measures were taken in policy to avoid testing culture in schools in Phase 2. Particularly, I discuss the enactment of No-Detention Policy, Continuous and Comprehensive Evaluation, Reforming SSA with these provisions, and withdrawal from PISA. Not only was the RTE act inputs focused, as discussed earlier, but it also created an environment that shifted government schools’ focus away from any type of testing-based governance. 
Due to concerns about students dropping out from elementary school under pressure of school exam performance and to remove the taboo of failure due to class retention, RTE act introduced No Detention Policy which required the promotion of students in the next grade irrespective of their performance till Class 8 (Sabharwal, 2018). The RTE act instead promoted the use of Continuous Comprehensive Evaluation (CCE) to evaluate different aspects of children without treating it as basis of class promotion (Sabharwal, 2018), as it was seen as a more holistic and less stressful approach for student evaluation. As noted by MHRD (2009, p.11) it was conceived as “a procedure that will be non-threatening, releases the child from fear and trauma of failure and enables the teacher to pay individual attention to the 81 child’s learning and performance. Such a system has the best potential to improve quality, rather than punishment, fear of failure and detention”. The RTE Act blamed the system for lack of learning in the child, rather than the child itself. …each child has the same potential for learning, a ‘slow’, ‘weak’ learner or a ‘failed’ child is not because of any inherent drawback in the child, but most often the inadequacy of the learning environment and the delivery system to help the child, realize his/her potential, meaning thereby that the failure is of the system, rather than of the child. (MHRD, 2009) The act discouraged any form of high-stakes assessments for elementary school students, by endorsing views of National Curriculum Framework (NCERT, 2005) which mentioned- “Under no circumstances should board or state level examinations be conducted at other stages of schools, such as class V, VIII or XI.” (MHRD, 2009,p.15). Further clarifying its stance on assessments, the ministry released an advisory note in 2012 conceiving assessment as a continuous, more intimate and classroom-based exercise, involving diverse activities (see MHRD, 2012). It promoted assessments through activity based learning methodology (ABL), classrooms observations, anecdotal records etc. (see MHRD, 2012). SSA and RTE together strengthened the input and schooling focused understanding of educational quality in the policy. Post RTE Act, the SSA implementation framework was revised to reinforce the emphasis on “equitable quality” (MHRD, 2011, pp.56). The framework identified the core components of quality education as follows -appropriate aims of education, learning in age appropriate classes, subject balance and age appropriate syllabi, textbook content & production reform, libraries as learning sites, community knowledge, good use of time, assessment etc. (MHRD, 2011). With regards to assessment, the framework noted that assessments must not be compared, enhance motivation of students, remain stress-free, child centric, and foster creativity (MHRD, 2011, pp. 66-67). The government planned to conduct timely assessments not for school or child specific improvement, but evaluation of overall interventions as part of SSA (MHRD, 2011). Interestingly, another example of lesser focus on learning outcomes compared to schooling inputs is India’s participation in PISA in 2009 and later withdrawal from future PISA rounds due to extremely poor performance (72 rank out of 74 nations). Unfairness of test due to “socio-cultural disconnect” between the questions and students was cited as the reason behind withdrawal (Vishnoi, 2012). 
6.2.3 Universalization of Secondary Education

Parallel to these developments in elementary education, the focus was also expanded to the universalization of secondary education. This was done recognizing that, given the emerging socio-economic conditions of the country, elementary education was not adequate to equip a child with the necessary knowledge and skills to face the world of work and empower them to deal with the challenges of a globalizing economy (CABE, 2005). The project of universal secondary education was premised on the goals of universal access, equality and social justice, relevance, and structural and curricular aspects (MHRD, n.d.). It was launched as the Rashtriya Madhyamik Shiksha Abhiyan (RMSA), or National Secondary Education Mission, in 2009. In many ways RMSA was also implemented like SSA. The approach to achieving universal secondary education through RMSA was similar to the inputs- and schooling-focused model of elementary education, i.e., construction of schools and facilities, teacher recruitment, curriculum and learning materials, residential schools, scholarships, distance learning programs, bridge courses, teacher training, reforming exam patterns, etc. (see MHRD, n.d.). RMSA also adopted means of building and enhancing capacity similar to SSA's, resulting in the initiation of separate project-mode management, district-level planning, community participation, training of district- and state-level staff, systems of monitoring and evaluation, etc. (see MHRD, n.d.)17.

17 It is important to note that SSA and RMSA were separate missions launched at different times. They were designed to be managed by separate entities at the central, state, and district levels. As a result, two separate district plans were developed for the elementary and secondary levels by different agencies.

6.3 Data for Phase 2: Focus on Universalization of Elementary and Secondary Education and Launch of Pre-2017 NAS

The previous section showed how Phase 2 not only adopted an inputs- and schooling-focused policy to universalize elementary and secondary education, but also took separate measures to ensure a shift away from a testing culture in elementary education. Given this policy context, the data collected in this period also reflected these priorities. As a result, India did not have any nationwide data instrument measuring learning performance at the district and school levels. This phase led to a rich school-level management information system called UDISE, devoid of standardized learning data. NAS was also first launched during this period, but for a higher, macro-level evaluation of programs like SSA (universalization of elementary education) and RMSA (universalization of secondary education), collecting data only at the state level. Therefore, this NAS had a much more limited scope than NAS 2017. This discussion provides relevant details on why a real demand for data like NAS 2017 emerged in Phase 3, as no comparable data were collected in Phase 2. In particular, I discuss how DISE was expanded to other levels of elementary education, how a separate data system (SEMIS) was created for secondary education, how the two were unified to create UDISE, and how NAS was launched for the macro-level evaluation of SSA and RMSA.

Expansion of DISE: In this phase, DISE underwent a geographical and vertical expansion to match the expansion of educational priorities from the universalization of primary education to secondary education.
When SSA was started in 2001, DISE was expanded to other non-DPEP districts and states, and upper primary levels to cover entire elementary spectrum (Mehta, 2017). Based on my interview with concerned official, the data capture format (DCF) or data collection protocol for DISE was modified to cover these levels. States were given flexibility to add more variables to meet their state specific requirements. When RTE was enacted in 2009, the DCF was further modified to report on RTE norms. So up to DPEP our prime objective was to collect information about primary level of education. During DPEP program were able to establish MIS units in the states. Also at the district level we were able to reduce time lag in availability of the statistics from earlier 7 to 8 years to slowly 1-2 years and then 1 year at national level and then few months at state district and block level. And disaggregated data was made available which was earlier not available. School, cluster, block, state, and also national level. And plans under district level under DPEP… they started developing(plans) using DISE data. In view of these achievements in 2000-1 the Sarva Shiksha Abhiyan program was launched, and Govt of India made two major decisions. One decision was to extend the coverage of DISE from primary to elementary level of education and the second was to extend the coverage from only DPEP states and districts to entire country. So that decision was taken in 2000-1…..In between the RTE was also enacted sometime in 2009. So again we modified the DCF in view of the requirements of RTE. There was a separate section for RTE indicators. Under section 1 to 1c- out of school children, special training etc. all the variables were kept in a separate section under the heading RTE variables or like that. So that was 2007-8. We started collecting data successfully in 2009. We also developed a separate database for RTE variables. (UDISE Core Staff) DISE was portrayed as a system that provides “comprehensive’ and ‘unified set of statistics for elementary education (Pritchett, 2014a). This is reflected in a foreword from Vice Chancellor of NIEPA given below (as cited in Pritchett, 2014a). The country has witnessed phenomenal expansion of school education system in recent years. Effective monitoring of such a vast system spread over diverse conditions that characterize different states and regions of the country demands comprehensive data base. NIEPA has been pursuing the goal of creating a reliable system of statistics on school 84 education during the last two decades through the District Information System for Education (DISE) which provides the basis for assessing the progress under SSA and on status of implementation of the Right to Education Act. The importance of this has further increased with efforts to extend the policy of universal education to cover secondary education stage of schooling also. Keeping this in view DISE is making a concerted effort to provide a unified system of school education statistics for all levels of schooling from elementary to higher secondary education. Separate data system for secondary education-SEMIS: When RMSA was proposed in 2007-8 for the universalization of secondary education, a separate EMIS was introduced called SEMIS (Secondary Education Management Information System). Similar to DISE, the use of this data was mainly intended for planning, budgeting, monitoring, and evaluation-related purposes under RMSA (Mehta, 2017). 
Based on an interview with the concerned official (more details ahead), it was kept separate because SSA and RMSA were separate missions, handled by different entities from the national to the district level.

Data collected by DISE and SEMIS: Both systems, DISE and SEMIS, largely collected data on inputs, demographics, and enrollment-related indicators, mainly intended for planning, budgeting, monitoring, and evaluation-related purposes under SSA and RMSA (Mehta, 2017). Data were collected on the school profile (management, sources of funding, school type, the language of instruction, etc.); enrolment and repeater information (by age, sex, social class, etc.); teacher provision (including availability and qualification of teachers, teacher training); infrastructure and learning facilities; receipt of school grants, etc. In terms of learning, the only data DISE collected was school-level annual exam pass percentages, and SEMIS collected board exam pass percentages. However, these scores held little meaning for elementary education considering the No Detention Policy that promoted all students irrespective of their performance. Given the input-focused data collected by DISE and SEMIS, the key statistical publications prepared from these data, such as school report cards (SRCs) and state and district report cards, also contained schooling- and inputs-related information, much of which may not be related to learning. Pritchett (2014a) observed that the section on 'performance indicators' in some statistical reports did not indicate learning performance but had indicators like 'per cent of schools approachable by all-weather road', 'percent with boundary wall', 'per cent with ramp', and 'Pupil teacher ratio'. Therefore, unlike open school data in other countries, which contain information on learning performance, school report cards in India did not include such information. This practice continues today, as I discuss in Chapter 5 section 5.1.

Unification of DISE and SEMIS into UDISE: Within a few years of implementing DISE and SEMIS, it was realized that the disintegrated functioning of these systems created many problems of coordination and duplication. The two systems were designed separately, without consultation with each other. One (SEMIS) was designed as an online data-entry system, and the other (DISE) as an offline data-entry system. The two systems used different IDs for the same schools, creating much confusion and redundancy. Therefore, in 2012-13, after many pilot trials, consultations, and institutional changes, both systems were integrated into a Unified District Information System for Education (UDISE).

SEMIS was started very differently from DISE. We don't know the reasons behind that. But it was kept separate from DISE. Because of this reason, we lost at least 5 years because of the twin system. We requested that the states have already extended their coverage from elementary to secondary. So it shouldn't be developed independent to DISE. But somehow responsibility of it was given to some other department… They separately designed and developed DCF. We were not consulted in that process. The planning department designed and developed the DCF for SEMIS in consultation with ministry and in view of the requirements of RMSA. So we designed and developed an online system, by assuming that these are the secondary and higher secondary schools, considering they are better equipped with computer and internet facility and having teachers who can use online platforms etc. SEMIS continued up to 2010-11.
They started collecting data in 2007-8. Though it was an online system, not a single state were able to upload the data online. All the paper DCFs were filled manually and brought to the state headquarters. And then data entry was done at state headquarter online. So the basic purpose of developing an online system was completely forfeited as there was nothing online… everything was offline. That system continued up to 2010-11. (UDISE Core Staff) National Achievement Survey (State Level; pre-2017 version): Along with the expansion of DISE in 2001, a parallel effort was made in 2001 to start the National Achievement Survey (NAS). As described at the beginning of this chapter, the original purpose of this achievement survey was to serve as an assessment of macro-level effectiveness of SSA, central government’s scheme for universalization of elementary education adopted in 2001. Based on my interview with NAS consultant (details in next section), introduction of NAS was also a continuation of DPEP template where assessment studies were done to evaluate effectiveness of the program. Therefore, NAS was started with this original strategy of doing three cycles of surveys- baseline, midterm, and terminal (NCERT, 2012). And just like SEMIS was developed to cater to RMSA (scheme for universalization of secondary education), NAS also started covering 86 secondary education in 2010 to assess effectiveness of RMSA. NCERT (2019) describes the purpose of pre 2017-NAS below: Under SSA, the original strategy was to administer three NAS cycles, wherein, each cycle covered classes III, V and VII/VIII. The three cycles were to be called as Baseline, Mid Term and Terminal Achievement Surveys. The Baseline Achievement Survey (BAS) was carried out during 2001-2004, followed by the Mid-term Achievement Survey (MAS) which was carried out between 2005-2008.(NCERT, 2019) The National Council for Educational Research and Training (NCERT) with support from the central ministry started conducting NAS, which was a state-level survey. It did not provide any information on district or school level performance, only aggregate state level performance figures were available. As described in Chapter 5, the survey for different grades was conducted in different years until 2017. Therefore, learning data for all these years were not available for the same reference year. The focus on NAS at state level and not lower levels, reflects that state remained the key unit of development under SSA, accountable to central government for SSA funding. Since DISE and later UDISE also did not collect any standardized learning data at school or district level, NAS data available at state level was the only nationwide data source on learning performance.18 6.4 Increasing concerns about the poor quality of education and institutionalization of NAS While this phase had adopted inputs and schooling-focused policy (i.e. SSA and RTE)and deterred the focus on testing through the No Detention policy, there emerged increasing concerns about the deteriorating quality of learning in India, especially in its government schools. There were reports and studies about poor implementation of SSA and RTE norms at the school level, worsening the quality of learning in government schools (e.g. Harris, 2017; Little, 2010 etc.). This was seen as a major reason for enrollments shifting from government to private schools (see IDFC Foundation, 2013). 
There were also widespread concerns about teaching and examination norms in schools that emphasized rote learning and syllabus completion instead of holistic understanding of concepts and their applications. These issues were extensively highlighted by media reports, academic studies, and influential reports by international organizations. 18 It is important to acknowledge in this section that apart from NAS, some states in India also started doing state level achievement surveys for the same purpose of assessing the impact of SSA programs within their state. A report by Dell Foundation et al (2021) provides more details about these state-level efforts, and notes that they were largely of poor quality and management and lacked any meaningful use of results. 87 Ramachandran (2006) notes that due to India’s schooling and inputs-focused education system, for higher-level administrators in charge of schools the concept of quality became equated with “efficient management”. They associated teacher motivation with “low absenteeism, maintaining discipline, proper record keeping, collection and reporting of data, utilization of funds allocated for teaching and learning material and giving exercises in the classroom and correcting them” (Ramachandran, 2006). Particularly important role in this country-wide realization has been played by the ASER (Annual Status of Education Report) Survey, a civil society-led household survey conducted by NGO Pratham. This survey was first conducted in 2005, and more consistently from 2009 onwards. This survey is the only “annual” source of information on children’s learning outcomes available “across” India today. More details about the survey are given in the passage below. ASER Survey: ASER is a household survey where all types of children are included– those who have never been to school or have dropped out, as well as those who are in government schools, private schools, religious schools, or anywhere else. In each rural district, 30 villages are sampled. In each village, 20 randomly selected households are surveyed. This process generates a total of 600 households per district or about 300,000 households for the country as a whole. Approximately 600,000 children in the age group 3-16 who are residents in these households are surveyed. (ASER Centre, n.d.). Interestingly, in case of India, it wouldn’t be wrong to conclude that civil society initiatives like ASER played a major role in driving government attention toward learning outcomes. ASER plus evidence emerging from range of other efforts such as India Human Development Survey, Education Initiatives study in 18 states, participation of two Indian states in PISA 2009, large longitudinal studies in Andhra Pradesh etc. have sustained communication around this issue and built pressure to reform the system (Pritchett, 2014b). This is evident from the wide national and international media coverage ASER has received over the years and various government documents that have cited its results. All of these concerns about poor learning were complementary to global agendas and priorities of international organizations that were rapidly shifting the attention from schooling to learning (see Hossain and Hickey, 2019). Given these rising concerns about deteriorating educational quality in government schools, Consultant 1 on NAS mentions that in around 2012 it was realized that NAS must be institutionalized, and not just be treated as an SSA evaluation tool. 
As a result, NAS became a constant feature of the Indian education system. However, it is not clear how specifically the government was influenced to treat NAS as a continuous process, beyond the SSA and RMSA programs.

SSA was up to grade 8. And all of these NAS we did so far were considered as achievement surveys for the program. DPEP in the 90s and SSA etc. By 2012 we got it institutionalized, and we established that now it is not anymore just about the programs, but the country needs to know about education through achievement survey. So there was influencing of MHRD, NCERT and all, that this should be a continuous process, whether the SSA happens or not, the country needs to know this. That institutionalization has taken place 10 years back. So earlier whichever grades these centrally sponsored schemes worked with, they only assessed those. So if you see from DPEP time in 90s only 3 and 5 were included because focus was only primary education. Then later 8 was included as focus was elementary education. So yes, RMSA influenced the inclusion of Grade 10. (NAS Consultant 1)

This institutionalization of NAS was also noted by NCERT (2019) as follows, indicating that the increasing focus on learning outcomes was responsible for making NAS a regular exercise in India.

Over the last decade of SSA implementation, focus shifted from dealing with challenges around access, to improving quality of learning. Hence, NAS emerged as a tool to provide periodic feedback to the system on the health of the education in the country. NAS became a regular and ongoing feature of the Indian education system, with each round of NAS being referred to as a 'Cycle'. Therefore, the Terminal Achievement Survey (TAS) scheduled to take place between 2009 – 2013 was renamed as Cycle 3. (NCERT, 2019)

6.5 Phase 3 (2015 to present): Greater Attention to Learning Outcomes and Commencement of Integrated Universalization

The emerging national and global concerns about the deteriorating quality of education in Indian government schools discussed above created strong pressure and led to many policy changes in Phase 3 (around 2015 to the present) that have given much greater priority to learning outcomes than earlier phases did. In 2015, an influential report by the government's advisory body CABE called for greater attention to learning outcomes and assessments and for the reform of policies like the No Detention Policy. This led to the development of a framework by NCERT in 2016 specifying what students are expected to learn in each grade, an amendment of the Right to Education Act in 2017 with a greater focus on learning, and the repeal of the No Detention Policy for grades 5 and 8 in 2019. These important policy shifts were further bolstered when India introduced a New National Education Policy in 2020 (MoE, 2020) with a substantial focus on improving learning outcomes. India also decided to rejoin PISA assessments and compare itself against the world. In parallel, the central government parted ways with its level-wise expansion approach to universalization by merging SSA and RMSA into Samagra Shiksha, which covers the entire K-12 spectrum. I discuss these policy developments below, as they led to the development of NAS 2017.

The 2015 report to the central government by the CABE committee (Central Advisory Board of Education, India's highest educational advisory body) cited the lack of assessments as a major reason for the poor performance of government schools.
It criticized the No Detention policy arguing that while it had good intentions of keeping children in school and removing fear of exam failure, the policy was misinterpreted as “No Assessments” or “No Relevance of Assessment” (p.9), and hence left no motivation to perform for children, teachers, and parents. The report was majorly influenced by emerging global and neoliberal thinking of being outcome-focused to improve educational quality. Citing Eric Hanushek and Margaret Raymond, it stated that “You can’t improve what you don’t measure” (p. 10) and stressed the need for “outcome-driven orientation” (p.10) to periodically assess and improve the system and promote student achievement and accountability (CABE, 2015). Citing OECD’s report on Assessment and Innovation, it argued that if assessments are designed and disseminated properly, they can serve as an instrument to improve the quality of teaching and learning in schools, instead of causing teaching and studying to test. The report also attributed the lack of teacher accountability to lack of assessments in elementary education. Citing United States’ Gordon Commission on Future of Assessment in K-12 Education, the report emphasized that standards-based testing leads to greater teacher accountability and improvement in student learning outcomes. In line with this diagnosis, CABE report (2015) made following important recommendations about making Indian education system more learning outcome-focused: • Identify grade-level competencies for each grade. • At the school level, assess all children (census approach) against these competencies every year. • Use of these results by principals, teachers, and parents to prepare "School Development Plans" with special training provisions to address learning deficits of children at every grade. • Regulation of private schools by similarly tracking learning levels of all their students. • Recognize and reward high performing students, teachers, schools, blocks, and districts based on academic and non-academic metrics to motivate others. Design holistic annual assessments for the same. • Introduce/reinforce performance management processes for all teachers, school leaders and department officials linked to learning outcomes and CCE metrics. • Share best practices from high performing teachers, school leaders and schools. • Redesign teachers’ performance appraisal systems to link to children’s achievement. 90 • Implement No Detention provision in a phased manner. Provisional promotion after grade 5 and detention after grade 8. In 2017 NCERT released a set of expected learning outcomes in Languages (Hindi, English, and Urdu), Mathematics, Environmental Studies, Science and Social Sciences 19. These learning outcomes were defined as the “assessment standards indicating the expected levels of learning that children should achieve for that class” (NCERT, 2016; NCERT, 2017). Kapoor (2018) explains that these learning outcomes indicate “what a child should, ideally, have learnt by the time he or she moves from a grade to higher one- or what the outcome of the year’s education should have been”. Kapoor (2018) argues that having this framework is a major shift in policy perspective because focusing on learning outcomes indicates testing children’s understanding and meaning making rather than their ability to memorize or rote learning. 
This framework became an important initiative in India’s transition to learning focused education system as it conceptualized quality in terms of measured learning outcomes through assessments instead of inputs. It also indicated the use of assessments by different stakeholders for information and accountability purposes. Below is a passage from the document which indicates dissatisfaction with the inputs-focused approach, arguing that “timely provision” of inputs has not translated to learning levels, especially in reading and mathematical ability. Reports of Joint Review Missions for SSA in the past few years also mentioned that the learning levels of children are not up to the desirable level in spite of all the efforts made by the States/ UTs in terms of timely provision of teaching-learning and resource materials, teacher deployment and regular monitoring. These report a decline in outcomes of reading ability as well as numerical and mathematical ability which is a major concern at present. Keeping this in view, quality, as measured by learning outcomes to be achieved by all, especially for literacy, numeracy and essential life skills is crucial. (NCERT, 2016) By citing the global monitoring report and SDG, the report signaled alignment with global norms in shifting the focus to assessment-driven learning outcomes. It also mentioned the need for accountability through “vigil” by the community but did not mention civil society initiatives like ASER in making that happen. The focus of the Twelfth Five Year Plan for basic learning as an explicit objective of primary education and the need for regular learning assessments to make sure that quality goals are met. It is also in consonance with the recommendations of GMR-2015 and the SDG. Thus, monitoring of quality through assessments of learning outcomes at regional, national, and international levels is important. At the same time a vigil at the ground level 19 Elementary learning outcomes released in 2016 and secondary learning outcomes released in 2019. 91 by different stakeholders such as parents and community, for their accomplishment makes the system informed and accountable to adopt corrective measures at appropriate levels. (NCERT, 2017) The report also noted that this framework on learning outcomes would be integral to future education planning in India and instrumental in reducing regional disparities. It is to be used by states in designing their context-specific learning outcomes framework. This document is a step to overcome regional disparities in achieving the objectives of educational planning in our country. The States may adopt/adapt these as per their needs and contexts. It may help them to lay down stage-wise curricular expectations and class- wise learning outcomes. These can be used by stakeholders at both micro and macro levels to provide insights into the progression of a child’s learning in various classes. Thus, will be useful for teachers, parents and the entire system for improving the quality of learning and development of children in the elementary stage of school education. (NCERT, 2017) The Right to Education Act (RTE), which was earlier focused on ensuring schooling inputs as discussed in Phase 2, was also amended in 2017 to include learning outcomes as part of RTE rules and instate accountability of all stakeholders in improving learning outcomes. The act mandated state governments to define the expected learning levels of students in elementary education. 
The renewed focus on learning outcomes also led to an amendment to the No Detention Policy in 2019. The new rule now requires exam performance-based promotion in classes 5 and 8. Both these initiatives indicate the implementation of recommendations by the CABE committee report (2015) discussed above, and clearly emphasize India’s emerging focus on improving learning outcomes against the earlier approach of schooling, inputs, and ensuring a stress-free environment for students through default next grade promotion, irrespective of learning levels. Alongside these efforts to make Indian education system more learning outcomes-oriented, there were also initiatives to integrate the separate universalization initiatives of elementary education (SSA) and secondary education (RMSA) recognizing the lack of convergence and duplication of efforts (MHRD, n.d.). Therefore, both the schemes were united into a single scheme, including pre-school to senior secondary levels (K-12). This merger was announced in 2018 as Samagra Shiksha (SS) (Comprehensive Education). More recently, the New Education Policy released in 2020 has reaffirmed emphasis on improving quality and learning outcomes. It mentions 1) “the gap between the current state of learning outcomes and what is required must be bridged through undertaking major reforms that bring the highest quality, equity, and integrity into the system, from early childhood care and 92 education through higher education” (MoE, 2020, p.3); and 2) “The focus will be to have less emphasis on input and greater emphasis on output potential concerning desired learning outcomes” (MoE, 2020, p.11). India also officially announced its participation in PISA 2021, after a 10 year absence, to indicate this major shift in its policy priorities. The government stated that participating in PISA will help to assess the health of its education system, motivate schools and states to do better, and improve learning levels across the country (Edwards, 2019). The test will also move India away from rote learning toward more “competency-based examination reforms” (Edwards, 2019). 6.5.1 Data for A Learning Outcomes Focused Education System (NAS 2017) and Integration of learning, inputs, and schooling data into PGI In light of the above policy developments in Phase 3, the government has tried to significantly transform the data initiatives as well. The upgrade of NAS into NAS 2017 format and its uses for standard and comparative governance as discussed in Chapter 5 is a product of these developments. Chapter 5 provides more specific details about NAS 2017. In this section, I connect NAS 2017 to Phase 3’s policy developments. Based on interviews with associated officials, UDISE started functioning more smoothly in this phase, with better quality data and more efficiency. However, UDISE remained under the authority of NIEPA (and later the ministry in 2019), functioning separately from NAS which was under the authority of NCERT. It still did not collect any standardized learning data. Since there was no other learning data instrument to govern India nationwide, NAS which was already institutionalized in 2012, was upgraded and expanded in scope and purpose in this phase. As Chapter 5 describes, the previous rounds of NAS did not result in any use as they remained at state level. 
Therefore, the government shifted the purpose of NAS from a macro-level assessment of its programs like SSA and RMSA to a tool that could be used to regularly influence core educational processes such as planning, teacher training, curriculum design, and pedagogical development at both the district and state levels. NAS 2017 was proposed to be far more integral to educational development in India than before. As discussed in Chapter 5, NAS 2017 was transformed in many ways compared to previous versions of NAS. Its focus expanded from states to districts, and instead of measuring curricular knowledge, it started measuring learning competencies based on the learning outcomes framework prepared by NCERT. In the 2021 cycle of NAS, private schools were also included along with government and aided schools for greater comparability of performance. These major changes in NAS signal the importance the central government places on improving learning outcomes and transitioning to a learning outcome-focused education system. This could usher in a new outlook in India's data and policy journey.

The fact that NAS (collected every 3-4 years) and UDISE (collected yearly) remain disintegrated systems, despite both being able to report district-level data, reflects the different phase-wise journeys India has taken in education and its evolution from an inputs-focused to a more outcomes-focused education system. UDISE reflects India's inputs- and schooling-focused past, and NAS reflects its new learning-focused priorities. Since the focus on schooling and inputs still exists today to some extent, policy is now trying to integrate data on inputs and schooling with data on learning outcomes through efforts like the Performance Grading Index (PGI) discussed in Chapter 5. PGI, therefore, is constructed from UDISE and NAS data, even though NAS is collected every 3-4 years while UDISE is collected yearly.

The new education policy (2020) has further proposed many ambitious data-based governance initiatives for schools (similar to TBA models) as well as for the entire education bureaucracy, indicating an even stronger emphasis on assessments and learning outcomes than before. Examples include sharing multidimensional report cards with parents, using AI-based software to track student progress, introducing standardized exams in grades 3, 5, and 8, publicizing a variety of school data, and establishing a national assessment center to guide and conduct various assessments. States have been encouraged to conduct their own state assessments and achievement surveys to collect data relevant to state-specific contexts. In response, many states are revamping and redesigning their state-level assessments and surveys by partnering with international consultants and actively working towards a greater focus on outcomes (see Dell Foundation et al., 2021). However, it is not clear how these initiatives will translate into practice given various capacity and implementation constraints in India.

6.6 Conclusion: Clear Demand for National Assessment Data (NAS 2017) After Decades of Inputs and Schooling-Focused Policy Priorities

By discussing India's education policy and data developments since 1990 in three phases, this chapter shows that India's transition to an educational governance system focused on improving learning outcomes is relatively recent compared to developed countries.
From the 1990s till the early 2000s, India focused on universalizing access to primary education and ensuring that students completed primary education without dropping out. Subsequently, until around 2015, universalization efforts were expanded to the elementary and secondary education levels. Throughout these phases, education policy and interventions largely demonstrated an inputs-focused understanding of quality. This reflected the view that outcomes such as education quality and student learning/achievement would be achieved by providing adequate educational inputs. Furthermore, it was deemed that elementary students in government schools must remain in a stress-free environment, with no fear of any tests or exams, to ensure they did not drop out of school. Teachers were responsible for regularly evaluating and assessing student progress in their classrooms, without much oversight from higher authorities. Aiyar (2016; 2019) calls this approach a 'right to schooling' instead of a 'right to learning'. It is only from around 2015 onwards that more substantial initiatives were taken for a concrete focus on improving learning outcomes, as measured by assessments. However, when the country finally turned its attention to learning outcomes, it was met with an extensive yet somewhat uneven data infrastructure.

Data collected in these earlier phases also reflected the inputs and schooling (i.e. quality access, enrollment, retention, and completion with equity) orientation of the policy and governance system. For example, UDISE largely collected data from schools on inputs, demographics, and enrollment-related figures. Assessments were mainly conducted for the purpose of evaluating the effectiveness of the SSA and RMSA programs. NAS data were also collected only at the state level. NAS was not conceptualized as a regular feature of Indian education, but as a tool to evaluate SSA and RMSA. Therefore, while nationwide data on inputs and schooling were available from all schools, standardized learning data were only available at the state level. The only other source of nationwide learning-related information was civil society surveys like ASER, which were household surveys and did not provide school-related information. Due to the inputs- and schooling-focused orientation, it is not clear how and to what extent the government used data like NAS or ASER for governance purposes.

Lack of bureaucratic capacity also played an important role in the slower pace of the policy and data developments discussed above. The government was constantly working on building organizational capacity from the national to the district level to meet the requirements of universalization programs like SSA and RMSA. Amidst many challenges, systems and structures were being developed and constantly revised for planning, monitoring, and evaluation of these programs. In some cases, initiatives were developed in a disintegrated fashion (e.g., SEMIS vs. DISE), causing duplication issues. As a result, they were merged later to bring more consistency. This capacity issue is also one of the major reasons, apart from factors like funding, behind the level-wise approach to expanding universalization in India. Moreover, despite foreign technical assistance, the Indian government laid significant emphasis on the "Indian" nature of the policy and development process by involving national institutions at the frontline of all activities (Kumar et al., 2001). I provide more context for this in Chapter 7.
Therefore, constant bureaucratic capacity building, level-wise expansion, and an inputs- and schooling-focused policy outlook dominated the Indian educational governance context until around 2015. It is important to note that this inputs- and schooling-focused orientation maintained centralized and bureaucratic educational governance in India. Critical decision-making remained centralized, as the powers of sanctioning and providing inputs remained with the government. Data were mainly used as a tool to support bureaucratic decision-making and implement supply-side interventions. Pritchett (2014a) noted that in many countries there exists substantial loose coupling between schools and higher authorities, but in the case of India, the state has shouldered most responsibilities in education. The system that developed is more attuned to process compliance with policy norms than to the actual functions of teaching and learning (Pritchett, 2015). There is a tendency to focus on implementable solutions that can be controlled and managed by the bureaucracy rather than on what is actually required at the school level (Pritchett, 2014b).

With the policy focus shifting to learning outcomes from around 2015, due to rising national and global concerns about the deteriorating quality of education in government schools, there emerged a clear demand for nationwide standardized assessment data that provided more granular information on learning outcomes than the pre-2017, state-level NAS. As a result of this demand, it was decided to update NAS rather than create a different instrument, as NAS was familiar territory. Due to the history of conducting NAS since 2001, institutions and mechanisms for conducting a national-level survey were already established to a certain extent, making it easier to update NAS than to invent a new data instrument from scratch. In Chapter 7, I provide more details on why NAS in 2017 was expanded only to the district level and not the school level. I explain how capacity constraints at the central and state levels in India were determining factors in this decision.

CHAPTER 7: NAS 2017 AS THE ONLY FEASIBLE INSTRUMENT

In order to explain the central role of NAS from 2017 onwards in the national educational governance of India, I show in Chapter 6 that by 2015 a clear demand for learning assessment data had emerged to govern education nationwide amidst changing educational priorities from schooling to learning. In the face of this need, NAS emerged as the only available and feasible instrument to respond to these new priorities, as it had been conducted in the country since 2001. Due to this history, it was easier to rely on NAS, for which some institutions and mechanisms were already established to a certain extent. In this chapter, I explain the story of NAS 2017 and why it came to be the way it is, i.e., a district-level sample survey instead of a census assessment or a national test covering all students, as found in TBA countries. I explain the technical capacity challenges at the central and state levels due to which NAS 2017 was designed in this manner. This discussion shows why the NAS 2017 design (which also continued in 2021) is the only data instrument of reasonable quality currently feasible in India for governing the country nationwide. Given the capacity constraints, it is nearly impossible for the Indian governance system to conceive of any other alternative.
This is a major reason behind increasing reliance on NAS from 2017 onwards as discussed in Chapter 5, where it is targeted at different uses across different users, i.e. from central govt to teachers. Before beginning the discussion, I would like to emphasize that although this discussion of technical capacity refers to NAS 2017, it also applies to any other learning assessment in the Indian context. The insights from this chapter can not only be relevant for those with interest in NAS, but also those who are interested in exploring the challenges of doing any other standardized learning assessments in India. In addition to being a story of NAS, it is also a story of Indian state and bureaucracy’s capacity. 7.1 Central Level Capacity Issues Below I discuss the technical capacity issues pertaining to central level authorities like NCERT in designing NAS 2017. 7.1.1 Conducting nationwide standardized assessment is like doing PISA, only more complicated India’s unique diversity, scale, and capacity put many constraints on decisions regarding survey design. India is a large country, with immense demographic, geographic, and institutional diversity. This massive diversity and scale, plus existing bureaucratic/technical capacity challenges being a developing country create many challenges for conducting assessments. Even 97 shifting the national survey from state to district level (seen in 2017) is an extremely challenging assignment for NCERT. Moreover, there are hardly any examples outside of India that India can look up to that have successfully implemented assessments with diversity, scale, and capacity similar to India. Therefore, the NAS team at the central level (NCERT team and consultants) had to come up with their own solutions to deal with India’s unique challenges. Interviewees iterated multiple times how all of these issues were difficult to handle for them. Consultant 1 for example describes that doing a survey like NAS 2017 in India is no less than doing a PISA as there are minimum 18-22 languages in India. Children come from diverse linguistic home and schooling backgrounds. This poses a major challenge in designing survey instruments. Even the meaning of what counts as a “flower” changes based on the location. So first of all, let me start by some of the challenges India has. So let us talk about NAEP, or any other. There is one language. In India we have minimum 18 languages. Not only that, at national level we only look at these 18 or 22 languages. These are the minimum number of languages. But children come from very different mother tongue backgrounds, especially at grades 3 and 5 level. So measuring language learning is a very big challenge, and one needs to fully appreciate that. Children come with different mother tongues and use very different language in classrooms. So even equating items…now we do everything scientifically, including translation from base language to back translation, but the fact remains that there will be certain amount of variation. So what is understood in Tamil vs. what is understood in Oriya is going to pose certain amount of challenge and one needs to fully recognize that. It is literally like doing a PISA. Our 36 states are like their countries in terms of languages. Secondly, given the socio-economic and other kinds of diversity like regional and all, besides linguistic diversity, from a psychometric perspective too it is challenging. 
So for example in United States if you say McDonalds burger, everyone from west to east coast will understand its taste because it is the same. And I am not comparing food because of course food is a very different thing but even with idioms, the way the language gets used or even the context, is different. Even something like a flower means different things in Gujarat, Orrisa, and Northeast. So if you create a drawing of a flower and ask a grade 3 child, there is still going to be some challenge. So besides the linguistic diversity, topography, regional and other types of diversity also make a difference. So these are the challenges. (NAS Consultant 1) Even before NAS 2017, when NAS was done as a state level sample survey, it was hard to collect data on the same day from all the grades due to diverse state-level educational practices (e.g. timing of academic sessions) influenced among other things by regional holidays and events like local elections. This was further exacerbated by diverse climatic conditions in the country. And also the window for collecting the data…because academic sessions in India differ according to states. Some have summer closing schools some have winter closing schools. So it was difficult to get the data at a single point in time. So administration only lingered over one year. Only the administration of data. Again the calamity like in monsoon season 98 and all that affected the window of data collection. So what happened is that completing one cycle itself took 3-4 years. (NAS Consultant 2) Given that doing NAS was already a demanding exercise at the state level, when the NAS 2017 expanded to the district level, its implementation was an overwhelming exercise for the team. Consultant 2 notes how expanding NAS to the district level in 2017 was astounding for them from a training perspective as they had to involve 200,000 field investigators across the country while maintaining standardized and reliable implementation. Actually when we designed 2017 learning assessment, we tried to capture the learning levels of the children at the district level for the first time. Before that learning was consolidated at the state level. It hid all variations that take place at district level. We have lot of variations at district. That’s why we tried to develop a design where we can report at a district level. So this is one. And when we are going to district level then you know number of schools increase and all that. So we engaged nearly 200 thousand field investigators to support this exercise! That means training them at par..at the same level….and the administration is (to be) conducted in same way all across 36 states, 730 districts. This was astounding scale! (NAS Consultant 2) Consultant 2 also shares how they had to come up with their own indigenous solutions to maintain standardization in training the field staff. They had to invent a completely different system from the past. The previous system did not require as many field workers and did not have required checks to maintain standardization. Therefore, they developed new organizational systems and structures, developed standardized training materials, and conducted many workshops. One of the biggest challenges was that fieldworkers across the country have varying access to good quality computing devices and internet. Therefore, they had to rely on the use of WhatsApp20 to reach them and ensure that these thousands of fieldworkers were trained in a standardized manner. So how to train them equally? 
Earlier the practice was that there is a national team. They train the state team, they train district team, and district team train school investigator. That means lot of information lost during this transition. So what we come up with…we prepared 120 core team at national level..that national team directly trained the district people. Then these district people with the help of standardized material and short videos…they trained the school investigators. That means it reduced the information gap between what we are trying to implement and what is being implemented at field level. And we developed a lot of short videos. For example- as a field investigator what should be my 20 WhatsApp is a free mobile application for instant messaging. It also allows audio and video calling, sharing of images and documents, sending prerecorded voice messages, etc. The business account of WhatsApp allows sending different types of files to hundreds of people instantly. 99 role in this hour, my role in that hour… during administration…before administration and after administration etc. All these short videos, small animations, presentations etc. We took the help of WhatsApp so that it is being circulated. Then we conducted the district workshops at good places. (NAS Consultant 2) Reaching all the districts across the country and doing it for the first time put great strain on the implementation capacity of the overall bureaucracy. Consultant 2 and NCERT core staff note that this was to an extent that even during festivals, which are government holidays, the staff from central to district level were working day and night to ensure appropriate administration. Consultant 1, who was not involved in NAS 2017, but observed it as an expert, also acknowledged this. If you interact with any field level investigator or district level coordinator they will be excited and describe their experience of 2017. Because they worked as a unit all together…day and night, holiday/leave…even during festival time…they did not care about it...right from national team to state team. That’s why during 2017 the national achievement survey happened in a single day all across the country. It was declared as national assessment day in India. (NAS Consultant 2) When we were doing NAS 2017, I had my department of course, but then we used to be up there at 9 o’ clock or 10 o’ clock in the evening, and on the day of assessment we were up till 3 o’ clock in the night. Rather the entire week we were up. For queries and things coming from states. (NCERT Core Staff) In 2017, unfortunately in my view, all of them were done not only in the same year but also on the same day the tests were conducted. So it stretches the implementation bandwidth so enormously. (NAS Consultant 1) Consultant 2 also notes how they had to come up with interesting solutions to reach the remotest of areas in the country. In some instances, they also had to involve the army in reaching difficult terrains. On the same day it happened in all districts…even in the Naxalite areas21 where the government even today cannot reach…the field investigator went inside the jungle to reach the schools. Even in Arunachal Pradesh…the army helped with the helicopter to drop the questionnaire. So that was the arrangement at time. (NAS Consultant 2) Consultant 2 explains that given the vast variations across schools in India, sampling became another challenging exercise for them, in addition to training and implementation. 
Sampling is the backbone of any assessment exercise and is crucial in determining its quality. 21 Naxalite areas in India suffer from Naxalite and Maoist insurgents, often referred to as left-wing extremists. They have been responsible for many terrorist activities killing more than 4000 people. They are present across 10 states and around 70 districts in the country. 100 Therefore, it was necessary for them to follow rigorous procedures in drawing samples, however, the diversity of schools in India and the decision to collect data for grades 3,5 and 8 together made the task extremely challenging for the team. The person particularly notes that it would have been extremely expensive if they had hired technical agencies to do this exercise. Sampling is very very complex exercise in learning assessment. And you know in India because of the type of schools and all that there are so many variations. Some schools are till 3, some are till 5, some are till 7,8. And getting the sampling frames and all that it’s a big challenge. …[…]…Because the sampling has to be bias free. Whenever we were trying to get out the sample from the sampling frame, it should exactly be the same school and no change. That means the random numbers, and everything was document so well….the school list and everything. And in 2017 the grade 3, 5, 8 conducted at the same time. And in earlier times, the grade 3, 5 8 took 8-9 years. You can imagine 8-9 years…and it compressed to 1 year! So sampling was again astounding! Because of the district assessment report …you need to sample within the district level. We have 730 districts, 3 different grades. That means you have to draw at least 2100 sampling list from district level. And it’s such a gigantic work, if you would have gone to any technical agency…they must have taken one year and must have charged crores22 of rupees to do that! (NAS Consultant 2) Consultant 1 observes that despite world-class design and administration of NAS 2017 and now NAS 2021, doubts still arise about whether the survey is doing justice to the massive demographic diversity of the country, especially in terms of different caste groups. The work on that front is still ongoing and improving. Even in terms of sampling, because its such a large sample, we do believe that there happens a fair amount of representation of SC, ST and OBC. But is it exactly mirroring the population, that still remains a question in NAS. I am not saying that they are not adequately being represented. But I am not sure if it is exactly mirroring or matching the population. We do apply weights to try to capture as much as we can. So I am just trying to lay out some of the challenges till 2017 and we are now trying to mitigate them as much as possible. (NAS Consultant 1) 7.1.2 Building Foundational Domain Expertise Simultaneously is Challenging By domain expertise, I refer to the expert knowledge required to implement good quality assessments. India’s diversity, scale, and technical issues discussed above felt more challenging to the central team in charge of NAS 2017, as they themselves were learning many things on the way. They had to deal with many domain expertise issues to be able to prepare themselves for an exercise like NAS 2017. I explain these issues in this section. 22 1 crore is equivalent to 10 million. 101 In India, NCERT is the apex body responsible for matters related to examinations and educational assessments. 
Below NCERT, there are counterpart state-level institutions called SCERTs (State Council of Educational Research and Training) that perform similar tasks at the state level. Within the education bureaucracy, NCERT is considered the most equipped institution on these matters. However, until 2016, NCERT with a small team of top academics in the country itself underwent capacity-building interventions over the years to update their knowledge and capabilities to match the global trends. It was constantly building capacity to remain updated while also conducting NAS surveys parallelly at the state level. Until 2016, NCERT was learning about international best practices and underwent training in many fundamental aspects such as developing questions, using Item Response Theory, analyzing data, etc. One time the NCERT team went to ACER23, they also attended 1-2 conferences internationally. So beginning to see that look the world is looking at achievement surveys in a very different light than the way you have thought about it. Because they started DPEP and all in 90s and the World Bank said, so we did it kind of a thing. So we exposed them to PISA, TIMMS, PIRLs and all these surveys and there was an exposure created. So from specific technicalities like sampling, question development, item response theory to what is the value of this type of data, what can you do with it, how the findings must be utilized etc. so those were the different thematic areas with which we worked with them (NCERT). (NAS Consultant 1) The external consultant involved in the early capacity building (until 2016) noted how they worked with NCERT to expand their notions about their target audience from central ministry to state governments and how they worked to reduce the size of reports and improve reporting practices for better use of data at the state level. It is interesting to note how even for the consultant the state governments are the key actors and targets, signaling the centralized nature of education governance in India. NCERT folks are a small group , and initially in the first year or two, it surprised me and I am sure it surprised others as well that they thought their client was the (central) ministry. We are here to report to the ministry, saying that this is the National Achievement Survey report. Now going back to the basic principles of research when you gather data from anywhere, now there are a number of reasons why you should report back, but first ethical reason would be that you are asking the states to get the data, then you also need to give back something. They knew the principle of it, and I remember clearly that they would say ofcourse, ofcourse we should. They would say yes we give back by preparing state reports 23 ACER- Australian Council for Educational Research is an independent educational research organization based in Australia. 102 and by giving the disks, meaning sending the data back. So I asked is that enough? Ofcourse your client is ministry, but it is also the states. (NAS Consultant 1) Because if I show you the 2009 report of grade 5, it is like this thick! Almost 6 inch thick. And it had table upon table upon table of analysis. And 2010 and 2011 there was a grade 5 report that came out… it was 150 pages or something. And you know with true findings than giving tables for them to decipher. So that itself was a big shift, that how do you report the findings. 
(NAS Consultant 1) Even while preparing for NAS 2017, constant learning and capacity-building work took place within the NCERT team to prepare good quality and globally acceptable survey instruments measuring learning competencies instead of curricular knowledge and rote learning skills. This shift to measuring competencies was a major breakthrough for NAS. The consultant on the project describes how the staff, initially less confident, was later convinced to design survey instrument in-house instead of seeking support from outside agencies. They spent significant time studying global literature on this topic and designed globally competent survey instruments. Gaining this confidence to do things on their own was a major breakthrough for the team. So how I engage with the content experts, mostly the senior professors in the NCERT, subject experts, and all that…but the good point is that I motivated them that you don’t need any external support. You have adequate support inside NCERT to develop those items. Believe us, let us deliver it, let us start questioning ourselves, then lets see what is happening. Then we critically reviewed each and every item. It’s a peer review, it’s not going outside. Instead of going to external agency to support us…because what happens when we go there…they develop items…they give you an end product that you will use in this assessment. It’s not capacity development, its just about the end product. (NAS Consultant 2) So with that NCERT developed their own items. I can proudly say that. Support didn’t come from outside. NCERT professors and their staff did a wonderful job. But the way we probe and engage ourselves with peer reviews and critical reviews because we need to convert them to competency based items. Let’s ask difficult question to ourselves. That way we engaged ourselves and converted ourself to competency based items. Also, I probed them lets see if this is the concept we are going to assess, just see what the different scenarios are, different countries, just google it, how they are measuring this concept. Let’s google it. Let’s collect more information about this. Then came the items. Its not about I don’t want to develop 20 item or 30 items per day. Let’s develop 5 items per day but good items. And let’s understand how these concepts being measured are assessed by different contexts.(NAS Consultant 2) This also indicates how gaining “international” support has become such a powerful idea influencing even the topmost experts in India. It is not only about the objective imbalance of knowledge and research between the global north and south, but also about how it affects the confidence of people in the global south to become self-reliant. Deriving confidence from an 103 external consultant to develop the survey instrument on their own is also a complicated issue and open to many interpretations and implications. I discuss related points on this in the next section and in Chapter 9 on implications. 7.1.3 Achieving World Class Quality while maintaining Self Reliance and Local Relevance Apart from practical reasons, one of the major reasons why technical capacity becomes a major concern for decision-makers of any national assessment is the increasing scrutiny of assessments at national and international levels. The immense global attention given to the topic of assessments, especially in a comparative and competitive manner, majorly influences decision- makers to demonstrate quality- at least in administration and reporting practices. 
This influence of global scrutiny could be particularly applicable for India because as discussed in Chapter 3 section 3.4, the top-level authorities often strive for global validation and legitimacy. Therefore, this sentiment was reflected while deciding the scope of NAS as the top staff strived to demonstrate good quality within existing capacity. Making NAS a globally accepted survey was critical for them. When we were planning large scale assessments in India, we followed all internationally accepted procedures. Otherwise if we do not follow, it will not be valid and reliable. And it will not be of any use. And NAS has been accepted globally in all different platforms and it has been quoted and used. So to make it reliable and valid, it is necessary. It is crucial that when I am designing items for NAS or implementing NAS or doing sampling, I follow the international procedure. If you read the report, you will see that all the international standards are being followed. (NCERT Core Staff) Global acceptance was also the reason why UNICEF was made integral to the design and implementation of NAS 2017 survey, as UNICEF brought the required knowledge, expertise, and international connections that could be relied on to make the survey “world-class”. Having UNICEF also helped them maintain authenticity and signal process transparency. And when we really made an entry to learning assessment is …NCERT learning assessment instruments…so at that time there was a new education secretary and he also wanted to see the quality of instruments. So they invited UNICEF to have an impartial validation or evaluation of assessment instruments. (NAS Consultant 2) We also took help from UNICEF. If you see the functioning of UNICEF, if they are collaborating with any of the programs, they have their monitoring system as well. So that was very advantageous for us at NCERT because constantly NAS was always monitored by a third party. But whatever we did was not like we were doing it in closed doors, everything was open, and we were open to criticism and there was all transparency. That was a major reason of success of NAS. (NCERT Core Staff) 104 Designing and implementing a national assessment requires a hard balancing act for an institution like NCERT between seeking global legitimacy through the involvement of international consultants and delivering a project that is responsive to local needs and realities. International consultants often come with strong domain expertise but may lack practical perspectives about implementing surveys through vast Indian bureaucracy, especially at lower levels. They may not be aware of the formal and informal communication norms within the system, and the distinct ways information gets transmitted and interpreted. While ensuring appropriate mechanisms, NCERT must also ensure that practices adopted by them are relevant to the whole system- from the central to the district level. This becomes particularly challenging given India’s diverse states in terms of culture, language, institutions, governments, policies etc. Therefore, all capacity building initiatives for NCERT are not simply exercises to build world class domain expertise, but also spaces to actively use their expert judgement in maintaining the balance between global vs. Indian. 
This sentiment was conveyed by the core team member while discussing the complexities involved in maintaining self-reliance and control, and how this has implications for NCERT's institutional legitimacy, given that it is government funded.
NAS is NCERT's project. But NCERT believes in collaboration. So we collaborated with different assessment agencies that were working in India. There were some private agencies working, so we collaborated with them. Everybody brings their own experience, and expertise, and also the learning as all of them are working at grassroot level. But it is for me at NCERT to decide what it is it that I think will be suitable, and what will be right for me to do. But if ultimately something goes wrong, it is NCERT problem. Of course, if something goes right, credit goes to NCERT. But then if something goes wrong, we are to be blamed. (NCERT Core Staff)
Although it is not mentioned specifically by the official, I imagine that caring for NCERT's institutional legitimacy may also be important for political reasons. NAS is an exercise conducted by NCERT, which is a government-affiliated entity. As I show in the quote below, in introducing the work of NCERT, the core staff stated that NCERT is "working for the government". Therefore, any issues related to NAS reflect not only on the capabilities of NCERT but also on the government. This could be particularly important for the present government, which is known for putting significant stress on India fostering self-reliance and a "self-centered system" (a term used by the government) through campaigns and initiatives such as "Make in India" and "Atmanirbhar Bharat" (Self-Reliant India; more information at https://aatmanirbharbharat.mygov.in/).
NCERT is also a government organization. We are working for the government, to improve school education. So its not that NCERT takes any decision unilaterally or separate from ministry. They both always work in consonance in such decisions. Because these are major decisions, so they always happen in collaboration. (NCERT Core Staff)
Similarly, the NCERT official also mentions how, despite reviewing many large-scale national and international assessments, they have always tried to evolve their "own" survey. However, it is not clear what evolving an "own" survey means, given that the team was constantly building its own capacity across these years.
We were not inspired from NAEP. 2017 survey is not inspiration from NAEP, because it is still carrying on with content based. US was doing this, Australia was also doing something similar. In 2001 when we started doing NAS, we reviewed many large scale assessments done internationally like TIMSS and PIRLS. All these researches in large scale assessments were done, and then we evolved our own in 2001. And it was carried out till 2017 but of course not in the same manner. Even though majorly NAS went through redesigning in 2017, minor changes we were doing all along. Those minor changes were happening. But 2017 we did major changes. (NCERT Core Staff)
The same was also conveyed by Consultant 1, who was involved in capacity building for NCERT before NAS 2017. For the consultants, the exercise of capacity building was not a one-way exercise of consultants imparting knowledge and the NCERT team receiving it. It was a two-way learning exercise where consultants shared the knowledge of global best practices, and the NCERT team acquainted them with the Indian institutional, capacity, and demographic context.
Since NCERT has been conducting NAS since 2001, it has significant experience in working with diverse state and district institutions. Of course, from a critical standpoint, this two-way knowledge sharing may not necessarily mean a level playing field between the consultants and the Indian experts in practice, because the "world-class" knowledge still flows from the consultants to the Indian experts.
Because these are professors, lecturers etc. with fairly good amount of service and so on. So that was the first principle, that we will work as partnership, rather than us teaching them something. So for us also it is learning about the context, how to work, what are the challenges etc. Because NCERT as a body works with all 37 states and UTs. So understanding how that mechanism works and all of that. So here we have technical skills, but we also need to learn about these issues of working in the field. So that is a learning we get from them. So it was more like a partnership. (NAS Consultant 1)
The non-government assessment expert who has observed educational assessment initiatives across India indicates that the market for educational assessments in India might have an important role to play in fostering India's inclination toward self-reliance. In India, it makes more economic sense to pursue everything in-house rather than outsource it. It is not like the United States, where outsourcing may be cheaper than building in-house capability. However, it is not clear in what specific ways building capacity in-house benefits the central government.
Yes. When it comes to capacity, it needs to be built at all levels. Even at the national level and state level. Unlike US, for example in US, you have lots and lots of testing. Now I don't know if this is going to come down. But generally lots and lots of testing for the last two decades. So here what would happen is that, for example, if there is California board of education and it wants to conduct a test for California, it would hire contractors who will come in and conduct it. They will get specialists and do it. They will not try to build their own capacity on that. The purpose of their board is to impart education. It is not to do assessments. Because assessments is a very specialized thing. And even if they have to get contractors best in the field, it would be less than even 2% or 1% of their expense. So nobody goes and builds something which is less than 2% of your expense. If you really look at it that, then even if it's your own business, if the expense is such little percentage, why would you go out of the way to do it on your own? You would outsource it. Whereas in India, manpower is cheap. So there is lots and lots of people. You need to give employment. So that is one reason. (Assessment Expert)
7.2 State Level Capacity Issues
Below I discuss the technical capacity issues pertaining to the state level that shaped the development of NAS over the years.
7.2.1 Lack of System Support and Cost Constraints
By system support, I refer to the support from the bureaucracy below the central level in terms of infrastructure, manpower, funding, knowledge, etc. to execute the vision of the domain experts. Consultant 1 noted that apart from the ongoing work on building domain expertise, the lack of system support was also responsible for keeping NAS surveys before 2017 at the state level.
As far as the district is concerned, given the bandwidth, and given the technical know-how, we had maintained that let's continue doing a great job with the states. (NAS Consultant 1)
As per Consultant 2, lack of system support was the deciding factor in extending NAS 2017 only to the district level and not below it. When NCERT and MoE decided to transform NAS in 2017, they did consider the idea of doing NAS as a census learning survey collecting data from all students and schools instead of a sample survey. However, Consultant 2 advised them against taking this approach given the lack of knowledge and the financial and infrastructural capacity constraints to collect and use the data appropriately. Therefore, it was decided to expand NAS 2017 from the state to the district level but keep it a sample survey. The consultant believed that there was not enough capacity to pursue a census survey, and ultimately it was more important to use the survey data than to just collect data. As per their judgment, India did not have the support system to use census learning data from every child and school in a meaningful manner.
There was a plan to conduct learning assessment on a census basis. That was the plan at the ministry and NCERT level. That means going to assess each and every child in India. You know the total enrollment size is about 250 million in all schools in all grades. When we came in we tried to advocate with director of NCERT, concerned dept called education survey division, and MHRD that this is not going to happen. I mean how we are going to use learning assessment that is important. If we have plan how we are going to use such assessment data, then we can spend so much money and resources. If we don't have a plan how we are going to use the data, particularly the teaching-learning practices in education policy planning, then don't collect data from so many children. Because if you are going to collect data from each and every child, then there is not even an infrastructure to enter those data. And how you are going to use those data? (NAS Consultant 2)
Consultant 1 mentioned that cost-effectiveness has also been a driving factor behind taking this approach. The person's understanding of this rationale comes from the experience of working on NAS before 2017 and now advising the government on various issues. However, Consultant 2 and the core staff from NCERT did not bring up this opinion.
It is also so expensive to do this exercise involving every student. Through sample-based survey you can easily learn about the broad level outcomes more efficiently and at a much cheaper cost. So I think that is a choice that we made of keeping it sample. (NAS Consultant 1)
Consultant 1's view of keeping NAS sample-based for the sake of efficiency and cost-effectiveness also resonates with current global discourses around how developing countries should approach designing their assessments. For example, Birdsall, Bruns, and Madan (2016) note that 'It often makes sense for assessments initially to be sample based, while school systems develop the implementation capacity to ensure the integrity of test administration and results. However, it is eventually desirable to conduct census-based assessments.
The latter generates the school-level feedback on learning progress that is essential for parents and communities to hold school directors and system officials accountable for results.' Similarly, Clarke and Luna-Bazaldua (2021) and Wolff (2007) recommend sample-based assessments for developing countries given their limited capacity to administer census assessments and the cost-effectiveness of the sample-based approach.
Interestingly, NCERT core staff had a different perspective on this issue, stressing that the central level does not need to get into granular surveys; rather, it is the state governments that need to be "empowered" to do so. NCERT's core staff maintained throughout that NAS was intended to be sample based, as the central administration was only interested in knowing "system-level outcomes". Doing census assessments and approaching more granular levels should be limited to state governments, given their unique state-level contexts. States must be empowered to do assessments based on their own requirements and not depend upon the central administration for them. Although the official did not acknowledge that poor state capacity or cost-effectiveness was the reason behind doing NAS as a sample survey, the response did indicate that state governments still have a long way to go in terms of capacity building for good quality assessments. This could be a major reason why there is currently no standardized assessment data apart from NAS for governing India nationwide.
Actually we only said that states themselves should have SLAS- state achievement surveys. And it is documented in NEP 2020 also that states should have their own SLAS conducted and if possible every year […] And if at national level I conduct a NAS and share with the states my sample items, that may not really empower the states. So there is a saying that rather than giving a person a fish, you need to teach them how to do fishing. So we emphasize in NEP 2020 that let the states do this SLAS so that state teachers develop this learning of how to use and develop the items. They have more insight when they are actually doing it. (NCERT Core Staff)
In the section below I provide more insight into why state capacity is poor in India, based on my interview with the non-government assessment expert.
7.2.2 Context behind Limited State Capacity
It is clear from the above section that poor state capacity has an important role to play in keeping NAS a district-level sample survey from 2017 and elevating its importance in the Indian context. Collecting other granular standardized data is not feasible in India because of poor state capacity. Given the inconsistencies and irregularities at the state level, having a more streamlined effort in the form of NAS is useful in getting a nationwide perspective on the status of learning in India, along with variations across states, districts, and various social groups. In the absence of reliable state-level sources of data, the central government's reliance on NAS appears to be a practical stance for initiating and pushing forward the new focus on learning outcomes in a timely manner. Relying on my interview with the non-government assessment expert, I provide here more context behind this issue of poor state capacity and why it may be a difficult issue to solve even in the future.
Referring to the issues around state-level capacity, the assessment expert points to the erratic nature of administration in state bureaucracies, heavily influenced by politics, as a major obstruction to state-level capacity building.
So whether states are doing it etc., it is very sporadic. Some states do it, some states don't it. And they are also not doing it very regularly. So it's still under the whims and fancies of the change in political government and change in bureaucracy. So every bureaucrat will come with their own priorities of what they think has to be done for education. And of course the state ministers have their own priorities as to whether it should be done or not. (Assessment Expert)
A major problem lies in having the right fit or concurrence between the political leadership and supporting bureaucrats, and in sustaining that combination beyond 3-5 years, because bureaucrats often get transferred to other departments.
So there are 36 states and territories in India. So therefore different ones are at different levels. Like you have seen in the report. So if at all there is some strong political presence, political leadership, and there is also strong empowering and enabling bureaucracy, if both of them combine, very rare combinations. But it's not that it is not there. I have seen that at least at any point of time in India out of 36 states and Indian territories at least you will find 3-4 cases where this type of combination exists. But it's just that the combination may not be enduring. Because every 5 years the political masters get replaced due to elections. And also bureaucrats get transferred. Very rarely you will find a bureaucrat in a particular place for more than 3 years. (Assessment Expert)
The problem is further compounded by the fact that bureaucrats come with generic administrative training rather than backgrounds in education. Gaining expertise in managing the education machinery takes a lot of time, especially for issues regarding improving learning outcomes. But due to quick transfers, there remains no steady and consistent focus on these issues. Amidst all this, having a new political government or minister creates even more instability.
So what happens is that the Indian civil services is a general service. It's a generic service. So they don't necessarily have the expertise to understand issues related to education. It takes a lot of years for people to understand because there are so many layers and sublayers. The entire machinery itself is not an easy thing to understand. And then comes the quality of it. So when we are talking of learning outcomes, it is the quality of education. The machinery has been set up just to impart it. First it has to work. Then you look at quality..[..].. Now for the education secretary to even understand the mechanics, its such a complex thing. So for somebody to understand everything in 10 months, and then do wonders in 10 months, is something totally unbelievable. This is totally unrealistic. But that is how the system is. So for a new education secretary the first year just goes in understanding what are the teacher union issues, teacher transfer issues, salary issues and so on. They wont even have the energy to focus on whether teachers are coming to schools, what are students learning, what is the quality of education and so on. So we really need continuity.
So this combination of an interested bureaucrat, who wants to do much more than the regular routine stuff, then them being in the seat for a longer duration, and then for them to have a supporting political master, this combination is rare. Even if it happens, we must understand that it will be short lived. Maximum of 3 years. Then another bureaucrat comes. If the political master is the same, then this bureaucrat might want to continue it or agree to do it. They might say that yes he will do it, but he does not even understand it because he has come from another sector. It could be civil, road works etc. etc. So it's not about whether they are able to do it or it, whether they are interested in it or not, but it's about the necessary equipment, understanding, and the support with which they are able to do it. This is a rare combination. (Assessment Expert)
Given these issues, the expert argues that there remains no "memory" in the system to continue the work consistently. Each time there is an extreme reshuffling of roles, tasks, and responsibilities.
So these are the issues. So when you asked whether the states are regularly doing it or not, whether they have a mission and vision and all that, so many times when this interesting combination happens, then it starts. But then, after 2 to 3 years it fizzles out. And then when you go back to the same state, you will find that everybody down the line, right from the minister to the bureaucrat to all the officers etc. have got transferred. Therefore, there is no memory in the system. There is no knowledge management in the system. And as contractors we will often go and educate them that do you know that 3 years back this is what your state did? But the person sitting in that seat will have no clue that this was done! So these are generally the system problems that are there. (Assessment Expert)
The expert also points to the common tendency within state administrations to overburden experts with various responsibilities, often not related to their job, without giving them enough room to focus on their specialized duties. This is related to the point discussed in Chapter 3 section 3.4 about lower bureaucracies in India being engaged in multitasking without specializations. Although the expert doesn't mention it, it is important to recognize that not having a specialized team could also be related to the issue of high costs involved in building such teams.
You have to understand that you have to give them adequate work so that they are not pulled in different directions. Oh, you are sitting here? Go and do teacher training. Or go and do election duty. That is what happens in our government. So that shouldn't happen. You need to have a highly specialized team, and that team should be like you know Black Cats [the National Security Guard (NSG), an elite counter-terrorism unit of India] you have for security. You cannot tell Black cats to go and water the garden. Imagine you train such a high-power team, you train and keep, and you say why are you sitting there? Now go and drive my car. You go and water my garden. These things happen. So that shouldn't happen. Develop a high-power team, equip them, and use them for what they are supposed to be doing.
(Assessment Expert)
Due to the inconsistencies in state-level administration and the overall late awakening in India in terms of focus on learning outcomes, the expert argues that notions around assessments and their purpose remained quite simplistic in India for a long time. Assessment was not understood as a complex psychometric science, which affected the policy and administrative attention it received at the central and state levels.
So compared to 15 years ago, there has been some more growth (progress) in the understanding of assessments. But generally people think that assessments is just asking questions, and teacher knows how to ask questions at the end of the class. So if I have to do it at state level and national level, it is about getting teachers together and putting together a bunch of questions. And it is all about administering and then finding out whether children passed or failed. So nobody really understood that there is a lot more things you have to do to get a fair and reliable score or representative score a population, and so on so forth. So the complexities of psychometrics and assessments did not arise, though the awareness has become much more in last two decades compared to what it was earlier. So because they don't think it is complex…. and people also think at the administrative level also…. hey just get a bunch of teachers and do this, why do we need to contract somebody outside to do that. So that is how the approach to capacity has been taken so far. That's probably what led to it. (Assessment Expert)
Given the vast capacity constraints at the state level and the irregularities due to diversity among states, it makes sense that NAS use has been elevated to initiate a culture of policy and governance through assessments in India. The expert has also observed a significant change in the perception of assessments at the state level following the NAS 2017 survey. The exercise of implementing the NAS survey has signaled the complexities involved in conducting good quality assessments. However, there is still a long way to go in terms of interpreting and using these data in the right fashion.
But of course the National Achievement Survey in last two rounds has started recognizing that there is a lot more complexity there, and they are also outsourcing it to contractors and bringing in external technical support etc. So at least awareness has been built. And now the NAS in 2017 incidentally yesterday NAS got done again for 2021 across India. So all the 733 districts are participating in it. And states are involved in implementing it and collecting the data. Scanning them and so on. So states are understanding the complexities of just collecting the data itself. If you are going to be collecting the data, scanning the data, and so on and so forth, so there are complexities even in the test rolled out. For example this year they have 4 forms in lower grades, then every other child has to get the form. So therefore, even figuring out how many forms you have to take to the school, who will administer, how do you pick the child, which child gets what form, all of those things themselves are complex enough. So I would say that because of the NAS that type of awareness has definitely been built in the states, so that they understand that doing achievement survey very robustly requires even a standardized test administration aspect. So that is very well understood. When it comes to use of data, I would say that some are using, some are not using.
People really are not fully implementing or understanding how to extract meaningful information from the data. Therefore how to put it back into the classrooms. So that is relatively weak. (Assessment Expert)
One major factor that this interview with the assessment expert does not reveal, but that could be important in maintaining poor state capacity across all these years, is India's centralized educational governance context. It is important to acknowledge that under India's centralized governance system, the policies and frameworks set up by the central government play a major role in driving state-level policies and practices due to the funding arrangement under schemes like SSA, RMSA, and Samagra Shiksha. The central government itself was largely under the influence of an inputs- and schooling-focused education policy until around 2010-12, as discussed in Chapter 6 section 6.2. This was also the time when India's topmost assessment institution, NCERT, was still building its own capacity in domain knowledge. Therefore, it is likely that in such an environment the central authorities did not impose the desired push or urgency on state authorities to act in this area. Even if there was some push, it would be hard to translate into practice given the high costs and various bureaucratic challenges associated with the state-level and street-level bureaucracy.
The case of SLAS (State Learning Achievement Survey) illustrates how limited push or assistance from the central level could be one reason behind poor state capacity. In 2013 the central government proposed that states conduct state-level achievement surveys (SLAS) and provided them with grants to do so. This was in response to increasing concerns about the poor quality of education in government schools in India in the later half of Phase 2, as discussed in Chapter 6. This was also around the time, in about 2012, when NAS was first institutionalized as a regular feature of the Indian education system. The introduction of SLAS was also an important event: states were given grants for around 2-3 years to pursue this exercise. As per the Annual Report (MHRD, 2017), in addition to grants the states were given "technical know-how for conducting such surveys through workshops followed by soft and hard copies of Standard Operation Procedure (SOP) to be used as guidelines while carrying out such surveys". However, there is little documentation available on the impact/outcome of these surveys and how they were used for governance purposes. Based on the report by Dell Foundation et al. (2021), it appears that states did not pursue the SLAS exercise consistently, and there were major inconsistencies in their approaches across states. Consultant 1 shared that this practice of funding SLAS was eventually discontinued, as the central government perceived SLAS as duplicating the efforts of NAS.
So historically there used to be something called SLAS. And at some point there was this realization that there is some duplication. Also, there isn't much expertise to do really a good survey of this kind. So the national govt used to fund…for about 3 years it funded the SLAS. And then it stopped funding. I think it said no more SLAS. We won't fund you. You can do it at state level with your own money. But we won't support it. It was exactly because of this thought that we don't want to duplicate it.
(NAS Consultant 1)
Given that there is little literature and documentation available about SLAS efforts in India, it is difficult to provide further explanation or context on this topic. However, the above example does indicate a major role of policy and governance dynamics at the central level in influencing state-level capacity to conduct assessments.
Another important aspect to note about state capacity in India is the vast diversity in capacity levels across states to conduct assessments, making it difficult for the central government to rely on them to produce or support reliable data instruments helpful for national governance. Consultant 2 explains below how a capacity-building exercise for 16 states post-NAS 2017 was successful in only 4-5 states.
(UNICEF) formed partnerships with these two organizations (ACER and AIR; after NAS 2017) to support 16 states in developing assessment capacity at the ground level. That means a group of 50-60 people capacitated in how to write good items how to design and analyze assessments, how to develop multi stakeholder reports. And it was all a practical exercise, not like delivering a workshop. It was a robust two year program designed for 16 states, involving states, and in this process we developed the capacity of 800-900 assessment experts, curriculum experts, teachers in all these 16 states. And out of these states, I can say only 4-5 states really took this exercise seriously. Because you know that when you design for 16 states, it happens that not all states do things equally. (NAS Consultant 2)
7.2.3 Addressing Conflict of Interest Issues and Incentives to Cheat
Having described the context behind state capacity issues, I now discuss an important state capacity challenge, regarding conflict of interest and cheating, that has significantly influenced the development of NAS. It also indicates another reason why the central government may want to be less reliant on states in matters such as developing a data instrument for nationwide governance. Despite significant investment during NAS 2017 in building the ground-level capacity for implementation, concerns arose post-survey about conflict of interest issues. Firstly, any assessment exercise requires a strong monitoring and controlling approach for quality execution. This is particularly challenging given India's vast scale and its flailing context, with poor capacity in the lower bureaucracy, explained in Chapter 3 section 3.4. However, the challenge became far greater post-NAS 2017, when the results began to be used in exercises such as PGI and SEQI. NAS, which before 2017 was seen as an inconsequential and customary state-level sample survey, had now become a high-stakes survey for the states. In 2017, NAS was designed by NCERT and administered by state-level bureaucratic entities; but because of the high-stakes nature of NAS 2021, it had to be administered differently from NAS 2017 to avoid involving state-level bureaucratic entities, which could create conflicts of interest and incentives to cheat. As a result, NAS 2021 was administered by the Central Board of Secondary Education (CBSE), a central-level institution that functions separately from state-level entities. For example, as explained by the NCERT Core Staff, delegating the administration to CBSE will help NAS remain protected from state-level malpractices.
Yes. One thing that has happened to NAS is that after 2017, it has become a very high stakes exam. Because it is being used in SEQI, PGI and all that.
So last time we involved states in the administration of NAS. So what the ministry has done now is that they have made it central, and the administration will not be done through states. We have a board that does big examinations, the CBSE board, will be involved in administering NAS in the state so that we do not really get affected by any malpractices at may arise because NAS 2017 became so high stakes. This is one thing we have done. (NCERT Core Staff)
A similar point was echoed by Consultant 1, who has been observing NAS over the years and considered this issue one of the biggest weaknesses of past NAS surveys.
So for example this year in 2021, there will be a NAS. So now the changes we are making, we are trying to make sure implementation goes well. So earlier they did the exercise with SCERT. Of course they also don't have that much staff, so they also seek out help from the state machinery. You have to seek help from them in terms of just organizing it. They just shouldn't be the people in the classroom conducting it. So this year now we are changing that, completely. So that other set of people will come conduct the assessment. Because the implementers and regulators shouldn't be the same, to avoid conflict of interest. So in past that has been a challenge, avoiding conflict of interest. That is one of the big weaknesses so far in the manner in which we do NAS……[…] So implementation should be by a third party. At arm's length distance. (Consultant 1)
Consultant 1 also mentioned this issue in connection with PGI, where the data are reported by states to the central government. Given the increasing importance and coverage received by PGI, it has become challenging to ensure transparent reporting practices from states and maintain the integrity of the data.
And the third thing I would like to say is some of the verification at the central level. I am not saying they should directly do this; they could hire a third party for this purpose. Because a lot of these things are self-reported. So as a state you might be honest for 1,2, or 3 initial years. But when you see that other states are inflating, you also start doing the same. And then there remains no value to this exercise after some time. So at least at a small percentage level, on a rotational basis, you also need to verify whether self-report items are accurate or not. I think those are 2-3 major changes that need to be made. (Consultant 1)
Overall, it is interesting to note how the high-stakes nature of assessments is experienced in India by the states, rather than by schools as in TBA countries, owing to the employment of NAS as the key instrument of national educational governance. This has some important implications, which I discuss in Chapter 9. It is also important to note the amount of distrust existing in the Indian federal governance system, which is also related to the issue of poor state capacity discussed in the earlier section. The issues of distrust and poor state capacity feed off each other. Poor state capacity makes states more susceptible to cheating. On the other hand, this issue of cheating could also be one reason why the central government has not relied on assessments by state governments and has instead taken the reins of governing India nationwide with NAS into its own hands, leaving states to do their own assessments independently. It could also be a major factor behind discontinuing central government funding for SLAS.
7.3 Conclusion
Overall, this chapter discusses the technical capacity issues at the central and state levels that have affected NAS 2017 and made it a district-level sample survey instead of a census survey/assessment or a national test covering all students, as seen in TBA countries. The chapter also shows that, given these capacity constraints, NAS 2017 was the only feasible instrument for the central government to govern India nationwide in a changing policy climate with a clear demand for nationwide standardized assessment data. In particular, I discuss how conducting NAS 2017 is an extremely challenging exercise, with a scope and scale comparable to PISA but with greater challenges. These challenges become more pronounced for the NAS team at NCERT because they themselves were developing their own domain expertise to conduct a survey of world-class quality in a country like India. It is not easy to develop domain expertise from within India because there are few examples outside of India that the team can reference for learning how a survey can be conducted in a country with such massive diversity, scale, and capacity constraints. India is a unique country posing unique challenges, demanding indigenous and innovative solutions to solve the problem. In doing so, the central team has to maintain the hard balancing act of preparing a survey that is not only globally legitimate but also caters to local needs and circumstances. Lack of system support, cost issues, and poor state capacity were also important factors in shaping NAS 2017, making it the only data instrument of reasonable quality for the central government to govern India nationwide. The absence of any alternative instrument is a major reason for the increasing reliance on NAS since 2017 for various uses and users (from the central government to teachers), even when the data are at the district level, collected every 3-4 years, and not granular enough to inform practices in any one of the thousands of schools in a district or the tens of thousands of classrooms in these schools.
CHAPTER 8: USING NAS FROM 2017 AS AN INSTRUMENT OF NATIONAL EDUCATIONAL GOVERNANCE IN INDIA: "NEW WINE IN AN OLD BOTTLE"?
In explaining why NAS has become central in Indian educational governance from 2017 onwards, the previous chapters have shown that there was a clear demand for nationwide standardized assessment data in India to govern the country, and that NAS 2017 was the only instrument the central government could develop due to various technical capacity constraints at the central and state levels. In this chapter, I further explain why NAS has become central in Indian educational governance by better understanding the intentions behind its uses from an organizational standpoint. Studying the rationale, intentions, and approach of the central officials in designing NAS use from 2017 onwards for standard and comparative purposes can further reveal what makes NAS attractive and feasible for the Indian education bureaucracy. Since these uses apply to both NAS 2017 and 2021, I use the term 'new NAS' to refer to both these efforts. In this chapter, I show that in designing the uses of new NAS, the intentions of the central officials are to supplement existing bureaucratic practices. They intend to support and continue these existing practices and yet shift the organizational orientation and focus to learning outcomes without creating much disruption. This could be seen as a form of incrementalism or path dependency.
Organizational theory also notes that bureaucratic organizations like the Indian government tend to be more inclined toward stability and incrementalism than disruptive innovations (e.g., Lindblom, 1989). Such an approach is helpful in ensuring organizational stability and survival (Malen and Knapp, 1997). Another important point I show is how central officials, through different measures, are demonstrating their intention of guiding and supporting states to transition to this new orientation. In this larger scheme of things, even the only novel aspect of NAS use, i.e., the visual comparisons and gradings of states through PGI, appears benign and almost unquestionable. Therefore, this embedded intention of employing new NAS to supplement existing practices, together with the measures to guide and support states, significantly explains why new NAS has emerged as a key instrument of national educational governance and has maintained a bureaucratic governance approach instead of fostering the post-bureaucratic governance seen in TBA countries.
To make this case, in this chapter I return to "standard" and "comparative" governance in sections 8.1 and 8.2 respectively. As discussed in Chapter 5, "standard" refers to the intended use of NAS for planning, teacher training, pedagogical development, curriculum design, etc. "Comparative" refers to how new NAS is being used in a comparative fashion to compare states, particularly through PGI reports and dashboards, indicating a softer governance approach. Having given specific details in Chapter 5 of how new NAS is being used for standard and comparative governance, in this chapter I discuss the organizational context shaping these uses.
8.1 Standard Bureaucratic Governance
Below I discuss how the standard use of new NAS is designed to assist existing bureaucratic practices by continuing the past focus on "system level" or aggregate outcomes and working with similar approaches through the same organizations. I also explain the measures taken to guide states in the shift towards a new policy orientation to learning outcomes through NAS.
8.1.1 Continuing the focus on "system level" (aggregate) outcomes
Programs like SSA and RMSA were more concerned with what is happening "across schools" rather than what is happening "within schools". They focused on aggregate outcomes rather than school-level outcomes. For example, some of the goals in SSA were to have all eligible children in the country complete five years of primary schooling by 2007 and eight years of elementary schooling by 2010. The new NAS is quite aligned with this focus on aggregate outcomes. The role of the central government within SSA and RMSA was to monitor progress on these aggregate outcomes and thereby make a range of decisions regarding policy, budgeting, planning, evaluation, etc. The interviewees often refer to these aggregate outcomes as "system level outcomes". Through new NAS the central government can continue the past practice of focusing on aggregate outcomes, as the data are reported at the state and district levels. Having the new NAS is even better for the central government because it also provides district-level data in addition to state-level data. This is an added advantage, as district-level data allow the central government to push state governments to use the new NAS data; earlier, due to the lack of district-level data, states did not use these data (discussed in Chapter 5, section 5.2).
More context about this practice is provided ahead in section 8.1.2. NAS is an exercise that happens approximately every 3-4 years. Therefore, the central government can see all the aggregate figures at the end of 3-4 years, as happened earlier, and make large-scale decisions. Given this scenario, NCERT core staff as well as consultants put great emphasis on the use of NAS for "system level" assessment and improvement.
Whereas when I am doing a NAS, I am not interested in the individual child. I am interested in the system. I am interested in the education system at the school, district, and state level. Also at the national level. When I am talking about bringing changes in the system, if you do a NAS in 2017, its always better to do it after a gap of 3 years. That's what we have proposed. Doing NAS after every 3 years, so that the learning from the NAS also percolates down. And if you see that NAS 2017, the results were used in National Education Policy, they were quoted there. And it has to be used at different levels-state and district, for improving the education system. (NCERT Core Staff)
8.1.2 Working with the same institutions with the same methodology
The new NAS allows the government to maintain the same working equations in the governance system and to bring change with approaches similar to those seen during the SSA and RMSA days. For example, new NAS allows reforms to be made by means of district-level planning and teacher training practices that are already institutionalized as part of SSA and RMSA, as Chapter 6 has shown. SSA and RMSA have established infrastructures, systems, and procedures for district-level planning and teacher training. The use of NAS will be introduced within these already existing systems and practices. It is not targeted at creating new institutions, structures, or procedures. It is less overwhelming and more manageable for the large Indian education bureaucracy to continue this way of working with district-specific interventions rather than school-specific ones. The new NAS is therefore designed to bring "reform within the system" rather than "reform the system". With this sentiment, Consultant 2 explains that the larger goal of NAS is to influence policy, and that in India this can only happen through existing channels. Therefore, the district-level focus of new NAS was designed to influence district-level teacher training and capacity building.
Its not just about informing children and parents. But you have to influence policy and teaching-learning practices as well. If that is the goal then we don't need to go to each and every child. Let's collect at district level. Why district level because most of the teacher training, capacity development happen at district level. And also there is lot of variation that lies at district level. And trying to connect the learning difficulty with teacher capacity development within that particular district. For example- suppose children are finding difficulty in fractions in particular district. Or let's say vocabulary. Let's connect that learning difficulty in that district to teacher professional development in that district. If children in this particular district are finding these issues, then what are the interventions we need to do that can address those gaps at planning, policy, and teacher training level. That's why we connect the evidence with activities at the district level.
District level education plan, district level teaching development, even training materials that teacher educators are going to develop are all connected to learning difficulty in those districts. (NAS Consultant 2)
Consultant 2 also shares why it is most critical to introduce the new focus on learning outcomes through education plans in India that are prepared at the district level. New NAS allows associating cost/budget with outcomes right at the district level. This practice of integrating data into plans is considered the most efficient method to bring transformation in the Indian bureaucracy. It is also directly aligned with previous SSA and RMSA practices, where UDISE data on schooling inputs were used for the same purpose. Moving forward, it appears that both UDISE and new NAS will be used in district plans when proposing yearly interventions.
But the important part is… the most critical thing is… the education data can be used in education sector plans. That is very very critical. Because now you know most of the education administrative levels are shifting from output to outcome. That means the focus is on learning. There is no second opinion about it. If the focus is on learning, then what should be our education sector plan? That means where we need to put more per cost to improve the learning outcomes. It should come right from the district. Because we have the district-level learning outcomes from 2017. So when we are developing the district level plans, we should refer to district level learning data. Again while developing state plans they should be connected to state level outcomes. And same at national level. So this is one of the easiest way to transform and achieve learning outcomes by integrating the evidences from learning assessments to education sector plans. (NAS Consultant 2)
All of the discussion above shows how the designers of NAS have not been thinking "out of the box" in conceptualizing its standard uses, but have rather maintained significant path dependency. They are maintaining the same framework of processes and practices that they followed before the focus shifted to learning outcomes. The new NAS use represents a continuation of past traditional practices but with a new orientation. In other words, it appears that new NAS is employed as a device to bring greater coordination in the existing bureaucratic organization toward a new organizational objective. Chapter 6 shows that across all phases since 1990, data have evolved to cater to bureaucratic planning and management in India. Based on the perspectives shared by the core staff and consultants, this appears to remain true even as the government shifts its focus to learning outcomes through new NAS. Further evidence for this argument comes from the way in which the consultants have spoken about the communication of new NAS data. For example, Consultant 1 shares that they put major emphasis on improving the reporting of new NAS because they wanted state governments to use the new NAS data. They tried to make sure that the reporting is user-friendly and relevant for states, to the extent that it is an easy-to-understand guide that tells them exactly what to do.
The objective of NAS is to do a health check of our system……So states should clearly know what I need to do now to mitigate if there is a challenge. So how we report makes a lot of difference. Reporting should be done in terms of diversity like gender, region, social groups etc. So are rural schools better doing than urban schools or not?
And what do we need to do based on these results? So reporting needs to be such a nice guide that the states can easily understand that these are exactly the areas where we are lagging behind and this is what we need to do for this set of students in this region. So I think that is how it should be used. (NAS Consultant 1)
Rather than rubbing states the wrong way, increasing their inconvenience while shifting the focus to learning outcomes, or overburdening their capacity to comprehend new changes, the consultant argues that new NAS has been designed to "guide" them by systematizing their interactions with district-level bodies. Earlier, due to a lack of district-specific information, states gave more homogeneous guidance to districts to improve their performance. This did not have much impact, as the prescriptions were not tailored to district-specific needs. However, now, with more district-specific data available in a user-friendly reporting format, it is believed that states can govern districts more effectively. Consultant 1 explains the same below.
At the state level, the secretary can see that 6 districts are weak in this, 5 districts have to do this etc. out of my 30 districts. And that is the kind of analysis you do at state level, and then pass it on to district. And when you are doing trainings or providing guidelines to districts you can give them recommendations based on their specific performance. That kind of an analytic and systematic work needs to be done by the state. Then the district can take on. Its not like district people have no idea. They obviously know. But we just need to see that they are empowered enough through this kind of mechanism. So I think when states don't do that (provide proper information and guidance), it leaves districts wondering, because all instructions are given to all 13 in same manner. So some districts might say this is not my problem, then why I am being told this. 50 things are told to everybody, but the relevant things to one district are maybe 12-13. And then this attitude develops that let them say what they want, and I will do what is possible for me. So you can even have a top down system, but the top needs to understand the tail (NAS Consultant 1)
8.2 Comparative Bureaucratic Governance
In this section, I explain the path dependency of comparative governance seen through the use of new NAS via PGI in India. On the surface, having PISA-like visual comparisons of states appears new to Indian education. However, I show how this practice has a significant connection to past bureaucratic practices of competitive and cooperative federalism and is used to enhance them rather than to invent a completely different governance procedure. The only thing that might be genuinely novel is the practice of grading and visually comparing states. However, even that practice remains largely unconfrontational within the larger scheme of path dependency.
8.2.1 Continuing competitive and cooperative federal practices with greater efficiency and sensitivity
The practice of comparing states is an overall institutional philosophy that governs the entire federal governance system in India. For many years now the central government has been popularizing the concept of "competitive and cooperative federalism".
This concept has not been well defined or articulated in policy documents; however, the central idea is that although the center and the states must cooperate for the development of the country, it is also important to have competition among states in order to build a sense of urgency and improve productivity. As the former secretary of NITI Aayog, an apex public policy think tank of the Government of India, describes: "Competitiveness is an idea that has stood the test of time…[..]..More recently, the concept of national competitiveness and regional competitiveness has come into the mainstream. India can only achieve its ambitious growth targets by enhancing competitiveness at all levels of government. As the literature on competitiveness notes, there exists a powerful connection between economic and social development – improving competitiveness requires investment in both. This, in turn, requires coordination of our economic and social policies across various levels of government" (Kant, 2019).
Under this philosophy, the central government has been evaluating states' performance in education, funding them via SSA, RMSA, and now Samagra Shiksha, and nudging them in particular directions by giving recommendations drawn from comparisons. This practice has been going on every year since SSA was launched in 2001. Therefore, irrespective of the visual comparisons and gradings through PGI, the practice of the central government comparing states against each other and officially recommending policy borrowing already existed. I provide three passages from my interview with a state education official below (these are not exact quotes but a summary of the conversation, as the interview was not recorded) to show how their meetings with central government officials involved these practices.
Central government has a project approval board (PAB). We have to get our budget approved from them. We have to submit a proposal of all the new initiatives we want to work on, after taking permission from our education committee. From there central government will approve it and after that work begins. So we have to approve our grant from central govt for any project. But we have some freedom. If we want to introduce any initiatives for quality, our director can use some funds to approve it. But main projects we get it approved from central government each year. (State Education Official-Samagra Shiksha)
Mostly 80% are ongoing programs, and 20% are new programs. For example- programs related to CWSN or Kasturba Gandhi, they keep running once they have been started. Then employees we have kept for such projects, they also mostly continue. So more or less 80% repeats, and around 20% new gets added. Central government also adds their own initiatives into this. That also every state has to do compulsorily. For example-govt of India has introduced program on foundational literacy and numeracy this year. Then every state has to adopt. (State Education Official-Samagra Shiksha)
For the 20% new initiatives proposed by us, they normally check on their own. We have to do a presentation in front of the Project Approval Board. If they feel that this project is good, and the results are good, and if there are any pilot results and they are good, then based on that they approve the project. This process goes for 4-5 days continuously in New Delhi. Right now due to corona it takes place online. But every year our team stays in Delhi for 6-7 days. Even our director and secretary of education are present and then the project gets approved.
There is lot of presentation and discussion in front of central officials. If they really like the project, then they also recommend it to other state as best practice. There are proper guidelines given by the central government about how to prepare your budget. So accordingly it gets approved. (State Education Official-Samagra Shiksha)
Similarly, in this spirit of competitive and cooperative federalism, even before PGI the central government had been introducing various initiatives across sectors to increase a sense of competition among states. For example, in the education sector, as per the central ministry's annual report 2016-17 (MHRD, 2017, p. 43), a web portal called ShaGun was launched in 2016 as a "repository of success stories and also to provide a platform for all stakeholders to learn from each other". It was proposed that "this would also instill a positive competitive spirit among all the States and UTs". It was also seen as an important means for the central ministry to monitor activities at the state and lower levels.
The second part (of Shagun) is regarding the online monitoring of the SSA implemented by States and UTs and will be accessed by Government Officers at all levels using their specific passwords. It comprises questionnaires, related to various interventions under SSA and the performance of the State, which will be filled in by the States and UTs. There are 122 Reports which will be automatically generated from the data filled in the questionnaires. These reports, along with the success stories in the Repository, will create an online platform which can be viewed by officers in the Department, PMO, Niti Aayog etc., to see the status of implementation of the SSA and the elementary education in all States and UTs. (MHRD, 2017, p. 43)
Given this scenario, PGI can be seen as an instrument of competitive and cooperative federalism, where states are compared against each other for competitiveness and, at the same time, given specific recommendations by the center for improvement in a "spirit of cooperation". Referring to the intentions behind having PGI, the PGI core staff describes how the central government actively conducts state comparisons and initiates policy borrowing through annual meetings, and how PGI helps as a tool to achieve this purpose.
So there should be an index, particularly the PGI, the idea was to have a combined index of both the input as well as some output indicators so that we know….and when we are doing ranking across state and union territories… then at the level of input which state is distributing across different things will vary. The outcome will also vary. Then we can actually try to tell the states that see your outcome must become better. And these are the inputs where these states are above. Probably if you can do something like that or something similar, then your outcome will also improve. So it is essentially generating a system where one can learn from others. One state can learn from its neighbors. Maybe distant or immediate neighbor. And then apply it in their own domain. States can also look at their own progress and improve. (PGI Core Staff)
The PGI core staff believes that having PGI has significantly helped the central government increase the efficiency of its annual meetings. It has become easier for them to review progress and make suggestions.
Yes definitely.
Because particularly with respect to indicators like infrastructure and all, the moment you find out that the states performance in that type of infrastructure is not that good, then there will be queries from the central government why this part is not happening. And suppose the score has improved. But in a particular indicator, the score is reducing. So immediately they will pinpoint that in this particular place the state is not doing well. So then again everybody will check, what were the proposals which the state has. To what extent those proposals have been actually approved. How much of it has materialized actually, even after approval. So all those things get highlighted through this. It is not the only way, but this definitely is also one of the good ways of finding out and improving efficiency at all levels. (PGI Core Staff) Furthermore, sharing these meeting discussions on the website helps the central ministry to spread wider awareness and bring consonance. They also attribute improvements in yearly scores to this exercise. Second thing what we directly do is…the ministry of education at the central government…it conducts regular meetings with all the state governments education departments. Particularly at the beginning of the financial year when the allocation of budget and other things are decided. In that meeting also, PGI is one part of the meeting where we present what was the states' performance in this PGI. What was the performance of the other states. And where this state has lagged from the other states. All those things. It is available also in the PGI website. You must have seen all that. Because you know a lot of things about PGI. That I have already seen. So in that website…even after discussion…we upload those presentations. Suppose in the state meeting, only 20-20 persons have listened to what we have told…subsequently they can tell others also…just see this thing. To make that happen we have kept the presentation on the website. That way also the awareness is increasing. Generally what we are finding is that…if we see the movement of the scores over the years…and then we are finding for most of the states…the scores of 18-19 were better than 17-18. Score of 19-20 has become better than 18-19. So there is a general movement of aggregate scores for all the states over the years. So definitely PGI is having a positive impact. (PGI Core Staff) Through these explanations, it appears that using new NAS via PGI is a useful strategy for the central government to continue its practices of past comparisons and policy borrowing, but with greater pressure and efficiency, by publicizing the data and bringing it under compelling visual comparisons. It is important to note that while PGI may help to generate a greater sense of urgency and competition in states, it may not necessarily disrupt relations between central and state governments. There are three examples that indicate this: 1) Well before the introduction of PGI, the states were already used to being compared and scrutinized using indicators similar to those in PGI. PGI only represents the arrangement of data in a particular manner. For example, instead of the central government reviewing states' performance separately in terms of retention rate at the primary level, secondary level, etc., it can directly visit the Access section in the Annexures of the PGI report and find all the relevant indicators. This can make the process of reviewing more efficient for the central government.
At the same time, it can also be efficient for states to have this template as they can know in advance what the central government expects from them in terms of performance. 2) The comparisons of states are made relatively, i.e., between large vs. smaller states, to ensure "fairness" (see MHRD, 2019; MoE, 2021b). The language around PGI has remained about "encouraging" states to evaluate their performance. The government report notes that this is the sentiment behind grading states instead of ranking them, as ranking is not inclusive and leads to a zero-sum game (MHRD, 2018b): The PGI has also been conceptualized as a tool to encourage States and UTs to adopt certain practices. (MHRD, 2018b, p.1) The PGI evaluation provides grades to the States and UTs, as opposed to ranking. Grading, by allowing several States and UTs to be considered at the same level, eliminates the phenomenon of one improving only at the cost of others, thereby casting a stigma of underperformance on the latter, though, in effect they may have maintained the status quo or even done better than earlier. (MoE, 2021b, p.4) It will be for the purpose of grading States and UTs rather than ranking them as it is felt that ranking leads to unhealthy competition. Further, a State/UT can improve its ranking only by pushing another State/UT down irrespective of the latter's performance. Therefore, it is entirely possible that all 36 States and UTs may reach a stage where all are doing excellently, but some of them will have to be ranked 30 and below and that will create the impression that the bottom-most States and UTs are doing badly. Grading on the other hand will allow for more than one State/UT to occupy the same grade, and therefore all 36 States and UTs to ultimately reach the highest level at the same time, which is what will, in the final analysis, symbolize a developed India. (MHRD, 2018b, p.1) 3) The PGI official indicates cognizance of the influential role of state governments in the education sector and the need to take them on board for the successful implementation of PGI and to avoid instances of "cheating". The quote below shows how the central government considers it important to take states on board since PGI is developed using the UDISE data reported by state governments to the central government. Even though the official does not state it explicitly, it does indicate their intention of eliminating the chances of "cheating" at the state level. This issue of ensuring that states do not cheat in data-related matters was also discussed in Chapter 7 section 7.2.3 in reference to NAS 2017. Considering this scenario, PGI was designed after consulting with state governments. When the parameters were decided, at that time consultation was somewhat widened. First of all inside ministry the different divisions who deal with different aspects of school education were involved. It was not an exercise where statisticians decided that let us take these parameters. So it was essentially the program divisions, the officers that actually run the programs for improvement in school education, they were very much involved in it. Then inputs were also taken from some of the state governments. Because they will ultimately be using this, they will be providing data for many items. So they were also taken on board at the stage of development. (PGI Core Staff) The PGI core official goes to the extent of saying that "Nothing can be done" if state governments are not on board.
PGI was first introduced at the state level and kept that way for 3 years before it was attempted at the district level, so that states would get some time to become comfortable with this practice. Two three things which definitely comes to my mind is that when we are preparing any index for the first time, the central government deals directly with state governments. States have state education departments, and all the districts are managed by the state education departments. So therefore, even to generate awareness it is very much important that first the state education departments come on board, and they appreciate the idea of a PGI and they start using it. To appreciate and start using it a minimum level of continuity is required. That is why 2-3 years of continuation of state PGI was very much important. And then the district level it is going. […] Education is essentially a state subject. So implementation of RTE and all lie with the state government. So unless the state government is on board nothing can be done. (PGI Core Staff) Given these examples, it is possible that PGI may help to generate a greater sense of urgency and competition in states without disrupting relations between central and state governments.
8.2.2 Explaining the only novel use of NAS: States' Visual Comparisons and Gradings
Having explained the path-dependent nature of new NAS use via PGI, one may then ask what explains the novel element in the use of new NAS, i.e. the publication of state performance gradings and visual comparisons in the public domain that leads to wide comparative media coverage. This is the only practice that did not exist in the past. The answer may lie in the fact that, given the 'comparative turn' in international governance in education and the social sector more generally, the practice of making comparisons like this has become a pervasive and indisputable norm across the world. The current central government in India, under the leadership of Prime Minister Narendra Modi, is particularly known for its penchant for modernization and technicization through strategies such as datafication, digitization, industrialization, e-governance, etc. For establishments like these, PGI-type comparisons might be particularly attractive for gaining some political mileage. Indeed, this explanation is quite plausible, as the government has also released indices similar to PGI in other sectors [e.g. the Sustainable Development Index (SDI), Health Index, Water Index, etc.] that compare Indian states on other dimensions. The PGI core staff also mentioned in the interview that the idea of doing PGI may have originated from the Prime Minister himself. However, since the person was not part of the team then, he/she did not know the full story. Therefore, my interview data unfortunately do not illuminate the role of the present political leadership in conceptualizing PGI. And actually, what I heard is that the very concept of PGI, to have a ranking system, where the educational system – the states and UTs should know where they stand with respect to each other and all, these have come from very top level of the government. Might be from the Prime Minister himself. That is what I heard. (PGI Core Staff) Furthermore, it might be unpopular for states to oppose a practice like this since comparisons already happened during the meetings with the central government. Even before PGI, the minutes of these meetings were shared in the public domain.
As described earlier, PGI's visual comparisons and gradings only refer to a particular type of arrangement of data. PGI does not rank states but grades them, eliminating the chances of "zero-sum" competition. Plus, the language around PGI is that of "encouraging states" and boosting "healthy competition". Given this scenario, there remains little for the states to complain about. As a result, even the novel element of new NAS use can be considered uncontentious for Indian educational governance.
8.3 Conclusion
After showing in Chapter 5 that, from 2017 onwards, NAS has been employed for standard and comparative bureaucratic governance by the central government in India, I provide an answer to why the use of new NAS has become central in India through Chapters 6, 7, and 8. This chapter completes the answer to this question. I show in Chapter 6, by tracing education policy and data developments since the 1990s, that there was a clear demand for national-level learning assessment data like NAS 2017 in India and an emerging data infrastructure to support such a demand. In Chapter 7, I explain the technical complexities of conducting NAS 2017 in India at the central and state levels, due to which NAS 2017, a district-level sample survey rather than a census survey or national test, is the only feasible instrument to govern India nationwide. This scarcity of alternative instruments is a major reason behind the increasing reliance on NAS. This chapter explains the emerging importance of new NAS in India from an organizational standpoint. It shows that the government's motivation behind the uses of new NAS through standard and comparative bureaucratic governance is to enhance existing bureaucratic practices. New NAS has been employed for uses that are significantly path dependent. For example, for standard bureaucratic governance, new NAS will be used in practices like district-level planning and teacher training. These practices were already institutionalized by the SSA and RMSA programs. New NAS will be included in these practices, just like UDISE has been in the past, to shift the focus to improving learning outcomes. In the case of comparative governance, new NAS is being used through PGI by the central government to direct state action, make comparisons with other states, and officially instigate policy borrowing. This practice too has been going on since the establishment of SSA in 2001. As indicated by the PGI core staff, having PGI will make this practice more efficient and create more urgency. In short, NAS is not currently put to drastically different uses (e.g. accountability) that may substantially alter governance structures or processes. Apart from path dependency, I also discuss how the central government has worked to take state governments on board with its uses of new NAS. In the case of standard bureaucratic governance, NAS reporting has been designed with the goal of guiding states on how to act and how to guide districts. In the case of comparative bureaucratic governance through PGI, steps have been taken to avoid disputes with states by including them in the PGI design process, positioning it as a way to "encourage" states, taking steps to portray fairness in comparisons, and waiting for states to "appreciate" the idea of PGI for 3 years before taking any further steps. I also explain how the only novel aspect of new NAS use, i.e.
gradings and visual comparisons through PGI, is largely uncontentious given the global norm of making such comparisons and the fact that states have little to complain about in the features of PGI, given its encouraging tone and overall framework of path dependency. All these factors, namely path-dependent uses, intentions to guide and support states, and an uncontroversial novel element, help to explain the central role of new NAS in Indian national educational governance. They clearly indicate that new NAS has been largely designed to change the system's orientation to learning outcomes, without changing the system. In other words, "reform within the system" instead of "reform the system", or "new wine in an old bottle". All the uses that new NAS is intended for already existed within the system. New uses were not created to supplement the arrival of new NAS. The central government owns the new NAS data and is making efforts to introduce the data into the bureaucracy for the convenience of the central government as well as the overall bureaucracy. New NAS is designed to serve as a common currency across bureaucratic levels for various transactions related to learning outcomes.
CHAPTER 9: CONCLUSION AND IMPLICATIONS
There is a need for greater representation in the literature of the diverse ways in which data are used as instruments of national educational governance. Currently, the literature on this issue is occupied with discussions of test-based accountability (TBA) approaches, largely from Anglo-American countries. Test-based accountability involves holding schools directly and/or indirectly accountable for test performance. TBA is recognized as a global phenomenon due to its adoption by many countries across the world. Despite different contexts, it has resulted in many convergent effects across different countries. To avoid treating the adoption and implications of TBA as a universally relevant experience, there is a need to explore the diverse ways in which countries are designing, developing, and employing data as an instrument of national educational governance based on their requirements and context. I fill this gap in the literature with an instrumental case study of the National Achievement Survey (NAS) in India, which, in spite of being an assessment of learning performance, is a data instrument for national educational governance that is different from the TBA approach. From 2017 onwards, NAS is a district-level sample survey of learning competencies conducted nationwide in India around every 3-4 years. It covers grades 3, 5, 8, and 10. NAS is not positioned as an alternative to TBA but is an instrument applied for the different purpose of bureaucratic governance. This employment indicates that NAS does not foster a transition to the more neoliberal or market-oriented post-bureaucratic governance seen in TBA countries. My study answers two research questions using diverse data sources: 1) What is NAS and how is NAS used for national educational governance in India from 2017? 2) What policy, technical capacity, and organizational factors explain NAS's central role in educational governance from 2017? In order to answer question 1, I analyze a range of documents such as government websites, dashboards, policy documents, statistical publications, media articles, expert blogs, YouTube videos, etc. In order to answer question 2, I conduct a total of 14 interviews with key officials in charge of data initiatives, officials in state education administration, and school-level staff (as noted in Chapter 4, the interviews with school staff are not used in this dissertation).
I perform thematic analyses of the interview data and, through various strategies for validating interview data, derive the findings for RQ2. The data collected for both questions and the supporting literature review help in triangulating information and reinforcing the findings. The analysis process across these questions was therefore iterative. In terms of findings, Chapter 5 answers question 1, and Chapters 6 to 8 answer question 2. Answering question 1, I find in Chapter 5 that, as an important instrument of national educational governance, NAS from 2017 is designed to be used for standard and comparative bureaucratic governance practices with unique features. Standard refers to regular bureaucratic practices such as planning, pedagogical design, curriculum development, teacher training, etc. Through its standard uses, the new NAS is trying to target many actors in the education bureaucracy, right from the central government to teachers. Comparative refers to compelling visual and numerical comparisons made across states through NAS reporting, particularly through the Performance Grading Index (PGI), which involves PISA-like comparisons and gradings of states against each other. These comparisons perpetuate and enhance a powerful yet soft form of governance to nudge states in specific directions and officially initiate cross-state policy borrowing. With these uses, new NAS becomes a unique instrument because it has been employed for core/direct governance purposes despite being a district-level sample survey conducted every 3-4 years. Generally, in TBA countries, annual data from national tests or census surveys/assessments are used for core governance purposes. The uses of new NAS also do not indicate a transition to post-bureaucratic governance as seen in TBA countries. The uses remain within the bureaucracy, for the purposes of the bureaucracy. They sustain the heavy-handed role of the bureaucracy in the Indian education system without relinquishing its functions. The level of ownership and engagement by the central government in disseminating and institutionalizing its use is also quite striking and contrasts with the experiences of other developing countries noted in the literature. In beginning to answer question 2, Chapter 6 shows that there was a real demand for standardized performance data in India, particularly from 2015 onwards, to govern India nationwide owing to global and national pressures. Until then, there was a greater emphasis on ensuring universalization through inputs and schooling (a focus on quality access, enrollment, retention, and completion with equity) over improvement in specific learning outcomes. Through the No Detention Policy, there were clear provisions to steer education processes and governance in elementary education away from testing and its effects. Data were developed to serve this policy focus. The government was also occupied with bureaucratic capacity-building initiatives throughout these phases to expand the universalization of education to other levels and collect appropriate data. However, due to rising national and global concerns about the deteriorating quality of education in government schools, there emerged a strong demand for standardized assessment data to govern India. In the absence of any other reliable source of such data, NAS became an attractive option, as it had been conducted by the government since 2001.
It was easier for the government to rely on NAS, as institutions and mechanisms for conducting NAS were already established to a certain extent. Chapter 7 shows that NAS 2017, a district-level sample survey, was the only technically feasible instrument that India could develop because of its capacity challenges at the central and state levels. This could also be a major reason behind the increasing reliance on NAS from 2017, targeting it to various uses and users from the central government to the teacher level. Given India's vast linguistic and cultural diversity, doing NAS 2017 is like doing PISA but with greater challenges, and these posed major difficulties for the team in designing the survey. These challenges were felt even more since the NAS team at NCERT was still developing its own domain expertise over the years. Doing NAS 2017 required the team to be creative in developing indigenous solutions to the problems that arose due to India's unique combination of diversity, scale, and capacity constraints at lower levels of bureaucracy (i.e. a flailing context). There were only a few examples outside of India that could be helpful to the team for this purpose. Furthermore, in making all these decisions, the NCERT team had to maintain a difficult balance between producing a world-class survey and forging local relevance. Key decisions, such as taking a sample survey approach instead of a census and hiring implementing agencies, depended on minimizing reliance on state-level authorities, as they suffer from capacity constraints and conflicts of interest. Cost-effectiveness was also a major consideration for decision-makers. Because NAS 2017 (and now also 2021) is the only technically feasible instrument, it makes sense that the government is relying heavily on it for various uses, targeting various actors from the central level to teachers, even though the data are at the district level and not granular enough to inform practices in any one of the thousands of schools in a district or the tens of thousands of classrooms in these schools. Chapter 8 shows that a major reason why, from 2017 onwards, NAS is emerging as central to national educational governance in India through standard and comparative bureaucratic governance is that NAS is designed to serve path-dependent uses without creating any disruptions in the system. It is like "new wine in an old bottle". It is designed only to shift the organizational orientation from schooling and inputs to learning outcomes, while continuing the inherited practices of the past. For example, the new NAS will be used for district-level teacher training and planning, both practices institutionalized by SSA and RMSA. It does not come with any specific provisions to improve accountability, involve external stakeholders, etc., that may play a role in shifting existing governance structures and processes. Comparative governance through PGI is also a more efficient way to pursue the same exercise of state comparisons and policy borrowing that already happens in the yearly meetings of state officials with the central government. In both areas of standard and comparative governance, efforts have been made to avoid any conflicts with states. As a result, the only novel element of visually comparing and grading states also appears unconfrontational given the global comparative turn in education and the overall path-dependent organizational approach.
9.1 Implications for Literature
From 2017, NAS as an instrument of national educational governance demonstrates many global influences.
It marks the beginning of a trend in India of decision-making and governing based on learning outcomes. It represents a major shift in policy-level thinking, whereby development and progress will be measured less by the rate of students completing school and more by the rate of students achieving a certain learning proficiency. As seen in many countries, this new NAS brings out the intimate and classroom-focused issue of student learning and makes it part of the policy concern. As new NAS provides district- and state-level learning data, it creates possibilities to hold districts, states, and the entire bureaucracy accountable/responsible for learning outcomes. Even though it is too early to conclude, access to new NAS data may also create opportunities for different stakeholders to indirectly participate in the Indian educational governance process in an unprecedented manner. Some evidence of this is already emerging. State governments in India have started hiring international consultants to help them design their state assessment systems or action plans in order to perform well in NAS. One example of this is the government of Gujarat's collaboration with the UK-based NGO Reach to Teach to bring a UK Ofsted-style model to Gujarat (Yagnik, 2019). The new NAS also demonstrates two key local or Indian attributes. First, the demand to have learning data like NAS 2017 in India did not arise merely from dissatisfaction with the performance of schools. Criticisms of deteriorating learning levels were largely targeted at the government for poor implementation of SSA programs, as educational governance is significantly centralized in India. Even today, news reports continue to blame the government for its failure to provide adequate resources and staff to schools. Second, new NAS's development and use are largely designed and employed for centralized governance processes. The data-based governance approach propagated by new NAS is heavily reliant on the path-dependent role of central, state, and district entities. The powers and responsibilities of these entities may remain the same as in SSA or may be further enhanced. The governance of education was bureaucratic and may continue to remain so under new NAS. Both these observations indicate different motivations and purposes behind data-based national educational governance in India compared to TBA countries. The Indian context shows that, due to the blame placed on the government for failing to improve learning levels through its policies and practices, and the absence of technical capacity to collect nationwide school-level performance data, the government has come up with an approach that is useful for its own bureaucratic mechanisms. This is different from the origins of the TBA approach in Anglo-American countries, which had higher technical capacity for data collection and where the federal government placed the blame for poor learning levels on schools, employing the TBA approach to hold them accountable. As discussed in Chapter 2 section 2.1, this has been called the post-bureaucratic governance approach, where competition, contractual relations, and incentives are key tools to promote goals (Maroy, 2008, 2009, 2012). In the case of India, such a transition to post-bureaucratic governance is currently not visible; rather, the bureaucratic governance approach persists.
The unique context of the Indian bureaucracy and the powerful position it occupies in overall public governance in India may have an important role to play in sustaining the bureaucratic governance approach through NAS. Arora (2007) explains that India has always adopted its economic development approach with a significant social justice component to meet the rising aspirations and expectations of its diverse population. This has resulted in the vast size and complexity of government operations. Dixit (2012) explains that the Indian government bureaucracy is answerable to multiple constituencies, making it a system of agents with multiple principals. These principals include executive and legislative bodies, the judiciary, representatives of business and industry, civil society, religious groups, etc. (Dixit, 2012). Each of these principals, especially legislatures, represents different interests (Dixit, 2012). These principals together negotiate interests, leading to a weighted average of their distinct goals, which becomes the goal that the bureaucracy serves (Dixit, 2012). Oftentimes, constitutional or legislative mandates come with vague frameworks that everyone agrees upon. As a result, the bureaucracy becomes responsible for determining their precise operation and implementation by negotiating with all stakeholders (Dixit, 2012). This leads to high transaction and governance costs, which private firms may never wish to take on as it is not profitable enough (Dixit, 2012). As a result, the Indian bureaucracy is habituated to taking up most public sector activities (Dixit, 2012), which increases the complexity of government operations and gives it major formal powers (Arora, 2007). Mathur (1991) has also noted that, in the past, influential civil servants have played an active role in deflecting reforms aimed at reducing the role of the bureaucracy. Given the continuation of bureaucratic governance through NAS in standard and comparative forms, it is likely that the effects and implications of data on educational governance might also be significantly different in India. Although it is too early to comment on the implications of new NAS for India, this study indicates a range of variables, such as a centralized governance system, a flailing bureaucracy (staff shortages, absenteeism, lack of specialization, etc. at lower levels), corruption, low technical capacity, poor school autonomy, federal politics, and massive linguistic diversity, which are absent in the current literature due to its dominance by developed economies with relatively decentralized education systems, relatively modern bureaucracies, lower levels of corruption, greater school autonomy, and relatively less linguistic diversity. This makes the literature less relevant to India and to many developing countries with similar issues. Furthermore, given that the adoption of a TBA-type approach is challenging for India (at least for some years) and that India may be under the significant influence of new NAS as the key national data instrument, the pervasive effects of TBA observed across the world and discussed in Chapter 2 may be less relevant for India. This could be an important insight for the literature, as it largely paints a "dystopic picture of infiltrating power of data in education" (Takayama and Lingard, 2019, p. 465). Case studies like this make us question this assumed pessimism about the implications of national data instruments, as they may not be globally shared or shared in those specific ways.
These types of case studies are also important for understanding how bureaucratic states respond to rising global pressures for neoliberal governance, where international practices like data-based governance may be adopted as techniques for the purpose of global alignment, modernization, new public management, etc., but without relinquishing bureaucratic control or transforming governance structures. This discussion exposes possible limits of current scholarship on data as an instrument of national educational governance. It shows how India's bureaucratic governance approach through new NAS opens up a variety of questions and concerns that need investigation and representation in the literature. Studies in the future will need to give the required attention to the socio-economic, political, organizational, technical, and cultural factors shaping the data approaches adopted by countries in order to strengthen the understanding of global vs. local phenomena and their consequences. Studies must recognize that although TBA models are spreading across the world in different forms, what they actually mean and do could be considerably different, and that there is a definite possibility of other models gaining more importance than TBA for various reasons, even if only for some years.
9.2 Implications for India
I discuss here the possible implications of new NAS in India, especially considering the absence of annual and school-level data instruments like a census survey or a national test. I note in Chapter 6 section 6.5.1 that efforts toward school-level standardized test data are under way, as NEP (2020) has proposed many strategies similar to TBA models. When that is fully implemented, it will be important to note how it interacts with the overarching framework of governance created by new NAS. However, as long as new NAS remains the overriding data instrument in India, there are some important implications that demand consideration. How having new NAS data empowers or equips schools is a major question. This is an important question because ultimately it is the teachers who play the most crucial role in boosting the learning levels of their students. As I discussed in Chapter 5, the district report cards of NAS provide district-level figures of learning performance from a survey conducted around every 3-4 years. When districts themselves are large geographic and administrative units with populations that can exceed millions, how can a teacher find this information useful in the classroom? (For example, as per Census 2011, the district of Northeast Delhi has a population of around 2.2 million over an area of 24 sq miles. Similarly, the district of Mumbai Suburban has a population of around 9.35 million over 172 sq miles.) More importantly, how can a teacher make meaning of a survey result that indicates learning competencies in an education system that is currently infamous for functioning on rote learning? What do these data mean for government schools that are running on only 2-3 teachers across grades? The government is conducting workshops to train teachers in interpreting and using these data in classrooms. However, the nature of these data and the different contexts in which teachers may operate, even within the same districts, create doubts about the utility of these data for teachers. Making key governance decisions using only district-level sample data collected every 3-4 years may not be sufficient to support schools that have special needs or circumstances.
Therefore, although it is better for India to have some learning data than no data, it is doubtful to what extent even these limited data might be helpful in shaping the classroom processes that are the main drivers of learning outcomes. Interestingly, Consultant 1's interview indicates that in the case of India there may be even more important and serious issues to worry about than whether schools and teachers find the new NAS data useful. Due to the influence of secondary board examinations in India, which are notorious for being extremely tough and competitive, there is a general fear in schools of exams conducted by governments. Consultant 1 argues that if the purposes of NAS are not communicated to schools properly, they might think of it as a school-level exam instead of a district-level exam. This could make NAS unnecessarily high stakes, even if it is not designed to be that way. This could also increase instances of cheating at the school level, which, due to India's flailing context, may become hard to monitor and reprimand. Even the schools. Even while I say that NAS is a low stakes examination, where schools have no incentive to inflate the performance, however examination has been such a big thing in Indian school education that there is always a temptation. So there needs to be effective communication that look this won't impact on you. So there is always a doubt that they will still be inflating. So 4 months before the NAS takes place, in newspapers and various mediums, you need to inform schools, parents everyone that the purpose of NAS is not individual performance. It is to understand what the state or district is doing and what can be done. I don't think we are doing an effective job of it at all. I would again say at all. At least at district level trainings this should be communicated where district tell schools that this has nothing to do with you etc. I think it is happening at small level, but we need to increase. (NAS Consultant 1) Given this insight, if a district-level sample survey conducted every 3-4 years like NAS, which provides no information about individual schools, nevertheless becomes high stakes for schools in India in the future, it could be an important case study for the world, breaking many assumptions and offering important lessons on the relationship between policy instruments, implementation, capacity, society, and culture. Secondly, since I find that NAS is a product of "reform within the system" rather than "reform the system", it may be inevitably limited in the extent to which it can bring meaningful changes. If the system itself is the problem, NAS may end up serving the problem rather than being part of the solution. Therefore, it is necessary for India to investigate and critique the purposes for which new NAS is developed and the channels through which it is deployed and used. I find an important insight on this in the interview with Consultant 2. The person explains that new NAS has spurred an influx of individual state-level assessments (of questionable quality) across the country. However, given the instability of personnel tenure in state education administration (discussed in detail in Chapter 7 section 7.2.2), these state assessments have become a tool to gain professional mileage, without critical consideration of their larger goals and purposes. This could have important implications for state educational governance in the future. And second is even the administrator for the sake of data and evidence, are overemphasizing on assessment.
As in, in the baseline we were here and now we are here. Because most of the education administrators, let say education secretaries and state project director, their tenure is 1-2 year in the state. And they want to see when I came learning assessment is this, now this is this. So they are connecting it to their personal achievement or milestone. Truly speaking, honestly speaking, it is required or not I will come later to that. But you know within a system learning cannot shift so fast. That means this baseline 6 months back children were here, and now they are let's say on top of scale. It cannot happen that way. (NAS Consultant 2) Another example of NAS reinforcing the existing system instead of transforming it comes from my observations of Gujarat state after NAS 2017. In making these observations, which are more speculative at this stage, I also lean on some of my conversations with teachers that I have refrained from using in this dissertation. In order to improve performance in NAS, the government in Gujarat has started conducting weekly curriculum-based tests in government schools (DNA, 2018). The questions for these tests are sent to the school by the government and test students' knowledge of every unit or chapter covered in school textbooks. Teachers give these tests in classrooms, grade students, and report scores to the government in a timely manner. As per the state education minister (in 2018), the purpose of these tests is to help teachers periodically review the areas where students lag behind and take corrective actions in a timely manner (DNA, 2018). He said that this activity would help to improve Gujarat's performance in future NAS surveys (Times of India, 2018). The then education secretary elaborated that government schools have had a long-standing problem of the syllabus not being taught entirely during the academic year. Therefore, the intention is also to compel teachers to complete the syllabus, and hence tests are given chapter-wise (Times of India, 2018). Teachers are required to electronically report students' scores in these tests to the government, and student-wise report cards are generated showing student performance in each learning area. In terms of the impact of the weekly tests, teachers report in my interviews that ever since the initiative was implemented, it has created an immense administrative workload for them, compromising their teaching time in classrooms. Connecting to the issue of staff shortage discussed in Chapter 3 section 3.4, it is a well-known fact that most government elementary schools in Gujarat have no more than 3 teachers. This situation has existed across the state for many years and is widely reported in the media. Red tape and the sluggish pace of bureaucratic decision-making have been blamed for the lack of teacher recruitment in schools (e.g. Times of India, 2021; Indian Express, 2022). Schools also do not have staff other than teachers, meaning there is no clerk to do the school's administrative paperwork. Therefore, with schools having 3 teachers and around 100 students, teachers spend a lot of time checking answer sheets of the weekly tests and entering results in the government database. This takes so much time that it often compromises their teaching duties. There have been many instances where teachers have spent around 3 hours entering the data in the system, but due to technical issues the data were never saved, and they had to do it all over again.
One teacher said that due to these initiatives "teachers have turned into clerks in government schools". They summarize their experience with the weekly tests as entering data into a government database. Teachers report that they are not sure about the usefulness of the data collected through the weekly tests, as these data do not make much impact on their lives apart from adding to their administrative burden. Furthermore, they point out that ever since their schools were given functional computers, they get constant demands from higher authorities to report different types of data (which the state government aggregates and analyzes to improve performance in NAS). Most of the time, these are duplicate demands. For example, one teacher reported that they received an order from higher authorities to report data on whether students in their schools own mobile phones. They had reported these data some weeks earlier but again received an order to report the same data. Teachers also report that in the last few years, demands for various types of old data, some as old as 10-15 years, on attendance, student backgrounds, finances, etc., have increased. Teachers spend a lot of time fetching data from old files and reporting it to the government. Therefore, since a lot of time is spent on such clerical activities, teachers are not able to devote adequate time to teaching and improving student performance. Teachers also have many other duties apart from this that further take away their time from teaching (the issue of multitasking discussed in Chapter 3 section 3.4). For example, the government tends to engage teachers in activities such as event management for large-scale government events, awareness-building campaigns, conducting elections, collecting data for citizen election cards, etc. Oftentimes, the state government orders government schools to celebrate various events, such as Constitution Day, Republic Day, etc., for multiple days in a row. In order to celebrate such events, teachers spend significant time preparing for celebrations, such as creating charts and banners and teaching songs, dances, skits, or speeches to students. This type of engagement also takes away their time from teaching. If the scenario of overburdening teachers with various administrative duties and tests, which do not contribute meaningfully to the work of teaching but rather distract them from it, continues, then the new NAS could usher in a range of important implications for the work of schools and teachers by reinforcing the existing centralized education system, which gives little autonomy to schools under a vast education bureaucracy. These insights from teachers also illustrate how different data instruments applied in different ways can have different implications. Since NAS is being used as the currency of national educational governance by the central government, particularly by relying on the soft power of comparative governance, the insights from teachers indicate that in practice these comparisons may also function as a compelling indirect power. The wide media coverage of NAS and PGI comparisons pitting states against each other may push states to act and introduce initiatives that boost their NAS scores. The case of weekly tests in Gujarat discussed here is an example of that. If teachers in Gujarat continue to feel administratively burdened with these tests and a range of other duties, the indirect power of comparisons may lead to further bureaucratization of teachers.
Although it is the states and not schools that are being compared and nudged, the most intense effect of these comparisons might be felt in the lives of teachers. The impact on states may be limited to the introduction of state-wide reform initiatives. But it is the teachers whose lives might be ultimately shaped by these initiatives, potentially in a counterproductive manner. Such an outcome could be quite contrary to the scenario in TBA countries, where teachers' lives are shaped by the power of school comparisons.
9.3 Implications for Developing Countries
This study on NAS has important takeaways for developing countries on similar pathways. It shows that local and international pressures make democratic central governments, like India's, adopt global reforms such as standardized assessments. However, this adoption is not a seamless process and does not translate as intended due to constraints posed by existing bureaucratic structures, norms, infrastructures, finance, and other technical capacity-related issues. The idea of introducing such assessments is "international", "world-class", and "imported". However, when decision-makers begin implementing these ideas, they encounter a range of challenges for which they may not be fully prepared. They must devise innovative indigenous solutions, as these global ideas do not come with adequate information about implementation in different contexts. In designing these local indigenous solutions, one major risk is losing the "integrity" or "quality" of the reform. This aspect of quality is particularly relevant for matters like assessments, as they inherently demand a "scientific" and "standardized" approach and also attract a lot of global attention and scrutiny. Therefore, maintaining this quality of assessments necessitates the involvement of international actors and consultants. The involvement of international consultants may have different implications in different countries. Depending upon the capacity of a country's own domain experts (the equivalent of NCERT in India), there might be some power imbalance in decision-making between country experts and international consultants. This power imbalance can be problematic as it can open doors for the entry of certain vested interests. It can create a "strategically selective and conflicting terrain" for education policy making, where certain ideas and actors gain preference over others (Verger et al., 2018, p.6). As Verger et al. (2018) explain, such involvement can, on the one hand, define problems for developing countries and, at the same time, alter their capacity to respond to these problems by themselves. Moreover, depending upon the capacity of countries' domain experts, they must also juggle the difficult task of forging world-class format and quality while maintaining local relevance. The version of the assessments that is ultimately implemented then becomes considerably different from the travelling global ideas, as it is shaped by existing capacities and governance practices in the country. Oftentimes, key global actors portray assessments as solutions to the educational problems impeding developing countries. Assessments are one of the commonly proposed means of solving the global learning crisis. Such discourses are spreading fast, assuming the transformative power of assessments. However, as this case study shows, assessments are not magic pills whose entry can transform education. Rather, assessments are in service to the existing technical capacities, organizational structures, and power dynamics.
They are designed and employed in the manner that best serves the higher authorities. Therefore, the impact of assessments will significantly depend on how they are intended and employed. Plus, any given data instrument has limitations in the social reality it can capture. The instrument can privilege certain actors over others simply on the basis of the information it can convey. Based on how assessments are designed, certain issues and processes may get highlighted more than others. Therefore, the promises of assessments in solving educational problems need serious critical reflection to accurately assess their strengths and limitations.
9.4 Limitations of Study
Having explored the policy, technical, and organizational factors explaining the importance of NAS in India from 2017, I would like to acknowledge the key perspectives that my study has not covered but that can be important in comprehensively explaining this phenomenon. India's current political leadership might have an important role to play in the rise of new NAS in India. NAS might be an important instrument for this leadership in terms of political mileage. Post-2014, the national government formed under the leadership of Prime Minister Narendra Modi is well-known for its proactive stance on initiatives related to data, digitization, e-governance, and technological development across social sectors. My interview data had two important quotes indicating that this political influence might have an important role to play in the way NAS has been elevated and used. For example, the non-government assessment expert shared with me that the current political leadership might clearly have a role to play in shaping new NAS. And I also think that to some extent it is also…the Gujarat model is what was carried over to the center. Because if you see in NAS 2017 also, the political parties in power have to give their blessing for such a large exercise. So that has been there. So if at all there has to be continuity in any of these things that you do, it depends a lot on political masters at that point of time, and what their agenda is. (Assessment Expert) A similar sentiment was echoed by the PGI core staff: that the idea for doing something like PGI came from the political leadership. And actually, what I heard is that the very concept of PGI, to have a ranking system, where the educational system – the states and UTs should know where they stand with respect to each other and all, these have come from very top level of the government. Might be from the Prime Minister himself. That is what I heard. (PGI Core Staff) However, apart from these insights, my data do not reveal anything more. I also could not find external literature to provide more context on this. Therefore, I would like to underline that my data do not explain the role of political leadership in the rise of new NAS or what it is about new NAS that might be attractive to this leadership to elevate it to this extent. The second important factor that my data are not able to explain is the role of top government bureaucrats/civil servants (i.e. education secretaries) in the central ministry. This could be a major factor because India has a tradition of top-level secretaries designing and leading major policy changes in the country (see for example: Bhattacharya, 2022; Varma, 2020). However, my data are not able to reveal the role of particular civil servants or secretaries in the process, if there was any.
143 The third factor that my data are not able to explain enough is the role of international actors. Studies have noted how international actors lobby for policy changes in developing countries through various platforms (e.g. Verger et al, 2018). Especially in the case of assessments, formal or informal lobbying by international actors might have created an important turn of events or moments making the government take notice of this issue and act in the form of new NAS. The national NAS report (NCERT, 2019) mentions the involvement of different international organizations in different aspects of developing NAS. Since they are the “experts” on technical matters, it is quite likely that many survey decisions might be taken under their heavy influence. However, I do not have enough data to fully explain and validate this. Also, capturing the role of international actors is an issue that may be better served with different research methods that can trace these actors and their actions over time, and across different spaces. 144 REFERENCES Abbott, K. W., & Snidal, D. (2000). Hard and soft law in international governance. International Organization, 54(3), 421–456. https://doi.org/10.1162/002081800551280 Acquah, D. (2013). School Accountability in England: Past, Present and Future. Centre for Education Research and Policy. https://filestore.aqa.org.uk/content/research/CERP_RP_DA_12112012.pdf?download=1 Ahumada, L., Montecinos, C., & González, A. (2012). Quality assurance in Chile’s municipal schools: facing the challenge of assuring and improving quality in low performing schools. Quality Assurance and Management, 183-192. Aiyar, Y. (2019). Schooling is not Learning. In Policy Challenges 2019-24: Charting a New Course for India and Navigating Policy Challenges in the 21st century. Center for Policy Research. https://cprindia.org/wp-content/uploads/2022/03/Policy-Challenges-2019-2024.pdf Aiyar, Y. (2016). From the Right to Schooling to the Right to Learning. India Infrastructure Report 2012: Private Sector in Education, 64. Aiyar, Y., & Kapur, A. (2019). The centralization vs decentralization tug of war and the emerging narrative of fiscal federalism for social policy in India. Regional & Federal Studies, 29(2), 187- 217. Aiyar, Y. (2011). From a Right to Schooling to a Right to Learning: Rethinking education finance. Annual Status of Education Report, 14-16. Aiyar, Y., Chaudhuri, J., & Wallack, J. S. (2010). Accountability for Outcomes. India Review, 9(2), 87-106. Anagnostopoulos, D. (2003). The new accountability, student failure, and teachers' work in urban high schools. Educational Policy, 17(3), 291-316. Anagnostopoulos, D., Rutledge, S. A., Jacobsen, R., & Henig, J. R. (2013). The Infrastructure of Accountability: Data Use and the Transformation of American Education. Harvard Education Press. Andrews, M., Pritchett, L., & Woolcock, M. (2017). Building state capability: Evidence, analysis, action. Oxford University Press. APPA (2009). Australian primary principals’ association position paper on the publication of nationally comparable school performance information. Australian Primary Principals Association. Arora, S. C. (2007). Responsible and Responsive Bureaucracy. Indian Journal of Public Administration, 53(2), 184–192. https://doi.org/10.1177/0019556120070203 ASER Centre. (n.d.). Overview of the ASER survey. asercentre.org. Retrieved September 21, 2022, from http://www.asercentre.org/Survey/Basic/Pack/Sampling/History/p/54.html Au, W. (2007). 
High stakes testing and curricular control: A qualitative meta synthesis. Educational researcher, 36(5), 258-267. Azam, M., & Hang Saing, C. H. (2017). Assessing the impact of district primary education program in India. Review of Development Economics, 21(4), 1113-1131. 145 Ball, S. J. (2003). The teacher's soul and the terrors of performativity. Journal of education policy, 18(2), 215-228. Ball, S. J., & Olmedo, A. (2013). Care of the self, resistance and subjectivity under neoliberal governmentalities. Critical studies in education, 54(1), 85-96. Ball, S., Maguire, M., Braun, A., Perryman, J., & Hoskins, K. (2012). Assessment technologies in schools: ‘Deliverology’ and the ‘play of dominations’. Research Papers in Education, 27(5), 513-533. Ball, S.J. (2008). The education debate. 4th Edition. Bristol: The Policy Press. Baltodano, M. (2012). Neoliberalism and the demise of public education: The corporatization of schools of education. International Journal of Qualitative Studies in Education, 25(4), 487-507. Bansal, S. (2022). How Rajasthan was both the best and worst state for education. The Times of India. https://timesofindia.indiatimes.com/india/how-rajasthan-was-both-the-best-and-worst- state-for-education/articleshow/88770055.cms. Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The qualitative report, 13(4), 544-559. Behal, A. (2018). Education in UTs: Chandigarh goes from bottom to top in 3 years. The Times of India. https://timesofindia.indiatimes.com/education/news/education-in-uts-city-goes- from-bottom-to-top-in-3-yrs/articleshow/62585727.cms Benveniste, L. (2002). The political structuration of assessment: Negotiating state power and legitimacy. Comparative Education Review, 46(1), 89-118. Bhattacharya, S. (2022). Amitabh Kant: The legacy of NITI Aayog’s outgoing CEO. Analytics India Magazine.:https://analyticsindiamag.com/amitabh-kant-the-legacy-of-niti-aayogs- outgoing-ceo/ Birdsall, N., Bruns, B., & Madan, J. (2016). Learning Data for Better Policy: A Global Agenda. Center for Global Development Working Paper Series. https://www.cgdev.org/sites/default/files/learning-data-better-policy1.pdf Black, S. E. (1999). Do better schools matter? Parental valuation of elementary education. The quarterly journal of economics, 114(2), 577-599. Booher-Jennings, J. (2005). Below the bubble:“Educational triage” and the Texas accountability system. American educational research journal, 42(2), 231-268. Bordoloi, M., & Kapoor, V. (2018). India: using open school data to improve transparency and accountability. UNESCO IIEP. https://www.iiep.unesco.org/en/india-using-open-school-data- improve-transparency-and-accountability Bordoloi, M., & Kapur, A. (2019). Samagra Shiksha GoI, 2020-21. Budget Briefs. Vol 12/ Issue 11. Accountability Initiative, Centre for Policy Research. https://accountabilityindia.in/wp- content/uploads/2020/01/Samagra-Shiksha-2020-21.pdf Bradbury, A. (2012). ‘I feel absolutely incompetent’: Professionalism, policy and early childhood teachers. Contemporary Issues in Early Childhood, 13(3), 175-186. Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’of early childhood education in England. Learning, Media and Technology, 44(1), 7-21. 146 Bradbury, A., & Roberts-Holmes, G. (2017). The datafication of primary and early years education: Playing with numbers. Routledge. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. 
Qualitative research in psychology, 3(2), 77-101. Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 57–71). American Psychological Association. https://doi.org/10.1037/13620-004 Brown, K. M., Anfara Jr, V. A., & Roney, K. (2004). Student achievement in high performing, suburban middle schools and low performing, urban middle schools: Plausible explanations for the differences. Education and urban society, 36(4), 428-456. Butland, D. (2008). Testing times: Global trends in marketisation of public education through accountability testing. NSW Teachers Federation. CABE. (2015). Report of CABE Sub-Committee on Assessment and Implementation of CCE and No Detention Provision (Under the RTE Act 2009). Central Advisory Board of Education, Ministry of Human Resources and Development. https://www.education.gov.in/sites/upload_files/mhrd/files/document- reports/AssmntCCE.pdf CABE. (2005). Universalisation of Secondary Education. Central Advisory Board of Education, Ministry of Human Resources and Development. https://www.educationforallinindia.com/universalisation%20of%20secondary%20education %20report%20of%20CABE%20Commuitee.pdf California Department of Education. (2022). California School Dashboard. https://www.caschooldashboard.org/ Carl, J. (1994). Parental choice as national policy in England and the United States. Comparative Education Review, 38(3), 294-322. CBHI. (2019). National Health Profile 2019 -14th Issue. Central Bureau of Health Intelligence http://www.cbhidghs.nic.in/showfile.php?lid=1147 CCS. (2019). Role of Learning Outcomes in Education Governance: Monitoring Child Progress for School Accountability and Reducing Information Asymmetries for Parents. Centre for Civil Society, New Delhi. https://ccs.in/sites/default/files/research/draft-blueprint-on-use-of- learning-outcomes-in-education-governance-april-2019.pdf Chingos, M. M., Henderson, M., & West, M. R. (2012). Citizen perceptions of government service quality: Evidence from public schools. Quarterly Journal of Political Science. Clarke, M., & Luna-Bazaldua, D. (2021). Primer on Large-Scale Assessments of Educational Achievement. World Bank. https://openknowledge.worldbank.org/handle/10986/35494 Cohen, D. (1989). Tests Fail to Pass Their Exams. In G Shrubb (ed), Mass Testing Tonic or Toxin? Australian Education Network. Colclough, C., & De, A. (2010). The impact of aid on education policy in India. International Journal of Educational Development, 30(5), 497-507. 147 Colclough, C., & Lewin, K. M. (1993). Educating all the children: strategies for primary schooling in the South. Clarendon Press. Cullen, J. B., & Reback, R. (2002). Tinkering toward accolades: School gaming under a performance-based accountability system. Department of Economics, University of Michigan. Cunningham, W. G., & Sanzo, T. D. (2002). Is high stakes testing harming lower socioeconomic status schools? NASSP Bulletin, 86(631), 62-75. Dale, R. 1999. Specifying globalisation effects on national policy: A focus on the mechanisms. Journal of Education Policy, 14(1): 1–17 Dasgupta, A. (2020). Democracy without capacity? Asia Pacific Journal of Public Administration, 42(1), 1-3. Datnow, A., & Park, V. (2012). Conceptualizing policy implementation: Large-scale reform in an era of complexity. In Handbook of education policy research (pp. 
364-377). Routledge. Davis, T., Bhatt, R., & Schwarz, K. (2015). School segregation in the era of accountability. Social Currents, 2(3), 239-259. Dell Foundation, Center for Science of Student Learning, ConveGenius, Central Square Foundation, & Educational Initiatives. (2011). Large Scale Assessments in India. https://www.dell.org/wp-content/uploads/2021/07/Large-Scale-Assessments- Report.pdf?utm_source=dellfoundation&utm_medium=referral Department for Education. (2022a). Find and Compare Schools in England. https://www.compare- school-performance.service.gov.uk/ Department of Education. (2022b) My School. https://www.myschool.edu.au/ Desai, D. (2015). Gujarat government to track child education record using UID. India Today. https://www.indiatoday.in/india/west/story/gujarat-child-education-tracking-system-uid- sarva-shiksha-abhiyan-mukesh-kumar-dise-238403-2015-02-03 Diamond, J. B. (2007). Where the rubber meets the road: Rethinking the connection between high stakes testing policy and classroom instruction. Sociology of Education, 80(4), 285-313. Diaz Rios, C. M. (2020). The Role of Policy Legacies in the Alternative Trajectories of Test-Based Accountability. Comparative Education Review, 64(4), 619-641. DiGaetano, A. (2015). Accountability school reform in comparative perspective. Urban Affairs Review, 51(3), 315-357. Dixit, A. (2012). Bureaucracy, its reform and development. Review of Market Integration, 4(2), 135-157. DNA. (2019). Shortage of teachers, poor state of schools, and lack of facilities ask “padhega, khelega Gujarat, par kaise?” DNA India. https://www.dnaindia.com/ahmedabad/report- shortage-of-teachers-poor-state-of-schools-and-lack-of-facilities-ask-padhega-khelega- gujarat-par-kaise-2725863 DNA. (2018). Gujarat Government Announces Weekly School Tests from December. DNA India. https://www.dnaindia.com/ahmedabad/report-gujarat-govt-announces-weekly-school-tests-from- december-2696189 148 Dorn, S. (2007). Accountability Frankenstein: Understanding and taming the monster. Information Age Publishing. Edwards, S. (2019). India’s re-entry to PISA triggers mixed response. Devex. https://www.devex.com/news/india-s-re-entry-to-pisa-triggers-mixed-response-94286 Edwards, T., & Whitty, G. (1992). Parental choice and educational reform in Britain and the United States. British Journal of Educational Studies, 40(2), 101-117. Fenwick, T., Mangez, E., & Ozga, J. (2014). Comparative Research in Education: A Mode of Governance or a Historical Journey?. In World yearbook of education 2014 (pp. 29-46). Routledge. Figlio, D. N., & Lucas, M. E. (2004). What's in a grade? School report cards and the housing market. American economic review, 94(3), 591-604. Figlio, D., & Loeb, S. (2011). School accountability. Handbook of the Economics of Education, 3, 383-421. Firestone, W. A., Mayrowetz, D., & Fairman, J. (1998). Performance-based assessment and instructional change: The effects of testing in Maine and Maryland. Educational evaluation and policy analysis, 20(2), 95-113. Fitz, J. (2003). The politics of accountability: A perspective from England and Wales. Peabody Journal of Education, 78(4), 230-241. Flores, B. B., & Clark, E. R. (2003). Texas voices speak out about high-stakes testing: Preservice teachers, teachers, and students. Current Issues in Education, 6. Friesen, J., Javdani, M., & Woodcock, S. D. (2009). Does public information about school quality lead to flight from low-achieving schools? IZA Discussion Papers (4632). IZA-Institute of for the Study of Labor, Bonn. 
https://www.econstor.eu/bitstream/10419/35936/1/617849242.pdf Froese-Germain, B. (2001). Standardized testing+ high-stakes decisions= educational inequity. Interchange, 32(2), 111-130. Getzler, L. S., & Figlio, D. N. (2002). Accountability, Ability and Disability: Gaming the System. National Bureau of Economic Research. Glinskaya, E., & Jalan, J. (2003). Improving Primary School Education in India: An Impact Assessment of DPEP—Phase I. World Bank, Washington, DC. Gorur, R. (2015). The performative politics of NAPLAN and MySchool. In National testing in schools: An Australian Assessment (pp. 30-43). Routledge. Government of India. (2000). DPEP Calling, vol. VI, no. 11, December 2000, New Delhi Govinda, R., & Mathew, A. (2018). Universalisation of elementary education in India: Story of missed targets and unkept promises. Council for Social Development, New Delhi. Grek, S. (2009). Governing by numbers: The PISA ‘effect ‘in Europe. Journal of education policy, 24(1), 23-37. Gupta, A. (2017). Changing forms of corruption in India. Modern Asian Studies, 51(6), 1862- 1890. 149 Guthrie, J. W., & Pierce, L. C. (1990). The international economy and national education reform: a comparison of education reforms in the United States and Great Britain. Oxford review of education, 16(2), 179-205. Hamilton, L. S., Stecher, B. M., & Yuan, K. (2012). Standards-based accountability in the United States: Lessons learned and future directions. Education inquiry, 3(2), 149-170. Hamilton, L. S., Stecher, B. M., Marsh, J. A., McCombs, J. S., Robyn, A., Russell, J., Naftel, S. & Barney, H. (2007). How Educators in Three States Are Responding to Standards-Based Accountability under No Child Left Behind. Research Brief. RAND Corporation. Hanushek, E. A., & Raymond, M. E. (2004). The effect of school accountability systems on the level and distribution of student achievement. Journal of the European Economic Association, 2(2-3), 406-415. Hardy, I. (2018). Governing teacher learning: Understanding teachers’ compliance with and critique of standardization. Journal of Education Policy, 33(1), 1-22. Hardy, I. (2015). Data, numbers, and accountability: The complexity, nature, and effects of data use in schools. British Journal of Educational Studies, 63(4), 467-486. Hargreaves, A. (1994). Development and desire: A postmodern perspective. Harris, J. (2017). Universalizing elementary education in India: Achievements and challenges. UNRISD Working Paper, 2017 (3), United Nations Research Institute for Social Development, Geneva. Hastings, J. S., & Weinstein, J. M. (2008). Information, school choice, and academic achievement: Evidence from two experiments. The Quarterly journal of economics, 123(4), 1373-1414. Hindustan Times. (2022). PGI-D report: Chandigarh seventh best among country’s 733 districts. Hindustan Times. https://www.hindustantimes.com/cities/chandigarh- news/pgidreportchandigarh-seventh-best-among-country-s-733-districts- 101656710978600.html. Ho, C. (2011). My School’and others: Segregation and white flight. Australian Review of Public Affairs, 10(1), 1-2. Hofflinger, A., Gelber, D., & Cañas, S. T. (2020). School choice and parents’ preferences for school attributes in Chile. Economics of Education Review, 74, 101946. Hofflinger, A., & von Hippel, P. (2018). The response to high-stakes testing in Chile: Did schools improve learning or merely inflate test scores? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2906552 Holloway, J. (2019). Teacher evaluation as an onto-epistemic framework. 
British Journal of Sociology of Education, 40(2), 174-189. Hossain, N., & Hickey, S. (2019). The problem of education quality in developing countries. The Politics of Education in Developing Countries: From Schooling to Learning, 1, 1-21. Hursh, D. W. (2007). Marketing education: The rise of standardized testing, accountability, competition, and markets in public education. Neoliberalism and education reform, 15-34. Hutchings, M. (2015). Exam factories. The impact of accountability measures on children and young people. 150 IANS. (2018). Gujarat teachers told to offer 'Shramdaan' to deepen water bodies. The Quint. https://www.thequint.com/hotwire-text/gujarat-teachers-told-to-offer-shramdaan-to-deepen- water-bodies#read-more IDFC Foundation. (2013). India Infrastructure Report 2012: Private Sector in Education. Routledge. IDR. (2020) IDR Explains: Local Government in India. India Development Review. https://idronline.org/idr-explains-local-government-in-india/ Indian Express. (2022) Over 3000 primary school teachers to be recruited in Gujarat. Indian Express. https://indianexpress.com/article/jobs/gujarat-govt-to-recruit-over-3000-primary- school-teachers-7718891/ Ingersoll, R., Merrill, L., & May, H. (2016). Do accountability policies push teachers out?. Educational Leadership, 73(8), 44. Jacobsen, R., Saultz, A., & Snyder, J. W. (2013). When accountability strategies collide: do policy changes that raise accountability standards also erode public satisfaction? Educational Policy, 27(2), 360-389. Jarke, J., & Breiter, A. (2019). The datafication of education. Learning, Media and Technology, 44(1), 1-6. Jessop, B. (2002). Liberalism, neoliberalism, and urban governance: A state–theoretical perspective. Antipode, 34(3), 452-472. Jones, B. D. (2007). The unintended outcomes of high-stakes testing. Journal of applied school psychology, 23(2), 65-86. Kane, T. J., Staiger, D. O., Samms, G., Hill, E. W., & Weimer, D. L. (2003). School accountability ratings and housing values [with comments]. Brookings-Wharton papers on urban Affairs, 83- 137. Kant, A. (2019). Why cooperative and competitive federalism is the secret to India’s success. World Economic Forum. https://www.weforum.org/agenda/2019/10/what-is-cooperative-and- competitive-federalism-india/ Kapoor, T. (2018). Mapping Learning through Outcomes: Understanding the amendment to the RTE. Accountability Initiative: Responsive Governance. https://accountabilityindia.in/blog/mapping-learning-through-outcomes-understanding-the- amendment-to-the-rte/ Kapur, D. (2020). Why does the Indian state both fail and succeed?. Journal of Economic Perspectives, 34(1), 31-54. Kapur, D., & Mukhopadhyay, P. (2007). Sisyphean state: Why poverty programmes in India fail and yet persist. In Annual Meeting of the American Political Science Association (Vol. 30). Kingdon, G (2019). “The NEP does not separate funding and provision of education” Central Square Foundation. https://www.centralsquarefoundation.org/articles/nep-private-schools- geeta-gandhi- kingdon.html#:~:text=However%2C%20the%20NEP%20does%20not,%2Dprivate%20partn erships%20or%20PPP). 151 Klenowski, V. (2010). Are Australian assessment reforms fit for purpose? Lessons from home and abroad. Professional Magazine, 25(November 2010), 10-15. Klenowski, V. (2011). Assessment for learning in the accountability era: Queensland, Australia. Studies in Educational Evaluation, 37(1), 78-83. Knaus, C. B. (2007). 
Still segregated, still unequal: Analysing the impact of No Child Left Behind on African American students. In The State of Black America: Portrait of the Black Male, 105- 121. National Urban League, Washington, DC. Koning, P., & Van der Wiel, K. (2013). Ranking the schools: How school-quality information affects school choice in the Netherlands. Journal of the European Economic Association, 11(2), 466-493. Koretz, D. M., & Barron, S. I. (1998). The validity of gains in scores on the Kentucky Instructional Results Information System (KIRIS). RAND Education. Koretz, D., & Hamilton, L. (2006). Testing for Accountability in K-12. In Educational Measurement (4th ed., pp. 531–578). Praeger. Kumar, A. (2019). Cultures of learning in developing education systems: Government and NGO classrooms in India. International Journal of Educational Research, 95, 76-89. Kumar, K., Priyam, M., & Saxena, S. (2001). Looking beyond the smokescreen: DPEP and primary education in India. Economic and Political Weekly, 560-568. Lawn, M. (Ed.). (2013, May). The rise of data in education systems: Collection, visualization, and use. Symposium Books Ltd. Lee, J. (2008). Is test-driven external accountability effective? Synthesizing the evidence from cross-state causal-comparative and correlational studies. Review of educational research, 78(3), 608-644. Lemke, M., Sen, A., Pahlke, E., Partelow, L., Miller, D., Williams, T., Kastberg, D. & Jocelyn, L. (2004). International Outcomes of Learning in Mathematics Literacy and Problem Solving: PISA 2003 Results from the US Perspective. Highlights. NCES 2005-003. US Department of Education. Lewis, S., & Hardy, I. (2017). Tracking the topological: The effects of standardised data upon teachers’ practice. British Journal of Educational Studies, 65(2), 219-238. Lindblom, C. E. (1989). The science of muddling through. Readings in managerial psychology, 4, 117-131. Lingard, B. (2010). Policy borrowing, policy learning: Testing times in Australian schooling. Critical studies in education, 51(2), 129-147. Lingard, B. (2009). Pedagogizing teacher professional identities. In Changing teacher professionalism (pp. 101-113). Routledge. Lingard, B., & Sellar, S. (2013). ‘Catalyst data’: Perverse systemic effects of audit and accountability in Australian schooling. Journal of Education Policy, 28(5), 634-656. Lingard, B., Martino, W., & Rezai-Rashti, G. (2013). Testing regimes, accountabilities, and education policy: Commensurate global and national developments. Journal of education policy, 28(5), 539-556. 152 Little, A. W. 2010. Access to Elementary Education in India: Politics, Policies and Progress (Research Monograph No. 44). Falmer, UK: Consortium for Research on Educational Access, Transitions and Equity, University of Sussex. Lobascher, S. (2011). What are the potential impacts of high stakes testing on literacy education in Australia? Literacy learning: The middle years, 19(2), 9-19 Lockheed, M. E., Verspoor, A. M. and Associates (1990). Improving primary education in developing countries. Oxford University Press for World Bank. Lohumi, B. (2021). 2,057 govt schools run by single teacher in Himachal. Tribune India. https://www.tribuneindia.com/news/himachal/2-057-govt-schools-run-by-single-teacher-in- himachal-318010 Madaus, G., Rusell, M., & Higgins, J. (2009). The Paradoxes of High Stakes Testing: How They Affect Students. Their Parents, Teachers, Principals, Schools, and Society. Information Age Publishing. Malen, B., & Knapp, M. (1997). 
Rethinking the multiple perspectives approach to education policy analysis: implications for policy‐practice connections. Journal of Education Policy, 12(5), 419- 445. Manolev, J., Sullivan, A., & Slee, R. (2019). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology, 44(1), 36-51. Maroy, C. (2012). Towards post-bureaucratic modes of governance: A European perspective. In World yearbook of education 2012 (pp. 62-79). Routledge. Maroy, C. (2009). Convergences and hybridization of educational policies around ‘post‐ bureaucratic’ models of regulation. Compare, 39(1), 71-84. Maroy, C. (2008). The new regulation forms of educational systems in Europe: Towards a post- bureaucratic regime. In Governance and performance of education systems (pp. 13-33). Springer, Dordrecht. Martens, K. 2007. How to become an influential actor – The ‘comparative turn’ in OECD education policy. In Transformations of the state and global governance, ed. K. Martens, A. Rusconi, and K. Lutz, 40–56. Routledge. Mathur, K. (1991). Bureaucracy in India: Development and pursuit of self-interest. Indian Journal of Public Administration, 37(4), 637-648. McGuinn, P. J. (2006). No Child Left Behind and the transformation of federal education policy, 1965-2005. University Press of Kansas. Meckes, L., & Carrasco, R. (2010). Two decades of SIMCE: an overview of the National Assessment System in Chile. Assessment in education: Principles, policy & practice, 17(2), 233-248. Mehrotra, S. (2006). Reforming elementary education in India: A menu of options. International Journal of Educational Development, 26(3), 261-277. 153 Mehta, AC. (2017). Strengthening Educational Management Information System in India through U-DISE (A Story of its Evolution), presented at Comparative and International Education Society Annual Conference, Atlanta, 2017 Mehta, AC. (n.d.). DISE 2001: Important Highlights. Education for All in India. https://www.educationforallinindia.com/dise-highlights.pdf MHRD. (2019). Performance Grading Index (PGI) 2017-18 For States and Union Territories Ministry of Human Resources and Development, New Delhi. https://www.education.gov.in/sites/upload_files/mhrd/files/PGI_2017-18.pdf MHRD. (2018a). National Achievement Survey 2017 Dashboard. Ministry of Human Resources and Development, New Delhi. https://nas.gov.in/report-card/nas-2017 MHRD. (2018b). RTE Response: Draft Annexure on Performance Grading Index of all States and UTs on School Education. Ministry of Human Resources and Development, New Delhi. MHRD. (2017). Annual Report 2016-17. Ministry of Human Resources and Development, New Delhi. MHRD. (2012). Advisory on implementation of the provisions of section 29 of the Right of Children to Free and Compulsory Education (RTE) Act, 2009 (2012), Ministry of Human Resources and Development, New Delhi. MHRD. (2011). Sarva Shiksha Abhiyan: Framework for Implementation Based on The Right of Children to Free and Compulsory Education Act, 2009. Ministry of Human Resources and Development, New Delhi. MHRD. (2009). Right to Education Act: Clarifications on Provisions. Ministry of Human Resources and Development, New Delhi. MHRD. (2005). Annual Report 2004–05. Ministry of Human Resource Development, New Delhi. MHRD. (2001). Sarva Shiksha Abhiyan Framework for Implementation. Ministry of Human Resources and Development, New Delhi. MHRD. (2000). Sarva Shiksha Abhiyan: Programme for Universal Elementary Education In India. 
Department of Elementary Education and Literacy, Ministry of Human Resources and Development, New Delhi. https://www.educationforallinindia.com/ssa.htm MHRD. (1998). National Policy on Education 1986 (As Modified in 1992). Ministry of Human Resources and Development, New Delhi. https://www.education.gov.in/sites/upload_files/mhrd/files/document-reports/NPE86- mod92.pdf MHRD. (1997). District Primary Education Program Guidelines. Ministry of Human Resources and Development, New Delhi. MHRD. (1992). Programmme of Action (PoA) Ministry of Human Resources and Development, New Delhi. https://www.education.gov.in/sites/upload_files/mhrd/files/document- reports/POA_1992.pdf MHRD. (1986). National Education Policy. Ministry of Human Resources and Development. New Delhi. https://www.education.gov.in/sites/upload_files/mhrd/files/upload_document/npe.pdf 154 MHRD. (n.d.) Framework for Implementation of Rashtriya Madhyamik Shiksha Abhiyan. Ministry of Human Resources and Development, New Delhi. Mizala, A., Romaguera, P., & Urquiola, M. (2007). Socioeconomic status or noise? Trade-offs in the generation of school quality information. Journal of development Economics, 84(1), 61- 75. MoE.(2022a) UDISE Know Your School. Ministry of Education, New Delhi. https://src.udiseplus.gov.in/ MoE.(2022b). National Achievement Survey: National Report NAS 2021. Class III, V, VIII, & X. Ministry of Education, New Delhi. https://nas.education.gov.in/reportAndResources MoE.(2022c) National Achievement Survey 2021 Dashboard. Ministry of Education, New Delhi. https://nas.gov.in/report-card/nas-2021 MoE. (2022d). Performance Grading Index (PGI) 2020-21 for States and Union Territories. Ministry of Education., New Delhi https://pgi.udiseplus.gov.in/PGI-State-2020-21- Brochure.pdf MoE.(2021a). Welcome to National Achievement Survey, Presentation on 19th August 2021 Department of School Education & Literacy, Ministry of Education, New Delhi. MoE. (2021b). Performance Grading Index (PGI) 2019-20 For States and Union Territories. Ministry of Education, New Delhi. https://pgi.udiseplus.gov.in/PGI-State-2019-20- Brochure.pdf. MoE.(2020) National Education Policy. Ministry of Education, New Delhi. Mooij, J., & Narayan, K. (2010). Solutions to teacher absenteeism in rural government primary schools in India: a comparison of management approaches. The Open Education Journal, 3, 63-71. Nayak, A. (2018). An antidote to rote learning. Forbes India. https://www.forbesindia.com/blog/economy-policy/an-antidote-to-rote-learning/ NCERT. (2019). NAS 2017: National Achievement Survey: Class III, V, & VIII. National Report to Inform Policy, Practices and Teaching Learning. National Council of Educational Research and Training, New Delhi https://nas.education.gov.in/reportAndResources NCERT. (2018). Post NAS Interventions. Communication and Understanding of the District Report Cards. National Council of Educational Research and Training, New Delhi. https://www.researchgate.net/publication/333810778_POST_NAS_INTERVENTIONS_CO MMUNICATION_AND_UNDERSTANDING_OF_DISTRICT_REPORT_CARDS NCERT. (2017). Learning Outcomes at Elementary Stage. National Council of Educational Research and Training, New Delhi.https://ncert.nic.in/pdf/publication/otherpublications/tilops101.pdf NCERT. (2016). Learning Outcomes at Elementary Stage. National Council of Educational Research and Training, New Delhi. https://www.education.gov.in/sites/upload_files/mhrd/files/Learning_outcomes.pdf NCERT. (2012). National Achievement Survey: Class 5. 
National Council of Educational Research and Training, New Delhi. 155 NCERT. (2005). National Curriculum Framework. National Council of Educational Research and Training, New Delhi. https://ncert.nic.in/pdf/nc-framework/nf2005-english.pdf New Mexico Public Education Department. (2022a) New Mexico School and District Report Cards. https://newmexicoschools.com/ New Mexico Public Education Department. (2022b) New Mexico School and District Report Cards.https://nmindepth.com/2017/report-praises-nms-school-report-cards-as-easy-to-access- read/ NIEPA. (2017). National Institute of Educational Planning and Administration. http://www.niepa.ac.in/. Nóvoa, A., & Yariv-Mashal, T. (2007). Comparative research in education: A mode of governance or a historical journey?. In Changing educational contexts, issues and identities (pp. 368-387). Routledge. NSO. (2019). Key Indicators of Household Social Consumption on Education in India. National Statistical Office. Ministry of Statistics and Program Implementation. http://www.mospi.gov.in/sites/default/files/publication_reports/KI_Education_75th_Final.pdf Nunes, L. C., Reis, A. B., & Seabra, C. (2015). The publication of school rankings: A step toward increased accountability?. Economics of Education Review, 49, 15-23. Ozga, J. (2011). Governing narratives:“Local” meanings and globalising education policy. Education Inquiry, 2(2), 305-318. Ozga, J. (2009). Governing education through data in England: From regulation to self‐evaluation. Journal of education policy, 24(2), 149-162. Parcerisa, L., Verger, A., & Falabella, A. (2020). High-stakes accountability and the expansion of a school improvement industry in Chile. A public-private sector comparison. In A. Hogan & G. Thompson (eds). Privatisation and Commercialisation in Public Education: How the Public Nature of Schooling is Changing (pp. 119-133). Routledge. Paris, S. G., & McEvoy, A. P. (2000). Harmful and enduring effects of high-stakes testing. Issues in Education, 6(1/2), 145-160. Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J. (2003). Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. National Board on Educational Testing and Public Policy. Perryman, J. (2009). Inspection and the fabrication of professional and performative processes. Journal of education policy, 24(5), 611-631. Peters, S., & Oliver, L. A. (2009). Achieving quality and equity through inclusive education in an era of high-stakes testing. Prospects, 39(3), 265-279. Polesel, J., Dulfer, N., & Turnbull, M. (2012). The experience of education: The impacts of high stakes testing on school students and their families. The Whitlam Institute, Sydney. Popham, J. W. (2006). Needed: A dose of assessment literacy. Educational leadership, 63(6), 84- 85. 156 Pritchett, L. (2009). Is India a flailing state? Detours on the four-lane highway to modernization. HKS Faculty Research Working Paper Series RWP09-013, John F Kennedy School of Government, Harvard University. Pritchett, L. (2015). Creating education systems coherent for learning outcomes: Making the transition from schooling to learning. Working Paper. Research on Improving Systems of Education (RISE), University of Oxford. Pritchett, L. (2014a). The risks to education systems from design mismatch and global isomorphism. CID Working Paper Series. Harvard University. https://dash.harvard.edu/handle/1/37366305 Pritchett, L. (2014b). 
Turning a Condition into a Problem: ASER’s Successful First Ten Tears. Annual Status of Education Report (Rural) 2014. ASER Centre. https://img.asercentre.org/docs/Publications/ASER%20Reports/ASER%202014/Articles/lant pritchett.pdf PROBE (1999). Public Report on Basic Education in India. Oxford University Press. QSA. (2009). Student assessment regimes: getting the balance right for Australia. Professional Magazine, 24(November 2009), 20-26, Queensland Studies Authority. Quah, J. S. (2008). Curbing corruption in India: An impossible dream? Asian Journal of Political Science, 16(3), 240-259. Rajagopalan, S., & Tabarrok, A. (2019). Premature Imitation and India's Flailing State. The Independent Review, 24(2), 165-186. Ramachandran, V. (2005). Why schoolteachers are demotivated and disheartened. Economic and Political Weekly, 2141-2144. Rani, P. G. (2007). Every Child in School: The challenges of attaining and financing Education for All in India. International Perspectives on Education and Society, Vol 8, 201-256. Rao, M. G., & Singh, N. (2006). The political economy of federalism in India. Oxford University Press. Ravitch, D. (2002). September 11: Seven Lessons for the Schools. Educational Leadership, 60(2), 6-9. Reay, D., & Wiliam, D. (1999). ’I’ll be a nothing’: structure, agency, and the construction of identity through assessment. British educational research journal, 25(3), 343-354. Ren, X. (2015). City power and urban fiscal crises: The USA, China, and India. International Journal of Urban Sciences, 19(1), 73-81. Rivas, A., & Sanchez, B. (2022). Race to the classroom: the governance turn in Latin American education. The emerging era of accountability, control and prescribed curriculum. Compare: A Journal of Comparative and International Education, 52(2), 250-268. Roberts-Holmes, G. (2015). High stakes assessment, teachers, and children. In Exploring Education and Childhood (pp. 83-94). Routledge. Roberts‐Holmes, G., & Bradbury, A. (2016). Governance, accountability and the datafication of early years education in England. British Educational Research Journal, 42(4), 600-613. 157 Ryan, C. (2004). The impact of early schooling on subsequent literacy and numeracy performance- estimates from a policy induced'natural'experiment. Discussion Papers, Centre for Economic Policy Research, Australian National University. Sabharwal, K. (2018). ‘No Detention Policy: Rethinking Education System of India’. International Journal of Academic Research and Development, 3(1), 609-614. Sahlberg, P. (2010). The secret to Finland’s success: Educating teachers. Stanford Center for Opportunity Policy in Education, 2, 1-8. Sanan, D. (2014). Unravelling Rural India’s enduring water indigence: framing the questions, issues, options, and opportunities. Working Paper. Centre for Policy Research. New Delhi. Sarangapani, P. M., & Vasavi, A. R. (2003). Aided programmes or guided policies? DPEP in Karnataka. Economic and Political Weekly, 3401-3408. Savage, G. (2017). Neoliberalism, education, and curriculum. Powers of curriculum: Sociological perspectives on education, 143-165. Savage, G. C. (2016). Think tanks, education, and elite policy actors. The Australian Educational Researcher, 43(1), 35-53. Savage, G. C., & O’Connor, K. (2019). What’s the problem with ‘policy alignment’? The complexities of national reform in Australia’s federal system. Journal of Education Policy, 34(6), 812-835. Shah, R. (2019a). To improve learning levels, stop labelling schools. Hindustan Times. 
https://www.hindustantimes.com/analysis/to-improve-learning-levels-stop-labelling- schools/story-wnfkejz5cKVqYv8eEVVimN.html Shah, R. (2019b). Girl child education: 20 major states “score” better than Gujarat, says GoI report. Counterview.https://www.counterview.net/2019/12/girl-child-education-20-major-states.html Sharma, K. (2021). It’s Tamil Nadu and Kerala again, plus Punjab, that top Modi govt’s school grading index. The Print. https://theprint.in/india/education/its-tamil-nadu-and-kerala-again- plus-punjab-that-top-modi-govts-school-grading-index/673194/ Sharma, R. (2022). 350 teachers went missing from duty in 5 years: Gujarat govt. The Indian Express.https://indianexpress.com/article/cities/ahmedabad/350-teachers-went-missing-from- duty-in-5-years-gujarat-govt-7821719/ Shastri, P. (2019). Gujarat: “Invest in teachers to arrest dropout rates.” The Times of India. https://timesofindia.indiatimes.com/city/ahmedabad/invest-in-teachers-to-arrest-dropout- rates/articleshow/71882759.cms Singh, P. (2022). Needed, education data that engages the poor parent. The Hindu. https://www.thehindu.com/opinion/op-ed/needed-education-data-that-engages-the-poor- parent/article65476583.ece Smith, W. C. (2014). The global transformation toward testing for accountability. Education Policy Analysis Archives, 22(116). Srinivasan, T. N., & Wallack, J. S. (2011). Inelastic Institutions: Political Change and Intergovernmental Transfer Oversight in Post-Independence India. In India Policy Forum (Vol. 7, No. 1, pp. 203-251). National Council of Applied Economic Research. 158 Stecher, B. M. (2002). Consequences of large-scale, high stakes testing on school and classroom practice. Making sense of test-based accountability in education, 79-100. Stepan, A.C. (1999). Federalism and Democracy: Beyond the U.S. Model. Journal of Democracy 10(4), 19-34. Stevenson, H. (2017). The “datafication” of teaching: can teachers speak back to the numbers? Peabody Journal of Education, 92(4), 537-557. Stotsky, S. (2000). What's at Stake in the K-12 Standards Wars: A Primer for Educational Policy Makers. Peter Lang Publishing. Takayama, K., & Lingard, B. (2019). Datafication of schooling in Japan: An epistemic critique through the ‘problem of Japanese education’. Journal of Education Policy, 34(4), 449-469. Taubman, P. M. (2009). Teaching by numbers: Deconstructing the discourse of standards and accountability in education. Routledge. Taylor, G., Shepard, L., Kinner, F., & Rosenthal, J. (2001). A survey of teachers' perspectives on high stakes testing in Colorado: What gets taught, what gets lost. CRESST/CREDE/University of Colorado at Boulder, Boulder. Teelken, C. (1999). Market mechanisms in education: School choice in the Netherlands, England and Scotland in a comparative perspective. Comparative education, 35(3), 283-302. Teske, P., & Schneider, M. (2001). What research can tell policymakers about school choice. Journal of Policy Analysis and Management: The Journal of the Association for Public Policy Analysis and Management, 20(4), 609-631. The Education Trust. (2014). New School Accountability Systems in the States: Both Opportunities and Peril. The Education Trust. https://edtrust.org/new-school-accountability- systems-in-the-statesboth-opportunities-and-peril/ The Hindu. (2019). NITI Aayog’s Education Index: Kerala tops among large states, UP scores last. 
The Hindu Business Line. https://www.thehindubusinessline.com/news/niti-aayogs-education-index-kerala-tops-among-large-states-up-scores-last/article29556131.ece Times of India. (2021). Gujarat govt schools short of 14,000 teachers. The Times of India. https://timesofindia.indiatimes.com/city/ahmedabad/continuous-evaluation-to-be-must-for-students/articleshow/67098946.cms The Times of India. (2020). Gujarat gains second spot in MHRD school performance index. The Times of India. https://timesofindia.indiatimes.com/city/ahmedabad/gujarat-gains-second-spot-in-mhrd-school-performance-index/articleshow/74274415.cms Times of India. (2018). Continuous Evaluation to be must for students in Gujarat Government Schools. The Times of India. https://timesofindia.indiatimes.com/city/ahmedabad/continuous-evaluation-to-be-must-for-students/articleshow/67098946.cms Tilak, J., Panchamukhi, P. R., & Biswal, K. (2014). Statistics on Education. Social Statistics Division, Central Statistics Office, Ministry of Statistics and Programme Implementation, New Delhi. http://www.mospi.nic.in/mospi_new/upload/Them_Paper_Education.pdf Tillin, L. (2017). India's Democracy At 70: The Federalist Compromise. Journal of Democracy, 28(3), 64-75. Tribune India. (2022). Haryana school education system ranked below Punjab. Tribune India News Service. https://www.tribuneindia.com/news/haryana/haryana-school-education-system-ranked-below-punjabs-407511 Tyack, D. B., & Cuban, L. (1995). Tinkering toward utopia: A century of public school reform. Harvard University Press. Varghese, N. V. (1994). District primary education programme: the logic and the logistics. Journal of Educational Planning and Administration, 8(4), 449. Varma, N. (2020). How civil servants are leading India's fight against Covid-19. DailyO. https://www.dailyo.in/variety/civil-servants-lav-aggarwal-rajiv-gauba-ias-officers-32673 Verger, A., Novelli, M., & Altinyelken, H. K. (2018). Global education policy and international development: A revisited introduction. Global education policy and international development: New agendas, issues, and policies, 1-34. Villas, C. (1996). Neoliberal social policy managing poverty (somehow). NACLA Report on the Americas, 29(6), 16-25. https://doi.org/10.1080/10714839.1996.11722877 Vishnoi, A. (2012). Poor PISA score: Govt blames "disconnect" with India. Indian Express. http://archive.indianexpress.com/news/poor-pisa-score-govt-blames--disconnect--with-india/996890/ VOI. (2022). Gujarat: 33,000 Primary Schools Have Just One Teacher for All Classes. Vibes of India. https://www.vibesofindia.com/gujarat-33000-primary-schools-have-just-one-teacher-for-all-classes/ Volante, L. (2007). Educational quality and accountability in Ontario: Past, present, and future. Canadian journal of educational administration and policy, (58). Ward, M. (2011). Aid to education: the case of Sarva Shiksha Abhiyan in India and the role of development partners. Journal of education policy, 26(4), 543-556. West, A., & Pennell, H. (2000). Publishing school examination results in England: incentives and consequences. Educational Studies, 26(4), 423-436. Wheare, K. (1964). Federal Government. New York: Oxford University Press. Williams, P. (2009). Analytic Report: Factors contributing to and ways of improving Australia's educational performance. Submission to the Inquiry into the Administration and Reporting of NAPLAN Testing, Senate References Committee on Education, Employment & Workplace Relations, Canberra. Williamson, B. (2017).
Big data in education: The digital future of learning, policy, and practice. SAGE. Williamson, B. (2016). Digital education governance: data visualization, predictive analytics, and 'real-time' policy instruments. Journal of education policy, 31(2), 123-141. Williamson, B. (2014). Reassembling children as data doppelgangers: How databases are making education machine-readable. In Powerful Knowledge conference, University of Bristol, Bristol (Vol. 16). Wolff, L. (2007). The Costs of Student Assessments in Latin America. PREAL. World Bank. (2003). Implementation Completion Report (IDA-26610) on a Credit in the Amount of SDR 180 million to India for a District Primary Education Project. https://documents1.worldbank.org/curated/en/676901468752758248/pdf/252660IN.pdf Yagnik, B. (2019). Gujarat Government to Grade Schools on Education Quality. Times of India. https://timesofindia.indiatimes.com/city/ahmedabad/state-govt-to-grade-schools-on-education-quality/articleshow/68208978.cms Yarema, C. H. (2010). Mathematics teachers' views of accountability testing revealed through lesson study. Mathematics Teacher Education and Development, 12(1), 3-18. Yin, R. K. (2003). Case study research: Design and methods (Vol. 5). SAGE.

APPENDIX A – GUJARAT BACKGROUND

Gujarat is the fifth-largest state in India by area and ranks fifth among all states in gross state domestic product. It is considered one of the most industrially developed states in India, a manufacturing hub, and a state with relatively low unemployment. It is also one of the few states in India with surplus electricity. Within policy circles and broad media coverage, Gujarat is generally labeled one of the "tech-savvy" states in education and general governance, and one of the pioneering states in education datafication. Gujarat was one of the first states in India to include assessments as part of the state education guidelines. It was the first state to implement a statewide school inspection and assessment drive, called Gunotsav, beginning in 2009. It was also one of the first states to initiate a biometric child-tracking system to prevent children from dropping out of school, reduce absenteeism, and improve performance (Desai, 2015).

However, despite Gujarat's high economic development and its push toward e-governance and technical advancement, its education indicators have been noted to be particularly poor in some areas. For example, a central government report (NSO, 2019) found that in the 14-17 age group, only 71% of girls in Gujarat were enrolled at the secondary and higher secondary levels, worse than 20 out of 22 major states (Shah, 2019b). As per the National Health Profile report (CBHI, 2019), Gujarat ranked 21st out of 28 states in terms of higher secondary enrolment (Shastri, 2019). At a recent public event, the state's former education minister admitted that Gujarat is a model state in many sectors but not in education (DNA, 2019). This combination of economic development and technological advancement on the one hand and poor educational outcomes on the other makes Gujarat a striking and counterintuitive case.

APPENDIX B – INTERVIEW GUIDE (SEMI-STRUCTURED)

B1. UDISE (Unified District Information System for Education)

Original intentions and vision behind UDISE data:
1. When UDISE was being planned, and even now when you have discussions around it with the government, what is the vision of UDISE that is talked about? (Probe: purpose, type of data, data platform, uses)
2. Who has been involved in designing UDISE data protocols?
3. How is it decided what data to collect and what not to collect?
4. Who is involved in deciding what data to put on the UDISE web platform? On what basis do they decide?
5. How did the idea of having school report cards in India come about? (Probe: inspiration from other countries, who was involved, intended uses and benefits)
6. How was it decided what data to include in school report cards and what to leave out?

School Level Data in UDISE
7. I have noticed that the UDISE web platform provides many instant statistics at the state and district level, like dropout rate, transition rate, promotion rate, etc. One can also compare states and districts on these statistics. But the same statistics are not available for individual schools, and we cannot compare any statistics across schools. Why do you think the website is set up like this? (Probe: central government priorities, involvement of states)
8. In your observation, how is UDISE data being used in India? (Probe: schools and families)
9. For what purposes are school report cards used in India? Why or why not? (Probe: schools and families)
10. In your opinion, what should be done to increase the use of school report cards? (Probe: improve the content of report cards, e.g., add indicators) Why or why not?

Impact of UDISE across all levels:
11. In your observation, what has been the impact of having UDISE on the work of the central government? (Probe: efficiency, accountability)
12. How about state and district governments?
13. What has been the impact on schools? (Probe: teaching, school choice)
14. How about parents or families?
15. How can UDISE be made more helpful for schools? (Probe: having student learning data) Who should take the lead in that direction?

Closing Remarks: You have such a long association with the UDISE project. You have worked with a range of stakeholders in setting up this system. This is not an easy task, given how difficult it is to bring systemic reforms in a large country like India. When you look back at your UDISE journey, what aspects of it make you feel proud?

B2. NAS (National Achievement Survey)

Original intentions and vision behind NAS:
1. What is the vision or rationale behind having the National Achievement Survey in India?
2. For what purpose is NAS data used currently?
3. Is the survey inspired by any other achievement surveys in the world? (Probe: what did you take from it, and what was left out)

School Level Data in NAS
4. NAS is designed as a sample survey, not a census survey. Why was that decision made? (Probe: feasibility and capacity issues)
5. Was there any review of experiences from other countries in making that decision?
6. How was that decision taken? (Probe: who was involved)
7. In your observation, how is NAS data being used in India? Why or why not? (Probe: schools)
8. Are there any plans or discussions around collecting individual student data in the future? Why or why not? (Probe: role of the proposed national testing agency in the future, presence of state-level surveys)
9. In your opinion, how will the NAS data help schoolteachers and principals?
10. Since the data is not available for individual students, will they face any challenges in using that data? Why?
11. Are there any plans to train or help teachers and principals in using NAS data?
12. In your opinion, what should be done to ensure that schoolteachers and principals use data?
13. Will NAS data be helpful to parents or families in any way? Why or why not?

Impact of NAS data across all levels:
14. In your observation, what has been the impact of having NAS data on the work of the central government? (Probe: efficiency, accountability) Why?
15. How about state and district governments? Why? (Probe: announcement of state achievement surveys or other initiatives)

Closing Remarks: Over time, NAS has gone through many rounds, each one building on the previous one. I am sure this is not an easy task, given how many capacity and implementation challenges we face in India. When you look back at your NAS journey, what aspects of it make you feel proud?

B3. PGI (Performance Grading Index)

PGI Purpose, Vision and Development
1. How did you come up with the idea of having PGI?
2. What is the vision behind PGI? For what purposes will it be used in the future? (Probe: funding, sanctions, incentives, etc.)
3. From where did you get the inspiration for starting PGI? Did you review such indicators in other countries?
4. Will PGI and SEQI be merged in the future, or will they coexist? Why or why not?
5. How was PGI designed? Who was involved in the development process?
6. Have you faced any challenges in calculating PGI?

Collecting School Level PGI
7. PGI is currently calculated at the state level. Why was it decided to focus only on the state level?
8. Do you plan to calculate PGI at the district and block level in the future? Why? (Probe: if state governments have such indicators)
9. Do you plan to calculate PGI at the school level in the future? Why or why not? (Probe: if some state governments have school indicators)
10. If yes, what will you use as the indicator for learning in that case, since NAS data is not available at the school level?
11. What are some challenges of calculating PGI at the school level?

Impact of PGI
12. What do you think will be the major impact of having PGI in India?
13. How will PGI affect (benefit) the work of the central government?
14. How have states responded to their PGI scores? Have they raised any concerns about it? Have they announced any initiatives or plans to improve their scores?
15. In your opinion, how should states use PGI effectively? How will it affect their work?
16. How will it affect the work of district governments?

Closing Remarks: I think PGI is a bold step in the education sector in India. I am sure you must have faced many hurdles in reaching the point of releasing this indicator for two years. When you look back on your journey from beginning work on this project till now, what aspects of it make you feel proud?

B4. State Level Data Landscape and Governance (MIS Office)

State MIS Office:
1. What are the responsibilities of your office?
2. Could you please also explain the state-to-cluster-level structure that comes under your office? (Probe: roles and functions of each level)
3. What type of data do you report regularly to the central government?
4. What type of data do district offices report to you?

State Data Landscape:
5. For what purposes is UDISE data used in Gujarat? (Probe: school report cards)
6. Apart from UDISE data, are there any other data your office collects? Why or why not?
7. Is there any specific website or platform where one can find Gujarat-specific education data and statistics?
8. Does your office play any role in the national achievement survey?
9. Before the national achievement survey, did Gujarat have any similar quality assessment? How was it conducted? Did your office play any role in that?
10. Recently, the government has announced PGI and SEQI to monitor the performance of states. What work did your office have to do for that?
11. What challenges did you face in that process?
Impact of Data on State Level Governance
12. In your experience, how have things changed in the state government after having UDISE?
13. How has the state government responded to its NAS results? Has it announced any plans to improve NAS scores? (Probe: a state achievement survey like other states)
14. How has the state government responded to its PGI and SEQI indicators? Has it announced any plans to improve its ranking?
15. Have you had any discussions with your senior colleagues about these indicators? Any discussions on what the next steps will be?

Closing Remarks: Your work of managing state-level data is really important. I am sure you must have faced many hurdles in reaching this point, where you have so much data collected from across the state. When you look back on your tenure in this office, what aspects of your work journey make you feel proud?

B5. Impact of Data on State Level Governance (State Project Office)

Roles and responsibilities of central and state government in Samagra Shiksha
1. Could you please share the responsibilities of your office?
2. What offices report to your office?
3. What are some of the key initiatives being executed under Samagra Shiksha in Gujarat?
4. Are these initiatives designed by the central government or the state government?
5. What role does the central government play in Samagra Shiksha apart from funding?
6. What role does the state government play?
7. What are the differences in the responsibilities of the state SSA office versus the state education department office?
8. How is the progress of Gujarat's Samagra Shiksha evaluated by the central office?
9. What types of data do you have to submit to them yearly?
10. Do you also have to provide data on work done in order to secure funding?
11. Apart from UDISE, do you use any other data in your office? (Probe: school report cards)

Impact of Data on State Level Governance
12. In your experience, how have things changed in the work of SSA after having UDISE?
13. Recently, there was a lot of media attention given to the performance of states in the National Achievement Survey. What steps is the state government taking to improve performance in NAS?
14. Does your office use NAS data for any purpose? Who else uses NAS data in the state? Why? (Probe: schools, families)
15. How will having NAS data affect the work you are doing?
16. The government also released new indicators to monitor state performance: SEQI and PGI. Has your office made any plans to improve performance on them?
17. How will having these indicators affect the work you are doing?

Closing Remarks: SSA has implemented many large-scale initiatives in Gujarat. What would you say are some of the biggest achievements of SSA in Gujarat?

B6. District Level Data and Governance (District Education Office)
1. Could you please share the responsibilities of your office?
2. How is the work of the district office reviewed or assessed by the state office?
3. What type of data do you have to report to state offices?
4. What role does your office play regarding UDISE?
5. How about the national achievement survey? Gunotsav?
6. Do you use UDISE data for any purpose? Why or why not? (Probe: school report cards)
7. In the last survey, NAS released district report cards. Does your office use them for any purpose? Why or why not?
8. Who else uses NAS data in the state? Why or why not?

Impact of Data on District Governance
9. In your opinion, what has been the impact of the UDISE system on the work you do?
10. If you compare the district and state governments' work in Gujarat before and after having UDISE, what key changes do you see?
11. Now that NAS data is collected, will Gunotsav still continue?
12. How will having NAS data in the future impact the work you do? Why or why not?
13. In your judgement, how can NAS data be made more helpful for your work?
14. The central government is now calculating SEQI and PGI to evaluate states' performance in education. What role does your office play in improving these indicators?

Closing Remarks: Ever since SSA and RTE were introduced, district offices have played a major role in implementing education reforms. What would you say have been some of the biggest achievements of your office in your district?

B7. Perception of Schools and Impact on Schools (School Staff)
1. What types of duties do you perform on a daily basis apart from teaching?
2. Before the pandemic hit, how often did BRC/CRC staff visit your school? For what purpose?
3. What types of data do you regularly report to the government?
4. How does the government use this data?
5. Do you use UDISE data for any purpose? Has it benefitted your work? Why or why not? (Probe: school report cards)
6. Do teachers use them? How about parents of students? Why or why not?
7. Do you have any recommendations for improving school report cards? How can their use be expanded?
8. What impact have you seen on school functioning in general after having the UDISE system?
9. Has Gunotsav been conducted in your school? Can you share what the experience was like? Did you find that exercise helpful? Why?
10. Apart from that, were there any other tests implemented in your school? (Probe: ekam kasoti) How was your experience with those tests? Why?
11. As you are aware, the national achievement survey was conducted in 2016-17. Do you use this survey data for any work you do in schools? Why?
12. Has the state government introduced any programs in schools to improve the NAS ranking?
13. How do you think having NAS data will help you in the future?
14. Do you think there is any difference between teaching for an achievement survey versus a board or school exam? Why?
15. Do you think students who prepare well for regular exams can take achievement tests confidently? Why?
16. Lastly, what type of data do you wish the government would collect so that it can benefit your work as a principal or teacher?