STRIVING FOR EXCELLENCE IN THE GLONACAL HIGHER EDUCATION SYSTEM SHAPED BY UNIVERSITY RANKINGS: A MULTIPLE-CASE STUDY ON HIGHER EDUCATION INSTITUTIONS IN SOUTH KOREA By Sohyeon Bae A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Higher, Adult, and Lifelong Education—Doctor of Philosophy 2021 ABSTRACT STRIVING FOR EXCELLENCE IN THE GLONACAL HIGHER EDUCATION SYSTEM SHAPED BY UNIVERSITY RANKINGS: A MULTIPLE-CASE STUDY ON HIGHER EDUCATION INSTITUTIONS IN SOUTH KOREA By Sohyeon Bae In many parts of the world, university rankings are now more prevalent and popular than in the entire history of higher education. With growing attention paid to university rankings, the influence of such numbers is ubiquitous on higher education institutions as well as their stakeholders. Universities around the world implement various institutional practices to enhance their ranking status and transform themselves in order to align with ranking firms’ priorities. Although university rankings are re-shaping higher education system globally, individual institutions’ responses to the rankings in different national contexts have not been extensively explored in literature. This context-focused, in-depth knowledge is needed to understand the phenomenon as well as the extensive higher education system. This study examines institutional practices implemented in response to university rankings of three four-year institutions in South Korea. The South Korean case is important in the understanding of the ranking phenomenon for its unprecedented expansion of higher education and enhancement in global university rankings within a few decades. Through a multiple-case study based on qualitative evidence, this study explored how the Korean institutions responded to global and national university rankings in the different areas of institutional practices and how the local, national, and global agencies of higher education interact with the institutions in those practices. Key findings of this study include a better understanding of the challenging environment of Korean higher education for global competitiveness and the institutions’ wide range of efforts to enhance their ranking positions through implementation of various initiatives for organization, curriculum, faculty, research, students, and marketing. Despite the similarities of existing practices, the case institutions interpreted and responded to the rankings in quite different ways depending on their goals, hierarchical positions, resources, and challenges. These differences came from the multifaceted interactions each institution had with various agencies of higher education at the global, national, and local levels. The findings underscored the importance of exploring higher education phenomenon both from the perspective of individual institutions and expansive glonacal higher education systems. This study concluded with the implications and suggestions for future higher education research. The case study protocol and research notes were also documented and provided for researchers. Copyright by SOHYEON BAE 2021 ACKNOWLEDGEMENTS I believe my life is composed of a series of journeys. Among these journeys was the one to become a Ph.D., leaving the spaces that I was used to for more than 30 years. This was the most challenging but meaningful journey of my life. I would like to give my appreciation to everyone who encouraged and inspired me to complete this special adventure successfully. 
First of all, I want to express my sincere appreciation to my colleagues and friends at Michigan State University and in the broader Lansing area. My advisor, Dr. Dongbin Kim, inspired and supported me for the past five years, since the day I arrived in Michigan. Dr. John Dirkx partnered with me in so many ways, from two globally focused fellowship programs to my dissertation defense. Dr. Brendan Cantwell and Dr. Christina Schwarz also gave me insightful advice over the past five years, as well as on my dissertation committee. HALE faculty members, Dr. Riyad Shahjahan, Dr. Leslie Gonzales, Dr. Kristen Renn, and Dr. Ann Austin, afforded me opportunities to experience how to teach, learn, study, and communicate as a scholar. My HALE colleagues and their families, including the biggest cohort ever, graduates, and the continuing stream of new students, helped me keep going. Dr. Kyle Sweitzer and colleagues in the Institutional Research Office broadened my understanding of planning and management at a U.S. research university. Korean educators in the College of Education and Korean friends in Lansing refreshed my life as an international student. It was a great pleasure to know Dr. Adam T. Grimm, Ann Grimm, Kyle Chong, Staci Stark, and colleagues in the college who were open to diversity, equity, and inclusion. I will never forget the people I met in Lansing; their warmth and humor made it possible for me to survive in the winter land.

Family, friends, and colleagues in Korea have supported me throughout the entire journey. I would like to express my heartfelt gratitude to my family, who made me feel connected and comfortable all the time, despite the physical distance. Many friends flew to Michigan or sent their love to encourage me. Dr. Ochin Choe has been my all-time friend, overcoming the time difference. I was lucky to meet Dr. Minho Yeom during my master's study, and I am grateful for the amazing support and encouragement he has gifted me over the past eight years. Dr. Hoisoo Kim has provided me with practical advice and support ever since I prepared to study abroad. The fellow workers, professors, and friends at CNU and SNU were always welcoming whenever I visited Korea and were instrumental in helping me complete the study. Thanks to all your support and concern, my Ph.D. journey ends here. As even more exciting adventures await me, I take comfort in knowing I begin with a full and delighted heart thanks to all of you. I hope soon to be together with you all.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
I. RESEARCH PROBLEM STATEMENT
    Introduction
    Purpose of Study
    Research Questions
    Higher Education in South Korea
    Significance of Study
II. LITERATURE REVIEW
    Literature on University Rankings
        Evolution of University Rankings
        Methodologies and Measures of University Rankings
        Why Rankings Prevail
        Influences of Rankings
        Summary of Literature
    Conceptual Framework
        Geographical Landscape of Korean Higher Education
        A Glonacal Agency Heuristic
        Six Main Areas of Institutional Practices
        Revised Conceptual Framework
    Summary of the Section
III. METHODOLOGY
    Design of the Study
    Sampling
        Case Selection
        Participant Selection
    Researcher's Positionality
    Limitations
    Data collection
    Data Analysis
    Validity and Reliability
        Validity
        Reliability
IV. FINDINGS
    Landscape of Korean Higher Education
        Higher Education on a Global Scale
            Globalized higher education.
            Role of university rankings.
            Growing recognition of Korean HEIs.
            Pursuing global leaders.
        Challenges of Korean HEIs
            Domestic challenges.
    Within-Case Analyses
        Case of University A
            Basic description.
            Influences of rankings on institutional practices.
                Organization.
                Curriculum.
                Faculty.
                Research.
                Students.
                Marketing.
        Case of University B
            Basic description.
            Influences of rankings on institutional practices.
                Organization.
                Curriculum.
                Faculty.
                Research.
                Students.
                Marketing.
        Case of University C
            Basic description.
            Influences of rankings on institutional practices.
                Organization.
                Curriculum.
                Faculty.
                Research.
                Students.
                Marketing.
    Cross-case Analysis: Based on a Glonacal Agency Heuristic
        Global Dimension
            Global agencies.
            Interactions among agencies.
        National Dimension
            National agencies.
            Interactions among agencies.
        Local Dimension
            Local agencies.
            Interactions among agencies.
V. DISCUSSION AND IMPLICATIONS
    Discussion of Findings
        Landscape of Korean Higher Education amid Globalization
        UA, a Marathoner in the Ranking Race
        UB, a Triathlete in the Ranking Race
        UC, a Sprinter in the Ranking Race
        Three Cases in the Glonacal Higher Education System
    Implications
        Implications for Glonacal Higher Education Agencies
        Exploring the Past, Present, and Future of the Ranking Phenomenon
        Implications for Higher Education Research
    Limitations of the Study
    Suggestions for Future Research
    Conclusion
APPENDICES
    APPENDIX A: INTERVIEW PROTOCOL
    APPENDIX B: CASE-STUDY PROTOCOL
    APPENDIX C: RESEARCH NOTES
REFERENCES

LIST OF TABLES

Table 1  Major Global University Rankings and Their Performance Indicators
Table 2  List of Interview Participants

LIST OF FIGURES

Figure 1  A Conceptual Map for a Glonacal Agency Heuristic
Figure 2  A Revised Conceptual Framework for a Glonacal Agency Heuristic in South Korean Higher Education Context Focusing on Institutional Practices
Figure C1  A Flow Chart of Sampling at UA
Figure C2  A Flow Chart of Sampling at UB
Figure C3  A Flow Chart of Sampling at UC

I. RESEARCH PROBLEM STATEMENT

Introduction

'11 MSU graduate programs rank in the top 15 nationally'
'SNU tops Asia Pacific's most innovative university rankings'
'Monash is among top global universities in latest research rankings'
'The No. 1 university in the Arab world is in Saudi Arabia'

To many people in the world today, headlines like those above have become ordinary and familiar. Rankings of universities are now more prevalent and popular than ever before in the history of higher education; they are widely shared through the news media, institutional websites, and university marketing materials and have become a source of academic legitimacy to the public in many countries. University rankings, hierarchical positions of higher education institutions (HEIs) determined by several numerical indicators, have been proliferating as competition among institutions has intensified due to the massification and globalization of higher education (Altbach, 2012). When they first appeared in the early 20th century as national rankings, their influence was limited to audiences within a specific nation, as only a few institutions were included in their lists.
However, they have now evolved into global rankings embracing thousands of HEIs in many countries, exerting more substantial influence on higher education systems and the students seeking to attend them globally (Hazelkorn, 2013).

With growing attention paid to university rankings, their influences are now ubiquitous across a broad set of stakeholders in higher education. Rankings now serve as a crucial tool for governance in higher education and as indicators of economic competitiveness for nations (Erkkilä & Piironen, 2018). National governments have initiated special projects for the improvement of higher education to build globally renowned (i.e., highly ranked) HEIs (Marginson & van der Wende, 2007). HEIs around the world analyze their ranking results and make institutional efforts to achieve higher standings in the rankings (Hazelkorn, 2015). More directly, the rankings have exerted significant influences on students' college enrollment decisions across countries. Students often choose institutions they want to attend depending on institutional rankings and information presented by ranking agencies (Shin & Toutkoushian, 2011).

Among these several entities of higher education systems influenced by university rankings, HEIs around the world seem to experience the most dramatic changes under the influence of the rankings. According to Hazelkorn's (2007, 2008) study on how HEIs in 41 different countries reacted to global university rankings, nearly two-thirds of the responding institutions replied that they considered their rankings and indicators in institutional decision-making and planning processes. To avoid falling in the rankings, HEIs started to devise ways to enhance their ranking status and to modify their institutional practices based on what was assessed in the rankings. For example, considering the research performance emphasized in major global rankings, institutions made intentional efforts to recruit high-achieving scholars, encouraged greater research productivity through performance-based incentives and faculty assessment, and promoted publications in English-language peer-reviewed journals (Hazelkorn, 2015). The rankings even influenced various aspects of HEIs such as their organization, curriculum, student education, and marketing. With the pervasiveness of these strategies, HEIs in many parts of the world are gradually being transformed into the image of universities striving for excellence in ways that align exactly with ranking priorities (Marginson, 2016).

Although these institutional efforts seek to improve their institutions, it is questionable whether excellence in ranking indicators equates to excellence in the quality of education. Scholars have argued that university rankings are not precise, sophisticated devices to assess the multifaceted educational outcomes of HEIs; rather, they are simple instruments illuminating limited aspects of those outcomes for broad comparison (Altbach, 2012; Kehm, 2014). Even if there are different types of university rankings using several indicators, the rankings capturing full attention from stakeholders have been those assessing limited quantifiable elements, such as numbers of publications, citations, student-faculty ratios, and industry income, consistently over a long period of time, as seen in Table 1.
These widely accepted rankings have been criticized for methodological unreliability, exclusion of qualitative indicators, data inaccuracy, and biases towards specific academic fields and types of schools, such as technology or engineering schools (Kehm, 2014).

Table 1
Major Global University Rankings and Their Performance Indicators

Times Higher Education (2021): Teaching reputation, faculty/student ratio, awarded doctorates, research income, publications, citation impact, proportion of international students and staff, international co-authorship, industry research income

Quacquarelli Symonds (2021): Academic and employer reputation, faculty/student ratio, citations per faculty, international faculty and student ratio

Academic Ranking of World Universities (2020): Number of Nobel Prize and Fields Medal winners, number of highly cited researchers, papers published in Nature and Science, papers in Science Citation Index journals

If HEIs around the world homogeneously transform their education in pursuit of excellence in the rankings, this transformation would not necessarily culminate in the enhancement of education quality. In fact, HEIs in many parts of the world have changed their allocation of resources to give priority to the educational outcomes recognized in major rankings, which has led to controversial operational or institutional changes, such as a heavy emphasis on research over teaching, or on some disciplines, such as science and engineering, over others (Hazelkorn, 2013). Pursuing a single idealized image of HEIs, regardless of the different educational needs, resources, and missions of individual institutions, may result in the waste of institutional resources that could have been invested in more fundamental and realistic tasks expected by stakeholders (Altbach, 2015). For example, if an institution founded and supported to educate its local community invests more of its resources in attracting international students/scholars in hopes of improving its global rankings, much-needed resources may be diverted from its primary institutional mission.

Even though university rankings have exerted such a significant influence on HEIs around the world, many parts of the ranking phenomenon in higher education (i.e., the growing recognition and utilization of university rankings among HEIs and stakeholders) have not been explored extensively. Extant literature on the rankings mostly consists of historical studies, critical examinations of their methodologies, and explorations of societal factors that explain their proliferation. Less attention has been paid by researchers to their influences on HEIs (Hazelkorn, 2015). With the limited body of research on the influences of rankings, the extant literature only provides limited perspectives on the ranking phenomenon, rather than a comprehensive description of the implications of rankings in different societal contexts. More specifically, previous studies on the rankings' influences are largely based on either surveys of administrators at several institutions or interviews with faculty and administrators of a single institution (Hazelkorn, 2007, 2008; Locke et al., 2008; Yudkevich et al., 2016). Therefore, their findings seemed either too broad to reflect differences among individual institutions or too bounded to facilitate transnational or trans-institutional comparison.
To fully understand the influences of rankings on HEIs, more extensive studies examining the interplay between the rankings and several types of institutions in a specific national context are necessary. Of primary 4 consideration, the rankings would be differently integrated into higher education system of each nation developed by its unique history, law, policies, and funding (Marginson, 2006) as well as societal needs. Secondarily, since the higher education system in a nation consists of various HEIs with different purposes and levels of reputation, how institutions understand and react to the given rankings varies significantly. Examining institutions’ reactions to the rankings within a national context would greatly advance our understanding of how the rankings change a variety of institutions in different ways and transform a national higher education system as a whole. Purpose of Study The purpose of this study is to better understand how university rankings shape a higher education system in a specific national context focusing on institutional responses to various rankings. In the present globalized higher education environment, agencies of higher education are closely interconnected to each other and exert influences at the different dimensions including local, national, and global levels. When it comes to rankings, a variety of ranking agencies including national and global rankers are interrelated and have effects on the national and global education systems. From an individual institution’s perspective, an institution as an agency, takes various levels of actions to cope with the pressures of being ranked, acted upon from other agencies (e.g., government, global/national ranking agencies, and peer institutions). Institutional reactions to rankings are salient in various areas of institutional practices including students’ education, faculty, research, curriculum, organization, and marketing. Anchored in this expansive framework illuminating interactions/influences among local, national, and global agencies, this study explored how HEIs in a specific nation react to the rankings and how the external agencies of higher education interact with one another. 5 This study examined institutional practices implemented to respond to university rankings of four-year HEIs providing various academic programs in South Korea. HEIs selected for this study are diverse in terms of their status in ranking systems, operational control, size, and location. Since few studies have examined the influences of university rankings on multiple institutions within a national context, this approach offers an extensive view on the ranking phenomenon encompassing differences across institutions as well as distinctiveness of the national context. Through a multiple-case study design (Yin, 2018), the findings of this study illustrated thick descriptions of each case institution and analyses of emerging themes applicable across cases. By doing so, this study offers an elaborate conceptualization of the ranking phenomenon illuminating the interconnections existing among different local, national, and global agencies. This conceptualization offers a more nuanced understanding of higher education issues in the globalized environment, serving as a lens to examine the ranking phenomenon from a more expansive perspective. 
Research Questions As higher education is globalized and transformed to a mass form, HEIs that had been operating primarily within their local and national contexts in the past could extend their scope of influence beyond these boundaries. In this globalized higher education system, HEIs, educating not only domestic students but also global talents for broader impacts, engage in intense competition with their peer institutions and are expected to respond to internal and external pressures for excellence both on the national and global stage. University rankings as a product of institutional competition on a global scale prevail across different higher education systems in the world. The growing popularity of global university rankings has influenced higher education policies and initiatives both at the national and institutional levels, which has 6 accelerated competition among HEIs at the national level through the selective funding programs and increased public attention to national rankings concurrently. In this context, the following research questions guided the case-studies of HEIs in a national context (i.e., South Korea) for this study. 1) How does the university respond to global and national university rankings? - How are the responses presented in different areas of institutional practices? 2) How do the local, national, and global agencies of higher education interact with the university to implement institutional practices in response to university rankings? The first question aims to explore the influence of university rankings from the perspective of individual institutions by focusing on what institutions plan and implement in response to global and national rankings. In this study, institutional responses to both global and national university rankings are considered at the same time, since these rankings at the different levels are interconnected with one another, which makes it difficult to draw a simple distinction between their influences on the Korean higher education system. For example, one of the major media companies in South Korea, Choseon Ilbo, started publishing national rankings in collaboration with the Quacquarelli Symonds’ (QS) Asian university rankings. As seen in this case, global rankings seem to be closely linked to national rankings and used to compare performances of HEIs’ within the national system in South Korea. In this context, it would be natural for HEIs to make efforts to examine and enhance both their national and global rankings. The sub-question is to investigate diverse institutional responses to these rankings more systematically by classifying them based on different areas of practices. The second question is to expand understanding of the institutional responses to rankings beyond institutional boundaries. This question allows for examination of how different agencies of higher education at the local, national, and global 7 levels, such as peer institutions, governments, and global rankers, are interacting with individual institutions to lead to specific practices in response to the rankings. Through this question, it was possible to acquire more comprehensive knowledge in the dynamics of responding to the rankings in the broad higher education environment. Higher Education in South Korea Although previous research on global university rankings illustrated that HEIs in various countries made institutional efforts to enhance their global reputation (Hazelkorn, 2015), this study explores HEIs and the South Korean higher education system. 
The selection was primarily based on the significance of South Korea’s rapid shift to adapt to global higher education dynamics (Gopinathan & Altbach, 2005), which can help advance understanding of the ranking phenomenon. From this perspective, the Korean case is an example of how a country which had been considered to be economically or developmentally peripheral can be developed and transformed to exert substantial impacts on global higher education. Given the distinctive geo- political landscape of South Korea and Korean HEIs’ recent achievements in global university rankings, ranking phenomenon in Korea merits extensive exploration. South Korea, surrounded in East Asia by the People’s Republic of China (China), Democratic People’s Republic of Korea (North Korea), and Japan, has developed its higher education system under considerable external influence. First, the first establishment of modern HEIs was led by the non-Koreans. Although institutions for educating Confucian scholars at the higher level had existed since the ancient time in the Korean peninsula (Green, 2015), new types of western-style institutions teaching western knowledge were founded by Christian missionaries who came to the peninsula in the late 19th century (Lee, 1989; Lee, 2004). During the first half of the 20th century when Korea was colonized by the Empire of Japan, higher education in Korea 8 was used as a tool of colonial oppression by Japan to denationalize and discriminate against Koreans (Lee, 1989). During this period, Korean education unavoidably acquiesced to Japanese and western (mainly German and American) academic models (Kim, 2005). As the United States led the reconstruction of South Korea after World War II, the American model of four-year bachelor’s higher education degree programs were introduced to Korea with curriculum and organization also followed the American model (Lee, 1989). The South Korean government encouraged students/scholars to study internationally in developed countries and return to support South Korea’s recovery from the war and nation-building. These graduates, returning from overseas HEIs, occupied leading positions in the society including professorial positions (Byun & Kim, 2011; Kim & Lee, 2006). The establishment of modern higher education system in Korea, highly influenced by external forces, would closely reflect the formation of the center and peripheral dynamics in global higher education (Altbach, 1987). The South Korean case, in this sense, seemed to retain major features of the countries regarded as the periphery within the historical understanding of global higher education. However, South Korea’s rapid socioeconomic development can serve as a counterexample showing that the center-periphery framework would not fully elucidate the dynamics of the socio-economic systems around the world (Gopinathan & Altbach, 2005). This distinction shown in the socioeconomic development seems to also impact its higher education. The Korean higher education can be an impressive case for it has accomplished remarkable expansion and development within only a few decades from devastation after the Korean War. With the rapid economic growth driven by government-led economic reconstruction from the 1960s, the higher education sector started to expand to meet the increased demands for advanced learning in the 1980s (Green, 2015). 
Over the past few decades, Korea showed enormous 9 expansion in higher education and the higher education sector included 430 HEIs, junior colleges and four-year universities, that educated around three million students as well as 80,000 from overseas countries as of 2018 (Korean Educational Development Institute, 2018). The sector, which provided opportunities for only seven percent of age cohorts in the mid-1970s, supported almost 70 percent of high school graduates in the early 2000s (Kim & Lee, 2006). According to the Organisation for Economic Co-operation and Development (OECD, 2019), the higher education completion rate of Korea for people aged 24-35 was 69.6 percent in 2018 (one of the world’s highest), while that of 55-64 years-old age group was only 23.1 percent. These data show the rapid and substantial expansion of higher education access and completion in South Korea. South Korea became one of the advanced higher education systems in the world attracting and educating thousands of students from various countries around the world, especially students from Asian developing countries. In terms of university rankings, the development of Korean higher education reveals how the national government support and intervention can enhance the institutional educational outcomes assessed in the rankings through various policies. As the global competitiveness of higher education was not considered to be equivalent to that of its economy, the Korean government started national projects to build quality HEIs renowned in the global higher education market from the mid-1990s (Byun et al., 2013). Governmental projects such as Brain Korea 21 (1999-2012) and World Class University (2008-2012) primarily provided funding to a few selected research teams and institutions to promote knowledge production and publication. These efforts led to an enormous increase in the number of research papers in Science Citation Index (SCI) journals (Green, 2015). HEIs in Korea also took various actions, such as changing their governance structure, tenure and incentive system, and faculty recruitment, to be highly 10 ranked, globally renowned institutions (Shin & Yang, 2013). This wide range of strategic efforts show what types of initiatives and programs are prioritized and pursued in order to succeed in the rankings. Reflecting these national and institutional efforts, the global university rankings of Korean HEIs have been enhanced greatly for the past decade. For example, in the world university rankings of the Times Higher Education (THE), the number of Korean institutions within the global top 1,000 which was only four in 2011, grew up to 24 in 2020 (THE, 2011; THE, 2020). In the academic ranking of world universities (ARWU), 32 institutions were ranked in the global top 1,000 as of 2019, compared to 11 institutions in 2011 rankings (ARWU, 2011; ARWU, 2019). Considering that Korea has relatively few HEIs compared to other nations (e.g., China or the U.S.), increased recognition of Korean HEIs in the global rankings is quite notable. However, Korea’s vigorous pursuits of global excellence also led to a serious dilemma over whether to invest and improve the global dimension or national/local dimensions of HEIs (Shin & Jang, 2013). For example, Korean HEIs, encouraging English-medium instruction to internationalize their campuses as well as succeed in the rankings, confronted serious objection from faculty/students for the limited English proficiency (Cho, 2012). 
Thus, Korean HEIs seem to decide which dimensions to focus on under the strong influence of the rankings. These accomplishments and challenges of Korean institutions seem to offer a potential model for many countries aspiring to build globally renowned HEIs and exploring methods and motivations for achieving their purposes, especially in Asia, as other nations follow the South Korean developmental model (Schuman, 2009).

The geo-political dynamics and the expansion of higher education both within the nation and beyond its borders, as well as its enhanced status in global rankings, make the Korean case distinctive and meaningful in exploring the ranking phenomenon. Since the global reputation of HEIs, baked into the primary criteria of university rankings, cannot be acquired within a short period of time, the rapid growth of Korean HEIs' global reputation/recognition is especially noteworthy. The Korean case clearly shows how HEIs are rapidly adapting to the influence of global, national, and local forces and seeking to enhance their ranking status. Therefore, this case is highly informative for building a concrete conceptualization of the ranking phenomenon in a higher education system where various internal and external forces are interacting with one another. This conceptualization can offer insights for analyses of other cases of global higher education issues by expanding analytical perspectives beyond national or institutional boundaries (Yin, 2018). In this sense, examining the Korean context greatly extends our knowledge of the rankings' influences on HEIs by supporting analytic generalizations that move beyond a specific higher education context.

Significance of Study

This study of how HEIs in South Korea react to university rankings contributes to a more extensive understanding of the ranking phenomenon and of the local, national, and global contexts in the globalized higher education landscape. More specifically, by exploring diverse institutional efforts to enhance ranking status within a single nation, this study provides meaningful knowledge for audiences who are interested in exploring institutional practices in response to the rankings. This is achieved within a discussion of the conceptual and historical context of the ranking phenomenon.

First, this study addressed the gaps in the higher education literature about university rankings, which has often focused more on methodological concerns and theorizations of university rankings, by looking closely into the practices and interpretations of diverse institutions. As briefly discussed earlier, previous empirical research on the rankings had examined limited types of institutions in limited national contexts (Azman & Kutty, 2016; Lo, 2014). Therefore, their analyses tended not to recognize the different responses of institutions as active, independent agents of higher education and offered little discussion of the ranking phenomenon. In this regard, this study provides vivid descriptions of institutional practices, heard directly from the voices of diverse professionals in different areas of a given institution, rather than abstract theorization. Moreover, this study's multi-layered approach to the ranking phenomenon, capturing local institutional contexts as well as the national and global contexts of higher education, enhances our knowledge of how university rankings change HEIs' governance and how global, national, and local agencies interact to inform these processes, especially in the globalized higher education environment.
This close examination of the phenomenon offers a more concrete conceptualization of the influence of rankings, which can be employed in analyses on other types of global higher education issues. Second, although the scope of examination is limited to a specific national context (South Korea), this multiple-case study on Korean HEIs provides implications transportable to other HEIs beyond South Korea. While it would be challenging for many institutions to emulate or aspire to replicate the few top-ranked ‘world-class universities’, it is also known that many HEIs in the world aspire to enhance their reputational status in the university rankings. This dissertation’s case studies explore representative examples of these aspirational universities, which have taken up varying positions in the global rankings and have strived to improve their ranking/reputation with mixed results. As Korean HEIs have obtained recent international recognition within a short period of time, this study explores the changes and challenges they have experienced while pursuing global excellence. This study provides insights to examine the 13 ranking phenomenon from a more holistic perspective. The cases of Korean HEIs in this study are relevant to a breadth of audiences, such as administrators from various HEIs aspiring to be ranked higher on the global stage and higher education researchers investigating the ranking phenomenon. In this sense, the findings of this study may inform higher education stakeholders in devising and implementing institutional initiatives for university rankings under consideration of both their expected benefits and drawbacks such as successful marketing, increased public support, overemphasis on research over teaching, and concentration on specific outcomes measured in the rankings. Finally, this study is meaningful in its exploration of the consequences associated with the unprecedented proliferation of university rankings in the past few decades from institutional perspectives. Most of the empirical studies on the rankings were published within the last decade only a few years since the widening acceptance of global university rankings. This study, exploring the ranking phenomenon after a decade of rankings dominating the conversation about HEIs’ performance, can offer new insights into the broad influences of the rankings which have extended on the higher education environment over the period of time. Although it might be difficult to draw a direct comparison between the findings of previous studies and current study due to the differences in research design and context, this study, containing full descriptions on the influences of rankings in the South Korean context and their implications, helps for a better understanding of the past, present, and future of the ranking phenomenon. 14 II. LITERATURE REVIEW Previous research on university rankings—after decades of observation and examination since their advent in the past century —can be categorized into two broad groups of literature, (a) exploring aspects of rankings and (b) analyzing influences of rankings. Literature exploring aspects of rankings includes studies tracing the evolutionary history of rankings, investigating their methodologies and measures, and exploring socio-cultural environments leading to their prevalence. 
The latter strand, literature analyzing influences of rankings, which has been relatively less extensive than the former, has explored how the ranking phenomenon can be contextualized and how rankings have changed higher education in several national settings (Hazelkorn, 2015). As university rankings have gained more attention worldwide, the research foci, which initially emphasized their methodological drawbacks, seem to have expanded to their impacts on society. This section provides a brief overview of the extant literature on university rankings, including the above-mentioned research on the history, methodology, environments, and influence of rankings. Following the overview, the conceptual framework guiding this study is illustrated.

Literature on University Rankings

Evolution of University Rankings

Ranking universities, that is, allocating hierarchical positions to higher education institutions along a single ordered list, is not a completely new idea or phenomenon in many parts of the world (Stuart, 1995; Lo, 2014). University rankings spring from the long history of competition between institutions to attract desired ('quality') students, faculty, and resources (Shin & Toutkoushian, 2014). As Altbach (2012) observed, their burgeoning is natural and inevitable amidst the massification of higher education. Since the early 20th century, researchers from Europe and the U.S. have devised various methods or indicators to compare HEIs or academic programs (Stuart, 1995). These pioneer rankings gained attention and evolved into the basis of contemporary university rankings.

Usher (2016) and Hazelkorn (2018) explained the historical evolution of university rankings by periodizing rankings' history, focusing on the emergence of key rankings. In their categorization, university rankings, devised and changed by the socio-political demands of their times, have developed in four different phases. The first phase includes the above-mentioned pioneer rankings in the early and mid-1900s, which assessed HEIs based on reputation survey results and the number of renowned figures in specific academic fields. The second phase was from the 1960s to 2000, when commercial/media companies such as U.S. News and World Report developed rankings in response to "massification, student mobility and marketisation of higher education" (Hazelkorn, 2018, p. 7). During this period, following the US rankings' example, university rankings comparing HEIs nation-wide were initiated in countries like China in 1993 by a research group, South Korea and Japan in 1994 by media companies, and Germany in 1998 by a think tank (Dunrong, 2016; Kehm, 2014; Nam et al., 2018; Yonezawa et al., 2002). In the early 21st century, a new era of university rankings started when the Academic Ranking of World Universities (ARWU) was first published by Shanghai Jiao Tong University, followed by a number of global rankings including Times Higher Education (THE), QS, and Webometrics (Usher, 2016). In Phase 4 (from 2008), the rankings have expanded to supra-national formats, managed by international organizations to meet the need to assess the quality of education across countries (Hazelkorn, 2018). Thus, rankings, incessantly transforming to meet the ever-changing societal demands placed on HEIs in different eras, became accessible and useful to wider, global audiences over time.

As seen from their evolution along with the history of higher education, university rankings prevail across higher education systems both nationally and globally in the 21st century.
University administrators and leaders, who started to recognize the impacts of rankings, changed their practices to increase their status in rankings (Hazelkorn, 2015; Shin & Toutkoushian, 2011). National policy makers also started to design and implement multifaceted projects to enhance the quality of higher education with the goal of elevating their rankings, which seems to be assessed through the GURs and global league tables (Marginson & van der Wende, 2007; Dill, 2009). For example, some Asian countries including China, South Korea, Japan, and Singapore designed national projects to elevate their universities’ standing in global rankings by allocating resources for research (Byun et al., 2013). With the unprecedented rise of global student mobility in the past few decades, rankings gained more popularity among students by serving as a determinant in their educational decision making (Shin & Toutkoushian, 2011). Reflecting these growing emphases and interests on the rankings, the GURs’ websites receive millions of visitors and take on more critical roles for broader impacts on the world of higher education (Shahjahan et al., 2020). The literature on the history of rankings provides explanations on the backgrounds of the rankings, as well as emphasizes the necessity of examining the ranking phenomenon in various contexts. Methodologies and Measures of University Rankings As university rankings have evolved, ranking methodologies have also been modified over time to reflect the changes in higher education environment and criticisms of their accuracy or adequacy. Especially in the initial stages of university rankings, methodological concerns had constituted a significant proportion of the research on this area (Hazelkorn, 2015). Rankers have developed and adopted several quantifiable indicators to assess mainly research and teaching 17 performances of institutions among various aspects of education (Shin & Toutkoushian, 2011). Scholars have attempted to examine these methods and indicators used to rank institutions and identified their underlying assumptions, shortcomings, and biases. Despite the multiplicity of indicators used in different rankings, research performance and teaching quality of institutions have been the key aspects of the evaluation by most of the rankers to determine which institutions perform better than others. The rankers assessed research performance using the number of published articles, number of citations, and external research funding, while teaching quality would be assessed by student-faculty ratio, expenditure for instruction, and employer satisfaction survey results (Shin & Toutkoushian, 2011). For the global rankings, the indicators employed included the number of Nobel prizes recipients, reputational survey results, and research income (Altbach, 2012). The national rankers developed more comprehensive quantifiable indicators reflecting specific national contexts to assess the research and teaching aspects of HEIs (Çakır et al., 2015). For example, one of the domestic rankings in South Korea employed variables to assess research and teaching such as the amount of research funding per faculty, number of publications, number of citations, industry income, number of faculty, class size, and full-time faculty ratio (Joong Ang Ilbo, n.d.-a). 
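Most rankers, global and national alike, combine such variables through the same basic arithmetic: each indicator is normalized, multiplied by a weight, and summed into a single composite score by which institutions are then ordered. The short illustration below sketches this weighted-sum logic; the indicator names, weights, and scores are hypothetical and do not reproduce the actual weighting scheme of any ranker discussed in this study.

In general form, an institution's composite score S can be written as

    S = \sum_{i=1}^{n} w_i x_i,   with \sum_{i=1}^{n} w_i = 1,

where x_i is the institution's score on indicator i, normalized to a common scale (for example, 0 to 100), and w_i is the weight the ranker assigns to that indicator. With three hypothetical indicators, say research output (w = 0.5, x = 80), reputation survey (w = 0.3, x = 60), and faculty/student ratio (w = 0.2, x = 70), the composite score would be S = (0.5)(80) + (0.3)(60) + (0.2)(70) = 72, and the published league table simply orders institutions by S.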
Although there are differences among the specific types of variables used by various rankers (Hazelkorn, 2013; Stack, 2016), most university rankings, at both the global and national levels, rely primarily on quantifiable educational data on the research, teaching, and reputation of HEIs.

The use of these simple, quantifiable indicators, which enables a broad comparison between institutions regardless of their types and geographical locations, often leads to criticisms and questions about the accuracy and rationality of the data. Scholars have criticized rankings for using unreliable methodology and neglecting qualitative indicators (Kehm, 2014). One of the most evident sources of methodological ambiguity is the use of reputational survey results, which are basically compiled from respondents from different parts of the globe who have limited knowledge of the assessed institutions (Altbach, 2012). Universities that become more visible to the public, through hiring star professors or recruiting greater numbers of international students/scholars, are likely to get higher reputation scores (Altbach, 2012). Also, since rankings reduce the quality of education to a handful of quantifiable indicators, they can only show limited aspects of an institution, not the whole array of education it offers (Hazelkorn, 2018). These quantity-based indicators lead to disproportionate emphases on research over teaching, hard sciences over soft sciences, and bigger research institutions over smaller specialized institutions, since their products are more quantifiable and visible (Altbach, 2012; Hazelkorn, 2013; Teichler, 2011). Furthermore, the indicators used to assess institutions have not been proven or tested to determine whether they adequately measure the performance of a specific HEI (Dill, 2009). Rather, they seem to measure the reputation of a given HEI in a higher education system, which is relatively dubious and socially constructed (Kehm, 2014).

In sum, the studies reviewed in this section, examining the methodologies and measures of university rankings, have brought up crucial questions as to whether their measures (and the targets of those measures) are valid and appropriate. These studies contributed to revisions of the ranking criteria when rankings were first developed and served to legitimize rankings by allowing their audiences to participate in the refining process for the final products (Hazelkorn, 2015). However, in the current higher education environment, it may be more fruitful to explore aspects of rankings other than their methodologies. Although scholars have criticized the measures of rankings for a long period of time, rankings have survived and now have greater impacts on a global scale (Hazelkorn, 2018). Furthermore, rankers now use improved and sophisticated measures in a more reliable, consistent manner to build trust among their audiences. Considering these changes in context and needs, it is meaningful to conduct more studies on the prevailing ranking phenomenon from various perspectives, beyond the methodological concerns.

Why Rankings Prevail

As university rankings are published, quoted, and highlighted all over the world by stakeholders of higher education, they have become a key research topic in higher education that cannot be overlooked or disregarded (Lo, 2014). To fully understand the ranking phenomenon, scholars have made special efforts to offer adequate theorizations of why university rankings have persisted since the mid-1990s all around the world.
As Hertig (2016) accentuated, university rankings have proliferated because they align perfectly with the contemporary zeitgeist, in which daily life is constantly assessed and ranked based on measurable criteria. However, these simple, empirical measurements contain limited information about the value or legitimacy of various entities (e.g., universities, hospitals, restaurants), despite the public attention and even entertainment value they attract. In this sense, university rankings have become popular as a type of infotainment (information-cum-entertainment) satisfying broader audiences (Bowden, 2000). Together with these explanations, researchers have interpreted this phenomenon as an inevitable consequence of the massification, accountability, globalization, and marketization of higher education (Altbach, 2012; Shin & Toutkoushian, 2011).

During the second half of the 20th century, higher education systems in many parts of the world experienced transitions from elite to mass education with the rapid increase in the number of enrolled students (Guri-Rosenblit et al., 2007). Massification of higher education has brought numerous changes to various aspects of education, including institutions' sizes, numbers, governance, finance, and student recruitment (Trow, 1974). As the scope of higher education expanded in terms of the number of both students and institutions, stratification and competition among institutions accelerated (Teichler, 2011). Institutions need to identify their distinctive niche in the expanded higher education system and prove their quality of education to attract more qualified students than peer institutions. Amid this fierce competition, university rankings, showing and comparing the educational outcomes of institutions, came to attract public attention and take on wider significance (Teichler, 2011; Yang & Chan, 2017).

With the increased attention and investment in higher education, accompanied by greater access to HEIs, higher education systems in many countries have confronted the issue of accountability (El-Khawas, 2007). Institutions, drawing direct or indirect governmental support, are now expected to be more responsible for their institutional actions and their educational outcomes. Many governments around the world imposed additional responsibilities on institutions by assessing and monitoring their performance in education and research (Toma, 2007; Yonezawa et al., 2002). This global demand for institutional responsibility and quality assurance of higher education gave rise to the "quantification of accountability" (Espeland & Sauder, 2016, p. 21), since all institutional performance was reduced to quantifiable indicators, which seem measurable, transparent, and auditable. University rankings can be interpreted as one of these accountability measures, displaying selected numerical indicators and calculated scores for each institution. In this era of quantification, they became an important social practice, salient in academic work, policymaking, and media (Erkkilä & Piironen, 2018).

Globalization and marketization of higher education are other important forces driving the development and usage of university rankings. Globalization spurs higher education, which has long been bound to national borders, to wider international engagement (Altbach & Knight, 2007).
As higher education systems come to interact more closely with one another, institutions are now required to take part in the unforgiving competition of a single global market with their peer institutions beyond their national contexts (Marginson & van der Wende, 2007). To make their institutions take the lead in the big race for human intellectual resources, national governments applied market competition logics to higher education (Dill, 2009). Under a marketized higher education system, the price mechanism determines various aspects of education, including student education, research, and institutional activities (Brown, 2015). Therefore, institutions need to promote their educational products and compete with their peers internationally to secure human talent (Dill, 2009). In this globalized and marketized education environment, university rankings serve to offer information on education providers for students (i.e., customers) and on competing peers around the world for institutions (i.e., producers) (Hazelkorn, 2018).

Influences of Rankings

As university rankings have become household measurement tools beyond the academy, researchers in various academic disciplines have started examining their influences on society from various perspectives. Given the contextual complexity each institution navigates, analyzing the influence of rankings separately from other factors has challenged researchers (Locke, 2011). Initial efforts to examine the influences of rankings focused on broad social consequences such as institutional, national, and transnational policy reforms, and were basically comprehensive analyses of policies or previous studies. Building on the findings from these broader societal studies, a growing body of literature has explored the impacts of rankings as perceived and understood at the institutional level by examining the experiences of stakeholders. Furthermore, the scope of research, which had been limited to several countries, has expanded to many parts of the world. This section reviews the findings of recent empirical studies on the influence of rankings at the institutional level.

Due to the scarcity of empirical studies on the influence of rankings, Ellen Hazelkorn's study on institutional leaders is considered to be the earliest extensive study of this topic (Locke, 2011). Hazelkorn (2007, 2008) conducted a survey of higher education leaders and senior managers from 41 different countries about the impacts of rankings on institutional practices. According to the survey results, more than half of the administrators thought their rankings would improve the reputation of their institutions and contribute to their student recruitment, partnerships, collaboration, program development, and staff morale (Hazelkorn, 2007). Almost two-thirds of the responding institutions used the ranking results as important criteria in their decision-making and strategic planning processes, which demonstrated how rankings had significant impacts on higher education (Hazelkorn, 2007). In her later research on the rankings' impacts, Hazelkorn (2015) explained that HEIs around the world changed their institutional practices to respond to the rankings in various aspects of education.
According to her study, these changed practices were prevalent in areas including research (incentivizing research performance, assessing research productivity), organization (incorporation, ranking task group), curriculum (cancellation/launch of programs, emulating Western models), student education (promoting international activities, recruiting international students), and marketing (expanding international partnerships, advertising in top tier journals) (Hazelkorn, 2015). Informed by these empirical studies, Locke and colleagues (2008) also conducted a survey of institutional leaders at universities in the United Kingdom on the impacts of league tables. Respondents replied that although rankings would not be the main driver of their institutional agendas, they reacted to rankings in some way, such as analyzing the results by establishing working groups, improving their reporting process of institutional statistics (Locke et al., 2008). These initial examinations of rankings’ influence on institutions are significant in 23 that they attempted to understand the ranking phenomenon from the individual institutions’ perspective and included various institutions either nationally or internationally. However, these studies used simple surveys to see the overall patterns existing in institutional reactions to rankings in broad national or international contexts. For example, the overall results were presented either by the percentages of institutions reporting their usage of rankings or the types of organizational changes influenced by the ranking results. Therefore, the findings provided limited knowledge of how different institutions in terms of their types and national contexts interpreted and reacted to rankings. Yet, this knowledge is helpful to learn more about how HEIs address the challenges in the environment and how the rankings change the extensive higher education system. Unlike the extensive studies on rankings at the national or international levels as discussed above, Espeland and Sauder (2007, 2009, 2016) provided a detailed sketch on the impacts of a specific subject ranking in a nation. They focused on how the U.S. News law school rankings influenced both institutional practices and individual decision-making. For their on- going studies on this topic, they conducted numerous in-depth interviews with law school administrators, faculty, and staff, as well as brief interviews with admissions personnel and law students (Sauder & Espeland, 2009). Their findings revealed that rankings had huge impacts on individual students’ school selection and organizational practices such as resource allocation, setting priorities, and hiring professionals. The authors explained this phenomenon stemmed from people’s tendency to change behaviors as a reaction to evaluation or observation. Grounded in this innate tendency, rankings aroused growing anxiety among administrators for falling behind and provoked an intense competition among institutions (Espeland & Sauder, 2016). With the in-depth examination on how individuals perceived and reacted to rankings, these on- 24 going studies help illuminate the internal mechanisms of rankings as well as their external outcomes. One limitation of these studies is their analyses were only on the influences of specific rankings (U.S. News rankings) on one specific type of institution (law schools) in the U.S. 
Since there is a wide variety of rankings and higher education systems, it is useful to pursue more extensive academic studies of the rankings' influences in different societal and institutional contexts. To address the scarcity of ranking research across varying social contexts, a group of scholars recently carried out a joint research project to examine the influences of university rankings on 11 different national higher education systems (Yudkevich et al., 2016). By conducting a case study on one mid-ranked university per country, the researchers in the project analyzed how the universities reacted to rankings and changed their institutional practices. The countries examined in this research included Australia, Chile, China, Germany, Malaysia, the Netherlands, Poland, Russia, Turkey, the UK, and the U.S. Although the extent to which rankings influenced institutional strategic planning differed by country, the findings suggested that rankings became common concerns for most institutions across countries and served as criteria to assess the performance of universities. Especially in Asian contexts (China and Malaysia in this project), institutions, pushed by their governments' efforts to build world-class universities (WCUs) in their nations, established strategies to enhance their rankings such as hiring more foreign faculty for international publications, restructuring disciplines, and partnering with technical enterprises for more financial resources (Azman & Kutty, 2016; Dunrong, 2016). As ranking criteria are considered quality indicators of HEIs, many changes happened in the Asian higher education environment, including pressure for research publications in high-impact journals, the proliferation of 'gaming' strategies to succeed in rankings, and the redefinition of academic work (Azman & Kutty, 2016). These studies made a great contribution to the ranking literature, since they expanded the geographical scope of the research, previously limited to some European countries or the U.S. By doing so, these scholars underscored the subtle but crucial differences in the ranking phenomenon arising from different geopolitical contexts, which had been overlooked in previous international studies on rankings. Yet one of the limitations of this collaborative research is that each study focused only on a single mid-ranked university within a nation and used empirical data from that case as evidence to illustrate the rankings' influences on the national higher education system. This limited inclusion might lead to a limited view of the whole system. Since there could be huge differences in how institutions understand and react to rankings by institutional type or tier, future studies at the national level need to embrace more types of institutions to offer a more complete view of national higher education systems.
Compared to the studies of rankings' influences on a single case university, Lo (2014) offered an extensive overview of how rankings changed Taiwanese higher education through a multiple-case study on five institutions from different tiers. From document analysis and interviews with administrators, faculty, and government officials, the researcher sought to theorize the influences of rankings observed in the Taiwanese higher education system by categorizing them into four different dimensions.
According to his classification, the ranking phenomenon can be seen through 1) responses at the policy, organizational, and individual levels, 2) individuals' degrees of acceptance, 3) the rankings' usage as governance tools, zoning technology for cultural/academic sovereignty, and agenda-setting mechanisms, and 4) their implications for global higher education.
On the first dimension, Lo (2014) illustrated that HEIs in Taiwan actively responded to global rankings by adopting their indicators as criteria to assess the performance of individual faculty and institutions. Following what rankings measured and prioritized, the number of academic articles published and the number of publications in SCI journals became dominant indicators of academic performance. On the second dimension, this study described the different degrees of accepting university rankings at the individual level as 'love' or 'hate' relationships. Although some respondents resisted the rankings for their dominating, normative power, at the same time there existed the aspiration to resemble the image of the world-famous research university imposed by rankings. The differences in how rankings were accepted were also evident between higher-tier universities aspiring to join the global knowledge network and lower-tier universities focusing on building connections to local communities. Moreover, Lo (2014) demonstrated that global rankings started to serve as a helpful tool to make Taiwanese higher education more visible and renowned in the globalized higher education world (Dimension 3). The ranking system stimulated Taiwanese higher education to reinvent and transform itself in the age of globalization with a critical subjectivity. Related to this point, Lo (2014) explained that global university rankings influenced global higher education by accelerating reforms and the restructuring of non-Western higher education systems (Dimension 4). In particular, one of the most evident influences was that the WCU paradigm constructed from global rankings dominated and changed many non-Western higher education systems (Lo, 2014).
Lo's (2014) findings provide insights into how rankings influence the higher education systems of East Asian countries and how the ranking phenomenon can be understood within a theoretical framework. By collecting data from administrators and faculty at five HEIs in Taiwan representing different tiers, as well as officials from the national government, this study presented an extensive overview of the ranking phenomenon in the national higher education system of a non-Western country. Compared to the previous studies on the rankings' influence in a national context, Lo (2014) offers opportunities to examine how different institutions and individuals interpret and react to global rankings through in-depth interview data. Despite Lo's (2014) emphasis on the theoretical explanation of the phenomenon, the findings seemed to focus more on the similarities in the way participants understood the ranking phenomenon and offered a restricted view of the influences of the rankings by giving less attention to the subtle but substantial differences among institutions arising from institutional characteristics. In this approach, individual HEIs appeared to be small, invariable components of a whole higher education system influenced by global, national, and local forces, rather than independent agencies that also exert influence throughout the system.
Moreover, as the researcher noted among the limitations of the study, the data collected were mostly from in-depth interviews with a limited number of participants (five to nine faculty and administrators) per institution. This might have been a barrier to gaining a complete sense of the ranking phenomenon at both the individual and institutional levels. Including more diverse members of HEIs, in consideration of feasibility and sources of evidence, would help enhance the understanding of the multifaceted aspects of the rankings' influences.
Summary of Literature
As reviewed throughout this section, previous research on university rankings has mainly focused on their evolutionary history, methodological concerns, and environmental backgrounds. Recent literature suggests less attention has been paid to the influences of rankings on institutions or individuals in varying higher education contexts. Although several attempts have been made to examine how rankings change institutions, many aspects of the influences at the institutional level still need to be explored. Considering the ubiquity of university rankings around the world, more extensive research on the influences of rankings in specific higher education settings needs to be accomplished by researchers from different backgrounds. This study seeks to address these gaps in the literature and examine the less-explored parts of the ranking phenomenon in a national higher education context like South Korea. Informed by the previous literature, the conceptual framework devised for this study is explained in the following sub-section.
Conceptual Framework
The overall purpose of this study is to examine the influences of university rankings on HEIs by exploring institutional responses and reactions to them in South Korea. To extensively analyze the influences on HEIs, it is helpful to develop a thorough framework capturing the dynamics of the ranking phenomenon among diverse actors of higher education, both nationally and globally, as well as the environmental factors shaping higher education contexts. At the same time, the framework needs to illuminate key areas of institutional practices to focus on in exploring each case institution. This section introduces the framework used in this study and elucidates how it is applied to the specific contexts this study explores.
Geographical Landscape of Korean Higher Education
The ranking phenomenon in a specific national context is constructed and influenced by multiple environmental factors which may vary across countries. Before developing a framework to look into the institutional territories of influence, it is important to have a conceptual map to contextualize the higher education landscape of South Korea. To draw a contextual map of the Korean landscape, this study embraces some of the concepts used in Lo (2014), which explained the ranking phenomenon in the Taiwanese higher education environment from a geographical perspective. Despite its structural complexity and ambiguity, this conceptualization is helpful for identifying the key environmental factors that might similarly influence the ranking phenomenon in East Asian contexts. Lo (2014) illustrated how to understand the ranking phenomenon in East Asian contexts from a geographical perspective.
He emphasized the salient characteristics of East Asian higher education systems by presenting a revised version of Altbach's (1987) center-periphery framework, which explained the stratification of higher education on a global scale and the hegemonic dominance of the center over the periphery. According to his revised framework, the dynamics of global higher education, intertwined between the center and periphery, are shaped by five main factors. First, modern universities in developing countries were established following the Western tradition rather than their own traditions. Second, the English language has become the dominant language in academic and other professional fields. Third, research capacity is unevenly distributed between industrialized and developing countries. Fourth, Western countries host the major channels of knowledge communication. Fifth, developing countries experience brain drain, while other countries become brain gainers. Although this study does not accept the dichotomous distinction between the center and periphery, and the Korean case does not fit neatly into this framework, the overarching concepts of the global higher education landscape, Western hegemony and English dominance, helpfully elucidate university rankings' influences, especially within Asian higher education contexts, as Lo (2014) demonstrated. The context of South Korean higher education is also under the significant influence of these dynamics of the global higher education landscape. In this sense, these concepts are adopted in the analyses of Korean HEIs' institutional practices and serve as a guide to contextualize their implications. At the same time, this geographical perspective, which centers East Asian higher education, is nuanced by centering the South Korean cases as a unique but transportable model in the course of the analysis.
A Glonacal Agency Heuristic
The key conceptual framework providing a theoretical foundation for this study, built upon the geographical landscape of Korean higher education, is the glonacal agency heuristic, an analytical approach to higher education systems encompassing the global, national, and local dimensions (Marginson & Rhoades, 2002). Marginson and Rhoades (2002) posited that the shortcomings of current higher education research stemmed from the scant attention given to the global dimension beyond the national and local dimensions. Including the global dimension is essential in studying the current higher education environment, since institutions in any part of the world are not only influenced by global forces but also exert impacts on global higher education (Marginson & Rhoades, 2002). Therefore, they emphasized the simultaneous significance of the three dimensions (global, national, and local) and devised a reconceptualized framework for higher education research. As seen in Figure 1, the glonacal heuristic can be depicted as a hexagonal figure consisting of six different influencing entities representing the global, national, and local levels (Marginson & Rhoades, 2002). The six entities are global agencies, global human agency, national agencies, national human agency, local agencies, and local human agency. These entities interact with each other and mutually shape the three levels. Marginson and Rhoades (2002) adopted the concept of reciprocity to explain the multi-directional flow of the six entities' influences and the concept of strength to refer to "the magnitude and directness of the activity and influence" (p. 292).
Also, they used the terms layers and conditions to capture the historical circumstances generating the current activity and influence. Finally, with the term spheres of agency activity, they described the "geographical and functional scope of activity and influence" (p. 293).
Figure 1. A Conceptual Map for a Glonacal Agency Heuristic (Marginson & Rhoades, 2002)
In this framework, global agencies refer to international organizations such as the United Nations, the OECD, international associations of HEIs, and, in this study, global university rankers. National agencies include national or state governments, accrediting agencies, associations of institutions, and national rankers. Local agencies include institutions, academic programs, and the faculty of institutions. At each level, there also exists human agency, that is, individual human agents who have reciprocal impacts on the different entities. While global, national, and local forces intersect, their strength of influence varies across contexts. One important feature of this model is that an institution can simultaneously be a local, national, and global agent exerting influence in the landscape of global higher education. This flexibility allows a more in-depth examination of the complex and reciprocal flow of influences between external entities and institutions. Furthermore, it allows us to closely explore individual HEIs as agencies playing significant roles in the higher education landscape both nationally and globally, to which previous studies on the ranking phenomenon have paid little attention. By adopting this framework, this study provides new insights into the influences of university rankings on HEIs, especially from the perspective of individual institutions in the globalized higher education environment. Unlike previous studies that examined the rankings' consequences in general rather than paying attention to individual institutions, this study focuses more on how local, national, and global forces influence individual HEIs differently. This analysis is accomplished through an in-depth examination of their various activities in reaction to the rankings, as well as their influence as higher education agencies in the local, national, and global dimensions.
Six Main Areas of Institutional Practices
Since the main focus of the glonacal framework is the flow of influences among agencies of a higher education system, it is helpful to further specify institutional activities at the various institutional levels. To fully capture the variety of institutional practices in response to the rankings, this study utilizes the categories of ranking-related institutional practices developed by Hazelkorn (2015). As discussed earlier, Hazelkorn (2015) presented various institutional actions in response to rankings by categorizing them into practices related to organization, curriculum, faculty, research, students, and public image/marketing. According to her study, research practices were usually initiatives to increase publications, and organization-related practices were changes in structure or operation at the institutional level. Curricular practices included structural changes to academic programs, while marketing practices were efforts to enhance reputation. Student- and faculty-related practices included recruiting high performers and providing rewards based on performance.
Although there are other important areas of institutional practice within higher education settings, such as teaching and learning, this study mainly focused on the six areas mentioned above, which are more closely related to the criteria used in university rankings. These categories and the relevant examples of institutional practices served as a guide for exploring the case institutions' practices aimed at elevating their status in university rankings. The research protocol for interviews and document analyses, which was applied to each case institution, was developed from these categories and examples. To elaborate further, the interview protocol included a series of questions reflecting the six main areas of institutional practices. For example, questions like 'How have the admissions procedures changed in response to the rankings?' or 'What kinds of international programs are emphasized to enhance the rankings?' were included to explore student-related institutional practices.
Revised Conceptual Framework
As discussed earlier, a glonacal agency heuristic is the foundation of this study for examining the influence of university rankings on institutions. Using this glonacal agency framework in the contexts of this study makes it possible to get a sense of what institutional reactions to university rankings look like within the broader landscape of global higher education. This framework is applied in conjunction with Hazelkorn's (2015) categorization of institutional practices in response to the rankings to illuminate the global, national, and local influences in those areas of practice. In mapping the global higher education landscape as it applies specifically to South Korean educational contexts, concepts in the geographical framework of Lo (2014) provide precise guidance. When the glonacal analytic heuristic is applied to the South Korean higher education system in terms of the ranking phenomenon, global agencies include transnational entities affecting the environment, such as global university rankers (strong and direct influence) and other international organizations like UNESCO (weak and indirect influence). The national government and associations of HEIs can be categorized as national agencies exerting influence on institutions. National rankers (i.e., media companies publishing university rankings) can also be considered national agencies. Local agencies in this case include institutions, programs, and the faculty/staff of an institution. Institutions make meaning of the rankings (national and global agencies' influences) and take specific actions (or no action) in response to that influence. Through these institutional reactions, they can exert influence on the global, national, and local landscape of higher education as global, national, and local agencies. These agencies at the different levels are intertwined and influence one another. The strength of influence each level of agencies exerts differs across the case institutions. For some cases, the national agencies might exert the most significant influence to promote changes in institutional practice, while other institutions might change their practice primarily to fulfill local communities' expectations for excellence. This glonacal lens is applied to the key areas of institutional practices identified by Hazelkorn (2015), including practices for students, faculty, organization, curriculum, research, and marketing.
The glonacal framework is used to analyze the global, national, and local agencies' influences on the areas of institutional practices. These categories of practices might be expanded to encompass more multifaceted aspects of institutional practices in response to the rankings. For example, the research-related practices of the case universities can be examined by analyzing which levels of agencies are more impactful on institutional practices. Some practices would have started under the significant influence of global rankers who emphasized research productivity. At the same time, these practices would have been influenced by national policies and politics to foster globally renowned research-intensive universities. Additionally, geographical considerations, such as the hegemony of Western HEIs and English as the dominant academic language, influence this examination of South Korean HEIs' institutional reactions. These concepts help to contextualize the underlying meaning of the institutional practices. The revised conceptual map for this study is presented in Figure 2.
Figure 2. A Revised Conceptual Framework for a Glonacal Agency Heuristic in South Korean Higher Education Context Focusing on Institutional Practices
Note: The framework was developed from Hazelkorn (2015), Lo (2014), and Marginson & Rhoades (2002).
Summary of the Section
This section reviewed the previous literature on university rankings and explicated the conceptual framework guiding this study. As explained in the preceding sub-sections, studies on the rankings have focused largely on their history, methodology, and social contexts. Since few studies have examined the influences of the rankings on HEIs in different national settings, this study was devised to address these gaps in the ranking literature. To analyze the ranking phenomenon within a specific national setting in the globalized higher education environment, the conceptual framework needs to capture the flow of influences among various stakeholders of higher education at the local, national, and global levels, based on an understanding of the geographical landscape of higher education. At the same time, the framework needs to serve as a guide to the various aspects of institutional practices. The framework guiding this study, therefore, consists of a glonacal agency heuristic that serves to explore the influences of the local, national, and global agencies on the multifaceted aspects of institutional practices at HEIs.
III. METHODOLOGY
This section details the methodology applied to this dissertation. It includes an explanation of the case study approach, epistemological orientation, and research procedures of this study. Thereafter, this section discusses the rationale and procedures for sampling, data collection, and data analysis.
Design of the Study
The overall design of this study followed the procedures and criteria of a case study approach. Case study is a research design suitable for studies that aim to analyze a case (event, activity, program, organization, process) extensively, with rigorous procedures, over a relatively extended period of time (Creswell & Creswell, 2017). More specifically, a case study "investigates a contemporary phenomenon (the "case") in depth and within its real-world context, especially when the boundaries between phenomenon and context may not be clearly evident" (Yin, 2018, p. 45).
Since the ranking phenomenon cannot be separated from the current higher education context in Korea (e.g., pressure for globalization, economic development, and governmental initiatives for global excellence), a case study design is appropriate for gaining an in-depth and nuanced understanding of the South Korean case. In this sense, the selection of the research design was primarily based on the characteristics of what was studied here rather than on methodological considerations (Stake, 2005).
To fully capture the influences of university rankings on South Korean higher education, this study adopted a holistic multiple-case study design analyzing a specific phenomenon across multiple individual cases (i.e., HEIs) (Yin, 2018). By including diverse cases, a multiple-case study can have a broader reach than a single-case study, which is often criticized for a lack of generalizability arising from the uniqueness of the case (Flyvbjerg, 2006; Yin, 2018). Thus, examining multiple institutions in Korea provides a more comprehensive description of the landscape of Korean higher education beyond the individual institution.
Among the several epistemological orientations a case study can hold, this study embraced a constructivist/interpretivist worldview to examine the different perspectives of participants and the resonances of the different meanings they attach to the phenomenon (Yin, 2018). This worldview holds that the everyday world is created by individual human beings and that, as a result, the goal of research is to understand the world from the different experiences of individuals by exploring their realities through thick description (Creswell & Creswell, 2017). This study, exploring individual experiences of the ranking phenomenon, depended on qualitative evidence containing the subjective meanings of university rankings that individual participants developed, gained from humanistic methods such as in-depth one-on-one interviews. The collected evidence was analyzed and interpreted in an interactive and inductive manner, reflecting the basic nature of qualitative research (Maxwell, 2013).
For the rigor of the design, this study followed the multiple-case study procedures suggested by Yin (2018), which consist of consecutive stages including 1) developing theory, 2) selecting cases and designing the data collection protocol, 3) conducting case studies and writing individual case reports, and 4) drawing cross-case conclusions and modifying theory. Although the procedures, starting from a developed theory, might seem deductive to some extent, the overall process is inductive in that the researcher revisits and modifies the theory based on the lessons learned from each case study. As explained in the literature review section, this study started by developing a framework based on pre-developed concepts explicating the ranking phenomenon, including the geographical perspective of Lo (2014), a glonacal analytic heuristic (Marginson & Rhoades, 2002), and six areas of institutional practices (Hazelkorn, 2015). This combined framework was applied to the research cases deductively and revisited after completing the multiple case studies in an inductive manner to develop a more refined framework applicable to the higher education environment in Korea.
Sampling
Sampling, together with data collection and analysis, is an important part of the practice of qualitative research methods (Robinson, 2014).
For a multiple-case study in pursuit of analytic generalizations from the examined cases, selecting appropriate cases that address the research questions is a fundamental task (Yin, 2018). This section offers a detailed description of how the researcher identified the cases and selected interview participants for this study.
Case Selection
In order to identify and select the cases analyzed in case studies, researchers should draw applicable selection criteria from their research questions and propositions (Yin, 2018). One of the primary research questions guiding this study was how HEIs reacted to university rankings; therefore, the cases this study focused on should be HEIs (in Korea), not individual stakeholders of higher education. Among the hundreds of HEIs in Korea, this study selected three cases, informed by the identification strategies used in previous case studies on the ranking phenomenon and Yin's (2018) selection criteria for a multiple-case study. At the same time, the feasibility of the research was also an important factor for the successful inclusion of appropriate cases.
The selection of cases for this study was accomplished by following three practical steps: specifying the most applicable institutional type, determining criteria to explore multiple institutions within the specified category, and compiling a list of potential cases in consideration of research feasibility. First, the case institutions in this study were limited to four-year institutions offering a variety of academic programs, rather than two-year institutions in Korea. This boundary came from the assumption that university rankings were likely to be more consequential for four-year institutions than for other HEIs, such as vocationally focused two-year institutions or four-year institutions offering a limited number of specialty programs, like universities for teacher education. This decision is further reinforced by the fact that most of the major global rankings, such as THE and QS, and the national rankings in Korea primarily analyze and assess the institutional data of four-year (usually research-oriented) universities. Since these rankings had gained public attention, this type of institution had been under the more significant influence of the rankings. Second, informed by the previous studies, the positions of universities in the university rankings and their institutional characteristics served as criteria to identify institutions to study in this dissertation. Researchers of the ranking phenomenon have selected case institutions based on the institutional reputation presented in global university rankings (Azman & Kutty, 2016; Dunrong, 2016). To include five different institutions for a multiple-case study, Lo (2014) chose institutions representing the tiers of universities in the Taiwanese higher education system. By doing so, he could include universities with different institutional characteristics, such as operational control (public/private), size (large/mid-sized), and specialized area, which were reflected in the various tiers. Since the Korean higher education system is not as distinctively tiered as in other national contexts, the major global rankings of institutions were used to identify the potential case institutions for this study and group them according to their ranking status.
This study narrowed down the pool of potential cases, all four-year institutions with various academic divisions in Korea, by utilizing global rankings. In classifying the ranking levels, it was essential to trace the institutions' global ranking positions over the past decade across different ranking instruments, given the considerable annual ups and downs of institutional ranking results. After grouping the institutions into three categories (e.g., top, middle, and lower level groups), their operational control and location were also identified. These elements were important for including various types of institutions, given that government regulation and funding had been applied differently to HEIs based on their private/public status and location (in the Seoul metropolitan area or other regions) in the Korean context (Chae & Hong, 2009; Yeom, 2018), which made significant differences in institutional strategic planning to respond to the rankings. Finally, the researcher selected three cases from the list of potential case institutions, considering the feasibility of gaining access to the sites. Gaining access to sites was one of the most important tasks, especially for case studies examining larger organizations like universities. Therefore, the researcher re-examined the groups of institutions by rankings and identified the most accessible cases from each group. The researcher's prior connections and networking with staff/faculty members, serving as gatekeepers at the HEIs, played an essential role in this process. In finalizing the cases, the researcher made efforts to encompass institutions with different types of operational control and location.
The inclusion of case institutions with different ranking status aligns well with the replication logic Yin (2018) suggested employing in a multiple-case study rather than general sampling logic. Similar to replications in multiple experiments, the findings of a multiple-case study can be substantiated through the inclusion of both cases showing similar results (literal replications) and cases showing contrasting results (theoretical replications) (Yin, 2018). The case institutions selected for this study were all recognized as among the world's best institutions by the global ranking agencies, since fewer than 5 percent of HEIs around the world could be listed in the best global university rankings. However, huge differences existed in their positions within the list of institutions presented in the global rankings. For example, some universities have been ranked in the world's top 100 for many years, while others first appeared in the top 1000 rankings within the last year. Highly ranked universities might have developed systematic strategies to enhance their global standing, while lower ranked universities might have paid less attention to global rankings and prioritized other institutional goals. Considering these commonalities and differences, institutional reactions to rankings might be similar for some cases and contrasting for others. Through these literal and theoretical replications, the different patterns of the ranking phenomenon in the Korean higher education system were explored more extensively. The three case institutions selected through these procedures were all four-year, comprehensive universities offering graduate programs in various academic disciplines. In this study, the three cases are labeled University A, B, and C instead of their actual names.
University A, B, and C were all ranked in the global top 1000 universities by the major global ranking agencies, while University A and C were ranked higher than University B. In terms of location, University A and C were in the capital area, while University B was in one of the southern provinces. In terms of operational control, the selected cases included two public institutions (University A and B) and one private institution (University C).
Participant Selection
Because this study used interview data as one of the primary sources of evidence, selecting interview participants was a fundamental task to be planned and implemented with rigor. In selecting the participants, the first step was to determine a sample universe, a set of units from which a sample is drawn, based on a series of inclusion criteria (Robinson, 2014). In this multiple-case study selecting specific HEIs as the cases to examine, the sample boundaries were relatively clear, since the cases were concretely bounded organizations (Yin, 2018). Therefore, the sample universe of this study included the various administrators, staff, and faculty within the case institutions, not external stakeholders beyond the organizational boundaries. Accordingly, the selected participants of this study were administrators, staff, and faculty working at the case institutions. Among these members of the universities, this study focused especially on including the voices of staff, who play an important role in institutional practices but have often been neglected in the higher education literature.
At the planning stage, six to eight one-on-one interviews with administrators, staff, and faculty were expected to be conducted at each case institution, considering the practical concerns of conducting in-person interviews and the examples of previous studies. However, as all in-person research activities were suspended in March 2020 to prevent the spread of COVID-19, the entire recruitment and interview plan was adapted accordingly. Staff and faculty members expressed the difficulty of getting involved in research while their institutions were struggling to respond to the immediate challenges of education in a pandemic era. Furthermore, it became more logistically difficult for the researcher to build close relationships with informants without any in-person interaction. To overcome these difficulties, the researcher made special efforts to reach out to the participants using various networks. The in-person interviews were replaced with interviews conducted over secure video teleconferences ("Zoom"), and e-mail interviews were also conducted for participants unable to join video interviews. The number of participants who communicated with the researcher either by video interview or by email from each institution was as follows: 10 in total for the first case (four by video and six by email), nine in total for the second case (six by video and three by email), and four in total for the last case (all by video). Although the number of participants varied, the sample size was feasible and appropriate for a multiple-case study on universities, as seen from the examples of Lo (2014) and Wayt (2015). Reflecting the six areas of institutional practices used in the conceptual framework, the researcher identified applicable participants who could provide relevant information directly linked to each area, based on her professional experiences at Korean HEIs.
For example, to get information on faculty-related practices, interviewees included faculty members who had experienced changes in the reward system and administrators (vice-presidents or directors) or staff members who worked in research, academic affairs, or cooperation offices. The list of participants, including their positions and affiliated departments, is presented in Table 2. For confidentiality purposes, the participants were identified by an ID code containing information on their institution (A, B, C) and interview method (0 in the tens digit for video interviews, 7 for email interviews). As seen in Table 2, coordinators in different units at Case A and B were staff members carrying out the various administrative tasks expected of each unit. For example, B06 in the planning and coordination office performed tasks including analyzing ranking results, reporting institutional data to ranking agencies, and participating in international conferences on rankings. Senior coordinators supported these tasks, and managers supervised coordinators at the team or sub-team level. While a director at Case C performed a role similar to these managers, the position of director at Case A and B had authority over an entire office. An academic assistant in a department office was a staff member supporting faculty and students within the department. Deans were faculty members appointed by the presidents as administrators for a given term (two years) at Case A and B.
The sampling strategy used in this study was purposeful sampling, which identifies and selects participants who can best help the researcher understand the research phenomenon (Creswell & Creswell, 2017). For this study, one or two key informants at each case institution were identified and contacted first based on the researcher's professional networks. The recruitment of participants then largely depended on the interpersonal networks of the key informants. After recruiting the initial participants, the researcher used snowball sampling to include more participants. To avoid potential biases, participants included administrators and staff from different positions and departments, and faculty from different disciplines. For a meaningful comparison across cases, the researcher made efforts to ensure that the composition of participants in terms of their positions and academic/administrative departments was as comparable as possible across cases. Despite these efforts, due to each institution's distinctive culture, it was impossible to recruit a comparable set of participants for the third case. To address this limitation, the researcher instead invited faculty members in several fields who had experience in various administrative positions and expertise on university rankings. Participation was encouraged through incentives: participants who completed interviews and agreed to receive an incentive were given a gift card usable at local stores.
Table 2. List of Interview Participants
(Columns: Interviewee ID, Interview Method, Position, Office/College/Field)
Case A
A01  Video  Coordinator  Admissions
A02  Video  Coordinator  International Affairs
A03  Video  Senior Coordinator  Planning and Coordination
A04  Video  Associate Dean  Planning and Coordination
A71  Email  Manager  Public Relations
A72  Email  Manager  Communication
A73  Email  Director  Residence Halls
A74  Email  Coordinator  Academic Affairs
A75  Email  Assistant  College of Humanities
A76  Email  Coordinator  College of Natural Sciences
Case B
B01  Video  Manager  Performance Management Center
B02  Video  Manager  Planning and Coordination
B03  Video  Senior Coordinator  Graduate School Innovation Center
B05  Video  Manager  International Affairs
B06  Video  Coordinator  Planning and Coordination
B07  Video  Dean  International Affairs
B71  Email  Senior Coordinator  Public Relations
B72  Email  Manager  College of Business Administration
B73  Email  Manager  International Affairs
Case C
C01  Video  Professor/Chair  Engineering
C02  Video  Director  Strategic Planning/Promotion Team
C03  Video  Professor  Social Sciences
C04  Video  Professor  Humanities
Data Collection
One of the most important features of a case study is the use of various applicable data from multiple sources, which enables an in-depth and contextual analysis of a phenomenon (Baxter & Jack, 2008; Yin, 2018). Researchers conducting case studies, therefore, are expected to gather extensive information through various data collection techniques (Creswell & Creswell, 2017). Yin (2018) illustrated six possible sources of evidence that can be used complementarily in a case study to enhance the accuracy and convincingness of the data: documentation, archival records, interviews, direct observation, participant observation, and physical artifacts. For a thorough examination of the cases, this study attempted to include these multiple sources of evidence except observation and physical artifacts. Two practical concerns were considered for this decision. First, the ranking phenomenon seemed unlikely to be manifested in physical artifacts or physical spaces within the cases. Second, it is difficult to observe participants' behaviors related to university rankings (e.g., teaching, learning, and doing administrative tasks) in the educational settings of HEIs. Considering these data sources, this study relied on the evidence gained from documentation, archival records, and interviews.
To elaborate further on the types of sources, the documentation explored included each institution's mission, vision, and value statements; textual and visual contents of institutional websites; brochures; and public reports on outcomes and facts. Furthermore, to better contextualize the setting of this study, Korean national policy documents related to university rankings were also included. Archival records encompassed news articles on the institutions' outcomes, policies, and key events, published by both internal and external news media within the past five years. The records were accessed via the internet through keyword searches such as 'university ranking' or 'evaluation' on news portals and institutional websites. The researcher stored the records in electronic formats for analysis. The documents and records were selected based on their relevance to university rankings.
Considering the many ways university rankings can influence policy and institutional-level decision making, the documents collected also included any type of text or image related to the evaluation, assessment, and performance of institutions.
The other source of evidence used in this study was interviews with the selected participants. The researcher conducted one-on-one, in-depth virtual interviews, lasting approximately 50 to 70 minutes each, with administrators, staff, and faculty of each institution. In the case of email interviews, the researcher sent each participant an email including five to seven questions related to the participant's role and institutional practices and received a reply containing the participant's answers. For all the interviews conducted for this study, the interview guide approach was adopted to draw out participants' worldviews (Patton, 2015). The researcher provided a few pre-developed, structured questions related to the research topics and opened opportunities for participants to raise their own topics and responses (Rossman & Rallis, 2016). The questions used in the interviews are presented in Appendix A. As the researcher gained more knowledge of the institutional practices of each case during the research process, the interview questions were updated and refined over time to engage the participants more precisely. Interviews were conducted in Korean and electronically recorded and stored after obtaining informed consent from the participants. The recorded data were transcribed into text and analyzed first to preserve the original meaning before being translated into English by the researcher. The researcher, fluent in Korean and English, paid special attention to subtle differences in meaning when transcribing and translating what the participants articulated. The entire process of transcription and translation was intentionally conducted by the researcher, who had years of professional experience in Korean higher education and Korean-English translation. This process also made the researcher more familiar with the data for a deeper-level analysis. The translated texts were reviewed by multiple colleagues to ensure the precision of the transcripts.
Data Analysis
The data in this study, collected through qualitative methods, were analyzed following the analysis strategies of qualitative research. The analysis started with reading the collected data, such as interview transcripts and documents, and writing memos on what was seen (Maxwell, 2013). This initial overview allowed the researcher to become familiar with the data and build exploratory links connecting the themes emerging in the data. After this initial exploration, the data were examined in depth through categorizing analysis (Maxwell, 2013). The researcher identified "units or segments of data that seem important or meaningful in some way" (Maxwell, 2013, p. 107) and developed coding categories labeling the identified segments. The coding categories were originally developed from the theoretical propositions used in this study (Yin, 2018). Since this study adopted a glonacal analytic heuristic and six main areas of institutional practices to explain institutional efforts to be ranked higher, institutional reactions to rankings and external influences were categorized into global-, national-, and local-level activities, and into organization, curriculum, faculty, research, student, and marketing initiatives.
The categories were elaborated and modified to fully describe the participants' meanings and interpretations. The researcher examined the data multiple times based on the developed and evolving coding categories and organized them to identify patterns in the participants' actions and meanings (Lune & Berg, 2016). For a more systematic categorization and exploration of the data, computer-assisted qualitative data analysis software, NVivo (ver. 11), was used.
Throughout the categorizing and organizing processes, specific analytic strategies helpful for achieving the purposes of case studies were pursued. The main purpose of case studies is to provide thick descriptions of the case(s), through which readers interpret the cases and apply their lessons to other contexts (Rossman & Rallis, 2016). To develop thick descriptions of the cases, contextual information needs to be incorporated into the categorized themes (Creswell & Poth, 2017). In this study, the researcher first explored the contextual information of the case institutions from various sources, including their institutional websites, brochures, government public records, and newspapers, and then employed that information to elaborate the developed themes. Another analytic strategy for a multiple-case study is identifying common themes applicable across cases after the individual case analyses (Creswell & Poth, 2017). Following this analytic format, this study provides within-case analyses of the three cases and a cross-case analysis among them.
Validity and Reliability
A quality research design should convince its readers of the accuracy and credibility of the research findings (Creswell & Creswell, 2017). For a trustworthy study, the researcher employed a series of procedures to present valid and reliable findings to readers.
Validity
In qualitative research, validity means the accuracy of the findings as examined by the researcher through specific procedures (Gibbs, 2007). Adopting this procedural approach to validity, Creswell and Creswell (2017) suggested several validity strategies to assure the accuracy of qualitative research findings, including triangulating sources, using member checking, providing thick description, clarifying bias, presenting counter information, spending prolonged time, and using peer debriefing or external auditors. This study employed some of these strategies, considering the characteristics of a multiple-case study and practical concerns.
The primary validity strategies used in this study were triangulation, thick description, and prolonged time, which align well with the essential attributes of a case study design. First, the researcher triangulated the data sources by analyzing multiple sources of evidence. In this study, the themes were identified from the interviews with participants, documentation, and archival records. Second, the researcher provided a thick description of the findings for readers. The findings of this study contain detailed contextual information about the cases as well as rich descriptions of various higher education issues and participants' perspectives emerging from the data, which allows readers to obtain a more extensive view of each case. Third, the researcher spent prolonged time (more than 10 months) in the research field to understand the phenomenon in depth.
To address the practical difficulty of analyzing multiple sites within a limited time, the researcher employed alternative techniques (e.g., monitoring websites periodically and conducting email/virtual interviews) for the continuing exploration of the cases. In addition to these basic procedures to assure the validity of the findings in general, case study researchers should seek to enhance external validity, that is, the generalizability of the case study findings to other settings (Yin, 2018). However, considering the qualitative nature of this analysis and the distinctiveness of the cases and contexts, it would be almost impossible to fully generalize this study's findings to other institutions or national settings. Rather than directly addressing this issue, this study placed more emphasis on providing detailed descriptions of the ranking phenomenon at multiple institutions. For this purpose, replication logic was used to select diverse case institutions. As discussed earlier, by including cases showing similar results as well as cases showing contrasting results, this study illustrates various aspects of the ranking phenomenon across several institutions. Therefore, some parts of the findings can offer insights into the ranking phenomenon in national and institutional contexts different from those of this study.
Reliability
To ensure research quality, researchers should endeavor to make their approaches to the study reliable (or consistent) across different research efforts and applicable for different researchers (Creswell & Creswell, 2017). Yin (2018) explained that the goal of reliability in a case study is to ensure that other researchers, following the procedures of the study and analyzing the same case, would arrive at the same findings. To address this issue, the researcher was first informed by case study protocols developed in previous studies to guide the development of the research procedures. Among the various approaches to case studies, this study primarily followed the multiple-case study procedures proposed by Yin (2018). Based on these pre-developed protocols, the researcher established the criteria used for decisions about the procedures of this study, such as case selection, gaining access to sites, recruitment of participants, data collection, and analysis. The procedures of this study, and the concerns arising from them during implementation, were documented from the development of the plans through the analysis of the data. These processes aimed to create a new research protocol and case study database applicable to this multiple-case study on HEIs. The case study protocol developed from previous case study examples and the case study database containing detailed, in-depth information on the research processes enhance the reliability of the study method, since they offer clear guidelines for case study researchers to follow (Yin, 2018). In particular, because this study was conducted at multiple research sites and during a global pandemic when research conditions were changing rapidly, the new protocol developed from this study can serve as a practical guideline for future researchers who face growing uncertainty in the higher education environment.
Researcher's Positionality
I, the researcher of this study, have experienced Korean higher education from several perspectives throughout my life. My higher education experiences shape my understanding of Korean HEIs as well as my approach to examining the cases in this study.
Since the interpretation of data in qualitative research depends on the researcher's values and personal background (Creswell & Creswell, 2017), clearly stating my own positionality as a researcher analyzing the ranking phenomenon in South Korean higher education contributes to the trustworthiness of this study. First of all, my experience working as a staff member at Korean HEIs has greatly shaped my perspective on how institutions accomplish their goals and how their members take part in institutional efforts. From my experience, despite the strategic plans devised by higher-level administrators, mid-level administrators and staff members still play important roles in implementing those plans and policies. Therefore, I particularly value staff members' understanding of institutional practices more than that of any other members of a given institution and listen to their voices, which have often been neglected in the higher education literature.
Moreover, my prior educational and professional experiences in Korean higher education came mainly from my interactions with public institutions. From my experience, different operational control (public/private) has a great influence on the operation of HEIs in South Korea. This difference sometimes requires different analytic approaches appropriate to each case. I am quite used to the standards and values of public institutions, not private institutions. Therefore, my prior knowledge of public HEIs could create biases that might lead me to judge the practices I observe at the case institutions against public institutions' standards. The depth of the analyses of the cases is also shaped by my level of understanding of different institutional types. I made additional efforts to minimize these biases and remain open-minded in exploring a multiplicity of institutions and institutional practices. One of these strategies was conducting informal interviews with staff working at private institutions before starting the initial data collection. From these interviews, I became more familiar with the differences among institutions in terms of their organization and culture.
Finally, my understanding of the ranking phenomenon might also influence my interpretation of the institutional practices. As a student and a staff member at HEIs in Korea and the U.S., I have observed the ranking phenomenon from the perspective of the mid-ranked universities to which I have belonged. For the members of mid-ranked universities striving to enhance their status, attitudes towards university rankings might take the form of a 'love-hate' relationship. Even if these institutions endeavor to be more like highly ranked universities, their efforts are usually challenged because of the substantial pre-existing disparities among HEIs in terms of resources. Observing these challenges of the mid-ranked universities, I sometimes questioned whether homogeneous institutional efforts to enhance rankings would be beneficial to the quality of education different institutions provide. By examining how HEIs make efforts to enhance their rankings, I would like to look for a clear answer to this question. I believe finding answers to this question is a fundamental task to be completed before transforming HEIs into what university rankings measure and emphasize. These ideas I have developed unavoidably affect how I see the ranking phenomenon at Korean HEIs.
Limitations

Although this study was designed to provide valid and reliable descriptions of the cases, it has potential limitations as an extensive multiple-case study. In this section, methodological concerns in conducting this study, such as biased case selection and limited access to data, as well as the researcher's biases, are explained. First, the selection of cases (HEIs) for this study may be biased and bounded due to limited accessibility to the research sites. Among 191 four-year HEIs in South Korea, about 30 institutions were recognized in one of the widely shared global university rankings (i.e., QS) as of 2020 (QS, n.d.). However, the number of case institutions explored in this study was only three. Considering the significant differences in institutional characteristics such as location (capital area or other provinces), operational control (public or private), and specialized areas (science/engineering focused or arts/humanities focused) of HEIs, the cases selected for this study based on the global rankings might not fully represent the different institutions in South Korea.

Second, the participant selection and data collection might have potential limitations due to the restricted access to informants as a result of the pandemic. Although the case institutions consist of hundreds of staff and thousands of faculty, the researcher interviewed fewer than 10 participants for each case because of practical restrictions. While the researcher made efforts to encompass a breadth of an institution's staff members when possible, the selected participants could not include faculty and students from specific disciplines, or staff from specific administrative units, because of the restricted access. Furthermore, as the pandemic presented considerable challenges both in institutional practices and research activities, conducting multiple interviews with staff and faculty members struggling with these challenges was almost impossible. Given that in-person interactions with participants were restricted, the researcher needed to conduct either one-time video interviews or email interviews. The amount of information gained from a single email interview was relatively limited compared to a face-to-face interview. This exclusion might lead to biased views and understanding of the institutional practices.

Finally, the researcher's own biases, values, and personal background might have affected the interpretations of what was noticed during the study (Creswell & Creswell, 2017). The researcher in this study tended to understand and interpret institutional practices based on her previous educational and professional experiences at South Korean universities. To help readers understand the relationship between the researcher and the study, the researcher explicitly stated her past experiences with the research sites and research problem (Creswell & Creswell, 2017).

IV. FINDINGS

This chapter aims to illustrate the major findings of the case analyses on the selected institutions, following the research procedures explained in the previous sections. The researcher completed individual case studies on the three selected case institutions, focusing on how they reacted to university rankings in various institutional practice areas. Also, cross-case analyses were conducted to describe the landscape of higher education and the interactions of global, national, and local agencies influencing these Korean institutions.
This chapter begins with a description of the landscape of Korean higher education the participants experienced, followed by the three case analyses, and ends with an explanation of the interactions among glonacal agencies.

Landscape of Korean Higher Education

For a more nuanced understanding of the university rankings' influence on the case institutions, it is essential to examine the higher education environment in which Korean HEIs are embedded. Primarily based on what was observed and articulated by the study participants, this section provides a brief but informative sketch of the higher education contexts in the present era. The main themes described in this section are how Korean HEIs responded to the globalized higher education environment and their unique challenges.

Higher Education on a Global Scale

Globalized higher education. The increased global interconnection was one of the most remarkable features characterizing the higher education environment surrounding Korean HEIs. This globalized environment, in which various stakeholders of higher education around the world are interconnected and interdependent, has influenced some fundamental aspects of Korean HEIs' institutional practices. These influences were salient in institutional goal-setting, student and faculty recruitment, and inter-institutional collaboration. For example, the three case institutions in this dissertation all presented their visions to exert a global impact through education and research, together with their national agendas. They made efforts to invite international students and faculty from around the world and increased the presence of people from a variety of countries on campus. Korean institutions were also actively exploring international partners (e.g., other HEIs, research institutes, and individual researchers) around the world and working with them on various collaboration projects such as student exchange, co-curricular, and research partnership programs.

South Korean HEIs, collectively and individually, experience intense pressure to compete and survive in an increasingly globalized higher education environment. The environment was often described by the participants as "the global higher education market" (B07; C02), where HEIs compete internationally as well as domestically for human and financial resources. Korean HEIs that had predominantly concentrated on the education of domestic students are now expected to work toward global recognition through faculty research and global networks to expand their global market share of students and researchers. As a staff member explained, internationalization of education became an important agenda of Korean HEIs, essential for institutional development and success: "For the improvement of the organization, it is necessary to keep on inviting new students and excellent faculty. To do so, kind of diversity, it is important to invite more diverse, more globally excellent people to campus" (C02). This also means that in this globalized higher education environment the success of HEIs depends on whether they can be globally recognized and qualified enough to participate in and win the international competition for resources.

Role of university rankings. From the conversations with the participants, I found that global university rankings played a central role in stimulating the international competition between HEIs in the globalized higher education environment.
As HEIs seek to build up their international reputation and demonstrate their competence in order to compete in the global higher education market, the rankings are widely employed as an international indicator of an HEI's qualifications. Many participants explained that global university rankings guided their institutions' process of building international partnerships. A staff member stated, "When exploring new partners, the first item we look up would be the (global) rankings. . . When introducing our institution, the first item we present would be the institution's rankings" (A02). In a similar way, the global rankings also served as guidance in college admissions for prospective international students of the three case institutions. Participants pointed out that the global rankings were the primary information international students refer to when they decide on their study-abroad institutions (A01; B05; C01). Thus, the global rankings seem to open up and stimulate the international competition between HEIs by providing widely shared certifications and information about HEIs.

In this circumstance, the three case institutions of this study were in a position of inevitably responding to the global rankings, but to varying degrees. The participants working with international partners all explained that the rankings were quite widely accepted and pursued, especially in Asian countries (A02; B05). In some Asian countries, such as China and Uzbekistan, the global rankings were publicly used as criteria for job or residency application screening processes; if applicants graduated from universities ranked in the global top 1,000, they would occupy more favorable positions in the screening (B07). Illustrating the growing, unavoidable influence of the rankings on higher education, the participants working with various international partners agreed that most HEIs could not overlook the rankings nowadays (A02; B05). An administrator in the international office emphasized that a broad consensus on the importance and benefits of the rankings had emerged: "If a university improves its rankings, in fact that means the entire university is getting better. Now few universities neglect or ignore rankings. Rather, people are envious of the enhancement in the rankings" (B07). This participant explained that the institutions are asked to try to improve their university rankings, since the rankings are widely understood in South Korea to be an indicator of a quality university.

Growing recognition of Korean HEIs. In an increasingly globalized higher education environment, where university rankings have significant influence, Korean HEIs have succeeded in obtaining international recognition over the past few decades. Most participants across the case institutions described a substantial increase in the number of international students on campus (A01; B07; C01). A staff member managing international exchange programs explained, "A decade ago, Korean students were eager to study abroad as exchange students. Now it was reversed. More foreign students would like to visit our campus as exchange students" (A02), suggesting an increase in student mobility. When asked about the underlying reasons for this increase, several participants agreed that the growing recognition of South Korea, bolstered by the popularity of its pop culture, ushered in an increase in interest in Korea as a site of short- or long-term study abroad (A01; A02; B07; B73; C04).
A participant with years of experience in international admissions recalled that the increase should be understood as a product of an improvement in South Korea's national reputation, stating that "seven, eight years ago, the prestige of Korean education was not that great in South or East Asia. As the national prestige enhanced, the Korean education market got more attractive" (B07). As South Korea's national recognition improved, so, too, did the global university rankings of South Korean HEIs, thereby seemingly attracting more prospective students. A faculty member in the engineering field at University C emphasized that the enhanced global rankings led to an increase in the quantity and quality of international graduate applications in his department (C01). However, this seemed salient only in one of the case institutions, University C, which successfully improved its global ranking positions. These examples illustrated the growing recognition of Korean higher education, primarily facilitated by the enhanced national recognition, and the growing recognition of an individual institution influenced by its improved global rankings.

Challenges of Korean HEIs

Pursuing global leaders. Although Korean HEIs seem to adapt to the ethos of an increasingly globalized higher education system, several challenges persist. Participants indicated that these challenges seemed to arise primarily from the geo-political dynamics shaping the Korean higher education system. The essence of these challenges lies in the fact that Korean HEIs are bound to pursue the dominant higher education model, which holds power over the global higher education system, rather than pave their own way. The participants pointed out the vulnerable position of Korean HEIs in the globalized higher education environment, where HEIs in developed or English-speaking countries retained dominance in rankings and academic legitimacy. Some participants illustrated the gap between their own institutions and globally renowned universities in Western countries, which were often described as "unmatched" or "incomparable leaders" (A02; B05) in the global higher education system. These global leaders, which attract global talent and resources, have become an ideal type of university that Korean HEIs seek to pursue and emulate. However, only a few Asian HEIs, such as universities in Singapore and Hong Kong, have achieved international prominence, in part owing to their English-speaking, internationalized settings, and Korean HEIs started to pay more attention to these 'rising stars' as model universities to follow (A02; A03). Korean HEIs, long thought to be incapable of gaining international reputation, have made tremendous efforts to overcome their limitations. Their efforts were centered on the Englishization of education and research, offering more English-medium courses and promoting research publications in international journals to invite more international students and enhance global recognition (Cho & Palmer, 2013).

Domestic challenges. The case institutions were also required to meet domestic challenges arising from social issues in Korea. One of the most salient challenges the participants reported was the intense domestic competition for financial support from the Korean government. The government's competitive funding system supports only a limited number of HEIs that demonstrate their excellence through evaluation (Han et al., 2018).
To secure this funding, HEIs needed to meticulously prepare for the college evaluations and grant projects proposed by the government. For example, the Korean Ministry of Education planned to provide financial support of 803.5 billion won (approximately $714 million at an exchange rate of about 1,125 won per U.S. dollar) to selected universities to encourage innovation and only 108 billion won (approximately $96 million) for regional innovation projects in 2020 (Ministry of Education, 2019). A faculty member who had worked as a consultant explained that some HEIs purchased expensive consulting services to develop more competitive grant proposals for success in the domestic funding competition (C03). The governmental restriction on tuition rates lasting for the past 10 years and the financial difficulties arising from tuition freezes prompted Korean HEIs to take part in the national competition for funding (B02).

Additionally, Korean HEIs encountered a series of problems resulting from an aging Korean population coupled with a dramatic population decline in Korea. The government's strictly controlled undergraduate admission quota sought to cope with the decline, and a few HEIs, failing to meet the education standards proposed by the government, were forced to reduce their student numbers or permanently close to accommodate decreased demand (C04). Although this was not directly applicable to the case institutions, they also experienced difficulties recruiting graduate students to conduct research (B03; C02). These domestic and regional challenges, coupled with the globalization of HEI operations, have radically shifted how Korean HEIs operate.

Within-Case Analyses

In this challenging higher education environment of Korea, where HEIs cannot avoid taking part in simultaneous international and national competition for limited resources, university rankings have increasingly guided institutional practices at this study's case institutions. This section examines the three case institutions' responses to the rankings across different areas of institutional initiatives, including organization, curriculum, faculty, research, students, and marketing. As the previous section provided an overview of the environment of Korean HEIs witnessed from the three cases, the current section aims to guide readers to look into the individual case institutions and explore their interconnections with various higher education agencies.

Case of University A

Basic description. University A (UA, hereafter) is one of the most selective HEIs, located in the capital area of South Korea. Since its foundation in the 1940s after Korea's independence, UA has aimed to educate Korea's intellectual elites to lead the society and served as a national university representing the country for many decades. In the 2000s, it established a long-term plan to enhance the quality of education and become a world-class, research-intensive university (UA, n.d.-a). As a four-year, comprehensive, doctoral degree-granting institution, it offered academic programs both at the undergraduate and graduate levels across various disciplines for more than 16,000 undergraduate students and 10,000 graduate students in 2019 (UA, n.d.-b). Being recognized as a top-tier, national institution, UA has a more generous budget than other public institutions, from both government support and private donations.
Its budget for the fiscal year 2019 was more than $700 million (excluding donations), and more than half of the revenue came from the government (Ministry of Education, 2020). Driven by government funding, educational expenditure per student exceeded $40,000 in 2018 for this institution, one of the highest amounts among South Korean HEIs (Ministry of Education, 2020). As a top-tier institution, UA saw almost 100% of newly admitted students eventually enroll (Ministry of Education, 2020). In terms of university rankings, UA has been highly ranked nationally and has held upper ranks in both the Asian and global university rankings presented by QS and THE for the past decade (UA, n.d.-c). Although it is considered to be one of the top HEIs nationally, its global rankings have shown no significant change over the past decade (UA, n.d.-c).

Influences of rankings on institutional practices. Having been recognized as a top-tier institution in the nation since its establishment in the 1940s, the case of UA showed limited evidence of how the university rankings influenced institutional practices compared to the other cases. One administrator illustrated, "UA, which only submitted its data to the ranking agencies and publicized the results perfunctorily, has never been in a strategic position in the ranking competition" (A04). UA, occupying a higher position in the prestige hierarchy of Korean universities for a long time, according to this participant, seemed to have little urgency to respond to external pressure to enhance its reputation in national, or even in global, rankings. From the documents and interview data, it was possible to understand how and why the institution took action, or sometimes took no action, on the criteria and results of the global rankings in changing its institutional practices.

Organization. In terms of institutional initiatives for organizational operations, UA seemed not to react as actively to the university rankings as the other case institutions. Although it presented its global university rankings on its website and set its vision to foster world-leading academic programs (UA, n.d.-d), my initial impression from the interviews was that rankings had a limited influence on participants' decision making about institutional planning and goal setting because of a lack of consensus on the rankings' importance. An administrator in the central administration who managed the planning office explained that UA had not been under considerable pressure regarding rankings because of its historically established reputation in the nation (A04). Without pressing and urgent needs to improve its status, rankings seemed not to be considered influential by some members of UA, especially those who were not in the central administrative units. One staff member in the College of Natural Sciences mentioned that "rankings had no impact on the individual college's operation and policies" from what he had experienced, and that "administrators showed indifference or even negative opinions" about the rankings and ranking companies (A76). Another staff member supporting undergraduate admissions procedures attributed this indifference partly to UA's public status, saying, "for UA is not an institution pursuing profit, unlike other private universities, very strategic, it appears not to cling to rankings" (A01). For these participants, the influence of the rankings perceived in their working environments seemed quite limited.
However, after interviewing more participants whose tasks were closely linked to university rankings at UA, it seemed obvious that UA also had made efforts to examine and improve its global rankings at the organizational level. These efforts had been intensified whenever a newly appointed central administration (especially the university president) pointed out the importance of rankings over the past decades (A03; A73). However, these efforts were often discontinued after a new president was appointed every four years. One staff member who had years of experience in several offices at UA explained, "From my opinion, UA's efforts to enhance the ranking positions have depended on the appointed presidents' intentions. When the president wants to improve the rankings, the administration all follows" (A73). He added that UA, which had endeavored to be competitive in the rankings a decade ago, seemed to de-prioritize ranking competitiveness more recently. This showed that UA's institutional initiatives to improve the global rankings through organizational reforms had existed for a long time but were not carried out continuously.

From the interviews with the staff member and administrator currently working in the administration, it was possible to understand how UA's organizational initiatives for the rankings currently continued. In the case of UA, the primary administrative unit leading the ranking-related organizational initiatives was the Office of Planning and Coordination, similar to the other cases. The office has a staff member in charge of reporting and analyzing university rankings as well as other institutional evaluations, who conducted various relevant tasks, including collection and submission of institutional raw data to the ranking agencies, reporting annual results to administrators, and communicating with the agencies at international conferences (A04). Together with these regular tasks, the office established special task force teams consisting of staff from various administrative units to discuss how to enhance the global rankings and the challenges in improving the ranking indicators (A01; A02). A few years ago, the office also conducted a simulation research project to assess UA's ranking indicators compared to top-ranked Asian institutions and decided to focus more on research indicators instead of internationalization indicators (A03). These examples showed how the global rankings gradually affected the planning of UA as well as the self-evaluation of its education and research. Yet, the participants seemed to agree that the global rankings initiated discussion among the members but seldom led to substantive organizational transformation at UA (A01; A02; A03).

One noticeable organizational change related to rankings at UA was its recent establishment of the University Innovation Center. The administrator leading the center described its purpose as "archiving the data for institutional policies" and "making data-driven decisions on the policies and long-term planning" (A04). He explained the background of the establishment of this unit in relation to the organizational culture of UA as follows: UA, in terms of the establishment and implementation of institutional policies, in general, is like a giant, for some aspects, with a big head. . . It was where government officials were appointed, refused to work, and transferred. From an administrative perspective, it has a short memory despite its huge size.
It has no institutional memory because the data on the policies are not collected or accumulated, without any archive. The policies, cynically speaking, are written in word files, stored in staff's desktop computers, and might not be transmitted to newly appointed staff members when they are rotated. It is a memory-less process. (A04)

Analyzing the institutional ranking results and devising ways to enhance them was one of this unit's core tasks. The staff member in the planning office explained that the center was undertaking research on what efforts were necessary to improve rankings and how to simultaneously facilitate institutional development through these efforts (A03). The participants mentioned that establishing a new administrative unit like the data center was not typical for UA, which had developed a quite stable bureaucratic structure. However, the growing importance of rankings, together with the need for data management and data-driven strategic planning, seemed to facilitate organizational change through the newly established unit at UA.

Curriculum. Other than organizational restructuring, university rankings sometimes facilitate changes in curriculum through programmatic changes such as the establishment or closing of academic programs. In UA's case, university rankings seemed not to have a direct influence on its academic program offerings. An administrator in the planning office illustrated the academic environment which shaped UA's decisions on program-level changes to improve its rankings: There are some departments having difficulty in recruiting students and producing publications at UA. . . But UA can never downsize like some overseas institutions which entirely close individual departments showing low performance. . . if UA cuts out the college of Humanities and Social Sciences altogether (leaving science and technology), the ranking indicators would be enhanced. But UA can never do that, for it is UA. (A04)

He explained that UA faced considerable pressures which prevented it from transforming its programs, such as government control over the admission quota, public expectation for a variety of education, and conflicting interests among academic fields. One change at UA noticeable at the program level was the establishment of several professional graduate schools aiming to foster research in industrial engineering, international agricultural technology, and data science over the past few years (UA, n.d.-e). Although the establishment was not explicitly part of an effort to enhance rankings, the educational goals of these schools, such as advanced research, industrial cooperation, and internationalized education, seemed to partially address the key metrics used in the rankings such as international publications, number of international students, and industry income. This indirect link suggests that the image of a quality university that the rankings emphasize can influence the way programs and curriculum are selected and presented and precipitate changes at the program level.

Faculty. Institutional initiatives and policies for faculty hiring and compensation were under the indirect and limited influence of university rankings. Compared to the other case institutions in this study, the recruitment and compensation systems for faculty at UA seemed not to fully reflect the criteria used in university rankings, such as citation indices and publications in top-tier international journals.
According to a staff member in the planning office, UA recognized the necessity of revising its regulations on faculty personnel management to enhance its ranking status but faced serious challenges. He explained the characteristics of UA's faculty personnel system as follows: From my perspective, the most essential task to be completed is revising the faculty personnel regulations. Promotion and recruitment should be entirely based on their research performance and competency for conducting research. But UA, starting as a national university, allows its faculty to get promoted, reappointed, and tenured if only they meet the number of publications and years of experience, as if it were based on seniority. (A03)

He added that the faculty compensation policies had not been fully performance-based, nor had they prioritized researchers' publishing in top-tier journals as other institutions did to improve their research indicators in rankings. UA's attempts to reform the faculty personnel management regulations were usually foiled due to the serious objections and complaints of individual faculty members, disciplines, and colleges. An administrator in the planning office detailed the complex nature of reforming the faculty regulations for the rankings at UA as follows: The president of UA, just for laughs, is said to be a person waiting on the 2,300 faculty members of UA, who think themselves as high level officials. . . If UA requires them to produce more publications to get more funding, they will laugh at it. If it is a private university like University C, the faculty would follow without complaints. Here, they would strongly oppose it saying, "How dare you administrate UA in such a way?" (A04)

Among UA's faculty-related institutional policies, one of the most noticeable efforts to improve its rankings was promoting the recruitment of foreign faculty. Hiring more foreign-national faculty was recognized as a good way to accomplish the internationalization of education, which was assessed in the global rankings. The institutional initiatives to recruit foreign faculty started in the early 2000s with direct government financial support to promote internationalization but achieved limited success (Lee, 2002). The administrator explained the challenges UA faced by saying, "UA falls far behind in the indicators for internationalization used in the rankings. It is well known already. But we cannot recruit many foreign faculty members for now. Salary system, curriculum, and many things need to be changed" (A04). To address this issue, UA increased the employment quota for foreign faculty and provided special support for newly appointed foreign faculty, such as offering relocation expenses and campus housing (A74). Additionally, participants detailed the enormous government financial investment allocated to universities for attracting foreign faculty with excellent research performance (called Nobel Prize level researchers) as a mark of international competitiveness (A74).

Research. Closely linked to the faculty-related institutional practices, UA emphasized research as a core component necessary to accomplish its mission and carried out several initiatives to facilitate research (UA, n.d.-d). Showcasing research accomplishments via various media and providing support for researchers had been quite prevalent practices for research-intensive institutions in Korea since the 2000s, when the government-led educational development projects started.
However, it was noticeable that UA's research support programs were designed to reflect specific concepts and criteria adopted by global university rankings. For example, UA launched a new research support project offering generous financial support for selected academic fields in 2020 (A03). This project aimed to foster 10 academic programs whose research capabilities could be ranked in the world's top 10 (UA, n.d.-d). UA selected 10 programs across different disciplines, including linguistics, administration, material science, computer engineering, and medical science. A staff member in the planning office explained, "We do not have specific target ranking positions globally. But the 10-10 project, as one of the president's election pledges, to enhance the rankings of 10 academic fields until the global top 10, seems to be the most specific goal" (A03). As seen in this example, rankings were directly used to set the goals for research initiatives at UA.

Furthermore, administrators explored the possibilities of using the ranking criteria for research performance, such as number of publications, citation index, and even number of publications in top journals, to assess and incentivize faculty productivity. An administrator explained this concern: Publications by UA faculty, in terms of quantity, are not that insufficient. In terms of quality, the citation index and h-index are less than highly ranked universities in other Asian countries. . . Some argue to differentiate journals by classes or levels, but it is hard to reflect all these for research assessment. (A04) The words and phrases used in the presentation of the vision for research and in the discussions on research performance assessment originated directly from university rankings.

Students. In terms of institutional practices to support students, UA carried out various programs to internationalize its education and explored possible ways to improve its internationalization indicators in university rankings. This area is where the influence of the global rankings was most salient at UA. Although the purpose of these practices was not limited to the improvement in rankings, the participants seemed to understand the international initiatives in relation to UA's global rankings. Since the scores for internationalization were relatively poor compared to those of its peer institutions in English-speaking countries, UA made efforts to overcome these limitations by promoting student exchange, inviting more international students, and building support systems. A staff member in the international office (A02) mentioned that her office set the number of exchange students as the indicator of internationalization and encouraged Korean students to be educated overseas as well as inviting international students. To invite international graduate students to campus, UA offers a breadth of support including an international student residence hall, scholarships, and language training (A74). However, the efforts to recruit international students were less strenuous than at the other case institutions. UA, as one of the top-tier institutions securing a more generous budget and qualified applicants, set "absolute standards for applicants' qualifications and competency" (A01) and accepted highly qualified students in admissions. Staff members of UA discussed how to enhance the indicators of internationalization and pointed out the necessity of "courses taught in English, Chinese, or other foreign languages to increase the number of international students" (A01).
These examples showed how UA reacted to the global ranking results and how the ranking criteria influenced student experiences on campus.

Marketing. Marketing seemed to be one of the most distinct areas where UA's institutional response or reaction to university rankings was scarcely perceptible. With the global ranking results only briefly mentioned on its website, it was difficult to find publicly available marketing or advertising materials of UA that highlighted its rankings. From the interviews with staff members in various administrative offices, I noticed that this limited use of rankings in marketing practices was quite commonplace at UA. One participant in the international office said, "We tend not to over-emphasize our rankings like THE when advertising compared to other Korean institutions" (A02), while another participant in the planning office said, "We, in fact, are not using ranking results for marketing of UA. Other Korean institutions send emails and text messages to advertise their rankings" (A03). As seen in these quotes, the participants' perceptions of how UA used rankings in marketing, in contrast to its peer institutions, were almost identical. This naturally led to questions about the underlying reasons for UA's ranking marketing strategies.

The participants offered some insights into UA's marketing strategy of not displaying the rankings up front. The communication manager explained this matter as dissatisfaction with its global ranking results as follows: UA has been always ranked highly in the national rankings. But for global rankings, the administrators do not think UA is competitive enough yet. So, we are not using the ranking results in diverse ways for promotion. . . Our team rather focuses on publicizing the research accomplishments, one of the criteria of the rankings. (A72)

Another staff member in charge of rankings explained this dissatisfaction more specifically in technical terms, saying, "We are very careful of presenting rankings for we know our decent ranking results primarily stemmed from reputation scores higher than other institutions, not from indicators like research impact" (A03). From these conversations, it was noticeable that UA's decisions on the usage of rankings in marketing were products of its deliberate and thoughtful consideration of the benefits and drawbacks of using them. In these marketing practices, how its local agencies, especially the administrators, perceived and understood the circumstance played an important role. Although the direct influence of rankings was hardly visible in UA's marketing practices, the criteria and impact of rankings played a crucial role in guiding the institutional decisions.

Case of University B

Basic description. University B (UB, hereafter) is one of the 10 national flagship universities in South Korea, located in one of the southern provinces. The Korean government led its foundation in the 1950s to facilitate regional education for nation-wide development and reconstruction after independence. With its rapid expansion in the 1970s and 1980s, UB obtained growing recognition as an institution dedicated to the local community and the democracy of the nation (UB, n.d.-a). Similar to UA, it is a four-year, comprehensive, doctoral degree-granting institution offering various education programs at different levels. It has a larger enrolled student body than UA, with more than 27,000 undergraduates and 5,000 graduate students in 2019 (UB, n.d.-b).
With its robust standing in the local communities, UB filled nearly 100% of its admission quota annually and had a large budget of $350 million, including government support of $180 million, in 2019 (Ministry of Education, 2020). Since UB has been a top university representing the province, it had no difficulty in recruiting undergraduate students even as the school-age population declined in Korea. The Korean government support offered to UB covered ordinary operating expenses such as faculty and staff salaries, which was typical for most of these national institutions (B02). Also, UB obtained resources through project-based funding given to selected institutions for the improvement of the quality of education (B02).

Despite its strengths, this institution has come to be considered less competitive than private universities in the capital area (Kang, 2014). As the population and resources became concentrated in the capital area over the past few decades, students started to prefer institutions 'in Seoul' and the larger metropolitan area to the flagship universities representing their provinces, like UB. Accordingly, HEIs in the Seoul metropolitan area, where the resources were centered, had held a dominant position in the competition for government funding (Korean Higher Education Research Institute, 2020). Due to this imbalance in educational resources, UB's educational expenditure per student in 2018 was about $15,000, less than half of UA's expenditure (Ministry of Education, 2020). The global and national rankings of UB also reflected its current position and share in the Korean higher education system. UB usually held middle to low ranks among the top 30 universities in domestic rankings, following the institutions in the capital area (Joong Ang Ilbo, n.d.-b). Also, it occupied middle to low ranks in the top 1,000 in the QS and THE global rankings by 2020 (QS, n.d.; THE, n.d.). The rankings had not improved greatly and were rather in decline, especially in the case of the global rankings.

Influences of rankings on institutional practices. The case of UB is the most salient in showing how global and national university rankings can be interpreted and can influence changes to institutional practices. As the universities in the capital area became more preferred by students and faculty and occupied favorable positions in some government funding projects, UB experienced significant challenges in providing quality education under constant financial, competitive, and social pressures. To survive in this challenging circumstance, UB shifted to respond to university rankings or evaluations both at the national and global levels through various institutional initiatives.

Organization. Compared to the other case institutions in this study, UB seemed to put in a great deal of effort to enhance its global and national ranking positions through organizational planning and goal setting. As an administrator in the international office said in his interview, "Rankings become more important in operation of the university, therefore, many presidents and administrators have interest in them. . . Now [UB] refers to rankings in its operation to a great extent" (B07). At UB, the global and national ranking results announced annually often provoked intense discussions within the central administration on how to improve the indicators.
The Office of Planning and Coordination at UB played a major role in leading these discussions among the staff and administrators and requested different administrative units to devise possible ways to improve their weak indicators, such as the number of exchange students and foreign faculty. The above-mentioned administrator explained that the planning office held special meetings for administrators to explore the ranking results and identify specific areas that needed to be improved whenever the results were announced (B07). To engage various administrative units on campus in a more unified effort to improve ranking scores, the office formed ranking task force (TF) teams consisting of staff and administrators from various offices. A staff member in charge of ranking-related tasks explained, "Now we manage two TF teams. The first one aims to enhance UB's weak indicators across different evaluations. And there is a newly established team for the IMPACT ranking. To this team, we invited associate deans from each office" (B06). After these meetings, each office was supposed to analyze its relevant ranking indicators and work out feasible plans to enhance the scores, such as providing more research incentives to faculty or offering more scholarships for students participating in international exchange programs (B05). The coordinator in the planning office mentioned, "The improvement plans for the rankings submitted by each office gradually led to actual budget increase not only to implement the proposed initiatives, but also to improve the overall educational outcome of the office" (B06). This showed that the rankings sometimes facilitated a more explicit organizational change in the case of UB than in that of UA.

Despite the influence of the rankings on organizational operations, most of the participants pointed out that the institutional efforts to enhance rankings were mostly limited to the planning process and did not move toward meaningful changes on UB's campus. A staff member working across various offices at UB suggested that this limitation arose from units' different interests and tasks because "each office has its own regular tasks. Ranking-related tasks are extra ones for many offices. Therefore, it is hard to actively get involved in the efforts to improve rankings" (B02). This participant implied that improving ranking indicators was not considered a primary institutional goal widely shared among the members. Some participants regarded ranking/evaluation tasks as work primarily carried out by the planning office at UB, as seen in the following quotes: "The planning office does a lot for rankings, not my office directly" (B07), or "Ranking tasks almost squeeze out staff members in the central administration. That is not applicable to individual colleges" (B72). From the conversations with the participants in various offices, the initial impression was that they were serving a subsidiary role, assisting the planning office with the ranking-related tasks.

Together with this lack of unified effort, budget deficit issues were repeatedly highlighted by many participants as a barrier to the implementation of institutional initiatives. A staff member operating international programs in the graduate school explained this issue, saying, "If UB intends to enhance the rankings, more budget and support are necessary. But now we are just reporting our performance when requested officially without further actions" (B03).
Another staff member in the international office added that she had given up proposing new initiatives to enhance ranking indicators because there were always budget problems: When I attended one of the TF team meetings for rankings, I was thinking of proposing a new program to enhance the ranking indicators. But there are always budget issues. From the meeting, it was agreed that we were just focusing on maintaining and showcasing what we were doing more efficiently, rather than establishing new initiatives and renovating current ones, which inevitably requires more money. (B05)

These accounts partially illuminated why UB's endeavors to improve the rankings ended up in "repeated planning and self-examination, but no improvement" (B06). Together with these internal challenges, the participants also pointed out external challenges UB faced in implementing the ranking initiatives, arising from its insufficient recognition in the higher education environment. Unlike the other case institutions maintaining good standings in well-known global rankings, UB had a greater urgency and stake in enhancing its visibility in any of the global rankings. Therefore, UB had to explore and respond to numerous rankings, aspiring to be successful in at least one of them through accurate reporting of data. A staff member in charge of rankings at UB explained the challenging situation as follows: We are involved in too many evaluations, almost all of them. From my perspective, we are preparing and submitting data for some rankings in which we cannot be ranked high. So, only the staff member in charge has difficulties. If only we can prioritize and concentrate. Now we report to almost 17 ranking agencies. In many cases, we cannot do anything to improve. The administrators require to devise ways to improve the rankings. But I face limitations in proceeding. (B06)

As seen in the quote, UB was struggling to respond to various global and national rankings, which became burdensome tasks for the staff at UB. At the same time, UB needed to pay more attention to national evaluations directly related to funding than to global or national rankings, which were difficult to improve and gave no direct financial benefits. One staff member managing institutional data explained that UB considered the criteria used in government-led evaluations more important than other indicators used in the national and global rankings for these reasons (B01). Since the criteria used in the government-led evaluations were prioritized to assess its performance, it seemed difficult for UB to pursue what was assessed in the global and national rankings simultaneously with its limited organizational resources.

UB made efforts to address these organizational challenges by implementing some initiatives. Striving for a more systematic management of rankings and evaluations, UB recently established a special unit named the Performance Management Center, where institutional data would be archived and analyzed (B01). Similar to UA's example, this administrative unit aimed to "manage the educational indicators used in the rankings and evaluations" and "function as a central axis in conducting global rankings, national rankings, (governmental) evaluation for structural reform, and self-evaluation" (B01). The participants explained that most of UB's peer institutions had recently established similar centers since the government required them to manage their educational performance systematically to be eligible for some funding projects (B05).
UB also requested consulting services from one of the global ranking agencies, in collaboration with other national flagship universities in non-capital regions of Korea, to get helpful information on how to improve its rankings (B06; B07). The staff member provided more details of the consulting as follows: Now, we say it is useless to compete with our peer flagship universities. We spent 50 million won to get consultation from QS, as a part of a network for national flagship universities (located in non-capital regions). We, as a national university, cannot pay the consulting costs independently. Therefore, we participated in a joint consulting at half cost. (B06)

These examples illustrated how UB, operating in a stable bureaucratic system like UA under considerable government control, had been striving to improve its rankings at the organizational administrative level. As seen in these examples, UB's vigorous attempts to enhance its rankings and reputation in the national and global higher education system were initiated under the considerable influence of the government and peer institutions at the national level.

Curriculum. Besides this organizational restructuring, university rankings seemed to facilitate changes at the individual program level at UB. Since the undergraduate admission quota was strictly controlled by the government and recruiting graduate students was always challenging for UB, establishing and closing academic programs based on their performance would be difficult. Instead, UB updated its department evaluation system a few years ago to reflect the indicators used in the national and global rankings. A staff member in the planning office explained this change, showing the document stating the new evaluation plan: "To improve the ranking indicators, it was recently agreed to integrate ranking results into the annual department evaluation. These are the criteria originated from the rankings" (B06). Ranking indicators, such as the number of publications, number of exchange students, international students, and foreign faculty, constituted an important part of the evaluation criteria. Each year, UB provided incentives for the programs gaining high scores and promoted their accomplishments among its members and stakeholders (B05). Although the evaluation is not likely to bring drastic transformation in academic programs and curriculum at UB, it should be noted that the introduction of ranking criteria would prompt each academic program to pursue what is assessed in the rankings. After this change, a high-achieving program at UB could be described as a program demonstrating excellence in the global/national ranking criteria. This might bring gradual changes in the operation of programs, including the design and implementation of curriculum, recruitment of faculty or students, and establishment of international partnerships.

Faculty. Institutional initiatives and policies for faculty hiring and compensation at UB were more directly influenced by university rankings than in UA's case. Although there were similarities in how research performance was assessed among the case institutions, UB more explicitly employed the criteria used in both global and national university rankings for its policies on faculty recruitment and compensation than UA did.
According to UB’s regulations on appointment and promotion of faculty (UB, n.d.-c), newly appointed faculty members were “required to offer specific number of courses taught in English” otherwise they had significant disadvantages in their annual research performance evaluation. UB also provided a detailed list of criteria used to assess research performance including an evaluation structure for academic publications based on the reputation of journals (UB, n.d.-c). For instance, faculty members who published articles in renowned science journals such as Nature or Science get 1,000 points, while other SCI journal articles were worth 200 points in their annual assessment. These criteria replicated what major ranking agencies value and prioritize in assessing research and education of HEIs, which also aligned with the criteria for the research funding projects and college evaluations by the government. UB’s faculty were requested to fulfil the growing expectations for research performance and internationalized education. Hiring more foreign faculty members for internationalization was also an important institutional agenda of UB recommended by the government. Since the number of foreign faculty 82 was one of the evaluation criteria used in the government research funding of projects and national/global rankings, UB struggled to invite more foreign faculty members to campus. One staff member supporting graduate students explained these difficulties as follows: UB could not even fill the foreign faculty employment quota approved by the government. We need to hire 15-16 faculty members, but it is not easy. If UB wants to invite more, there should be more efforts like publicizing the openings through various platforms and faculty’s international networks. UB seems not to provide systematic support. Rather, individual departments are expected to support them. Really difficult to invite foreigners. (B03) Another participant managing international partnerships also pointed out that there existed a disinclination among faculty groups to hire “foreign scholars who were considered to need extra support” compared to Koreans at UB (B05). University rankings thus influenced UB’s faculty employment policies and brought up a matter of concern among UB’s faculty and staff. Research. Like other case institutions in this study, UB developed and undertook various research support initiatives for research as one of its core missions. On the website, UB emphasized its goal to be “a global research hub” with “world class researchers to contribute to the society” (UB, n.d.-d). Examining the research support programs UB provided to its researchers, I noticed that most of the support was designed to increase the number of publications recognized by the global rankings and governmental research projects sharing similar criteria. For example, UB provides editing services for academic articles submitted to SCI level journals and special funding for each published article (UB, n.d.-e). The criteria of university rankings, shaping the faculty employment and compensation system, guided the research support for faculty in a similar way at UB. Enhancing research performance assessed in the rankings seemed to become an important goal for staff and administrators. Like other cases, the influence of global agencies was more prominent in UB’s research initiatives. 
One participant in the international office said that in the TF team meetings to improve rankings, people always emphasized the importance of research and proposed increasing support for faculty publications in international journals (B05). Detailing the importance of research in relation to rankings, another staff member carrying out ranking-related tasks said, "Research, reputation scores we are struggling with, eventually, depend also on research. When we contacted the ranking agencies, they suggested publishing more articles in Elsevier journals to increase the possibility of being selected as reputation survey participants" (B06). The staff member also illustrated the challenges of research support initiatives at UB as follows: It is hard to change the faculty group. According to an institutional study, research performance of UB is not excellent, given that research support is greater than other national universities. Unless we provide more incentives, there might be no big difference. (B06)

These explanations explicitly show how the staff members perceived research and the purpose of research support programs. Their perception was shaped and influenced by the rankings and evaluations. Improving research performance in the rankings, rather than research itself, seemed to be the ultimate institutional goal to pursue. Furthermore, enhancing research was understood from the perspective of investment and performance. For the participants in this study, the whole system and process for research at UB seemed to operate like a knowledge-production factory, where investing more resources would yield greater output to meet an academic production target.

Students. University rankings also influenced UB's student support programs. These changes were observed mainly in institutional initiatives to internationalize education, which constitute a significant part of what university rankings assess. First, UB provided financial support for instructors offering English-medium courses at the undergraduate level (B03). This support aimed to increase the number of English courses for the growing international student population as well as to gain good scores in one of the domestic rankings which had used it as an evaluation item (Joong Ang Ilbo, n.d.-a). Second, UB encouraged students to participate in long-term study abroad programs by offering financial support and credit transfer. Similar to UA's case, the international office of UB set the number of outbound international education program participants as an organizational goal, a figure that was used in the domestic rankings. To increase students' participation, UB provided fellowships for its students who studied abroad as exchange students. UB's goal for internationalization seemed more focused on improving rank standings. A staff member managing international exchange programs at UB explained, "We once established a special program to send our first- and second-year students overseas, as the president required us to enhance the internationalization scores (number/proportion of international exchange students) as soon as possible through investing money" (B05). From these examples, it was possible to identify a common basis for these institutional practices to internationalize education, which also applied to the research support practices. Facilitating internationalization of education seemed to be understood as improving the relevant ranking indicators by means of financial resources.
Finally, university rankings have shaped UB's policies for the recruitment of international students. Inviting more students from overseas countries has been an important agenda item for UB, since UB experienced enormous challenges in recruiting graduate students to conduct research and in securing financial resources due to a tuition freeze over the past 10 years (B02; B03). UB seemed to address these human and financial resource challenges by inviting more students from overseas countries. Rankings, recruitment of international students, and tuition revenue were interconnected at UB. One staff member managing admissions procedures for international students illustrated the relationship among them as follows:

Students choose their degree programs based on the global, national reputation and evaluation results. As the school-age population decreased, universities recognize the importance of managing their ranking results to recruit students properly. Decreased student population leads to a decrease in tuition revenue. If the number of admitted students decreases due to the dropped rankings, the fiscal health of universities might be affected negatively from a long-term perspective. (B73)

This explained why UB attempted to invite international students for the rankings and improve the rankings to recruit more. In recruiting international students, UB considered various ranking indicators related to international students from national and global rankers, such as diversity as well as the number of students. Even the diversity of international students was managed by a quantifiable indicator (number of students by different origin countries) proposed by a national ranking company. The same staff member in the international office explained, “When the diversity indicator of international students in the domestic ranking decreased, we examined our indicators and explored new target countries to invite more” (B73). Thus, UB's institutional practices for international education were guided by what the ranking agencies assessed as a way to address the institutional challenges.

Marketing. In the marketing of UB, university rankings seemed to be used frequently, but in a very controlled manner. Whenever the national and global ranking results were announced, UB delivered press releases to regional or national news media about the ranking results and posted them on the main page of its websites (UB, n.d.-f). In the news archive on its website, I found numerous news articles promoting its ranking and evaluation results dating back to the early 2000s. The planning office appointed a staff member to deliver the ranking results to news media for promotion of its performance to the public (B06). Additionally, in news articles, some distinctive ways of presenting the ranking results were noted. Rather than showing its entire global or national ranking results, UB presented its relative positions among its national flagship peer institutions in each ranking. For instance, the website of UB showed promotional phrases such as “Ranked 2nd among national flagship universities in the Best Global University Evaluation” or “Research competitiveness ranked 3rd among national flagship universities and 15th in the nation” (UB, n.d.-f). From these practices, UB's marketing seemed to be a product of interactions among various agencies including global/national rankers, the government, students and stakeholders at the global and local levels, and local communities.
Staff members explained that this selective interpretation and communication of ranking results resulted from UB's lower ranks in the global rankings. One participant communicating with international partner institutions said, “UB was ranked in the top 500 but now in the top 1000 universities globally. While we must use QS rankings for international partnership apparently, we become passive in uncovering it outwardly” (B06). Another participant developing international partnerships explained, “Introducing UB to students or institutions overseas, we do not promote our rankings for they are not that high. . . Yet, there are always institutions asking about rankings. So, we use Leiden or sustainable development rankings instead of QS” (B07). These responses illuminated that UB was in a fundamental dilemma between publicizing its ranking positions for the enhancement of international collaboration and presenting its successful accomplishments for the creation of an ideal institutional image. Therefore, UB seemed to show its ranking results for marketing purposes but delivered only limited information helpful for its branding or image-making.

Case of University C

Basic description. University C (UC, hereafter) is a selective private university located in the capital area of South Korea. The foundation of its modern form was in the mid-1940s with accreditation from the government. Emphasizing the cultural heritage of East Asia and Korea, UC went through expansion and enhancement of its educational environment during the 1970s and 1980s. In the 1990s, it experienced fundamental changes in its operation when a big business corporation acquired the university foundation (UC, n.d.-a). Similar to the previous case institutions, UC is also a 4-year, comprehensive, doctoral degree granting institution providing undergraduate and graduate programs in various academic disciplines. The number of enrolled undergraduates and graduates was approximately 25,000 and 8,000, respectively, in 2019 (Ministry of Education, 2020). UC has been well-known for its strategic management and the enhancement of its national as well as global standing over the past few decades. UC's accomplishments in management and rankings are primarily attributed to the strategic intervention and substantial investment of the corporation which took over the university foundation. The annual revenue exceeded $550 million as of 2019; unlike the previous two public case institutions, UA and UB, only 45% of it came from tuition and 5% from the national and local governments (Ministry of Education, 2020). UC spent more than $25,000 per student on educational expenditures (Ministry of Education, 2020). In terms of university rankings, UC showed great improvement in its ranking results within a short period of time. In one of the domestic university rankings, this institution, not included in the top 10 universities out of 30 HEIs in the 1990s, was recognized as one of the top-tier institutions in the 2010s (Joong Ang Ilbo, n.d.-b). The global reputation of this institution has grown constantly over the past decade, and it now holds upper ranks in the global university rankings (QS, n.d.; THE, n.d.). UC's improvement in the rankings became an exemplary case emulated by HEIs in Korea and overseas countries.

Influences of rankings on institutional practices.
With UC’s remarkable improvement of its ranking positions, UC and its strategies to enhance reputation have been widely explored by several HEIs not only in Korea but also in other countries (C03). UC’s institutional practices 88 suggest a more systematic and advanced adoption and application of university rankings in its various initiatives compared to other institutions. Also, most of the initiatives to improve the rankings of UC have been continuously implemented over the past decade unlike other case institutions which were in discussing ways to respond to them. Compared to other cases showing limited institution-wide efforts and consensus for rankings, UC seemed to be more like a unified organization moving together towards accomplishing its institutional goals, closely linked to rankings efficiently. Organization. In UC’s case, university rankings exerted far-reaching and permanent effect on its overall operation of the institution. While other case institutions are examining their ranking results and exploring ways to improve their scores, UC already carried out these initial tasks and had transformed its organizational practices. When other case institutions sought immediate, partial solutions to enhance their rankings, UC seemed to work towards its ultimate organizational goals entailing the improvement of global rankings from a more long-term perspective. A director in the Strategic Planning Office explained that UC’s project group for external rankings or evaluations, similar to the ranking TF teams of other case institutions, started examining the ranking indicators more than 10 years ago (C02). He added that UC’s improvement in the rankings was a product of a long-term, consistent institutional efforts rather than short-term projects like the ranking TF team. He shared his experience at the TF team which had been operated in the past as follows: When UC dwelled on rankings about 10 years ago, interestingly, we were getting nowhere. It takes a long time until institutional efforts produce a noticeable outcome. Clinging to the rankings seems not necessarily to lead to improvement. . . The team had aimed to improve the rankings for three years. But at that time our rankings were the same. The primary lessons learned from the evaluation TF team was that we had plenty of data which were never used at the institutional level. After learning this, the team was dissolved. (C02) Based on the knowledge of its rankings and use of institutional data, UC started to make efforts 89 to collect, analyze, and process the data for the institutional planning. Participant interviews suggested that data-driven goal setting and management were recurring themes prevailing in various institutional initiatives UC implemented to facilitate campus-wide changes. Over the past decade, recognizing the importance of education data both at the institutional and programmatic levels, UC started to utilize its various data to assess its current state in comparison to peer institutions and establish its institutional goals. For instance, UC set its vision and specific strategies in five or ten year increments based on its extensive data about education, organization, research, and higher education environments (C02). Likewise, individual colleges and departments were also expected to set their performance indicators annually and compare their outcomes with other internal and external units (C01). 
UC analyzed performance of individual faculty members and departments and provided faculty or departments with detailed information on global academic trends and prominent research outcomes of other universities to stimulate innovation (C02). As seen from its vision to be “a global leading university” (UC, n.d.-b) ranked in the global 50 universities, university rankings and their criteria seemed to determine a comprehensive direction across multiple planning levels. The strategic planning and implementation of initiatives based on data became a key feature of UC’s institutional practices. A faculty member in the field of Social Sciences illustrated these organizational practices in relation to rankings, saying “When it comes to management, there is UC. UC always puts management of indicators in the first place. For instance, for rankings or evaluation projects, UC thoroughly examines what types of indicators are included and takes appropriate actions” (C03). Based on this data driven planning and management, UC showed enormous flexibility in resource allocation and organizational structuring unlike the public institutions in this study. UC 90 identified some rising academic fields, considered to be promising for the future, and invested substantial financial and human resources in those fields for more than a decade, which produced noticeable outcomes recently (C02). The participants attributed this flexibility to the financial support from its university foundation owned by the corporation (C02; C03). Since the strategic planning and management had continued to bring changes on campus, UC also established several administrative units to provide better support for the accomplishment of educational goals. UC founded special units such as an office for data management, centers for innovative higher education, and student success mostly for the first time in the nation. One participant who held administrative positions explained, “Reorganization such as establishment and closure (of offices/institutes) happens quite frequently” (C03) on UC’s campus. These examples explained that UC, quite sensitive to changes in the environment, exerted special efforts to transform itself to adapt to these changed contexts. Compared with other case institutions in this study, university rankings and organizational operation of UC seemed to be interconnected in a more comprehensive manner. Although UC set its institutional goals in a way directly influenced by the global ranking criteria, UC created its own organizational system and consensus promoting institutional changes to go beyond what was assessed in the rankings. Rather than clinging to the ranking results, UC paid more attention to more specific institutional data on a deeper level such as quality of research publication to assess and improve its performance. As a staff member stated, the rankings brought significant changes on UC, but UC developed a more advanced use of them: “Ranking results themselves have no meaning. . .They are meaningful in that they led to organizational innovation of a university which was not likely to pay attention to change” (C02). 91 Curriculum. In addition to the organizational transformation, global university rankings also facilitated changes at the program level and curriculum area at UC. The influences of the global rankings on the academic programs are salient on the following aspects: Establishment of new programs, concentration of graduate education, and program-level assessment. 
First, similar to UA's case, UC established various academic programs and supported their operation to enhance its global recognition and research performance. As mentioned in the previous section, UC explored new academic fields expected to be prosperous in the future on a global scale and set up specific colleges, graduate schools, or programs with vigorous faculty recruitment and marketing. A staff member in the planning office explicated this process as follows:

We set our vision in every five or 10 years officially to emphasize specific academic fields. For example, in 2010, when we thought East-Asian studies would be quite important, we established an East-Asia research institute where several academic disciplines in the Humanities got involved together. . . If more and more faculty members join in, research grants are accumulated, and more graduate students are admitted in these fields. (C02)

In a similar way to the above-mentioned example, UC established new academic units such as the Institute for Convergence or integrated its previous programs to build independent colleges over the past decade (UC, n.d.-c). The data suggest that the successful establishment and expansion of academic programs to bolster international and national reputations led to significant changes in the overall curricula UC offered.

Secondly, in some fields of study, graduate education was more emphasized and transformed than undergraduate education at UC. A faculty member in the engineering field explicated this change as follows: “UC concentrated more on research and graduate education; therefore, undergraduate education has to maintain certain levels. Most of the graduate courses became innovative and unique, while undergraduate courses are usually at the basic level” (C01). Since producing research outcomes had been pursued as UC's primary institutional goal, graduate curricula underwent extensive renovation to promote innovative research. Also, to attract more students to its graduate schools and enhance research performance, UC offered graduate-level courses to undergraduates interested in graduate studies and partial tuition exemptions to students showing academic excellence (C03).

Finally, the criteria used in the rankings were adopted to assess the performance of individual programs at UC. A faculty member serving as a department chair illustrated that the ranking indicators by subject were employed for departmental planning and self-evaluation as follows: “Without paying attention to other universities' performance, we internally manage our performance based on the subject ranking indicators” (C01). According to his explanation, the ranking-based evaluation of programs “fostered competition among academic units” and “eventually enhances the rankings of the university” (C01). When the ranking indicators are used for program-level goal setting, it is natural that the operation of individual programs is adjusted to fit what the rankings assess and measure.

Faculty. As in other case institutions in this study, institutional initiatives and practices for faculty hiring, promotion, and compensation at UC reflected the indicators of university rankings. The global rankings and national evaluations/rankings, which shared similar criteria for assessing HEIs' research, changed these initiatives to promote research on a global scale. Among academic communities, UC is locally known for the research requirements imposed on faculty members, which are more demanding than those of other HEIs.
A faculty member who was appointed to UC in the mid-2000s said, “The minimum qualifications required for appointment, promotion, and re-appointment of faculty have increased for the past two decades. When I was appointed, this trend had already begun” (C04). He explained that his colleagues felt more extreme pressures on publications at UC compared to their peer faculty employed in other universities. Another faculty member added that most faculty applicants to UC now had publications far exceeding the minimum number required in their job postings, since the central administration would only approve the appointment of applicants showing outstanding research productivity (C03). He shared his experience of hiring a new faculty member at UC as follows:

My department has not been able to hire a new faculty member for years. Although faculty of a department submit a request for employment, the university administration rejects it if the applicant fails to meet the expected level of research such as (articles published in) SSCI or top journals. (C03)

These examples showed how UC promoted research, which constitutes a large part of the ranking criteria, through its faculty hiring and promotion procedures. UC concentrated on recruiting renowned researchers from all around the world to form clusters of influential researchers on campus. In some academic fields, the number of faculty increased greatly due to these vigorous institutional efforts. A faculty member explained, “As one of the ranking strategies, colleges and huge research project teams focused on recruiting international scholars identified as highly cited researchers or Koreans working overseas with potential scholarly impact” (C01). Another faculty member explained that UC “increased flexibility and hired faculty showing remarkable capacity even exceeding the faculty quota” (C03). He illustrated a case of a newly appointed scholar whose entire hiring process took less than a month, accelerated by UC's aggressive recruitment strategies for prominent researchers.

In the faculty-related practices of UC, it was possible to identify several themes, similar to yet distinct from those of other case institutions, prevailing in the participants' experiences. First, UC now emphasized the quality and impact of research along with the quantity. A faculty member explicated the changed criteria used to assess faculty's performance:

Most of faculty meet the minimum number of publications required for reappointment or promotion. Quality of research is considered. How this researcher is assessed globally (top X percent researcher) and what are the impacts of research products would be important. (C01)

Second, UC underscored publications written in English and included in SCI-level journals, similar to UB. UC provided, on its faculty job postings, a more detailed score chart explaining the different scores each applicant receives based on the level of the journals in which they published. A faculty member pointed out the existing discrimination in these practices, which prioritized English publications over Korean journal articles, as follows:

For instance, when it comes to English publications, it is required to write specific number of SCI-level journal articles. An applicant who published a few SCI articles gets grade A. If (the applicant was) publishing other English journal articles, the grade would be B. If the publications are Korea Citation Index (KCI) journal articles, the grade would be C. Domestic journal articles start from grade C.
It has been criticized as (a kind of) racism (C04). Furthermore, UC also utilized the global rankings to recruit faculty who graduated from prestigious universities. The same faculty member explained, “UC also provided a score chart to evaluate the levels of universities in which applicants completed their doctorate degrees based on the global rankings” (C04). Even in fields like Korean studies applicants’ credentials from highly reputable non-Korean universities, such as American Ivy league institutions, play a critical role in the recruitment. UC seemed to consider hiring such applicants a way to increase the number of English publications (C04). Research. Closely linked to the faculty employment policies, UC designed and carried out various research support initiatives to enhance their research performance. Since the criteria of rankings have been reflected on UC’s vision for research, these initiatives can be interpreted as its responses to the rankings. From the conversations with the participants, I found that UC gave heavier emphasis on research and developed a more organized support system for research than other case institutions. A faculty member in the engineering field described UC as “one of 95 the pioneer private universities concentrating on research” where “most faculty are working towards transformation to a research-intensive university” (C01). These impressions precisely capture the institutional goal for the enhancement of research at UC accomplished through a streamlined institutional effort. Although previous cases faced difficulty in achieving institution- wide consensus about improving research performance, UC had been successful in implementing its initiatives for better research outcome based on consensus among its members (C02). Some participants said that UC underwent significant transformation from a teaching institution focusing on the Humanities and Social Sciences to a research institution renowned in Engineering and Medical Sciences over the past two decades (C03; C04). What made this dramatic transformation possible was the quantification of research and academic trends. A staff member in the planning office explained this strategy and its implications as follows: “If we visualize and quantify research, rather than clinging to rankings, and offer the quantified information to faculty, showing the changes in the department and academic field, then voluntary changes take place” (C02). Another participant added, “Unlike other universities UC never carries out the initiatives unreasonably or pushes faculty directly but in a more refined manner. UC provides information to encourage each department to move forward instead” (C03). At UC, participants suggested that providing quantified research performance data to faculty encouraged them to pay more attention to research and strive for improving their scholarly productivity. This quantification of research is also intimately connected to the faculty employment policies of UC, reflecting the ranking criteria. Another important feature of UC’s research-related practices would be its massive and extensive administrative and financial support for research, incomparable to that of other cases. A faculty member pointed out that UC offered “the highest amount of internal faculty research 96 grants” and “the best research support for faculty in the nation in terms of diversity” (C03). UC’s research grants and compensation were ranked in the top five among 191 four-year universities in Korea (C02). 
Furthermore, UC supported research expenditure and equipment more than other universities for its target academic fields, as discussed earlier (C02). In terms of the multiplicity of institutional support, faculty members agreed that UC provided adequate support services designed to meet the different needs and expectations of researchers. For example, UC hired staff members supporting research as representatives of individual colleges or funding agencies, which is a novel practice among South Korean universities (C01; C03). The faculty members seemed to be satisfied with this specialized and customized research support system of UC. These examples show how UC had been striving to create a stimulating environment to improve its research performance.

Students. The influence of university rankings seemed more salient in student-related institutional initiatives at UC than at other case institutions. Like other cases, enhancing the internationalization of education was the primary purpose of these initiatives to succeed in the rankings. UC also made efforts to invite more international students on campus and strategically increase the number of English-medium courses. A faculty member in the Humanities pointed out, “Increasing number of international students and faculty and expanding courses taught in English were the most noticeable institutional practices to improve the internationalization indicators in rankings I have observed at UC for the past 15 years” (C04). UC's efforts were more systematized and sustained than those of other case institutions, producing discernible changes on campus. In this regard, student-related initiatives at UC seemed to be under more pressure to meet the needs of a global target audience such as prospective international students.

UC had endeavored to attract more international students on campus to internationalize its education. A faculty member explained that UC was “striving for inviting more international students” so that “the percentage of international students increased rapidly for the past 10 years” (C03). Compared to the case of UA, where the recruitment of international students was not actively promoted, UC's international student population was far greater, including students pursuing both degree and non-degree programs. The faculty member added that the vigorous recruitment of international students was also linked to the purpose of gaining tuition revenue (C03). As discussed earlier, nearly half of UC's revenue came from tuition, which made UC more actively involved in international student recruitment, unlike its public peer institutions. As UC took up higher positions in the global rankings, faculty members experienced the changed reputation of UC in the global higher education market. Another faculty member explained, “As the ranking status went upward, I could see the quality of international applicants went up. Now applicants graduating from renowned universities overseas started to apply for my program year after year” (C01). Thus, in UC's case, university rankings and international student recruitment were closely interconnected.

In terms of English-medium courses, UC had been operating a quite organized system to offer more courses for internationalization and manage their quality. Compared to other case universities where English-medium courses were mostly recommended, UC strictly regulated the number of English-medium (or foreign language) courses required of each faculty member.
A faculty member explained the regulation for these courses as follows: “English-medium courses are not recommended but required. Newly appointed faculty must teach at least 9 credit hours in English out of 15 required credit hours per year” (C03). The number of credit hours for English-medium courses required of faculty members seemed to be greater than those of other case institutions. He also pointed out that UC worked toward managing the quality of English-medium courses to examine whether the courses would meet the language requirements. Reflecting these types of institutional efforts, courses taught in English or other foreign languages at UC accounted for almost 30 percent of all courses offered on campus, which aimed to contribute to the cultivation of global competence and the education of international students (UC, n.d.-d).

Marketing. Among the case institutions in this study, UC's use of the university rankings in marketing was the most salient. Most of the promotional documents published by UC, including its brochures, newsletters, online bulletins, and websites, presented and highlighted the rankings. At first, it could be assumed that publicizing its ranking results among wider audiences, like many other universities around the world, would be the primary marketing strategy of UC. Yet, the interviews with the participants suggested the underlying intention and meaning of these marketing practices at UC.

The vigorous promotion of UC's ranking positions seemed to be associated with the prestige hierarchy of universities, historically developed and widely shared among the public in Korea (Jung, 2010). For many decades, UC had not been considered a top-tier institution according to the hierarchy, which exerted great influence on prospective students' college choice. A faculty member explained that there existed widespread discontent among stakeholders about UC's perceived prestige in Korea (C04). Therefore, “UC aspires to be ranked in the top three. UC thinks it can compete with the top universities, not in the same level with the second tier anymore. . . There has been a so-called third-place complex” (C04), he added. UC had been striving to achieve more prominent accomplishments and a stronger reputation than the top-tier institutions through various strategies. Given this circumstance, gaining excellent results in university rankings to inform the public was one of the primary institutional marketing goals of UC. A staff member explained the purpose of UC's ranking marketing as follows: “There exists a pre-developed hierarchy in perceived reputation of universities in Korea. UC attempts to spring a surprise by showing its greater accomplishments than its peers through the ranking results, against this perceived hierarchy” (C02). Thus, UC's university rankings and ranking marketing can be understood as its efforts to challenge the widely held perception of prestigious universities in Korea, which had not recognized UC as a top-tier institution. In this sense, the influence of local-level agencies, such as Korean students and other stakeholders, was more obvious than that of other agencies in the marketing practices at UC.

Compared to other case institutions in this study, UC employed more direct and effective marketing strategies. What distinguished UC's strategies from those of the other two case institutions was the increased visibility of ranking results through multiple communication channels.
A faculty member illustrated this marketing strategy by providing the following example: “When its domestic ranking position was enhanced greatly, UC covered the facade of its iconic building with a huge placard presenting the accomplishment, shared the news with its alumni communities, and presented everywhere on the websites” (C04). As in this example, UC actively used various channels to communicate its rankings, unlike other case universities, which employed their ranking results to a quite limited extent for marketing. Furthermore, UC made efforts to devise more effective ways to communicate its rankings, akin to an advertising company. A staff member explained that UC's “public relations department would not be interested in the logic of rankings but in how to promote the results, how to articulate them effectively for the audiences” (C02). It is likely that the ranking results shared among the public by UC would only present the remarkable ranks themselves, without containing detailed information on its educational accomplishments.

Cross-case Analysis: Based on a Glonacal Agency Heuristic

Thus far, this chapter has discussed how university rankings, combined with a range of higher education contexts, influenced the three Korean HEIs in this study across various dimensions. The ranking phenomenon explored in this study was quite complex in nature because a variety of ‘agencies’ and ‘forces’ at different levels were actively involved in implementing institutional changes to improve their rankings. To better understand this complexity, this section analyzes the three cases using a glonacal agency heuristic, examining the intersecting dimensions (global, national, and local) of the global higher education system.

Global Dimension

Global agencies. The ranking phenomenon in the Korean higher education contexts occurs under the significant influence of various global agencies, including global university rankers, international students, international networks of HEIs, and individual HEIs. From the interviews and documents of the three case institutions, it seems obvious that global university rankers exert considerable influence over the HEIs and various stakeholders of higher education around the world. Another global agency playing a crucial role in the entire system is international students, who get information on their future study abroad destinations from the global rankers. Many participants across the institutions mentioned that international students, unlike domestic students, would rely on global rankings for their college application (A01; B02; C01). Moreover, associations of HEIs, consisting of HEIs in various countries for collaboration, are global agencies facilitating the exchange of information on global rankings and networking among HEIs based on the rankings (A02; B07). Finally, individual HEIs also exercise influence as global agencies by actively responding to the rankings: inviting international students, developing international partnerships, and promoting international research.

Interactions among agencies. The global agencies identified above are interacting with one another and extending their impact on various agencies in the national and local dimensions. This section explores these interactions and their influences on the case institutions and other global agencies. Among the various global agencies, global university rankers seem to have the most salient influence on the three case institutions.
As seen in the case studies, since global rankings received international attention, the institutions examined their rankings and explored possible ways to enhance their positions. These interactions between the global rankers and HEIs were actively going on both in the cases of UA, a highly ranked institution, and UB, a regional HEI. In the case of UC, the interactions had generated gradual changes in institutional initiatives. International students and associations of HEIs were global agencies interacting with global rankers and HEIs. Since the global rankings were the primary source of information international students employed for application, HEIs striving for the recruitment of global talents would make efforts to enhance their ranking positions and publicize their results. The tuition revenue gained from the recruitment was also important for most private as well as public institutions in South Korea. This interaction was notable in the cases of UB and UC which endeavored to invite more students from overseas countries, but not in the case of UA which maintained a strict admission screening policy on international applicants (A01). International associations of HEIs were also interacting with the global rankers and individual HEIs. The case institutions were participating in the associations such as NAFSA: Association of International Educators (NAFSA) and the European Association for International Education (EAIE). During conferences hosted by these organizations, representatives of the three case institutions got more information on global rankings from formal and informal interactions with their partner institutions or ranking companies, and explored new partners referring to the rankings (A02; 102 B07). Furthermore, some ranking companies organized international conferences inviting HEIs from different countries to promote their services and network building. The case institutions also took part in those conferences and interacted with the ranking companies and other institutions with similar interest (A02; B06). Finally, individual HEIs ought to be considered as global agencies exerting influence on a variety of agencies in global, national, and local dimensions. Although HEIs might be regarded as local agencies in their higher education contexts, their interactions with other global agencies extended influence at the global level. For instance, the case institutions made efforts to enhance their global standing as a response to the influence of rankings. Their institutional efforts were concentrated on producing globally recognizable research outcomes and internationalizing their education through international students/faculty, English-medium courses, and international partnerships. As seen from these examples, the HEIs were interacting with global agencies and their interactions were extended beyond the national scope. In this sense, the case institutions are global agencies playing an important part in shaping the global higher education system under the influence of global rankings. National Dimension National agencies. National-level agencies including the national government and domestic university rankers exerted influence on Korean higher education contexts in terms of the ranking phenomenon. Since South Korea has developed a quite centralized higher education system, the national government exercises strict control over the operation of HEIs in various aspects. 
The case institutions were required to participate in the government-led evaluations and showcase their performance to the government for financial support (A03; B01; C03). Yet, the influence of the government varied by institution based on their public or private status. Additionally, a few media companies in South Korea have published rankings of Korean HEIs since the mid-1990s. These domestic university rankers, often collaborating with the global university rankers, had an impact on HEIs and various stakeholders of higher education in Korea. Several networks of Korean HEIs were also shaping the way individual HEIs responded to the influence of university rankings.

Interactions among agencies. The national agencies are actively interacting with various agencies in the global and local dimensions and constitute a significant proportion of the Korean higher education system. First, the Korean Ministry of Education implemented a series of evaluations of the educational outcomes of HEIs, which employed several quantifiable indicators as criteria and assessed HEIs in a similar way to university rankings. The government also developed selective funding programs adopting global rankings as performance indicators for domestic evaluation. The national government's influence on this system was considerable throughout the three cases, but more nuanced for some cases. Among the case institutions, UB, with less budget flexibility than the other cases, seemed to be under greater pressure from the government evaluations and thus made efforts to satisfy these criteria. A staff member in the planning office explicated UB's perspective as follows: “The government-led evaluations, which have the most direct impact on our institution, would be more influential than other rankings. . . It is because of the financial support. Government-led evaluations are directly linked to budget, practically” (B01). In the cases of UA and UC, which had shown successful outcomes in the evaluations, institutional efforts were more concentrated on the government funding projects. A faculty member at UC illustrated the different levels of attention given to the governmental interventions as follows:

Big universities, like UC, do not pay attention to government-led evaluations in fact. However, there are some evaluations directly linked to funding projects. . . UC is more concerned with these projects. Attention is given based on the funding size. (C03)

The institutions actively participated in these funding projects, which called for “strengthening specialization, reforming departments and curriculum, and enhancing research capacity” on a competitive basis (Han et al., 2018, p. 371). These projects also facilitated the interactions between global university rankers and HEIs. The government requested the institutions to present their global rankings to showcase their levels of excellence and performance in the selection and evaluation process of these projects. The global rankings were widely employed to respond to the government's agency over the case institutions as “an only way to prove the accomplishments” (A04) and “evidence on how a university perform well with financial support” (A02).

Second, the media companies publishing the rankings of Korean HEIs domestically are other national agencies interacting with various other agencies at different levels. The influence of these domestic rankers on the case institutions seemed to vary according to their perceived status in the hierarchy of Korean HEIs.
UA, which maintained a high rank in the domestic rankings, would pay little/no attention to the domestic charts (A01; A02). UC had exerted uniquely comprehensive efforts to enhance its position in the domestic rankings and now focused more on global rankings after accomplishing significant improvement (C03). Unlike these cases, UB, ranked in the middle group, was struggling to improve the indicators used in the domestic rankings (B06). Although there exists a difference in how HEIs respond to the domestic rankers, they cannot be neglected because of their interactions with prospective students. A faculty member at UC explained that the domestic rankings were widely shared among high school teachers and students and used as a primary source of information for college application in 105 Korea (C04). To extend their scope of influence, the domestic ranking companies were interacting with the global ranking companies. For instance, two media companies in Korea in collaboration with global rankers such as QS or THE presented the global ranking results through their media outlets to compare the rankings of Korean HEIs (B07). These examples show how the domestic rankers interacted with other agencies and facilitated the interactions in the Korean higher education system. Finally, formal and informal networks of Korean HEIs are national agencies influencing the institutional responses to university rankings. Staff members in charge of rankings and evaluations at the case institutions frequently mentioned the informal networks connecting peer institutions in Korea (A02; B06; C02). These networks of staff across different institutions facilitated the exchange of information on rankings and relevant institutional initiatives. For example, a staff member of UB collaborated with staff of other national flagship institutions to conduct more extensive analysis of UB’s ranking scores (B06). The staff member explained this collaboration and competitive exploration among peer institutions as follows: “Once the ranking results are announced, UB compares them with other national flagship universities to see in which area it could not score highly. Then UB endeavors to improve the weak indicators” (B06). In the case of UC, the staff member managing the institutional data often interacted with members of other private institutions for benchmarking (C02). In the case of UA, the staff in the planning office also compared the ranking initiatives or data management practices of its peer institutions (A03). If networks are organized in a more formal way, the networks would actively interact with a wider range of agencies. For instance, UB had a membership in the association of national flagship universities. The HEIs within the network collaboratively held a conference on 106 how to improve the global rankings and directly interacted with the global university rankers to have a joint consultation on their performance (B06). Local Dimension Local agencies. In the Korean higher education system, local agencies such as Korean students and local communities are interacting with HEIs at the local level and other agencies at the different levels. At the local level, Korean students who apply for and study at specific HEIs are important agencies that can facilitate the interactions between HEIs and global/national university rankings. The local communities including people, organizations, and local governments are closely connected to HEIs and exercise influence over the glonacal higher education system. 
The faculty and staff members at the case institutions can also affect the institutional initiatives for the rankings to a great extent. Interactions among agencies. Compared to the influence of global and national agencies, local agencies seemed to have limited impact on the case institutions. Especially, in terms of students, the case institutions had no difficulty in recruiting undergraduate students because of their more secure position in the hierarchy of Korean HEIs and thus were not striving to reach out to prospective students. Staff members at UA and UB explained that their institutions had no urgent need for promoting their excellence to domestic students since students who could apply for and enrol in their institutions were determined already regardless of the rankings (A01; B06). Only UC which had made institutional efforts to improve its global and national rankings in a more strategic manner seemed to extend steadying influence on students through promotion of their ranking results (C04). Yet, even UC’s interactions with prospective students seemed to be limited compared to other institutions in Korea struggling with the fulfilment of admission quotas and endeavoring to increase the number of enrolled students on 107 campus (C04). In the three cases, the interactions with students were the most salient in the recruitment of graduate students. To achieve significant research accomplishments, the case institutions established various support programs to encourage more students to pursue the graduate degrees (B03; C03). Local communities seemed to have quite limited influence on the HEIs’ institutional efforts to enhance their rankings in the case institutions. From the interviews with the participants, it was difficult to identify any significant interactions with local communities. However, some participants explained that global and national agencies started to encourage HEIs to have more direct interactions with their communities. This was more salient in the case of UB, whose mission was closely related to service for local communities. UB emphasized its contribution to regional development as a means of improving their rankings (B02; B05). A staff member in the planning office illustrated the importance of contribution to the local community as follows: “Recently some government projects emphasized how a university contributes to the local communities. Local contribution is highly assessed. . .We need to help the community address the local issues and collaborate with the local governments” (B02). In addition to the government evaluations, UB actively participated in a new type of global ranking which assessed HEIs’ impact on local communities as well as global issues as an alternative way to showcase its excellence (B05; B07). A staff member discussed the criteria in this new ranking as follows: This ranking includes public access to university facilities, public events, preservation of local heritage, efforts for environmental. . . I think UB recognized its limitation on proving its excellence in quantifiable indicators used in the major rankings and focused on this ranking having more qualitative aspects instead. (B05) Thus, through the intervention of global/national agencies the interactions between the HEIs and local communities would be facilitated. 108 Finally, at the local level, interactions of the faculty at the case institutions with other global, national, and local agencies were more noticeable than the staff members. 
While staff members were more likely to focus on administrative tasks related to the rankings to accomplish the institutional goals across the cases, the faculty members seemed to react in several ways. Overall, the faculty in this study described an indifference to rankings within the faculty groups. “Typical faculty members at UA, frankly speaking, experience no pressure on rankings. Even if the domestic rankings go down sometimes, ‘Do they care?’ No, from my perspective” (A04). “I paid little attention to the rankings when I was not in an administrator position” (B07). “In fact, I have little interest in university rankings. The university uses them frequently in news media, but the faculty typically are not interested in” (C01). These quotes show that rankings scarcely generated strong interest among faculty members at the case institutions. However, some faculty members who recognized the prevalence of the rankings in the global higher education environment through frequent interactions with international researchers and students became more interested in them. An administrator explained, “Faculty in disciplines which actively get involved in international research seem to pay more attention. When the results were announced, some of them contacted the administrators to suggest ways to improve the rankings” (B07). Also, when faculty members were appointed as administrators in the central administration at the case institutions, they were usually striving to enhance the ranking positions and sometimes interacting with global/national rankers and global/national peer institutions (A03; B07; C03). Moreover, faculty at the case institutions were the most influential agencies in implementing the institutional initiatives for supporting research to enhance the rankings. In the cases of UA and UB, the participants mentioned the difficulty of establishing a new compensation system reflecting research performance, arising from the strong opposition of their faculty (A03; B06). A staff member at UC further suggested that UC was successful in fostering its target academic fields since there emerged a consensus among the faculty (C02). These examples elucidate how faculty play an important role as a local agency in shaping the institutional responses to the rankings.

V. DISCUSSION AND IMPLICATIONS

The purpose of this study was to examine the institutional practices implemented in response to university rankings by HEIs providing various academic programs in South Korea. This chapter offers a summary of the implications of this study's findings, putting them in conversation with the previous research on university rankings and the Korean higher education system. Implications for future research are also provided, along with the limitations of this study. The last part of this chapter is the conclusion of the study.

Discussion of Findings

This study is a multiple-case study (Yin, 2018) exploring how three four-year HEIs in South Korea responded to university rankings in close interaction with global, national, and local agencies of higher education. The research questions this study explored were (a) how the case institutions responded to global and national university rankings in the different areas of institutional practices and (b) how the local, national, and global agencies of higher education interact with the institutions to lead to specific practices in response to university rankings.
To address these questions from the qualitative data collected at the three case institutions, an extensive analytic framework was proposed and revisited to understand the characteristics of the Korean higher education landscape, the different areas of HEIs' practices (Hazelkorn, 2015), and the interactions among global, national, and local agencies of higher education (Marginson & Rhoades, 2002). The researcher explored various data sources, including in-depth one-on-one interviews with the informants and examination of the institutional websites, brochures, and policy documents published by the case institutions, as well as relevant news articles.

Landscape of Korean Higher Education amid Globalization

From the analysis of the interview and document data, this study offered a thematic exploration of the landscape of Korean higher education. One of the most remarkable themes observed from the data was the institutional efforts to strengthen global interconnection to survive in an increasingly globalized higher education environment. In this globalized world of higher education, university rankings (both international and domestic) were prevalent as an indicator of HEIs' competency, prestige, and international competitiveness. Korean HEIs, once taking peripheral positions in the global higher education conversation, gained international recognition with the improvement of both their global rankings and South Korea's national reputation. Despite these promising aspects, Korean HEIs seemed to face substantial global and domestic challenges. Since HEIs in Western or English-speaking countries were dominant in the global higher education system, Korean HEIs, occupying a vulnerable position in the system, had to outline and pursue the Englishization of education and research across multiple levels of the institution. Their efforts to succeed in the global higher education competition were accelerated by domestic challenges such as the decrease in both the school-age population and tuition revenues in South Korea.

This broad sketch of the South Korean higher education landscape serves as a guideline to understand how and why Korean HEIs respond to university rankings within their specific domestic contexts. The ranking phenomenon, influenced by multiple environmental factors which might vary across countries, needs to be examined by considering contextual information. Previous studies on the influence of the rankings were either too broad or too bounded in their analytic scope to explore the distinctive national environment shaping the phenomenon. Although this study embraced some of the concepts used in Lo (2014), which explained the ranking phenomenon in Taiwan as general features of East-Asian higher education contexts, the findings suggested that the Korean contexts were not fully explained through concepts like ‘Western hegemony’ and ‘English dominance’ derived from the center-periphery dichotomy (Altbach, 1987). Rather, the findings indicated that the globalized higher education system reflects existing global power dynamics and that Korean HEIs had to compete in the global market while navigating domestic challenges. The changes in the global higher education system and the characteristics of the domestic environment in relation to university rankings were more salient in the description of the Korean landscape than widely shared concepts like Western/English dominance.
This showed the importance of analyzing higher education phenomena through a more in-depth examination of specific contexts rather than generalization and categorization of circumstances. When contextual differences are examined successfully, they lead to a deeper understanding of the interactions among different agencies of higher education and their consequences.

UA, a Marathoner in the Ranking Race

Situating the three case institutions in the landscape of Korean higher education, the researcher thoroughly examined each case's institutional practices in response to university rankings. The first case institution (UA), a long-established top-tier national institution, paid little attention to its rankings and felt little need to improve its standing. The global rankings had not exerted great influence on institutional planning, goal setting, or academic programs at UA. However, UA attempted to bring gradual change to its organization and programs to gain success in the globalized higher education system where the rankings prevailed. The faculty and research initiatives of UA were also minimally influenced by global rankings. UA changed neither its faculty personnel system nor its incentive structure in any significant way to enhance its research performance, due to the strong objection from its faculty. Instead, UA invited more foreign faculty, hired globally renowned scholars, and initiated a research support project fostering several academic fields to improve its global rankings. UA's student support initiatives were more closely related to the rankings, for the international office set its goal to internationalize education following the ranking criteria. Finally, UA had been careful with the use of its global rankings for marketing, being dissatisfied with its performance.

UA's response to university rankings seemed to be significantly influenced by its top hierarchical position, maintained for a long time in the nation, and its considerable reputation in the global higher education system. The institutional practices UA showed were analogous to those of the British HEIs (Locke et al., 2008), which analyzed the domestic ranking results but scarcely employed them as central to their institutional agenda. For both cases, since their rankings were stable and historically developed, they had no urgent need to transform their practices to succeed in the rankings. Yet, even UA exerted efforts to enhance its global rankings in various ways, such as promoting international education and establishing a special unit for data management, as noted by Hazelkorn (2015). The influence of global agencies (i.e., global rankers and international peer institutions) was so crucial that UA was under pressure to prove its excellence by occupying higher positions in the global rankings than its national and international peers. Local agencies, such as its faculty and administrators, played an important role in implementing these initiatives at UA. These agencies sometimes facilitated and sometimes held up the initiatives to improve the rankings. However, there remained constant tension among the members of UA over these ranking-oriented initiatives. Since UA had exercised special privileges as a top-tier institution in the nation for a long time, it seemed difficult to expect greater levels of productivity or international engagement, for a better ranking position, from its members, who seemed complacent about the current domestic and international rankings.
UA, as a leading, highly ranked institution in the national and global higher education systems, recognized the growing importance of global rankings but was slow to transform itself to enhance its global ranking positions. In this sense, UA can be described as a "marathoner," running a long-distance race at a steady pace without sprinting toward its goal. In a marathon, participants sometimes run or jog in order to pace themselves over the long distance, persisting through emotional and bodily tension throughout the race. UA's continuous efforts for better global recognition, and the existing tensions among its members over institutional transformation, are similar to a marathoner's running experience.

UB, a Triathlete in the Ranking Race

The second case institution (UB) was an example showing how a HEI responded to different rankings at the global and national levels. UB was also an example of a mid-ranked HEI striving to enhance its rankings to compete with other prestigious institutions globally and nationally. UB was actively exploring possible ways to improve its position in numerous national and global rankings through organizational reform. Yet it was difficult for this institution, lacking consensus and flexibility in resources, to devote a unified, campus-wide effort to enhancing the rankings. To overcome these limitations, UB launched various initiatives for the purpose of improving the rankings, such as the establishment of a data center, involvement in joint consultation, and development of a new departmental evaluation system. The faculty personnel management and research support systems of UB were redesigned to reflect the criteria used in the rankings. Although UB encouraged its faculty to publish English-language articles in SCI-level journals and invited more international faculty members, many challenges, such as disinclination among faculty, remained. When it came to student-related initiatives, UB set its goals for internationalization based on what the rankers assessed, such as offering more English-medium courses, promoting study abroad, and recruiting more international students. For marketing, UB used its national and global ranking results in a selective manner to construct an institutional image appealing to broader audiences, including domestic/international students and the local community. UB's wide-ranging efforts to be successful in diverse national and global rankings aligned well with the institutional initiatives implemented at other mid-ranked Asian universities in Azman and Kutty's (2016) study. As a HEI ranked in the middle, UB seemed to be highly influenced not only by the national government's initiatives to improve the higher education system but also by the global rankings essential for interacting with global agencies such as international students and partner institutions. While setting its goals to gain success in the government-led evaluations for more funding, UB also implemented various initiatives to enhance its global rankings, similar to the HEIs examined in Hazelkorn (2015). UB, occupying prominent positions in neither global nor national university rankings, had difficulties securing human and financial resources under the challenging domestic environment. These circumstances made the case of UB distinct from the other mid-ranked HEIs examined by Azman and Kutty (2016).
Moreover, being a national flagship university representing a region, UB was expected to contribute more to local agencies such as local communities than the other cases. UB's varied efforts and challenges in interacting with global, national, and local agencies call to mind a triathlete who must complete swimming, cycling, and running races. Unlike the other case institutions, which focused on the race for global recognition, UB seemed to undertake demanding tasks imposed by global, national, and local agencies to accomplish success in the ranking race. Just as a triathlete trains to improve in all three disciplines, UB examined its performance in different rankings and devised specific plans to enhance its ranking positions. The race seemed more challenging and dynamic for this case than for the others because of the multiple tasks imposed by external agencies.

UC, a Sprinter in the Ranking Race

The third case institution (UC) showed more systematic responses to university rankings through its strategic planning and management compared to the other cases. As a private institution that had developed unique organizational features with solid support from a corporation, UC successfully transformed itself into a research-intensive institution recognized in the global university rankings. UC, informed by the global ranking criteria more than a decade ago, had made unified institutional efforts to accomplish its goals based on institutional data collected and analyzed systematically. Pursuing its vision to be a world-leading research institution, UC fostered specific academic fields and emphasized graduate education and program-level performance management based on the ranking criteria. The faculty appointment and compensation system at UC also underwent rapid changes to promote publication in international journals. UC encouraged its faculty's research productivity by providing quantitative information on research and a wide range of support. In terms of initiatives for students, UC strategically managed the English-medium courses required of faculty and invited more international students to campus for the sake of the rankings as well as tuition revenue. The ranking results were more widely employed in the marketing practices of UC than in those of the other case institutions. UC attempted to break the long-held perception of the reputational hierarchy of Korean HEIs by presenting its improved ranking positions through unconventional, strategic marketing practices. The case of UC was an example of what institutional efforts were needed to enhance an individual HEI's ranking positions. As discussed earlier, UC was one of the most interesting cases, attracting the attention of HEIs in many countries aspiring to improve their global reputation. UC seemed to implement almost all of the types of efforts HEIs were found to make for their rankings in previous empirical studies (Azman & Kutty, 2016; Hazelkorn, 2015; Lo, 2014). In the ranking race, UC could be described as a sprinter, whose race requires moving rapidly and at high intensity over a short distance. Compared to the other case institutions, which showed less unified efforts for the rankings, UC set clear goals connected to the ranking indicators and concentrated its resources to accomplish them within a short period of time. Data-based management and campus-wide consensus on reform were the primary keys to UC's success in the ranking race.
UC could channel all its energies into the transformation of its institutional practices, like a sprinter heading toward the finish line. However, as seen in the other cases in this study, only a few HEIs could follow the example of UC, since the support and guidance of the corporation seemed to be the driving force behind the institutional reform at UC for success in the rankings. The case of UC suggested that HEIs that managed their quantifiable performance efficiently and secured resources to invite global talent could become sprinters accelerating to top speed in the ranking race.

Three Cases in the Glonacal Higher Education System

The findings of the independent case studies were combined and examined together based on a glonacal agency heuristic (Marginson & Rhoades, 2002), devised and revisited to explain the HEIs and the landscape of Korean higher education. On the global dimension, global university rankers, university associations facilitating internationalization, and international students exploring study abroad destinations acted as global agencies with significant influence on HEIs participating in the global competition for resources. All the case institutions in this study were under the great influence of these global agencies, though to varying degrees. On the national dimension, the Korean government, domestic university rankers, and networks of Korean HEIs were shaping the practices of the case institutions in response to rankings. While the three HEIs were all concentrating on securing financial support from government funding projects, UB focused more on the government-led college evaluations than on global/national rankings. Except for UA, a top-tier institution in the nation, the case institutions were influenced by domestic rankers collaborating with global rankers. Institutional practices were shared through the informal and formal networks of Korean HEIs. Within the local dimension, prospective students in Korea could exert influence on the ranking marketing of UB (for graduate recruitment) and UC (undergraduate and graduate recruitment). In the case of UB, the influence of local communities was salient because global and national agencies started to expect more contribution toward regional development. Also, the faculty at the case institutions played important roles in implementing the initiatives to enhance the rankings across the different cases. The analysis of the complex higher education system consisting of global, national, and local agencies in this study provided a more comprehensive view of the ranking phenomenon in the Korean context, particularly from the perspectives of individual institutions. Previous studies on the global higher education environment and university rankings were more likely to provide a broad and brief overview of global trends, national initiatives, and institutional responses, focusing on their similarities. Therefore, it was difficult to get a clear sense of why the HEIs implemented such initiatives and under what influences from higher education agencies at the global, national, and local levels. Subtle differences in institutional practices among HEIs, which would lead to competing interpretations, were often veiled by the emphasis on explicit commonalities in the ranking phenomenon. As seen in the cases of this study, although the three HEIs seemed to pursue excellence in the rankings through similar initiatives, they showed substantial differences in their practices.
For example, they had different goals, expectations, and challenges while pursuing these practices, experiencing different degrees of pressure from various agencies of higher education. That is why the case institutions' responses to university rankings cannot be generalized or integrated without fully considering the characteristics of individual HEIs with different goals, organizational structures, and access to resources. To adapt to the different environments they face, which are shaped by various agencies, HEIs need to interact with those agencies of higher education to different extents. The extent of these interactions determined how quickly and extensively HEIs responded to the rankings and whether the responses brought actual change on campus. In this sense, the glonacal agency model employed in the study was a useful lens to examine the internal aspects of institutions actively interacting with agencies in the extensive higher education system and the consequences of those interactions.

Implications

Implications for Glonacal Higher Education Agencies

The findings of this study showed how three Korean HEIs oriented themselves toward measurable success (e.g., rankings) in an increasingly globalized higher education environment, whilst competing domestically for an increasingly small pool of financial and human resources. The case institutions in this study sought to enhance their recognition in the global higher education system by changing their performance in the various areas the rankings assessed. The glonacal analysis of the cases allowed a deeper-level exploration of the institutional practices for enhancing their ranking positions. From this expanded perspective, these efforts for rankings are an inevitable adaptation to the global context in which Korean HEIs were compelled to pursue some semblance of globally dominant universities (mostly highly internationalized research universities in English-speaking countries) to survive in the global market. These efforts were also bolstered by national and local pressures, such as the selectiveness of the Korean government's competitive higher education support programs and the rapid decline in the domestic student population. To be successful in this complex system, the case institutions, as both global and local agencies, were actively interacting with various other global, national, and local agencies. The position of Korean HEIs in this complicated, marketized higher education system seemed far from favorable and rather vulnerable. Although this study started by questioning the impact of rankings in transforming HEIs in similar ways, Korean HEIs seemed to have little choice but to accept and cope with the unstable, challenging situation without critical examination of the rankings or the changes in their environments. Since the rankings subtly permeated the global, national, and local systems of higher education, pursuing excellence in the ranking indicators became the best way to become an ideal university with quality education and research, succeeding in the global competition for human talent (Dill, 2009). Under these circumstances, the HEIs employed somewhat similar institutional initiatives operated by the price mechanism (Brown, 2015), such as incentivizing research activities, inviting more international students for revenue, and purchasing opportunities to interact with ranking companies, based on the knowledge shared among their global, national, and local agencies.
These findings suggested that exploring the entire system of higher education from the perspectives of HEIs would be a primary task in better understanding a higher education phenomenon like the ranking competition. Moving forward from this exploration, this study also calls for more proactive roles from glonacal higher education agencies, including HEIs, governments, university associations, researchers, and ranking companies, in examining and addressing the ranking phenomenon. It is essential for these agencies to acknowledge that there is limited evidence showing that the transformation of HEIs toward enhancing their rankings eventually leads to desirable changes for education. In particular, for HEIs in various countries aspiring to enhance their ranking status, this study conveys a brief but meaningful message: understanding the challenging glonacal higher education system shaped by the rankings needs to come before uncritically participating in the global ranking competition.

Exploring the Past, Present, and Future of the Ranking Phenomenon

This study set out to explore the consequences of the ranking phenomenon after almost a decade of rankings dominating the global higher education conversation. To accomplish this goal, the researcher planned to compare what was found in the studies of the influence of rankings on HEIs conducted a decade ago with the findings of this study. As discussed earlier, making a detailed comparison among studies with different designs and contexts was a challenging task. Despite these difficulties, it was possible to broadly trace the process and consequences of the ranking phenomenon from the findings of this study. The following themes were most salient during the data collection and analysis process. First of all, the influence of university rankings on HEIs seemed to have been significant for the past decade. The HEIs in this line of study underscored the importance of the rankings in the global higher education system and the challenging circumstances that drove them to respond to the rankings. While previous studies presented criticisms of the methodology of rankings and alternative ways to evaluate HEIs (Altbach, 2012; Kehm, 2014), the case institutions in this study focused more on surviving in the global higher education system shaped by the rankings. They implemented a variety of initiatives to enhance their ranking status. One interesting finding related to these institutional efforts was that the types of initiatives established by the HEIs did not differ substantially from those employed by the universities in the 41 countries examined in Hazelkorn's studies (2007, 2008), conducted more than a decade ago. This similarity in institutional practices seemed to come from the simplicity of the ranking criteria, which became more salient after the global rankings, which use simpler indicators applicable to HEIs around the world than national rankings do, started to prevail. HEIs had been implementing similar initiatives aiming to improve their research capacity, international collaboration, and educational environment as assessed through these simple indicators. Furthermore, the findings of this study seemed to indicate that the rankings' influence had extended to broader aspects of societal contexts compared to previous studies examining HEIs in various national settings.
While the previous studies showed the rankings' influence primarily on institutions (Hazelkorn, 2007; Locke et al., 2008) and on faculty members' perceptions (Lo, 2014), the participants in this study emphasized how the rankings prevailed throughout the entire global higher education system and even in non-education areas, such as employment or residency applications, in the globalized environment. This suggested that, as global interdependence increased, the rankings had permeated more nuanced dimensions of societies in various national contexts, not to mention higher education, than when they started to be widely studied a decade ago. As Erkkilä and Piironen (2018) mentioned in their study, university rankings were now more of a social practice influencing not only education but also other societal areas such as policy, media, and business. The findings of this study reinforce this observation of the growing influence of rankings on societal dimensions. This broad comparison would give some insight into the past and present of the ranking phenomenon, as well as a look into its future. The findings of this study suggested that pursuing excellence in university rankings became an important priority for HEIs and that the rankings' influence extended beyond the scope of higher education compared to past decades. This meant that the rankings succeeded in proliferating in the global higher education system despite their reported limitations and the ample criticisms cited for over a decade. Considering the growing interactions among HEIs, rankers, and students in the globalized environment, the ranking phenomenon is likely to become more widespread and universal in various institutional and national higher education settings in the future. However, the long-term consequences of the ranking phenomenon, namely how university rankings transform higher education and whether the changes will be positive or negative, cannot be predicted at this point. In this regard, this study calls for more attention to the far-reaching consequences of the rankings in the future, especially in terms of how HEIs' efforts centered on improving the rankings' simple indicators affect the higher education system and the glonacal agencies.

Implications for Higher Education Research

This multiple-case study makes a valuable contribution to the field of educational administration's knowledge of global higher education environments, university rankings, and individual HEIs' practices, as well as to higher education research. The implications of this study for higher education research were most noticeable in the following aspects: recognizing individual HEIs' perspectives, calling for a more expansive theory, and providing a methodological example of a multiple-case study. First, this study, examining multiple HEIs with different characteristics in South Korea in depth, presented more elaborate descriptions of how and why different HEIs responded to university rankings in close connection with the landscape of Korean higher education. Through this elaborate analysis of the cases, it was possible to explore the differences in the HEIs' understanding of the rankings and in their institutional efforts to transform their operations in response. The HEIs in this study were active individual agencies of higher education working across the global, national, and local levels, not passive constituents of the higher education system. By including voices from more diverse members of HEIs beyond faculty, differing HEI approaches were made clear in this study.
These findings suggested the importance of a more in-depth analysis of individual HEIs focusing on their characteristics rather than concentrating on the similarities existing in institutional practices. Second, this study, based on the glonacal agency framework to explain the complex higher education environment, calls for more expansive frameworks to examine various higher education issues. In reviewing previous research on university rankings, it was difficult to find an appropriate conceptual framework to describe the ranking phenomenon in a Korean context. The frameworks employed in previous studies, such as the four-dimensional framework (response, acceptance, tool, and implications) and the reactivity/quantification of accountability, seemed either too simple to explain the entire context or too abstract to embody the essence of the phenomenon. Also, these frameworks worked well within national or institutional boundaries but not beyond these levels. To address these limitations, the glonacal agency heuristic was adopted to capture the interactions among agencies at the different levels, and six areas of institutional practices were added to investigate the institutions. Although developing a framework by integrating pre-existing concepts was a challenging process in this study, this expansive framework offered new insights into HEIs' efforts to enhance rankings in the globalized system, beyond national and institutional boundaries. This suggested that higher education researchers need to continue their efforts to see existing issues from a more extensive perspective and to explore suitable frameworks capturing these complex systems. Finally, in terms of methodology, this study provided an example of studying multiple HEIs, especially in the South Korean context. Exploring multiple HEIs within a limited period of time is a challenging task for researchers. The completion of this study was all the more meaningful in that it was done during a global pandemic, when in-person interactions with study participants were seriously restricted. The entire process of this research suggested that there were numerous tasks and issues to be considered and addressed in conducting a multiple-case study on HEIs. The case study protocol and research notes developed in the process, thus, might be a helpful guide for future researchers by providing detailed explanations of research procedures, examples of expected challenges, and their possible solutions, including the decision-making criteria used during the research process (see Appendices B and C). In particular, the research notes included elaborate descriptions of both the internal challenges in the research design process and the unexpected external challenges emerging during the pandemic. This information is relevant for higher education researchers who are likely to struggle with internal challenges in building a robust study and external challenges in higher education.

Limitations of the Study

Despite these contributions to higher education research, this study also had some limitations. First, the limited number of participants and case institutions led to only a partial understanding and description of the ranking phenomenon in the Korean higher education context.
Although the researcher strove to include participants in different administrative units and to select multiple institutions with different rankings, geographic locations in Korea, and operational control, accessibility to research sites and participants played the most important role in the selection process. The pandemic also had a negative influence on this process by restricting access to the sites and in-person interactions with the participants. Some participants were unwilling to participate in video interviews due to the pandemic's rapid shifts in work schedules. It was also difficult to have deep-level conversations with participants about this topic without the rapport that could have been more easily built in face-to-face interactions. Second, the institutions' unique and complex power flows made it difficult to trace all the agencies of higher education interacting within the broader glonacal system. Although this study attempted to explore various agencies at different levels shaping the ranking phenomenon, there still existed agencies interacting with others in this system that were not easily noticeable from the interviews and documents. For example, one of the participants in this study mentioned the difficulty of reporting institutional data to the OECD based on the instruction and management of governmental oversight agencies. This interaction seemed to align with the interactions between global agencies and HEIs investigated in the study. However, this type of interaction could not be explored further because no additional evidence was found throughout the interviews. If this study had included a wider range of participants and cases in a multiplicity of other institutional settings, a more extensive view of the glonacal system could have been offered. Finally, the data presented in this study are unlikely to have fully captured participants' perceptions and intentions, or the nuances of institutional producers, due to the differences in languages. The researcher collected and analyzed data mostly spoken and written in Korean. Although these data were translated into English with the researcher's careful consideration, there might have been more accurate translations and presentations of the interview data. Despite this limitation, the interviews, conducted in the language most familiar and convenient for both the participants and the researcher, provided a more complete description and in-depth understanding of the phenomenon.

Suggestions for Future Research

These limitations open up new possibilities for future research on university rankings and individual HEIs in various contexts. Considering the limited number of case institutions and the complexity of the higher education environment at the global, national, and local levels, more expansive studies examining university rankings and other higher education issues in various contexts would be essential to enrich our understanding of the globalized higher education environment and the agencies of higher education.
The following items are possible research questions higher education researchers might pursue in the future:
(a) how university rankings influence the institutional practices of HEIs in specific national settings (in particular, countries with developing higher education systems);
(b) how institutional characteristics (including but not limited to specialization, history, organizational structure, culture, size, operational control, and positions in rankings) influence a HEI's or HEIs' responses to university rankings;
(c) how global, national, and local agencies of higher education interact with one another in terms of an emerging higher education phenomenon within a higher education system, such as the rise of online learning and the restructuring of higher education;
(d) what needs to be considered in studying HEIs in different global, national, and local settings, such as developing an expansive framework capturing the interrelations among the settings and exploring the power imbalances existing in the different systems.

Conclusion

Despite the growing number of academic examinations of university rankings, there have been few attempts to examine their influence in different institutional and national contexts. The Korean HEIs in this study were all running toward the goal in the race for excellence hosted by university rankings. The rankings, permeating the marketized global higher education system, exerted significant influence on the Korean higher education system. If these were the general findings that could be revealed by previous studies on the rankings, this study offers fresh insights into the ranking phenomenon by exploring specific institutional and national contexts. The following sentences represent the different understanding of the phenomenon in this study. The three Korean HEIs in this study were joining the ranking race in quite different ways: one running within the leading group at a steady rate, another completing cycling and swimming races before running, and the other running at top speed toward the goal. The Korean higher education system was shaped by the interactions among various agencies, including the national government, global/national rankers, associations of HEIs, HEIs themselves, and their stakeholders. This difference brings subtle but beneficial changes in examining and addressing higher education issues in various contexts. In this regard, this study presents a meaningful attempt to introduce a desirable change in Korean HEIs as well as in the global higher education system reshaped by university rankings.

APPENDICES

APPENDIX A: INTERVIEW PROTOCOL

RQ1 | All | General
- Please tell me a little bit about your role and experience at your institution/office.
- Can you tell me a little bit about your experiences with the rankings?

RQ1 | All | Reaction
How do the university rankings influence your university?
- When the ranking results were announced, what did you/your office usually do?
- What do you think about the university rankings?
- To what extent do the rankings influence institutional practices from your perspective?
- In your role, what types of influence did you experience?

RQ1 | Adm./Staff from International Office/Admissions Office | Students
How do the university rankings influence your office?
- How do rankings influence programs/initiatives related to admissions?
- How do rankings influence student international programs?
- Can you tell me a little bit about the influence of the rankings perceived in the international education environment?
- Why do HEIs pursue the rankings?
- What scholarship programs does your university offer to recruit high-achieving students from overseas?

RQ1 | Faculty, Staff from Research/Academic Office | Faculty
How do the rankings influence the faculty work?
- Does the faculty contract reflect the criteria of the rankings?
- How does your university manage faculty achievements?
- What is emphasized at your university in terms of faculty achievements to enhance university rankings?
- Can you tell me a little bit about the faculty hiring process?
- Can you tell me a little bit about foreign faculty?

RQ1 | Faculty, Staff from Research/Academic Office | Research
How do the rankings influence the research policies?
- What is emphasized at your university in terms of research to enhance university rankings?
- How does your university promote research?
- What kinds of initiatives are developed and implemented to promote research?

RQ1 | Adm./Staff from Public Relations/Marketing Office | Marketing
How do the rankings influence the marketing or branding of your institution?
- To enhance reputation in the rankings, what does your university do?
- How does your university advertise itself locally/nationally/globally for university rankings?
- What types of institutional accomplishments are mostly emphasized and advertised to enhance the rankings?
- Why is your university not using the ranking results for marketing?
- Who determines or influences the ranking marketing?

RQ1 | Adm./Staff from Planning/International Office | Organization
How do the rankings influence institutional organization?
- Have you experienced any changes in organization for rankings?
- Are there any new departments or offices to address ranking-related tasks?
- Are there any updates in facilities related to rankings?
- Does your university have an office or team for strategic planning for rankings?
- Does your university employ the ranking criteria and results for planning and management?

RQ1 | Adm./Staff from Academic/Planning/Research Office, Faculty | Curriculum
How do the rankings influence the curriculum?
- What disciplines are highlighted at your university in consideration of the rankings?
- Are there any programs changed based on the results of the rankings?
- Does your university make special efforts to manage the quality of education based on the ranking criteria?
- Does your university change the resource allocation by department reflecting the ranking results?
- How does your university employ the rankings in the operation of programs?

RQ2 | All | Glonacal Agency
- How do the global rankings/overseas institutions influence your practices?
- How do the national rankings/government policies influence your practices?
- How do the local communities/peer institutions influence your practices?
- What types of external pressures do you think are the most influential on your institution's practices for the rankings?

APPENDIX B: CASE STUDY PROTOCOL

Background
- Exploring problem(s) linked to the topic and identifying the problem addressed in the study
- Identifying previous research on the problem (exploring contents, contexts, framework, and methodology)
- Formulating a theoretical, conceptual framework for the problem
- Defining initial research question(s) addressed in the study (can be revisited)

Design/Procedures
- Determining whether the case-study design is appropriate for the study (examining the scope, process, and methodological characteristics of the study)
  - Is the study an analysis of a bounded system (the case)?
  - Is the context/environment crucial to understand the case?
  - Is the study finding based on an in-depth and comprehensive investigation?
- Identifying the characteristics of the case study based on the purpose and scope
  - By purpose: Descriptive, exploratory, explanatory, illustrative, or evaluative case study
  - By scope: Single or multiple-case study
- Identifying research methods used to address the research questions
- Figuring out the overall procedures of the study and preparing for each procedure
  - Institutional Review Boards approval
  - Data collection: Instruments (such as interview protocols), consent forms, recruitment flyers, compensation for participation, methods, and recording
  - Case analysis: Within-case study, cross-case study, and analytic software
  - Review and revision of the framework/methodology
- Developing a flexible schedule plan to complete each procedure

Case Selection
- Setting the specific boundaries to define the case(s) in the study
- Establishing the criteria used to select case(s): Informed by previous research, considering the research questions, access to sites, and alternative choices
- Selecting the case(s) the researcher explores based on the developed criteria

Data Collection
- Identifying the data collected for the study: Using multiple sources of data (triangulation) to increase validity
- Defining the criteria used to collect the data
- Updating instruments used for the study such as interview protocols
- Establishing a data plan to collect, store, and process the data
- Collecting the data based on the developed plan

Analysis
- Identifying the criteria for the interpretation of the findings
- Starting analysis from the initial stage of the data collection
- Providing descriptive, explanatory data, and individual case reports
- Providing a cross-case analysis based on the individual case analyses
- For qualitative analysis, developing an initial list of codes informed by the framework and previous studies

Validity and Reliability
- Defining validity and reliability concerned in the study
- Explaining what efforts are exerted to increase validity and reliability

Study Limitations
- Explicitly stating the limitations of the study
- Adding explanations on how to address the limitations

Note: The contents and organization of this protocol were informed by Brereton et al. (2008), Harrison et al. (2017), and Pervan & Maimbo (2005).

APPENDIX C: RESEARCH NOTES

This document is composed of brief research notes the researcher took during the process of the research. These notes, describing the research progress, emerging issues, and solutions adopted for the study based on the case study protocol, would offer insights into how to conduct multiple-case studies especially in higher education settings.

Background
- (September 2019) I explored the problem of the university rankings in the higher education environment and the importance of this study. First, I thought that exploring the influence of the rankings on HEIs in Korea would be necessary, for there was no similar study focusing on Korean contexts, which seemed to be a problem. After discussing this matter with colleagues, I realized that the issue addressed through this study could not simply be the lack of literature. The problem should be more like 'real-life', 'practical', or 'actual' issues of the higher education environment. So, I re-examined the purpose and importance of this study.
- (September 2019) I made some initial research questions for the study.
After examining these questions for weeks, I could not see any problem or error in the questions. I thought I could go with these questions throughout the study without major revisions. [But in fact, the initial questions went through a significant transformation based on the feedback from my colleagues. For example, I used the term global university rankings, but my colleagues reminded me that my analysis was also related to national-level rankings. Therefore, I refined the terms and added more details.]
- (September 2019) I thought reviewing previous studies on the rankings would not take long since I had explored various relevant studies for the past two years. But over the past 10 years, numerous articles and books on the rankings had come out. It was challenging to decide which to include and exclude. How to organize the literature review section was a different matter. So, I categorized the literature informed by previous meta-analyses and concentrated on studies about the influence of the rankings on HEIs. For these studies, I carefully examined the contexts, frameworks, and methodologies.
- (October 2019) I formulated a theoretical, conceptual framework to understand the rankings' influence on HEIs in Korea. Previous studies adopted various concepts to explain the ranking phenomenon. I thought these frameworks were bounded within national higher education systems. I explored several models used in higher education studies that could provide more expansive perspectives. After discussing it with my advisor and colleagues, I decided to use a glonacal agency heuristic. But some suggested additional tools to explore the practices of HEIs rather than the overall contexts. So, I adopted six areas of institutional practices used in previous studies on the rankings.

Design/Procedures
- (October 2019) After I determined the research topic, I had been exploring possible research designs and methods. A case study design seemed most appropriate to explore how universities react to the rankings. It is a study of a bounded system that is under significant influence of the contexts, conducted through an in-depth/comprehensive investigation. For a nuanced level of understanding of the case, I chose the interpretivist perspective and qualitative methods. At this stage, previous literature on educational research design and methodology was widely employed. It was difficult to decide which case study approach would be used since the definitions and interpretations of the case study design varied across different scholars.
- (October 2019) I examined the case study design books and attempted to identify the purpose and scope of the study. This study will be descriptive and exploratory at the same time. To get a more extensive view of Korean higher education, I think exploring multiple cases would be necessary. I have a vague idea of how many HEIs are included and which HEIs can be accessed.
- (November 2019) I identified the research methods used for this study. The research question is how HEIs respond to university rankings. To address this question, I think listening to the voices of various members of the case institutions would be great. At first, I was thinking of including students attending the universities to ask about their opinions and understanding of the rankings. But my colleagues suggested that faculty, administrators, and staff would be the best interviewees who could understand the institutional practices.
For an in-depth, comprehensive analysis, various data sources, including news articles, institutional documents, and websites, would be necessary. I checked if there might be some online resources available publicly.
- (November 2019) I spent considerable time figuring out the overall procedures of the study and the tasks required to conduct this study.
  - First, I had to provide a detailed description of how to select cases, collect data (recruitment, recording, compensation, consent), and analyze data for the proposal.
  - Based on this description, I developed initial interview protocols used to interview the participants. The questions were primarily coming from the research questions and conceptual framework. One challenge I had was that I prepared too many structured questions asking about various institutional practices across different areas. Also, the questions were too straightforward and sometimes too complicated. I revised them over time and added some questions to build rapport with participants. [In practice, about 40 percent of the questions were used frequently. I updated protocols based on the circumstances.]
- (November 2019) I devised ways to conduct case analyses more efficiently. Informed by other multiple-case studies, I planned to conduct within-case studies for individual cases and a cross-case study comparing and contrasting the cases.
- (March 2020) After establishing the initial research plan, I prepared for the Institutional Review Boards approval at Michigan State University. Since this study qualified for exempt review, an application form including its basic information, a consent form, an interview protocol, and recruitment flyers (with Korean versions) were required. I expected the entire process to be completed within 10 days as usual. But due to the pandemic, all interpersonal research activities were suspended without prior notice. The review process underwent significant delay. I had to confirm changes to my research plan to avoid any type of in-person interaction. The university also restricted research-related travel. Although I got approval from the IRB, it was difficult to figure out how to proceed further in this changed condition.
- (January 2020) I established a tentative schedule for this study. Most of the research activities went according to the schedule. But scheduling and conducting interviews took a longer time than I expected. Numerous external and internal challenges emerged. It seemed essential to allow some flexibility in developing a research schedule.

Case Selection
- (November 2019) I decided to focus on individual universities as the cases in this study. To select the cases (HEIs in South Korea), I established a series of criteria based on previous studies on university rankings and HEIs in general. Among the various studies, Lo (2014) was quite helpful in this, for it was a multiple-case study of the implications of rankings for Taiwanese universities. Although it was helpful, I needed to make efforts to adopt, refine, and improve the criteria to suit my study settings.
- (February 2020) I could not finalize the case selection criteria or the possible case universities until the proposal presentation. The committee members reviewed the criteria and the list of universities. In selecting the cases, access to sites would be more important than other criteria. Also, I considered whether the case universities would provide insights for higher education research for the future publication of the findings.
The case of UC was included for this purpose. I selected three universities in Korea with different ranking positions, operational control, and locations. Replication logic was also applied to compare various cases. I was also thinking of alternative case institutions to include if access was denied in practice.

Data Collection
- (November 2019) Working on the research proposal, I identified the types of data sources employed for the study. To use multiple sources for an in-depth investigation of the cases, I planned to conduct one-on-one interviews, explore news articles, and examine institutional websites. I attempted to do some pre-exploration of the news articles/websites of some universities in Korea. This task was helpful to figure out how to access the data and what types of data would be available.
- (March 2020) After completing the case selection, I started to recruit participants. One of the biggest challenges I faced was the restriction on research activities due to the pandemic. Even last month, I was planning to visit South Korea in May and complete all the interviews in person over a month. I thought this plan was feasible, for I had built interpersonal connections with many staff/administrators in two of the case institutions. But both in-person interactions with participants and research-related travel were suddenly restricted. Furthermore, universities in Korea (to say nothing of other countries) were struggling with switching to distance learning to prevent the spread of COVID-19. It was almost impossible to request staff, administrators, and faculty to participate in a study not directly related to their current challenges. So, I just started to collect documents and website data.
- (May 2020) Since the spring semester was postponed for a month on most of the campuses in Korea, recruiting participants seemed still difficult even in May. Universities seemed to be working on newly emerging challenges such as supervising exams either in person or online, managing classrooms safely, and supporting international students through the 14-day mandatory quarantine. I got in touch with some of my colleagues working at universities in Korea and asked about their current conditions.
- (June-August 2020) As the semester drew to an end, I planned to start recruiting participants and conducting interviews during the vacation. I contacted three colleagues working at UA. Considering the institutional culture and working environments based on my previous experience, I thought it would be better to contact key informants and ask for their help in recruiting participants than to send emails/flyers to encourage participation. Recruiting participants at UA was straightforward since the key informant was a senior staff member having vast connections with several offices. As seen in Figure C1, via the key informant (A73) I could reach out to most of the interview participants at UA.

Figure C1. A Flow Chart of Sampling at UA. Note: The boxes with thicker border lines represent the interview participants, while the other boxes represent contact persons who introduced them to the researcher.

- The first virtual interview was in early August, and the final one was completed in September. After conducting the interviews, I updated the interview protocol based on the new information I got from previous ones. Before starting the interviews, I explored the institutional practices of UA through documents and added some questions to ask for more details.
All the interviews were recorded on the designated device. I made efforts to transcribe them by myself right after the interviews in order to take note of the nonverbal communication and initial impressions that I experienced during the interviews.
- (August-October 2020) While conducting interviews with the participants of UA, I recruited participants at UB. Before starting the interviews, I examined UB's documents related to its institutional initiatives to update the interview protocols. From the annual self-evaluation reports of UB on its website, I found that UB's institutional goals were developed rather to perform better in the government's evaluations. Therefore, I added some questions about national evaluations and the priority among the rankings. From the first case, I realized that interviewing staff in the planning office would be essential to overview the ranking-related initiatives. Therefore, the first interviewee I met at UB was a senior staff member at the data management center in the planning office. Since I was acquainted with many staff members at UB, sampling at UB was based more on direct personal relationships, as seen in Figure C2. For this reason, it was quite challenging to set up virtual interviews and stick to the pre-developed protocol. Yet, I could complete all the interviews within three weeks. They were all very supportive and ready to share their experiences. I heard many interesting stories about the rankings from them.

Figure C2. A Flow Chart of Sampling at UB.

- (October-December 2020) The third case institution was the most unfamiliar one for me. I heard stories of UC from the participants of UA or UB, for its rankings had improved greatly. Unlike the other cases, which I had worked with, I was not sure how to reach out to staff, administrators, and faculty at UC. Recruiting participants was more challenging than in the other cases. I requested several faculty members working at other universities to reach out to faculty members at UC. Two faculty members were eager to help me recruit participants. Yet, for more than three weeks, I could not recruit any participants. UC's institutional culture seemed quite different from that of the public institutions. Participating in external research projects and sharing institutional strategies/initiatives for rankings with others seemed to be almost prohibited at UC. Some explained that the compensation for participation in this study was not sufficient. Encountering this barrier, I compromised on the number and range of participants at UC. Although I was thinking of exploring alternative cases, I could not exclude UC, for most of the participants I had met so far emphasized UC's accomplishments in rankings. Luckily, I could reach out to faculty members from various disciplines and a senior staff member working in the planning office at UC. It was difficult to set up interview times, for the participants seemed to have quite demanding schedules. The interviews, starting in October, were completed in late December. Figure C3 shows the flow of interactions for the sampling at UC.

Figure C3. A Flow Chart of Sampling at UC.

Analysis
- (November 2019) I identified the criteria for the interpretation of the findings and briefly explained them in the proposal. I planned to categorize the excerpts systematically to analyze the patterns. The initial categories were informed by the framework I developed.
- (June-December 2020) The analysis process started as the data collection began.
From website content to interview transcripts, I read the collected data multiple times and wrote emerging themes in research memos. This was quite helpful both to set the direction for the remaining research process and to identify patterns in the data for the analysis. The following items were examples of the memos: 'Rankings as certifications to enter the international market, quite essential (Office of International Affairs, UA)', 'Universities in Singapore, Hong Kong prevail, a role model of universities', 'National evaluation matters at UB', 'UB does its best for the rankings but no gains', 'UC's strategic, systematic approach to institutional reform', and 'UC, quite different from other cases'. After each interview, I transcribed the recording by myself and explored the emerging themes and issues to examine further in future interviews. Reading the data repeatedly, I could find new themes and patterns which had not been noticed during the interviews.
- (January 2021) After the data collection and preliminary analysis were completed, I started the primary analysis of the data based on what I had found from the previous research activities. The analysis process was both inductive and deductive. I attempted to categorize the data based on the initial categories developed from the framework deductively and also to explore the data to identify new patterns/concepts that were not explained by the framework inductively. For a more systematic analysis, I used NVivo version 11. Although it offered quite limited functions for data written in Korean compared to other languages, I could categorize the data by the newly developed categories using this software program.
- (January-February 2021) From the data analysis, I wrote up the findings to provide individual case reports including descriptive, explanatory data. Also, I provided a cross-case analysis by comparing the individual cases. The entire process took a significant amount of time.

Validity and Reliability
- (November 2019) I defined validity and reliability as concerned in the study, primarily informed by previous studies. There were various perspectives on and interpretations of how to increase validity and reliability in qualitative studies. I explored and selected what seemed most suitable for my research design.
- (February 2020) After discussing the definition of validity and reliability in this study with colleagues, I updated it to reflect the nature of the interpretivist perspective. At first, I thought I should provide detailed descriptions of the research design/procedures to enable other researchers to get the same results by adopting them. Yet, I realized that the underlying assumption was incorrect. No one sees the cases in the same way as I do. What I need to do is just to provide a detailed case study protocol that can guide future case studies on HEIs.

Study Limitations
- (November 2019) I explicitly stated the limitations of the study and added explanations on how to address them. I updated this part after the data collection.

REFERENCES

ARWU. (2011). Academic ranking of world universities 2011. Retrieved December 30, 2020 from http://www.shanghairanking.com/ARWU2011.html ARWU. (2019). Academic ranking of world universities 2019. Retrieved December 30, 2020 from http://www.shanghairanking.com/ARWU2019.html ARWU. (2020). Methodology. Retrieved April 28, 2021 from http://www.shanghairanking.com/ARWU-Methodology-2020.html Altbach, P. G. (2012). The globalization of college and university rankings.
Change: The Magazine of Higher Learning, 44(1), 26-31. Altbach, P. (2015). The costs and benefits of world-class universities. International Higher Education, 33, 5-8. Altbach, P. G., & Knight, J. (2007). The internationalization of higher education: Motivations and realities. Journal of Studies in International Education, 11(3-4), 290-305. Azman, N., & Kutty, F. M. (2016). Imposing global university rankings on local academic culture: Insights from the National University of Malaysia. In M. Yudkevich, P.G. Altbach, & L.E. Rumbley. (Eds.). The global academic rankings game (pp. 57-78). New York: Routledge. Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544-559. Bowden, R. (2000). Fantasy higher education: University and college league tables. Quality in Higher Education, 6(1), 41-60. Brereton, P., Kitchenham, B., Budgen, D., & Li, Z. (2008, June). Using a protocol template for case study planning. Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering, 12, 1-8. Brown, R. (2015). The marketisation of higher education: Issues and ironies. New Vistas, 1(1), 4-9. Byun, K., Jon, J. E., & Kim, D. (2013). Quest for building world-class universities in South Korea: Outcomes and consequences. Higher Education, 65(5), 645-659. Byun, K., & Kim, M. (2011). Shifting patterns of the government’s policies for the internationalization of Korean higher education. Journal of Studies in International Education, 15(5), 467–486. 143 Çakır, M. P., Acartürk, C., Alaşehir, O., & Çilingir, C. (2015). A comparative analysis of global and national university ranking systems. Scientometrics, 103(3), 813-848. Chae, J. E., & Hong, H. K. (2009). The expansion of higher education led by private universities in Korea. Asia Pacific Journal of Education, 29(3), 341-355. Cho, D. W. (2012). English-medium instruction in the university context of Korea: Tradeoff between teaching outcomes and media-initiated university ranking. Journal of Asia TEFL, 9(4), 135-163. Cho, Y. H., & Palmer, J. D. (2013). Stakeholders’ views of South Korea’s higher education internationalization policy. Higher Education, 65(3), 291-308. Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage. Creswell, J. W., & Poth, C. N. (2017). Qualitative inquiry and research design: Choosing among five approaches. Thousand Oaks, CA: Sage. Dunrong, B. (2016). Global rankings and world-class university aspirations in China. In M. Yudkevich, P.G. Altbach, & L.E. Rumbley. (Eds.). The global academic rankings game (pp. 57-78). New York: Routledge. El-Khawas, E. (2007). Accountability and quality assurance: New issues for academic inquiry. In J. J. F. Forest & P. G. Altbach (eds.), International handbook of higher education (pp. 23–37). Dordrecht: Springer. Erkkilä, T. (2014). Global university rankings, transnational policy discourse and higher education in Europe. European Journal of Education, 49(1), 91-101. Erkkilä, T., & Piironen, O. (2018). Rankings and global knowledge governance: Higher education, innovation and competitiveness. Cham, Switzerland: Springer. Espeland, W. N., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113(1), 1-40. Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. 
Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. New York: Russell Sage Foundation.
Dill, D. D. (2009). Convergence and diversity: The role and influence of university rankings. In University rankings, diversity, and the new landscape of higher education (pp. 97-116). Leiden, Netherlands: Brill Sense.
Gopinathan, S., & Altbach, P. G. (2005). Rethinking centre-periphery. Asia Pacific Journal of Education, 25(2), 117–123.
Green, C. (2015). Internationalization, deregulation and the expansion of higher education in Korea: An historical overview. International Journal of Higher Education, 4(3), 1-13.
Gibbs, G. R. (2007). Analyzing qualitative data. In U. Flick (Ed.), The Sage qualitative research kit. Thousand Oaks, CA: Sage.
Guri-Rosenblit, S., Šebková, H., & Teichler, U. (2007). Massification and diversity of higher education systems: Interplay of complex dimensions. Higher Education Policy, 20(4), 373-389.
Han, S. H., Kim, S., Seo, I., & Kwon, K. S. (2018). An analysis of higher education policy: The case of government-supported university programs in South Korea. Asian Journal of Innovation and Policy, 7(2), 364-381.
Harrison, H., Birks, M., Franklin, R., & Mills, J. (2017, January). Case study research: Foundations and methodological orientations. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 18(1), 1-17.
Hazelkorn, E. (2007). The impact of league tables and ranking systems on higher education decision making. Higher Education Management and Policy, 19(2), 1-24.
Hazelkorn, E. (2008). Learning to live with league tables and ranking: The experience of institutional leaders. Higher Education Policy, 21(2), 193-215.
Hazelkorn, E. (2013). How rankings are reshaping higher education. In V. Climent, F. Michavila, & M. Ripolles (Eds.), Los rankings universitarios, mitos y realidades. Madrid: Editorial Tecnos.
Hazelkorn, E. (2015). Rankings and the reshaping of higher education: The battle for world-class excellence. London: Springer.
Hazelkorn, E. (2018). Reshaping the world order of higher education: The role and impact of rankings on national and global systems. Policy Reviews in Higher Education, 2(1), 4-31.
Hertig, H. P. (2016). Universities, rankings and the dynamics of global higher education: Perspectives from Asia, Europe and North America. London: Springer.
Joong Ang Ilbo. (n.d.-a). 2010 Jonghappyeongga Jipyo [Comprehensive evaluation indicators for 2010]. Retrieved January 19, 2021, from http://univ.joongang.co.kr/university/index_view.asp?pg=1&ps=10&pb=10&sf=0&sw=&tf=&sm=&cf=0&sc=&ix=10&ht=
Joong Ang Ilbo. (n.d.-b). Daehakpyeongga [College evaluation]. Retrieved December 30, 2020 from https://news.joins.com/university/evaluation/list?cloc=joongang-section-subsection
Jung, J. S. (2010). Hierarchical system of universities and measures of its resolution. Critical Review of History, 92, 133-157.
Kang, C. D. (2014). The comparative study of national and private universities’ competitiveness in Korea. The Korea Educational Review, 20(3), 301-323.
Kehm, B. M. (2014). Global university rankings—Impacts and unintended side effects. European Journal of Education, 49(1), 102-112.
Kim, S., & Lee, J. H. (2006). Changing facets of Korean higher education: Market competition and the role of the state. Higher Education, 52(3), 557-587.
Korean Educational Development Institute. (2018). Statistical yearbook of education. Retrieved from https://kess.kedi.re.kr/publ/view?survSeq=2018&publSeq=2&menuSeq=3894&itemCode=02&language=en
Korean Higher Education Research Institute. (2020). Daehak wigi geukbokeul wihan jibang daehak yukseong bangan [How to foster regional universities to overcome the crisis of universities] (Policy research report no. 1). Korean Higher Education Research Institute.
Lee, B. S. (2002, May 17). ‘All that glitters is not gold,’ hiring more foreign faculty has a downside too [Oigukingyosu ‘Bitjoeun gaesalgu’ chaeyonghwakdae maleun joeunde]. Retrieved from https://www.hankyung.com/society/article/2002051709851
Lee, J. K. (2004). Globalization and higher education: A South Korea perspective. Online Submission–ERIC, 4(1).
Lee, S. (1989). The emergence of the modern university in Korea. Higher Education, 18(1), 87-116.
Lo, W. Y. W. (2014). University rankings: Implications for higher education in Taiwan. Singapore: Springer Science & Business Media.
Locke, W., Verbik, L., Richardson, J. T., & King, R. (2008). Counting what is measured or measuring what counts? League tables and their impact on higher education institutions in England. Bristol: Higher Education Funding Council for England.
Locke, W. (2011). The institutionalization of rankings: Managing status anxiety in an increasingly marketized environment. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 201-228). Dordrecht: Springer.
Lune, H., & Berg, B. L. (2016). Qualitative research methods for the social sciences. Pearson Higher Ed.
Marginson, S. (2006). Dynamics of national and global competition in higher education. Higher Education, 52(1), 1-39.
Marginson, S. (2016). The global construction of higher education reform. The handbook of global education policy, 291-311.
Marginson, S., & van der Wende, M. (2007). Globalisation and higher education (OECD Education Working Paper No. 8). Paris: OECD.
Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). Thousand Oaks, CA: Sage.
Ministry of Education. (2019, December 11). Ministry of Education set its 2020 annual budget at 77.3871 trillion won [Gyoyukbu 2020nyeondo yesan 77jo3,871ekwon hwakjeong]. Retrieved from https://www.moe.go.kr/boardCnts/view.do?boardID=294&boardSeq=79270&lev=0&m=02
Ministry of Education. (2020). Higher education in Korea: Public disclosure information. Retrieved December 31, 2020, from https://www.academyinfo.go.kr/pubinfo/pubinfo0360/selectListLink.do
Nam, Y., Shim, S., & Kim, N. (2018, October 29). College evaluation 2018 [Daehak Pyeongga 2018]. Joongang Ilbo, p. 5. Retrieved from https://news.joins.com/article/23072970
OECD. (2019). Population with tertiary education (indicator). doi: 10.1787/0b8f90e9-en. Retrieved from https://data.oecd.org/eduatt/population-with-tertiary-education.htm
Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Thousand Oaks, CA: Sage.
Pervan, G., & Maimbo, M. (2005). Designing a case study protocol for application in IS research. Proceedings of the Ninth Pacific Asia Conference on Information Systems, 1281-1292.
QS. (2021). QS world university rankings-methodology. Retrieved April 28, 2021 from https://www.topuniversities.com/qs-world-university-rankings/methodology
QS. (n.d.). QS world university rankings. Retrieved December 30, 2020 from https://www.topuniversities.com/university-rankings/world-university-rankings/2020
Robinson, O. C. (2014). Sampling in interview-based qualitative research: A theoretical and practical guide. Qualitative Research in Psychology, 11(1), 25-41.
Rossman, G. B., & Rallis, S. F. (2016). An introduction to qualitative research: Learning in the field (4th ed.). Thousand Oaks, CA: Sage.
Schuman, M. (2009). The miracle: The epic story of Asia’s quest for wealth. New York, NY: HarperCollins.
Shin, J. C., & Toutkoushian, R. K. (2011). The past, present, and future of university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 1-18). Dordrecht: Springer.
Shin, J. C., & Jang, Y. S. (2013). World-class university in Korea: Proactive government, responsive university, and procrastinating academics. In J. C. Shin & B. M. Kehm (Eds.), Institutionalization of world-class university in global competition (pp. 147-163). Dordrecht: Springer.
Stack, M. (2016). Global university rankings and the mediatization of higher education. London: Springer.
Stake, R. E. (2005). Qualitative case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 443-466). Thousand Oaks, CA: Sage.
Stuart, D. L. (1995). Reputational rankings: Background and development. New Directions for Institutional Research, 88, 13-20.
Sauder, M., & Espeland, W. N. (2009). The discipline of rankings: Tight coupling and organizational change. American Sociological Review, 74(1), 63-82.
Teichler, U. (2011). The future of university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 259-265). Dordrecht: Springer.
THE. (2011). World university rankings 2010-11. Retrieved December 30, 2020 from https://www.timeshighereducation.com/world-university-rankings/2011/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats
THE. (2020). World university rankings 2019-20. Retrieved December 30, 2020 from https://www.timeshighereducation.com/world-university-rankings/2020/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats
THE. (2021). Methodology for overall and subject rankings for the Times Higher Education world university rankings 2021. Retrieved April 28, 2021 from https://www.timeshighereducation.com/sites/default/files/breaking_news_files/the_2021_world_university_rankings_methodology_24082020final.pdf
THE. (n.d.). THE world university rankings. Retrieved December 30, 2020 from https://www.timeshighereducation.com/world-university-rankings/2020/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats
Toma, J. D. (2007). Expanding peripheral activities, increasing accountability demands and reconsidering governance in U.S. higher education. Higher Education Research & Development, 26(1), 57-72.
Trow, M. (1974). Problems in the transition from elite to mass higher education. In OECD (Ed.), Policies for higher education (pp. 51–101). Paris: OECD.
UA. (n.d.-a). Timeline. Retrieved January 5, 2021, from https://en.***.ac.kr/about/history/timeline
UA. (n.d.-b). Facts. Retrieved January 5, 2021, from https://en.***.ac.kr/about/overview/fact
UA. (n.d.-c). Global standing. Retrieved January 5, 2021, from https://en.***.ac.kr/about/overview/ranking
UA. (n.d.-d). Vision. Retrieved January 6, 2021, from https://en.***.ac.kr/about/overview/vision
UA. (n.d.-e). Professional graduate schools. Retrieved January 7, 2021, from https://www.***.ac.kr/academics/graduate/professional_graduate_schools
UB. (n.d.-a). Overview of history. Retrieved January 12, 2021, from https://global.***.ac.kr/About/History/Overview
UB. (n.d.-b). Facts. Retrieved January 12, 2021, from https://global.***.ac.kr/About/Overview/Facts
UB. (n.d.-c). Regulations. Retrieved January 13, 2021, from http://rule.***.ac.kr/Rule/Sub.aspx?sm=2&sub=2
UB. (n.d.-d). President greeting. Retrieved January 18, 2021, from https://www.***.ac.kr/MainIntro/President/Greeting
UB. (n.d.-e). Research support. Retrieved January 18, 2021, from http://sanhak.***.ac.kr/user/indexSub.action?codyMenuSeq=20030&siteId=sanhak_new&menuUIType=top
UB. (n.d.-f). Main page. Retrieved January 15, 2021, from http://***.ac.kr/***main.aspx
UC. (n.d.-a). About. Retrieved January 20, 2021, from https://www.***.edu/eng/About/s620/sub03_07.do
UC. (n.d.-b). Vision declaration. Retrieved April 19, 2020, from https://www.***.edu/eng/About/vision/vision5.do
UC. (n.d.-c). Academics. Retrieved January 23, 2021, from https://www.***.edu/eng/edu/graduateSchool/graduate_school.do
UC. (n.d.-d). Inbound exchange/visiting program. Retrieved January 30, 2021, from https://www.***.edu/eng/International/Study***/CourseInformation.do
Usher, A. (2016). A short global history of rankings. In E. Hazelkorn (Ed.), Global rankings and the geopolitics of higher education: Understanding the influence and impact of rankings on higher education, policy and society (pp. 23–53). Abingdon: Routledge.
Wayt, L. K. (2015). Pathways to student success: A multiple case study on four-year colleges’ organizational change in performance funding states (Unpublished doctoral dissertation). The University of Nebraska-Lincoln, Lincoln, Nebraska.
Yang, C. C., & Chan, S. J. (2017). Is higher education expansion related to increasing unemployment rates? A comparative analysis of Japan, South Korea, and Taiwan. International Journal of Chinese Education, 5(2), 162-186.
Yeom, M. H. (2018). Critical reflection on community development and the role of universities. Journal of Educational Administration and Policy, 36(5), 385-417.
Yonezawa, A., Nakatsui, I., & Kobayashi, T. (2002). University rankings in Japan. Higher Education in Europe, 27(4), 373-382.

*Note: The names of institutions in the URLs of the websites were masked to avoid possible identification of the case institutions (UA, UB, and UC).