USING STUDENT LEARNING OUTCOME DATA AT COMMUNITY COLLEGES: UNDERSTANDING THE HOW

By Mathew Devereaux

A DISSERTATION
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Higher, Adult, and Lifelong Education—Doctor of Philosophy
2025

ABSTRACT

This study explores how a community college successfully implemented and sustained the use of student learning outcome (SLO) assessment data to improve institutional quality. Utilizing interpretivist exploratory case study methodology, the research examines a community college recognized for its excellence in assessment practices by the National Institute for Learning Outcomes Assessment (NILOA). The theoretical framework used to ground this research is Ivancevich et al.’s (2014) Model of Organizational Change, while the conceptual framework utilized is Bolman and Deal’s (2017) Four Frames Model. Data collection included semi-structured interviews with faculty, staff, and administrators. Findings from this study reveal three overarching themes critical to the success of the initiative: (a) managing conflict and resistance, (b) the role of integrated leadership in driving change, and (c) building a culture of assessment through collaboration and inclusivity. Practical implications emphasize the need for institutions to create systems of peer support, align assessment practices with institutional missions, and train leaders in multi-frame analysis to navigate organizational complexity. These findings provide actionable insights for institutions similar to the one studied in this dissertation that are striving to close the assessment loop and improve student success through evidence-based decision making.

Copyright by MATHEW DEVEREAUX 2025

ACKNOWLEDGEMENTS

First and foremost, I want to acknowledge my parents, Mike and Julie Devereaux, who instilled in me both the importance of education and the will to pursue my long-term goals. As lifelong public school administrators, their reverence for education was exhibited through their actions and inspired me to pursue, and persevere in, the goal of obtaining my PhD. I must also acknowledge all of the positive support I received from my dissertation chair and advisor, Dr. Matthew Wawrzynski. While he sometimes told me things I didn’t want to hear, Dr. Wawrzynski always told me what I needed to hear. I thank you for your patience and wisdom along the way. I would also like to acknowledge and thank Dr. Wayne Sneath and Dr. Kriss Ferluga. Wayne, your support as I tried to balance my career and education was crucial to my success. I couldn’t ask for a better boss and friend. Kriss, your experience and input with the writing process were irreplaceable. Thank you for your time and effort in helping me through this. To my wife, Vanessa, this process has not been easy, and I want to thank you for your constant support and sacrifice. There were lots of late nights waiting for me to drive back from East Lansing when you had to take care of our family alone. Thank you and I love you. To Lincoln and Delainey, thank you for always helping me feel better during the times this dissertation got overwhelming. To Maren, you have been my single biggest inspiration to finish this program. Without you, I very well may have fallen short. The reason I kept going is because I wanted to make you proud, and I hope I accomplished that.
TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION……………..…………………………………………………..1
CHAPTER 2: LITERATURE REVIEW………………….……………………………………..12
CHAPTER 3: METHODS………………….……………………………………………………48
CHAPTER 4: BACKGROUND INFORMATION…………….………………………………..61
CHAPTER 5: RESULTS & THE FOUR FRAMES WORKING IN ISOLATION…………….68
CHAPTER 6: RESULTS & THE INTEGRATION OF THE FOUR FRAMES…..…………..101
CHAPTER 7: DISCUSSION AND FUTURE RESEARCH…………………………………..117
REFERENCES…………………………………...…………………………………………….137
APPENDIX A: INTERVIEW PROTOCOL…………………………………………………...149
APPENDIX B: INVITATION TO PARTICIPATE....…………………………………………151
APPENDIX C: CONSENT FORM….…………....…………………………………………....152

CHAPTER 1: INTRODUCTION

In higher education literature it is common to hear an author state that the field is not a monolith; rather, there are a multitude of institutional types, each of which serves its own mission. Further, some institutional types, such as community colleges, serve multiple missions (Dougherty & Townsend, 2006). According to Ewell (2011), many community colleges simultaneously provide the first two years of a baccalaureate degree, associate degree instruction in many vocational fields that carries transfer credit, terminal occupational credentialing that has immediate workplace value, remedial and developmental instruction to render students college ready, noncredit instruction (e.g., literacy training and English as a Second Language), and contract training for employers and local businesses. While no two higher education institutions (HEIs) are the same, every HEI in the United States deals with similar demands for accountability from its various stakeholder groups (e.g., students, parents, faculty, staff, lawmakers, taxpayers). The most resounding of calls for accountability came from the U.S. Department of Education with the release of The Future of Higher Education, commonly referred to as the Spellings Report, in 2006. The report painted a seemingly bleak outlook for the landscape of higher education and plainly stated the need for more proactive measures of accountability to address this outlook. The boisterous call from the federal government created attention and action around the idea of using student learning outcomes (SLO) assessment data as an empirical means of holding HEIs accountable and proving value to stakeholders (Rhodes, 2015). According to Hernon and Dugan (2004), SLO assessment can be used as a means of ensuring institutional effectiveness by using the results for continuous improvement efforts. The good news is nearly all HEIs are attempting to show accountability by gathering SLO assessment data (Blaich & Wise, 2011). One reason for the high level of participation in SLO assessment is because HEIs are compelled to do so as a part of the accreditation process. For example, the Higher Learning Commission, a nationally recognized regional accreditation body, requires: “The institution demonstrates a commitment to educational achievement and improvement through ongoing assessment of student learning” (HLC Policy, 2019). The compulsory nature of the calls for SLO assessment, coupled with high levels of compliance among HEIs, shows that demands for accountability are being taken seriously. The problem is that even though a vast majority of HEIs are making an effort to engage in the assessment process, simply engaging in the assessment of SLOs without acting on the results to improve learning outcomes has the potential to be counterproductive in terms of wasted resources and low morale.
For example, 58% of faculty surveyed said assessment efforts seem focused on satisfying outside groups and do nothing to help students (Lederman, 2018). Further empirical research seems to support these faculty perceptions. For example, in the Wabash National Study, Blaich and Wise (2011) found very few instances of actual change in response to information generated by institutional assessment of SLOs, and the reason provided for the lack of instances is very simple: “It is incredibly difficult to translate assessment evidence into improvements in student learning” (p. 11). The information above leaves a glaring question: Are any institutions effectively using SLO assessment data to change and improve institutional quality?

Purpose of Study

Ironically, one constant in higher education is pressure to change. These pressures may arise from internal forces, such as institutional improvement plans, or from external forces, such as new accreditation standards. Agents within HEIs continually seek ways to promote student learning with the end goal being to improve institutional quality, and these intentional acts carried out in furtherance of a desired new direction are what Kezar (2014) considers change. There are a multitude of factors that can impact what, why, and when change occurs, but of particular importance to this study is the question of how change occurs. The purpose of this study is to gain an understanding of how an institution gets to a point where it can successfully use SLO assessment data to change, with the goal being to improve institutional quality. The process of using SLO assessment data to change toward improving institutional quality is popularly referred to in the literature as “closing the loop.” Calls for closing the loop by using SLO assessment to improve institutional quality are ubiquitous in higher education literature (Baker, Jankowski, Provezis, & Kinzie, 2012; Banta & Blaich, 2011; Driscoll & Wood, 2007; Suskie, 2018). However, as Hamill (2015) points out, scant evidence exists on how to carry out this task. Due to the lack of evidence, this dissertation seeks to understand how a specific community college—recognized by the National Institute for Learning Outcomes Assessment (NILOA) for obtaining the Excellence in Assessment designation—got to the point where it is using SLO assessment data to change and improve institutional quality (Excellence in Assessment, n.d.). Put another way, this project will seek to gain understanding into the process of how a particular community college got to the point where it is effectively closing the loop.

Understanding Organizational Behavior Broadly: A Conceptual Framework

The purpose of this section is to provide a brief description of the conceptual framework used to understand organizational behavior in a very broad sense. A fuller description of the conceptual framework can be found in the literature review section. Goran Carstedt once said, “The world simply can’t be made sense of, facts can’t be organized, unless you have a mental model to begin with” (as cited in Bolman & Deal, 2017, p. 11). The mental model used to organize and make sense of results from this study is Bolman and Deal’s (2017) Reframing Organizations, which will act as a conceptual framework (see Figure 1). This framework outlines four distinct frames through which organizational issues may be understood: structural, human resource, political, and symbolic.
The reason I chose this conceptual framework is that organizations are complex structures requiring the cooperation of various individuals and groups who often have differing views, beliefs, and motives, and these multiple frames allow for a more nuanced understanding of organizational issues (Bolman & Deal, 2017; Markóczy, 2004). This broad conceptual frame provided various lines of inquiry to probe during the data collection phase of the study.

Understanding How Organizations Change: A Theoretical Framework

The purpose of this section is to provide a brief description of the theoretical framework used to understand how organizations change. A fuller description of the theoretical framework can also be found in the literature review section. According to Regoniel (2012), a theoretical framework can provide a representation of the relationships between variables in a given phenomenon. Understanding how colleges change is not an easy task; in fact, there is an entire field of study dedicated to change management. A comprehensive model of organizational change presented in the book Organizational Behavior and Management (Ivancevich, Konopaske, & Matteson, 2014) is used to understand how the specific institution in question engaged in the change process (see Figure 2). While institutions are largely free to change how they see fit, community colleges are currently compelled to engage in change based on evidence gathered through SLO assessment (U.S. Department of Education, 2019; HLC Policy, 2019). External forces requiring community colleges to engage in this change process have effectively fulfilled a portion of Ivancevich et al.’s (2014) model—performance outcomes, diagnosis of the problem, and selection of the appropriate method of change. Because much of the change process is outside of a community college’s locus of control, this study aims specifically to understand the remaining portions of the change process that are within the institution’s locus of control. The portions of the change process under consideration for this study fall in the bottom portion of Ivancevich et al.’s (2014) model in Figure 2—impediments and limiting conditions, implementation of the method, and program evaluation.

Research Question

The overarching question to be answered by this study is, “How did one particularly successful community college implement and sustain the use of SLO assessment data to improve institutional quality?” Specifically, an institution that has been successful in SLO assessment is defined as one that integrates assessment practices throughout the institution, provides evidence of student learning outcomes, and uses assessment results to guide institutional decision-making and improve student performance (Excellence in Assessment, n.d.). Scant information currently exists on how community colleges implement the practice of using SLO assessment data to improve institutional quality. Due to this lack of scholarship, an exploratory case study on how one particularly successful community college got to the point where it is using SLO data to improve institutional quality is warranted. This study examines an institution that has been recognized by NILOA as achieving its Excellence in Assessment designation.
Focusing on a single institution allows for a fuller understanding of my research question: “How did one particularly successful community college get to the point where it is implementing and sustaining the use of SLO assessment data to improve institutional quality?”

Significance of Study

The following are three specific implications this study has for the higher education community:
1. Higher education’s crisis of perceived value
2. Community colleges’ importance to higher education as a whole
3. Institutional consequences of collecting SLO data and not using it

Crisis of Perceived Value

Since the release of the Spellings Report (2006), questions surrounding the value of higher education have risen to the top of the national conversation. According to Kelchen (2018), the most commonly cited reason for this push towards assessing educational quality is to prove the monetary value of higher education. The continual increase in tuition and fees—which are rising much faster than inflation—coupled with a decrease in public appropriations continues to be a driving factor in the $1.4 trillion student debt crisis (Fain, 2018). These interconnected financial pressures have driven, and will continue to drive, the constant calls for accountability in higher education (Fain, 2018). A recent Gallup polling report from Jones (2018) shows just how accurate the Spellings Commission’s (2006) predictions of heightened calls for accountability and waning public support for higher education were. According to Jones (2018), only 48% of American adults responding to polling expressed confidence in the value of higher education. Public trust in the quality and value of higher education has eroded over a long period of time and will likely take a long time to rebuild. Banta and Blaich (2011) argue that only by holding institutions publicly accountable for evidence of learning will we ensure that student learning is among the high-priority activities in which colleges and universities engage. The decline in public perception of quality in American higher education negatively affects the legitimacy of the entire field, so finding a way to empirically improve the quality of higher education is a crucial first step in reestablishing trust that higher education is a public good worth pursuing. By gaining an understanding of the organizational process undertaken by a community college that has successfully implemented and sustained the use of SLO assessment data to improve institutional quality, others in the community college sector may reimagine existing or develop new processes for using SLO data to improve their institutional quality. Relying on more direct and empirical measures of student learning may help to rebuild public trust in the quality and value of higher education that has been lost over recent years.

Importance of Community Colleges to Higher Education

Since the inception of the first American community college in 1901—Joliet Junior College—these institutions have played a key role in facilitating the massification of higher education (Drury, 2003; Geiger, 2011). According to the National Center for Education Statistics (NCES) and the National Student Clearinghouse Research Center (NSC), 38% of all students enrolled in higher education in the United States are attending community colleges, while 49% of all students at 4-year institutions in the 2015-2016 academic year had attended a community college within the previous 10 years (Ginder & Kelly-Reid, 2018; NSC, 2017).
These statistics demonstrate how important the community college sector is to the overall health of higher education. Not only do community colleges in some way affect nearly half of all students in higher education, but they also play a particularly important role in the lives of many marginalized populations, such as older students, students from low socioeconomic backgrounds, and racial and ethnic minorities. For example, 42% of students enrolled at community colleges are 25 years of age or older (Ma & Baum, 2016). Adult learners often bring unique challenges to their educational experiences and are more likely to encounter educational obstacles such as attending college while working full-time, being a spouse, and/or caring for dependents (Fairchild, 2003; Quaye & Harper, 2014). According to the Community College Research Center, community colleges also enroll, by far, the largest number of beginning postsecondary students who come from the lowest household income quartile (“Community College FAQs”, n.d.). Students from low socioeconomic backgrounds encounter unique challenges, which can act as social, psychological, and/or physiological barriers to success (Jury et al., 2017). Lastly, racial and ethnic minorities, specifically Black and Hispanic students, are overrepresented at community colleges and underrepresented at 4-year universities (“Community College FAQs”, n.d.). Among first-time, full-time undergraduates in the 2014 cohort, 31% attended public two-year institutions; the rate was 36% for Black students and 43% for Hispanic students, compared with only 28% for White students (Ma & Baum, 2016). Research over the years has shown time and again that students from racial and ethnic minority groups, especially Black and Hispanic students, tend to encounter more barriers to success and achieve consistently lower educational outcomes than other groups (Ward, 2006). This evidence clearly shows how important community colleges are in terms of accessibility, not only to American higher education as a whole, but particularly to some of the more vulnerable populations. The hope is that by understanding how one very successful institution got to the point of using SLO data to improve institutional quality, other community colleges may be motivated to undertake similar organizational changes to successfully close the assessment loop. Also, by focusing efforts on the community college sector, this study has the potential to help improve institutional quality at institutions that serve a majority of our most vulnerable and marginalized student populations.

Institutional Consequences of Collecting SLO Data and Not Using It

The final reason for conducting this study is its potential to help institutions that are struggling with the detrimental effects of being compelled to engage in assessment of SLOs without having a plan for using the data. According to Kinzie, Hutchings, and Jankowski (2015, p. 56), “The purpose of assessment is not achieved simply through the collection of vast amounts of valid and reliable data. Rather, assessment’s purpose is to answer questions, shape better policies, make better decisions—all designed to improve student success and strengthen institutional performance.” There are opportunity costs from engaging in a task and not following through completely, which can have negative impacts on time, resources, and often employee morale (Cain, 2014; Chadi, Jeworrek, & Mertins, 2017). The requirement to assess SLOs will likely remain in place for all HEIs.
By carrying out this study, community colleges that are struggling to use SLO assessment effectively may learn from an institution that is doing it successfully. By expanding the collective understanding of how community colleges can successfully use SLO data to improve institutional quality, this study may help these institutions be more purposeful and effective with their SLO data collection and subsequent use of those data to drive institutional improvement.

Why Using SLO Data to Improve Is So Difficult

Developing and implementing an effective SLO assessment plan is becoming a necessity for every institution of higher education, but this development has proven to be very difficult. Regardless of the level of institution, the process of using SLO assessment data to improve is made difficult because ensuring institutional commitment—especially from faculty, but also from administration—is challenging. Historically, faculty members in higher education have proven to be less than enthusiastic about engaging in assessment practices, especially when those assessment practices are seen as being forced upon them (MacDonald, Williams, Lazowski, Horst, & Barron, 2014). Another reason why using SLO assessment data to improve institutional quality is so difficult has to do with the need for human capital to carry out the task. Organizing and implementing assessments, analyzing the results, and engaging in subsequent strategic planning based on those results require substantial resources and time, which community colleges generally do not have (Nunley, Bers, & Manning, 2011). A final reason why using SLO assessment data to drive institutional improvement has proven to be challenging is that it can be difficult to select or develop valid and reliable measures (Nunley, Bers, & Manning, 2011). These are by no means an exhaustive list of the reasons community colleges find it difficult to use SLO assessment data to improve institutional quality, but they are some of the more commonly cited in the literature.

Summary of Chapter

The sections of Chapter 1 laid the groundwork for this study by introducing the topic broadly, discussing the purpose of the research, previewing the conceptual and theoretical underpinnings, presenting the research question, and developing a rationale for the significance of this work. Using SLO assessment data to inform institutional effectiveness initiatives can help community colleges close the assessment loop and provide empirical support to bolster the value of higher education. By gaining a deep understanding of how one particularly successful community college got to the point where it is using SLO assessment data to improve institutional quality, this study aims to provide a prototype for success. It is important to note the goal of this study is not necessarily to be prescriptive for all other community colleges. However, when there is a lack of scholarship surrounding a particular topic—as is the case with community colleges using SLO assessment data to drive institutional improvement—providing a detailed example of how this challenge is addressed successfully can be a productive place to start.

CHAPTER 2: LITERATURE REVIEW

This study seeks to understand how one community college got to a point where it is successfully implementing and sustaining the use of SLO assessment data to improve institutional quality. This is a complex issue, which requires an understanding of relevant scholarship from a range of subjects.
The purpose of this literature review is to introduce and analyze some integral concepts at play in the field of SLO assessment and to show how they converge to apply pressure on institutions in the community college sector. Specifically, the concepts of accountability, accreditation, and assessment of SLOs; current knowledge of using SLO assessment data; and the conceptual and theoretical frameworks—which will help inform the direction of inquiry and make meaning of results—are discussed at length in this section. Beyond simply defining the aforementioned issues, this section also seeks to apply a critical eye to these concepts, bringing to light the challenges they pose to community colleges. The constant pressure—both internal and external—upon institutions to assess and use SLO data to improve institutional quality, coupled with a lack of literature on how to engage in this task, can weigh heavily on any HEI. However, community colleges are particularly susceptible to this pressure due to woefully low staffing levels in institutional research offices, which is where assessment responsibilities typically fall (Morest & Jenkins, 2007). This susceptibility to pressure from insufficient staffing, which is common to many community colleges, is a major reason why this study aims to understand how one community college effectively and continually uses SLO assessment data to improve institutional quality.

Accountability, Accreditation, and Assessment: Differentiating Key Terms

When reading the literature on student learning outcomes, several key concepts arise early and often. Specifically, the ideas of accountability, accreditation, and assessment are repeated ad nauseam. What is most disconcerting is that while these concepts are ubiquitous in the literature, they are often conflated, which leads to more confusion for community college practitioners who are trying to address these issues. The goal of this section is to introduce and operationalize each of these concepts and discuss how they affect community colleges. Metaphors are often helpful tools for conveying how complex ideas work because they aid in making associations between one familiar concept and another (Jensen, 2006). In explaining how accountability, accreditation, and assessment are separate but interconnected concepts, a metaphorical comparison can be drawn. Metaphorically speaking, accountability can be seen as a destination, accreditation a vehicle, and assessment a map. Using this metaphor, this section provides evidence for how each of these concepts works to create pressure on community colleges.

Accountability

By design, the American system of higher education is large and unorganized, and this haphazard structure has brought with it some real benefits, such as independence, diversity, and competition for excellence (Carey & Schneider, 2010). As Carey and Schneider (2010) also point out, though, this lack of organization has led many institutions toward seeking individual interests at the expense of the collective national interest. Zumeta (2011) aptly points out that if higher education places too much emphasis on independence at the expense of broad public goals, the field risks its reputation and, ultimately, its independence. The balance between complete institutional independence and rigid oversight is the crux of the accountability debate in higher education.
Broadly speaking, accountability can be defined as a responsibility for one’s actions to various stakeholders, which usually arises as a result of legal, political, financial, personal, or moral ties (Zumeta, 2011). The concept of holding institutions of higher education accountable for their actions is not a new one, and as Marchand and Stoner (2012) put it, “whoever pays the bill for higher education is, of course, entitled to an accounting” (p. 18). However, what makes the concept of accountability particularly complex is the fact that it is a social construction; therefore, its meaning is highly contextualized and depends on the time and space in which it is being used (Zumeta, 2011). Until recently, accountability in higher education was seen as a literal accounting of the flow of input resources and of how decisions were made by individuals in charge (McLendon, Hearn, & Deaton, 2006). Results of these accountability efforts were expected to be reported to a diverse set of stakeholders (e.g., students, parents, state and federal taxpayers, donors, and institutions themselves). However, several periods of rapid expansion coupled with large public investments have led to a more established model of accountability for American higher education (Ewell & Jones, 2006). The release of the Spellings Report in 2006 started a seismic shift in what it meant for an institution of higher education to be held accountable. The previous focus on only resource inputs has evolved to include more attention to performance outcomes, specifically SLOs (McLendon et al., 2006). The Spellings Report clearly articulates the importance of SLOs in determining accountability with the following statement: “Despite increased attention to student learning results by colleges and universities and accreditation agencies, parents and students have no solid evidence, comparable across institutions, of how much students learn in colleges or whether they learn more at one college than another” (U.S. Department of Education, 2006, p. 13). While the exact definition of accountability for any one institution of higher education can vary depending on who, specifically, is holding it to account, the National Association of Independent Colleges and Universities, along with the American Federation of Teachers, agrees that the essence of what constitutes accountability in higher education is found in evidence of SLOs (Ewell & Jones, 2006). Thanks in large part to the Spellings Report, the destination of accountability is coming into sharp focus for community colleges, but these institutions must also have intimate knowledge of their vehicle—a clear understanding of the mechanisms for moving toward the destination—in order to reach that destination.

Accreditation

Metaphorically, if accountability is the destination for HEIs, accreditation is the vehicle that gets them there. The goal of accreditation, according to the U.S. Department of Education, is to ensure that institutions of higher education meet acceptable levels of quality; accreditation is also one of three elements of oversight used by the government to recognize the legitimacy of an institution of higher education (U.S. Department of Education, 2019). The first postsecondary accreditation body was established in 1895 with the goal of defining the boundary between high school and college, while also establishing a system of peer review as a condition of membership (Hegji, 2017).
Nearly a century later, in the 1980s, accreditation bodies began using SLOs as a means of holding institutions publicly accountable (Hegji, 2017; McLendon et al., 2006). For an accreditation agency to have the authority to compel HEIs to act, it must be recognized by the U.S. Department of Education (U.S. Department of Education, 2019). There are currently three broad types of accreditation bodies overseeing the field of higher education in the U.S.: (a) regional accreditors, which accredit an institution as a whole (as of 2017, 3,509 institutions were regionally accredited); (b) national accreditors, which also accredit entire institutions (as of 2017, there were 241 nationally accredited institutions); and (c) specialized or programmatic accreditors, which also operate across the nation but focus on specific programs within an institution or on single-purpose institutions (e.g., engineering and technological schools) (Hegji, 2017). According to NCES data, as of 2016 there were 4,583 degree-granting institutions in the U.S., which means the 3,509 HEIs under regional accreditors’ purview account for roughly 75% of all HEIs in the U.S. (Hegji, 2017; NCES, n.d.). Because regional accreditors are responsible for overseeing a majority of all HEIs in the U.S., they tend to be viewed as having more authority than other types of accrediting groups. Currently, there are seven regional accreditation agencies, each of which is recognized by the Council for Higher Education Accreditation (CHEA) (CHEA, 2019). When it comes to the accreditation process, community colleges face some unique challenges. Historically, community colleges serve multiple missions and offer a multitude of educational programs, which commonly fall into a few broad functions: transfer preparation, career and technical education programs, remedial education, and non-credit programs (e.g., community education, continuing education) (Baime & Baum, 2016). Because of these multiple missions, community colleges must meet the standards of several different accreditation bodies. For example, along with meeting standards for regional accreditation through the Higher Learning Commission, Lansing Community College must report to 15 programmatic accreditation bodies (Lansing Community College, 2019). According to the U.S. Department of Education’s Accreditation Handbook (2019), accreditation agencies must demonstrate they have standards that address institutional success in relation to student achievement. Therefore, community colleges should focus their assessment efforts on the gathering and use of SLO assessment data in order to fulfill a large portion of their accreditation requirements in one process. This streamlining has the potential to relieve a great deal of both internal and external stakeholder pressure while conserving resources by reducing redundancy.

Assessment of SLOs

Continuing with the metaphor from the last section, if accountability is the destination and accreditation the vehicle, then assessment of student learning outcomes is the map that guides institutions down the most direct path to accountability. Linda Suskie (2010) spoke eloquently and succinctly on the function of assessment when she stated, “Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society” (para. 7).
While assessment is vitally important for community colleges, institutions must remember that it is only one, albeit integral, piece of the puzzle and should not be an end in and of itself. There is great diversity among U.S. institutions of higher education; however, one enduring promise made to students in the mission of nearly every HEI is the goal of academic growth among the student body (Gaff & Meacham, 2006; Suskie, 2010). Many institutions of higher education espouse similar rhetoric in their mission statements about the topic of student learning. Often, these statements make broad claims about empowering students to reach their educational goals and develop lifelong skills that will prepare them for the workforce and civil society. These claims, as stated, can be very difficult to measure, especially when institutions rely on traditional metrics of institutional assessment such as incoming student characteristics, retention rate, and graduation rate (Alfred, Ewell, Hudgins, & McClenney, 1999). Relying solely on traditional metrics like the ones mentioned above, one might conclude that if an institution is providing adequate access and retaining and graduating a large number of students, then it is succeeding; yet merely focusing on completion may not be good enough (Ramaley, 2012). Newer measures of institutional effectiveness—such as job placement rates, starting salaries, and debt levels—have become popular in recent years, but it is argued that these, like the older measures of effectiveness, are mere proxies for the true goal of higher education: student learning (Rhodes, 2015). These proxy measures tell us nothing of what, when, where, or how a student learns, and they certainly do not speak to the quality of the knowledge, skills, and abilities students gained while at the institution. On the other hand, Ikenberry and Kuh (2015) claim one way to broaden access and contain costs, while at the same time enhancing student success for all, is to use SLOs as metrics for institutional accountability. If a central tenet of most community college missions is to help students grow academically (Gaff & Meacham, 2006; Suskie, 2010), then these institutions should use SLOs to assess whether they are succeeding in attaining their mission and to understand how they can become more successful. By gaining an in-depth understanding of how one institution has been particularly successful at implementing and sustaining the use of SLO assessment data to improve institutional quality, other similar institutions may be able to use this knowledge in similar efforts. There is a vast difference between the act of gathering SLO data and the act of using those data to inform action. While very little is known about the question of how institutions use SLO data, there is quite a bit of existing scholarship about the act of assessing and gathering data on SLOs. The following subsections provide a landscape of the current state of SLO assessment in the community college sector by addressing some basic questions: (a) What is assessment, and what forms may it take? (b) Where within the institution may assessment be conducted, and at what level of measurement? (c) Why should institutions assess, and what function does it serve? (d) Who is currently assessing SLOs? (e) How are institutions assessing SLOs?

What SLO Assessment Is (Form)?
In simple terms, assessment can be seen as a tool used to compare an intended outcome with actual outcomes (National Institute for Learning Outcomes Assessment, n.d.). However, one potential reason why community colleges have struggled with assessing SLOs is that assessment does not have a singular form or function. For example, an assessment can be direct (e.g., a 50-question algebra test) or indirect (e.g., a survey asking a student their perceptions of how much they learned in algebra class). While both types can be helpful in providing insight into student learning, accreditation agencies require direct measures of SLOs (U.S. Department of Education, 2019). With regard to form, direct assessment methods include, but are not limited to: embedded course assessments, which reflect student performance on a given course task; portfolios, which are collections of student work that show a student’s achievement and progress over time; performance assessments, which are demonstrations or tasks completed in the presence of evaluators who evaluate with a common rubric; professional jurors or evaluators, who may come from outside agencies or industry to evaluate and provide feedback on student work; achievement testing, which measures the degree to which a student has gained an understanding of course material; and papers or essays, which measure the extent to which a student can synthesize and apply knowledge learned during a course (Hernon & Dugan, 2004). The sheer volume of different forms assessments may take can be overwhelming to community colleges that are struggling to assess effectively.

Where to Assess SLOs (Levels of Assessment)?

Assessment can and should take place at various levels of an organization. SLO assessment is most commonly measured at three distinct levels: institutional, program, and course (Seclosky & Denison, 2012). Institutional-level assessment evaluates the outcomes of the organization as a whole, and the results may help inform strategic planning, decision making, accreditation, revising institutional outcomes, improving student engagement and success, creating a culture of teaching and learning, and enhancing faculty collaboration (Kinzie et al., 2015). Program-level assessment is more focused on students, faculty, and learning outcomes within a major or discipline (Kinzie et al., 2015; Seclosky & Denison, 2012). Results of program-level assessment are commonly used to set faculty priorities, determine professional development opportunities, improve student support services, revise curriculum, align curriculum, and improve program outcomes (Kinzie et al., 2015). Lastly, course-level assessment is used to determine whether students are meeting the intended learning outcomes of a given course, and results are commonly used to provide feedback to both students and faculty on their respective performances (Miller & Leskes, 2005). While community colleges should assess SLOs at each of these levels as part of a robust assessment plan, this study focuses on institutional-level assessment. The reason I have chosen to concentrate on this level is that understanding how a community college uses institutional-level SLO data—as opposed to program- or course-level data—to inform the improvement process provides the opportunity for a more holistic understanding of how assessment data are used in institutional improvement.
Institutional-level data encompass all programs and courses, and institutional-level assessment is often overseen by interdisciplinary committees, which means all academic units of an institution are involved. If an institution can manage to use SLO data successfully to improve at the institutional level, implementing the SLO assessment process at the program and course levels may prove to be easier.

Why Assess SLOs (Function)?

Early research on educational assessment by Bloom (1968) showed some of the positive influences assessing student learning can have on informing instruction (William, 2011). Bloom (1968) argued that the variability in student academic performance was merely a reflection of differences in the rate at which students learn, and that the tendency of student academic performance to take on the characteristics of the normal distribution was caused by the failure of instructors to recognize differences in how students learn (William, 2011). The research on mastery learning conducted by Bloom (1968), along with his subsequent studies conducted in the 1980s on the topic of using evaluation to inform learning, helps build a compelling argument for why institutions should use SLO assessment data to improve teaching and learning. SLO assessment can serve several different functions. What follows is a list of some of the more popular uses of SLO assessment as identified by the Council of Chief State School Officers (2012):
● Formative assessment, which is intended to be used during instruction to provide feedback on student learning and allow for differentiation of pedagogical practice.
● Summative assessment, which is intended to act as an evaluation of achievement and is usually administered after learning is supposed to have occurred.
● Diagnostic assessment, which is meant to provide an indication of knowledge students do or do not currently possess.
● Interim or benchmarking assessment, which is often used as periodic “snapshots” of student learning progression (Council of Chief State School Officers, 2012).
One common misconception about the aforementioned assessment functions is that they are “types” of assessment. In fact, the various types of assessment—mentioned in the form subsection—can be used to serve any and all of these functions. This section outlined several reasons why institutions engage in SLO assessment, but a common thread in all the examples provided in the previous paragraph is the idea of using assessment to aid in student learning. In reality, the most common rationale institutions provide when asked what motivates them to assess SLOs is to fulfill accreditation requirements (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). This somewhat superficial rationale for engaging in SLO assessment may help explain why there are so few institutions actually using these data to improve institutional quality (Blaich & Wise, 2011).

Who Currently Assesses SLOs?

Those in charge of developing a plan for how their institution will engage in the assessment of SLOs must make many consequential decisions. Assessment practitioners must determine which form their SLO assessment will take as well as what function SLO assessment will serve for the institution. Research indicates the vast majority of HEIs are at least reporting on their efforts and intentions to assess SLOs.
According to research conducted by the Association of American Colleges and Universities, 87% of academic officers polled assess SLOs across the curriculum, 11% claim they intend to begin assessing SLOs, and the remaining 2% acknowledge the need to begin (Association of American Colleges & Universities, 2016). These results indicate HEIs of all kinds are taking broad calls for assessment seriously. Further, research from Blaich and Wise (2011) found most institutions already have more than adequate SLO assessment policies and procedures in place, which are generating plenty of actionable evidence. As mentioned earlier, new expectations from both the U.S. Department of Education and all recognized regional accreditation agencies requiring the direct assessment of SLOs have likely played a large role in motivating the majority of HEIs to begin the implementation and collection of SLO assessment data (U.S. Department of Education, 2019; Kuh et al., 2014).

How are SLOs Assessed?

While research indicates nearly all HEIs are implementing and collecting data from SLO assessment, there is a great deal of variation in how institutions complete this task (Baker et al., 2012). Much of the current assessment literature focuses on the process of developing and enacting institutional assessment plans, and out of this research has come a wealth of suggestions for best practice in SLO assessment. According to Hernon and Dugan (2004), “Best practice refers to the processes, practices, and systems that performed exceptionally well and are widely recognized as improving an organization’s performance and effectiveness” (p. 303). The best practice literature yields some common themes regarding how institutions should develop and implement an SLO assessment plan. The following is a list of common themes found in a search of SLO assessment plan best practice literature:
● Begin with the end in mind. Know what information you want to gain from assessing SLOs.
● All stakeholder groups should be consulted throughout the process, but faculty participation in the development of SLO assessment tools and use of data is of utmost importance.
● Attempt to create a culture of assessment by linking SLO assessment to the institutional vision, mission, and goals.
● Successful SLO assessment requires a commitment of adequate financial resources, including training and professional development.
● Development and implementation of SLO assessment should be multifaceted in terms of level of measurement, form, and function, while adhering to accepted methodological standards.
● SLO assessment should be embedded within current institutional processes. The more authentic and less intrusive the assessment is for students and faculty, the better.
● A clear communication strategy is necessary to bring legitimacy to the process and outcomes.
● Results from SLO assessment must be used to improve institutional outcomes (Crowell & Calamidas, 2015; Kinzie et al., 2015; Kinzie, Jankowski, & Provezis, 2014; The Academic Senate for California Community Colleges, 2010; Serban, 2004; Stassen, 2012).
This list of common themes in SLO assessment best practice provides very broad suggestions for how institutions might implement an SLO assessment initiative, and because of the vague nature of these suggestions, no two HEIs engage in assessment of SLOs in the exact same way.
With the vast majority of HEIs engaged in assessing and gathering data on SLOs (Kuh et al., 2014), one might conclude the task of evaluating SLOs is being completed and the status quo is just fine, but that is not the case. Manning (2011) states, “The most important aspect of assessment is not that ‘we have done it’ but that ‘we have used the results to inform action’” (p. 19). As established in the introduction, community colleges are compelled to gather SLO data and use it to improve, and while there is reason to believe institutions are gathering the information, there is little evidence to show many institutions are using the data they collect to improve institutional quality (Blaich & Wise, 2011; Kuh et al., 2015; Kuh et al., 2014).

Using SLO Data to Improve Institutional Quality

The previous section discussed the act of assessing SLOs; specifically, it addressed the questions of what, where, why, who, and how institutions are engaging in this action. There is a substantial amount of literature on the act of collecting SLO assessment data, and research suggests most institutions are engaging in the collection of SLO data (Blaich & Wise, 2011). In this section, I discuss the topic of using SLO assessment data to improve institutional quality, which is a far less researched topic. The scholarship that does exist on the use of SLO assessment data speaks mostly to the questions of what actions can be taken and why those actions help to improve institutional quality. The following section critically examines the existing literature around what specific actions can be taken toward using SLO assessment data and why those specific actions can help institutions to improve institutional quality.

Using SLO Data to Improve Institutional Quality: The What and Why

The underlying reason most often stated in the literature for taking action on SLO assessment data is to improve institutional quality, and the act of using SLO data to improve institutional quality is referred to as evidence-based decision-making (Ikenberry & Kuh, 2015). The Brookings Institution contends the ultimate objective for an educational institution engaging in evidence-based decision-making is to progress toward three outcomes: (a) improved student learning, (b) increased equity, and (c) stronger accountability relationships among all stakeholder groups (Custer, King, Atinc, Reed, & Sethi, 2018). These three outcomes can be seen as the why of using SLO assessment data to improve institutional quality. In the text Using Evidence of Student Learning to Improve Higher Education (Kuh et al., 2015), the authors advanced five examples of good practice for using SLO assessment data to improve institutional quality, including (a) allowing institutions to differentiate outcomes at distinct institutional levels, (b) beginning with the ends in mind, (c) leveraging external processes, (d) linking assessment to internal processes, and (e) closing the loop (Kinzie et al., 2015). These five examples of good practice can be seen as the what institutions should be doing to successfully use SLO assessment data to improve institutional quality. This section will examine the five examples of good practice provided in Using Evidence of Student Learning to Improve Higher Education—the what—and show how each of these examples relates directly to the Brookings Institution’s three objectives for evidence-based decision-making—the why.
What and Why: Differentiate Outcomes at Distinct Institutional Levels. The first actionable way SLO assessment data can be used to improve institutional quality is by differentiating outcomes at various institutional levels. Community colleges are stratified organizations whose success can be measured at aggregated or disaggregated levels, including the institutional, program, and course levels (Kinzie et al., 2015). Successfully engaging in both the measurement and use of SLO data can allow institutions to pursue various improvements at targeted levels of the organization. For example, research by Kuh et al. (2014) showed course- and program-level SLO data leading to more changes in the areas of policies, programs, and practices. On the other hand, Kinzie et al. (2015) posit that institutional-level SLO data are being used more to strengthen connections between institutional goals and strategic planning, incorporate results in accreditation processes, revise institutional outcomes, improve student engagement and success, and improve teaching and learning. Being able to aggregate or disaggregate SLO assessment data to understand success at various institutional levels allows institutions to address the three objectives put forth by the Brookings Institution in the following ways. First, by differentiating outcomes at various levels of the institution, faculty and administrators can gain a better understanding of where gaps in learning are occurring (Custer et al., 2018). According to Hutchings, Kinzie, and Kuh (2015), both the University of California-Davis and the University of Maryland-Baltimore County have begun engaging in a process called “evidence-based pedagogies,” in which they are leveraging SLO data to uncover obstacles to student success and develop interventions. Second, the idea of increasing equity in higher education does not merely apply to institutions’ acceptance policies; rather, the issue of equity is just as important for students who are trying to complete their college journeys. By being able to disaggregate SLO assessment data to the course level, faculty can better address specific student needs and drive support for student academic services, which helps ensure all students have an equitable chance to succeed (Kinzie et al., 2015). Third, analyzing institutional performance at various levels can paint a clearer picture of accountability. For example, analysis at the institutional level may show students performing above a level deemed acceptable in their general education learning outcome categories. However, a disaggregated analysis could also show one particular program doing exceptionally well and another program underperforming. If only analyzed at the aggregated level, a great opportunity for institutional improvement could be missed.

What and Why: Beginning with the Ends. As Beld (2010) suggests, assessment results are not the end; rather, using the results to improve teaching and learning—and subsequently institutional quality—is the end goal of assessment. I argue that by not relying on SLO assessment data to improve institutional quality, institutions are failing to consider the true ends of the assessment process. For example, when attempting to characterize student outcomes, many institutions are still relying on traditional measures of institutional quality (e.g., retention, student-to-faculty ratio, employment status after graduation).
By relying on traditional measures, these institutions are attempting to make claims about the ends of the assessment process—improved SLO achievement—by proxy, using measures that rarely reflect student learning (Powell, 2013; Rhodes, 2015). Further, if institutions continue to gather traditional measures of institutional quality like the ones mentioned above, these data are not necessarily actionable and do not provide much insight into where or why students are failing to meet expected outcomes. Institutions must focus from the very beginning on how they will use the data collected. If the goal of an institution is to improve student learning, which most institutional mission statements reflect (Gaff & Meacham, 2006; Suskie, 2010), focusing assessment on SLOs and using those SLO data allows institutions to take a more direct path to that end (Kinzie et al., 2015). I believe beginning with the ends of the SLO assessment process in mind—using the SLO data to improve quality—will make the Brookings Institution’s first objective of improving student learning more attainable. Second, beginning with the ends of the SLO assessment process in mind can also increase equity for all students, as seen in an example from Juniata College. Based on their SLO assessment results, faculty at Juniata College were worried their students’ writing ability was not meeting expectations, and scores on the National Survey of Student Engagement subsequently indicated students at their institution were not as engaged in the writing process as students at peer institutions (Jankowski, 2011). Based on both sets of data, faculty developed new goals and curricular standards for their students, which they felt were more on par with the rigor of their peers (Jankowski, 2011). If students are held to lower standards of rigor, they may be at a disadvantage for subsequent opportunities in life. Therefore, this increase in academic rigor and expectations for students at Juniata College is an example of using SLO assessment data to increase equity in higher education. Lastly, beginning the SLO assessment process with the ends in mind allows for enhanced accountability to all stakeholders by gathering empirical and authentic academic artifacts from students. There seems to be a certain level of agreement—at least among accreditors and the federal government—that accountability for community colleges is contingent on evidence of student learning (U.S. Department of Education, 2019; Higher Learning Commission, 2019). Going directly to the source—SLO assessment data—to assess institutional performance, as opposed to using traditional proxy measures, allows community colleges to take the most direct route to accountability as laid out by both the government and accreditors.

What and Why: Leveraging External Processes. As discussed in previous sections, one of the best practices of engaging in assessment of SLOs is to be as unobtrusive as possible. For example, faculty are encouraged to embed SLO assessments into the classroom just as they would any other assignment. Embedding assessments within a course allows the assessment to serve as SLO data while also counting toward a normal course grade. In a similar vein, community colleges are already mandated to engage in the collection of SLO assessment data to satisfy accreditation requirements, so using these data to also develop quality improvement initiatives helps to decrease redundancy.
Results from NILOA’s survey of provosts revealed that the accreditation process was the most common reason cited for using SLO assessment data (Kuh & Ikenberry, 2009). Kinzie et al. (2015) argue that even though the accreditation process can sometimes make SLO assessment less meaningful, it can and should be used as a motivator for using the SLO data to make institutional improvements. To speak colloquially, using SLO assessment data to improve institutional quality can allow community colleges to kill two birds with one stone. Tying the idea of leveraging external processes back to the Brookings Institution’s three objectives for evidence-based decision-making is quite simple. The external process of accreditation is the most frequent reason SLOs are gathered on campuses (Blaich & Wise, 2011), while a second purpose of accreditation is to ensure institutions can exhibit evidence of student learning and attempt to improve student learning (U.S. Department of Education, 2019; Higher Learning Commission, 2019). Therefore, if an institution is improving student learning through SLO assessment data, it should inherently be fulfilling all three of the objectives of evidence-based decision-making. What and Why: Linking Assessment to Internal Processes. A major finding in the SLO assessment research suggests that normalizing the assessment process through organizational structures and culture is an extremely important, yet challenging, step for any community college trying to ensure the success of an SLO assessment initiative (Barrett, 2012; Beld, 2010; Chaplot, 2010; Colson, Berg, Hunt, & Mitchell, 2017; Head, 2015; Holzweiss, Bustamante, & Fuller, 2016; McCullough & Jones, 2014; Petrides & McClelland, 2007; Powell, 2013; Rosaen, Hayes, Paroske, & De La Mare, 2013; Somerville, 2008; Walser, 2015; Zubrow, 2012). As far as linking organizational structures to the SLO assessment process, Kinzie et al. (2015) suggest the creation of positions specifically for assessment purposes is a key first step in developing a sustainable SLO assessment initiative. For example, in a focus group conducted by Kinzie (2010), campus staff reported that the creation of an associate dean for student learning position was a catalyst in creating an ingrained culture of assessment at their community college. The associate dean was responsible for collaborating with existing committees that dealt with assessment, which allowed for a stronger link between all areas of the academic mission and the SLO assessment process (Kinzie, 2010). By linking to, or creating, internal structures in furtherance of the SLO assessment process, community colleges may subsequently create an enduring culture of assessment, which Kinzie et al. (2015) suggest is critical to sustaining any SLO assessment initiative. Just as leveraging external processes relates inherently to the three broad outcomes of evidence-based decision-making set forth by the Brookings Institution, linking assessment to internal processes is also inherently tied to these three broad outcomes. By linking assessment to internal processes, institutions are acting in furtherance of improving student learning overall (Kinzie et al., 2015). What and Why: Closing the Loop. The essence of my study lies within the idea of closing the assessment loop, defined by Kinzie et al. (2015) as “measuring the impact of the action taken to improve student learning and, the ultimate stage, gaining evidence of improved student learning” (p. 69).
What is important to remember is that assessment is never complete. Once closed, the assessment loop is then repeated in future iterations toward further improvement of institutional quality. The problem is very few institutions have achieved this goal (Banta & Blaich, 2011). While the goal of closing the loop has proven elusive for many community colleges, Kinzie et al. (2015) have established the following vital steps for this process: (a) taking time to reflect on assessment results, (b) documenting changes made based on results, (c) examining if the implemented changes proved to be successful. Linking the idea of closing the loop to the ultimate objective of evidence-based decision- making put forth by the Brookings Institution is, again, quite intuitive. First, by closing the assessment loop, institutions are establishing changes based on current SLOs with the express 31 goal of improving those outcomes. Second, by attempting to improve SLOs at the aggregated (institutional) level as well as at disaggregated (program and course) levels, institutions can better understand which student groups need more attention, thus enhancing equity. Lastly, by closing the loop and continuing to engage in the assessment process through subsequent iterations, institutions can show stakeholder groups empirical evidence of their institutional improvement efforts. What is Left to Know: The How To reiterate, the goal of this study is to understand how an institution got to the point of using SLO assessment data to improve institutional quality. To this point, I have discussed why institutions should take actions toward using SLO data to improve institutional quality along with what actions can be taken to advance toward this end. While the literature has addressed many issues of what to do and why to do it in the context of using SLO assessment data, I believe the most significant barrier preventing more community colleges from using SLO assessment data to improve institutional quality is a lack of understanding as to how this can be done. Overall, scholarship in the field of higher education is full of best practices. As presented in the previous section, there are even best practices for using SLO assessment data. Unfortunately, best practices are often general and unclear about how to enact the advice (Fullan, 2004) or full of what Argyris (2000) refers to as “nonactionable advice”. In other words, best practice literature often does a great job of answering questions of what and why, but how is a key question that is often overlooked and is precisely what I seek to understand. This section goes into greater depth explaining the frameworks that were introduced in the first chapter and how they were used to interpret collected data. First, I discuss 32 the conceptual framework from Bolman and Deal’s (2017) Reframing Organizations highlighting areas of relevance to this study. Second, I discuss the theoretical framework from Ivancevich et al. (2014) Organizational Behavior and Management, again highlighting areas of relevance to this study. Conceptual Framework: The Four Frames. As evidenced by the previous review of the literature, understanding the issue of how SLO assessment data are used effectively by community colleges to drive institutional improvement is a multifaceted task. This study seeks to understand the organizational structure of a particular institution and how the various levers of power are engaged to drive change. 
In order to gain a better understanding of how one institution is successfully using various facets of organizational power to implement the use of SLO data-driven institutional improvements, I rely on Bolman and Deal’s (2017) Reframing Organizations, specifically the Four-Frame Model of understanding organizations (see Figure 1). This model was chosen because its multifaceted approach lends itself to the complex nature of HEIs, which have competing interest groups that must interact and work toward shared positive outcomes. In order to understand how some institutions are more effective at resolving their issues and advancing toward shared positive outcomes, it may prove helpful to examine a single institution from multiple perspectives. Bolman and Deal (2017) plainly state organizations are complex, surprising, deceptive, and ambiguous. Moreover, they are made up of people whose behaviors can be incredibly difficult to predict (Bolman & Deal, 2017). Because of their complexity and unpredictability, the authors believe organizational issues should be examined from multiple viewpoints. The authors refer to these viewpoints as the “four frames” and posit organizational issues can be understood through these distinct frames. Specifically, the authors define a frame as “a coherent set of ideas or beliefs forming a prism or lens that enables you to see and understand more clearly what’s going on in the world around you” (Bolman & Deal, 2017, p. 42). The four frames are structural, human resource, political, and symbolic. In the following subsections I introduce the four frames and discuss ways each frame might help to advance understanding of how some community colleges may effectively use student learning outcome data to advance institutional improvement. Figure 1 The Four-Frame Model Note. An overview of the Four-Frame Model. From “Reframing Organizations: Artistry, Choice, and Leadership,” by Bolman & Deal, 2017, p. 20. Structural Frame. The structural frame emphasizes organizational goals, roles, rules, and hierarchy (Rice, 1991). Two undergirding principles of the structural frame are differentiation—who is responsible for what and when—and integration—how individual efforts interact to ensure harmony (Bolman & Deal, 2017). To understand complex issues, such as the sustained use of student learning outcome data, it will be of utmost importance to also understand the organizational structure of the institution in question. While structure is often seen as needless bureaucracy or regulation, Bolman and Deal (2017) argue that if structure is overlooked, organizations may waste time and resources in misguided efforts. In the context of this study, understanding the structural frame will require knowledge of the two main principles: differentiation, or the division of labor, and integration, or the coordination of diverse efforts. Regarding the division of labor, questions such as who is responsible for determining assessment measures, administering and collecting SLO data, analyzing SLO data, communicating results, deciding upon improvement initiatives, establishing lines of communication to stakeholders, and defining success are all structural processes to be explored. Along the lines of coordinating efforts, who has formal authority over goals, policy, and procedure; who delegates responsibility; and how and whether teams are utilized are important organizational mechanisms to explore.
The previously stated areas of differentiation (division of labor) and integration (coordination of efforts) are not an exhaustive list; rather, they are a sample of some important concepts within the structural frame that will need to be understood. Human Resource Frame. In the human resource frame, the emphasis is on the individuals who make up the organization. Organizations need people for the expertise, talent, and energy they bring, but people also need organizations for the intrinsic and extrinsic rewards they provide (Bolman & Deal, 2017). In an ideal scenario, the fit between the organization and the employee is a symbiotic relationship in which both parties benefit. However, there are times when this fit is poor, in which case one or both parties suffer. What may be most relevant to this study from the human resource frame is the idea of intrinsic motivation. Bolman and Deal (2017) introduce several theories of motivation, each of which shares a common thread that may be best articulated in Herzberg et al.’s (1959) two-factor theory of motivation. Herzberg et al. (1959) found extrinsic rewards, such as salary and working conditions, could contribute to dissatisfaction, but they did little to enhance satisfaction. Conversely, intrinsic rewards, like a sense of achievement, responsibility, and autonomy, significantly contributed to satisfaction (Rice, 1991; Cherwin, 2013; Bolman & Deal, 2017). In practice, this frame may take the shape of what Bolman and Deal (2017) refer to as “high-involvement” strategies. These high-involvement strategies may include training, empowerment of the workforce, staff development, teaming, diversity, and attention to the needs of the workforce (Bradbury, Halbur, & Halbur, 2011; Bolman & Deal, 2017). Within the context of this study, participants will be asked about SLO assessment training, incentives for being involved in the process, feelings of ownership and autonomy in the process, creative control, how groups and teams are created and supported, how differences of opinion are handled, the importance of emotional intelligence and regulation in teamwork, and the recognition and appreciation of work. Political Frame. According to Bolman and Deal (2017), “politics is the realistic process of making decisions and allocating resources in a context of scarcity and divergent interests” (p. 178). This pairing of scarce resources with divergent interests creates a steady stream of conflict within organizations, and this conflict inevitably leads to a focus on obtaining power through bargaining, negotiating, and the formation of coalitions (Omisore & Abiodun, 2014). Bolman and Deal (2017) discuss how the political frame can be used to serve personal agendas, often to the detriment of the organization, but they also acknowledge politics can be a force for positive organizational change and effectiveness as well. The tactful politician engages in four key skills: agenda-setting, mapping the political terrain, networking and building coalitions, and bargaining and negotiating (Bolman & Deal, 2017, p. 204). Because my study focuses on an organization that has succeeded in reaching its goals pertaining to the use of SLO data, I seek to understand if and how these four political skills were leveraged. Some potential areas of inquiry include: How and by whom was the agenda (goals and schedule) for using SLO assessment data developed? Was political pushback expected and, if so, were counterstrategies developed? What role did informal communication play in navigating political pushback?
Were there any internal or external influencers (not initially involved) called upon to help in the process? Were any relationships identified as key to obtaining intended results? How were relationships with potential opponents broached? What were key negotiations or bargains struck during the decision-making process? The political frame is fascinating, but participants may be less willing to provide honest feedback, due to the personal and informal nature of exercising political power. Symbolic Frame. Deriving largely from the disciplines of sociology and organizational theory, the symbolic or cultural frame of Bolman and Deal’s (2017) reframing organizations makes the argument that what is most important is not what happens, but rather what it means. This 37 assumption leaves room for individual interpretation of experiences, which can and often do differ. Organizational symbolism and or culture is created and continues to change over time, and is made up of values, norms, and shared beliefs, which help to create a shared sense of identity, purpose, and meaning (Martin, 2012). These values, norms, and shared beliefs help quell anxiety during times of uncertainty, and act as the tie that binds, uniting people across the organization to work toward shared goals (Bolman & Deal, 2017). As explained in Reframing Organizations (2017) organizational symbols can manifest in many ways, but some common forms present in HEIs are myths, vision, and values; rituals or ceremonies; and metaphor and humor. Myths are shared dreams, which help to encapsulate an organization’s values. These values act as guardrails to planning an organization’s future or vision (Bolman & Deal, 2017). Rituals and ceremonies are recurring organizational activities that carry with them meaning by putting into action the organization’s values (Martin, 2012). Understanding how the institution in this study relies on symbolism and culture to implement and sustain the use of SLO assessment data to improve institutional quality will be extremely important. Potential areas of inquiry for the symbolic frame include but are not limited to how the purpose of using SLO assessment data was established or defined? Who was integral in establishing the narrative of purpose? Was there any pushback or alternative narratives for this purpose and if so, how were those addressed? How does the use of SLO assessment data connect to the institution’s mission and vision, and did the mission or vision need to be changed to reflect this idea? Were any events or ceremonies created (i.e., assessment day for faculty), if so, how were these decided upon? Are any comparisons or metaphors used to help make meaning of the process of using SLO assessment data? 38 Integrating the Frameworks. While each of the four frames in Bolman and Deal’s (2017) model provides unique and important views from which to make sense of organizational issues, what is important to understand is these frames do not exist in a vacuum. Problems facing organizations, and the actions taken to address them, may be interpreted in different ways by different people. 
For example, Bolman and Deal (2017) provide an example of how the activity of decision making may be seen through the structural frame as a rational sequence to produce a correct decision; through the human resource frame as an open process to produce commitment; through the political frame as an opportunity to gain or exercise power; and through the symbolic frame as a ritual to confirm values and provide opportunities for bonding. While Bolman and Deal (2017) do not prescribe specific answers to organizational issues, they do suggest certain frames are matched more effectively to certain situations depending on the contextual conditions. For example, if individual commitment and motivation are essential, the human resource and symbolic frames may work best while the structural and political frames may not; if technical quality of the decision is important, the structural frame may work better and the other three may not; if high levels of ambiguity and uncertainty exist or if conflict and scarcity of resources are prevalent, the political and symbolic frames may work best, while the structural and human resource frames may not (Bolman & Deal, 2017). Bolman and Deal (2017) argue because organizations are complex entities with simultaneous events occurring, which can have multiple interpretations, in order to have a holistic view of any organizational issue, one must attempt to understand the issue from multiple vantage points. Each individual frame has the potential to provide valuable insight, but taken in isolation, each frame also presents possible blind spots in understanding. 39 Theoretical Framework: A Model of Organizational Change and Development. The previous section about the conceptual framework discussed how actors within an organization use the levers of power to implement change. While the conceptual framework helps provide an understanding of how organizational power structures can be used to drive change, it does not necessarily further our understanding of the change process specifically. In order to thrive in what is seen as an unprecedented time of change (Kezar, 2014), HEIs must be able to proactively lead change in response to both internal and external forces (Andrews, 2017). This process of employees working to move an organization toward intended, meaningful, and lasting change is considered organizational development (Ivancevich et al., 2014). Like many other topics discussed in this study, organizational change and development is a vast field, filled with many different theories. In order to understand how the community college in this study engaged in the change process, I will be relying on Ivancevich et al.’s (2014) model of organizational change and development (see Figure 2) as a theoretical framework. The reason this model was chosen is because it looks at change in a holistic manner. Some models of change take a more myopic view, simply focusing on the occurrence of a given change initiative. In this model, special focus is put on both root causes or “forces” of change, as well as how the success of a given change initiative is evaluated. As was briefly discussed in the introduction, because community colleges are compelled by several external stakeholder groups to use SLO assessment data to improve institutional quality, part of the Ivancevich et al. (2014) model is not entirely relevant for this study. The top four boxes in Figure 2 are not part of my inquiry as they have already received attention within the context of SLO assessment. 
In the literature review I covered the forces for change (e.g., accountability and accreditation), the fact HEIs are not currently meeting expected performance 40 outcomes, and how accreditors and government bodies already determined both the problem and appropriate methods for addressing the problem. Because the first four sections of the model are beyond the control of the institution, this study will focus on how the institution addresses the remaining three sections, which are impediments and limiting conditions, implementation of the method, and program evaluation (Ivancevich et al., 2014). Figure 2 A Model of Organizational Change Note. A model of organizational change and development. From “Organizational Behavior and Management,” by Ivancevich, Konopaske, & Matteson, 2014, pp. 514. 41 Impediments and Limiting Conditions. Regardless of the specific change initiative in question, the context of the organization will play a role in the success or failure of implementation. Ivancevich et al. (2014) outline four context-specific impediments to change: resistance to change, leadership climate, formal organization, and organizational culture. By gaining an understanding of how the proposed institution in this study recognized and dealt with impediments and limiting conditions when implementing the use of SLO data to improve quality, subsequent institutions may learn how they were successfully overcome. Therefore, these four context-specific impediments inform many of the questions asked of the participating institution. Resistance to change. Any situation involving change and uncertainty around humans’ routines and habits has the potential to create feelings of fear and anxiety for those involved (Grupe, & Nitschke, 2013). In response to anxiety about change, individuals and even the embedded structures of the organization have the tendency to resist. Ivancevich et al. (2014, p. 512-513) outline five strategies for dealing with resistance to organizational change: (a) Individuals and organizations must have a clear reason to change, (b) The more people involved in the change initiative at multiple levels of the hierarchy the higher the likelihood of success, (c) Communication must be an ongoing effort, (d) Identify and help guide champions or supporters of change within the organization, and (e) Create a learning organization that is resilient and flexible. Understanding if and how the proposed organization used any or all five strategies when attempting to overcome any resistance to change will be an important part of understanding the change process. 42 Leadership climate. The second impediment to change outlined in the Ivancevich et al. (2014, p. 527) model is leadership climate, which is defined as the nature of the environment created by leadership style and the administrative practices of managers. The authors stress buy-in from mid-level managers as a key component to the success of any change initiative. In terms of this study, I will need to understand how the decision to change was made and communicated. Finding out which leaders helped or hindered the change process will also be key to understanding the change process. Formal organization. The third possible impediment to organizational change outlined by Ivancevich et al. (2014, p. 528) is the formal organization, which includes the philosophy and policies of top management, along with legal precedent, organizational structure, and systems of control. 
The authors emphasize that a change initiative in one of the previously listed sources must be compatible with all other sources, or else resistance is likely to occur. In terms of this study, understanding where current policy and procedure may have acted as a barrier to using SLO assessment data in improving organizational effectiveness is important. For example, when existing policies and procedures were in opposition to the change, did those existing policies and procedures remain or were they abandoned in the name of change? Organizational culture. The final potential impediment to organizational change presented in Ivancevich et al.’s (2014, p. 528) model is organizational culture, which they define as the pattern of beliefs based on group norms, values, and informal activities. Ensuring that any change initiative implemented considers how it could react with the prevailing cultural conditions within the organization is an 43 important aspect of the change process. Understanding how the culture of the organization may act as a barrier to change may prove to be difficult because organizational culture is based on employees’ perceptions and how those perceptions manifest in beliefs, values, and expectations (Ivancevich et al., 2014, p. 39). Because understanding individuals’ perceptions is difficult, overtly asking participants about their perceptions of the organizational culture, and how those perceptions may have informed their behavior during the change process is necessary. While no two institutions are the same, I believe any community college trying to implement the use of SLO assessment data to improve institutional effectiveness will see similar impediments to change. My hope is by clearly articulating how the institution in this study addressed and overcame these four categories of impediments, subsequent institutions will be able to draw parallels and overcome these impediments in similar ways. Implementing the Method. Understanding how the proposed institution completed the second part of Ivancevich et al.’s (2014) model, implementing the method, will be another important detail for subsequent institutions to learn from. When implementing the change process Ivancevich et al. (2014, p. 528-529) suggest focusing on the dimensions of timing and scope. The timing for engaging in a change initiative should depend on the groundwork that has—or has not—been laid for the initiative as well as the operating cycle of the organization. Therefore, beginning a change initiative right before a particularly busy period for the organization would not make sense. Understanding how the timing for implementation was decided upon by decision-makers will be another important line of inquiry for this study. For example, knowing how the timeline and, specifically, the start date of the initiative was developed, and what considerations went into planning the timeline will be critical. 44 The scope of change refers to the scale at which the initiative will reach. Organizations must consider if the change initiative will be implemented throughout the entirety, or if it will be phased into smaller organizational units. Organizations implementing a change initiative should do so by introducing the change into successive small-scale units where the organization can learn from mistakes and experiment with variations of the intervention (Ivancevich et al., p. 528). Another important aspect of scope is the potential of experiencing scope creep. 
Scope creep refers to changes made to original goals, often to satisfy stakeholders, which may result in substantial changes to the initiative (Mikkelsen, 2018). For this study, I seek to discover how the scope of the change initiative was determined and carried out. Also, regarding scope, I examine whether the phenomenon of scope creep was an issue for this change initiative, and if it was an issue, how it was dealt with by the organization. Program Evaluation. The final section of Ivancevich et al.’s (2014) model of organizational change and development is the program evaluation phase. While organizations must identify change to be made and follow through with implementing the change, the organizational change and development process does not end there. After implementing a change initiative, organizations must gather feedback in the form of data to measure the desired result. After initial feedback is gathered, organizations must determine if the desired results were obtained, and if not, what adjustments must be made to move toward the desired result. Revisions to the change initiative are then implemented and reinforced in subsequent iterations. Understanding how and when success was defined for the community college in this study will be another very important source of information. Working Hypothesis Examining how the community college in this study managed to implement and sustain the use of SLO assessment data to improve institutional quality will yield evidence of influence from both Bolman and Deal’s (2017) Reframing Organizations and Ivancevich et al.’s (2014) model of organizational change and development. This evidence will likely show elements of the frameworks either enabling or acting as barriers to using SLO assessment data to improve institutional quality. I hypothesize the community college in this study will show evidence of purposeful integration of some or all of Bolman and Deal’s (2017) four frames when enacting their SLO assessment change initiative. I also hypothesize the data will show an understanding amongst assessment leaders—either stated or implied—of the three distinct phases of organizational change and development in Ivancevich et al.’s (2014) model. While the particular community college and its key assessment leaders may not use the exact language from both frameworks, I predict they will exhibit an understanding that the issue of using SLO assessment data to improve institutional quality is best approached by integrating strategies from multiple frames. Summary I began this section with the goal of clarifying some key terms from the assessment literature—accountability, accreditation, and assessment—which are often conflated. When defining what is meant by assessment, particular emphasis was placed on the form and function of the assessment process. The use of SLO assessment data was discussed with particular focus on why institutions are using SLO assessment data and what it looks like for an institution to use SLO assessment data. While existing literature focuses on the why and what questions of using SLO assessment data, not much information exists on specifically how community colleges successfully implement and sustain policies and practices for using SLO assessment data to improve institutional quality.
Of the literature that does exist on the questions of how to use SLO assessment data to improve institutional quality, much of it is general and not extremely clear on how to specifically enact the advice, or what Argyris (2000) terms unactionable advice. Lastly, the conceptual and theoretical frameworks were discussed, and it was explained how the four distinct frames along with the model for organizational change and development provide unique yet integrated lines of inquiry for understanding how institutions successfully use SLO assessment data to improve institutional quality. 47 CHAPTER 3: METHODS The purpose of this study is to gain an understanding of how a particular community college successfully uses SLO assessment data to improve institutional quality. The goal of the following chapter is to establish the methodological means for carrying out the study, along with providing a justification for why the methods were chosen. Specifically, I will discuss the research paradigm, research design, limitations, and my positionality as a researcher with personal connections to the study. Research Paradigm and Methodology The concept of community colleges using SLO assessment data to improve institutional quality is a fairly new one (Kuh et al., 2015; Blaich & Wise, 2011; Kuh et al., 2014). When dealing with a new phenomenon, researchers must make informed decisions on the most appropriate research paradigm and subsequent methodology (van Esch, & van Esch, 2013). The subject matter in this research, community college organizations and how they change, is quite subjective. Due to the subjective nature of the phenomenon in this study, I believe the appropriate research paradigm in this case is interpretive. Interpretivism begins with human interpretation as a starting point for developing knowledge about the social world (Prasad, 2005). The intent of interpretivist work is to understand how people feel, perceive, and experience the social world, with the goal of gaining meaning and motivation for individuals’ behaviors (Chen, Shek, & Bu, 2011). Understanding the perceptions and motivations driving individual behavior is important because the world is complex and ever-changing, and by framing studies through an interpretive lens, researchers begin to understand how the same phenomenon can be viewed differently by various individuals involved (Glesne, 2011). I want to make clear that with this study I am not merely seeking to find out what actions took place, when they took place, how 48 they took place, and what the result was. Rather, with this interpretivist methodology I am trying to discover the untold and sometimes counter-narratives behind this process. By seeking each participants’ unique interpretation of how this initiative was undertaken, I hope to gain a more holistic understanding of how this institution successfully grappled with this SLO assessment initiative. Qualitative methodology is often used when conducting interpretivist research because it can provide rich and contextualized explanations of complex social phenomena (Thomas, 2003; Willis, 2007). This research seeks to explain the context by which a particular community college experiences a complex social phenomenon (i.e., using SLO assessment data to change and improve institutional effectiveness). Based on the fact this study is being framed within an interpretivist paradigm, qualitative research methodology is the most appropriate means of data collection and analysis. 
Research Design Again, the central problem this study aims to address is the fact that, while they are compelled to do so, not many community colleges are effectively using SLO assessment data to improve institutional quality. Because there are not many examples of institutions doing this in practice, and because the literature around using SLO assessment data to inform practice is also scarce, I feel it is most useful to attempt to gain a deep understanding of an institution who is using SLO assessment data to improve institutional quality successfully. Case Study According to Yin (2009), when determining a methodological approach, one should consider the following three conditions: (a) the type of research question, (b) the control a researcher has over the events, and (c) the focus on contemporary versus historical phenomena. 49 The research question in this study focuses on the question of how a particularly successful community college uses SLO data to improve institutional quality. According to Yin (2009), the question of how is best studied using case study. As for the control I have over the events in question, because this study is examining events an institution has already experienced, I have no control over those events. Yin (2009) would, again, suggest case study when a researcher has no control. Lastly, the use of SLO data to improve institutional quality is a very contemporary issue, for which Yin (2009) also suggests the use of case study. The answers to Yin’s (2009) three questions all suggest an exploratory case study research design is most appropriate. An exploratory case study investigates phenomena, which are characterized by a lack of existing research, and that is precisely what Hamill (2015) says is the current situation around the topic of using SLO assessment data to improve institutional quality. Case studies can illuminate both the uniqueness and commonalities of a given situation (Stake, 1995), therefore, by gaining a deep understanding of how the process of using SLO assessment data to improve institutional quality is being done successfully at one community college, other community colleges may learn from these successes. Single-Case vs. Multiple-Case Study There are many factors to consider when designing a case study, but one fundamental consideration is the number of cases to be considered. There are pros and cons to both single and multiple-case study designs, and determining which design is best requires a consideration of multiple factors. For this study, one major factor in settling on a single-case study design was the fact that there are few community colleges who are known to be successfully using SLO assessment data to improve institutional quality, so finding multiple cases would have been a big challenge. A second reason for choosing a single-case study design is because this design is 50 better than multiple-case design when studying groups of people, which is what an institution (e.g., a community college) is (Gustafsson, 2017). Unit of Analysis Another crucial step in qualitative research is determining the unit of analysis. According to Yin (2009), the unit of analysis will be evident after the research question or questions are determined. The overarching research question in this study is, “how did one particularly successful community college implement and sustain the use of SLO assessment data to improve institutional quality?” Based on this question, the primary unit of analysis for this study is the institution. 
However, institutions themselves do not act; rather, the individuals who make up an institution are what determine its success or failure. Because the institution is not a sentient being, it is necessary to consider sub-units of analysis, including relationships between members of the assessment committee and individual experiences. The exploration of sub-units of analysis along with the global unit (i.e., the institution) is what Yin (2009) refers to as an embedded case study. While overarching conclusions are drawn regarding the institution, it is important to understand how individuals worked in collaboration to produce successful institutional outcomes. For this study, there could be multiple sub-units of analysis (e.g., departments or committees), and depending on the sub-unit chosen, a completely different study would emerge. I have made the conscious decision to use individuals as the sub-unit of analysis. I recognize that I am not accounting for potentially powerful motivating forces at various other sub-units, but addressing all other possible sub-units of analysis is beyond the scope of this study. Site Selection and Sample In this study the population from which to sample included all community colleges in the United States that are currently using SLO assessment data to improve institutional quality. Purposive sampling is a common technique used in qualitative research to identify information-rich examples where participants are particularly knowledgeable about a phenomenon of interest (Patton, 2002; Creswell & Plano Clark, 2011). Specifically, the purposive sampling technique I use is intensity sampling, which involves drawing on rich examples of the phenomenon that are not necessarily unusual cases (Patton, 2002). While generalizability is not always the intended goal of qualitative research, I preferred to find an institution that was not an extreme outlier in terms of size, program offerings, or mission. I determined the selected institution should be public because it is typically the case that public institutions provide a wider array of program offerings. As for institutional size, the institution’s enrollment needed to be close to the average for public community colleges in the U.S., which according to the Community College Review as of 2023 was 5,655 students (“Average Community College Student Size”, n.d.). Regarding program offerings, along with career and technical programs, the community college selected had to have a broad selection of general education programs. Also, the institution’s mission had to explicitly include student learning as one of its intended goals. Lastly, the institution for this study had to be recognized within the field of higher education as a leader in SLO assessment practices. To establish this criterion, I relied on the National Institute for Learning Outcomes Assessment (NILOA). NILOA has great cachet within the field of higher education and holds strategic partnerships with every regional accreditation organization as well as many other important organizations (e.g., American Council on Education, Association of American Universities, American Association of Community Colleges, Association of Public and Land-grant Universities, Association of American Colleges & Universities, State Higher Education Executive Officers Association) (Partner and Collaborating Organizations, n.d.).
NILOA recently developed the Excellence in Assessment (EIA) designation, which is awarded 52 annually to institutions who, “successfully integrate assessment practices throughout the institution, provide evidence of student learning outcomes, and use assessment results to guide institutional decision-making and improve student performance.” (Excellence in Assessment, n.d.). The participating community college was selected from the group of community colleges who have received this designation since its inception in 2016, and who fulfilled the additional criteria previously mentioned. Data Collection and Analysis Data Collection. After an institution was selected, I began the process of collecting data. Because the current study is interpretivist, which seeks to understand individuals’ subjective experience, the second means of data collection was participant interviews. Oltmann (2016) claims interviews are the most direct interaction between the researcher and participant, and the best way to gain another person’s perspective, which is why interviews will be such an integral part of this study. I decided to start with 7 participants for interviews, with the possibility of interviewing more if they were able to produce pertinent information. Participants were then selected based on consultation with the head of the assessment committee. I made it clear to the committee chair I was looking to gain perspective from individuals who could speak to both the successes of this initiative as well as the shortcomings. Before conducting any of the interviews, I crafted a semi-structured interview protocol. The protocol included several general guiding questions meant to get the participant thinking about the key challenges to and catalysts for success. The questions also focused on Bolman and Deal’s (2017) four frames (see Appendix A). Pilot testing was conducted with a few assessment coworkers at my home institution to help bolster validity and ensure the usefulness of the 53 protocol. The piloting process proved to be a useful exercise and led to several changes in the wording of the questions to address ambiguity. Once pilot testing was complete, I moved on to the actual participant interviews. Participants for the interviews included faculty, staff, and administrators involved in the development and implementation of the SLO assessment process. Interviews were roughly one- hour in length, and I asked participants to tell their story of how the college successfully implemented and sustained the use of SLO assessment data in the institutional improvement process. By asking such a broad question in the beginning, participants were free to highlight what they felt were the most important factors in carrying out this initiative. Also, I asked probing follow up questions along with questions from the interview protocol as needed. Microsoft Teams software was used to transcribe the audio of all interviews conducted. Data Analysis. When analyzing data collected in the participant interviews, I relied on interpretive content analysis, which is defined as, “making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use” (Krippendorff, 2004, p. 18). This methodology differs slightly from basic content analysis in the sense that basic content analysis uses deductive coding categories, relies on literal coding, and pays little attention to context or meaning making (Drisko & Maschi, 2016). 
Interpretive content analysis is neither merely literal nor descriptive, allowing researchers to seek both precursors and consequences of interactions, from which a holistic understanding of cause and effect along with the explicit content of the interactions may be gained (Drisko & Maschi, 2016). However, with this interpretation comes possible threats to reliability, which require extra methodological rigor (Baxter, 1991). Based on Drisko and Maschi’s (2016) methodological suggestions, the analysis of interview data began 54 with generating exploratory or emergent codes based on the content of the data and with special consideration to its meaning in context. A code list of relevant topics was then developed from these emergent codes. Several of Saldaña’s (2013) coding methods were considered in first cycle coding including grammatical, elemental, and affective methods. After first-cycle coding was completed, I conducted code-mapping as a means of organizing the data as well as a data auditing process. Lastly, I conducted second-cycle coding methods including focused coding, axial coding, and theoretical coding. Focused coding takes what was broken down in first cycle coding and fits it together in large categories of data that make the most analytic sense (Charmaz, 2006). The next step in the second cycle coding process is axial coding, which helps to remove redundancies in codes and to expound upon properties and dimensions of more dominant coding categories (Saldaña, 2013). The final phase of second cycle coding is called theoretical coding. Theoretical code acts as a metaphorical spine integrating the “core category” to the rest of the categories of codes by specifying possible relationships between them and establishing a narrative theoretical direction (Saldaña, 2013). To help with organizing the coding process, I will be relying on NVivo software. Criteria for Research Quality In the field of academic research, qualitative, particularly interpretive qualitative works, often come under intense scrutiny surrounding the reliability and validity of their methodological practices. To effectively address methodological concerns all researchers should address five criteria of trustworthiness in qualitative research including credibility, transferability, dependability, confirmability, and reflexivity (Korstjens & Moser, 2018). This section will address each of these five criteria of research quality in detail. 55 Credibility Credibility in qualitative research can be equated with internal validity for quantitative research, in the sense that both address the basic question, “is the study measuring what it purports to be measuring” (Lincoln & Guba, 1985)? To address this concern, I rely on member- checking and peer review. Member checking is a method for establishing credibility in qualitative research, in which the researcher feeds back data, analytical categories, interpretations and conclusions to participants to check for accuracy (Lincoln & Guba, 1985; Sim & Sharp, 1998). I engaged in member checking at two distinct points of the study. First, I sent all transcripts of interviews back to each participant for them to ensure accuracy. Second, I sent my list of coded themes to participants to, again, ensure accuracy. Lastly, I have asked one of my Ph.D. classmates to act as a peer debriefer for this study. This person acted as an outside source to critique and question the appropriateness of all aspects of this research. 
Transferability The goal of qualitative research in general, and of this study in specific, is not necessarily to generalize the findings to all other community colleges. However, I would like to maximize the possibility for other institutions to benefit from this undertaking, and because of this I will be looking to enhance the transferability of the study. Transferability is defined as, “the degree to which the results of qualitative research can be transferred to other contexts or settings with other respondents” (Korstjens & Moser, 2018, p. 121). Transferability can be obtained by using thick descriptions, which includes not only detailed descriptions of behaviors and experience, but also of context, so the behaviors and experiences become more meaningful to the readers (Lincoln & Guba, 1985; Sim & Sharp, 1998). By accurately transcribing interviews and engaging in member 56 checking, I am confident I have attained these thick descriptions of behaviors and experiences of participants in this study. Dependability and Confirmability Because the next two criteria for trustworthiness of qualitative research are so closely aligned and addressed through similar means, I address both in the same section. Dependability involves the idea of consistency and is addressed through adhering to accepted standards of methodological design, while confirmability is centered around researcher neutrality and is gained through adhering to the process of data analysis (Lincoln & Guba, 1985). Both of these ideas are addressed through keeping a precise audit trail during the research process, which will include strict documentation of procedures, interviews, and decisions made during data collection and analysis procedures (Imel, Kerka, & Wonacott, 2002; Merriam, 2002; Yin, 2009). I journaled my actions during the data collection and analysis process which included contemporaneous field memos to ensure the dependability and confirmability of this research. Reflexivity The fifth and final criteria for trustworthiness of qualitative research is reflexivity, which is the process of critically reflecting upon one’s own bias, and relationship to respondents and to the research as a whole (Lincoln & Guba, 1985). As a researcher I am not separate and apart from my study, rather I am in and of the study. It is possible my views shifted based on my exposure to and increased understanding of the topic at hand. To account for this, I reflected on my positionality (see positionality statement) and kept a reflective journal during the data collection and analysis procedures to account for the evolution of my views during this study. 57 Limitations In my opinion, there is no such thing as perfect research. Even in studies with extreme methodological rigor, biases—both implicit and explicit—can creep into and shape the outcome of the project. While I am doing my best to limit the potential bias involved in this study, I feel it is best to address areas where these biases are likely to be found. One area of particular concern was the selection of interview participants. The primary contact for this research was the administrator in charge of assessment at the selected institution, and I relied on this person to point out the handful of individuals who were most influential in implementing the use of SLO assessment data to inform institutional improvement. 
Both my theoretical and conceptual frameworks (Bolman & Deal, 2017; Ivancevich et al., 2014) posit that with any organizational change initiative—in this case, implementing the use of SLO assessment data to inform institutional improvement—come impediments or barriers to the successful implementation of the change. I worry the participants chosen for me may have had views closely in line with the assessment administrator’s and may not have been as able or willing to discuss the barriers to implementing the use of SLO assessment data to improve institutional quality. To address this limitation, I was transparent with the assessment administrator from the beginning by stressing the importance of understanding how the barriers to change were overcome. A second limitation of this study comes from the single-case study methodology. While I have established the rationale for why a single-case study was chosen in the research design section of this paper, single-case studies are not perfect. Herriott and Firestone (1983) claim evidence from multiple case studies is often more compelling. I acknowledge the inherent limitation of the single-case study methodology and see this study as a starting point from which future multiple-case study research can build. Positionality I believe attempting to completely remove all personal influence from research, be it qualitative or quantitative, is patently impossible. Instead of trying to excise my personal experiences, views, and values from this study, I attempted to openly state and embrace them, for as Denzin (1986) stated, “Interpretive research begins and ends with the biography and self of the researcher” (p. 12). This section is meant to discuss some aspects of my life that help inform my direction as a researcher. One aspect of my life that has informed my positionality has been my educational journey as a student. While the entirety of my experience in higher education has been in social science, both my bachelor’s and master’s degrees were steeped in positivist epistemology. I did not realize it at the time—typical of many individuals of a majority group—but opposing views on the nature of knowledge and experience were waiting for me to discover them. During the first semester of study in my doctoral degree I realized how subjective even the most scientifically rigorous research can be. My belief that the same reality and mechanism for knowledge creation exists for everyone and can be quantified, analyzed, predicted, and repeated began to waver after reading Sipe and Constable (1996). Since then, I have become more comfortable with the fact that each of us experiences the world and gains knowledge differently based on our own personal journeys. A second influence on this research is the fact that I currently work for a community college both as an instructor and as a coordinator for assessment efforts. Like many others in higher education, the institution I work for is not yet where we need to be in terms of using SLO data to inform practice. The relevance of this challenge is part of why I was drawn to this topic. I believe that, if done carefully, assessing SLOs can have a positive impact on institutional quality; however, I do not believe that simply engaging in more student assessment is a magic elixir to heal all that ails the field of higher education.
I hope my personal experience in both the subject of SLO assessment and the community college sector will help me to gain a more holistic perspective of how community colleges can use SLO data to improve institutional quality. Summary Chapter 3 provided an overview of the methodological considerations for this project. Specifically, I discussed the research paradigm, research design, data collection and analysis, and data quality assurance. Also addressed in Chapter 3 were some of the limitations for this study. Lastly, I address the issue of researcher positionality in an effort to openly state and contextualize my personal biases that will undoubtedly influence this work. 60 CHAPTER 4: BACKGROUND INFORMATION The intent of Chapter 4 is to provide critical background information on the institution at the center of this case study in order to build the necessary context for understanding and interpreting the subsequent data analysis and conclusions. In order to protect the anonymity of the institution and the participants of this study, I have chosen to use the pseudonym Greenville Community College (GCC) for the institution. The specific name of the state system of higher education will be anonymized and referred to as SHES. The regional accreditor for this institution–Middle States Commission on Higher Education–is far enough removed from GCC, so it will be referred to as MSCHE. Lastly, the institutional demographic statistics presented in this section will not be exact numbers. I chose to approximate numbers to further protect the anonymity of GCC. There are three major sections of Chapter 4: first, is an overview of the institution; second is a description of the participants and their roles at the institution; and third is an interpretation of why and how GCC chose to recast assessment and get to a point where they use SLO assessment data to drive institutional improvement. Institutional Overview As was mentioned in the Site Selection and Sample section of Chapter 3, I tried to be intentional in selecting the institution for this case study. The goal was to find an institution that excels in the use of student learning outcome data as evidenced by their NILOA distinction, but also represents an “average” community college in terms of enrollment, program offerings, and mission. While the intent of this study is not necessarily to be generalizable, it might be helpful for readers and practitioners to be able to find some commonalities between the institution being studied and other institutions going through similar issues. 61 Based on the selection criteria mentioned in the previous paragraph and outlined in greater detail in Chapter 3, I chose to perform this single case study at Greenville Community College (GCC). Permission to use GCC in my study was granted after I initially reached out to my informant through email. Located in the eastern United States, GCC falls under the Carnegie Classification of Associate’s Colleges: High Transfer-High Nontraditional and as of fall 2020, was home to between 4,000-7,000 students (Carnegie Classification, n.d.). GCC offers Associate-level degrees or undergraduate Certificates in between 50-80 different programs with the most popular programs being Liberal Arts and Sciences-Associate’s Degree, Business Administration-Associate’s Degree, and Registered Nursing-Associate’s Degree (U.S. Department of Education, n.d.). 
Participant Overview

Interviews were conducted with seven participants who were either current or former employees of GCC and who had extensive knowledge of and experience with the initiative to use SLO data to improve institutional quality. The participants came from varied departments on campus and included faculty members of various disciplines, academic administrators, academic support staff, and assessment coaches. The assessment coaches were all faculty members from various disciplines as well, but had reduced teaching loads to allow them to also take on the coaching role. Because some of what was divulged during the interview process was potentially personal and sensitive information, each participant was given a pseudonym to help protect their identity.

Luke

During his time at GCC, Luke worked as a faculty member and was eventually promoted to Associate Vice President. When asked about his role as Associate Vice President, Luke said,

Associate Vice Presidents are in charge of lots of things, but my main sphere of control was curriculum. I was in charge of the curriculum for the college. From developing courses, packaging them into programs, and then getting them through our governance system, and getting them approved by the state Department of Ed, and then implementing all of the changes.

Recently, Luke left GCC to pursue another opportunity in education, but he was still very knowledgeable about all of the efforts made to improve assessment at the institution.

Mary

Mary was hired at GCC in 2013 in a dual role, acting as the Assessment Coordinator as well as a faculty member. She has since earned several promotions and currently holds the position of Vice President of Strategic Initiatives and Assessment. Mary proved to be integral to GCC's ability to initiate and sustain the effort to measure and use SLO data for institutional improvement. When asked if any individuals stuck out as important to the success of this assessment initiative, one participant said, "I think I would attribute a lot of the success to Mary. She is one of the smartest people I've ever met."

Mark

Recently hired to the full-time position of Assessment and Academic Data Coordinator, Mark had been working with the assessment team at GCC as a contract employee for several years prior. His role is to support and organize all assessment-related activities, with a particular focus on data curation. While his experience was somewhat limited compared to many of the other participants, his deep technical knowledge was very helpful in understanding specifically how data are collected, stored, and used to aid in strategic decision making around assessment of SLOs.

Hannah

Hannah is a mid-career faculty member at GCC who teaches in the Visual and Performing Arts department. Hannah's opinion was of particular importance to this project because, unlike most of the other participants, she was not necessarily an early adopter, nor was she a cheerleader for this initiative. Her initial skepticism turned into pragmatism as it became more evident that this SLO assessment initiative was something that GCC was obligated to do. "We value governance very much and we respect the process of governance, whether or not we always agree with the outcomes.
So even if the outcome comes from a contentious vote, we're going to follow the flow of governance." Due to her pragmatic stance, Hannah was often a key negotiator between assessment staff and members of the Visual and Performing Arts department during the implementation of the SLO assessment initiative, which made her a rich source of data.

Beth

Beth is another mid-career faculty member in the Science and Technology department. After starting her career in K-12 education, Beth was very aware of SLO assessment by the time she got to GCC. Pertaining to assessment, Beth explained,

In graduate school I had an opportunity to compare standardized testing, high stakes testing, summative and formative assessment in my own classroom, and learn how to develop that kind of thing, which is not, I think, typical for most college faculty outside of an education department.

Because of this previous experience with SLO assessment, Beth understood the need for SLO data to inform institutional improvement decisions, which made her an early adopter of this initiative and another great source of data.

Susan

Along the same lines as Beth, Susan is a mid-career faculty member in the English Department at GCC. Also like Beth, Susan was an early and eager adopter of the SLO assessment initiative. Susan was able to see the big picture behind SLO assessment and understood that this initiative could be seen as an opportunity for faculty to take a more active role in institutional decision making. "You know we need to take the power back and like actually do something more than checkboxes." This attitude helped Susan become one of the driving forces behind GCC's success in SLO assessment.

Jenny

An assistant professor in the Environmental Conservation and Horticulture department for the last seven years, Jenny joined GCC after the institution had decided to make a commitment to using SLO data to drive decision making. Realizing SLO assessment was going to become a more substantial part of her job, Jenny helped lead the push to implement program coordinators within the department of Environmental Conservation and Horticulture. With the help of other stakeholders, Jenny lobbied for release time to be granted to these new program coordinators, in part, to work on the SLO assessment initiative. Because hers was one of the only departments at GCC that did not have program coordinators, the approval of this request went smoothly. "The process was really just a matter of trying to make our department match what other departments had." The extra time to focus on SLO assessment, which Jenny helped to secure, was an important factor in the department's success.

Recasting Assessment at GCC

The decision for GCC to hone its focus on SLO assessment and make data-backed decisions was not made out of the blue, and it was not made completely by choice. There were several factors that led to the "recasting" of assessment—as Mary put it—including forces both internal and external to GCC. Internally, there was a palpable frustration amongst faculty with the status quo around institutional assessment practices. Speaking to this frustration, Beth said,

When I first came here in 2008 and for some time after that, there was a lot of talk about assessment, but nothing happened. When we did get together for assessment, it was usually last minute and we would talk a little bit and we'd be like, this is great.
We would talk about student work and what we're all doing in our classroom and say, 'we should do this more,' but it would end there. And then three years later, we would do the same thing. It just seemed like we were checking a box for accreditation.

Several other participants offered similar sentiments when discussing how assessment had traditionally been handled at GCC. Externally, GCC's accreditors, SHES and MSCHE, were mandating that the institution address several issues, and one of the common-sense ways these issues could be addressed was through more rigorous assessment. The first external issue GCC needed to address quickly was getting into compliance with SHES's 64-credit rule, which stated no associate's programs could require more than 64 credits (State Higher Education System, n.d.). According to Mary, GCC had the most degrees out of compliance in the entire SHES, so the institution was bracing for a massive amount of work. The second source of external pressure was GCC's 2012 accreditation findings, which highlighted a lack of assessment of institutional learning outcomes, specifically pertaining to general education goals. Starting her tenure as Assessment Coordinator in the middle of the chaos from the 64-credit rule and the 2012 accreditation findings, Mary came to the conclusion there needed to be a singular solution to the curricular and assessment issues faced by the institution.

After much discussion between Mary, Luke, and other stakeholders, the beginnings of a unified solution to the internal and external pressures facing GCC began to emerge, and it was referred to as the Learning Framework. Luke recalled, "I and Mary created the Learning Framework as a brainstorm in her office one night, and then we just started sketching it out." The result of this brainstorm session was a graphic representation of how GCC's curriculum and assessment could fit together and complement one another. The formal adoption of the Learning Framework took place with a vote in GCC's Academic Senate, and it set into motion the creation of all new general education and course learning outcomes. This is how GCC got from a point of crisis in 2012 to earning NILOA's Excellence in Assessment designation for its progress in using SLO data to make institutional improvements.

Summary

Chapter 4 provided the institutional context necessary for understanding and interpreting the remainder of this study. Three distinct sections of background information were covered: an institutional overview, a description of the participants and their roles within the institution, and an interpretation of why and how GCC recast its SLO assessment practices, allowing the institution to get to where it is today. This context grounds the data analysis and conclusions that follow.

CHAPTER 5: RESULTS & THE FOUR FRAMES WORKING IN ISOLATION

The focus of this dissertation was to understand the process by which a community college came to use SLO data to make institutional improvements. This chapter explores this process by analyzing participant descriptions of their experiences implementing the use of SLO assessment data to improve institutional quality through the lenses of both Bolman and Deal's (2017) four frames and Ivancevich et al.'s (2014) model of organizational change.
However, before delving into evidence of Bolman and Deal's (2017) and Ivancevich et al.'s (2014) frameworks, I take time to discuss the portions of the Ivancevich et al. (2014) model that are not pertinent to this study in an attempt to provide the full context of this change initiative. The remainder of this chapter discusses how the four frames (i.e., structural, human resource, political, and symbolic) are represented in the participant experiences and how these frames acted in isolation as individual levers of institutional change to address what Ivancevich et al. (2014) refer to as impediments to change. To be clear, when a theme is stated as an impediment to change, it does not mean it is inherently bad or useless for an institution to engage in the described activity. For example, Ivancevich et al. (2014) list the formal organization and organizational culture as two examples of impediments to organizational change. In doing so, they are not implying these two aspects of an organization are automatically impediments to change; rather, they seem to be saying the formal organization and organizational culture are common aspects of an organization from which impediments to change may arise. The themes under the headings labeled challenges provide evidence, described by participants, of areas in which GCC could have been more strategic. The themes under the headings labeled successes provide evidence, described by participants, of areas in which GCC implemented the change successfully from the start. Finally, I present evidence from participant interviews of GCC addressing the key components of the implementation of the method portion of Ivancevich et al.'s (2014) model (i.e., timing, scope, and experimentation).

Excluded Elements of the Ivancevich et al. (2014) Model

One important issue must be addressed before presenting the results of the study. Ivancevich et al.'s (2014) model outlines seven steps in a successful organizational change. I argue the first four steps in this model are beyond the control of HEIs in the case of SLO assessment. Also, while the last step—program evaluation—is relevant for SLO assessment changes, this change is so new, even for GCC, that there were no data to collect on this step in the organizational change process. While I believe these steps in the model are not as relevant to this study, it is worth discussing them briefly to add context and provide a rationale for why they are not included.

Forces for Change

Colleges and universities have increasingly been expected to use SLO data to inform institutional improvement. However, the extent to which institutions can autonomously enact change is often constrained by external forces. Ivancevich et al. (2014) identify both external and internal forces that drive organizational change. In the specific context of higher education, external forces (i.e., economic pressures, technological advancements, and socio-political shifts) exert significant influence over institutional decision making. Accrediting bodies and policymakers act as gatekeepers of change by determining the metrics of institutional success and the acceptable parameters for improvement efforts. For example, federal and state funding policies increasingly tie financial support to student performance indicators such as graduation rates, retention, and employment outcomes (Kelchen, 2018).
These external forces establish the priorities for institutional assessment and improvement efforts, often limiting the scope for HEIs to define their own strategic changes and subsequent success measures.

Performance Outcomes

The second stage of the Ivancevich et al. (2014) model involves performance outcomes at the organizational, group, and individual levels. In higher education, these outcomes are largely dictated by accreditors, who establish the benchmarks institutions must meet to maintain their status. Regional accreditors require institutions to demonstrate evidence of student learning and continuous improvement (Ewell, 2009). Because accreditation is essential for eligibility for federal financial aid, institutions have little choice but to align their performance outcomes with these externally determined criteria (Banda & Blaich, 2011). Consequently, colleges cannot fully control how they define or measure success, as these measures are largely imposed upon them.

Diagnosis of the Problem

According to Ivancevich et al. (2014), organizations must diagnose the problem by gathering information, engaging in participatory assessment, and involving change agents. In higher education, however, the diagnosis of student learning issues is often predetermined by external mandates. Accreditors require institutions to engage in assessment processes that prioritize certain forms of data collection, such as standardized learning outcomes assessment or institutional effectiveness reporting. While institutions may have some discretion in interpreting data, they must ultimately align their diagnoses with accreditor expectations. Similarly, state and federal policymakers influence problem identification by emphasizing workforce readiness, equity gaps, and degree completion rates as key areas of concern (Lederman, 2015). Thus, colleges operate within a constrained framework where the problems requiring attention are predetermined.

Selection of the Appropriate Method

The final step in the top portion of Ivancevich et al.'s (2014) model involves selecting appropriate methods for addressing identified issues. While institutions technically have autonomy in choosing intervention strategies, their choices are constrained by the priorities set by external agencies. For example, many accreditors require institutions to adopt specific tools, such as VALUE rubrics from the Association of American Colleges & Universities (AAC&U) or standardized tests like the Collegiate Learning Assessment (CLA) (Jankowski & Marshall, 2017). Additionally, policymakers at the state level may mandate performance-based funding models that incentivize particular interventions, such as guided pathways initiatives or competency-based education programs (Dougherty & Reddy, 2013). As a result, institutions are often constrained to a narrow set of acceptable change strategies rather than having the freedom to develop context-specific solutions.

Program Evaluation

Ivancevich et al. (2014) outline feedback, adjustment, revision, and reinforcement as methods for engaging in program evaluation. While program evaluation is a key component of the model for organizational change and development, and is relevant to GCC, there is a lack of data from GCC on this portion of the model. This is not to say GCC is not engaging in efforts to evaluate the effectiveness of its change initiative; rather, the changes it has implemented are simply very new.
GCC was early in the process of using SLO data to inform decision making at the time of data collection and had not yet implemented mechanisms for evaluating the effectiveness of its new SLO assessment policies. Therefore, the data presented in the results section of this study are limited to the impediments and limiting conditions and the implementation of the method portions of Ivancevich et al.'s (2014) model of organizational change and development.

Structural Impediments & Limiting Conditions: Challenges

Several participants described experiences that are tightly aligned with the way Bolman and Deal (2017) define the structural frame. These experiences could not be logically connected to any of the other three frames; therefore, it seems these are examples of the structural frame working in isolation. Also, the experiences outlined in this section align with the way Ivancevich et al. (2014) describe impediments to organizational change as existing conditions which can negatively influence the outcome of management change programs. After conducting several rounds of thematic coding, the following themes were outlined as structural impediments to change: (a) prioritization and resource allocation, (b) institutional adaptability and resilience, (c) collaborative and inclusive approach, and (d) long-term perspective. The subsequent section provides greater detail and evidence in support of the themes constructed from the data.

Prioritization and Resource Allocation

Time and again during the interview process, participants mentioned that the institution's administration must make its intention to collect and use SLO assessment data a known priority for all stakeholders. GCC struggled for a long time with making the collection and use of SLO data a priority because there were no formal structures in place to assure accountability. Mary stated, "If there isn't someone there to help with assessment, or if it is not part of someone's job, it won't get done." Once Mary was hired as the assessment coordinator, she put into place a formal structure to help prioritize and integrate the use of SLO data to improve institutional outcomes. The specific structure is referred to as the faculty coaching model. While the creation of the faculty coaching model is seen strictly through the structural frame, the execution of the role incorporates multiple frames and will be discussed in detail in a later section.

Jenny also brought up how a lack of prioritization brought major challenges and delays to implementing change around assessment, saying,

One of the biggest problems we had was getting access to our college website. We had to go in and make resources available for faculty to do the work, but we couldn't get the permissions we needed to make the changes.

Jenny elaborated that, because they could not get the necessary permissions, they had to rely on Google Drive to effectively disseminate the information and collaborate. Again, relying on the structural frame early on in this scenario would have behooved GCC. If the roles, goals, and rules had been clearly established from the beginning, the need for access to an information sharing and collaboration platform–the college website or Google Drive–could have been addressed early on. One of the best and most important ways for an administration to show that change is a priority is to provide the necessary resources to implement the change. At times, this lack of resources was a limiting condition for GCC.
Jenny explained, "One of the biggest challenges we've had is getting people paid to do this work, particularly some of the external contractors we've asked to help us. There was not a specific budget for assessment, so it made it difficult." Engaging the structural frame by establishing a budget for assessment activities will help ensure the short- and long-term success of similar assessment initiatives.

Institutional Adaptability and Resilience

The structural frame outlined by Bolman and Deal (2017) underscores the roles, goals, and rules involved in organizations, and GCC quickly realized that the roles, goals, and rules involved in any organizational change initiative can change in an instant. Mary stated, "One thing I've seen is change, whether in leadership or some sort of external change, is going to happen. We just kept taking steps forward. Sometimes they were small steps, but always forward." A very similar point was made by Hannah, who said,

Both our state higher education system and our regional accreditor changed the assessment standards while we were trying to implement this change, so now we were trying to revise the learning framework and implement guided pathways and it just became initiative overload.

Relying on the structural frame, along with leaders who kept moving in a positive direction even when tasks seemed insurmountable or overwhelming, kept GCC on a positive forward path during these assessment changes. Institutions trying to undertake an initiative like GCC's need to understand that the roles, goals, and rules within the structural frame can, and more than likely will, change. In order to persevere through these changes, institutions must embrace the change and immediately try to make some sort of positive step forward to avoid complacency.

Collaborative and Inclusive Approach

Implementing an effective assessment plan involving the use of SLO data to make decisions is no small task, and it requires buy-in and effort from many stakeholders across the institution. Because institutions of higher education are typically large organizations, invisible barriers often impede cooperation. Mark stated that GCC is continually working to dismantle its own silos, which have inhibited collaboration in the past: "Looking at our different divisions that are overseen by the Provost and Student Affairs cannot be divorced from other divisions like Enrollment Management. They have to be guided by the same mission and strategic plan." Mark went on to say, "We have launched an interconnected planning model where we are measuring co-curricular and curricular performance through the same assessment lens." While SLO assessment is often seen as strictly an academic departmental pursuit, institutions would be wise to work with other institutional departments from the moment a change initiative is decided upon and determine where cooperation and shared roles, goals, and rules can be established.

Another structural issue that consistently arose in interviews was the need to collaborate specifically with the Information Technology (IT) department from the very beginning. GCC required time and faced frustrations before fully understanding the importance of IT collaboration. Planning and executing an assessment initiative requires an immense amount of data collection, organization, and storage. Jenny stated,

We have had some friction with creating our own technical systems for data storage and use.
It got to a point where faculty and assessment coaches threw their hands up in frustration. We have since implemented Google Drive, and it has been much smoother since.

Making decisions based on SLO data necessitates efficient technology, and institutions should utilize formal structures (i.e., roles, goals, and rules) to ensure their IT departments are an ongoing part of planning and implementation.

Long-Term Perspective

Institutions like GCC that are attempting to shift the paradigm for how they engage in strategic decision making (i.e., using SLO assessment data to drive institutional improvement) must understand this is a long-term commitment involving constant change. Several participants shared anecdotes outlining how GCC struggled to deal with some of its organizational structures, resulting in longer-than-anticipated timelines for change. Hannah stated,

Our governance structure, in terms of process, works well. But like any democratic process, it takes time. You have one group reporting to the next group then waiting on feedback from another group before a decision can be made. Sometimes decisions need to be more timely.

Susan also brought up the issue of time and how it was a struggle for some at GCC who wanted fast results in exchange for their effort when she said,

We were under the burden of providing proof and evidence that what we were doing actually could make a difference in faculty members' teaching and learning; the part of your job they actually love. It can make a difference in the curriculum they teach and how they teach it to their students. It took us a long time to be able to give that burden of proof, and so we had to start with the faculty members who are most open to change.

An institutional-level change initiative is never as simple as it may seem, and this is something GCC had to struggle with to learn. Setting realistic expectations of the time and effort needed to enact such a change early in the process, through the mechanisms of the structural frame–roles, goals, and rules–may have helped GCC deal with these struggles.

Structural Impediments & Limiting Conditions: Successes

Participants were often in agreement on positive actions taken by GCC that aligned with Bolman and Deal's (2017) structural frame. This section focuses on those actions taken by GCC within the structural frame that helped the institution implement the ongoing use of SLO assessment data to improve. From my interpretation, the actions described in this section represent Bolman and Deal's (2017) structural frame working in isolation, without the influence of the human resource, political, or symbolic frames. The actions described by participants were coded into four themes: (a) strategic and incremental approach to implementation; (b) empowerment and support for stakeholders; (c) credibility and validation; and (d) proactive leadership and momentum.

Strategic and Incremental Approach to Implementation

Implementing large-scale organizational change is not an easy task. This point was reinforced by Luke when he said, "A lot of large college reforms die because you try to bring the end product to the people. It's just too much for the people. Too much to be accepted all at once." The administrators, led by Mary, deliberately took a strategic and incremental approach to implementing new assessment efforts.
Luke recalled, "We created a system for faculty to work through each course in their program and revise the curriculum so it aligned with the new learning framework, and most importantly to think about how they planned to assess it." These intentional efforts then allowed for the development of a structured system for revising and aligning curriculum, which was piloted by several early adopters. As Luke recalled,

We didn't try to put everybody through the process all at once. We found some of our people who were really into it and we used them to help us pilot and figure out some processes and paperwork type things.

Implementing in such an incremental and strategic way seems to have allowed GCC to be innovative and find some early success to build from.

Having some programs move forward with implementing changes to the assessment process while others lag behind may sound counterintuitive, but this was actually a strategic decision made by GCC. Mary explained the logic behind allowing departments to move at their own pace when implementing this assessment initiative:

You can't wait for everybody to be done. Your whole college will take forever to get everything done. You have to show some early success and move forward, so you get your early adopters and you showcase them to the holdouts.

GCC allowed for flexibility in the scope and timing of implementation of this assessment initiative, and Mary felt that was vitally important, saying, "You're not going to hit it out of the park every time, so that initial success from the early adopters is key in building momentum for the institution." As shown by GCC, welcoming any success while implementing a change initiative is of the utmost importance.

Empowerment and Support for Stakeholders

Change can be a scary thought for many people, but one thing GCC discovered during this time of change is that empowering and supporting the individuals responsible for implementing change can help ensure success. Expecting individuals to be resistant to change at first, Mark discussed how the assessment team decided to start with familiar topic areas as a way to support faculty members engaging with the assessment process for the first time, saying,

Early adoption, where we really kind of cut our teeth doing assessment, was around written communication and critical thinking, which we think of as two overarching outcomes. They're institutional outcomes in that they cut across all of our programs and departments.

GCC utilized the structural frame by implementing the rule, or practice, of starting with content areas familiar to all faculty. This practice allowed faculty to focus more on the assessment concepts and processes without being confused or out of their depth pertaining to content.

Another way GCC utilized the structural frame to empower and support assessment stakeholders was by creating replicable processes. According to Luke, "Mary and her small team developed a fairly robust documentation process for faculty to utilize, which made the process of updating courses to fit the new learning framework and then determining the mode of assessment more systematic." Once the updating process and documentation were successfully piloted by early adopters, they were replicated by faculty groups who had been more reluctant to engage in the assessment initiative. Early in the implementation of this assessment initiative, GCC discovered they would need to take action if they wanted to gain more participation from some of the holdouts.
Again, assessment leaders relied on aspects of the structural frame (e.g., goals and structured time) to achieve greater participation. "We didn't just ask faculty members to do this whenever they found time. We gave them goals and deadlines and then we actually carved out time for them to work on this," said Luke. Providing the holdouts with proximal goals and a structured framework to reach those goals proved to be a successful combination for GCC.

Credibility and Validation

Taking on a large organizational change initiative comes with risks. For GCC, vast amounts of resources (i.e., time and money) had been allocated to this initiative, and failure could bring serious repercussions like turnover, losing accreditation, or worse. GCC realized they needed support for this endeavor, not only from internal stakeholders, but also from external sources to help bring credibility and validation to the process. Local advisory boards were engaged early in the process to help build support for the hard work ahead. As Jenny recalled, "We relied heavily on our advisory boards in the beginning. We asked them to weigh in on the things that we were recommending to change about our programs and also about how we would assess." Administrators at GCC felt that getting validation from advisory boards on the changes they were planning to implement would help motivate faculty and provide assurance that the hard work to come would be worth it.

Once GCC began to implement the changes to their assessment practices, they continued to build credibility and validation of the process. As Beth recalled,

When I sit down with a program or a course coordinator and we begin to revamp their assessment process, one of the questions that I ask about key assessments is "What do you wanna know? What would be most useful to you?" Which is much better than saying, "Well, you know, all SHES and MSCHE says we have to know this."

Beth was advocating for starting the process by showing faculty members how you can make their lives easier, and this helped to build credibility and buy-in for the process. I followed up with Beth, asking how GCC knew if they were being successful in their attempts to build credibility and validation for the new assessment process. She answered by saying,

As assessment coaches started helping more people, more people started wanting coaching. So it's sort of institutionalized itself where people would email me and say, "I'm gonna want you to be my coach." They didn't necessarily have anything coming up. They just knew there are coaches and they want to be on my list. All the faculty have kind of gotten to the point where they're like, "Oh, coaches are the way that I get things done, so I'll just reach right out to whatever coach and make sure that I get things done." Sort of like a nuclear reaction, right? It just keeps itself going in that way.

While there are probably more scientific ways to measure the level of credibility or validation (e.g., survey methods), GCC began seeing increased demand from faculty to be involved in the assessment change process, which they interpreted as credibility and validation of the process.

Proactive Leadership and Momentum

The theme of proactive leadership and momentum showed a sizable amount of agreement among participants, perhaps the most agreement of all the themes. Luke gave his input on how to get the process started and begin to build momentum, saying,

So we didn't try to put everybody through the process all at once.
We found some of our people that were really into it and we used them to help us start a pilot and figure out some processes and paperwork type things. This really helped to get us off on the right foot.

Luke's sentiments around starting small and with individuals who were familiar and excited were echoed by Jenny, who said,

A number of programs do similar assessments. So we started with art, nutrition and kinesiology. We built rubrics that we could use to assess student artifacts that would capture course learning outcomes and program learning outcomes for these three programs at the same time. Then we would get together afterwards to look at the data and ask the content experts, "How does this make a difference? Where do we need to make changes?"

Starting small and starting with concepts everyone is familiar with seemed to help GCC build momentum for the large-scale changes in assessment practices.

The previous paragraph highlighted examples of how GCC built initial momentum through proactive leadership, but sustaining momentum is equally important and can be more difficult. Mary, who has been the leader of GCC's drive to improve SLO assessment practices, had a great deal to say about the importance of being proactive and keeping momentum: "Forward momentum and transparent conversation; just getting people to the table to participate in conversation, even if imperfect, is how you get change to happen." When discussing how to keep momentum going, Mary also brought up the importance of flexibility, saying,

You also have to be flexible, but the biggest piece is figuring out how to balance when to allow flexibility and when not to. Like you can't wait for everybody to be done. Your whole college will take forever to get everything done. So, you get your early adopters and you showcase and reward their success, while allowing for flexibility for some of the stragglers.

Reiterating the importance of forward momentum, Mary concluded her thoughts on this topic, saying,

You have to realize you're not going to hit it out of the park every time and that forward momentum is sort of, above all else, important. So, sometimes you take the forward momentum and you take stock of where you are and what you need to do better and you keep moving forward.

The amount of agreement around this theme shows how important staying proactive and keeping momentum was to GCC's success in implementing these changes to their SLO assessment practices.

Human Resource Impediments & Limiting Conditions: Challenges

Participants provided fewer impactful examples of the human resource frame of Bolman and Deal's (2017) four-frame model acting in isolation as an impediment to change. However, the constructed themes seemed to be quite important and necessary for GCC to eventually navigate successfully. As previously stated, the central concepts making up the human resource frame are needs, skills, and relationships (Bolman & Deal, 2017). The themes derived from participant interviews, which were interpreted as impediments to change, were (a) continuous motivation from leadership and (b) emotional support and reframing. The subsequent section provides greater detail and evidence in support of the themes found in the data.
Continuous Motivation from Leadership

As discussed in the structural impediments to change section, GCC had a somewhat difficult time understanding just how long and arduous the commitment to re-envisioning the assessment process would be, which led to frustration and burnout amongst faculty who were trying to implement the change. Pertaining to the theme of continuous motivation, Hannah recalled,

We had to switch database technology providers in the fall of 2020, which put our progress on hold. After the switch was complete, we repeatedly tried to get in touch with our assessment coach, but didn't hear anything until spring of 2021 when we were informed our assessment coach was no longer our assessment coach. It wasn't because of any tension, it was simply because we had only budgeted to get the process started and a new coach wasn't assigned to us yet.

Hannah also stated there was turnover in the Provost position during this time, which meant there was even less direction from leadership. The lack of motivation and clear communication of operational objectives from leadership during this period seems to have exacerbated the frustration and burnout amongst faculty.

The issue of continuous motivation was also outlined by Luke when he discussed some struggles leaders were having with certain holdouts within the faculty ranks. Specifically, Luke said,

You have the old-timey faculty member that says everything was better back in the day, and that's so classic. For a long time, we were stuck thinking about the people who were saying these things, but then we shifted and started thinking not about the people but about what their motivations were. From there we determined how to meet the motivations of this archetype who is holding out and making the process more difficult.

The ability of leadership to focus not on the individual but rather on their motivations seems to have been a turning point for overcoming motivation as an impediment to change.

Emotional Support and Reframing

There are times when people do not respond to motivation; when people are simply upset and want to be validated for feeling the way they do. Beth shared how she came to understand this fact after dealing with several faculty members who were frustrated and emotionally drained by this change initiative. Specifically, Beth said,

Faculty have to do extra things. They're going to have to meet with coaches, to rewrite things, and re-examine programs and courses. There is some resentment in terms of the extra work that is involved in this process, and it can feel overwhelming. One of the techniques I learned is to let people complain and give them a moment to be like, "This is awful. I really don't like it. I don't like what I have to do. I don't like that I might have to change something I do in my class because of some external pressure." I would let them know their complaints were totally valid and that I didn't necessarily like it either. Most importantly, after letting them vent, I would always ask, "But how can I make this work for you?" To have another faculty member validate those feelings but reframe them afterward makes things much easier.

Large-scale organizational change is difficult, and one of the most challenging aspects for GCC is effectively managing the human capital driving the change. Burnout, disagreement, resentment, and other negative emotions are natural when dealing with large and difficult projects, and GCC is still grappling with how best to deal with these issues.
Hannah illustrated this ongoing difficulty, saying,

It's a lot. In addition to changing standards among SHES and MSCHE, we are also revising the Learning Framework, and implementing guided pathways, which was proposed by our previous Provost who left before it was finished. I would say it's initiative overload, and our real obstacles right now are burnout and morale. I went into our assessment day planning on skipping it. I was just going to take a personal day because I didn't feel like I could do it. My department chair said to me, "It's virtual. Just show up, turn your camera off and then do what you need to do on the back end."

Even though GCC has successfully implemented positive changes in its SLO assessment practices, the institution still struggles to maintain positive momentum, particularly within the human resource frame.

Human Resource Impediments & Limiting Conditions: Successes

Participants consistently discussed the key concepts of the human resource frame–needs, skills, and relationships–outlined by Bolman and Deal (2017), and many of the experiences highlighted instances in which positive aspects of the human resource frame were leveraged to advance institutional change. Similar to the human resource impediments to change section, there were fewer themes derived in the human resource implementation of the method section. However, the extent of agreement between participants on these themes was quite strong. The themes derived from the data were (a) systematic approach to change and (b) inclusive decision-making.

Systematic Approach to Change

Several participants brought up the concept of systematic or systems thinking when discussing how many individuals worked together to make this change initiative successful. The first individual who stressed the importance of taking a systematic approach to this change was Luke, who said,

Mary and I are systems-level thinkers. You have to think of how individual parts work together to affect the whole. It's not that it's the only way to think, but in order to get something through like this, I mean you have to think big picture and we certainly tried very hard to help other people see some of the big picture issues of running large scale reforms at colleges.

Hannah also stressed the importance of being systematic when building the team to implement this change initiative, saying,

You have to have your aces in their places. How did those aces get to those places? That was Mary. Mary is picking faculty members and assessment coaches that have positive attitudes and can fight through the resistance. I definitely think it was systematic and strategic.

Administrators at GCC, namely Mary and Luke, took the time to plan out how the new assessment practices would work as an entire system, but also how each individual part of that system would function, to increase their chances of success.

Inclusive Decision-Making

The concept of being inclusive and collaborative when making decisions was another important aspect of GCC's success with their SLO assessment changes, one which aligned with Bolman and Deal's (2017) human resource frame. Many of the decisions surrounding the implementation of new SLO assessment practices were made within GCC's Curriculum Committee. Beth felt the spirit of inclusivity and collaboration amongst this long-standing committee helped foster those ideals within the SLO assessment change initiative, stating,

It's always been a very collaborative group.
A group that can make jokes, can question, and can come together around a problem. Even if people don't agree, it's always very gentle. It's not necessarily like, 'you're doing this wrong' and, 'this is not how we treat people.' There's just gentle redirection from that committee and it makes for easier collaboration.

Hannah agreed with Beth about the Curriculum Committee's willingness to be inclusive and collaborate, saying,

The willingness to collaborate despite resistance or obstacles and differing value systems and beliefs was super important. The process was a success because we overcame those and we were still able to collaborate.

Utilizing the goodwill and spirit of inclusivity from this already intact committee seemed to help springboard this change initiative in a positive direction. The ability of GCC, and in particular the Curriculum Committee, to maintain an inclusive decision-making environment seems to have allowed the SLO assessment initiative to persist through some of the most challenging times.

Political Impediments & Limiting Conditions: Challenges

Much like the previous section on the human resource frame, there were few instances discussed by participants of the political frame working in isolation as an impediment to change. The central concepts of Bolman and Deal's (2017) political frame are power, conflict, competition, and politics. Most of the experiences participants brought up pertaining to the political frame acting as an impediment to change had to do with conflict, and these experiences seemed to fit into a singular theme. The theme derived from participant interviews, which was interpreted as an impediment to change, was (a) patience and persistence with conflict. The following subsection provides the details of this single theme.

Patience and Persistence with Conflict

Community colleges are complex organizations often containing multiple groups of stakeholders who have competing interests. Coupled with an ever-shrinking pool of resources, the competing interests of various stakeholder groups left GCC with a recipe for conflict. Hannah shared how political conflict impeded GCC's ability to move quickly, but patience and persistence won out, saying,

There were definitely individuals who were intentionally holding up the change. But, one thing I will say about the faculty at GCC is that we value governance very much and we respect the process of governance, whether or not we always agree with the outcomes. So even if the outcome comes from a contentious vote we're gonna follow the flow of governance. We combatted the conflict by being patient and letting the governance process work.

Initially, because Hannah referenced both political conflict and the formal structure of governance, I wondered if this was evidence of two frames (i.e., structural and political) working in combination. However, my interpretation was that the patience employed by supporters of the change was separate from the governance process itself and represented the political frame working in isolation. Hannah went on to discuss a particularly contentious case of political conflict between a longtime faculty member and an assessment coach, noting,

This individual had been at the college for roughly 30 years and so it's not like we haven't always been assessing stuff. It's just in the last decade become a more formalized process and more technical and time consuming in nature. So this person was particularly resistant.
All I can tell you is that there was a major disagreement between two of the faculty members and their assessment coach. That particular coach walked away from our area. So, essentially, we just took a backseat. Jump ahead about six months and we were assigned a new coach for our general program and this is where I step in and say, "just give me what you've done so far on paper and let me do the rest of it. I'll work with the coach."

In this case the impediment to change–political conflict–was, at least temporarily, seen as insurmountable. GCC's assessment administrators decided to employ patience and persistence by temporarily walking away, remaining productive in other areas before returning to this department to try again. Luke provided similar stories about patience and persistence in the face of political conflict. After sharing specific instances of conflict, Luke gave a summative comment on conflict, saying,

There were many who didn't want to engage in meaningful assessment, so we ignored them in some ways knowing we'd come back to them. By the time we did get to the resistors, we had already updated and changed about 85% of the programs. They knew it was now their turn and the resistance was largely gone.

These examples of patience and persistence in the face of political conflict showcase how GCC dealt with impediments to change within the political frame.

Political Impediments & Limiting Conditions: Successes

As illustrated by the examples in the last section, the political frame can be fraught with conflict and competition, which can act as impediments to an organizational change initiative. Several participants, however, provided examples of how GCC engaged with the political frame to successfully implement their new assessment practices. Again, a single theme was derived from the participant interviews showing the political frame being used to implement the change initiative. The theme was (a) finding diverse allies. The following subsection provides the details of this single theme.

Finding Diverse Allies

Much like actual politicians, assessment administrators at GCC knew they would need a base of supporters if they were going to successfully implement the proposed changes to their practice of assessing and using SLOs to drive institutional improvement. During the planning phase, before any major changes had taken place, Luke recalled GCC's effort to build political support, saying,

We built a team of both faculty and non-faculty members that were our allies, and we used them to help us in our planning processes. We also used them as eyes and ears throughout the building to help spread the word and build more support.

Luke went on to say, "We would often hear back from these allies about potential issues or resistance, and we were able to hedge these issues off by networking solutions through our large group of allies." This proactive style of gaining support also seemed to help GCC gauge and combat potential political resistance. Hannah again echoed Luke's assertion regarding the importance of having a diverse group of allies, saying,

They got us working together. They got the key faculty leaders who were the cheerleaders, who were going to spread the word and get the buy-in, and also were going to work hard to get the evidence. This was especially important in terms of the buy-in, but also just working our way through resistance.
These participant experiences seem to show how GCC felt it was very important to be proactive within the political frame.

Symbolic Impediments & Limiting Conditions: Challenges

The central concepts of Bolman and Deal's (2017) symbolic frame are culture, meaning, metaphor, ritual, and stories. Again, there were several instances of Bolman and Deal's (2017) symbolic frame working in isolation as an impediment to change interpreted from participants' descriptions of experiences in implementing new SLO assessment practices. From the participant interviews a single theme was discovered showing the symbolic frame working as an impediment to change. The theme was (a) cultural shift through perception management. The following subsection provides the details of this single theme.

Cultural Shift Through Perception Management

Assessment in higher education, even the assessment of SLOs, is not a new practice. However, effective assessment of SLOs and subsequent use of the data is a very new concept. The literature reviewed showed countless instances of burnout and apathy toward the assessment process amongst faculty, who often viewed assessment as a useless activity to placate accreditors (Cain, 2014; Chadi, Jeworrek, & Mertins, 2017). GCC's initial impediment to implementing the SLO assessment change initiative was its own history of failed assessment efforts, which had to be addressed. Susan provided a great example, saying,

Initially, I think the biggest hindrance to our success was a faculty body that had no experience in assessment actually ever being successful or useful. It was just this pain in the butt bureaucratic thing you had to do. We were under the burden of providing proof and evidence that what we were doing actually could make a difference, and it took us a long time to be able to give that burden of proof.

Luke shared a similar memory about the past acting as a symbolic impediment to change, stating,

We had the freedom to decide how we wanted to change our assessment practices, but it didn't change the fact that these changes were imposed by accreditors. We had to work hard to overcome the image of who was behind all of this.

GCC really seemed to struggle to address and challenge pre-existing perceptions or misgivings amongst faculty about the use of student learning outcome data. Eventually, through utilizing the symbolic frame, GCC was able to acknowledge and address the historical challenges and frustrations that hindered previous attempts to assess and use SLO data to drive institutional improvement.

Symbolic Impediments & Limiting Conditions: Successes

For GCC, the symbolic frame working in isolation to facilitate the implementation of improved SLO assessment methods was an area where participants had much to say. Participant input seemed to coalesce around two consistent themes. The themes derived from participant interviews, which were interpreted as successful implementation of the method, were (a) strategic communication and messaging and (b) narrative building and meaningful engagement. The following subsection provides the details of these two themes.

Strategic Communication and Messaging

Several participants shared their opinions and experiences about how GCC emphasized communication and messaging of the improved SLO assessment initiative from the start.
Susan shared how important it was to explicitly tie assessment practices to the mission and vision of the institution, saying,

The mission of the institution is the most important thing and if you can tie assessment directly to the mission, it's hard for people to refuse to do it. When you walk onto our campus the first thing you see are huge banners that tout our mission along with our institutional learning outcomes, which are the bedrock of our assessment practices.

Luke shared a very similar sentiment when he recalled how GCC intentionally branded their assessment process: "We created the learning framework, which was a visual representation of our mission, vision, and values and it laid out aspects of our assessment process." These first examples of how GCC branded and advertised their improved SLO assessment processes are clear illustrations of the symbolic frame at work. Hannah added more context about the importance of strategic communication and messaging to the overall success of their assessment initiative, saying,

The assessment coaches had to let us know what we had to do and how to do it, but the biggest part to start off was coaching us through the language of assessment, so we could all be on the same page and have meaningful conversations.

Establishing a shared vocabulary and language early in the process seemed to be an important factor in the successful implementation of the method for GCC.

Several times participants pointed to the importance of communicating the early successes of the new SLO assessment process. Luke stated, "We worked hard to get the word out about initial successes. We would tout early adopters who worked with assessment coaches to align their curriculum to the new assessment processes." Mary added to the importance of showcasing success by talking about how GCC is working to showcase positive assessment results: "We are getting better about reviewing assessment results at an institutional level. These positive results will reverberate better if you bring it out of the darkness into the light." Communicating success seems to be a way that GCC tried to build support and legitimacy for the SLO assessment initiative.

Narrative Building and Meaningful Engagement

Nearly all of the participants shared frustration about how assessment had historically been viewed as an external requirement foisted upon the institution by individuals and institutions with no direct ties to GCC. The shared frustration amongst participants seemed to be about how difficult it was to overcome this old narrative and the hard feelings it had created amongst faculty members. Hannah outlined GCC's first step in successfully building a new narrative and more meaningful engagement, saying,

I give a lot of credit to Mary and the assessment coaches because no matter how negative or resistant the climate was, they listen, legitimize the opinion, they understand it, and then they help us keep working in a positive direction. With assessment this is one area of our college where we have our aces in their places. We have people who can deflect the cynicism and the resistance, and be patient.

Assessment professionals at GCC were able to hear and legitimize issues with the old narrative of assessment and still push forward with building a new narrative and more meaningful engagement with the assessment process.
Once GCC was able to combat resistance and hard feelings from past assessment failures, it was time to begin establishing a new narrative about how SLO assessment would look at GCC moving forward. Participants seemed to be in agreement about how they were able to successfully establish a new assessment narrative for GCC, and the way they did it was by no longer framing the narrative around accreditation imperatives and instead building a narrative around using SLO assessment for improving student success and usefulness to faculty. Hannah discussed this reframing saying, “GCC is really driven by students. We want our students to be successful most of all. These changes to assessment were put in place to help identify how we can make students more successful.” Jenny shared her experience about how GCC worked to reframe the narrative toward student success saying, It's like fundraising. If you're approaching a funder, a donor, if they have the means to give and you can talk to them about the mission and you know they care about it, they're going to give. It's the same thing with faculty. We know they care about students and you have to talk about how this work is going to make life better for students. So, I think the fundamental message was we crafted the right one. Framing the importance of quality SLO assessment for improving student success seemed to be a winning message for GCC. Adapting the narrative toward how SLO assessment can make the lives of faculty easier is another way GCC leveraged the symbolic frame to implement the method. Beth recalled how she crafted her message to resistant faculty saying, 95 When I talked to a faculty member who said, “This is busy work and I hate it.” I say, “I totally understand. But what if we all sat down together and found I teach the concept of change over time in my biology class and you teach change over time in calculus too. If we develop our artifacts together and thoughtfully, we can have data we can use to evaluate the effectiveness of both our classes without doing anything extra.” The faculty member was much more open to this type of collaborative message. Susan also provided an example of the usefulness of messaging SLO assessment to resistant faculty in a way that shows how their lives will be made easier saying, We got to talking about how useful assessment can be and I said, “Well, I can go to our database and pull everybody who does information literacy, then I can go to the library and let them know all the classes that are going to need research and assistance. And then the library can go to those faculty and set up sessions that would fit exactly into their course.” The faculty member was excited to be able to have this done without adding any more work to their plate. I reiterated to them it's not just about the assessment artifacts, it's about the planning before the artifact. These examples show how resistant faculty may be assuaged by a carefully crafted assessment narrative. Implementation of the Method: Timing, Scope, & Experimentation According to Ivancevich et al. (2014), after addressing the impediments and limiting conditions to change, organizations must determine how they are to implement the change method. Specifically, organizations must address issues of timing, scope, and experimentation. The following subsections provide evidence of GCC confronting the issues of timing, scope, and experimentation gleaned from participant interviews. 
Timing

When implementing a change initiative, it is possible for institutions to have perfect plans and execution, yet still fail to successfully implement the initiative. One reason why perfect plans and execution can still fail is the institution's failure to evaluate the proper timing of the change initiative. According to Ivancevich et al. (2014, p. 528), "Timing refers to the selection of the appropriate time at which to initiate the intervention." The administrators leading these assessment changes (i.e., Luke and Mary) were keenly aware of the importance of timing for this change initiative. Luke said,

I don't know if everyone was ever really ready for a change of this size, but we had these external markers from our accreditors, which we wanted to align our change schedule with. We looked at our next accreditation study and we worked our way backwards of what we needed to have in place in order to be ready for them. That's how we created a 6-year timeline on what we needed to have done for the accreditors.

When discussing the same issue of aligning the timing of GCC's SLO assessment change initiative with the requirements of their external accreditors, Mary echoed Luke's sentiment saying,

The benefit to aligning our timeline with the accreditor was we had the opportunity to sort of clean up everything. All the courses were rewritten, all the course learning outcomes were examined, and there was an honest conversation about what is the walk out the door, knowledge in this course and in this program?

By considering the best time at which to implement their SLO assessment change initiative, GCC avoided potential challenges. Also, by integrating their change initiative with the required changes and timeline for their external stakeholders, GCC was able to harness the credibility and motivation of the accreditation process.

Scope

Creating a perfect SLO assessment process and using the data collected to improve outcomes is a vast undertaking for any HEI. Institutions looking to implement a change of this magnitude must realize it is not feasible to accomplish all at once. Nor is it effective to incorporate change too slowly. There must be a balance between being too ambitious and not ambitious enough. Ivancevich et al.'s (2014) concept of scope becomes relevant at this stage of the discussion. In an organizational change initiative, scope refers to the magnitude or scale at which the change will be implemented. Again, administrators at GCC, mainly Mary, showed awareness of this issue. Mary said,

It turns out, when you adopt an entire new framework, and you rewrite all your institutional learning outcomes, and you line up your accreditation schedule to align with these changes, it takes time. You can't let up because it didn't get done in a year. You have to keep going and you have to stagger it and take both bite-size and big-sized chunks all at the same time.

Assessment coach Jenny also addressed how the scope of implementation is a constant consideration. Jenny stated,

We realize that not everyone is ready for these changes, so we talk about who is ready and who needs help. We talk about who's been collecting a lot of data and is ready for the next step. For example, right now we're planning for summer assessments and we're looking at who's got a lot of stuff in the hopper, because now that everyone has really started to collect, we're starting to think about where the opportunities are.

At first, it may seem counterintuitive to purposefully tamp down ambition.
However, in the long run of an organizational change initiative like GCC’s SLO assessment change, finding a sustainable scope of implementation is crucial to long term success. Experimentation There is not a singular prescriptive way for institutions to implement the use of SLO assessment data to improve institutional quality (Baker et al., 2012). Because of this fact, institutions are free to experiment to determine what methods work best for them. As mentioned in the previous section, GCC determined it would be best for them to implement these SLO assessment changes in a piecemeal fashion, beginning with those programs which were most open to the changes. According to Ivancevich et al. (2014), this piecemeal implementation provides great conditions for experimentation. When programs at GCC experimented with the implementation of these SLO assessment changes, assessment administrators received feedback, allowing them to learn from each successive iteration. “As the experimental attempts provide positive signals that the program is proceeding as planned, there’s a reinforcement effect.” (Ivancevich et al., 2014, p. 529). Luke discussed how GCC benefitted from their small-scale experimentation, and realized larger scale success saying, We found some of our people that were really into assessment, and we used them to help us pilot and figure out some processes and administrative type things. These people were willing to deal with some of the trial and error, and once we got it figured out, we were able to present more polished processes to the more resistant people. Mary also presented her experience with experimentation when discussing the creation of a visual representation of their assessment process saying, 99 We recast assessment at GCC with the creation of the learning framework, which is a graphic organizer by which we make sense of the outcomes that we require of our students. We piloted this concept and found it helped people develop a shared meaning and understanding of what we were trying to accomplish. The willingness of GCC to try new ideas, evaluate the outcomes, and scale up the successful concepts was key in driving success on a large scale. Summary Chapter 5 provided the results showing how each of Bolman and Deal’s (2017) four frames were interpreted to be working in isolation to address Ivancevich et al.’s (2014) potential impediments and limiting conditions. These results were broken down separately by impediments and limiting conditions that GCC struggled with, and impediments and limiting conditions where they found immediate success. I also presented evidence of GCC addressing Ivancevich et al.’s (2014) portion of their model called implementation of the method, which includes addressing issues of timing, scope, and experimentation. 100 CHAPTER 6: RESULTS & THE INTEGRATION OF THE FOUR FRAMES The previous chapter explored participant experiences where each of Bolman and Deal’s (2017) four frames were interpreted as working or being leveraged in isolation. However, organizations are complex entities where decisions and subsequent activities can be perceived or interpreted differently by distinct actors. Bolman and Deal (2017, p. 301) echo this previous statement saying, “Multiple realities produce confusion as individuals see the same event through different lenses.” This complexity calls for change agents to understand the interplay and integration of the four frames to make the most effective decisions. 
The following chapter presents the interpretation of participant experiences where multiple frames were being integrated to implement change to SLO assessment practices at GCC. Because any scenario involving multiple frames could draw on two, three, or four of the frames at once, 11 possible combinations of the frames exist (six pairs, four triads, and one combination of all four). Although this chapter does not present examples of participant experiences for all 11 combinations, the combinations that proved to be the most important to GCC's success in implementing SLO assessment change are discussed. It is important to note that in the previous chapter, evidence was presented showing GCC having both challenges and immediate success around the potential impediments and limiting conditions. However, in this chapter, the themes derived from participants were exclusively examples of successful ways in which GCC addressed potential impediments and limiting conditions.

Combination: Structural, Human Resource, and Political

Within the data collected there were many examples provided where more than one of Bolman and Deal's (2017) four frames seemed to be integrated and working at the same time. Specifically, this section focuses on examples where the structural, human resource, and political (STHRPL) frames were interpreted as working at the same time and towards the same end. Despite their distinct emphases, these frames are not mutually exclusive; rather, they intersect and complement one another in complex ways. For instance, while the structural frame provides the framework for organizing tasks and resources, the human resource frame informs how these structures influence individual and group behavior. Similarly, the political frame adds a layer of understanding by revealing the underlying power dynamics that influence both structural design and human resource practices (Bolman & Deal, 2017). Based on participant interviews, two major themes emerged, which reflected the integration of the structural, human resource, and political frames. Those themes were coded as (a) managing conflict and resistance to change and (b) securing resources. The following subsections provide greater detail on these themes.

Managing Conflict and Resistance to Change

Conflict and resistance were consistently reported by participants in this study while implementing changes to the assessment process at GCC. The nature of both conflict and resistance presupposes competing sides, which sets the conditions for the political frame to be effective. Conflict also involves people and their relationships, which inherently draws on the human resource frame. The structural frame relies on roles, goals, and rules, and several participants provided similar examples highlighting how it was leveraged in situations of conflict where the political and human resource frames were already being utilized. Luke discussed how leaders of the assessment change initiative anticipated conflict and resistance and planned accordingly saying,

Academic Senate had already approved everything we were implementing. A faculty body approved this process, so they couldn't argue that we weren't going to do it because we were going to do it. The ones who were resisting, they're faculty, they're the group who approved it, so the argument was short-lived.
Mark shared a similar experience about how conflict and resistance were quelled relying on the integration of the STHRPL frames saying, People got a bit territorial in their disciplinary content. For instance, social science felt they were the only faculty qualified to teach about social justice. However, there are other course learning outcomes that align to these diversity, inclusion, and social justice outcomes and the Assessment Committee established these outcomes can be taught and measured in other courses. The Assessment Committee is representative of the entire college, involving a representative from each department, so if they approve a course as meeting the knowledge and skill areas for DISJ outcomes, it leaves less of a toehold for a particular department to disapprove. Hannah also experienced this type of pushback, summing up the experience in fewer words saying, “There were a lot of disagreements and not all decisions were amicable, but we just did it through governance.” Relying on solid institutional structures during times of strife in the political and human resource frames seems to have been a particularly useful tactic for GCC. There are times when conflict and resistance from individuals presents an impasse. Hannah recalled a time of great disagreement between a senior faculty member and an assessment coach. Tension was high and feelings were hurt. Hannah, who showed buy-in to the process, volunteered to take a larger role, while those who were in conflict stepped away temporarily. 103 I stepped in and took what had been done to that point and worked with the coach to complete it. I assured them I’d see that our views were adequately represented. I took the initiative to take care of this because I knew it was tense. Allowing those engaged in great conflict to step away from the process, while allowing those who bought into the process to take a greater role proved to be helpful for GCC. Luke discussed an experience where after dealing with a particularly harsh conflict, he and Mary had a helpful revelation pertaining to combating conflict and resistance. He recalled, When dealing with future conflicts, we decided to challenge the archetype of resistance, not the individual. For example, you have the old time faculty member who thinks everything was better back in the day. Or you have the faculty member who believes their discipline is pure and everything else isn’t. Or you have the administrator who needs attention. That helped us a lot, actually, to not think about the person who was causing us problems, but what are their motivations? Why is this person causing us problems here and there? How do we work with them or around them? While Luke mentioned working around individuals at the center of conflict, he and others shared how the goal was to work with the naysayers. Luke said, When someone was being more of a barrier than a solution to the process, we often recruited them onto our committee. This was an effective way of guiding them onto our side. They had good ideas that helped shape the process and what they were arguing about was, often a very small issue. Mary echoed Luke’s sentiment about winning over the naysayers, saying, I mean, personality is always challenging, right? Assessment is never super popular with people. I think we won some hearts and minds because I think we've done really good 104 work and I think there's some people who are absolutely getting it and seeing improvements. 
These people who are won over, can be the biggest champions moving forward. Stripping conflict of the personal nature and then recruiting those naysayers seemingly allowed GCC to diagnose and solve future conflicts and resistance more easily. Securing Resources One of the most important determinants of success for any institutional change initiative, such as GCC’s SLO assessment shift, is having adequate monetary resources. Community colleges are in the midst of a monumental long-term decline in enrollment, which has led to ever scarcer resources (Marcus, 2023). There are many ways to go about securing resources for an institutional project, each way potentially relying on a single or multiple frames. Typically, securing funding for official projects requires an official process (i.e., the structural frame) to be utilized. There are also less formal methods of resource attainment employed including relying on relationships (i.e., the human resource frame), and leveraging power and influence (i.e., the political frame). Several participants shared the importance and difficulty of obtaining adequate resources to achieve the ends of GCC’s SLO assessment change initiative, which seemed to be a mix of the STHRPL frames. On the importance of having a direct relationship with resource allocators, Mary said, If I hadn't changed positions (now VP), it would have been really, really critical that I continue to have alignment with folks who have the ability to resource a project. Being able to leverage positive relationships into project funding was and is crucial. Mark was even more specific regarding the relationships most important to funding saying, 105 Both the President and Provost were very supportive of the process. The President was particularly supportive. Through those relationships we were good at getting the necessary resources. For example, we were able to get release time for faculty and we were able to create a tech position to create customized software for us. Not only was obtaining resources an important step for GCC’s success in implementing the SLO assessment initiative, but finding ways to conserve or save resources was equally important. Luke shared how GCC was able to conserve resources saying, “We utilized student help through internships, which was a money saver for us. We then parlayed that intern into a full-time position upon graduation, which saved money in training and getting them up to speed on the duties.” By recognizing the importance, not only of the formal structures necessary to obtain resources, but also the human resource and political aspects of resource attainment, GCC was able to gather the necessary resources to continue moving the initiative in a positive direction. Combination: Structural, Human Resource, and Symbolic The next meaningful combination revealed through participant interviews included the structural, human resource, and symbolic frames (STHRSY). The experiences conveyed by GCC faculty and staff in this section seem to show various individuals utilizing one or more of the three frames (STHRSY) to make sense of and successfully navigate the scenarios discussed. As discussed in the previous section, Bolman and Deal (2017) state these frames are not mutually exclusive, so it is possible for two individuals working together to solve the same problem to frame the problem–and potential solution–using a different frame or frames. 
Pertaining to the combination of STHRSY, one theme emerged amongst participants, and that theme was coded as (a) creating a culture of assessment. The following subsection provides greater detail on this theme.

Creating a Culture of Assessment

The concept of creating a culture of assessment is repeated in the literature ad nauseam; however, a culture is not something easily prescribed or replicated. According to Weiner (2009), there are certain institutional attitudes and behaviors connected with particularly successful cultures of assessment. Weiner's (2009) list of institutional attitudes and behaviors includes things like common use of assessment terms, ongoing professional development, faculty ownership, administrative support, and several others. Looking at one of these institutional behaviors–ongoing professional development–one can easily see shades of the STHRSY frames. For example, the structural frame must be leveraged to create formal and ongoing professional development sessions. The ties between professional development and the human resource frame are very clear because the goal of professional development is to grow the knowledge and understanding of personnel. Lastly, the ongoing nature and sense of camaraderie found in effective professional development, which are indicative of the symbolic frame, can create a ritualistic aspect to the assessment practices.

Leaders at GCC knew creating a culture of assessment was important, but they had to experiment with many different methods to find the most effective way to establish and sustain said culture. Mary stated,

As cliche as it sounds, success has and always will be the development of a culture of assessment. For me, this is when systematic conversations about teaching and learning are happening within the faculty to make and drive better instruction.

In the end, GCC seemed to rely heavily on the creation of rituals to establish their own culture of assessment. Speaking to the development of a culture of assessment, Mark stated, "I think it really started when we adopted our learning framework and began aligning our programs to our learning framework. We now have a graphic organizer that demonstrates what we are trying to accomplish." Once a mental model was established, making it easier for everyone on campus to understand the overarching goals of the assessment changes, Jenny felt that understanding faculty's current wants and needs helped solidify the creation of the culture of assessment. Jenny said,

We tend to be people that will listen, so we ask our colleagues how they feel about the changes and the new processes. The coaches meet every single week, so we take what we heard from our departments and use that input to plan worthwhile assessment activities. I think we've started to shift the culture to people valuing the assessment process and activities.

Active listening and responding to the wants and needs of faculty involved in these assessment changes seems to have strengthened the culture of assessment at GCC.
In an attempt to solidify the culture of assessment at GCC, assessment leaders established Teaching and Learning Days, which, according to Mark, created opportunities for "guided meaning-making." On the creation of Teaching and Learning Days, Mary alluded to the use of the structural frame stating, "We utilized governance to establish these two professional learning days per year dedicated to assessment." Mary wasn't the only person who implied the importance of the structural frame when trying to establish a culture of assessment. Referring to the establishment of Teaching and Learning Days, Mark stated, "These are contracted and mandatory days for faculty that we use to conduct what we call 'closing the loop conversations.'" While a culture of assessment can become self-perpetuating, it does not necessarily start out that way, which GCC seemed to recognize and address by relying on formal structures. The experiences outlined in this section highlight the importance of the interplay between the STHRSY frames, and how GCC successfully used these frames to establish their own culture of assessment.

Combination: Structural, Human Resource, Political, and Symbolic

The final combination described by multiple participants included all four frames (STHRPLSY). The experiences discussed in this section describe times when participants were viewing the situation and behaving in ways that leveraged aspects of all four of Bolman and Deal's (2017) frames. To reiterate an important point touched upon in previous sections, these frames are not mutually exclusive, so it is possible for two individuals working together to solve the same problem to frame the problem–and potential solution–using a different frame or frames. Regarding the STHRPLSY combination, two major themes emerged from participant interviews, and they were coded as (a) common understanding through communication and (b) faculty coaching model.

Common Understanding Through Communication

When implementing large-scale change within an organization, especially when there are heterogeneous groups working in collaboration, building shared or common understanding is crucial for success (Bittner & Leimeister, 2014). While building a shared understanding is clearly important to the success of a change initiative, there is an almost infinite number of ways to approach building this shared understanding. For example, organizations may use formal communications like marketing campaigns or press releases, which leverage the structural and symbolic frames. To the same end, an organization may rely more heavily on informal communications to build shared understanding, like hallway conversations to recruit supporters, which rely more on the human resource and political frames. Effective change initiatives, like the one undertaken at GCC, utilize multiple approaches to create a common understanding spanning all four of Bolman and Deal's (2017) frames.

Leadership at GCC, along with rank-and-file supporters of the assessment changes, seemed to be keenly aware and intentional about the need to undertake a multifaceted approach to create a shared understanding of the proposed changes to assessment. Regarding the attempt to create shared understanding, Jenny said,

That was very intentional. The coaches, every single week, talked to our departments. We tend to be people that will listen, so we ask and solicit. We keep our ears to the ground to find out what people are saying.
We plan occasional group emails, we talk about who needs help. We talk about who's been collecting a lot of artifacts and is ready for our next assessment. We're always trying to think about where the opportunities are to connect with users. Beth shared her experiences around building common understanding and emphasized the informal communication channels saying, One of the strategies at the beginning of this involved Mary, Luke, and Susan spent a lot of time walking the halls and talking to people. Really just looking around and saying, “hey, do you have any questions about this too? Can we help understand? What do you think? Do you think it's a good idea?” Getting that input builds some goodwill, then when you have one side versus the other, it is much easier to talk and discuss the problem and move on. Beth went on to stress the importance of this informal method of creating common understanding saying, “I think faculty to faculty conversation is the most important. If you can have faculty who understand assessment and why we are doing it, it becomes easier to get them to put some effort in.” 110 Multiple participants discussed the importance of more formal efforts to build a common understanding of the assessment changes. Beth brought up the marketing efforts stating, We also had our marketing person who created really great communication materials for us. And in visual communication, which I think is really important, using a graphic organizer to represent the Learning Framework (a building with pillars) was hugely successful. Marketing took that concept and really brought it out into really good visual communication. Mark shared his thoughts on how he felt GCC was able to facilitate common understanding by stressing the importance of Teaching and Learning Days saying, I think that that's been really pivotal in terms of helping faculty to see the importance of assessment and the closing loop conversations. Leaders of Teaching and Learning Days act as thought partners, people who are a little bit more steeped in how to look at assessment data, how to interpret assessment data. And because it’s led by faculty coaches, it’s not the administration saying, “Hey, you're required to show up today.” It’s a peer saying, “We know what you do is important and we want to have these conversations with you about it.” Jenny felt the most important formal communication effort leading to GCC’s success was the faculty coaching model saying, There are key people on campus who need to buy in, and if they don't buy in, you can't really make any progress. By utilizing this coaching model we are able to find out who needs extra conversation about this? Who do you need to put in a position of influence? Who needs communication? On the importance and volume of communication, Jenny went on to say, 111 The communication has to be 10 times what you think it's going to be in order to be successful. I think it's leveraging your chairs to some extent. Using the chain of command to assist you. If you can't get the chairs to buy in, then you're not going to get anywhere. Jenny concluded her thoughts on creating common understanding by sharing a powerful thought saying, Faculty want to be trusted and recognized for their efforts. They are constantly improving their program and that they're constantly doing what's best for their students, but the old way of going about it is not quantifiable. Helping them to understand how these assessment changes can help to establish that trust and recognition. 
GCC’s use of multifaceted communication methods, which relied on all four of Bolman and Deal’s (2017) frames to build common understanding, seemed to be a key component to the success of implementing their institutional assessment changes. Faculty Coaching Model The final theme, and arguably most important to the success of GCC’s assessment change initiative, also involved all four of Bolman and Deal’s (2017) frames. The faculty coaching model was discussed positively and at length by every participant. The structural frame was seen in the creation of formal positions and the allocation of resources (i.e., faculty release time). The coaches were hand picked based on their existing relationships and upon their ability to cultivate positive working relationships, which is central to the human resource frame. Another reason these faculty were chosen was their existing positive reputations and their ability to deal with conflict. The combination of a good reputation and negotiation skills made these faculty members well-equipped to deal with challenges from the political frame, which were sure to come. Lastly, faculty coaches were chosen for their ability to effectively communicate, both 112 formally and informally, toward a shared understanding of the assessment changes they were attempting to facilitate. The responsibility of communicating the goals of the assessment change initiative is reflective of the symbolic frame. Regarding the creation of the faculty coaching model, Mary said, I couldn’t be in all places at all times, and pretending that faculty would just snap their fingers and get it done on their own is another problem with the expectations around assessment. It was clear we needed several key people working together to lead this and we needed resources. Being able to rely on trusted coworkers to conduct this important work by proxy allowed Mary to engage in higher-level strategic issues regarding the assessment changes. Beth spoke about how the faculty coaching model was one of the first things Mary initiated upon being hired as Assessment Coordinator saying, When she got the job I went right to her office and I said, “Hey, I've been part of the assessment committee and I've been really interested in this. I'm with you. What can I do to help?” And she asked me how much release time I would need to really help. And she went and advocated for that release time to lighten my course load so that I could focus on the assessment work. And the same is true of Susan. Then Mary asked, “What are we going to call you?” We did things called boot camps for a little while, but we weren’t into the military metaphor, so we went with the title of coach and it took off from there. I think the title of coach really worked because it didn't feel like the administration saying, “You have to do this.” It felt like one of your peers saying, “Here's why I think it's important. Here's what I see here is what the potential is.” 113 The nearly universal agreement amongst participants regarding the positive impact of the faculty coaching model was clear throughout this study. While Luke agreed with others regarding the importance of the faculty coaching model, he recalled the time it took to get this model approved saying, “It didn’t happen overnight. 
It took an immense number of hours to finally convince our former administrator to allow for the release time.” Even though it took a great deal of time and effort, Mark explained why the wait was worth it saying, In the past there have been a great deal of challenges to get faculty to buy into the process of assessment. But, having these peer assessment coaches partner with faculty to assist on things like bringing courses or programs through our governance system, or leading teaching and learning days, brings credibility and buy-in to the entire process. Susan provided a similar view as Mark, focusing on the importance of the peer aspect of this model saying, If Mary were to be the person, or some other administrator saying, “Hey, come on in, sit down with me. Let's talk through this.” Do you think you would see the same success that you're seeing with the faculty coaching model? If it weren't faculty? No, you wouldn’t, and Mary would say the same thing. And also, Mary needs to be the authority. She needs to be able to tell the faculty, “You need to do this. Not participating is a breach of contract. This is something that you are required to do.” Susan described this dynamic–having faculty coaches leading the work and administrators acting as the authoritative force–as the strategic separation of powers. 114 Jenny also thought the faculty coaching model played a key role in successfully reimagining GCC’s assessment practices, however, she felt the coaches’ active and intentional communication was the key factor. Jenny said of the faculty coaching model, There are key people on campus who need to buy in, and if they don't buy in, you can't really make any progress. By having this coaching model, we were able to discuss questions like who needs extra conversation about this? Who do we need to put in a position of influence? Who needs the extra communication? We ended up leveraging our chairs to a large extent. If you can't get your chairs to buy in, then you're not going to get anywhere. While participants may have viewed the importance of the faculty coaching model from different frames (i.e., allocation of resources, building buy-in through existing relationships, political capital of Mary and the coaches, or ability to communicate strategically), all of the participants agreed on the importance of this model to the success of this change initiative for GCC. Overarching Themes Over the last two chapters, many themes were presented showing how GCC integrated aspects of both Bolman and Deal (2017) and Ivancevich et al.’s (2014) models. During the data analysis stage, these many themes were coded once again and winnowed down into several inclusive and overarching themes. The results of this final round of coding yielded 3 themes, which were labeled (a) managing conflict and resistance, (b) the role of integrated leadership in driving change, and (c) building a culture of assessment through collaboration and inclusivity. These overarching themes will be discussed at length and situated within the larger context of higher education in the final chapter. 115 Summary Chapter 6 provided the results of participant interviews highlighting how Bolman and Deal’s (2017) four frames integrated, working at the same time. Not every possible combination of the four frames was presented, rather only a few of the combinations interpreted as vitally important to GCC’s success were presented. Those combinations were STHRPL, STHRSY, and STHRPLSY. 
Of particular importance to GCC's success seemed to be the STHRPLSY combination, specifically the faculty coaching model. All participants seemed to agree on the importance of the faculty coaching model to the success of the assessment change initiative. Chapter 6 ended by introducing the final overarching themes, which were derived through a final level of thematic coding that took into account themes from the four frames working in isolation and the four frames working in combination.

CHAPTER 7: DISCUSSION AND FUTURE RESEARCH

Chapters 5 and 6 presented major themes derived from participant interviews explaining how GCC implemented organizational changes to improve their SLO assessment practices. The emergent themes from Chapters 5 and 6 were centered on the initial research question, specifically, how did one successful community college implement and sustain the use of SLO assessment data to improve institutional quality? This final chapter provides a summary of the study, a discussion of the major findings, the implications for practice, and the implications for future research.

Summary of Study

For decades, HEIs have attempted to measure SLOs, yet they have had much difficulty finding effective ways to do so (Hegji, 2017; McLendon et al., 2006). Institutions have even more difficulty finding meaningful ways to use the SLO data collected to make impactful changes (Blaich & Wise, 2011). Institutions engaging in assessment-related activities that produce non-actionable data are likely wasting resources and diminishing morale among faculty and staff (Baker et al., 2012). In a time of increasing scrutiny and doubt around the value of higher education, institutions—particularly scarcely resourced community colleges—cannot afford to continue pouring resources into SLO assessment activities only to obtain non-actionable results.

As mentioned in the previous paragraph, I sought to answer how GCC found success in changing their SLO assessment processes to yield actionable data from which to improve institutional outcomes. My original hypothesis was that this case study would show evidence of GCC integrating some or all of the frames in Bolman and Deal's (2017) Four Frames Model when enacting their SLO assessment change initiative. I also hypothesized the data would show an understanding amongst GCC's assessment leaders—either stated or implied—of the three distinct phases of organizational change and development in Ivancevich et al.'s (2014) Model of Organizational Change.

In Chapter 3, I discussed the utilization of an interpretivist methodology, and the goal of this interpretivist work was to discover how this institution managed to implement and sustain the use of SLO data to improve institutional quality by understanding the lived experience of the individuals tasked with planning and carrying out this change. The data collection procedure helped to unearth previously untold and firsthand interpretations of how GCC was able to change institutional assessment practices for the better. While the participants of this study are all unique individuals who experienced this change in different ways and with different motivations, there were clear similarities in their experiences of this change initiative. These shared and impactful experiences were distilled through multiple levels of coding to produce themes, which were addressed in Chapters 5 and 6.
Further analysis of the themes presented in Chapters 5 and 6 was conducted, which yielded three overarching themes dubbed (a) managing conflict and resistance, (b) the role of integrated leadership in driving change, and (c) building a culture of assessment through collaboration and inclusivity. The following section provides a discussion of the evidence (i.e., three overarching themes) of GCC utilizing both the conceptual and theoretical frameworks hypothesized at the beginning of this dissertation to successfully implement their SLO assessment changes, and how this contributes to current practice and future literature.

Discussion

The most significant finding in this study is evidence of GCC's assessment leaders utilizing the frameworks from both Bolman and Deal (2017) and Ivancevich et al. (2014) in their successful SLO assessment change initiative. The assessment leaders of GCC may not have intentionally organized their efforts with these particular frameworks in mind, yet the evidence collected in the participant interviews clearly applies to both proposed frameworks. In the extant literature, researchers have independently used both the Bolman and Deal (2017) Four Frames Model and Ivancevich et al.'s (2014) Model of Organizational Change, along with other similar theoretical models, to ground studies in higher education and organizational research (Birnbaum, 1988; Bolman & Gallos, 2011; Creswell & Creswell, 2018; Kezar, 2014; Miller, 2003). So, seeing evidence of these models in use at GCC was not very surprising. What was surprising was seeing such strong evidence for both models being used simultaneously by GCC to successfully organize and implement their SLO assessment change initiative, as was initially hypothesized.

Evidence of the Effectiveness of Theoretical and Conceptual Frameworks

Metaphors are often helpful to convey complex information, and a fitting metaphor to convey the importance of the conceptual and theoretical frameworks of this study is construction. GCC's desire to change their SLO assessment practices can be likened to the desire to construct a house. The theoretical framework—Ivancevich et al. (2014)—is the blueprint in this metaphor, governing what important structural components must be built to successfully complete the construction of the house. The conceptual framework—Bolman and Deal (2017)—represents the tools necessary to carry out the construction.

One of the main aspects of the Ivancevich et al. (2014) model I hypothesized would be evident within participant experiences was the impediments and limiting conditions. Within this portion of the model, Ivancevich et al. (2014) posit that, regardless of the specific change initiative in question, the contexts of resistance to change, leadership climate, organizational culture, and the formal organization will determine success or failure. The three overarching themes, identified because of the overwhelming agreement among participants about their importance, were managing conflict and resistance, the role of integrated leadership in driving change, and building a culture of assessment through collaboration and inclusivity. These three overarching themes map nearly exactly to three of the four contexts outlined by Ivancevich et al. (2014). Ivancevich et al.'s (2014) context named resistance to change aligns with the first overarching theme called managing conflict and resistance.
The second of Ivancevich et al.'s (2014) contexts, referred to as leadership climate, aligns with the second overarching theme called the role of integrated leadership in driving change. The third of Ivancevich et al.'s (2014) contexts, named organizational culture, aligns with the third overarching theme named building a culture of assessment through collaboration and inclusivity. The fourth context, formal organization, was not represented overtly in the three overarching themes; however, shades of this context can be seen in how GCC utilized the organizational structure and systems of control to implement the faculty coaching model.

Evidence of GCC utilizing Bolman and Deal's (2017) frames both in isolation and in an integrated manner was discussed at length in Chapters 5 and 6. However, addressing how GCC used the four frames in conjunction with Ivancevich et al.'s (2014) model to tackle the three overarching themes is important. The first overarching theme identified by participants in this study, managing conflict and resistance, directly relates to Ivancevich et al.'s (2014) context of resistance to change. To address this context, GCC leaders attempted to leverage relationships first (i.e., human resource frame), but when existing relationships were not enough to move past the conflict, they relied on the formalized rules, which were agreed upon using shared governance (i.e., structural frame). Such actions exemplify how GCC applied Bolman and Deal's (2017) four frames to address specific impediments and limiting conditions outlined in Ivancevich et al.'s (2014) model of organizational change and development.

The second overarching theme outlined in this study, the role of integrated leadership in driving change, directly relates to Ivancevich et al.'s (2014) context of leadership climate. The entirety of Chapter 6 discusses how assessment leaders at GCC understood that every change they asked faculty and staff to make had the potential to be interpreted differently by every individual involved. As Dougherty and Townsend (2006) explained, no two HEIs are alike, and that is especially true for community colleges, which often serve multiple missions. This study suggests, first and foremost, assessment leaders must seek to understand their own institutional context so they can integrate Bolman and Deal's (2017) four frames effectively. In fact, thematic analysis showed participants describing experiences which mapped to 10 of the 11 possible combinations of Bolman and Deal's (2017) four frames. This integrated leadership, described by participants, seems to have provided an element of sustainability for this change initiative by offering multiple potential solutions to address issues with implementation.

The third overarching theme outlined in this study, building a culture of assessment through collaboration and inclusivity, directly relates to Ivancevich et al.'s (2014) context of organizational culture. As Kinzie (2015) notes, the importance of collaboration and inclusivity to the success of assessment activities cannot be overstated. Institutions that create and implement new SLO assessment practices through an intentionally collaborative and inclusive process build stronger links between all areas of the academic mission and the SLO assessment process (Kinzie, 2010). GCC utilized a multifaceted approach to building a culture of assessment by leveraging the curriculum committee and the faculty coaching model and by reframing how assessment was communicated.
The curriculum committee was described by several participants as the most important committee at GCC and was made up of an intentionally diverse and interdisciplinary set of members. The formalized power of this group (i.e., structural and political frames) along with the inclusivity and representation of the entire institution (i.e., human resource and symbolic frames) were fundamental in garnering buy-in and building a sustainable culture of assessment at GCC. Another way GCC successfully built a culture of assessment was by implementing the faculty coaching model. In order to help faculty begin to measure and use SLO data in decision making, peers who were familiar with the new assessment processes were selected and provided with release time (i.e., structural frame) to drive the process. Having funding for assessment activities allowed GCC to leverage the structural frame by creating paid roles to accomplish their goals. The faculty coaches were selected based on their knowledge of assessment and their ability to influence others (i.e., human resource frame). The faculty coaching model was unanimously perceived as a positive factor in GCC's success.

The nature of the evidence seems to suggest the findings—utilizing both Ivancevich et al. (2014) and Bolman and Deal (2017) to implement and sustain SLO assessment change initiatives—may be more generalizable than initially assumed. Portions of Ivancevich et al.'s (2014) model, specifically impediments to change, seem to be broadly applicable and may help other institutions that are similar to GCC. Bolman and Deal's (2017) four frames are a bit more specific, in that, while it is likely any institution implementing a similar organizational change will utilize many combinations of the frames, the specific institutional context will drive which combinations are most effective. Utilizing one framework and not the other may yield partial results. For example, an institution may have a firm understanding of their historical context and know when and in what combination to utilize Bolman and Deal's (2017) four frames to address what Ivancevich et al. (2014) call resistance to change. However, if the institution is not aware of the issues of leadership climate and organizational culture outlined by Ivancevich et al. (2014), the change initiative will likely see significant challenges. On the other hand, an institution may be aware it needs to address Ivancevich et al.'s (2014) issues of leadership climate, resistance to change, and organizational culture, but if it is not looking at these issues through all four of Bolman and Deal's (2017) frames, it may not be utilizing the most appropriate means of implementing the change initiative. This institution would likely see significant challenges as well. What I believe can be generalized to similar institutions going through an organizational change like the one faced by GCC is to look to Ivancevich et al.'s (2014) model of organizational change and development as a blueprint for understanding likely impediments, and to look to Bolman and Deal's (2017) four frames for the tools to overcome those impediments.

Implications for Practice

While this study is about one specific institution, there are broader implications pertaining to SLO assessment within higher education. This study can be used to help guide institutions similar to GCC toward an understanding of how to implement effective SLO assessment practices and use the subsequent data to improve.
In the following sections, I provide three implications for practice within higher education, which are based upon the three overarching themes found in the data. First, institutions should establish a structure of peer faculty support for the initiative to combat conflict and resistance. Second, institutions should provide training and professional development for leaders focusing on Bolman and Deal's (2017) four frames and how these frames can be effectively leveraged to address Ivancevich et al.'s (2014) impediments and limiting conditions. Lastly, institutions should overtly connect all levels of learning outcomes to the institutional mission to lay a foundation for building a culture of assessment.

Implication for Managing Conflict and Resistance

What was abundantly clear from the interviews in this study was that faculty coaches were the most effective means of assuaging resistance to this change initiative. Faculty peer support helps create an environment where individuals feel more comfortable engaging in new initiatives because trust and credibility are often established more quickly among colleagues (Kezar, 2014). Kezar's (2014) book goes on to suggest grassroots change efforts led by peers are perceived as more authentic and less hierarchical, which can reduce resistance and make faculty more likely to be open to and engaged in institutional change efforts. The success of faculty-supported initiatives is likely due to the fact that faculty peers understand the unique challenges and concerns that come with teaching responsibilities. Faculty supporters are better suited to present new initiatives to other faculty in familiar language and from a shared perspective, rather than as a mandate from administrators. Also, reluctant faculty are more receptive to ideas presented by others who share similar professional experiences and values (Austin, 2011; Guskey, 2002).

Despite these successes, resistance to change at GCC was not entirely eliminated. Faculty concerns about workload, loss of autonomy, and the perceived redundancy of assessment efforts persisted throughout the implementation. Addressing the resistance required both structural and relational strategies, highlighting the multifaceted nature of change management in higher education and the usefulness of Bolman and Deal's (2017) frames. Resistance to change often manifests in different ways, from outright opposition to more passive disengagement. At GCC, resistance fell into three primary categories. The first resistance category was philosophical resistance, where some faculty members viewed SLO assessment as an externally imposed bureaucratic exercise rather than a tool for improving teaching and learning. The second resistance category was cultural resistance, where GCC's history of assessment failures had created a climate of skepticism, making it difficult to establish trust in any new process. The third category of resistance was practical resistance, where faculty workload concerns, lack of training, and fear of punitive use of assessment data drove hesitancy.

Both assessment leaders and faculty coaches recognized that the resistance to SLO changes was taking different forms. This understanding allowed GCC to tailor its response strategies: faculty coaches tended to address philosophical concerns by reframing the narrative around assessment from a compliance activity to a means of improving student success.
Assessment leaders addressed the practical resistance by instituting annual assessment training and creating the faculty coaching role to relieve some of the burden from faculty. Resistance to change is not merely an intellectual or procedural challenge. Often, particularly in the case of GCC, resistance is deeply emotional. Faculty members may experience anxiety, frustration, or resentment when faced with change, particularly when past initiatives have failed or felt forced. At GCC, historical grievances about ineffective assessment practices contributed to a negative perception of the new initiative. The faculty coaching model, again, played a large role in addressing the emotional barriers presented by reluctant faculty. Coaches relied on active listening and validation of concerns, while demonstrating a commitment to righting past wrongs. Once they found some early successes, coaches utilized these successes to build credibility for the process. Lastly, coaches focused on having clear and consistent communication to ensure ongoing transparency with the change process.

Based on the success of GCC's faculty coaching model and other supporting scholarship, establishing a structure of peer faculty support as early as possible is a critical strategy for combating conflict and resistance to institutional change. By leveraging the trust and credibility among colleagues, faculty peer support can reduce resistance and increase willingness to participate in new initiatives. The ability of faculty to communicate in familiar and relatable terms can ensure new ideas are effectively conveyed and embraced, enhancing overall receptiveness to institutional changes. Together, these benefits underscore the importance of investing in peer support structures to help bring sustainable positive change.

Implication for the Role of Integrated Leadership in Driving Change

The importance of GCC's assessment leaders' ability to adapt during the implementation of SLO assessment changes, using multiple combinations of Bolman and Deal's (2017) four frames to address Ivancevich et al.'s (2014) impediments and limiting conditions, cannot be overstated. In Reframing Academic Leadership, Bolman and Gallos (2011) apply the four frames specifically to higher education, arguing academic leaders benefit from understanding and applying these frames. They emphasize colleges and universities are complex and ambiguous institutions, where challenges often require multi-faceted solutions. This literature suggests academic leaders who can shift between different frames are better equipped to manage change, resolve conflicts, and inspire faculty and staff. Further, Kezar and Eckel (2002) discuss the importance of using multiple perspectives, like those offered by Bolman and Deal's model, when managing organizational change in higher education. They note successful change efforts often require understanding structural processes, human relationships, political dynamics, and the culture of an institution. The success of the initiative hinged not only on the strategic application of Bolman and Deal's (2017) four frames but also on the leaders' ability to recognize and mitigate Ivancevich et al.'s (2014) impediments and limiting conditions. Ivancevich et al. (2014) highlight leadership climate as a critical factor in successful organizational change. Leadership climate significantly impacts how faculty and staff respond to change efforts.
At GCC, the leadership climate was carefully cultivated to foster collaboration and engagement while minimizing resistance. This aligns with Kezar and Eckel's (2002) argument that transformational change in higher education requires leaders who understand institutional culture and use multiple leadership perspectives to engage faculty and staff. One of the critical aspects of GCC's leadership approach was its emphasis on adaptive leadership. Assessment leaders recognized that rigid, top-down mandates often fail in higher education and had failed at GCC in the past. Instead, they adopted a flexible model that allowed for distributed leadership, in which authority and responsibility were shared. This distributed leadership approach allowed faculty coaches to take ownership of assessment initiatives, thereby reducing resistance and increasing buy-in. This result is in line with Kezar's (2014) findings that when faculty perceive leadership as inclusive and participatory, they are more likely to embrace change initiatives.

Research suggests Bolman and Deal's (2017) four frames, along with Ivancevich et al.'s (2014) model, serve as effective tools for leaders in higher education. This dissertation, along with the literature referenced in the previous paragraph, suggests training academic leaders in these models can improve their ability to understand and manage the complexities of their institutions, making them more adaptable and strategic in addressing challenges like the ones presented by implementing SLO assessment changes. Leaders in higher education commonly climb the ranks without formal training in organizational change or leadership theories. As a result, these leaders may struggle to manage complex institutional changes effectively. To address this potential pitfall, institutions should invest in professional development programs that train leaders in applying Bolman and Deal's (2017) four frames and Ivancevich et al.'s (2014) model of organizational change to real-world challenges. By equipping leaders with the ability to analyze situations through different frames and an understanding of common impediments or limiting conditions, institutions can foster a more flexible and responsive environment for implementing large-scale changes.

Other institutions looking to implement similar changes would be wise to prioritize professional development and training for institutional leaders to ensure they understand the importance of integrated leadership in driving institutional change. The training should focus on several key concepts. First, situational leadership training can teach leaders to assess different contexts and apply the appropriate leadership frame or frames. Conflict resolution and negotiation training can help leaders address resistance effectively without escalating tensions. Change management strategies can prepare leaders to anticipate and overcome common barriers to change. Lastly, faculty engagement techniques can help leaders understand the motivational factors that drive faculty participation in change initiatives like the one in this study. Using Ivancevich et al.'s (2014) impediments and limiting conditions to build scenarios in which leaders must deploy Bolman and Deal's (2017) frames strategically could serve as very effective training content.
By prioritizing integrated leadership strategies, institutions can increase their capacity for continuous improvement, ensuring assessment practices lead to meaningful and sustainable institutional change.

Implication for Building a Culture of Assessment Through Collaboration and Inclusivity

Assessment leaders at GCC understood how complex SLO assessment can be when multiple levels of learning outcomes must be measured simultaneously. Having an organizing principle to which all learning outcomes flowed back was very important for leadership. In GCC's case, they relied on the institutional mission as their assessment lodestar. Connecting learning outcomes directly to the institutional mission fosters a sense of shared purpose and aligns assessment efforts with the values of the institution. It should be noted that the connection between institutional mission and institutional culture is complex. Institutional culture can be viewed as an iceberg floating in the ocean. There is a portion of the iceberg, or institutional culture, that can be seen above the water, which, in this case, would be the institution's stated mission. Institutions often publicize their mission and try to ensure all decisions are made in furtherance of that mission. However, there is also a larger portion of the iceberg under the water. Under the water is where the unstated portions of the institutional culture play out. Political battles and jockeying for power occur in this realm, and there may be times when some parties involved in these struggles work in direct opposition to the stated mission. Bolman and Deal's (2017) Four Frames can be highly effective during times of infighting, as evidenced by GCC in Chapters 5 and 6. The fact that an institution's stated mission is only a small portion of the overall culture should not deter an institution from relying on the stated mission as the backbone of the culture.

When stakeholders—such as faculty, staff, and administrators—see how their work contributes to the broader institutional mission or goals, they are more likely to be invested in assessment initiatives. Research shows stakeholder buy-in is critical for the success of assessment practices, as it increases motivation and ensures active participation across departments (Banta & Blaich, 2011). When assessment is perceived as meaningful and mission-driven, faculty are more inclined to collaborate and contribute to a culture of continuous improvement. Moreover, connecting learning outcomes to the institutional mission encourages cross-departmental collaboration by emphasizing shared goals. This approach brings diverse groups together, fostering a more inclusive environment where ideas can be exchanged. According to Kezar (2014), institutions that promote collaboration and alignment with the mission experience more cohesive and effective assessment processes. By framing assessment as a collective effort to advance the institution's mission, departments are more likely to work together and share resources to improve student learning outcomes. Institutions looking to implement SLO assessment changes, like GCC, should start this process by connecting all learning outcomes to be assessed directly to the institutional mission. Research by Kuh, Jankowski, Ikenberry, and Kinzie (2014) highlights that institutions with a clear, mission-aligned assessment strategy are more successful in fostering an inclusive assessment culture.
This initial step brings in stakeholders from all areas of the institution, fostering collaboration and inclusivity, and gets all parties to a shared understanding of why assessing SLOs is important to the institution. Drawing this connection between SLO assessment and mission can help to build the ever-elusive culture of assessment, which can help institutions sustain their assessment practices over time. One of the most significant challenges pertaining to SLO assessment is leadership turnover, which can disrupt assessment initiatives. Tying assessment to the mission is one way of building sustainable assessment practices. Institutions can prioritize building assessment cultures that outlast individual leaders by integrating assessment expectations into strategic plans, accreditation frameworks, and faculty governance structures (Kinzie, 2015). When institutions commit to these principles, assessment moves beyond compliance-driven exercises and becomes a transformative tool for institutional improvement.

Implications for Future Research

Because scant evidence exists on how institutions get to the point where they are using SLO data to improve, this study acts merely as a starting point from which future researchers must examine the topic further. Exploratory studies like this one often come with many limitations, which I discussed in Chapter 3, and I believe the future research discussed below highlights and addresses some of those limitations. Specifically, I propose three avenues of future inquiry: comparative studies of successful and unsuccessful institutions, longitudinal studies to understand the long-term sustainability of SLO assessment initiatives, and studies of student perspectives and their influence on SLO assessment.

Comparing Successful and Unsuccessful Institutions

Understanding how and why GCC was able to successfully implement SLO assessment changes, which is what this study attempted to do, is an important first step. Some institutions, like GCC, successfully use SLO data to drive meaningful improvements in teaching and learning, while others face persistent challenges, including resistance from faculty, a lack of alignment with institutional goals, and limited capacity to analyze and act on assessment findings (Ewell, 2011). Understanding how and why other institutions were unsuccessful in their attempts at implementing similar changes could prove to be just as valuable as understanding the successes. Future research could examine whether the use of the contextual and theoretical frameworks put forth in this study plays a significant role in these differences. My study posits that Bolman and Deal's (2017) Four Frames Model provides an effective lens for understanding how structural, human resource, political, and symbolic factors interact to influence institutional success in implementing change. Institutions that effectively use SLO data may be those that approach the process adaptively, leveraging these frames to address organizational complexity and foster collaboration and inclusivity. Similarly, the theoretical framework in this study, Ivancevich et al.'s (2014) model of organizational change, provides insights into how leadership, communication, and organizational culture shape the effectiveness of change initiatives. A working hypothesis could be that successful institutions are—knowingly or unknowingly—utilizing these frameworks to implement their SLO assessment change initiatives, whereas their unsuccessful counterparts are not.
This proposed research is important because it would further illuminate potential causal factors driving both success and failure in implementing the use of SLO data. While much has been written about the challenges of implementing SLO assessment, fewer studies focus on comparative analysis to identify factors that lead to success. Additionally, linking these outcomes to specific frameworks, as I did in this study, could provide a more solid theoretical foundation for understanding the mechanisms of success. Institutions could use these insights to design more effective strategies for assessment implementation.

Long-Term Sustainability of SLO Assessment Initiatives

The large-scale practice of measuring SLOs is, relatively speaking, new in higher education. Using SLO data to drive institutional improvement, which is what this study focuses on, is an even newer concept. As the scholarship around implementing the use of SLO data to inform decisions continues to grow, we must simultaneously grow our understanding of how institutions can effectively sustain these practices. Institutions often struggle to maintain momentum in assessment practices due to factors such as leadership turnover, shifting priorities, resource constraints, and changing expectations from stakeholders (Ewell, 2011). Future research on the long-term sustainability of SLO assessment practices could build upon the findings from this study by examining how initial successes translate into lasting practices. A working hypothesis could be that institutions that show evidence of utilizing Bolman and Deal's (2017) and Ivancevich et al.'s (2014) models to implement their SLO changes are more likely to sustain these practices over time than institutions that do not.

One way to develop an understanding of the long-term success of SLO assessment initiatives could be to conduct a longitudinal analysis comparing institutions that show evidence of using the theoretical and contextual frameworks to implement their SLO assessment practices and those that do not. This research could not only lend support to the idea that Bolman and Deal's (2017) and Ivancevich et al.'s (2014) models are effective tools for the implementation of SLO assessment changes, but could also examine these models' ability to create institutional systems that are resilient to future changes or challenges. For example, institutions that embed SLO assessment practices into the culture, align them with strategic planning, and institutionalize adaptability in leadership development may be more successful in the long term. On the other hand, institutions that view assessment as a compliance task foisted upon them may see issues in sustaining these practices, especially when there are leadership changes or other competing priorities.

Understanding Student Perspectives and their Influences on SLO Assessment

Of utmost importance, yet frequently overlooked, student perceptions of SLO assessment present a potentially valuable avenue for future research. Understanding how students perceive the purpose and value of SLO assessment, and how these perceptions affect their engagement with SLO activities, is crucial to the success of any SLO assessment change initiative. A working hypothesis for this research is that students who perceive SLO assessment as aligned with their educational and professional goals are more likely to engage meaningfully in these activities compared with students who do not see alignment with their goals or do not understand SLO assessment.
The idea that students are more likely to engage with educational activities if they have a positive perception of the activity is not new (Kuh, 2008). If students understand the various levels of SLOs they are expected to achieve and see them as connected to their academic and professional goals, they are more likely to engage in learning activities and assessments aligned with these SLOs. On the other hand, if they see SLOs and assessments as disconnected from their goals, they will be less likely to engage meaningfully. Another reason this topic is important to understand is that higher education, as a whole, places a high value on equity and inclusion. Understanding students' current perspectives of SLO assessment and its connection to their goals is the first step toward equity and inclusion in the SLO assessment process, but that understanding can only be obtained by treating students as stakeholders in the assessment process.

I believe institutions that view students as stakeholders and take the time to understand their perceptions of the institutional SLOs and assessment practices will see one of three outcomes. First, institutions may find their SLOs and assessment practices are aligned with student goals, and students understand this alignment. Having an understood alignment between SLOs and student goals would be the best scenario. Second, institutions may find their SLOs and assessment practices are not aligned with student goals and, consequently, there would be no understanding of alignment amongst students. Having no alignment between SLOs and student goals would be the worst scenario, as the institution would need to radically re-evaluate its mission and purpose. Lastly, institutions may find their SLOs and assessment practices are aligned with student goals, but students do not fully understand or value this alignment. Although such a gap in understanding is not ideal, it is a common challenge for many institutions. Many institutions struggle with developing an understanding amongst faculty and staff of how and why they need to measure and use SLO data (Elliott, 2015). Expecting students to understand how SLOs connect to their educational and professional goals when faculty and staff often do not have a clear understanding is what makes me believe the last scenario discussed above is the most likely for many HEIs. Institutions will only know which of the three scenarios they fall into if they include students as stakeholders in the SLO assessment process.

Conclusion

Effective assessment of SLOs and the subsequent use of the data to drive institutional improvement has been a major challenge for the few institutions that have been persistent enough to attempt it. Those that have attempted it have had varying degrees of success, yet no institution has perfected this monumental task. The findings of this study underscore the great potential and critical need for effectively using SLO data to drive institutional improvement within community colleges. Through an in-depth case study of a single community college's (GCC) success, this research highlighted the multifaceted challenges and opportunities associated with implementing and sustaining data-driven SLO assessment processes. The data from this case study revealed that the application of Bolman and Deal's (2017) Four Frames Model and Ivancevich et al.'s (2014) Model of Organizational Change was central to GCC's success.
To implement and sustain this change initiative, GCC focused on three overarching issues: managing conflict and resistance, the role of integrated leadership in driving change, and building a culture of assessment through collaboration and inclusivity. These three overarching issues aligned closely with the theoretical framework for this study, Ivancevich et al.'s (2014) Model of Organizational Change. GCC developed effective strategies to address the three overarching issues, all of which showed alignment with the contextual framework for this study, Bolman and Deal's (2017) Four Frames Model.

The findings of this study show the need to expect, and be proactive about, conflict and resistance to change. The use of faculty coaches as peer support and the symbolic alignment of all levels of learning outcomes back to the institutional mission were particularly helpful in reducing resistance and building trust among stakeholders. These strategies were not only helpful in the initial implementation of the assessment changes, but seem to be contributing to the longer-term sustainability of GCC's new assessment practices. The findings also underscored the importance of assessment leaders' ability to adapt and integrate their leadership methods based on Bolman and Deal's (2017) four frames. In general, administrators tried to rely on relationships and formal structures to implement the change initiative; however, these tactics were quickly adapted when necessary. The willingness and ability to adapt leadership methods proved to be particularly helpful. Lastly, the findings of this study underscore the need for community colleges to adopt systematic, inclusive approaches that integrate assessment into broader organizational structures and processes. By integrating assessment into the broader organizational structures, institutions can help to build a culture of assessment where collecting and using SLO data to improve becomes second nature instead of just a compliance activity.

As the field of higher education continues to grapple with questions of perceived value, accountability, and equity, this study contributes to a growing body of knowledge on how SLO assessment can be effectively leveraged to address these challenges. However, this study also highlights the need for future research, particularly on understanding why some institutions succeed while others fail to implement SLO assessment changes, the long-term impacts of these SLO assessment changes, and the perspectives of the most important stakeholders: students. By growing our understanding in these areas, future research can provide even more impactful suggestions for community colleges and other HEIs.

REFERENCES

Alfred, R., Ewell, P., Hudgins, J., & McClenney, K. (1999). Core indicators of effectiveness for community colleges: Toward high performance (Report No. 141). American Association of Community Colleges. https://files.eric.ed.gov/fulltext/ED426749.pdf
Argyris, C. (2000). Flawed advice and the management trap: How managers can know when they're getting good advice and when they're not. Oxford University Press.
Association of American Colleges & Universities. (2016). Trends in learning outcomes assessment: Key findings from a survey among administrators at AAC&U member institutions. https://www.aacu.org/sites/default/files/files/LEAP/2015_Survey_Report3.pdf
Austin, A. E. (2011). Promoting evidence-based change in undergraduate science education. Change: The Magazine of Higher Learning, 43(3), 30-38.
Average Community College Student Size. (n.d.). Retrieved from https://www.communitycollegereview.com/average-college-size-stats/national-data
Baime, D., & Baum, S. (2016). Community colleges: Multiple missions, diverse student bodies, and a range of policy solutions. Retrieved from Urban Institute website: https://www.urban.org/sites/default/files/alfresco/publication-pdfs/2000899-Community-Colleges-Multiple-Missions-Diverse-Student-Bodies-and-a-Range-of-Policy-Solutions.pdf
Baker, G. R., Jankowski, N. A., Provezis, S., & Kinzie, J. (2012). Using assessment results: Promising practices of institutions that do it well. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/UsingAssessmentResults.htm
Banta, T., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22-27.
Barrett, J. M. (2012). Writing assessment in the humanities: Culture and methodology. Journal of Assessment and Institutional Effectiveness, 2(2), 171-195.
Baxter, L. (1991). Content analysis. In B. Montgomery & S. Duck (Eds.), Studying interpersonal interaction (pp. 239-254). Guilford Press.
Beld, J. M. (2010). Engaging departments in assessing student learning: Overcoming common obstacles. Peer Review, 12(1), 6-9.
Birnbaum, R. (1988). How colleges work: The cybernetics of academic organization and leadership. Jossey-Bass.
Blaich, C. F., & Wise, K. S. (2011, January). From gathering to using assessment results: Lessons from the Wabash National Study (NILOA Occasional Paper No. 8). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/
Bloom, B. S. (1968). Learning for mastery. Instruction and curriculum. Regional education laboratory for the Carolinas and Virginia, topical papers and reprints, number 1. Evaluation Comment, 1(2), 1-12. Retrieved from https://files.eric.ed.gov/fulltext/ED053419.pdf
Bolman, L. G., & Deal, T. E. (2017). Reframing organizations: Artistry, choice, and leadership. Jossey-Bass.
Bolman, L. G., & Gallos, J. V. (2011). Reframing academic leadership. Jossey-Bass.
Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27-40.
Bradbury, B. L., Halbur, K. V., & Halbur, D. A. (2011). Authority and leadership via a multiple frames approach. In A. Johnston & G. Johnston (Eds.), Journal for the Philosophical Study of Education. AuthorHouse.
Cain, T. R. (2014). Assessment and academic freedom: In concert, not conflict (NILOA Occasional Paper No. 22). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from https://www.learningoutcomesassessment.org/documents/OP2211-17-14.pdf
Carey, K., & Schneider, M. (2010). Accountability in American higher education. Palgrave MacMillan.
Carnegie Classification. (n.d.). Greenville Community College. Retrieved from https://carnegieclassifications.acenet.edu/institution/greenville-community-college/
Chadi, A., Jeworrek, S., & Mertins, V. (2017, July 19). Meaningless work threatens job performance [web log]. Retrieved from http://blogs.lse.ac.uk/businessreview/2017/07/19/meaningless-work-threatens-job-performance/
Chaplot, P. (2010). Implementation and sustainability of learning assessment efforts: Facilitators and inhibitors. Journal of Applied Research in the Community College, 17(2), 33-41.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage Publishing.
Chen, Y., Shek, D. T. L., & Bu, F. (2011). Applications of interpretive and constructionist research methods in adolescent research: Philosophy, principles, and examples. International Journal of Adolescent Medical Health, 23(2), 129-139.
Cherwin, K. A. (2013, March 14). How to foster motivation in an academic workplace. Higher Ed Jobs. Retrieved from https://www.higheredjobs.com/articles/articleDisplay.cfm?ID=416
Christiane, E. A., & Leimeister, J. M. (2014). Creating shared understanding in heterogeneous work groups: Why it matters and how to achieve it. Journal of Management Information Systems, 31(1), 111-144.
Colson, T., Berg, B., Hunt, T., & Mitchell, Z. (2017). Simple, transparent, and less burdensome: Re-envisioning core assessment at a regional public university. Journal of Assessment and Institutional Effectiveness, 7(1), 92-114.
Community Colleges FAQs. (n.d.). Retrieved from https://ccrc.tc.columbia.edu/Community-College-FAQs.html
Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Sage.
Council for Higher Education Accreditation. (2019). Accreditation and student learning outcomes: Perspectives from accrediting organizations (CHEA Publication No. 20). Washington, DC: Council for Higher Education Accreditation.
Council of Chief State School Officers (CCSSO). (2012). Distinguishing formative assessment from other educational assessment labels. Retrieved from https://www.michigan.gov/documents/mde/CCSSO_Assessment__Labels_Paper_ada_601108_7.pdf
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Sage.
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed method research. Sage Publications.
Crowell, T., & Calamidas, E. (2015). Comprehensive five-year program assessment study. Journal of Assessment and Institutional Effectiveness, 5(1), 1-33.
Custer, S., King, E. M., Atinc, T. M., Read, L., & Sethi, T. (2018). Toward data-driven education systems: Insights into using information to measure results and manage change. Retrieved from https://www.brookings.edu/wp-content/uploads/2018/02/toward-data-driven-education-systems.pdf
Denzin, N. K. (1986). Interpretive biography. Sage Publications.
Dougherty, K. J., & Reddy, V. (2013). Performance funding for higher education: What are the mechanisms? What are the impacts? ASHE Higher Education Report, 39(2), 1-134.
Dougherty, K. J., & Townsend, B. K. (2006). Community college missions: A theoretical and historical perspective. New Directions for Community Colleges, 136, 5-13.
Driscoll, A., & Wood, S. (2007). Developing outcomes-based assessment for learning-centered education: A faculty introduction. Stylus Publishing.
Drisko, J. W., & Maschi, T. (2016). Content analysis. Oxford University Press.
Drury, R. L. (2003). Community colleges in America: A historical perspective. Inquiry, 8(1), 1-6.
Elliott, R. W. (2015). Faculty development: An essential strategy to promote critical thinking in students. Research & Teaching in Developmental Education, 32(1), 36–49. Retrieved from https://files.eric.ed.gov/fulltext/EJ1112487.pdf
Ewell, P. T. (2011). Accountability and institutional effectiveness in the community college. New Directions for Community Colleges, 2011(153), 23-36.
Ewell, P. T., & Jones, D. P. (2006). State-level accountability for higher education: On the edge of a transformation. New Directions for Higher Education, 135, 9-16.
Excellence in Assessment (EIA) Designation. (n.d.). Retrieved from https://www.learningoutcomesassessment.org/eia/#1564428314302-7a3fdaa8-a3b6
Fain, P. (2018, March 2). Demonstrating value. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2018/03/02/author-discusses-new-book-accountability-push-higher-education
Fairchild, E. (2003). Multiple roles of adult learners. New Directions for Student Services, 102, 11-16.
Flick, U. (2009). An introduction to qualitative research (4th ed.). Sage Publications.
Fullan, M. (2004). Leading in a culture of change. Jossey-Bass.
Gaff, J., & Meacham, J. (2006). Learning goals in mission statements: Implications for educational leadership. Liberal Education, 92(1), 6-13.
Geiger, R. L. (2011). The ten generations in American higher education. In P. G. Altbach, P. J. Gumport, & R. O. Berdahl (Eds.), American higher education in the twenty-first century (pp. 37-68). Johns Hopkins University Press.
Ginder, S. A., Kelly-Reid, J. E., & Mann, F. B. (2018, November). Postsecondary institutions and cost of attendance in 2017-18; degrees and other awards conferred, 2016-17; and 12-month enrollment, 2016-17: First look (provisional data) (NCES 2018-060rev). Retrieved from https://nces.ed.gov/pubs2018/2018060REV.pdf
Glesne, C. (2011). Becoming qualitative researchers: An introduction. Pearson.
Grupe, D. W., & Nitschke, J. B. (2013). Uncertainty and anticipation in anxiety: An integrated neurobiological and psychological perspective. Nature Reviews Neuroscience, 14(7), 488-501. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4276319/
Gustafsson, J. (2017). Single-case studies vs. multiple-case studies: A comparative study. Academy of Business, Engineering and Science, Halmstad University, Halmstad, Sweden. Retrieved from www.diva-portal.org/smash/get/diva2:1064378/FULLTEXT01.pdf
Hamill, S. B. (2015). Evaluating and redesigning a college assessment system to close the loop. Journal of Assessment and Institutional Effectiveness, 5(1), 34-57.
Harper, S. R., & Quaye, S. J. (2014). Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations. Routledge.
Head, R. B. (2015). Evidence-based management and community college decision-making. Journal of Applied Research in the Community College, 22(1), 25-33.
Hegji, A. (2017). An overview of accreditation of higher education in the United States (Congressional Research Service Rep. No. 7-5700). Retrieved from https://fas.org/sgp/crs/misc/R43826.pdf
Hernon, P., & Dugan, R. E. (2004). Outcomes assessment in higher education: Views and perspectives. Libraries Unlimited.
Herriott, R. E., & Firestone, W. A. (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher, 12(2), 14-19.
Herzberg, F., Mausner, B., & Snyderman, B. B. (1959). The motivation to work. Wiley.
HLC Policy. (n.d.). Retrieved from https://www.hlcommission.org/Policies/criteria-and-core-components.html
Holzweiss, P. C., Bustamante, R., & Fuller, M. B. (2016). Institutional cultures of assessment: A qualitative study of administrator perspectives. Journal of Assessment and Institutional Effectiveness, 6(1), 1-27.
Hutchings, P., Kinzie, J., & Kuh, G. D. (2015). Evidence of student learning: What counts and what matters for improvement. In G. D. Kuh (Ed.), Using evidence of student learning to improve higher education (pp. 27-50). Jossey-Bass.
Ikenberry, S. O., & Kuh, G. D. (2015). From compliance to ownership: Why and how colleges and universities assess student learning. In G. D. Kuh (Ed.), Using evidence of student learning to improve higher education (pp. 1-26). Jossey-Bass.
Imel, S., Kerka, S., & Wonacott, M. E. (2002). Qualitative research in adult, career, and career-technical education. Practitioner File. Columbus, OH: ERIC Clearinghouse on Adult, Career, and Vocational Education. Retrieved from http://www.eric.ed.gov/PDFS/ED472366.pdf
Ivancevich, J. M., Konopaske, R., & Matteson, M. T. (2014). Organizational behavior and management. McGraw-Hill.
Jankowski, N. (2011). Juniata College: Faculty-led assessment. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from https://www.learningoutcomesassessment.org/documents/OP2211-17-14.pdf
Jankowski, N., & Marshall, D. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Stylus Publishing.
Jaschik, S. (2018, October 9). Falling confidence in higher ed. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2018/10/09/gallup-survey-finds-falling-confidence-higher-education
Jensen, D. (2006). Metaphors as a bridge to understanding educational and social contexts. International Journal of Qualitative Methods, 5(1), 36-54.
Jones, J. M. (2018, October 9). Confidence in higher education down since 2015 [Gallup Polling Report]. Retrieved from https://news.gallup.com/opinion/gallup/242441/confidence-higher-education-down-2015.aspx
Jury, M., Smeding, A., Stephens, N., Nelson, J. E., Aelenei, C., & Darnon, C. (2017). The experience of low-SES students in higher education: Psychological barriers to success and interventions to reduce social-class inequality. Journal of Social Issues, 73(1), 23-41.
Kelchen, R. (2018). Higher education accountability. Johns Hopkins University Press.
Kezar, A. (2014). How colleges change: Understanding, leading, and enacting change. Routledge.
Kezar, A., & Eckel, P. D. (2002). Examining the institutional transformation process: The importance of sensemaking, interrelated strategies, and balance. Research in Higher Education, 43(3), 295-328.
Kinzie, J. (2010). Perspectives from campus leaders on the current state of student learning outcomes assessment: NILOA focus group summary 2009-2010. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from https://learningoutcomesassessment.org/documents/FocusGroupFinal.pdf
Kinzie, J., Hutchings, P., & Jankowski, N. A. (2015). Fostering greater use of assessment results: Principles for effective practice. In G. D. Kuh (Ed.), Using evidence of student learning to improve higher education (pp. 51-72). Jossey-Bass.
Kinzie, J., Jankowski, N., & Provezis, S. (2014). Do good assessment practices measure up to the principles of assessment? Progress, Trends, and Practices in Higher Education, 26(3), 1-5.
Korstjens, I., & Moser, A. (2018). Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing. European Journal of General Practice, 24(1), 120-124. Retrieved from https://www.tandfonline.com/doi/pdf/10.1080/13814788.2017.1375092
Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Sage Publishing.
Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Association of American Colleges and Universities.
Kuh, G. D., & Ikenberry, S. O. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from https://www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from https://www.learningoutcomeassessment.org/documents/2013%20Abridged%20Survey%20Report%20Final.pdf
Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. Jossey-Bass.
Lansing Community College Program Accreditation. (n.d.). Retrieved from https://www.lcc.edu/about/accreditation/program.html
Lederman, D. (2015, February 18). The new normal on completion. Inside Higher Ed.
Lederman, D. (2018, October 31). Conflicted views of technology: A survey of faculty attitudes. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/survey/conflicted-views-technology-survey-faculty-attitudes
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage Publications.
Ma, J., & Baum, S. (2016, April). Trends in community colleges: Enrollment prices, student debt, and completion (College Board Research Brief). Retrieved from https://research.collegeboard.org/pdf/trends-community-colleges-research-brief.pdf
MacDonald, S. K., Williams, L. M., Lazowski, R. A., Horst, S. J., & Barron, K. E. (2014). Faculty attitudes toward general education assessment: A qualitative study about their motivation. Research & Practice in Assessment, 9, 74-90.
Manning, T. M. (2011). Institutional effectiveness as process and practice in the American community college. New Directions for Community Colleges, 153, 13-21.
Marchland, S., & Stoner, J. (2012). A brief history of accountability in higher education. Phi Kappa Phi Forum, 92(1), 16-18.
Marcus, J. (2023, April 4). Community colleges are reeling. 'The reckoning is here.' The Associated Press. https://apnews.com/article/community-college-enrollment-bb2e79222a4374f4869dc2e5359f2043
Markóczy, L. (2004). Multiple motives behind single acts of cooperation. The International Journal of Human Resource Management, 15(6), 1018-1039.
Martin, J. (2012). Symbols, sagas, rites, and rituals: An overview of organizational culture in libraries. College & Research Libraries News, 73(6). Retrieved from https://crln.acrl.org/index.php/crlnews/article/view/8779/9345
McCullough, C. A., & Jones, E. (2014). Creating a culture of faculty participation in assessment: Factors that promote and impede satisfaction. Journal of Assessment and Institutional Effectiveness, 4(1), 85-101.
McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1-24.
Merriam, S. B. (2002). Qualitative research in practice: Examples for discussion and analysis. Jossey-Bass.
Mikkelsen, M. F. (2018). Projects, success, and complexity. International Project Management Association Research Conference 2017. UTS ePRESS.
Miller, M. T. (2003). Responding to competition: A governance strategy for public higher education. Information Age Publishing.
Miller, R., & Leskes, A. (2005). Levels of assessment: From the student to the institution. Washington, DC: Association of American Colleges and Universities.
Morest, V. S., & Jenkins, D. (2007). Institutional research and the culture of evidence at community colleges (Community College Research Center Rep. No. 1). Retrieved from https://ccrc.tc.columbia.edu/media/k2/attachments/insitutional-research-culture-evidence.pdf
National Center for Education Statistics. (n.d.). Fast facts. Retrieved from https://nces.ed.gov/fastfacts/display.asp?id=84
National Institute for Learning Outcomes Assessment. (n.d.). Assessment glossary. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/NILOA-Glossary.pdf
National Student Clearinghouse Research Center. (2017, June 12). Persistence and retention: 2017. Retrieved from https://nscresearchcenter.org/snapshotreport28-first-year-persistence-and-retention/
Oltmann, S. M. (2016). Qualitative interviews: A methodological discussion of the interviewer and respondent contexts. Qualitative Social Research, 17(2). Retrieved from http://www.qualitative-research.net/index.php/fqs/article/view/2551/3998
Omisore, B. O., & Abiodun, A. R. (2014). Organizational conflicts: Causes, effects, and remedies. International Journal of Academic Research in Economics and Management Sciences, 3(6), 118-137. Retrieved from https://pdfs.semanticscholar.org/dc47/343acf285d3c6e7af9d5bb935981ac251c02.pdf
Patton, M. Q. (2002). Qualitative research and evaluation methods. Sage Publications.
Petrides, L. A., & McClelland, S. I. (2007). Decentralizing data through decision-support systems: The impact of increased access to data on decision making. Journal of Applied Research in the Community College, 15(1), 7-15.
Powell, C. (2013). Accreditation, assessment, and compliance: Addressing the cyclical challenges of public confidence in American education. Journal of Assessment and Institutional Effectiveness, 3(1), 54-74.
Prasad, P. Crafting qualitative research: Working in the post-positivist traditions. M. E. Sharpe.
Ramaley, J. (2012, March 2). Do college-completion rates really measure quality? The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/Do-College-Completion-Rates/131029
Regoniel, P. (2012, November 5). What is the difference between the theoretical and the conceptual framework? Retrieved from https://fitness-gear-equipment.knoji.com/what-is-the-difference-between-the-theoretical-framework-and-the-conceptual-framework/
Rice, D. R. (1991). What every faculty development professional needs to know about higher education. To Improve the Academy, 226, 89-96.
Rhodes, T. L. (2015). Assessment: Growing up is a many-splendored thing. Journal of Assessment and Institutional Effectiveness, 5(2), 101-116.
Rosaen, S. F., Hayes, R. A., Paroske, M., & De La Mare, D. (2013). A dialogic approach to implementing general education assessment at the department level. Journal of Assessment and Institutional Effectiveness, 3(1), 33-53.
Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Sage Publications.
Secolsky, C., & Denison, D. B. (2012). Improving institutional decision making through educational measurement, assessment, and evaluation. In C. Secolsky & D. B. Denison (Eds.), Handbook on measurement, assessment, and evaluation in higher education. Routledge.
Scott, J. (1990). A matter of record. Polity Press.
Serban, A. M. (2004). Assessment of student learning outcomes at the institutional level. New Directions for Community Colleges, 126, 17–27.
Sim, J., & Sharp, K. (1998). A critical appraisal of the role of triangulation in nursing research. International Journal of Nursing Studies, 35, 23-31.
Sipe, L., & Constable, S. (1996). A chart of four contemporary research paradigms: Metaphors for the modes of inquiry. The Journal of Culture and Education, 1, 153-63.
Somerville, J. (2008). Critical factors affecting the assessment of student learning outcomes: A Delphi study of the opinions of community college personnel. Journal of Applied Research in the Community College, 15(2), 109-119.
Stake, R. E. (1995). The art of case study research. Sage Publications.
Stassen, M. L. A. (2012). Accountable for what? Journal of Assessment and Institutional Effectiveness, 2(2), 137-142.
Suskie, L. (2010, October 26). Why are we assessing? Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2010/10/26/why-are-we-assessing
Suskie, L. (2018). Assessing student learning: A common sense guide. Jossey-Bass.
The Academic Senate for California Community Colleges. (2010). Guiding principles for SLO assessment. Retrieved from https://www.asccc.org/sites/default/files/publications/SLO-paper-Fall2010_0.pdf
The State Higher Education System. (n.d.). Seamless Transfer Policy FAQs. Retrieved from https://system.shes.edu/academic-affairs/student-mobility/seamless-transfer-policy-faqs/#:~:text=Resolved%20clause%20%23%207%20of%20the,programs%20shall%20require%20no%20more
Thomas, R. M. (2003). Blending qualitative and quantitative: Research methods in theses and dissertations. Sage Publishing.
United States Department of Education. (n.d.). College scorecard. Retrieved from https://collegescorecard.ed.gov/school/?191199
United States Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education (Report No. ED-06-C0-0013). U.S. Department of Education.
United States Department of Education. (2019). Accreditation handbook (34-CFR-602). U.S. G.P.O.
van Esch, P., & van Esch, L. J. (2013). Justification of a qualitative methodology to investigate the emerging concept: The dimensions of religion as underpinning constructs for mass media social marketing campaigns. Journal of Business Theory and Practice, 1(2), 214-243.
Vannoni, M. (2014). What are case studies good for? Nesting comparative case study research into the Lakatosian research program. Cross-Cultural Research, 49(4), 331-357.
Walser, T. M. (2015). Evaluability assessment in higher education: Supporting continuous improvement, accountability, and a culture of assessment. Journal of Assessment and Institutional Effectiveness, 5(1), 58-77.
Weiner, W. F. (2009). Establishing a culture of assessment: Fifteen elements of success–how many does your institution have? Fighting Back. Retrieved from https://www.aaup.org/article/establishing-culture-assessment
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37, 3-14.
Willis, J. W. (2007). Foundations of qualitative research: Interpretive and critical approaches. Sage Publishing.
Yin, R. K. (2009). Case study research: Design and methods. Sage Publishing.
Zubrow, J. (2012). Case study: Engaging adjunct faculty in program assessment. Journal of Assessment and Institutional Effectiveness, 2(1), 77-102.
Zumeta, W. M. (2011). What does it mean to be accountable? Dimensions and implications of higher education's public accountability. The Review of Higher Education, 35(1), 131-148.

APPENDIX A: INTERVIEW PROTOCOL

Introduction: The researcher will greet the participant and introduce himself. After ensuring the comfort of the participant, the researcher will provide a brief personal background (MSU HALE doctoral student working to complete his dissertation, while also working at a community college as an instructor and assessment consultant).

Purpose: The researcher will explain that the purpose of the study is to understand how a particular community college has implemented the use of student learning outcome assessment data to improve institutional quality.

Procedures: The researcher will explain that open-ended questions will be asked of individual participants to understand their personal experiences. Interviews will last approximately one hour but could possibly go longer depending on how much information the participant chooses to share. Audio recordings of interviews will be transcribed upon the completion of the interviews, and the text documents will be shared with the individual participants to check for accuracy. Once the data are collected, transcribed, and member-checked, all names and identifying information will be removed to protect the identity of the research participants.

Consent: Participants will be urged only to share information they are comfortable sharing. Also, participants will be informed that their identity will be protected by using pseudonyms and that they will have the opportunity to choose their own pseudonym. If they choose not to provide a pseudonym, one will be selected for them by the researcher. Finally, participants will be reminded that they may end the interview or their overall participation in the study at any time.

Dialogue: Preliminary interview questions (research instrument) are as follows:

-Tell me about your tenure at the institution as it pertains to the issue of student learning outcomes assessment. Specifically, what is your current position? What position were you in during the implementation of using student learning outcomes assessment data to improve institutional quality?
-Tell me the story of how your institution came to use SLO assessment data to improve institutional quality.
-What was the driving force that got this initiative started (internal/external motivators)?
-Who were the initial leaders or agents for change in this initiative?
-What did the decision-making process look like during this initiative? Was the decision-making process the same for all issues that arose (formal vs. informal)?
-Often, a decision is made to implement a given initiative followed by the actual process of change. Were the individuals making decisions and the individuals involved in the change process the same people? If not, could you tell me why there were different groups?
-What was the process for developing this assessment initiative (using SLO data to improve institutional quality)?
-Who was involved and how were they selected?
-How were ideas put forth and solutions determined?
-What/who were barriers to this initiative (i.e., systems, policies, people)?
-Internal (i.e., institutional barriers, pushback from employees)?
-External (i.e., stakeholders outside of the institution)?
-How were these barriers dealt with?
-What/who were facilitators to this initiative (i.e., systems, policies, people)?
-Internal (i.e., institutional systems that helped the process, positive culture, employees who were champions of change)?
-External (i.e., stakeholders outside of the institution)?
-How were these facilitators leveraged or called into action?
-How was the scope and timing of this initiative determined?
-How was the timeline for this project developed?
-Were there any key actions taken that occurred at important moments in time?
-How was the breadth and depth of this initiative determined?
-When and how was "success" of this assessment initiative determined?
-Who played a major role in setting metrics for success?
-Were there any disagreements or negotiations over the criteria for success?
-What do you feel were the most important factors in your institution's success with this particular assessment initiative (e.g., individuals, groups, institutional structures, events, etc.)?

APPENDIX B: INVITATION TO PARTICIPATE

Hello,

My name is Mathew Devereaux, and I am a doctoral student in the Higher, Adult, and Lifelong Education program at Michigan State University. Under the supervision of my advisor, Dr. Matthew Wawrzynski, I am engaging in a research study to gain a better understanding of how community colleges use student learning outcome data to drive institutional improvement.

In order to carry out this study, I am looking to interview members of faculty and administration who played a part in the creation of your assessment process, particularly how your institution began using student learning outcome data to inform institutional improvement. Each interview will include—but is not limited to—several open-ended questions about participant experiences around the development of your institutional assessment plan, again, with particular emphasis on the use of student learning outcome data in informing institutional improvement. These interviews will be recorded; however, I will remove all personal information of interviewees to protect their privacy. Upon completion of the interviews, I will analyze responses to discover emerging themes. During this step, other individuals who are assisting me in the research process will see the written responses of interviewees; however, these responses will be completely anonymous. Please note that while several precautions will be taken to keep all identifying information private, certain details shared by participants could potentially make their identities known to others reading this dissertation.

If you are a faculty member or an administrator who played a role in creating the policies and procedures governing the collection and use of student learning outcome data, and you would like to participate in an interview—approximately 1 hour—please email me by [date] at devere27@msu.edu. If you have any questions about this project, or are interested but need more information, please feel free to reach out as well.

Sincerely,
Mathew Devereaux
devere27@msu.edu

APPENDIX C: CONSENT FORM

Using Student Learning Outcome Data at Community Colleges: Understanding the How
Consent Letter

Dear Participant:

This is an invitation to participate in an interview as part of a study seeking to understand how community colleges use student learning outcome data to improve institutional effectiveness.
By gaining input from you and others at your institution about your personal experiences creating and implementing the current assessment policies and procedures, we hope to expand our understanding of how community colleges engage in the process of creating assessment plans. The current study, entitled Using Student Learning Outcome Data at Community Colleges: Understanding the How, is authored by Mathew Devereaux, under the supervision of Dr. Matthew Wawrzynski.

The interview, should you choose to participate, will last approximately one hour. This time allotment is flexible, should the time required for your responses exceed one hour. Your involvement in this study is voluntary; therefore, you can choose not to participate or to answer some but not all of the questions. All unique identifying information you provide will be removed from the data set when responses are analyzed. A few objective third-party individuals who are advising me on this project will see your responses, but these individuals are required to keep your information confidential.

It is my hope that individuals who participate in this study will experience positive effects from the self-reflection asked of them, as well as pride in knowing that other institutions may benefit from their experience. The final version of this study will not contain identifying information of any participant. However, it is possible that certain responses by participants may unintentionally reveal their identity to others who read this. If you choose not to participate or not to respond to certain questions, there will be no penalty, and it will not affect your status at your institution in any way. At any point during this study, you may decide to withdraw from participation with no penalty, and your privacy will continue to be protected.

If you have any questions or concerns about this study, please contact Dr. Matthew Wawrzynski, Associate Professor in Educational Administration, 419A Erickson Hall, Michigan State University, by phone at (517) 355-6617, or by email at mwawrzyn@msu.edu. If you have any additional questions or concerns regarding your rights as a study participant, or are not satisfied with any aspect of this study, you may contact—anonymously, if you prefer—Kristen Burt, Director, Human Research Protection Program, by phone: (517) 884-6020, email: burtkris@ora.msu.edu, or by post mail: 4000 Collins Rd. Suite 123, Lansing, MI 48910.

Thank you and I look forward to your participation.

I agree to participate in this study. In addition, by signing below I agree to allow my responses to be audio recorded for research purposes of this study.

Signature__________________________ Date________________________________

Name (Printed)__________________________