EXPLORING ENACTED MENTAL MODELS OF LEARNING OUTCOMES ASSESSMENT IN HIGHER EDUCATION

By

William Frank Heinrich

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Higher, Adult, and Lifelong Education - Doctor of Philosophy

2015

ABSTRACT

EXPLORING ENACTED MENTAL MODELS OF LEARNING OUTCOMES ASSESSMENT IN HIGHER EDUCATION

By

William Frank Heinrich

This study explored thinking and activity, or enacted mental models, of faculty and staff who have some experience with learning outcomes assessment in higher education. Interviews and concept maps were used to surface various influences, descriptions of actions, and connections between actions for 12 participants occupying either staff or faculty roles. All participants were known to have engaged in learning outcomes assessment. Important outcomes include descriptions and categorization of influences labeled disciplinary training and socialization, environmental and cultural influences, and incentives and accountability. Also found were motivating factors for conducting assessment and common assessment mindset patterns that influenced behavior. By supporting connected mindsets in assessment, various behavior changes can be encouraged to help identify the value of institutional learning outcomes to multiple stakeholders. Findings point assessment leaders toward adjustments to assessment-related training and professional development to better incorporate or consider individual mental models about their own influences of training, their current environment, and relationships to accountability. This study contributes to literature and practice by describing discrete influences on assessment and how influences work together in various formats to result in various assessment mindsets across levels of an institution.

This work is licensed under the Creative Commons Attribution 4.0 International License.

WILLIAM FRANK HEINRICH
2015

To my partner, Heather, who moved across the world with me, keeps me going in the hard times, shares in my joy, shines a light at my feet, loves me when it hurts, is an inspiring mother to our children, grows lovely flowers, and bakes amazing rosemary and olive cookies. Thank you for your commitment, dedication, and sacrifice to help me and us succeed. I love you. This is for you. And to my children Adelaide Victoria Ricks Heinrich and Maeve Malana Ricks Heinrich, thank you for your patience with my absences, and your big hugs and tender kisses whenever I return. I pray this is an example of a way to live up to your potential. Do what you will, but do it as best as you can, my loves.

ACKNOWLEDGEMENTS

I thank my Dissertation Committee for their guidance and for sharing the benefit of their experience, intuition, and debate. I wouldn't want it any other way. Thanks to Marilyn Amey, who showed me the possibilities back in 2001 and always opened doors when I needed them; who never doubted me and gave me all the rope I needed to do things the hard way first. These experiences and your support helped me realize a great number of intellectual, pragmatic, and otherwise joyful outcomes. And thanks to James Fairweather for making my arguments better; Matt Wawrzynski for pushing me to be a clearer thinker; and Doug Estry for giving me hope that this project matters.
Thanks to the NASPA Region IV-E Research and Assessment Grant Committee for the dissertation research funding award at just the right time to help me push this project along. Thanks to many colleagues across institutions and my close supporters throughout Student Affairs and higher education: Brian Arao, Alex Belisario, Stan Dura, Pat Enos, Samara Foster, Tom Fritz, Steve Geiger, Kelly High-McCord, Joe Johnson, Kim Lau, Marielos Ortiz-McGuire, Joy Pehlke, Kris Renn, Pam Shefman, Susan Welte, and Sarah Woodside. Thanks to my mentors along the way who showed me new ways to be curious, think, act, and practice with professionalism. Each of you helped me find much joy and success: John Dirkx, Christine Geith, Lissy Goralnik, Geoff Habron, Jim Lucas, Reitumetse Mabokela, Reggie Noto, Jeno Rivera, Laurie Thorp, and Steve Weiland. I appreciate you taking the meeting.

Thanks specifically to Karla Bellingar and Kathy Dimoff for making this journey easier by lowering barriers and making the hurdles as pleasant as possible. From the rushed signature to the signed check, I really could not have done this without your support. On top of it, thanks for always making time to ask about my family and kids, keeping me grounded in what's important.

Thanks to my classmates, especially John Bonnell, Erin Carter, and Davina Potts (together we are the Squirrels). I am inspired by each of your examples, your compassion, your expertise, your grace, and your tenacity in these pursuits. Let's get nuts soon.

Thanks to my siblings and their partners, Elyse Heinrich, Ed and Sarah Heinrich, Meghan and Rob Mullin, Laurie and Phil Alsot, Katy and Scott Winter, Jeanne and Jim Falkiner. You have been gracious listeners to some badly told stories and yet you have encouraged me with your examples of family and love. In so many ways, you are my educators.

Thanks to my parents, Robert and Vivian Heinrich, who showed me Christ's love and a solid work ethic; how to do good, and how to live up to my potential. I only hope I can pay it forward.

And thanks to the members of my extended community: fellow parents of small (and older) children, drinking dads, bike riding, surfing, sea-river-surf kayaking, and Nordic ski enthusiasts, craft-beer bums, and barbecue buddies. Your kind words, welcoming attitudes, and willingness to distract me are much appreciated. Live by these words, my friends: "I arise in the morning torn between a desire to save the world and a desire to enjoy the world. This makes it hard to plan the day." - E. B. White

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Chapter 1: Introduction to the Dissertation
    Research Questions
    Definition of Terms
    Conceptual Frame
    Significance of the Study
    Overview of the Dissertation
Chapter 2: Review of Relevant Literature
    Learning Outcomes
        Instruction, department, and institution-level learning outcomes
        Co-curricular learning outcomes
    Assessment in Higher Education
        Instruction, department, and institution-level assessment
        Accreditation-level accountability
        Co-curricular learning assessment
    Summary of Learning Outcomes and Assessments in Higher Education
    Systems and Mental Models in Higher Education
        Overview of systems thinking
        Mental models
    Influence of Mental Models on Systems Components
        Trust
        Shared meaning
        Organizational learning
    Summary of the Literature
Chapter 3: Methodology and Methods
    Constructivist Paradigm
    Qualitative Research Design
        Sampling
            Research site
            Participant selection
        Data Collection
            Interviews
            Documents
            Participant safeguards
        Data Analysis and Reporting
        Data Trustworthiness and Credibility
        Limitations
Chapter 4: Findings
    Participant Profiles
    Assessment Influences
        Influences of training and disciplinary socialization
        Environmental and cultural socialization
        Incentives and accountability
    Action Group Findings
        Category matrix
    Participant and Action Group Descriptions
        Isolated action group
            Push
            Path
            Pull
        Limited action group
            Push
            Path
            Pull
        Connected action group
            Push
            Path
            Pull
    Individual Assessment Motivators
        Greater good motivations
        Institutional and program influences on assessment
        Disciplinary accreditation and advisory boards
        Shared assessment workload
    Chapter Summary
Chapter 5: Discussion and Implications
    Overview and Introduction
    Influences on Assessment
    Action Groups
        Connected assessment
        Isolated assessment
        Limited assessment
    Individual Assessment Motivators
    Summary of Discussion
    Implications for Value Propositions
        Systems perspectives on assessment
    Implications for Individuals
        Incentivizing assessment
        Identifying shared motivators
            Greater good
            Links to program/division assessment
        Leadership characteristics
    Future Research Implications
        Institutions
        Relative boundaries
    Practice Implications
        Training
        Coordinating resources and feedback
    Conclusion
APPENDICES
    Appendix A: Interview Protocol
    Appendix B: Research Participant Information and Consent Form
REFERENCES

LIST OF TABLES

Table 1  Two Paradigms of Assessment
Table 2  3x3 matrix of actions and influences on assessment
Table 3  Participant characteristics
Table 4  Influence types, descriptions, and categories
Table 5  Participant position, school/unit, and action group
Table 6  Action group x motivation type matrix
Table 7  Four Frames of Organizations

LIST OF FIGURES

Figure 1. Hypothetical ideal and non-ideal individual enacted mental models of learning outcomes assessment
Figure 2. Concept map for second interview
Figure 3. Multiple Perspectives of Enacted Mental Models

Chapter 1: Introduction to the Dissertation

Undergraduate student learning is at the heart of the higher education mission and has largely been a successful enterprise through increased access, innovative new education delivery models, and by contributing to a knowledge-driven society (Thelin, 2004). However, undergraduate education in the U.S. has come under scrutiny in recent years for a lack of attention to rapidly increasing costs, low degree completion rates, and a declining overall quality of graduates (Arum & Roksa, 2011). The external environment consistently weighs in on institutions, imploring them to improve undergraduate education for the benefit of social and economic prosperity (Association of American Universities, 2012; Duderstadt, 2009). Given the multiple demands on modern U.S. higher education, institutions must keep multiple purposes in sight (Ewell, 2002, 2008).
One of the ways that higher education institutions have worked to address the quality of undergraduate education is the use of articulated learning outcomes in efforts at course, program, and institution levels (American Association of Colleges & Universities (AAC&U), 2013a; AAC&U, 2013b). Learning outcomes in higher education institutions represent one of the primary products of colleges and universities (Ewell, 1997; Upcraft & Schuh, 1996). In short, learning outcomes serve to identify, bound, and render assessable learning activities stemming from for-credit courses and co-curricular participation (American College Personnel Association, Association of College and University Housing Officers-International, Association of College Unions-International, National Academic Advising Association, National Association for Campus Activities, National Association of Student Personnel Administrators, & National Intramural-Recreational Sports Association, 2006).

Once learning outcomes are clear, assessment of them allows multiple stakeholders to observe the kind and quality of learning present. Learning outcomes assessments are simultaneously used to motivate student learning and determine effectiveness using data and feedback (Bess & Dee, 2008; Bresciani, 2006).

To the potential learner, the learning outcomes describes what will be learnt, to the potential employer they describe what should have been learnt, to the quality agencies they provide a system for audit and for the funders (if there are still any left) they provide a means to account for how the money was spent. (Scott, 2011, p. 1)

The use of learning outcomes in higher education spans disciplinary and non-credit programmatic efforts, building on the idea that student learning takes place both within and outside of the credit-bearing educational experience (Astin, 1991; Barr & Tagg, 1995; Inkelas & Soldner, 2011; Spence, 2001). All kinds of instructional programs in higher education can contribute to relevant, integrated learning during college (Gardner, 2014; Hovland & Schneider, 2011). The fundamental mechanisms of assessment and feedback are similar at the individual student, program, and institution levels and help faculty and administrators leverage shared values and outcomes from for-credit and co-curricular learning contexts (Barr & Tagg, 1995). Across an institution, assessment is best used when it is simultaneously targeted to an individual learner and resonant with shared meaning among audiences (Ewell, 2009). Shared understanding among individuals at multiple levels with different goals can matter greatly to effective practice (Senge, 1994). However, unclear or uncoordinated assessment practices at the institutional level pose challenges to individual faculty and staff (instructors) at other levels and are problematic for institutional assessment efficacy. At the same time, assessment clarity relies on shared understanding of the value of learning artifacts. In the face of purposes that are not shared, leaders miss important opportunities to use assessment for improvement. Leaders often ask for more data rather than reorganize or dissect the data they have (Bess & Dee, 2008). It is also uncommon for institutional leaders to give specific feedback to departments or units after asking for data intended to support improvement. Feedback from data, if it occurs at all, typically does not benefit instructors.
Instructors may misinterpret as negative the longer timeframe needed to receive institutional feedback and miss opportunities to integrate feedback into planning cycles (Bresciani, Zelna, & Anderson, 2004). In the face of these complex dynamics, some faculty dismiss the need for assessment as non-academic or non-relevant (Maki, 2010). While some institutional efforts to align learning outcomes have produced good results, it is often the case that learning outcomes assessments take place without individuals fully understanding the ways their own experiences, training, and contexts influence their assessment practices, from what they do in instruction to the contributions they make to institution level assessment. From an organizational perspective, enacted mental models (thoughts and actions) are foundational to understanding varied assessment practice at the instructional, program, and institutional levels (Ewell, 2009; Love & Estanek, 2004; Maki, 2010; Schuh & Associates, 2009). Assessment efforts often reveal various priorities of individuals seeking to meet multiple institutional goals, programmatic purposes, and/or individual classroom assessment needs (Peterson, Einarson, Augustine, & Vaughan, 1999).

While assessment practices vary among individuals and institutions, the variance is both helpful and, at times, problematic. Variance is normal at individual instructor levels, and assessors are typically well suited to these assessment tasks because of unique instructional contexts at the program, course, and assignment levels (Maki, 2010; Mintzberg, 1979). The labels goals, objectives, and outcomes remind assessment users across an institution of the different ways assessment is understood. Assessment practitioners likely choose to define assessment based on the uniqueness of their respective influences, leading to disparate assumptions and approaches (Schuh & Associates, 2009). In part to address these differences, assessment scholars have developed 'crosswalk' language for coordination between departments/disciplines that may enable an institution to take effective improvement actions (Bresciani, Zelna, & Anderson, 2004; Maki, 2010). But crosswalk language is limited, and effective coordination and communication are needed to translate the many goals, vocabularies, and underlying influences involved in assessment across disciplines and departments (Bresciani, Zelna, & Anderson).

Assessment variation creates additional challenges at an institutional level. First, individual course assessments are foundational to program level assessments, but course outcomes in one area may not translate across other disciplines and programs (Bresciani, Zelna, & Anderson, 2004; Ewell, 2009; Schuh & Associates, 2009). Second, individual assessments are often not designed or vertically integrated to satisfy both instructional demands and external standards or metrics, but are often required by institutions to do so (Ewell; Peterson et al., 1999; Schuh & Gansamer-Topf, 2010). Despite these variations in assessment, accreditors rely on both course and institutional level data to accredit an institution or a program, adding to the need for clarity of aligned practices (Bresciani, Zelna, & Anderson, 2004).

This research focused on the understanding and practice of learning outcomes assessment, from which more sound assessment decisions and policies are or could be made.
Investigating individual knowledge, awareness, and action related to learning outcomes assessment, known as enacted mental models (Argyris & Schön, 1996; Senge, 1994), can provide a way to better engage faculty and administrators in outcomes assessment practices that have multiple purposes.

Research Questions

Little higher education research has focused on individual enacted mental models in the context of assessment, starting from an individual perspective and leading back to broader organizational understanding. A goal of this research was to identify and analyze the learning outcomes assessment understanding, practice, and knowledge of individuals who conduct assessment across higher education contexts. With this goal in mind, I conducted qualitative interviews with 12 individuals at a Midwestern, research-extensive university with a large (> 20,000) undergraduate population, all of whom had a role in learning outcomes assessment.

The following research questions guided this qualitative investigation:
1. How are learning goals and outcomes understood and assessed?
2. What influences the enacted mental models of individuals' practice of learning outcomes assessment?

Definition of Terms

For ease, I use particular terms and their derivatives in an organizationally hierarchical manner: "institution" is greater than "division," which is greater than "department," which is greater than "unit," which is greater than "program." Learning outcomes at many institutions can and do exist at all of these levels, and may align more or less coherently depending on the presence of ties among and between hierarchical relationships (American College Personnel Association et al., 2006). It is assumed here that at specific levels (unit, program) learning outcomes are more narrowly or closely defined. At broader levels (institution, division), learning outcomes are less narrowly defined and more inclusive. The word "assessment" is generally used to describe a process of decisions and actions that allow instructors and administrators to know the extent to which instructional efforts were effective (Palomba & Banta, 1999). Assessment in this study does not refer to a specific instrument or test unless described. Both instructor and administrator efforts to deliver planned learning experiences are included in this study. Learning experiences may be explicitly or implicitly mapped, connected, or aligned to discrete or broad learning outcomes and may be associated with any learning assessments. I use the term "instructor" to mean any person, regardless of position, who delivers, facilitates, teaches, creates, or curates educational outcomes, both for-credit and co-curricular, for a learner or group of learners.

Mental models are an individual's conscious or subconscious understanding or conceptualization of information and experience (Johnson-Laird, 1983) that drive action and are often (but not always) articulated in the form of metaphors (Morgan, 2006), frames of understanding (Bolman & Deal, 1997), or systems (Senge, 1994). Enacted mental models serve as somewhat flexible containers that hold together systems of knowledge and experiences (Heifetz, 1994). Enacted mental models help an individual negotiate and process information in the form of declarative knowledge (knowing what), structural knowledge (connections between ideas), and procedural knowledge (knowing how to do) (Holland, Holyoak, Nisbett, & Thagard, 1986).
In complex learning and assessment environments, mental models and actions are constantly reinforced or adjusted based on experience, available information, and the environment (Dill, 1982; Senge, 1994).

Conceptual Frame

This is an exploratory study of individuals' enacted mental models operating in a bounded system (a university). This study explores what people think about and what people actually do with learning outcomes assessment and is guided conceptually by Argyris' (1976) espoused theory and theory-in-use, which explore the relationship between what an individual says is important and how that person acts in an organization. Studying mental models from an organizational perspective allows the researcher to account for the various influences and relationships assigned by an individual to concepts, structures, and procedures of assessment. Studying mental models also helps a researcher understand the relationship between individual conceptions and actions that influence assessment practice. Institutional culture, disciplinary, and environmental influences are leading explanations for assessment practice (Dill, 1982; Hoffman & Bresciani, 2010; Kezar, 2001), but individual reasons for enactment patterns are less clearly understood. Analyzing enacted mental models explicitly should contribute to examining the how and why of assessment practice. Although some information can be inferred about the individual from assessment and institutional culture research on groups (Bergquist, 1992; Hoffman & Bresciani; Kezar & Eckel, 2002; Tierney, 1988), this study aims to address a gap in the assessment field by delving into the influences on individuals' assessment practice because individual mental models are foundational to knowing why an individual thinks and, perhaps, behaves in certain ways. The model here (Figure 1) represents an amalgamation of ways that learning outcomes are discussed in higher education literature (Ewell, 2009; Maki, 2010; Schuh & Associates, 2009) and also serves as a way of conceptualizing this study.

Figure 1. Hypothetical ideal and non-ideal individual enacted mental models of learning outcomes assessment

Sometimes there is connection between assessment outcomes and purposes at all levels and sometimes there is not. In Figure 1, path A represents an individual learning outcomes pathway not connected between assessment environments. Path B represents an individual learning pathway that is linked or aligned to outcomes and assessments, is easy to follow, is shaped by internal and external stakeholders, and is connected to student learning. The figure represents two hypothetical pathways of learning outcomes across a system of instructional, program, institution, and accreditation environments. Paths represent possible learning outcomes assessments and alignment. Small arrows represent influences toward shared meaning at each level.

This investigation of individual enacted mental models does not assume what role assessment plays in a person's formal position or voluntary roles in the organization, associated training, preparation, or what factors may influence an individual in terms of assessment. With predetermined values on position or practice set aside, we may begin to see the relationship between individual actions taken and individual mental models about assessment as communicated by participants (Johnson et al., 2006).
Significance of the Study

The primary issue that I explore in this study concerns individuals' enacted mental models of learning outcomes assessment in higher education. Insight into this topic provides a basis for understanding assessment action in context, an important process in the current U.S. system of higher education (Bok, 2006). The individual task of assessing learning outcomes, when done well, engages an entire community in a focus on the purpose of undergraduate education at any institution (Ewell, 1997). A community engaged in planning, implementation, assessment, and application of learning data has the potential to become a reflective learning organization by focusing on both knowledge products and assessment processes (Argyris, 1976; Senge, 1996). The current exploratory study of individual enacted mental models of learning outcomes assessment addresses a gap in research formed as a result of a renewed, institutional level focus on undergraduate learning outcomes (AAC&U, 2013a). This institutional focus on learning outcomes revealed how multiple influences on assessment practices and demands for data had unforeseen effects on an individual's ability to engage in effective assessment practices (Love & Estanek, 2004; Peterson et al., 1999; Schuh & Associates, 2009).

Major influences on assessment include federal, state, and regional accrediting influences (Peterson et al., 1999), institutional culture (Bergquist, 1992; Tierney, 1988), institutional and/or disciplinary socialization (Dill, 1982; Mintzberg, 1979), administrative training (Hoffman & Bresciani, 2010; Maki, 2010), and environmental influences (Inkelas & Soldner, 2011; Kuh, 2009). These influences interact to shape individual understandings and behaviors of assessment by creating demands for data that are usable at multiple levels for multiple purposes. These influences have not fully informed a response to situations where assessment is constrained by competing goals and where a lack of shared goals serves, in part, to limit the use of assessment at institutional levels.

Understanding the ways individuals make sense of learning outcomes assessments should help institutional leaders and faculty better understand the process by which individuals engage in these activities relative to demands across levels of the institution.

Organizationally and operationally, we have lost sight of the forest. If undergraduate education is to be enhanced, faculty members, joined by academic and student affairs administrators, must derive ways to deliver undergraduate education that are as comprehensive and integrated as the ways students actually learn. A whole new mindset is needed to capitalize on the inter-relatedness of the in- and out-of-class influences on student learning and the functional interconnectedness of academic and student affairs divisions. (Terenzini & Pascarella, 1994, p. 32)

Learning outcomes assessment is inherently contextual as assessment processes vary between and within disciplines/programs and institutions (Bergquist, 1992; Peterson et al., 1999; Tierney, 1988). An inquiry into enacted mental models is useful within one institution, perhaps providing insight for other institutions in similar circumstances.
While learning outcomes assessment can help institutions see the larger picture of institutional efforts at retention and student efforts at degree attainment (Gasser, 2006), it is necessary to capture context-specific assessment practices and understandings to help explain learning outcomes in a given environment. Coordinated assessment information can have great impact on the ability of institutions to assign resources and energy (Love & Estanek, 2004).

Overview of the Dissertation

The second chapter of the dissertation introduces literature relevant to learning outcomes assessments in higher education as well as enacted mental models in organizational development, specifically in higher education. The third chapter focuses on a discussion of the research design and methods used in this study. The fourth chapter presents data and interpretations. The fifth chapter discusses and explores implications of data and concludes the study. References and appendices follow the conclusion.

Chapter 2: Review of Relevant Literature

Enacted mental models, or the practice and understanding of learning outcomes and assessments, help to communicate the value of learning to various internal and external stakeholders. Enacted mental models also vary greatly across individuals, learning environments, and within institutions, and can create challenges in some cases when individuals act without awareness of their own mental models. This chapter includes relevant literature that describes organizational uses of learning outcomes and assessment, different levels of their implementation, and mental models in higher education. I explore learning outcomes trends and practices from the perspectives of for-credit and co-curricular learning, and across the instructional, program, institution, and accreditation levels. I also explore the uses of and influences on assessment practice, including major organizational factors such as socialization. Finally, I consider the nature of mental models in terms of a systems approach to organizations. A major goal of this literature review is to unpack various interpretations of how learning outcomes and assessment practices are used in a complex educational environment. In a systems approach to assessment, the individual can be considered an active agent in a system that relies on prior training, environments, and accountability (Kezar, 2004). Exploring mental models may provide insight into individual agency.

Learning Outcomes

In various forms, learning assessments and associated outcomes have been in use for decades (Shavelson, 2007). The past 35 years have seen an accreditation-rich environment increasingly use learning outcomes at the course, program, and major levels (Eaton, 2011) and, more recently, at the institutional level (Hovland & Schneider, 2011) to represent desired learning goals. While helpful to learners, learning outcomes and assessments are often written in language that is not well understood among all constituents and so contribute to disparate interpretations of data that have led to extra-institutional questions about the quality of undergraduate education (Ewell, 2009).
Individual institutions respond by positioning learning outcomes as one way to articulate the value of a learner's experience to a broader constituency (Hovland & Schneider). However, gaps in the ways that individual faculty or staff members in different departments articulate outcomes lead to unshared meaning across an institution.

A valuable attribute of learning outcomes comes from the potential for formulaic representation and the ability of multiple stakeholders to share meaning of learner engagement with those around them, including faculty, employers, and external communities. Learning outcomes statements include the following features highlighted by the National Institute for Learning Outcomes Assessment (NILOA):

Student learning outcomes statements clearly state the expected knowledge, skills, attitudes, competencies, and habits of mind that students are expected to acquire at an institution of higher education. Transparent student learning outcomes statements are:
• Specific to institutional level and/or program level
• Clearly expressed and understandable by multiple audiences
• Prominently posted at or linked to multiple places across the website
• Updated regularly to reflect current outcomes
• Receptive to feedback or comments on the quality and utility of the information provided. (NILOA, 2014, p. 1)

Significant attention has been given to learning outcomes by institutions, divisions, programs, and instructional leaders because of the potential value of the intended clarity and expectations for learners, faculty, and staff (Schuh & Associates, 2009).

Learning outcomes are used in both for-credit and co-curricular programs. Within for-credit programs, a focus of institutional accreditation efforts includes aligning instructional level learning outcomes [e.g., per-course learning outcomes] with parent program goals (Provezis, 2010). With the advent of institutional level outcomes and more people involved in establishing them, additional alignments are needed between course instruction, program, college or division, and institutional goals (Hovland & Schneider, 2011). The linkage or alignment of different disciplinary goals to common institutional goals can benefit from shared meaning, such as a common formula for writing learning outcomes to communicate credibility of outcomes across an institution. In co-curricular programs, staff members align learning outcomes from programmatic efforts directly to department goals and from department goals to institutional mission (Schuh & Associates, 2009). In for-credit and co-curricular learning cases, the need for coordinated institutional goals demands that faculty and staff share meaning of written outcomes at the program/college/division and institutional levels. It is no surprise that conflicts of purpose, efforts, and resources have occurred in academic and co-curricular learning outcomes alignment, due in part to communication and articulation challenges (Bresciani, Zelna, & Anderson, 2004; Maki, 2010), often indicating that constituents have not reached a shared understanding about the purpose or meaning of learning outcomes (Bolman & Deal, 1997).

Instruction, department, and institution-level learning outcomes. Literature about designing instruction to align with intended learning outcomes in higher education is present in nearly every sector of teaching and learning (Lattuca & Stark, 2009) and co-curricular scholarship
Varied positions on instructional design include a range of approaches to pedagogy that represent the diversity of academic and co -curricular instruction and programming (Bloom, 1994; see also Collins & Roberts, 2012; Fink, 2013; Goralnik, Millenbah, Nelson, & Thorp , 2012; Kolb & Kolb, 2005). Instruction level learning outcomes are widely used for communicating the location, nature, and depth of learning and for m the foundation for program and institutional level learning outcomes (Hoffman & Bresciani, 2010; Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Provezis, 2010). !Institution -level learning outcomes are the most recent addition to the milieu of alignment effor ts to describe the quality of education efforts (AAC&U, 2013b). The emergence of institution -level learning outcomes is influenced by institutional responses to accreditation agencies that are, in turn, responding to calls for transparent and direct measur es of student learning (e.g., U. S. Department of Education, 2006). For example, at least one national higher education organization, the Association of American Colleges and Universities (AAC&U), worked with institutions to create and validate a series of common learning outcomes and measurement rubrics to help describe and measure commonly valued learning outcomes such as critical thinking or leadership development (Rhodes, 2010). Institution -level learning outcomes tend to integrate both curricular and c o-curricular learning outcomes representing an effort to recognize co -curricular programming for its contribution to student learning (Kuh, 2007; Schuh & Associates, 2009). !Strategies for mapping or connecting program -to-institutional level outcomes are si milar to course -to-program alignment strategies (Hovland & Schneider, 2011). Articulating learning outcomes from individual learning in a course or co -curricular activity to institutional level values remains a great challenge for institutions. One method for articulation relies on signature 16 assignments for evidence of student learning (i.e., Western Association of Schools and Colleges (WASC), 2014). Signature assignments are course assignments reoccurring across learner cohorts that represent evidence of s tudent mastery of a program or institution -level goal, applying to both for -credit and co -curricular outcomes alignment (Collins & Roberts, 2012). Signature assignments are useful in course -to-program articulation, but lose value at broader levels. One cri tique of this approach suggests that signature assignments and signature test -questions, perhaps too discrete, are evidence of learned concepts or skills, but do little to provide evidence of integrated thinking desired by institutional level goals of coll eges and universities (Kahn, 2014). Alternatives to signature questions include assignments that display integrated and critical thinking such as portfolios, applied projects, and/or theses. Applied and/or integrated projects are more difficult to assess a t scale because of their intentional broad scope. !Academic and co -curricular leaders such as deans, chairs, and directors are largely tasked to coordinate and align instructional learning outcomes with program and institutional goals for students. While ch allenging, coordination is not out of the range of knowledge, skills, or abilities of higher education faculty and staff. Leaders in many professionally oriented academic programs have accomplished ongoing coordination and course -to-program alignments for many years (Eaton, 2011). 
Course-to-program alignment strategies in professionally accredited programs often rely on signature assignments for evidence of student learning (WASC, 2014). However, approaches to alignment are necessarily local and specific to the program, faculty, and institutional context (Wiggins & McTighe, 2006). For-credit programs with program-level and/or professional accreditation generally adopt a similar format to address learning outcomes alignment. Faculty members connect the course outcomes, if not assignments, to program goals, identifying different learner expectations along the way. In many institutions, professionally accredited programs have existing, external support networks or demands for mapping outcomes, where program faculty with less demand for outcomes have less established support (Culp & Dungy, 2012).

Co-curricular learning outcomes. Co-curricular learning outcomes, like credit-bearing outcomes, contribute to student learning at instruction, program, and institution levels but are not as frequently recognized by institutions as credit-bearing experiences. An institution's location, mission, history, and other factors inform how co-curricular outcomes contribute to the academic experience for many students (Schuh & Gansamer-Topf, 2010). By design or historical accident, co-curricular learning takes place alongside for-credit learning and often becomes nested in units that complement for-credit instruction with planned learning experiences (Kuh, 2009). These units may be organized in a student affairs division, alongside academic departments, both, or neither (Kuh & Whitt, 1988).

In many cases, students consistently report important learning gains from co-curricular experiences (NSSE, 2005). For example, outcomes in leadership development, communication and interpersonal skills, applied technical skills, and affective and emotional learning are known to result from myriad co-curricular education efforts (Renn & Reason, 2012). Co-curricular leaders readily support and improve a college learning environment by helping students integrate powerful for-credit and co-curricular learning experiences important to academic content, career, and personal development (Kolb & Boyatzis, 2001).

Co-curricular outcomes are often driven by a field's professional network or best practices, such as the Council for the Advancement of Standards professional standards, in conjunction with institutional priorities rather than deep knowledge of outcomes (Cubarrubia, 2009). A lack of strong accountability environments for co-curricular outcomes (i.e., accreditation) leaves co-curricular outcome alignments to the efforts of program directors and coordinators rather than institutional leaders (Bresciani, Zelna, & Anderson, 2004). Without accountability from accreditors for co-curricular outcomes, institutional leaders seem to have little interest in aligning learning outcomes from non-credit programs to institutional goals. The task is often left to local-level program directors or coordinators, whose training and professional preparation can vary substantially across settings, contributing to overall success in outcomes alignments (Jessup-Anger, 2009). Without clear alignments or priorities, co-curricular learning outcomes often receive little formal attention in the shuffle of large institutions.
While co-curricular and for-credit learning outcomes aid in clarifying purposes and overall outcomes for undergraduate education, they function differently in institutional environments, leading to differential treatment and value thereof. Institutions have varying degrees of accountability for integrating co-curricular outcomes into institutional level goals or outcomes for learners. At the same time, intentionally including both for-credit and co-curricular learning in education efforts can lead to powerful learning experiences (Habron, Goralnik, & Thorp, 2012) linked with increased completion rates and shorter time to degree (Kuh & Schneider, 2008). If/when such powerful experiences also include appropriate assessment to capture or describe outcomes, many stakeholders may come to value the gains made by learners and institutions (Heinrich & Rivera, in press; Maki, 2010). Leaving out some learning experiences from institutional aggregation or alignment efforts may lead to complications in the process of incorporating learning outcomes across an institution.

Assessment in Higher Education

Assessing educational efforts across higher education is an attempt to capture and reflect data for accreditation and accountability, program improvement, and delivering student learning outcomes (Ewell, 2009; Schuh & Associates, 2009). Assessment generally serves to create stronger learner experiences through feedback loops including formative and summative student and instructor data. These data help instructors and programs improve outcomes across courses and learning environments (Banta & Associates, 2002; Suskie, 2009). Multiple demands on and limited resources in institutions create pressure to simultaneously utilize both: 1) formative assessment processes related to improvement and learning; and 2) summative outcomes assessments related to accreditation (Ewell; Fuller, 2011).

At the same time, transforming assessment data for multiple audiences creates potential misunderstanding among for-credit and co-curricular instructors in higher education (Ewell). When describing assessment, the perspective of the assessor matters to issues of data and audience (Bolman & Deal, 1997). The individual assessor may consider a number of student, peer, contextual, political, environmental, disciplinary, and/or external audiences when making assessment choices. These multiple audiences likely have different needs, leading to potential conflicts in the ways that data are used. In the next sections, I describe various levels where assessment takes place in an organization and how each interacts with the others.

Instruction, department, and institution-level assessment. Instruction level assessment is as varied as learning outcomes themselves and mostly the responsibility of instructors who teach the subject matter. Those instructors may or may not be prepared to use effective pedagogy to deliver content or assess learning outcomes (Maki, 2010). Ongoing teaching and learning seminars focus more on the importance of instructional design, such as syllabus construction, than on any one pedagogical or assessment method (Fink, 2013). For these reasons, no single trend in postsecondary assessment is prevalent. Instructors typically assess student performance based on observed student work related to learning goals or outcomes at the course level, attendance, participation, or other non-cognitive standards (Maki; Sedlacek, 2011).
Student performance is often only aligned to and measured for course and program purposes, and most student work is not aligned to institutional level goals (Ewell, 1997, 2009; Lenning & Ebbers, 1999; Maki).

Program level learning outcomes assessments were the early focus of accreditation agencies to determine the use and efficacy of learning outcomes. Program level assessments helped improve instruction, streamline resources, and served as evidence of improvement for accreditation (Eaton, 2011). When aligned with both discrete learning activities and broader university goals, program level assessment can help an institution know what a student learns or how a student develops as a result of specific situations or experiences in for-credit or co-curricular environments (Renn & Reason, 2012). Such assessments can also help a department or unit improve learning delivery processes and inputs (Schuh & Associates, 2009). Program level assessments are helpful for domain-specific knowledge, which is why most professionally accredited assessment is concentrated at the program level and guided by well-defined and professionally standardized processes (Accreditation Board for Engineering and Technology (ABET), 2015). Program level assessments, however, often do not assess team and interpersonal skills, student engagement, and workforce readiness skills (Dwyer, Millet, & Payne, 2006), which have historically been the focus of co-curricular learning and adopted by areas like career services, counseling, leadership development, Greek life, and service learning (Thelin, 2004); this could be why professional accreditors tend to focus on course-based content. Some institutions have utilized internships or service learning to link for-credit and co-curricular activities within the context of an academic program to create meaningful experiences for undergraduates. Co-curricular assessment specialists, in growing numbers, are working toward assessment aggregation that demonstrates broad level learning goal achievement, not unlike institutional learning goal mapping (Inkelas & Soldner, 2011; S. Lee, personal communication, 2014).

While for-credit and co-curricular learning outcomes assessments have been part of higher education debates for many years (Boyer, 1991; Ewell, 1997; Palomba & Banta, 1999; Upcraft & Schuh, 1996), refocusing on institution-level learning outcomes added nuance to and motivation for assessment efforts of university-wide goals (U.S. Department of Education, 2006). At the same time, course level grades and sometimes program level outcomes remained relevant while incentives for institutional goal assessment were generally not in place (Ewell, 2009). Likely as a response to refocused attention on institution-level learning outcomes, recent efforts to improve institutional outcome assessment have surfaced (AAC&U, 2013b). For example, instructors who assess for learning in a discipline are increasingly asked to map course and program learning outcomes onto broader, institutional goals (Hovland & Schneider, 2011). Many institutional assessments attempt to capture evidence of integrated academic and co-curricular knowledge, skills, and behaviors across a student population (Kuh & Ikenberry, 2009). In doing so, institutions also attempt to respond to both internal and external audiences by representing learning data in different ways.
With additional stakeholders adding demands, learning outcomes should be assessed differently based on the level of alignment, specificity of definition, and goal orientation with which they are written (Kuh & Ikenberry, 2009). Institutions and programs use various qualitative and quantitative assessment methods to understand multiple completion and quality outcomes in the hierarchy of learning from classroom, course, and program to department, institution, and accreditor (Schuh & Associates, 2009; Suskie, 2009; Thelin, 2004). At the same time, external accreditation standards are only loosely linked to institution-wide efforts for learning outcomes or program assessment (Cubarrubia, 2009). Loose linkages leave the institution responsible for describing a tie between student learning outcomes and the moving target of external accountability (Cubarrubia, 2009). In response to increased focus on undergraduate education outcomes (U.S. Department of Education, 2006), regional higher education accreditation agencies encourage the inclusion of institutional learning outcomes assessment as part of accreditation efforts (Hovland & Schneider, 2011) but do not specify a formula for describing instruction-to-assessment alignment. Non-specific alignment leaves institutions to interpret ambiguous standards across a wide range of instruction-to-assessment activities. For accreditation, administrators position institution-level learning outcomes as a bridge between student learning and external accountability to connect unique content outcomes to a wider range of institutional values (Bok, 2006; Hovland & Schneider, 2011). Institution-level learning outcomes assessments also help faculty and administrators set both high-level and measurable learning expectations for students (Collins & Roberts, 2012). Expectations such as integrated learning are useful for communicating value-added propositions to interested employers as well as families and governments (Hovland & Schneider, 2011). In their typical form, institution-wide learning outcomes assessments make use of aggregate learning data from across the institution and map these data to an institution-wide outcome set. In a careful application, measuring, identifying, or describing institution-wide learning should contribute to an institution's ability to improve individual programs as well as achieve quality measures in accreditation (Hovland & Schneider, 2011), which is a departure from the historically separate processes of assessment for improvement and assessment for accreditation, as illustrated in Table 1 below.

Table 1
Two Paradigms of Assessment (Ewell, 2009, p. 8)

Dimension | Assessment for Improvement Paradigm | Assessment for Accountability Paradigm
Strategic Dimensions | |
Intent | Formative (improvement) | Summative (judgment)
Stance | Internal | External
Predominant Ethos | Engagement | Compliance
Application Choices | |
Instrumentation | Multiple/triangulation | Standardized
Nature of Evidence | Quantitative and qualitative | Quantitative
Reference Points | Over time, comparative, established goal | Comparative or fixed standard
Communication of Results | Multiple internal channels and media | Public communication
Uses of Results | Multiple feedback loops | Reporting

Institution-level assessments with feedback loops to individual programs could inform instructors and programmers how to improve efforts.
Instructors and programmers might then encourage students to use multiple perspectives to approach problems, gain depth of knowledge, and build breadth in career applicability (Gardner, 2014; Spohrer, Gregory, & Ren, 2010).

Accreditation-level accountability. In the United States, six major regional accreditation bodies establish and interpret accountability standards and collect related evidence for all institutions seeking accreditation (Ewell, 1997). Nationally, the broad model for accreditation assessment points institutions toward quality assurance, instructional improvement, accountability, and student learning (National Center for Postsecondary Improvement, 2014). At the institutional level, the quality assurance model is aimed at helping public and institutional stakeholders learn and benefit from a description and interpretation of how programs impact learners.

Prior to the Spellings Report (U.S. Department of Education, 2006), any assessments of broad institutional goals were likely used for accreditation rather than for internal improvement (Ewell, 2009). More recently, however, institution-wide aggregation of undergraduate learning outcomes assessments helps to streamline accreditation efforts and may also contribute to specific improvements. Accrediting agencies seeking campus cultures of evidence want to see connections between aggregated assessment data and communication among institutional partners (Culp & Dungy, 2012; Ewell, 2009). However, a downside of collecting accreditation data is that feedback is not often given for purposes of specific internal improvements. Feedback given to institutions after accreditation efforts is generally cursory, yet culture change efforts may give administrators more reason to internally encourage and make use of aggregated data for program improvements (Kezar, 2004).

When feedback from institution-level aggregate goals is timely and meaningful, improvements at the institution and program level can be powerful (Ewell, 2009; Schuh & Associates, 2009). One outgrowth of this changing environment is the emergence of professional development opportunities focused on advancing teaching and assessment for the multiple purposes of meeting instruction, program, and institutional goals. Orienting faculty and administrators to learning outcomes assessment and data aggregation is key not only to internal improvement but also to institution-level assessment more broadly (Ewell, 2009). At the same time, more practice with institution-level assessment is needed to know how aggregate learning data can best be reported back to instructors and departments (Collins & Roberts, 2012).

Co-curricular learning assessment. Co-curricular learning environments, similar to for-credit environments, have technical and social assessment challenges and multiple audiences at all levels. In a study of institutional assessment leaders including provosts, vice presidents, and assessment coordinators, scholars identified three primary reasons for co-curricular outcomes assessment: program improvement (student learning) (Kirksey, 2010), tracking (Jessup-Anger, 2009), and accountability (Fuller, 2011; Hodes, 2009). Co-curricular learning outcomes assessment, like for-credit assessment, involves direct observations of student effort toward an identified learning outcome (Maki, 2010) and is beginning to include 'signature assignments' and other direct observations of student learning (Collins & Roberts, 2012; WASC, 2014).
In many co-curricular units, for example, entry- and mid-level administrators are responsible for delivering co-curricular programs and, at times, interpreting individual learning activities in terms of institutional goals (Kirksey, 2010). Co-curricular outcomes are usually mapped to division-level goals and then integrated with broader institutional learning outcomes to translate outcomes from specific to broad targets (Collins & Roberts, 2012; Kuh, 2010). Vice presidents and directors are in turn responsible for coordinating and supervising numerous units and the connections from co-curricular activities to institutional goals (Collins & Roberts, 2012; Schuh & Gansemer-Topf, 2010).

However, often due to staff or unit inexperience, learning outcomes and student performances are measured indirectly or in terms of satisfaction rather than learning (Collins & Roberts, 2012). Further, entry- and mid-level co-curricular staff members who assess students' learning outcomes and development frequently have limited formal training in the assessment method or outcome of interest (Hoffman, 2010; Hoffman & Bresciani, 2010). For example, in one study, co-curricular staff with varying ability levels found assessment tools challenging to develop and utilize for specific outcomes (Seagraves & Dean, 2010), often engendering reliance on past experience or a conveniently available tool (Collins & Roberts, 2012). A common pathway into the field for early-career professionals, student affairs graduate programs mostly train practitioners as generalists in student learning, development, and administrative leadership, and they have only recently begun to add evaluation and assessment course offerings (Hoffman, 2010). Increased demands for evidence-based practice require specific assessment training from graduate programs, professional development opportunities, and on-the-job training (Collins & Roberts, 2012; Hoffman, 2010). Yet efforts at outcomes assessment persist, perhaps because of high demand across different institutional strata (i.e., for-credit, co-curricular) for assessment to drive leadership, action, and sense making of practice (Culp & Dungy, 2012; Jessup-Anger, 2009). To support assessment demand, additional efforts are needed among co-curricular and administrative units to align or map outcomes to institutional goals and create an overall assessment culture on campuses. Green, Jones, and Aloi (2008) found that a commitment to assessment practice existed across different levels of co-curricular staff, including entry-level staff, managers, directors, and executives responsible for activities such as instruction/programming, advising, and supervision. Institutions giving co-curricular learning outcomes alignments proportional attention (Ewell, 2009; Jessup-Anger, 2009) could see a greater contribution of these programs to student and institutional success (Schuh & Gansemer-Topf, 2010). While co-curricular groups face an additional challenge in coordinating, differentiating, measuring, and aggregating learning outcomes data, many are willing to contribute effort, if not skill, to a culture of assessment (Ewell, 2009; Green, Jones, & Aloi, 2008; Schuh & Gansemer-Topf, 2010).

Summary of Learning Outcomes Assessments in Higher Education

For-credit and co-curricular assessment practices aligned with institution-level learning goals can provide a unique opportunity for institutional leaders to assess for accreditation and internal program improvement.
Different challenges exist for providers of for-credit and co-curricular learning when assessing relative to institution-wide goals. Assessment practices in for-credit and co-curricular learning environments vary in execution but share a common characteristic: data are collected about individual learning but are not consistently aligned to broader organizational or institutional goals that help programs improve or communicate value to external stakeholders. However, institution-level assessment has strong potential to contribute to a broad culture of assessment by utilizing data and information from multiple sources on a campus to inform multiple university goals (Hovland & Schneider, 2011).

Systems and Mental Models in Higher Education

In this section I use a systems thinking approach to explore enacted mental models of learning outcomes assessment. In basic form, the system of learning outcomes and assessments includes learning as a key input and assessment as the key feedback mechanism. In a more complex systems approach, individual mental models and the assumptions about learning held by individuals form a basis for understanding larger systems of learning outcomes and assessments (Senge, 1994).

Many scholars have tried to explain higher education environments, including aspects associated with assessment, using perspectives of structural/managerial organizations (Mintzberg, 1979), organizational socialization (Dill, 1982; Tierney, 1997), cultural organizations (Tierney, 1988), organizational typology (Bergquist, 1992), and multiple policy paradigms (Kezar & Dee, 2011). Even when taken together, these multiple perspectives do not sufficiently help individuals in complex institutions make sense of an environment where learning outcomes and assessment across different organizational levels are not well linked, nor do the multiple frames necessarily add up to a systems approach to understanding organizations.

A systems perspective on learning outcomes and assessment may be a way to more clearly understand, explain, and intervene in assessment actions in higher education. Systems thinking as advocated by Argyris and Schön (1996), Senge (1994), and others takes into account multiple forms of thinking in an organizational system made up of people who hold and enact mental models of the world around them, in this case assessment. Individuals may develop trust, join and leave groups, and develop shared meaning about assessment, but until individuals understand the individual mental model they are enacting, they operate under assumptions that may or may not match the organizational assumptions where they work (Argyris & Schön, 1996; Senge, 1994).

Overview of systems thinking. Systems thinking was first developed and tested as a way to apply an understanding of mechanical systems to social systems. Systems thinking relies on people to "make their understanding of social systems explicit" for the purposes of improvement (Aronson, 1996, p. 1). According to Aronson, systems thinking is helpful when facing

complex problems that involve helping many actors see the 'big picture' and not just their part of it; recurring problems or those that have been made worse by past attempts to fix them; issues where an action affects (or is affected by) the environment surrounding the issue, either the natural environment or the competitive environment; [and] problems whose solutions are not obvious. (p. 1)
Contrary to linear problem analysis, systems thinking seeks not to take apart the elements of an argument but rather to look for ways that parts and their movements affect one another and to observe how short-term decisions affect long-range consequences. Peter Senge (1994) describes systems practice in particular, discussing in detail five major competencies, called disciplines, required for systems thinking: Shared Vision, Mental Models, Personal Mastery, Team Learning, and Systems Thinking. Senge focuses on the interplay among the five disciplines, stipulating that participants continually reassess their competency in any discipline if and when a conflict or impasse arises in a group.

In the portfolio assessment movement in higher education, systems thinking principles are evident. While assignment and course assessment are important in department- or discipline-related experiences, portfolios used at the institutional level can capture and allow assessment of for-credit and co-curricular learning experiences. Portfolios are used in nearly 50% of U.S. public and private higher education institutions and have the potential to help align specific learning outcomes to broader institutional categories as well as integrate for-credit and co-curricular learning (Clark & Eynon, 2009). Data aggregated from portfolios and signature assignments are being used at the institutional level to identify and curate learning outcomes artifacts to represent institutional learning outcomes. However, when individual data are not digitally linked or clearly threaded through an institutional process, individual learners likely do not have control over what artifacts are included for consideration, a potential violation of trust between individual and institution (Maki, 2010).

Senge (1994) warns of the difficulties and anticipated barriers of practicing systems thinking skills such as data aggregation in complex organizations. The work is time intensive, necessarily group oriented, and may require a non-hierarchical approach to decisions. These practices are not impossible to achieve and are being used in at least one case with good intentions and results (e.g., Youatt, McCully, & Blanshan, 2014). Too often, though, systems thinking is seen as antithetical to higher education organizations built on somewhat mechanical models of independent, scholarly production, however outdated (Bess & Dee, 2008; Mintzberg, 1979; Thelin, 2004).

Mental models. A basic element of systems thinking, mental models are an individual's understanding or conceptualization of information and experience (Johnson-Laird, 1983). Essentially a thought process, mental models are difficult to directly observe and require practice for an individual to fully communicate (Hatch, 1997). Mental models can be expressed in the form of metaphors (Morgan, 2006), frames of understanding (Bolman & Deal, 1997), or knowledge of underlying systems thinking (Senge, 1994). Informed by culture, context, and individual interpretation of the world around oneself, mental models serve as cognitive containers that hold large amounts of integrated knowledge and experiences (Heifetz, 1994). Mental models help an individual navigate complex situations and work environments and may change to adapt to a changing environment. In work environments, individuals reinforce or adjust existing mental models by acting on information transmitted by the organization, the environment, and leaders (Dill, 1982).
In higher education, culture, work environment, and context inform individual mental models. Individuals, in turn, are personally involved in constructing their specific model of assessment practice and interactions (Flyvbjerg, 2006) and may do so at multiple levels of the institution (Kuh & Whitt, 1988). To use a leadership example of this idea, it is not enough for a leader to hold the vision; the leader must present the vision and receive feedback to make meaning (Heifetz, 1994).

In higher education environments, individuals form and enact mental models about assessment action and data (Schuh & Associates, 2009), typically through a process of action and reflection (praxis) that helps make the mental model conscious (Love & Estanek, 2004; Senge, 1994). This understanding and meaning making is a first step toward shared vision, another discipline in systems thinking (Senge, 1994). Shared vision is important in a department or unit because of the multiple ways faculty or staff might take actions that bridge individual and organizational action (Argyris & Schön, 1996). However, individuals not taking a systems approach to assessment may or may not share or communicate clearly the meaning of their assessment work with anyone else on their campus. Without shared meaning of assessment, an institution misses important links, connections, and/or alignment between and among learning outcome levels. Mental models about assessment differ for instructors and administrators at the instructional, program, institutional, and accreditation levels based on job role, varied training, and intended audience. As described earlier, disciplinary training for course and program assessment focuses on content-area expertise, classroom grading, and disciplinary skills (Mintzberg, 1979), while staff and administrative training focuses on co-curricular outcomes such as communication skills, leadership, and/or personal development (Hoffman & Bresciani, 2010; Maki, 2010). Different mental models of assessment in use during alignment discussions may lead to misunderstandings and lowered productivity.

There are also differences between the mental models of those who assess learning and those who align assessment data to institutional goals such as accreditation. Mental models held by program and institutional leaders require analysis of how much or what students learn in relation to certain institutional outcomes (Ewell, 2009; Schuh & Associates, 2009). Internal stakeholders, such as provosts, program directors, or deans, are each motivated by different kinds of assessment data for various purposes, such as accountability and rankings; this results in data collections that do not always connect to student learning in a course or program (Tinto, 1993, 2000). Outcomes disconnected from experiences result in the loss of both learner and institutional awareness of the impact of learning processes or environments on institution-level outcomes. At the same time, incentives for instructors and assessors are mostly unrelated to individual student learning assessment data collected for use in aggregate form by more senior decision makers (Boyer, 1991).

The result of mixed expectations may be confusion about the purpose of instructional and co-curricular work and a lack of confidence among instructors and administrators in the utility of collecting data for organizational learning.
A lack of clear purpose for assessment at the department level (Culp & Dungy, 2012; Tierney, 1988) leads to an institutional culture that does not always reinforce assessment practice (Kezar, 2001; Senge, 1994). When assessment communications, purposes, or actions lack clarity from visionaries to implementers, the connections between for-credit and co-curricular instruction efforts and broader learning outcomes at an institutional level are also weakened (Weick, 1976). Weakened or weak connections serve to distance those conducting assessment in any setting from the reinforcing feedback loop that might otherwise help improve practice and learning throughout the organization.

Influence of Mental Models on Systems Components

Several influences of mental models on systems of assessment are apparent in the higher education and organizational literature. The concepts of trust, shared meaning, and organizational learning have each been explored by scholars for their influence on teamwork, decisions, and behaviors of individuals leading to a learning organization, and they apply to this discussion of factors influencing assessment thought and action as well.

Trust. Trust is an important dimension of communication and action and an extension of individual mental models. In a college environment with integrated goals, multiple groups (faculty and staff) deliver instruction and conduct assessment. This reality implicitly or explicitly challenges the traditional roles of faculty and staff related to instructional delivery and who is assumed to be in charge of the locations where learning takes place (Thelin, 2004). Schein (2009) argues that implied power in a relationship (i.e., faculty over staff) has to be addressed before trust and organizational learning can develop (Senge, 1994). To inform shared meaning, trust has to be developed between those in formal or appointed instructional roles and others who deliver and assess learning outcomes, which can happen when all members are valued and respected for their contributions to assessment (Schein, 2009). When trying to align individual learning outcomes to institution-level outcomes, mutual trust between faculty/programmers and administrative staff can matter greatly to success. Kezar (2004) noted that trust in higher education has the potential to overcome structural boundaries to faculty and staff working together, especially where past assessment experiences of faculty and administrators have not been positive (Kezar & Dee, 2011).

Shared meaning. Individuals need to both self-identify as part of a system and act to influence outcomes to become rooted in shared culture (Bergquist, 1992; Senge, 1994; Tierney, 1988). The tacit or explicit cultural understanding held by members about an institution is characterized by the term 'shared meaning' (Senge, 1994). In many institutions, shared meaning reinforces some unquestioned assumptions about the organization (Argyris & Schön, 1996; Dill, 1982; Kuh & Whitt, 1988; Tierney, 1988), and leaving assumptions unquestioned belies whether there is shared meaning. For example, most faculty and administrators today acknowledge that assessment is part of higher education organizations but may not stop to question what is meant by the term or to realize that multiple definitions exist. In this way the mental models of an individual, perhaps without awareness, likely influence the culture in an organization (Senge, 1994).

Mental models of assessment change over time and across environments (Kezar & Dee, 2011; Senge, 1994).
For example, when an instructor or staff member changes organizations and continues assessment work, it is important for there to be significant interactions that socialize the individual to the new organization and contextualize assessment activity (Dill, 1982; Kezar & Eckel, 2002; Tierney, 1988). The new member then makes sense of, and in turn perhaps influences, shared meaning in the organization. Sharing meaning often results in individual reflection and updated understanding, which may further influence an individual's mental model of the organization or of any specific action (Argyris & Schön, 1996; Huber, 1991; Senge, 1994). Shared meaning of assessment is accomplished through interactions in which individuals learn from and contribute to the conversations and actions related to learning outcomes and their connection to course, department, and/or institutional goals (Schuh & Associates, 2009).

Organizational learning. Built on trust and shared meaning, organizational learning is the process through which individuals act consistently to improve outcomes for an organization and for individuals (Schein, 2009; Senge, 1994). When this learning process is practiced in the service of instruction and assessment, it has the potential to capitalize on staff and faculty efforts to result in powerful and transformative learning experiences for students. Powerful learning experiences often include an intentional combination of for-credit and co-curricular experiences aimed at achieving clearly stated goals and outcomes through formal and informal means (Kuh, 2001). To achieve this transformation, staff and faculty have to overcome organizational barriers to communication and shared understanding in order to build integrated approaches to for-credit and co-curricular instruction and assessment, which require mutual trust (Senge, 1994; Smith & MacGregor, 2009). For example, residential colleges and living-learning communities are often places where organizational learning is fostered to coordinate formal and informal faculty and staff experiences that help address complex problems of learning outcomes and assessments (Inkelas & Soldner, 2011; Pike, 2008).

Organizational learning occurs via planned coordination and integration of work roles and expectations and/or less formal opportunities to identify common organizational interests (Gumport, 1993; Schein, 2009). For example, through planned meetings, different faculty and co-curricular staff might work together to support institutional learning outcomes by responding to local data and by providing new or improved programs to improve student learning (Smith & MacGregor, 2009). Important opportunities for shared understanding also might occur when faculty and staff realize common interests and outcomes outside of formal structures (Inkelas & Soldner, 2011). Discovering such overlapping interests might occur at brown bag presentations or invited speaker sessions that focus on a topic of broader interest, presenting opportunities for faculty and staff discussions leading to shared understanding and/or action.

In groups where faculty and administrators come together to discuss assessment findings or learning outcomes and their purposes, and thereby learn and actively adapt to changes (Bergquist, 1992; Heifetz, 1994; Senge, 1994), we might see a setting where "conflict [can] be heard and honored, that allows differences to be visible and viable" (Tierney, 1988, p. 17).
Seeing differences between groups' contested meanings, informed by differing data and interpretations, is a necessary step in advanced organizational learning and creates opportunity for individuals to identify common values in the larger organization (Senge, 1994). The ability of an individual to reflect on and make meaning of experiences and the capacity of groups to openly disagree about ideas and eventually move forward with a decision are two key features affecting the success of a learning organization (Ewell, 2009; Schuh & Associates, 2009; Senge, 1994). To achieve such a learning organization, an understanding of shared meaning and trust requires that we view organizational development as inclusive of both individual growth and group learning processes (Bergquist, 1992).

Summary of the Literature

This literature review steps back to take an organizational view of mental models and assessment and asks what people are thinking about, why, and how they practice. The ways individuals enact their mental models about assessment matter to specific outcomes in higher education institutions. Researchers need to understand what is actually assessed, in what context, by whom, and for what reasons in order to understand connections between mental models and actual assessment practice. Instructional assessments are useful starting points for understanding the rationale for shared meaning, perhaps leading to a better understanding of institutional values (Hirt, 2006; Senge, 1994; Tierney, 1988). But assessment at the institutional or accreditation level is often linked to external priorities with larger organizational consequences (Bergquist, 1992). While a stated goal is often alignment of assessment priorities, this frequently proves not to be the reality, and alignment between priorities and practices at different levels is less clear.

Because thoughts are generally believed to drive actions (Schein, 2009), interviews, concept maps, and observations of actions are the most common strategies for understanding mental models and have proved reliable in prompting reflection and meaning making by individuals (Johnson-Laird, 1983). Time and circumstances change the nature of learning outcomes assessment, so any investigation is a snapshot of individuals' thinking at a point in time (Schein, 2009). If we can understand enacted mental models of learning outcomes assessment, we can learn ways to shift perspectives based on that information and potentially organize around fundamental differences and commonalities.

Chapter 3: Methodology and Methods

The purpose of this study was to explore influences on individuals' enacted mental models of learning outcomes assessment. The literature suggested some potential influences, including institutional culture, socialization, academic/professional training, and disciplinary distinctions, but how these or other influences affect the ways in which individuals think about and assess learning outcomes was less clear. Understanding individual assessment frames or mental models may help institutional leaders make more sound policy and implementation decisions about assessing learning outcomes. The following research questions guided my study:

1. How were goals and learning outcomes understood and assessed?
2. What influences the enacted mental models of individuals' practice of learning outcomes assessment?
In order to learn more about individuals' enacted mental models of learning outcomes assessment, an exploratory, qualitative design allowed for closer examination of reported thinking alongside reported assessment behaviors (Glesne, 2006; Miles & Huberman, 1994). This chapter discusses the research design informed by the researcher's constructivist paradigm, the methods, the participants, and the ethics of the research. This chapter also addresses approaches to data trustworthiness, analysis, and reported findings.

Constructivist Paradigm

The current study was grounded in a constructivist research paradigm. Constructivist research identifies the perspective of the researcher in the data and integrates researcher and participant interpretations in analyses (Patton, 2002). My perspective on research included the possibility of multiple realities in which the researcher and participant(s) co-create meanings and understandings (Denzin & Lincoln, 2005; Glesne, 2006). To investigate multiple perspectives on and actions in assessment, a constructivist approach allowed me to use both deductive and inductive approaches to data, multiple theories, and multiple sources of data to identify and organize important concepts (Denzin & Lincoln, 2005; Glesne, 2006; Merriam, 1988). Reporting from this perspective represented the ways in which both participants and I made sense of real events in a complex environment. I recognize that my presence influenced and shaped the ways in which participants made meaning of questions and therefore their responses, which are my data (Guba & Lincoln, 1989). My intent was not to determine an objective, more-or-less correct outcome but rather to surface and document existing interpretations of participants' own worldviews and experiences related to assessment (Guba & Lincoln, 1989). As a result, multiple interpretations of assessment held by participants are represented in this study (Stake, 2005).

Qualitative Research Design

Although the literature often suggests potential influences on assessment practice, it typically does not address how and why different influences occur. Open-ended interview methods aligned with constructivist approaches (how and what) to knowing more about the influences on and practices of individuals. Further, the literature about learning outcomes assessment and possible influences on assessment lacked clear constructs upon which a researcher might base a quantitative investigation. Qualitative, open-ended, individual interviews were chosen to explore participants' mental models, what influenced them, and how. The interviews and additional document analyses were useful for gaining a deep understanding of individuals' influences on, approaches to, motivations for, and understanding and application of assessment practices (Creswell, 2008). In taking this qualitative approach I was able to simultaneously explore the role of multiple internal and external influences on the phenomenon of interest, in this research individuals' enacted mental models of learning outcomes assessment (Creswell, 2008; Merriam, 1988). I discuss each research decision in the following sections.

Sampling. Two primary sampling decisions were necessary for this study. The first decision concerned the research site, the particular institution in which enacted mental models of assessment were investigated. The second concerned the participants, the individuals who assess learning outcomes, and exploring the influences on their practice.

Research site.
The Association of American Colleges & Universities' Shared Futures: Global Learning and Social Responsibility initiative was the starting point in selecting an institution as the site of this study (AAC&U, 2013b). Because the 32 institutions in the Shared Futures initiative have broad learning goals at some level of implementation, I inferred some presence of institution-level learning outcomes in use or in development at these institutions. The study site was selected because it is one of the Shared Futures initiative partner institutions (AAC&U, 2013b) and therefore represented an appropriate environment in which to begin an investigation of enacted mental models of assessment. I confirmed that efforts to develop and implement institutional learning outcomes had existed and continued; in addition, the institution was convenient for scheduling multiple on-site interviews with limited available funding for research logistics. The research was conducted at a large (>20,000 undergraduate students), Midwestern, research-extensive university. Due to the overall size and complexity of its academic and administrative structures, some transferability of results may be appropriate to institutions with similar characteristics.

Participant selection. In qualitative research, sampling is necessary to help bound the exploration to participants who are likely to have experience with the phenomenon of interest (Hill & Levenhagen, 1995), in this case assessment thoughts and actions. To avoid a sampling bias for or against any one explanation of assessment behaviors, no particular group within the institution was at the center of the investigation. In 2009-2010, as part of efforts related to Shared Futures, the selected institution began implementing institution-level learning outcomes by convening faculty and staff teams to write goals for undergraduate learners, later approved through faculty governance processes. In early 2012, senior administrators convened a different group of 80 faculty and staff members to work in teams to develop rubrics for five existing goals for learning. These 80 people were drawn from across all undergraduate-serving colleges on the campus and a number of non-academic units. No particular level of training or other commonality was used to select members of this group; directors and deans of the colleges were asked to appoint and/or recruit individuals who were "excited about assessment" (J. Adams, personal communication, 2012). By the middle of 2012, the coordinated efforts at rubric development were completed, and this group of 80 has not been called to work on rubrics since. At the time of this study, the initial rubrics had been vetted by and were in use in the campus community.

Participants were recruited from among the 80 individuals in the university rubric development effort because they came from multiple colleges and units on the campus and held varying roles in the institution. It is unknown whether participants shared an understanding of learning outcomes at the end of their work on the rubric committee. However, it is likely that participants had variable experiences related to assessment based on their individual backgrounds and work in different units. Based on participation on the committee, I expected that sampled individuals would have a mental model of learning outcomes assessment.
I first identified and confirmed the list of participants with the rubric project planner and verified their continued employment through the university's online directory, resulting in the removal of three individuals from the list of 80. From the remaining list I drew a random sample of 15 participants that included individuals with titles such as deans and/or directors, assistant deans or directors, program coordinators, teaching faculty, and entry-level employees. I invited individuals via email and phone to participate in two interviews. Interviews took place in an appropriate location identified by the participant, which in all cases was a university office space. Second interviews were scheduled at the end of the first meeting. Both sets of interviews took place between June 2014 and January 2015.

By selecting participants from a broad set of work functions and positional strata in the university, I assembled multiple perspectives to create a more holistic narrative of enacted mental models of learning outcomes assessment. Recruiting participants from various locations and positions was an important strategy for lowering bias across interview participants in the study (Eisenhardt & Graebner, 2007). I conducted a pilot study to check the interview protocol and questions with one faculty member and one staff member from two different units on the campus. I adjusted several questions for clarity and added a statement at the onset of the interview to clarify the definition of learning outcomes for the individual learner. The data from the pilot interviews are not included in the analysis.

Data Collection

In a constructivist paradigm, qualitative research is open to many forms of data collection, so a clear connection was needed between the research questions, the data collection (documents, interviews, and observations), and the realities of a university (Gerring, 2004). The research goal was to identify participants' enacted mental models of learning outcomes assessment and what influenced those mental models. Interviews and selected documents were the forms of data collected from participants.

Interviews. Interviews were the primary form of data collection. I interviewed each person two different times for 30-60 minutes each, which helped me surface individuals' understandings about specific topics relevant to enacted mental models of learning outcomes assessment and develop a more thorough narrative overall. Two interviews for each of 12 participants allowed for data saturation during analysis (Patton, 2002; Yin, 2003). Interview questions (Appendix A) were derived from major themes in the extant literature that might be relevant to learning outcomes assessment. Questions were developed to elicit information from participants about their training, socialization, skill building for assessment, relationships, leaders, department and institutional cultures, application of ideas, rewards, incentives, accountability, communication, and barriers. A beginning assumption I held was that not every person carried the same mental models of assessment. I began the interviews with an introduction to the study and an opportunity to consent to be interviewed. I then used a semi-structured interview protocol for the first interview (Appendix A).
Second interviews were used to ask more pointed questions based on themes that emerged from the first interviews, to help clarify statements and ideas, and to provide an opportunity for participants to share more about what they thought was important to the study (Yin, 2003). First interviews were audio recorded and transcribed for ongoing reference, accuracy, review, and perspective. Second interviews were recorded but not transcribed because I employed a concept map on which participants recorded data and information related to the study (Figure 2); the concept maps were then analyzed. I made detailed field notes and reflexive memos during and after interviews to highlight connections between emerging themes and to help identify patterns and/or connections to theoretical constructs about organizational learning, assessment, and/or the environment.

Figure 2. Concept map for second interview

Documents. I requested and collected a limited number of documents related to enacted mental models of learning outcomes assessment, including documents identifying learning outcomes and their alignment to higher-level goals as well as reports related to mission, vision, and values. These documents were used to more fully understand the kinds of messages and priorities received or expressed by participants in the study (Patton, 2002).

Participant safeguards. Interviewing participants sometimes exposed personal opinions and experiences that would be harmful to the participants' reputations if made public. Harm, in these cases, may have included loss of physical or emotional wellbeing and loss of dignity or autonomy (Miles & Huberman, 1994). As the researcher, I had the responsibility to ensure appropriate protections in four areas standard in qualitative research (Guba & Lincoln, 1989): harm, deception, protection of privacy, and informed consent. I minimized harm to participants by acknowledging risks in personal and professional disclosure and reassuring individuals that their perspectives about assessment were valued as a matter of institutional learning.

In the interview protocol I avoided probing outside the scope of influences on and/or practice of assessment in the course of the participants' professional careers. In order to avoid deception or misunderstanding, I provided individuals with an overview of the study and an opportunity to ask deeper questions for further understanding. Consistent with my research paradigm, telling participants more about the goals of the study elicited more relevant information and perspectives from individuals. To protect privacy, anonymous identifiers were used for all participants, the institution, the units where the individuals work, and other uniquely identifiable information. Prior to initiating the study I obtained Institutional Review Board (IRB) approval for the study. Participants were informed in writing of their rights upon joining the study. Each person received written instructions on how to withdraw from the study at any time if they chose to do so and signed an informed consent form approved by the IRB (Appendix B). Data were stored separately from participant information and the identities of individuals.

Data Analysis and Reporting

In this study I analyzed individual mental models as reported, through a deductive analysis of interview responses and an inductive, open-coding, thematic analysis, attempting to describe categories of data while maintaining authenticity (Glesne, 2006).
First, I evaluated interview and concept map data for ideas and themes related to the literature on enacted mental models and espoused theory/theory-in-use (Argyris & Schön, 1996; Senge, 1994) and for evidence of understanding and practices related to each person's perspective on what was going on around them. I later explored the data for ways that individual assessment practices were more and less connected to the assessment behaviors in the person's self-defined work environment. Consistent with qualitative inquiry strategies (Creswell, 2008; Merriam, 1988; Patton, 2002), data analysis was ongoing during the study. Field memos were used during and after every interview to recall interview content and my associated thinking. Ongoing analysis helped me identify needs for additional information from individuals and adjust second interview questions to reflect deeper exploration of assessment understanding and practice.

In first-round interviews I listened for emerging major ideas (Weiss, 1994) that yielded information about myriad influences on assessment for each participant. I used NVivo qualitative data software (v10.2.0) to organize, analyze, and code first-interview memos and transcripts for responses to interview questions. I mapped responses directly to interview questions, thereby linking data to existing explanations of enacted mental models in the literature. When ideas linked to more than one question, multiple codes were added to the idea, creating links between influences on practice. I identified categories informed by the literature (motivations, drivers, influences, and reasons given for assessment), largely represented by the interview questions. I then reorganized, added meta-data, and collapsed codes into analytic categories of influences on assessment that I eventually called Push, Path, or Pull, to aid in visualizing how categories influenced different groups in similar or dissimilar ways. The Push, Path, and Pull categories emerged from the interview data and summarized the participants' interpretations of influences on their assessment. These categories helped me identify an assessment narrative that could run through the many learning-oriented units represented and parse out details of how individuals act to link their assessment work to the goals in their local, program, and institutional environments. Next I reviewed both first-round interview notes and field memos.

I debriefed my early observations from the first interviews with two different peer debrief partners (one academic administrator and one student life administrator) to identify salient concepts, hear practitioner perspectives, and identify blind spots in my initial analysis (Glesne, 2006). Debriefing resulted in adjustments to my concept map template for second-round interviews. It was from first-round interview data and debriefing that I developed a framework of assessment levels as a way to share a mental model and encourage participants to make explicit their interpretation of said model. From these analyses I created a second interview protocol including a concept map template onto which all twelve participants entered data and information (Figure 2, above).

I then conducted 12 second-round interviews using the concept map template (Figure 2), during which I offered the concept map as a draft and asked for corrections, additions, and changes.
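The collapsing of open codes into the Push, Path, and Pull influence categories described above can be illustrated schematically. The following minimal sketch uses invented code labels and a hand-built mapping rather than the study's actual NVivo codes; it is offered only to make the grouping step concrete, not to reproduce the analysis.

```python
# Minimal, hypothetical sketch of the code-collapsing step described above.
# The open codes and the mapping are illustrative placeholders, not the
# study's actual NVivo codes; they show only how open codes could be
# grouped into the three analytic influence categories (Push, Path, Pull).
from collections import defaultdict

# Hypothetical open codes attached to interview excerpts during first-round analysis
open_codes = [
    "disciplinary training", "graduate socialization",   # training-related
    "unit culture", "campus environment",                 # environment-related
    "accreditation reporting", "merit incentives",        # accountability-related
]

# Analyst-defined mapping from open codes to influence categories
category_map = {
    "disciplinary training": "Push",
    "graduate socialization": "Push",
    "unit culture": "Path",
    "campus environment": "Path",
    "accreditation reporting": "Pull",
    "merit incentives": "Pull",
}

# Collapse open codes into the three categories
collapsed = defaultdict(list)
for code in open_codes:
    collapsed[category_map[code]].append(code)

for category, codes in collapsed.items():
    print(category, "<-", ", ".join(codes))
```

In the study itself this grouping was done interpretively within NVivo rather than computationally; the sketch simply makes the structure of the move from open codes to analytic categories visible.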
I asked participants where on the concept map they would position their primary, secondary, and other work roles, including assessment. I asked what connections they made between levels of the organization in their regular work and what other kinds of structures were missing from the concept map for their work. I asked what the general motivating relationships in the organization were. I referred at times to data from the first interviews to identify salient motivating factors and asked participants to discuss them in terms of the concept map. Concept maps were used in analysis primarily to identify and understand the linkages between learning outcomes assessment and other facets of the individual's organizational experience.

I added second-interview analysis to the initial Push, Path, and Pull categories. I identified similarities in the ways different individuals understood and practiced assessment. I labeled these similarities action groups and named them Isolated, Limited, and Connected. To visualize relationships, I arranged the action groups and the literature-based categories into a 3x3 matrix of influences and actions (Table 2), which is more fully explained in chapter four.

Table 2
3x3 Matrix of Actions and Influences on Assessment
(Rows: participants' influences on assessment. Columns: participants' assessment action groups.)

 | Isolated Assessment | Limited Assessment | Connected Assessment
Push | | |
Path | | |
Pull | | |

Some further commonalities were discovered across participants and groups, differing from both the action groups and the influences-on-assessment categories. To create an additional visual map of assessment influences, I linked the Push, Path, and Pull categories and the Isolated, Limited, and Connected action groups with participants' stated motivations. I used the visual map to conduct a cross-case analysis of group types, thereby deepening my understanding and ability to communicate the nuances of individual enacted mental models of learning outcomes assessment.

Throughout analysis I examined the data for apparent connections between themes such as learning outcomes, organizational pressures, and assessment practices, and again for explanations of motivation for assessment. I examined the data for various influences on and meanings about assessment among participants associated with different roles on campus and analyzed interviews and concept maps for evidence of contradictory or complementary actions. I then explored the data for consistencies or inconsistencies across participants' actions and understandings, in addition to my own conceptual linkages to relevant organizational theory (Glesne, 2006). The Push, Path, and Pull categories and the Isolated, Limited, and Connected categories were analytic in that they emerged from the data. Participants in this study did not describe their work in these terms; rather, I created these categories as artifacts of the data based on similarities across participants and their experiences.

Data Trustworthiness and Credibility

Trustworthiness was maintained during data collection, analysis, and interpretation by taking detailed interview notes, writing post-interview memos, producing verbatim transcriptions, and organizing artifacts (Patton, 2002). To support findings I triangulated data from multiple sources to develop a thorough understanding in order to answer the research questions (Yin, 2003). Sources included two rounds of interviews with participants, field notes, and literature related to assessment and enacted mental models.
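As a schematic illustration of the cross-tabulation behind Table 2 above, the following sketch tallies hypothetical participant assignments (invented identifiers and placements, not the study's data) into a 3x3 matrix of influence categories by action groups. In the study itself these cells were populated qualitatively from interview and concept map analysis rather than computed as counts.

```python
# Minimal, hypothetical sketch of the 3x3 cross-tabulation behind Table 2.
# Participant identifiers and assignments are invented for illustration only.
from collections import Counter

influences = ["Push", "Path", "Pull"]
action_groups = ["Isolated", "Limited", "Connected"]

# Hypothetical (participant, influence, action group) assignments
assignments = [
    ("P01", "Push", "Isolated"),
    ("P02", "Path", "Limited"),
    ("P03", "Pull", "Connected"),
    ("P04", "Push", "Limited"),
]

counts = Counter((influence, group) for _, influence, group in assignments)

# Print a simple 3x3 matrix of counts: influences as rows, action groups as columns
print("Influence".ljust(10) + "".join(g.ljust(12) for g in action_groups))
for influence in influences:
    row = influence.ljust(10)
    for group in action_groups:
        row += str(counts.get((influence, group), 0)).ljust(12)
    print(row)
```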
Among the individual concept maps, I analyzed connections between thinking and action about assessment and created codes and themes from the data for continued analysis (Weiss, 1994). To help me examine assessment through past and present training, socialization, and accountability, my approach "allow[ed] for the simultaneous examination of the role of structures, culture, organization-wide processes, history, and myriad other conditions" (Merriam, 1988, p. 51).

I then employed a third peer debriefer to provide additional perspective for data credibility. By exposing discrepancies in analysis and interpretation across individuals, faculty or administrator groups, and/or sub-units of the institution, the third peer debriefer encouraged consideration of alternative explanations of patterns and themes. Finally, key participant member checks offered additional perspective and feedback on the data and interpretations, consistent with a constructivist approach to data that values meaning making with participants in the study (Miles & Huberman, 1994).

Personal experiences played an important role in the grounding and credibility of this research (Glesne, 2006). In my past roles in higher education I worked in student affairs settings and often found that assessment lacked coordination and meaningful support in the unit and division. Currently I coordinate and implement program design and assessment plans for faculty and staff on a campus. My interest in this particular research emerged from having struggled at times to identify how to engage apparently contradictory reasons to assess my own work, based on the shifting demands of a changing organizational landscape and the reality of having multiple stakeholders who were interested in the outcome for different reasons. I found that assessment was often valued when it was complete, but carving out time for the complex task of an assessment agenda much beyond tracking attendance and accountability was challenging in the work environment. Based on the literature and my experiences, I proposed that institutional assessment practices work better when they include a broad range of assessment activity that layers output indicators (e.g., counting students in seats) with assessments of learning and development outcomes (e.g., how are you a different person?).

Limitations

This research design was exploratory and sought to know more about enacted mental models of learning outcomes assessment in one higher education institution. Findings from this study are not directly generalizable, although they may be transferable to similar settings, nor should causality be inferred from the relationships described. Further, the individuals participating represented only a small cross-section of historical and current perspectives. While the sample includes administrators and faculty from different areas in the university, these individuals cannot represent all perspectives on assessment and the espoused goals of the represented departments, units, and the institution in the study.

Answers to the research questions emerged from participants' multiple perspectives on assessment; each participant informed the knowledge and conclusions presented (Flyvbjerg, 2006). In addressing a problem of assessment practice, my perspective informed how I listened for participant meaning and understanding, leading me to a model for the second interview. Participant responses to interviews (data) continued to inform themes, perspectives, and conclusions.
In qualitative research the researcher is the instrument of analysis. My presentations of data and my conclusions are bounded by my experiences in decisions affecting curriculum or large-scale program implementation.

In sum, the current research was an exploratory, qualitative study using deductive and inductive thematic analysis of the individual enacted mental models of assessment held by faculty and staff. Using two interviews with each participant and some document analysis, I created a trustworthy and credible representation of data in response to the research questions. Analyses included open inductive coding, use of codes grounded in theory and linked to the interview protocol, peer debriefing, and participant data checks in order to produce the final document.

Chapter 4: Findings

Twelve individuals each participated in two interviews for this study. The interviews consisted of questions about behavior and motivation for assessment, based in the literature and seeking to expound on the individuals' learning outcomes assessment thinking and practice. Data presented here were identified through open-ended interviews about a daily activity (assessment) in higher education (Wolcott, 2001). The data respond to the research questions guiding this study:

1. How are learning goals and outcomes understood and assessed?
2. What influences the enacted mental models of individuals' practice of learning outcomes assessment?

The following sections include an overview of the participants, responses to interview questions, and several emergent configurations of data. First, participants' responses to literature-based questions about assessment influences are reported. Individual questions and associated responses were aggregated in three groups representing themes in the literature: training and disciplinary socialization (Push), environmental and ecological socialization (Path), and incentives and accountability (Pull). Second, emergent themes of similar enacted mental models of assessment are displayed, grouped under three names: Isolated, Limited, and Connected. I use these groups as a basis for reporting patterns of behaviors. I then describe similarities and differences across the same patterns of behaviors in the action groups in a cross-participant analysis. Table 3 lists participant characteristics. Fuller participant descriptions are included with the emergent themes in this chapter.
Table 3
Participant characteristics (each entry lists pseudonym: position and primary duties; college; accreditation type; appointment type)

Alex: Professor and former assistant dean in the college (teaching, research, outreach); Professional school; Disciplinary accreditation; Faculty, tenure stream.
Blake: Associate professor and associate dean (accreditation, research, teaching); Professional school; Disciplinary accreditation; Faculty, tenure stream, part-time administrator.
Cameron: Associate dean (curriculum, research); Professional school; Disciplinary accreditation; Faculty, tenure stream, academic administrator.
Emerson: Faculty member and former student affairs staff member (administrative, research, teaching); Professional school (current); Regional accreditation; Faculty, tenure stream.
Jordan: Associate dean for student affairs functions (administrative); Science college; Regional accreditation; Non-faculty, administrator.
Hunter: Associate professor and director of a program (research, teaching); Liberal arts and sciences college; Regional accreditation; Faculty, tenure stream, director of a program.
Taylor: Assistant dean for student affairs functions (administrative); Co-curricular unit; Regional accreditation; Non-faculty, administrator.
Ryan: Associate professor (research, teaching, outreach); Professional school; Disciplinary accreditation; Faculty, tenure stream.
Casey: Associate professor and director of a program (research, teaching, outreach); Professional school; Disciplinary accreditation; Faculty, tenure stream, director of a program.
Addison: Librarian for interdisciplinary studies (administrative, research); Library; Regional accreditation; Non-faculty, librarian.
Avery: Instructor of interdisciplinary courses (teaching); Liberal arts and sciences college; Regional accreditation; Non-faculty, full-time instructor.
Jules: Professor and director of a program (research, teaching); Liberal arts and sciences college; Regional accreditation; Faculty, tenure stream, director of a program.

Participant Profiles

Alex is a late-career professor and former assistant dean in a professional school. His career included several institutions, accrediting body positions, and administrative posts. His current appointment includes 60% research and 40% teaching.

Blake is an associate professor and associate dean for school accreditation in a professional school. Her training included disciplinary expertise as well as mentoring for future service with the accrediting body. Her current appointment is 40% administrative, 30% research, and 30% teaching.

Cameron is a mid-career associate dean for curriculum in a professional school. Her training in a natural science area and affinity for research on teaching and learning led her to this position. Her appointment is 100% administrative.

Emerson is a new faculty member in a professional school and a former student affairs professional with ten years of professional practice. Her current work portfolio, consistent with other pre-tenure faculty, consists of 60% research and 40% teaching and outreach. In her first year in a tenure-stream faculty position her department bought out her teaching load so she could focus on research.

Jordan is an associate dean for student affairs functions at a science college. His career is grounded in higher education administration and his current appointment is 100% administrative. He works closely with the associate deans for research and academic affairs in his college.

Hunter is an associate professor and the director of a college-wide program in the liberal arts and sciences college.
Her disciplinary training and ongoing mentoring have included assessment skills and experiences from different experts in her field. Her appointment is 20% teaching, 50% administrative, and 30% research.

Taylor is an assistant dean for student affairs functions in a co-curricular unit focused on academic excellence. She held different student affairs positions for close to 20 years. Her current portfolio is split between recruiting and retention and programming for leadership and international learning goals.

Ryan is an associate professor in a professional school. Her current appointment includes 60% research and 40% teaching and service. She has been a faculty member during her career at three different large institutions.

Casey is a mid-career associate professor and the director of a department-wide program in a professional school. Her training is in a humanities discipline. Her current work in a professional school allows her to keep abreast of issues in multiple areas related to her content expertise. Her appointment is 30% teaching, 40% research, and 30% administrative.

Addison is a librarian for interdisciplinary studies at the university library whose training in library studies is augmented by close affiliation with a professional association. Her role on the rubric committee was to coordinate faculty and staff efforts to define dimensions of a goal. Her role is 100% designated for supporting faculty and students through the library.

Avery is a late-career instructor of interdisciplinary studies in a liberal arts and sciences college. His training in anthropology gives him the opportunity to teach in many classroom environments in the university, including large lectures with 400 or more students, smaller seminars, and study abroad. His current appointment is 100% teaching.

Jules is a professor and director of a college-wide program in the liberal arts and sciences college. Her training was in a humanities field and her appointment is 50% administrative, 30% research, and 20% teaching. Her role on the rubric committee was to coordinate faculty and staff efforts to define dimensions of a goal.

Assessment Influences

An assumption I carried into the study, that learning outcomes assessment could be teased out or considered separately from other kinds of assessment, environments, or influences, was not accurate. No participants considered their work confined only to learning outcomes assessment or devoid of instructional or programmatic inputs or assessments, so language in this chapter reflects participants' broader conceptualization of assessment. The data reported and discussed take into account assessment behaviors and mental models inclusive of a greater range of job-related behaviors. Findings presented first include assessment influences found in the literature and reported in interviews, followed by two emergent themes: action groups and shared influences independent of action groups. While several participants found the term 'learning outcomes assessment' unfamiliar, other kinds and modes of assessment were more familiar. Different participants used the words outcomes, goals, and objectives to describe the targets of assessments. Participants reflected that assessment practices and understanding were closely related to understanding of their own work context. The observed range of behaviors led me to identify an overarching construct of this study that I called assessment connectivity.
Two interviews with each participant yielded information about influences on assessment practices. Three categories of influences on assessment are used to organize this report, labeled as Push, Path, and Pull (Table 4):

• Push: The push into professional practice comprises influences of training and disciplinary socialization and includes graduate training, mentoring, and/or non-accredited disciplinary standards.
• Path: The path through the current environment includes pressures and forces acting on the participant in the current environment and cultural socialization in departmental, divisional, or institutional expectations and/or influences; unit or department culture of practice; formative professional development; and career turning points.
• Pull: The pull toward action from outside themselves is based on incentives and accountability and includes any of the practices listed above when those practices are directly linked, as perceived by the participant, to incentives and/or accountability measures for their individual work. Often, but not always, incentives equated to money and accountability equated to consequences.

Table 4
Influence types, descriptions, and categories (each entry lists influence on assessment work as reported in interviews; influence category; category name)

Disciplinary standards (non-accredited, accepted standards); influences of training and disciplinary socialization; Push
Strong mentoring; influences of training and disciplinary socialization; Push
Ongoing influence of discipline; environmental and cultural socialization; Path
Program/division assessment activity, training, expectations; environmental and cultural socialization; Path
Current departmental assessment practices; environmental and cultural socialization; Path
Greater good (greater than the institution); environmental and cultural socialization; Path
External stakeholders (public policy); environmental and cultural socialization; Path
Disciplinary accreditation (external); incentives and accountability; Pull
Disciplinary advisory board; incentives and accountability; Pull
Instruction-level demands (grades); incentives and accountability; Pull
Department-level incentives or accountability (course evaluations); incentives and accountability; Pull
Institutional-level incentives or accountability (funding); incentives and accountability; Pull

I report unit cultural influence on assessment separately from accountability and incentives here to highlight the ways organizational structures appeared to influence participants' thinking (Cubarrubia, 2009; Ewell, 2009). While I asserted that culture and accountability are connected in practice, participants' thinking varied from mine on this point. At times individuals in this study described assessment practices that were both tightly connected and loosely connected to incentives or accountability. Individual participants also described how influences from various categories threaded through their careers, noting the ongoing salience of an influential person or type of experience. External stakeholders such as legislators and public policy were included in the Path category because, on the few occasions that this element was identified as an influence by participants, it was discussed as one of several considerations in assessment decisions but not as a direct measure of accountability or incentive to the individual. No participants reported direct rewards or punishments for assessment as a result of state or national governments or public policy.
Representative data are included to show the variety of influences on individuals across the Push, Path, and Pull categories.

Influences of training and disciplinary socialization. The Push categories were rooted in a number of sources such as disciplinary training, disciplinary socialization, research-driven (disciplinary) entry into assessment, or training related to assessment, reporting, or analyzing findings. Several participants responded to questions of training and preparation with stories of how their training led them to current practice, indicating that they drew on multiple resources when thinking about practice. Blake, from a professional school, described a pre-tenure/post-tenure career arc of assessment influences:

My PhD was in [sub-discipline] engineering, and I never had any formal training on assessment other than a weeklong workshop that I took [20 years later] in 2006. I didn't get a chance to teach in the classroom; my major professor was heavily involved in [accrediting group]. So he encouraged me to think about getting involved -- don't get involved too soon in your career. You know, it's more of a post-tenure thing. And when I came here to [Institution Name], my department chairperson was also heavily involved in [accrediting group], so I had two major influencers who were encouraging me to do [accreditation], the basis for my developing the expertise in assessment.

For Blake, the groundwork for eventual assessment participation was laid early by a major professor and reinforced later by a department chairperson. Cameron, an associate dean in a professional school, noted a similar version of assessment influences, highlighting personal interest, mentor support, and eventual direct action:

I never expected to be here at [university]. I always expected to go to [a nearby teaching-focused] institution. [Pre-tenure], I would dabble in things related to teaching. I had a mentor who was very interested in it. When I got tenure, I reconfigured myself to start understanding students and the sciences; looking at learning. The biggest thing that happened to me in terms of learning outcomes is, I open my big mouth to the chair and I said 'we need to revise our curriculum.'

Emerson, a new faculty member in a professional school, identified gaps in her own assessment preparation:

My assessment influence is a slow evolution, getting best practices, learning from other people and everything. So it's been pretty informal, which I think is a little bit of a disservice. I will say even in my recent doctoral training, I don't think assessment was ever overtly [taught]: like okay you're going to learn how to assess your course. Or you're going to learn how to assess plans or whatever in [research] studies. I think the most helpful class related to assessment was probably [a] teaching, learning, and curriculum class.

Hunter, an associate professor and director of a program, noted strong formative influences on her assessment practice:

My master's thesis advisor was an expert in assessment. He developed a holistic scale for [discipline name] assessment and he was really keen on finding ways to help universities come to some understanding of good [outcomes]. Fast forward to the PhD, [name of expert] does assessment in several different [disciplines] and is on my dissertation committee. And then coming to [university], working really closely with [name of another expert], who is a master mentor teacher and who has been working with the directors on assessment issues.
So I've gotten to work side by side with her and learned a tremendous amount. So I've had that kind of continual training. Then there's methodology. I've had like five methodology classes and statistics was my foreign language, quantitative and qualitative research design, figuring out how to really demonstrate a very nice quantitative outcome along with the qualitative. Though I typically do qualitative research, it's best I have some quantitative data.

Avery, an instructor of cross-disciplinary studies, made these comments when asked about assessment influences:

My mother was an elementary school teacher; my dad was a railroad telegrapher. I started working, teaching in summer camps. And so I've taught scout craft, swimming and the whole bit. First thing I learned in teaching swimming is you have to first get the kid to trust you, and then once you established trust, the rest of it follows. The other end of it was a faculty member, I was his TA for a number of years, who was and still is extremely brilliant; he would drool brilliance all over the students, except that I was his TA and it fell to me to interpret him to the students. And in some ways that was the best damn learning experience I had because I had to take that brilliance and ... tone it down for mere humans, and that was my job. And as a result I learned a lot about having to bridge the really abstract, theoretical, almost ethereal in a way that was understandable for students.

For some participants it was not clear what specific influences from their training were relevant to their current assessment practices and thinking. For other participants, multiple influences of disciplinary socialization and training were apparent. Some participants readily linked their disciplinary training to their current work, and others were able to describe mediating influences in their current work environment.

Environmental and cultural socialization. The Path categories were related to the campus, departmental, and/or professional environment in which the participant perceived influences on their work. Path categories include the presence of professional development opportunities and career trajectory changes (i.e., toward administration) because such elements can be considered part of the institutional culture. The influences in these categories cluster around teams, group work, and decision making about curriculum, leadership, and/or use of data. For example, Alex, a professor and former assistant dean in a professional school, noted a concern about the institutional-level environmental influence on assessment:

I'm wondering if the [university learning goals] are too narrowly focused on the traditional word of liberal? I'm very, very concerned on the lack of financial literacy, not only in the college, but the university as a whole and society as a whole. What's important is that incredible lack of literacy on personal finance. I mean, kids have trouble with bank loans and a checkbook, they don't know the difference between debit cards and credit cards.

Alex also noted that he was at the end of a long career as a faculty member and identified several curricular changes over time to support his assertions. Similarly, Cameron, an associate dean for curriculum, noted the following efforts related to changing the assessment environment in a different professional college:

Well, we get data on the backend that tells us how our students have done and so we take that information and we adjust, very similar to research.
You get the data back, you figure out what the results mean, and you go back into something different. I think if we can help a faculty understand that's sort of the same process they were adopting here, it becomes less scary. And I also think that they're more willing to adopt [assessment activity] once they understand it better. But in our college a lot of this is just foreign. People just haven't been exposed to it before.

Here Cameron refers to a distinction made by faculty in the college between research and assessment but offers a way to help a group understand the usefulness of assessment in a relatable way.

Emerson, a new faculty member and former student affairs staff member, noted the role of leadership in encouraging assessment practice:

Assessment was a pretty strong theme coming from the director and I think leadership has a huge role in it. A lot of the current staff members, student affairs professionals, maybe aren't well versed in assessment or maybe they don't understand that's also their job. If the director of a department stands up and says we all do assessment work and this is what it means, I think that makes a big difference because people follow the leadership.

Leadership and assessment were linked together by Emerson several times in the interview. Other participants connected assessment with other important topics such as inclusion and student success. Jordan, an associate dean for student affairs, noted the influence of institutional values, in this case the value of diversity, threaded through teaching and assessment activity in the college:

I have to help faculty understand that you can't just say I value diversity; you have to demonstrate and show and embed and get faculty to talk about it. Instead of saying I have a component of diversity in my class, why don't I embed diversity within the content structure of my class and then be intentional about reflecting upon that.

Taylor, an assistant dean for student affairs functions in a co-curricular unit, noted the developing role of learning outcomes in supporting the overall goals of the institution:

I see how our leadership development could supplement the classroom in building some of these [institutional] goals. We tried to have learning outcomes in all of our student affairs programs or most of them. So we do leadership training for all of our student organizations where we will set up outcomes.

Addison, a librarian, noted the influence of a national professional association on practice, specifically teaching and assessing the concept of disciplinary authority. This professional association is not an accrediting body, so it does not have the same influence on actions as some of the disciplinary accreditors. In this case the professional body is a cultural influence (Path) rather than an accountability influence (Pull).

The [association] used to have standards for information literacy, and now we have a new framework with different concepts like scholarship as a conversation, [and] authorities in context. [Frameworks] don't map specifically to different [research site] institutional outcomes; they are very much integrated. These concepts are trans-disciplinary, so you know authorities are going to be different in each discipline, and there are things that [students] need to know for [their] discipline.

Addison's comments reflect a peer-to-peer culture of assessment that was strong in the professional association. Her comments focused less on leadership than those of some of the other participants.
Participants across curricular and co-curricular environments noted how environments and cultural socialization had a distinct role in shaping and guiding assessment actions. Participants were able to identify both positive and negative influences of their environment, broadly defined, on their assessment practice. Most participants identified unique, salient environmental influences in their current work environment and culture. The Push (disciplinary socialization and training) considered with the Path (environmental and cultural socialization) gave a fuller picture of influences on assessment thinking and behaviors.

Incentives and accountability. The Pull categories stem from departmental or university accreditation, financial and supervisory accountability, and specific disciplinary outcomes for graduates' skills and abilities. These artifacts included student course evaluations of teaching because these were mentioned as influential in participant performance evaluation. Not all participants were teaching at the time of interviews, so perspectives on student course evaluations were linked only to the individual comments. Alex, a professor and former assistant dean, noted an historical perspective on the use of student course evaluations, which transitioned from a form of feedback for faculty to a component of faculty evaluations for promotion:

The whole concept of student evaluations of faculty is fascinating and it's changed. When it first started, four or five decades ago, only the faculty saw the assessments; that's how they got it sold. The assessments were there to help the faculty. So they got the student evaluations and had better feedback from that perspective. Only faculty could look at it, then department chair[person]s began to look at it, and then it became a measure of how good a teacher are you. And then it became input into your salary increases and promotion decisions. At the beginning, they were just supposed to help the faculty.

Course evaluations were identified by Alex as actually helpful as a form of feedback over the years for understanding one's own teaching and, in his dean role, for evaluating teaching, but they did little to tell faculty about student learning outcomes.

Blake, an associate dean of a professional school, discussed the ways that an accrediting group structures assessment and accountability to incorporate a number of metrics, including learning outcomes, in the particular college:

To be from an [accreditor name] accredited program is not a scale, it's just black or white. You either are accredited or you're not. There are eight criteria and they have to do with students, they have to do with program educational objectives, which is what our graduates are capable of accomplishing when they're in their fields. Then there is the continuous improvement criterion and there are various criteria about our own faculty and institutional support and resources and things like that.

Cameron, an associate dean in a professional school, described how her college, with some accredited and some non-accredited programs, hoped to create incentives and accountability for assessment:

So this fall we're going to start workshops that are going to help the [non-accredited] departments understand how you actually build these assessments against your programmatic pieces. What I am hoping will happen is once the unit sees how it works programmatically, [faculty] will start to eventually work through their courses as well.
So, each course is going to have learning outcomes. They are going to be able to demonstrate how this course ties into the programmatic learning outcomes.

By promoting an accredited program's previous assessment process and training faculty, this participant and the college plan to eventually require non-accredited-program faculty to conduct similar assessments. Emerson, a new faculty member and former student affairs staff member, noted her experience in student affairs and the way a department leader influenced accountability for assessment:

I think leadership had a pretty big role in it. The new executive was very clear about saying we need to make sure we're doing what we're saying we're doing because the prior year three students died related to alcohol or drugs, yeah. She [said] 'if somebody comes to me and said what training are you doing for our staff? I will be able to show them our training model and say these are all learning outcomes, this is how we assessed it and this is what they got out of it.' And so that was a pretty strong theme coming from the director. And some of that was reactive but definitely I think leadership has a huge role in [assessment].

Ryan, a faculty member in a professional school, identified how external incentives rather than accreditation affected curriculum and assessment behaviors. This participant noted the close relationship of her professional school with the employers that eventually hire graduates from her program as a reason to engage with this external stakeholder:

And we are constantly seeking and getting explicit feedback as to how our students are performing and we always get employers' assessments of whether our students have the competencies that are needed on the job. And [we] respond significantly to any negative feedback. About five years ago the feedback was that our students had poor communications skills. We said, okay, help us out. And one of the firms gave us a quarter of a million dollars to have a communication center in-house. We employ a full-time director with a PhD in communications and two graduate students from either communications or English.

Ryan's perspective on incentives was one she characterized as mutual investment in the success of the program by faculty and external stakeholders.

Alternatively, Casey, a faculty member in a professional school, expressed frustration with and anxiety about the ways in which rankings and accreditation shape certain behaviors in her professional school:

And so I'm less concerned with external evaluation tools in terms of what I think is important for students to learn and much more convinced that there is a [content area expertise] piece that I want students to get when it comes to literatures. There is so much movement right now to be able to document what we do and to prove that it has particular relevance or outcomes to people who I don't think actually are very invested in learning. And I don't mean institutionally, I mean kind of policy makers and politicians who want a particular model of [content delivery].

Casey went on to explain that accreditors ask for documentation of learning in quantitative formats and that her expertise is not considered quantifiable in the same ways as other areas in the school:

It feels like music and art are not documentable in the same ways; they don't have the same correlative value. It feels like I teach something that doesn't fit very well into those models.
'We have to raise our Math scores' is not the same as 'we have to have really creative or emotionally intelligent people.'

While the pull of accountability and incentives serves to move behaviors, an individual's understanding of their context can serve as a kind of incentive, as heard in these two responses to how participants use learning outcomes in their work. Cameron, an associate dean in a professional school, responded this way:

I wanted us to be more nimble and more responsive and the only way you can do that in my mind is if you're doing [learning] outcomes, you have your assessments and that's how you get rid of the extra stuff, that's how you keep pace with where the current trends are.

Alex, a professor and former assistant dean in a professional school, responded to the same question with the statement: "As department chairman it was my responsibility to evaluate faculty." This response reflects an instrumental view of how learning outcomes are used in the school for accountability.

Across the categories of Push (disciplinary training and socialization), Path (environmental and cultural socialization), and Pull (incentives and accountability), participants described a diverse set of assessment practices, most likely because they were intentionally recruited at random for this study. They reflected that assessment practices and understanding were closely related to their understanding of their own work. For example, several participants found the term 'learning outcomes assessment' unfamiliar while other kinds and modes of assessment, such as objectives and grading, were more familiar. Perhaps as a reflection of multiple influences, participants' mental models of learning outcomes assessment did not always link to or align with their stated assessment actions. Reasons given in interviews included not being asked or not having opportunities to align work and not having sufficient incentive or reward to do so. For those who did align their work, the how and why of assessment were integrated.

Action Group Findings

Building on interview responses that led to Push, Path, or Pull influences, some patterns of assessment action and thinking emerged among individuals. Once created, the Push, Path, and Pull categories helped me identify an ideal assessment arc that seems to be desired by institutional leaders and could work in most learning-oriented units represented in this study. Summarizing influences allowed me to parse out details of how individuals act to link their assessment work to the goals in their local, program, and institutional environments. While individuals were interviewed separately and did not work together on a regular basis, assessment actions clustered into three groups identified by the dominant patterns of assessment behaviors that individuals displayed. Within their own contexts participants described specific behaviors to intellectually link their assessment activity to other assessment and outcomes activity proximal to their work. The links are described in three categories: Isolated, Limited, and Connected (Table 5).
Table 5
Participant position, school/unit, and action group (each entry lists pseudonym: action group; position; school/unit)

Emerson: Isolated; new faculty member, former student affairs professional; professional school.
Taylor: Isolated; assistant dean for student affairs functions; co-curricular unit.
Avery: Isolated; instructor of interdisciplinary studies; liberal arts and sciences college.
Alex: Limited; professor and former assistant dean in the college; professional school.
Blake: Limited; associate professor and associate dean, leads accreditation; professional school.
Ryan: Limited; associate professor; professional school.
Casey: Limited; associate professor and director of a program; professional school.
Cameron: Connected; associate dean for curriculum; professional school.
Jordan: Connected; associate dean for student affairs functions; science college.
Hunter: Connected; associate professor and director of a program; liberal arts and sciences college.
Addison: Connected; librarian for interdisciplinary studies; library.
Jules: Connected; professor and director of a program; liberal arts and sciences college.

The Isolated group (Emerson, Taylor, Avery) consisted of individuals who were in some way isolated from consistently connecting their regular assessment practice to other institutional assessment priorities in a systematic or intentional way. Three participants populated this category: an instructor of interdisciplinary studies, an assistant dean in a co-curricular unit, and a new faculty member in a professional school. Each Isolated participant described a thorough understanding of their current assessment context.

The Limited group participants (Alex, Blake, Ryan, Casey) were all faculty, including one associate dean, in discipline- or professionally accredited programs who each understood the value of assessment linked to proximal assessment outcomes and behaviors; however, they did not attempt to connect their work on individual learner and program-level assessment to broader goals unless specifically asked to do so, which was reported by only one of the four individuals. The Limited group individuals were actively involved in professional accreditation and advisory groups reported to have shaped their assessment behaviors.

The Connected group participants (Cameron, Jordan, Hunter, Addison, Jules) consisted of deans, directors, and one librarian. All Connected participants made efforts to regularly shape and reframe assessment information related to their current position (e.g., outcomes data, inputs data) for multiple audiences simultaneously. While not behaving identically across contexts, Connected individuals considered and reconsidered the impact of their own training, environment, and accountability and incentives on assessment efforts.

Category matrix. Assessment actions were taken in the context of participants' environment, including their job. The arrangement of these realities of assessment understanding and practice is illustrated by a matrix of thinking and action (Table 6), a summary from two interviews with each of 12 participants.
Table 6
Action group x motivation type matrix

Push: training and disciplinary socialization (for assessment)
Isolated group (Emerson, Taylor, Avery): Emerson, low training; Taylor, low training; Avery, moderate training with strong mentoring.
Limited group (Alex, Blake, Ryan, Casey): Alex, disciplinary training; Blake, disciplinary training and mentoring; Ryan, mentoring; Casey, low training.
Connected group (Cameron, Jordan, Hunter, Addison, Jules): Cameron, low training; Jordan, low training; Hunter, disciplinary training with strong mentoring; Addison, disciplinary training; Jules, low training.

Path: environmental and cultural socialization (for assessment)
Isolated group: Emerson emphasizes research and teaching; Taylor notes communication barriers; Avery emphasizes teaching and satisfaction.
Limited group: all participants noted an accreditation- and advisory board-driven culture of assessment, including graduate job placement; Casey also noted college membership and a community of practice as a positive reason to assess and report, and an interdisciplinary approach to daily work.
Connected group: all reported connections and interactions in work cycles; Cameron created evidence-based budgeting; Jordan used a team approach to data-based hiring; Hunter noted connection to a community of practice; Addison noted disciplinary values for assessment; Jules noted a departmental culture of data use.

Pull: incentives and accountability (for assessment)
Isolated group: Emerson, no direct incentives or accountability; Taylor, no direct incentives or accountability; Avery, satisfaction as incentive.
Limited group: all participants cited professional accreditation as the primary reason for assessment.
Connected group: Cameron links budget to data; Jordan links hiring to data; Hunter links research and publications to data; Addison links activity to institutional standards and data; Jules links program data to institutional values and program relevance.

Across the categories of Push (disciplinary training and socialization), Path (environmental and cultural socialization), and Pull (incentives and accountability), individuals reported how influences worked together to shape their overall practices. Overall, the Push categories appear to influence participants in somewhat similar ways. Disciplinary training was generally a low influence on assessment action across all three action groups. Various levels of disciplinary mentoring were observed in all three groups. For example, Avery (Isolated) and Hunter (Connected) both reported strong mentoring for assessment during their respective graduate training. Disciplinary training and/or mentors did not appear to influence action group type independently of other influences but was a stronger influence for some individuals.

Every participant identified ways in which their assessment practices were influenced by their unit, department, division, or college, indicators of the Path category. Environment and culture in the Path category represent the influences and factors in this study over which institutions have the most control. The environmental and cultural socialization influences were lowest in the Isolated group, moderate in the Limited group, and highest in the Connected group. Notably, the Connected group individuals identified the influence of their personally held value for assessment action on unit-level goals.

The Pull categories appear to influence the Isolated action group individuals very little while influencing the Limited and Connected groups more, but in different ways.
For example, the Limited group described accreditation as the source of accountability while recognizing their own role in contributing to the accreditation cycle of data collection and feedback. Accreditation bodies were generally positively regarded by participants from professional schools for influencing the overall success of their colleges. The Connected group identified actions and behaviors that linked a desired outcome to assessment and/or data. For example, a librarian reported mapping non-accredited disciplinary standards to local institutional goals as a way to better leverage existing resources. For the Isolated and Limited groups, most activity driven by incentives or accountability related to a participant's specific duty requiring an assessment practice, such as administering student course evaluations or completing accreditation reports. For the Connected group, individual and unit leadership played defining roles in assessment action.

Participant and Action Group Descriptions

Each individual is described in detail and linked with their action group (Isolated, Limited, Connected) and then described in terms of Push (disciplinary/training socialization), Path (environmental socialization), and Pull (incentives/accountability).

Isolated action group. The Isolated group (participants Emerson, Taylor, Avery) consisted of Emerson, a student affairs administrator who was now in a faculty role in a professional school after completing a degree program; Taylor, a co-curricular administrator in an admissions and student affairs role; and Avery, a self-described late-career (40+ years) full-time lecturer of interdisciplinary studies.

Push. Emerson and Taylor reported similar, limited levels of training or disciplinary socialization (Push) with assessment practices. Both participants noted that their doctoral training included cursory assessment coursework and that most of what they know about assessment comes from practical or applied experiences, either on the job during graduate school or on the job in a post-graduate position. Emerson, a new faculty member and former student affairs staff member, mentioned gaps in academic training for assessment as part of her disciplinary socialization. Taylor, an assistant dean for student affairs, responded this way when asked about training or preparation for assessment:

I don't feel like an expert ... I don't feel that I've had a tremendous amount of course work or instruction or, you know, so if I'm the person that is probably most well situated to do this in my college I don't, I don't know that I would feel prepared. So where have I learned about assessment? Through random experiences.

Emerson and Avery, a lecturer of interdisciplinary studies, both reported fairly robust applied assessment experiences during graduate training, despite low classroom training, and seemed able to engage in sense-making of learning outcomes data for multiple audiences. Another influence included in disciplinary socialization, or Push, was mentoring, which was also inconsistent for members of this group. For example, Emerson and Taylor did not mention strong mentors for assessment practices. Conversely, Avery, a full-time instructor, described a strong mentor relationship to which he attributed teaching and assessment skill development while he was a graduate student, linking strong mentoring to previous applied experiences with assessment.
But as far as learning how to teach, the only one that was any benefit was [Name]; he used to teach a rather popular course on the anthropology of popular culture. So the kids liked it for the content, but I think [Name] is the only teacher I ever had who actually [responded] to my questions and went through why is [a response] good or bad.

Path. While each individual in this group gathered data to inform their individual practice, it does not appear that the information gathered was used or coordinated to inform broader goals. Avery, the lecturer of interdisciplinary studies, reported a career-long, concerted level of thought, action, and analysis of learning outcomes assessment practice, mostly at one institution. As a lecturer this participant reported finding creative and unique ways to communicate topical learning outcomes to different audiences (not his department leaders) for different reasons. Avery understood clearly that using the "language of learning outcomes" spoken by institutional leaders resulted in positive outcomes: the approval of a new study abroad course that he taught, additional teaching technology resources, and at times release from teaching load to pursue advances in teaching and learning. In these ways, and without other means to tie outcomes to institutional values, Avery connected learning outcomes assessment to institutional environments at several levels in order to achieve objectives of personal importance.

Taylor, an assistant dean for student affairs, indicated that her current environment, based on the dean's requests for information, was focused on input metrics such as reports on staff time and student enrollment rather than outcome metrics such as learning or achievement. Taylor reported that student grades and graduation rates served as a proxy for program success but was unclear about how these metrics helped the department know more about the efficacy of leadership and global education efforts. Like Taylor, Emerson and Avery reported not being asked by their supervisors to contribute outcomes data to unit efforts at aggregating or monitoring data for program or college-level goals. As Avery noted of his director, "They actually don't do anything with this stuff."

Pull. The common thread that links members of the Isolated group was their reported lack of departmental or program-level accountability or incentive (Pull) to actively link daily assessment practices to broader institutional assessment constructs. For example, Avery indicated that his participation on the institution-level rubric development project was not important to his director: "I was never asked to report on the committee that whole time." Even so, due to both training and environmental influences, Avery was the most prepared of those in this group to connect or link course-level learning outcomes assessment data to program, institutional, or disciplinary goals, but he was never asked to do so. Emerson, a new faculty member and former student affairs administrator, articulated how and when her assessment data aligned to program- and division-level outcomes data but indicated not being able to follow through on reporting due to time and logistical constraints:

I will also admit ... we actually had to do a lot of the formal assessment, that's when we started losing a lot of steam. Yes, we want to make sure they're learning this. But a lot of other things started getting in the way, especially when [residence halls] opening happens.
You know, it's like the big day where the people are moving in and all that stuff and there's just not a lot of time.

These comments indicate how unit-level support for time spent on assessment activity can work as an incentive for individuals to follow through on data collection and analysis specifically, and on feedback loops to inform the next cycle of implementation.

Taylor, an assistant dean for student affairs, indicated that she was unfamiliar with current learning outcomes assessment methods and was not asked by her college to participate in learning outcomes assessment activities. Even though she was expected to conduct assessments aligned to graduation outcomes as part of her role in the college, she expressed dissatisfaction with limited communication about assessment planning in the college:

Yes [it is made clear that I am supposed to conduct assessments] in a planning letter, sort of preparation for evaluation of how the college is doing. We do have someone in our college that focuses on assessment. And when you talk about -- you know, 'does he interact, does he communicate with other parts of the staff?' informally, I would say yes...but mostly with the dean. But again, we don't meet as a whole staff.

Taylor illustrated how she wished to contribute to the assessment conversation; in this case she thought learning outcome assessments could add value to the overall data picture. However, the organizational dynamics of tasks divided among professionals, spotty communications, and this participant's limited methods training prevented more integrated learning outcomes assessments in the college.

Limited action group. This group (participants Alex, Blake, Ryan, Casey) consisted of four mid- to late-career faculty members from three different professional schools. The presence of a professional accreditation body in their respective colleges influenced assessment practices for each participant in this group. Each reported that the professional accreditation bodies had ties to their graduate training and disciplinary socialization, strong links to their current environmental socialization, and accountability for assessment spanning learning outcomes, program evaluation, impacts, reputation, and rankings.

Push. Limited group members reported varying influences of their graduate training and specifically the influence of mentors on aspects of their work that involved engaging directly with assessment. Several participants mentioned the influence of disciplinary mentors in their early faculty careers who invited or encouraged them to become directly involved in the accrediting process in some form, providing a better understanding of the overall disciplinary environment.

Path. This group reported strong environmental and cultural influences on assessment data collection and behavior in terms of meeting accreditation standards, communication about assessment data, and explicit connections from assessment purpose to decisions to actions. The general sense was that the accreditation environment was the key driver for the current state of assessment culture in each professional school.
For example, Casey, a faculty member, described individual efforts to extend positive aspects of the school-level accreditation culture to the institution's assessment environment and saw participation in the university rubric project as a way to extend assessment culture, but she also identified some barriers:

I have [school accreditor] and my [discipline name] community and peers and the faculty and then the institution goals. I think that I am very in line with [all of] those goals. It was pretty easy for me to see the alignment of the kind of work that I do in the [university goals]. It was a really interesting exercise to be part of because how that was understood by different disciplines was wildly different.

Casey identified ways she worked to align her liberal arts disciplinary scholarship to the professional school goals and associated accreditor expectations. In linking these scholarly and professional expectations, this participant identified the possibility for similar linkages across the university but was disappointed by faculty on the university-level rubric committee when they did not see the same potential in the university goals.

Pull. For the participants in the Limited group, all faculty members, a common theme was that they were not asked to connect their course or disciplinary accreditation assessment practices to institutional-level or regional accreditation-level assessments. Two participants noted that they were neither asked nor encouraged to report to their departments on committee work with rubrics. Ryan stated clearly:

No. So what the university committee is doing, that hasn't affected my department, no. [What matters is] solely the accreditation piece. Now, there's a huge overlay, but when I was on the committee I wasn't even asked to report at a faculty meeting.

All four participants in this group knew how and when their assessment work was reported up to the institution and out to accreditors, and generally how different audiences used their reports to represent compliance, progress, or to leverage more resources. One faculty member who was also an associate dean was more directly involved in these translating exercises. The participant indicated that other faculty members were not asked to provide interpretations of the assessment data they collected, but that the activities were based on accreditor-provided data forms. The associate dean reported that a college-level committee described and evaluated accreditation data on behalf of teaching faculty.

Across this group, individuals recognized the assessment culture in which they operated for the iterative influence of accreditation on assessment and of assessment on accreditation, and the presumed departmental success in the eyes of the institution. These participants maintained a positive relationship with their assessment environments but noted how the dynamics of accreditation limited them in different ways from extending these practices. While two individuals reported trying to extend department assessment practices into institutional practice, all four participants noted either perceived barriers or experience with roadblocks to doing so.

Connected action group. This group (participants Cameron, Jordan, Hunter, Addison, Jules) consisted of a faculty member/associate dean in a professional school, an associate dean of student affairs in a science college, faculty directors from two different departments in the liberal arts and sciences, and a librarian focused on specialized library services.
A common theme for this group was the ability of individuals to articulate the ways in which their assessment practices connected with surrounding internal and external environments and stakeholders. For these participants assessment was informed by and influenced individual student learning, program, division, college, and institution outcomes. Further, each participant considered and described assessment behaviors relevant to external stakeholders such as (regional) accreditation, the state political environment, and granting agencies (e.g., the National Institutes of Health).

Push. Members of this group reported various disciplinary training backgrounds, from natural sciences to humanities and liberal arts. A key aspect in common for all but one person was that much of their disciplinary coursework was not focused on assessment. Three participants reported specific mentoring in applied assessment practices and one mentioned an early-career professional development influence on assessment. The remaining person highlighted several assessment methods courses and a strong mentor focused on assessing learning outcomes in the liberal arts and sciences.

Path. While all participants in the Connected group had direct experience using learning outcomes assessments, this group largely garnered assessment experiences and expertise from their current environment rather than from coursework in their disciplines. In essence, they learned on the job. Through different strategies, each person in this group achieved a high level of assessment activity relative to the other action groups, activity that was informed by or that informed goals or outcomes in their respective environments. Cameron, an associate dean, noted that her professional development efforts to use course assessment data for program-level decisions were aligned to her research and teaching interests in curricular efficacy. Her near-term goal was to encourage faculty to revise curriculum using assessment data across departments within the professional school. She planned to make these changes stick by attaching assessment requirements to budget requests over which she had influence. She described her shift in actions here:

Our enrollment numbers were just really, really low and I couldn't understand why, when issues in the environment were kind of cutting edge. So, department chair said, it's yours, you're the chair, figure it out. So I actually contacted [expert faculty member] and I said here's what we want to do. How do I do it? So [two expert faculty members] helped design it [curriculum revisions] with me. And so, my experience and how I learn came from those interactions [with expert faculty], quite honestly.

Addison, a librarian, shared a document showing alignment to external standards as evidence of her ability to connect assessment to multiple layers in her work environment:

The [library association] used to have standards for information literacy, and now we have a new framework, and the framework is conceptual and we're talking about different threshold concepts like 'scholarship as a conversation' and 'authorities construct'; there's six of them that you can look at. The library world is kind of thinking about how to move toward these. So I've actually thought about these in the context of the [institution-level goals]. [Library association] goals don't map specifically to [institution goals]; it's like you took it and exploded them into the [institution goals], so they are very, very much integrated.
And some of them show up more in some of the [institution goals] than others.

Three participants actively linked incentives to assessment practices to exert influence on their environment. For example, Jordan, an associate dean of student affairs at a science college, noted promoting college-level efforts to hire faculty with strong teaching backgrounds, in part by rewarding a focus on assessment of student learning outcomes in one's scholarship of teaching and learning and, in turn, adjusting tenure criteria to reflect these values. Here he describes how the college-level priority translates into individual action:

The idea is everybody has to be a teacher scholar, right. We do say that instruction is important and we're going to put a high value on student-faculty interaction formally in the classroom but especially around the aspects of student learning. If your scholarship is high energy physics and you can smash atoms with the best of them, fantastic, that fits, but you also need to be an excellent teacher.

Across the Connected group, participants acted to advance assessment connections. All five participants in this group identified trust and relationships as important to their assessment success. Three individuals cited continuous improvement efforts in conjunction with trust as a reason they felt comfortable extending local assessment to program- and college-level efforts.

Pull. Individuals in the Connected group created or actively supported direct links from assessment to some form of incentive or accountability. Each participant did so individually, in relation to their perceived expectations of leadership in the respective department and context. Each individual used or supported the use of incentives for assessment within their environment to encourage peers to join unit-wide assessment efforts. Examples of incentives described by participants included requiring and awarding funding for data-supported requests for more resources and creating policies that incentivize increased understanding of learner achievements.

Individual Assessment Motivators

This section explores how influences on assessment (Push, Pull, and Path) were interwoven with various job roles, duties, and assessment practice patterns (Isolated, Limited, Connected). Comments reported in this section illustrate how individual participants conceptualized values and reasoning for assessment (why) rather than behaviors in the environment (how). Some similarities in expressed motivation arose independent of action groups, articulated specifically in the second-interview concept maps. Individual concept maps yielded data that showed similarities between participants that were not linked to influences or action groups.

Greater good motivations. Several participants noted a desire to influence the greater good, defined by participants as educating critical thinkers, educating engaged citizens, and educating for diversity and inclusivity, in daily decisions about assessment. This category included Emerson (Isolated), Casey (Limited), Hunter (Connected), Avery (Isolated), and Jules (Connected). These individuals identified that some motivation to achieve a greater good was the reason they cared about or implemented more rigorous teaching and assessment practice. These motivations were positioned on the concept map outside the institution, reflecting that the university has a responsibility to the greater community (i.e., state, nation).
At the same time, some of the specific kinds of greater good motivators, for example critical thinking and diversity, are values currently articulated within the institution.

Within this category, variations existed in how each participant operationalized their espoused value of the greater good within the institution. Several participants did not identify how they linked their work to their espoused greater good motivator. For example, Emerson (Isolated) highlighted her motivation to work toward greater diversity and inclusion through her work yet did not identify a specific action that operationalized this. And two individuals, Hunter (Connected) and Jules (Connected), said that they teach fundamental critical thinking and feedback skills in the context of their respective disciplines but spoke only about instruction and department-level assessment. These two participants did not identify how they would know or assess whether they had any impact on students' post-graduation critical thinking and feedback skills. Alternatively, Casey, a faculty member in a professional school (Limited), identified her greater good motivation and a way to operationalize her value through her teaching:

I would say [literature in the professional school] is actually meant to give you a different view of the world, to let you embody a different perspective, to stimulate ideas that you might not ever have had more than it is to teach a science lesson or behavioral social studies lesson.

The greater good motivations seemed especially salient for these five participants, who identified them as the reason to get up and go to work each day, linking a personal reason to professional activity.

Institutional and program influences on assessment. Ten of 12 total participants identified various institutional or program-level influences as motivators for assessment practice. Participants perceived institutional and/or program value for assessment at different levels of the institution and acted on those perceptions. All participants of the Isolated group (Emerson, Taylor, Avery) and four participants of the Connected group (Cameron, Jordan, Addison, Jules) identified institutional influence on their assessment practice. The quantity and quality of institutional influence varied for participants, ranging from student course evaluation forms to departmental program review to the state legislative/political process. Participants from the Isolated group used the concept map to illustrate assessment pathways at the instruction or implementation level and did not actively link practices to activity above their position.

The participants in the Connected group (Cameron, Jordan, Hunter, Addison, Jules) that identified institutional or program drivers for their assessment work identified close links from assessment data to action, feedback, and budgetary or program decisions. These participants actively linked their own work to the work of multiple others in their respective environments, hence why they comprised the Connected group.

The four participants of the Limited group (Alex, Blake, Ryan, Casey) that identified institutional drivers were two professors, an assistant dean, and an assistant professor, and all noted communication barriers in their organizations that prevented sharing assessment data.
One professional school faculty member was told that other priorities, such as popular media rankings, were important, while another professional school faculty member said that assessment priorities were not shared down from the leadership; a third professional school faculty member reported never being asked about existing learning outcomes data. In each case the participants felt these factors limited their motivation or ability to share assessment data and ideas.

Disciplinary accreditation and advisory boards. Disciplinary accreditation and advisory boards heavily influenced both assessment actions and the boundaries of connections for the participants in the Limited group and for one participant in the Connected group who was a dean responsible for some discipline-accredited programs. The disciplinary accreditation groups represented explicit forms of accountability, and advisory boards provided strong incentives to assess and improve programs in the form of feedback, project funding, and career placement for graduates. As an example, Ryan, a professional school faculty member, described strong advisory relationships with employers as important, stating:

We have really strong ties with the people who employ our students. And we are constantly seeking and getting from them explicit feedback as to how our students are performing and we get employers' assessments of whether our students have the competencies that are needed on the job.

Blake, a professional school associate dean, described the ways the school's accreditation process influences students, program decisions, faculty behaviors, and, in turn, employer relationships in this quote:

There are eight criteria and they have to do with students, they have to do with program educational objectives, which is what our graduates are capable of accomplishing when they're in their fields, there is the stuff that happens on campus in the outcomes. Then there is the continuous improvement criterion and then there are various criteria about our own faculty and institutional support and resources... So faculty know what we want our graduates to be able to do.

These participants noted the ways that disciplinary accreditation and advisory board incentives motivated assessment actions. While some accreditation expectations and matching assessments were described as more or less productive, these participants noted the mostly positive outcomes of engaging in the accreditation process. Two individuals explained how disciplinary accreditation stood in for institutional-level accountability and how deans in their units at times used accreditation feedback to leverage institutional support for the school.

Shared assessment workload. Participants from two of the three groups (Cameron, Jordan, and Hunter of the Connected group, and Emerson of the Isolated group) noted the need to share the assessment workload across their relevant groups. Emerson, a new faculty member and former student affairs staff member, felt an expert should lead assessment efforts in student affairs, where she recently worked, and compared the nature of department-level assessment efforts with a previous generation of diversity leadership in student affairs:

I think it's the same way if the Director of a department... stands up and says... 'we all do diversity work.' You know if he or she stands up and says we all do assessment work and this is what it means. I think that makes a big difference because... they always kind of follow what the leadership says.
Cameron (Connected), an associate dean for curriculum, and Hunter (Connected), a faculty member and program director, both acted in ways that created a shared assessment workload. As a faculty member at the dean level, Cameron had the interest, relationships, and incentives to capitalize on her school's faculty ability to assess learning outcomes. To assess outcomes she created an environment to support action, including strong links to budget incentives such as hiring decisions. Hunter was director of a college-level program who worked actively with her dean to build trust and strong communication lines to help share the value of learning outcomes in her program but still struggled with certain aspects of assessment:

Right, so if I had a clear set of expectations for reporting... To whom, what audiences--I have every kind of data and I'm absolutely delighted to be able to show off what we've done. Because it's pretty important stuff. Point me in a direction and I'll do it. But right now, I have several different audiences. I have several different demands and purposes for that data. Sometimes I don't even know what the purpose is. So I don't know how to frame it or how to boil it like down or how to make it accessible.

Several individuals indicated that the shared workload of assessment took place in the specific department context. For example, Ryan (Limited), a professional school professor, claimed to value assessment for critical thinking broadly (a greater good goal) but was not confident that definitions of critical thinking at the institutional level represented departmental priorities for outputs in this area. He went on to talk about several assessment processes used in courses:

One example I can give you where the assessment was really helpful. In accounting one of the disciplinary specific knowledge areas is understanding cash flows. We measured it via performance on an exam question. We found students were doing very poorly and we actually changed the course as a result of that. But in general with more general skills like critical thinking we're not confident enough that what we're assessing is really capturing and we sort of don't know how to do it.

Then Ryan talked extensively about using formative assessment to determine the competence level of students learning a specific financial analysis process and understanding business judgment as a major program outcome:

I teach financial statement analysis and analyze financial statements; you're putting the numbers into context. So you need to understand the corporate strategy of the company whose statements you're looking at and you need to understand in principle how various strategies translate into various accounting numbers and what ratios you would look at.

[Internship providers] were trying to give our students hands-on experience and doing the statistical analysis that would lead [to a] conclusion... It's a student skill set, in the sense it would be a stat[istics homework] score that would give you the tools but what really matters with this is business judgment.

In several examples this faculty member actively assessed student work and acted to improve the teaching of multi-step analysis, data synthesis, contextual consideration, and evaluation, a form of critical thinking. At the end of the interview, however, Ryan noted a lack of program-level drive to assess critical thinking aligned to the institutional rubric, stating flatly, "When the dean's interested we'll get interested in it."
Chapter Summary

Some participants' understanding of outcomes and goals, including learning outcomes, was linked closely with the ways in which they took action on assessment. For other participants, understanding and practice were loosely coupled. In the first finding, about assessment influences, individuals described various reasons and motivations for conducting assessment that I organized into influence categories called Push (influences of training and disciplinary socialization), Path (environmental and cultural socialization), and Pull (incentives and accountability). In a second finding, I identified individuals' assessment behaviors in three different groups I called Isolated, Limited, and Connected. In a third finding, of shared meaning across action groups, individuals in different action groups expressed similar reasons for assessment, motivations, or values. These potentially shared meanings clustered differently than the second finding (action groups), indicating that some assessment actions were not congruent with espoused values or literature influences. Across these categories and themes, understanding the action groups is of central importance because, while participants' thinking was very important, actions actually move an institution toward success. Descriptions of participants' behaviors, stated themes, and categories provide insight into new ways of thinking about assessment at various levels within the institution and should inform discussions of new thinking about how assessment is practiced.

Chapter 5. Discussion and Implications

Overview and Introduction

My starting point for this study was that institutional priorities for learning outcomes assessment might influence behaviors along the continuum of assessment practice; this appeared only for some participants, for whom institutional goals represented a strong influence. Most participants identified how their assessment mental models were shaped by diverse forces and developed through myriad influences, socialization processes, and/or accountability measures. I observed some potentially useful overlaps in the ways individuals translated and built upon training, theory, motivation, and accountability to take action. Findings about actions, in addition to the ways participants thought about assessment, were especially salient for addressing the following research questions that guided this qualitative investigation:

1. How are learning goals and outcomes understood and assessed?
2. What influences the enacted mental models of individuals' practice of learning outcomes assessment?

In chapter four, I reported data categorized by influences on assessment (Push, Pull, Path), three assessment action categories (Isolated, Limited, Connected), and individual motivations for assessment. For discussion purposes, I illustrated an arrangement of enacted mental models of assessment for differently approaching assessment practices across an institution (Figure 3). This arrangement, inclusive of extant theory and participant experience, helps focus attention on learning outcomes assessment practices from different angles. Multiple perspectives on the practice and understanding of assessment add insight into potential impact on research, policy, and practice. In the following sections, I discuss these findings in the context of different assessment perspectives and related theories.
In this multi-framing approach (Bergquist, 1992; Bolman & Deal, 1997; Senge, 1994; Tierney, 1997), I invite a reflection on literature, organizations, and individual actions to explore the ways in which existing perspectives on theory, action, and/or motivation might work individually or in concert with other perspectives to illustrate enacted mental models of assessment. I begin with a discussion of those assessment influences that emerged in this study that mirror the extant literature and the ways that existing approaches (Push, Pull, Path) help guide, and to some extent explain, behaviors. I follow with a discussion of action groups (Connected, Limited, Isolated). Finally, I address various motivators held in common by individuals. Each perspective may help contextualize individual behavior in different environments on campus.

Figure 3. Multiple Perspectives of Enacted Mental Models

A goal here is to explore intersections among analytic perspectives to inform practice, policy, theory, and future research.

Influences on Assessment

The first research question, "How are learning goals and outcomes understood and assessed?", is addressed primarily through the myriad influences described by participants in this study. No participant made a distinction among the language of 'goals and outcomes' as long as they understood that the conversation related to student learning rather than another kind of goal or outcome. Participants' graduate training and disciplinary socialization (the Push), environmental and cultural socialization (the Path), and incentives and accountability (the Pull) each had important influences on practice, but in different ways for different participants. For example, several participants responded to questions of training and preparation with stories of how the depth of their training led to their current practice, indicating that they draw on multiple resources in practice. Finding that Push, Path, and Pull influences were present in some form for all participants indicated that an individual's knowledge about their own assessment practice is likely grounded in multiple influences. This study spanned the institution in an attempt to understand influences on assessment and characterize assessment practices.

Push influences centered on participants' mentors and other significant people along the path toward assessment in their current professional role. Participants identified the importance of graduate school and early career mentors in the shape of their current career (Blake, Emerson, Hunter, Casey), and of influential educators (Cameron, Emerson, Avery). Path influences clustered around the perceived needs of the department/unit, community, or other stakeholder group in which the individual worked or participated. Participants identified success in the department (Alex, Cameron, Jordan, Ryan, Addison) or ways to share assessment work with others (Hunter, Casey, Jules) as important factors in the environment. Pull influences pointed to external sources of accountability, closely linked to yet independent of the unit or department. Participants indicated the strong influence of accreditors (Alex, Blake, Cameron, Ryan, Casey), advisory boards (Cameron, Casey), and professional associations (Jordan, Addison, Jules) as pulling them to do assessment work in particular ways. A mix of Push, Path, and Pull influences served to influence many participants about assessment, leading to a variety of actions.
For example, participants described the institutional-level learning outcomes as one of several environmental influences. Some participants considered institutional outcomes an additional layer to think about but had not integrated institutional outcomes into actual assessment practice at the time of the study. Alternatively, for several participants, institutional learning outcomes were an opportunity to represent data and/or the value of the student experience to another institutional audience with hopes that additional credibility or funding would follow. For this group of participants, the investment had potential for real returns.

Push influences, encompassing disciplinary training and mentoring, had a moderated effect on assessment. Participants that identified very strong mentors for assessment described various ways their practice was informed and shaped by those experiences. Participants that did not identify strong mentors described how they learned about and practiced assessment, but the influences were not as strong, consistent with literature on the topic (i.e., Schein, 2009). Disciplinary training was not described as a strong influence on institutional assessment.

Path influences were largely described as the nature of the department or unit in which the individual worked. Some salient influences included communication behaviors, goal-setting priorities, and pressure from colleagues to take the assessment process very seriously, all of which can influence team performance (Senge, 1994).

Pull influences focused on accountability and incentives. Accountability was tied to professional accreditation, while incentives were important for their ability to move behavior. In several cases, incentives were tied to existing relationships such as an advisory board. The connection from relationships to incentives was important because participants who experienced incentives favorably described the mutual nature of identifying and providing incentives (Schein, 2009).

The combination of Push, Path, and Pull influences revealed in the first interview led to analytic work in the second interviews, where individuals expressed variations and differences in how they connect practice to other stakeholders in the environment. Both participants and researcher in this analysis employed multiple explanations to expand understanding, but no one theory was expected to explain specific assessment actions.

Action Groups

The second research question, "What influences the enacted mental models of individuals' practice of learning outcomes assessment?", was addressed by better understanding both action and motivation, discussed here and in the next section. The enacted mental model, or assessment mindset, helped individuals anticipate and plan for the impact of their work (Wiggins & McTighe, 2006). Connectedness in assessment often indicated behaviors by which an individual took into account expected or likely outcomes as early in the process as possible. Individuals that did not have access to or interest in demonstrating the value of assessment described behaviors that stopped short of linking data across their environments. Several Connected participants acknowledged that not all opportunities to assess could be fully engaged if an individual became aware of the opportunity after project planning or implementation had begun.

Connected assessment.
Connected assessment practices were fluid, intentional, and thoughtful, and they utilized multiple simultaneous perspectives to communicate and connect with stakeholders in the relevant environment. Assessment connectivity patterns for the Connected group were more fully integrated across the participants' academic organization or a valued external environment, such as a community of practice that spanned both education and non-education settings. Connected group participants thought about assessment in similar ways even if they did not take the same actions or were not motivated by the same influences. Regardless of formal role, participants in this group accommodated the perspectives of others by leveraging organizational understanding and leadership to help explain, connect, and encourage assessment action (Bolman & Deal, 1997; Heifetz, 1994; Schein, 2009). All participants in the study indicated ways they sought and considered the perspectives of people in relevant environments, but actions varied greatly.

The act of connecting assessment from the individual perspective to the group's or groups' frame of reference could be considered an adaptive process, possibly leading to shared understanding among and between groups with differing interests (i.e., instructors and college presidents) (Heifetz, 1994). The use of multiple perspectives involved individuals' ability to see the value and interactions of frames. Bolman and Deal (2013) describe organizational frames in terms of strengths and liabilities throughout an organization, utilizing common metaphors to communicate with the user (Table 7), and these frames seemed to reflect the ways in which participants in this study thought about assessment.

Table 7
Four Frames of Organizations (Bolman & Deal, 1997)

Structural frame. Metaphor for organization: factory or machine. Central concepts: rules, roles, goals, policies, technology, environment. Image of leadership: social architecture. Basic leadership challenge: attune structure to task, technology, environment.

Human Resource frame. Metaphor for organization: family. Central concepts: needs, skills, relationships. Image of leadership: empowerment. Basic leadership challenge: align organizational and human needs.

Political frame. Metaphor for organization: jungle. Central concepts: power, conflict, competition, organizational politics. Image of leadership: advocacy. Basic leadership challenge: develop agenda and power base.

Symbolic frame. Metaphor for organization: carnival, temple, theatre. Central concepts: culture, meaning, metaphor, ritual, ceremony, stories, heroes. Image of leadership: inspiration. Basic leadership challenge: create faith, beauty, meaning.

Participants in the Connected group described interpreting a symbolic organizational activity (assessment) in terms of structural rewards (incentives) to yield a stronger alignment of human resources (relationships and feedback) for better overall outcomes (i.e., student learning or retention) because of their direct effect on the ability to secure scarce (political) resources. Connected-mindset participants described a willingness to regularly reassess their environment for changing priorities, acknowledging the possibilities of shifts in leadership, culture, and internal or external stakeholders (Schuh & Associates, 2009; Tierney, 1988). The Connected action group was most prepared to attend to the multiple internal and external demands on assessment practice that must be planned for and integrated from instruction-level assessment through programs, colleges, and the institution.
Consistent with Ewell's (2009) description of assessment for improvement and/or accountability, individuals in the Connected group had to engage many resources and influences to make the connections between assessment for improvement and assessment for institutional accountability. Four of five individuals in the Connected group were deans or program directors. Those four participants had access to and/or experience with multiple levels of assessment. These different levels of interaction and conversation in an institution are a likely explanation for Connected mindsets, but not the only one. All five participants found a way to see the value in identifying multiple perspectives in their environment and ways to interpret and communicate varying understandings across their relevant organizational structures. The person in this group who was not a director or dean (the librarian) had an interdisciplinary work focus and had participated in her national association's work focused on assessment. The librarian identified both experiences as influences on her thinking about multiple audiences and reasons for assessment. This librarian used her experiences with somewhat complex interdisciplinary learning outcomes assessment to extrapolate meaning about the institutional assessment environment. In turn, she was able to influence her library colleagues to consider institutional audiences when collecting data and demonstrating value to stakeholders.

Isolated assessment. Participants in the Isolated group had training and environments of practice similar to the Connected group, but for no individual was accountability or incentive directly tied to connecting assessment practice throughout the environment. Accountability was present for course- and program-level assessment through student course evaluations and attendance and participant data. Individuals in this group indicated that they were not asked or invited to link their work on course or program assessment to any greater goals. All three members also indicated that they were not included in larger group conversations about assessment in their unit or department. One participant, Emerson, was content not to be included in departmental conversations at the time of the study, citing competing demands. Another participant, Taylor, indicated that program-level assessment took place between the dean and the assessment director, who did not communicate with the rest of the unit about the tasks. The last, Avery, indicated that he had discussions with his director about the learning outcomes in his courses, centered on teaching and assessment methodology.

A response to a lack of connected assessment action throughout an environment might be as simple as creating an action-oriented principle by which staff and faculty are asked to identify connections or similarities in their data related to other environments. For example, some departments on this campus dedicated time at a regular meeting to identify course, program, and/or co-curriculum links for assessment practice as a way to share information and identify overlaps and potential partnerships. Other ways to encourage connected thinking include pre-defining relevant environments and appropriate feedback process cycles.
In many cases the Isolated group members were motivated similarly to the Connected group and understood the presence of relevant environments, but they did not take action for lack of accountability or communication and because their energy went to higher priorities.

Multiple pathways for empowering groups to connect assessment across levels of an institution are outlined in various practice-oriented publications that could be used as starting places for strategies (i.e., Keeling, 2008; Schuh & Associates, 2009). Institutional incentives might become linked to principles of action in addition to assessment methods. Linking connections to incentives and/or accountability and decisions can result in a learning cycle, which in turn can lead to an empowered culture where individuals and units know what to do and do it (Tierney, 1988; Bergquist, 1992).

Limited assessment. Participants in the Limited group each described actions that restricted the ways assessment practices might reach further audiences at the point of interaction with the external, disciplinary accreditor. Each individual noted the immediate influence of the accrediting body on assessment subject and even method. Several participants in this group described some arguments for and against accreditor-defined assessment, ultimately noting that the department is better off for the overall process yet is challenged at times to grasp the rationale for assessment. The reasons given for these limits on practice were the multiple kinds of content, graduation, recruitment, financial, and advisory influences of disciplinary accreditation. Each participant in this group discussed trust in and communication with their colleagues or deans (one was a trusted dean) to interpret and leverage disciplinary assessment processes and outcomes to secure additional resources, institutional credibility, and important external program rankings. One participant described the process of leveraging accreditation reports and data by a dean who presented a gap in college capacity to the provost (i.e., the need to teach a new domain area) based on accreditation assessment. This participant further described how the dean attached a request for more resources (a faculty line) to fill the gap. This kind of within-college/unit leveraging practice was understood fairly well across the Limited group of participants, reinforcing deference to accreditation-defined assessment recommendations or actions to help advance departmental goals. This systemic approach to assessment, inclusive of people, environments, and cultures, stopped short of directly linking institutional value to departmental assessment. Participants in this group acknowledged that data collected for accreditation could be useful for knowing more about institutional learning outcomes.

Balancing accreditation priorities and institutional learning outcomes is a complex process. For example, one professional school faculty member noted her own pathway, describing her interdisciplinary career (humanities/profession) in terms of both standards for quality teaching and assessment in the humanities and professional accreditation standards. While several professional school faculty members in this study embraced their role in accreditation-driven assessment, the humanities/professional school faculty member, Casey, neither embraced nor rejected it, yet considered it part of her environment. In this way, mindset mattered greatly to assessment success in the particular environment.
At the same time, in the above example, her actions reflected a pragmatic approach needed to balance her humanities background within her quantitatively oriented professional school. This participant identified that she was valued in her department for contributing to several potentially competing priorities. Furthermore, she spoke of creating a functional space to contribute to each priority as an individual, rather than relying on professional school colleagues to carry her weight. This participant felt her willingness to be a team player (i.e., serving on committees) was a way she could contribute to the success of the college even though she could not directly contribute to quantitative accreditation efforts. Knowing more about how groups of people act for both improvement and accountability can lead to impacts for leaders in curricular and co-curricular environments.

One important note to add is that, while the regional accreditation review had begun during the time interviews were conducted for this study, no participants mentioned the accreditation process in interview comments. For these individuals, it is likely that regional institutional accreditation did not have a strong influence on daily assessment practice. This is one of the reasons I aligned some external stakeholders with cultural and environmental socialization rather than with accountability and incentives. Considering Connected, Isolated, and Limited enacted mental models allows stakeholders to approach assessment practices differently.

Individual Assessment Motivators

Several kinds of motivations for assessment emerged from interviews that were not specifically tied to influences or actions but may have potential for assessment leaders seeking to influence practices. Individuals in this study were motivated to assess on behalf of the greater good, defined by participants as engaged citizenship, positive community outcomes, and developed lifelong learners. Institutional and program influences were related to the value of data and outcomes. Several individuals were interested in seeing the value of the institution demonstrated through learning outcomes data arranged to represent the undergraduate experience at the institution. Advisory boards were motivators for the ways in which they provided direct links to a related industry, feedback, and sometimes incentives. Shared workload in a department was a motivator because participants felt that they could learn from colleagues and share experiences and lessons from a complex process.

Through attention to individual motivators in the context of assessment demands, individuals may become more aware of the ways in which they interact with their influences, including accountability and incentives (Schein, 2009). By mapping influences and behavior, individuals may better grasp how their actions play out in a given environment and leverage shared goals and assets. The data from different participants' concept maps were similar in ways that suggest possible advantages in coordinating training across action groups (Connected, Isolated, Limited), training that may lead to similar mindsets and reasoning rather than focusing only on changing behavior. While assessment practice for some individuals was not clearly aligned to support their stated motivation, the presence of dissonance can be an opportunity to influence changes (Schein, 2009). Including assessors in identifying differences in motivation and action may provide a knowledge base on which an individual can act.
Linking identified differences to an opportunity to reconsider the why, rather than the what, of assessment may help assessors see the importance of alignment and then act in context to make appropriate changes.

Knowing what motivators are shared on a given campus could be a standalone research agenda. In service of assessment, shared motivators may illustrate opportunities to coordinate additional activity that creates and elicits shared meaning at an institutional level. If arranged and coordinated, shared motivators among individuals with different levels of assessment skill, understanding, and influences could be identified to help bridge skill or mindset differences. The shared motivator and core value of lifelong learning embedded in a university may be sufficient to encourage willing participants to engage and perhaps bring along some of the less willing to address assessment improvements systematically (Hoffman & Bresciani, 2010; Kezar, 2001).

Focusing on similar motivations can be a basis for developing trust and, eventually, the shared understanding important to organizational learning and improvement (Kezar, 2004; Senge, 1994). Perhaps because individuals had diverse experiences, several motivators, like religious and community affiliations, surfaced but were not linked to or aligned with action groups. These motivators illustrated the nuances of working with individuals whose mental models included contexts outside the institutional sphere. Therefore, influences, actions, and motivations could be considered more holistically to address any changes desired in an assessment environment. Using evidence-based practices and integrating multiple perspectives can lead to stronger overall assessment practice in the community (Kezar, 2001). Alignment between individuals known to share assessment motivations but with different action patterns may provide the basis for professional development assessment work groups. Implications for creating these alignments are discussed later, under future research.

Summary of Discussion

While mindsets were similar within each of the Connected, Isolated, and Limited groups, shared motivations across these groups varied. Assessing motivation to act in a group may indicate where shared interest exists, but it may not help individuals link their practices in a way that helps everyone around them see the value of their actions. Alternatively, assessing mindset through motivation without understanding the range of influences and actions will not directly lead an institution to understand more about individuals' motivation. This discussion detailed responses to the research questions:

1. How are learning goals and outcomes understood and assessed?
2. What influences the enacted mental models of individuals' practice of learning outcomes assessment?

Push, Pull, and Path influences on assessment helped show how learning goals and outcomes were understood and assessed. The identification of both action groups and common motivations helps form a more thorough understanding of individual enacted mental models. In short, the answer to the first research question was that goals and outcomes were not always understood as linked to the specific environment in which assessment was enacted, and they were continually shaped by both individual and institutional factors. The answers I found to the second question were highly individuated, but patterns of connectedness emerged as a key factor in assessment efficacy.
Connectedness also represented an opportunity for institutional-level analysis and added understanding of practices across this particular institution's assessment spectrum. In complex organizations it is unlikely that any single approach can change behaviors, but finding that participants were attentive to their environments, if not connected through actions, pushes our understanding closer to a pragmatic response. These data lead to a number of implications for theory, research, and practice.

Implications for Value Propositions

Institutional assessment should be redefined more holistically in terms of value contribution, and at multiple levels of the institution. For example, several individuals had similar levels of technical expertise for conducting assessment, yet varied in their need or interest in connecting their assessment work to all levels. Not everyone in this study saw how connecting assessment to goals across student, program, and institution could contribute to the value proposition of the overall institution. While value propositions were not included in the investigation, a persistent question of assessment scholarship relates to how individuals and organizations create and evaluate contributions. Crafting high-level institutional goals is an important step to guide assessment thinking toward learning outcomes (AAC&U, 2013b; U.S. Department of Education, 2006). Developing and modeling a culture of Connected assessment action linked with goal-driven thinking is a longer process, but it is well underway when an institution has high-level goals and/or action principles in place. In many large universities, action principles exist at the institutional level only, but are intentionally left open to unique program-based application. The addition of institutional principles for assessment action at the program level may encourage faculty and staff to orient toward assessment practices that support a value chain of learning outcomes. Individual or program-level assessment could be better mapped onto something more comprehensive, like the department/unit or institution level, in order to demonstrate the value proposition of learning outcomes. Encouraging mindsets along with effective assessment methods helps address the simultaneous need for Connected individual action and a systems orientation.

Systems perspectives on assessment. The findings in this study may contribute to understanding an individual within a system, specifically an enhanced understanding of individual assessment behavior patterns. Considering the individual mental model in a systems approach to assessment action helped show the ways that a person's preparation, institutional socialization, incentives, and accountability contribute to an institution-wide system of assessment. However, in many cases, higher education assessment literature did not explore enacted mental models (Ewell, 2009; Kezar, 2004; Schuh & Associates, 2009) but rather relied on individuals to adopt, and later shape, the view of the institution. Examining what people do and what they think was necessary to understand enacted mental models as part of the system of assessment. An individual might be able to adjust one's own mindset while relying on previous experience in conjunction with information about larger systems at work in the environment. Alternatively, an organizational rearrangement of the ways assessment is socialized, encouraged, and rewarded across organizational boundaries can affect mindset and practice.
For example, if relevant leaders pivoted from information-delivery training to modeling and rewarding Connected assessment behavior for demonstrating outcomes, the community might be encouraged to shift assessment mindsets from Isolated to Connected rather than trying to assimilate new training into established mental models (Argyris & Schön, 1996). Essentially, this approach seeks to affect and leverage the value proposition of the institution embedded in a mental model rather than the shape of the model itself.

The individual assessment actions were interesting because they were driven by identifiable influences. However, patterns among action groups with common mindsets were compelling because of the potential for changing practice. The behaviors (Isolated, Limited, Connected) in this study led to my categorization of action groups. These patterns might be considered archetypes for future research and categorization, leading to a number of implications for pragmatic applications of knowledge about assessment action outlined in the following sections. In promoting holistic assessment thinking, an institution relies on smart and savvy personnel to figure out the details while providing purpose and direction as needed. A key challenge in using broader organizational theories to explain or drive action in an institution is the somewhat chaotic working of the actual environment and the nuances of individual actors (Cohen & March, 1986). Archetypes can inform the different organizational thinking needed to conceptualize assessment leadership that is both top-down and bottom-up and focused on values and principles followed by discrete implementation (Heifetz, 1994). Theories embracing complexity while making evidence-based distinctions have potential to undergird complex leadership patterns for more effective assessment.

Investigating enacted mental models showed potential to help an institution understand patterns of assessment behaviors in multiple ways that could be influenced in that environment in the future. Creating and coordinating a conversation among assessment practitioners designed to observe and address various mindsets and approaches to assessment could serve to inform adjustments to practice. Higher education theories of culture (Tierney, 1988), organizational frames (Bergquist, 1992; Bolman & Deal, 2013), systems thinking (Senge, 1994), incentives and accountability (Ewell, 2009), governance (Kezar, 2004), environments (Inkelas & Soldner, 2011), and leadership (Heifetz, 1994; Kezar, Carducci, & Contreras-McGavin, 2006) each retain relevance to explorations of enacted mental models. Individual motivation and behavior patterns were salient in the findings of this study and in many ways led participants to different approaches to practice based on their specific set of influences and subsequent actions. Holding multiple, simultaneous frames in view allows for a differently holistic approach to engaging responses to assessment challenges in complex institutions. The findings provide evidence of the ways three particular perspectives (disciplinary training, environmental socialization, and incentives/accountability) hold together for analysis of complex assessment.

Implications for Individuals

Implications for individuals hoping to support, encourage, hire for, and account for assessment largely involve facilitating individual pathways through varied assessment practices.
Enacted mental models can be shaped or influenced through the avenue of the practice (enacted), the thinking (mental model), or a mix of both (Argyris & Schön, 1996; Bolman & Deal, 1997; Dill, 1982; Tierney, 1988). It is important to focus on the things an institution can control, including cultural and environmental socialization, rather than graduate preparation patterns. It is not enough to house assessment in a faculty development or teaching support office; it is necessary to involve individuals in the process throughout the institution.

For example, instructors in accredited departments may be enacting a Limited assessment approach without knowing it because they default to accreditor definitions of success without considering institutional priorities. These individuals might consider adopting lessons from an interdisciplinary mindset to integrate assessment work in the college with similar or complementary assessment work at the institutional level. Instructors with Isolated assessment perspectives, presuming they want to be included, may be frustrated at meetings where their assessment work or data are not considered relevant to the institution. Exclusion at the program or department level could well reinforce the tendency of individual practice to remain isolated. Learning outcomes as a system work from the student through the course, program, college, and institution (Schuh & Associates, 2009; Senge, 1994). Considerations for addressing individual behaviors should be jointly informed by individual values in light of program, college, and institutional needs for coordinated knowledge and data about learner outcomes.

Incentivizing assessment. The participant orientations to assessment in this study were not anti-assessment, despite some expectations of resistance (Lattuca & Stark, 2009; Maki, 2010). Any resistance to assessment by participants was in the context of balancing workload demands and the amount of time needed to develop or validate effective assessments, reflecting the teaching and learning culture in the institution. It is important not to misconstrue resistance to the workload or timing of assessment as resistance to assessment generally; the literature focuses on the how-to of overcoming resistance rather than on individual actions (Bresciani, Zelna, & Anderson, 2004; Culp & Dungy, 2008; Ewell, 2009; Schuh & Associates, 2009).

Participants across action groups treated resistance to assessment differently. In the presence of a new assessment demand (i.e., institutional learning goals), the Connected group generally identified priorities for assessment and integrated new demands into their planning mindset, adjusting mental models of the organization until action could be justified. At the time of interviews, some Connected group participants included institutional learning goals in their assessment plans, while some had not yet prioritized the work. In the face of the same demands, Isolated group participants gave thought to the predicted workload of new assessment demands but identified no need to integrate the additional work into their action item lists, now or later, citing competing demands on their time.

Considering the demands, for example, of performance metrics on assessment, I anticipated that some individuals might resist or distrust efforts to incentivize behavior change toward learning outcomes assessments. However, participants across the study readily understood the role of incentives and performance.
Positioning assessment connectivity in early socialization patterns of teaching and learning, such as graduate training, is important. To address performance, institutions have to provide validated assessment support to early career staff, faculty, and graduate students to better create an experience of successful assessment and feedback. Addressing connectivity during graduate training could help resolve an imbalance at research-extensive universities between research and teaching priorities, which carry different perceived weight (Kim, 2005).

Identifying shared motivators. Several similar motivators emerged for participants that may represent a source of shared understanding. Sharing motivations through concept maps and surveys may help develop trust and shared meaning (Senge, 1994) among and between differently connected assessment practitioners.

Greater good. The shared understanding among participants was that some value both greater than and encompassing the institution is what mattered most, beyond the educational or social value of a degree or the external identity of the institution. For both Connected and Isolated group members, the nature of linking greater good values to their daily work was both idealized and elusive. Some greater good motivations mirrored or extended institutional outcome statements and represented an opportunity for individuals to engage more deeply in assessment. With appropriate support, benefits could exist for both the individual and the institution. The individual benefits by knowing more about a personal value, while the institution benefits by engaging more individuals in assessment (Steiber & Alänge, 2013). However, given the work needed to develop new assessment mechanisms or methods to connect academic or programmatic responsibilities to greater good values, it is understandable that individuals have not attempted greater good assessment connections on a systematic basis. The current connections between course-level learning and institutional value are sufficiently cumbersome and continue to require concerted efforts by coordinated groups in academic departments and learning-oriented administrative units.

Links to program/division assessment. Some overlaps in individual motivations were interesting because of the positions of the people involved. In one case, a tenured faculty member from a professional college, an associate dean from a different professional college, and a program director from the arts and letters college all indicated very similar thinking and practice of assessment connectivity. It is unlikely, however, that these people would be called into a meeting together, much less one about assessment, because they do not share titles used in this hierarchical organization, and the institution does not regularly engage publicly in cross-functional applications of assessment except for an occasional community of practice. These individuals may not know that they have similar motivations to assess, information that could potentially benefit them and the institution (Kezar & Eckel, 2002).

Institutional drivers for learning outcomes assessment were not linked closely to daily activity for those in the Isolated group. These individuals were aware of the institutional value of assessment but unable or unwilling to link findings to goals or data-gathering efforts in their smaller campus organization. A top-down approach to institutional assessment had not yet saturated all departments represented in this study in similar ways.
If a goal is to produce more assessment connectivity among faculty and staff, the implication is that program, division, or department influences are necessary but not sufficient for the outcome. The distinguishing common thread among Connected group participants was the creation of relationships, policies, incentives, and accountability closely linked to assessment behaviors. Advisory boards and accreditors provided incentives and accountability, and to some extent adaptive leadership, for making curricular changes for the Limited group. The strength of these same structures served to limit connectivity to the institutional value proposition. Participants in accredited programs come to rely on the accreditor-department relationship as a proxy for connectedness from student to program, department, college, and institution.

Leadership characteristics. One practical organizational challenge for college or program leaders in reinforcing a particular assessment mindset is to create regular opportunities for feedback (Argyris & Schön, 1996; Senge, 1994), allowing users to learn from data at multiple levels. In an example of assessment leadership, one participant seemed to understand assessment from multiple angles but did not report conducting much direct learning assessment. He indicated doing a fair amount of convincing others to do assessment at the right times, for the right reasons, and in the ways that make the most sense for the learning standard in question. Because of substantive organization-wide communication and planning in his college, it was easy enough for him, as an assistant dean, to point instructors, often the most discrete-level assessors, to the correct support and models aligned with valued assessment practice in the college. He then mapped college-level assessment practice to institution-wide practice. This behavior set represented a mediating role available to many staff and faculty with Connected mindsets. The participant identified this role as one of leadership and implied the intention of connectivity. As an expert in the study of higher education organizations, rather than in a discipline housed in the college, this person identified a way to support the assessment connectivity of others. In translating disciplinary learning outcomes to reflect institutional values, this participant demonstrated leadership by reinforcing a strong culture of assessment (Culp & Dungy, 2012).

Leaders must value transparency as well as encourage organization-wide learning through modeling and incentives to create a productive environment for assessment data (Senge, 1994). Organizational openness can support open dialog and debate that can, in turn, lead to individuals taking action to improve their courses or programs. Leaders matter to the process of data reporting, transformation, and packaging at various times and for various reasons. Without a vision, often provided by leaders, it becomes difficult for any one individual, connected mindset or not, to trace assessment through an institution's structures and practices. Student learning data in institutional reports are aggregated, shaped, and managed into predetermined forms (i.e., institutional learning outcomes). Aggregate student outcomes may or may not be clearly represented in public reports, resulting in low utilization for improvement (Ewell, 2009). Addressing instructor and program leader needs to benefit from such aggregate reports requires appropriate feedback loops (Schuh & Associates, 2009).
Far from simply capturing and representing data, institutional assessment leaders need to clarify data, appropriately disaggregate data for programs, and model positive uses of data in line with institutional priorities. Leaders might work to develop training for individuals and teams based on the type of stated motivation, training likely to include actors with variously connected assessment mindsets. Mapping exercises like the one used in this study could be adapted to group applications for purposes of sharing mental models. Individuals did not interact with each other as part of this study, so an additional consideration is how individuals would share knowledge of common motivations. Teams or communities of practice centered on assessment thinking and action could provide templates for mapping mental models and creating shared understanding about assessment connectedness (Trowler & Knight, 2000).

Future Research Implications

Institutions. Key outcomes of this study may inform further research at institutions similarly large and complex to the study site as well as at smaller institutions working on assessment processes. In this exploratory study, participants represented a small cross section of university assessment life whose assessment patterns varied. It might be expected that greater variation exists across a larger group of people in a diverse institution. The complexity of assessment influences, such as individual preparation, environmental socialization, and incentives/accountability, was salient for all participants. Creating and validating a survey based on known assessment influences would be an important step toward a scalable understanding of within- and across-institutional patterns of assessment influences. Furthermore, patterns of action and motivation might be translated into surveys to help map assessment practice and motivation across an institution. Alternatively, a deep look into the motivation and behaviors of one or two colleges or large co-curricular units in an institution could impact assessment practice patterns. Cases of assessment practice in a unit with presumably more discrete goals than a large institution may help leaders understand assessment practices and mental models within a unit in order to improve practice there. Department or unit awareness of patterns informs institution-level assessment because efforts to assess learning and to aggregate information for accreditation differ significantly. Knowledge of common motivation for assessment could help to enhance the value of assessment. Research on enacted mental models may be productive at smaller institutions with a more specific educational focus, such as technical or religious education. If members of these institutions are assumed to share a mental model and do not, more informed practice could result from additional data.

Relative boundaries. The relative starting and stopping point of assessment for each individual depended on a particular combination of department, discipline, and individual perspectives of their own work. However, the particular environment did not necessarily confine assessment thinking. For example, an instructor with a connected mindset described three conceptual distinctions relevant to their work, focused on course, program, and accreditation only.
By comparison, a dean with a connected mindset interacted with five or more distinct conceptions of assessment, including course-level data, program-level reporting, disciplinary accreditation metrics, institutional outcomes, and regional accreditation. It is not unexpected to see a more senior member of the institution (a dean) interact with institution and accreditation data. At the same time, less senior participants also demonstrated connected practice.

For several participants, being motivated by the greater good created an incentive to align assessment work with values, such as engaged democracy and community development, held by stakeholders outside regular higher education channels. Leveraging individual motivators to influence connectedness has to take into account how different individuals and groups in the institution interact with their surrounding external environments. A three-dimensional map of multiple mental models could help visualize where common assessment practice takes place for individuals. An opportunity for future research could include modeling and predicting departmental participation in institutional assessment. Mental model maps for individuals would note the relative starting and stopping point for each individual on a y-axis. A planar horizontal x-axis would identify the placement of mental models in relation to other individuals and departments. Take, for example, two hypothetical individuals, an instructor and an associate dean, who both assess at the college level in the same department. These individuals have different assessment starting points: the instructor assesses instruction, program, and college-level learning, while the dean, who no longer teaches courses, focuses primarily on the college, institutional, and accreditation levels. A useful analysis would note the direct motivation overlaps or shared assessment mindsets for these individuals. Visualizing these mental models could help researchers create networked maps of assessment connectedness patterns at a department or unit level, inclusive of various motivators (personal, professional, academic, co-curricular), to help leaders identify gaps in practice (Senge, 1994).
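As a purely illustrative companion to the hypothetical instructor and associate dean above, the sketch below encodes each person's assessment levels (the relative starting and stopping points) and motivators as sets, then computes the overlaps that could become the weighted edges of a networked map. The names, levels, and motivators are invented assumptions, not data from this study.

```python
# Hypothetical enacted-mental-model entries for two individuals in one college.
# "levels" ~ relative starting/stopping points (the y-axis described above);
# "motivators" ~ personal/professional drivers that might connect practice.
people = {
    "instructor": {
        "levels": {"course", "program", "college"},
        "motivators": {"student learning", "disciplinary identity", "greater good"},
    },
    "associate_dean": {
        "levels": {"college", "institution", "accreditation"},
        "motivators": {"greater good", "accountability", "resource allocation"},
    },
}

def overlap(a, b, key):
    """Return the elements two individuals share on a given dimension."""
    return people[a][key] & people[b][key]

# Edges in a networked map could be weighted by how much two people share.
print("Shared assessment levels:", overlap("instructor", "associate_dean", "levels"))
# Shared assessment levels: {'college'}
print("Shared motivators:", overlap("instructor", "associate_dean", "motivators"))
# Shared motivators: {'greater good'}
```

Aggregating such pairwise overlaps across a department would yield the kind of unit-level connectedness map described above.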
As this study demonstrated, further research on the terms used to describe assessment in higher education is important to inform data and decision-making at multiple levels of practice simultaneously. The same word, assessment, is used extensively to describe tests, student learning, program, college, accreditation, and other external stakeholder actions. Perhaps more operational terms would serve an institution better. Modifying the word assessment in practice should be inclusive of institutional-level targets, such as financial metrics, teaching inputs, or learning outcomes, and of the likely stakeholders. Adding more specific language creates more opportunities to share meaning about the ways assessment works across levels, potentially leading to a form of self-aware communication about the work (Senge, 1994; Wheatley, 1992). Actually finding shared meaning can lead fairly quickly to coordinated projects and potentially innovative outcomes in higher education (Koen, Bertels, & Elsum, 2011).

Practice Implications

Implications for practice extend to organizational training, coordination, and leadership. My working assumptions about organizations stem from work in student affairs and faculty support organizations in both smaller and larger institutions. In these contexts I worked with individuals who assessed from a single frame and struggled with alternative explanations (Bolman & Deal, 2013). Prior to this study, I would have recommended that simply taking more perspectives on the organization would help an individual see a problem more clearly. With evidence of mental models of assessment placed next to enacted practice, however, the influences on and motivations for assessment point more toward the tension of a bottom-up effort to translate individual and program-level outcomes to look like, or fit into, the containers created by institutional-level outcomes. Top-down efforts to inform and coordinate assessment work and to provide feedback to instructors can help ease the translation of course-level outcomes into institution-level outcomes. A connected assessment practice means that an individual understands that student learning outcomes and/or program data provide substance to the value proposition of an institution, the more so when the assessment process reflects shared meaning among internal constituents. Establishing institutional value in the eyes of external stakeholders relies on learning outcomes that simultaneously differentiate learner achievements and remain comparable to those of peer institutions. A methodologically similar assessment process can help an institution communicate the common value of many different learning outcomes (Bolman & Deal, 2013; Ewell, 2009; Schuh & Associates, 2009).

Training. A shift in focus from behavioral goal orientation toward individual mindsets of connected assessment practice may be a useful concept on which institutions might build faculty and staff training and development protocols. A mindset is adaptable to multiple kinds of individual and environmental inputs while maintaining fidelity to a value, in this case assessment quality. For example, one dean in this study identified individuals with similar motivations (incentivized by the dean) from different disciplinary groups to design assessment intended to develop connectedness. By leveraging common experiences during the training of what were essentially different action groups, this dean accomplished assessment effectiveness at a technical and political level (Heifetz, 1994; Inkelas & Soldner, 2011). Asking assessment-influence questions modeled on the results herein may provide insight into the factors and arrangements of talent that matter to accomplishing more connected practice. In another example, a change to assessment committee structures designed to highlight influences and a department's or unit's contribution to the overall value proposition may help prime faculty and staff to approach assessment with local priorities in mind.

Taking a planning approach based on experiential learning theory might also help coordinate enacted mental models of assessment (i.e., Kolb, 2014). Kolb's theory of experiential learning points toward a four-part cycle beginning with concrete experience, followed by reflective observation, abstract conceptualization, and application and testing. Applied to group training, this theory may help faculty and practitioners better respond to assessment influences based in lived experience (concrete experience), individual motivation (reflective observation), and/or theoretical drivers (abstraction) in order to make decisions (application).
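As one way to picture how the Kolb cycle might structure such a training session, the sketch below pairs each of the four stages with a discussion prompt tied to a named assessment influence. The stage-to-prompt pairing is an assumption made for illustration, not a protocol used or validated in this study.

```python
# A minimal sketch of a training exercise that walks one assessment influence
# through Kolb's four-stage cycle. The prompts are hypothetical examples.
KOLB_CYCLE = [
    ("concrete experience", "Recall a recent assessment you conducted or received."),
    ("reflective observation", "What motivated the choices you made, and for whom?"),
    ("abstract conceptualization", "Which disciplinary or theoretical frame shaped it?"),
    ("application and testing", "What will you change in the next assessment cycle?"),
]

def training_session(influence):
    """Generate discussion prompts that move a named assessment influence
    (e.g., disciplinary training, environment, accountability) through the cycle."""
    return [f"[{stage}] ({influence}) {prompt}" for stage, prompt in KOLB_CYCLE]

for line in training_session("incentives and accountability"):
    print(line)
```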
Aligning findings about enacted mental models with evidence-based approaches to training, such as experiential learning, may help inform multiple perspectives and audiences simultaneously (Senge, 1994).

Coordinating resources and feedback. Institution-wide educational resources (i.e., salary lines) are frequently assigned on the basis of the value a department creates for an institution. In turn, educational value is often defined both by evidence of learning and by evidence that learning outcomes align with the value proposition represented by institutional learning outcomes (Ewell, 2009; Schuh & Associates, 2009). When limited resources are assigned based on evidence of learning and alignment, connected mindsets and behaviors are reinforced. Leaders at the dean and/or director level are largely responsible for communicating and framing the need for evidence and value to an academic or co-curricular department (Kezar, 2001). Importantly, four out of five individuals espousing connected mindsets in this study held dean or director titles. Deans and directors have the greatest ability to influence faculty and staff who educate, assess, and communicate the value of learning, as well as campus-level administrators who assign and distribute limited value-based resources.

Starting from evidence and value, faculty and staff can become more conscious of how student work is aggregated and used in multiple ways (Ewell, 2009), perhaps leading to more connected assessment approaches. In practice, efforts to aggregate data do not often result in feedback that helps an individual college or unit make decisions, yet feedback loops are very important to understanding both process and outcomes and to becoming a learning organization (Senge, 1994). With digital assessment tools such as portfolios, tagging, search functions, and various technology-embedded reports (i.e., badges), a more automated, two-way learning conversation becomes possible, showing where and how learner artifacts are used across the institution. Further, demonstrating integrated thinking in assessment action is a good step toward encouraging integrated thinking in undergraduate education (Schuh & Associates, 2009).
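To suggest what an automated two-way conversation about learner artifacts might look like, the sketch below tags hypothetical artifacts with outcomes and the reports they feed, then answers both the learner-facing question (where is my work used?) and the leader-facing question (which artifacts evidence a given outcome?). The artifact names, tags, and uses are invented; the sketch does not describe any particular portfolio or badging platform.

```python
# Hypothetical learner artifacts tagged with the outcomes and reports they feed.
artifacts = [
    {"id": "essay-042", "owner": "student_a", "tags": {"written communication"},
     "used_in": {"course grade", "program review", "institutional report"}},
    {"id": "lab-notebook-7", "owner": "student_b", "tags": {"quantitative reasoning"},
     "used_in": {"course grade"}},
]

def where_used(artifact_id):
    """The learner-facing half of the two-way conversation: every place an
    artifact is currently being used across the institution."""
    for a in artifacts:
        if a["id"] == artifact_id:
            return sorted(a["used_in"])
    return []

def evidence_for(outcome):
    """The leader-facing half: which artifacts evidence a given outcome."""
    return [a["id"] for a in artifacts if outcome in a["tags"]]

print(where_used("essay-042"))
# ['course grade', 'institutional report', 'program review']
print(evidence_for("written communication"))
# ['essay-042']
```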
Conclusion

By exploring individual enacted mental models of assessment and their connections with university expectations and various influences on assessment, I hope the practice and understanding of assessment are clearer for individuals and the institution. This study provides evidence that interrupts common organizational explanations and offers insight into a path for exploring learning outcomes assessment and, hopefully, connected assessment practice (Bresciani, 2006). I sought to inform ways in which individual enacted mental models of assessment practice help organizations advance an agenda of student learning in concert with university programs through the use of robust assessment practice. With this research, I attempted to create a better understanding of individual assessment frames, or mental models, that should help institutional leaders make sounder policy and implementation decisions about assessing learning outcomes. By better understanding how and why individuals practice and understand learning outcomes assessment, it was possible to better understand dynamic assessment practice in a systematic manner. I was able to identify institutional, cultural, and environmental variables that influence assessment practices and understandings, as well as variation by formal/appointed roles (Thomas, 2011).

Higher education organizations are not enactors of assessment. They are containers in which individuals design and engage in learning and assessments. Assessment practices help individuals see and understand their parts of this container, allowing space for adaptive changes when needed (Heifetz, 1994). From a distance the shape of the container remains fairly static, while a closer look at the edges provides a view of internal and external pressures to reshape the value proposition. The idea of institution-level assessment relies on individuals sharing meaning about the shape of the container by focusing on process, but an institution as an entity may not be able to share understanding of learning outcomes in the same ways across all units. Differentiation is built into the concept of organizing by programs, departments, and disciplines. The key challenge for institutional leaders is representing and communicating both aggregate assessment data and assessment processes as symbols of shared meaning (Hatch, 1997).

This study looked more deeply into individual-organizational perspectives on assessment, specifically the enacted mental models of assessment for a number of individuals, providing additional insight into available models of higher education assessment and evaluation (i.e., Astin & antonio, 2012). Applying organizational explanations to assessment action has been a useful starting point, yet it has at times confined actors to narrowly defined action. A key outcome illuminates how individual training, actions, environmental influences, and personal motivations are important to assessment action in ways previously unexplained by theoretical explanations alone. This study addressed the practical challenge of coordinating and understanding multiple assessment perspectives across levels and boundaries within a large organization (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). The data shed light on several approaches to outcomes and goal assessment for student learning, program efficacy, and institutional quality. In addition to considering existing theory as an explanation for assessment, the study data nudge practitioners toward grasping the complexity of individual behaviors and motivation related to past training, current environment, and future goals. Individuals undertaking assessment roles are encouraged to actively connect their assessments to the shared values held across the institution rather than ascribing control to organizers at all levels throughout the institution. When individuals view their organizations as a networked container of values, they can more connectively enact a mental model of assessment according to their discipline, training, and institutional goals to foster student learning and development and advance the value of the institution.

APPENDICES

Appendix A

Interview Protocol

Thank you for being willing to participate in my study of the understanding and practice of assessment in one residential college.

Purpose: I am interested in learning more about how institutional culture influences faculty and administrators' understanding (interpret, decide) and practice (collect data, report) of assessment of planned learning experiences. This research concerns the why and how, individual and group decisions, and the learning environment where assessment is taking place.

Procedures: I will ask a number of open-ended questions.
As I indicated in the initial contact, I would like to audio record these interviews so that I am able to accurately represent what you say. If you would like to say something and prefer it not be recorded, please tell me, and I will turn off the recorder. All recordings, transcriptions, forms, and other documents will be coded, with pseudonyms used in place of names, institutions, and the college to safeguard the participants', the institution's, and the college's identities to the greatest extent possible. Do you have any questions before we begin?

Consent: Review and sign two consent forms; give one form to the participant. Make sure the recording device is ready. Start interviewing.

Background

As you know, I am interested in understanding how you understand and practice assessment in your individual position, as well as in the context of (college) and the (University).

1. Please begin by sharing with me a bit about yourself and your role and work here at (college).
2. What is your experience with developing learning outcomes and/or assessment thereof?
3. What are the learning outcomes you try to impart upon your students?
   a. How do you assess them?
   b. Why do you use that method?
4. How do expectations for assessment influence your assessment decisions?
5. In what way do you connect your learning goals to other contexts?
   a. Why?
   b. How?
6. In what way do you connect your assessment practices to other contexts?
   a. Why?
   b. How?
7. How do you communicate about your understanding of assessment?
   a. With whom?
   b. When?
   c. Why?
8. How do you think about learning goals?
9. What influences your practice of assessment?
10. How does your academic/professional preparation influence your assessment practice?
    a. In what assessment methods were you trained?
    b. At what depth are you comfortable conducting learning assessments?
    c. How do your assessment practices align with college expectations?
11. How, in turn, do people influence action in the (college) environment?
12. What can you tell me about Leadership?
    a. Who leads?
    b. When?
    c. How?
    d. To what end?
13. What can you tell me about Trust in (college)?
    a. What does trust mean?
    b. How is trust developed?
    c. What happens when trust is lost?
14. What can you tell me about groups making decisions together in the (college)?
    a. How would you describe a typical group decision-making process?
    b. About what kind of topics do groups (not one person) have influence?
15. How does (institution) influence your decisions about assessment?
    a. What do you know about (institution) and/or (college) learning goals?
    b. Do you use the (institution) learning goals to inform your course/program goals?
    c. How do broader learning goals seem to matter to assessment?
16. How does (college) aggregate learning outcomes data?

Appendix B

Research Participant Information and Consent Form

You are being asked to participate in a research study regarding the ways in which individuals understand and practice learning outcomes assessment in the course of their appointed role at Michigan State University. The individual interview will take approximately 45 minutes, with scheduling made at your convenience and in a location comfortable for you. A second interview will be requested and/or scheduled at the end of the first interview. I would like to take an audio recording and handwritten notes throughout the interview, if you consent.

Participation in this research study is completely voluntary. You have the right to decline participating.
You may change your mind at any time and withdraw. You may choose not to answer specific questions or to stop participating at any time. Any direct identification information, including your name, the name of your office/unit, and the names of your current and/or previous institutions, will be removed from data when responses are analyzed. Although every attempt will be made to keep your identification and information private, some distinguishing characteristics, such as what you share about your assessment practice and other comments, may reflect your identity.

If you have any questions about this study, such as scientific issues, or to report an injury, please contact Dr. Marilyn Amey, Professor and Chair, Educational Administration, 620 Farm Lane, East Lansing MI, 48824 (rm 427 Erickson Hall), Michigan State University, by phone: (517) 432-1056, or email address: amey@msu.edu. If you have any additional questions or concerns regarding your rights as a study participant, or are dissatisfied at any time with any aspect of this study, you may contact (anonymously, if you wish) Kristen Burt, JD, Interim Director, Human Research Protection Programs on Research Involving Human Subjects, by phone: (517) 355-2180, fax: (517) 432-4503, email address: irb@msu.edu, or postal mail: 408 West Circle Dr. (202 Olds Hall), East Lansing, MI 48824.

Thank you for participating!

I agree to participate in this study. In addition, by signing below I agree to allow my responses to be audio taped for research purposes of this study.

Signature _________________________________ Date _______________
Name (Printed)_____________________________

Student Researcher: William Heinrich, Graduate Student, Michigan State University, heinri19@msu.edu
Faculty Advisor: Marilyn Amey, Ph.D., Professor, Higher Education, Michigan State University, amey@msu.edu

REFERENCES

Accreditation Board for Engineering and Technology (2015). Homepage/About. Retrieved August 12, 2015, from http://www.abet.org/.

American Association of Colleges & Universities (AAC&U) (2013a). Shared Futures Homepage. Retrieved October 7, 2013, from http://www.AAC&U.org/SharedFutures/global_century/institutions.cfm.

American Association of Colleges & Universities (AAC&U) (2013b). How are campuses remapping general education to emphasize broad learning and the 21st-century skills that are key to innovation, problem solving, and leadership for change? Retrieved September 13, 2013, from http://www.AAC&U.org/meetings/networkforacademicrenewal.cfm.

American College Personnel Association (ACPA), Association of College and University Housing Officers – International (ACUHO-I), Association of College Unions – International (ACUI), National Academic Advising Association (NACADA), National Association for Campus Activities (NACA), National Association of Student Personnel Administrators (NASPA), and National Intramural-Recreational Sports Association (NIRSA). (2006). Learning reconsidered 2: Implementing a campus-wide focus on the student experience. R. P. Keeling (Ed.). Washington, D.C.: Authors.

Accreditation Board for Engineering and Technology (2015). History. Retrieved June 10, 2015, from http://www.abet.org/about-abet/history/.

Aronson, D. (1996). Overview of systems thinking. Retrieved September 3, 2014, from http://resources21.org/cl/files/project264_5674/OverviewSTarticle.pdf

Arum, R. & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press.

Argyris, C. (1976).
Increasing Leadersh ip Effectiveness . New York, NY: Wiley. Argyris, C. & Schın, D.A. (1996). Organizational learning II. Reading, MA: Addison -Wesley. Astin, A. W. & antonio, a. l. (2012). Assessment for excellence: the philosophy and practice of assessment and evaluation in higher education (2 nd ed). American Council on Education Lanham, MD: The Rowman & Littlefield Publishing Group . Astin, A. W. (1991). Assessment for excellence: the philosophy and practice of assessment and evaluation in higher education. Santa Barbara, CA: Oryx Press. Association of American Universities (AAU) (2012). Revised -Vision -and-Change -Final -Report.pdf AAU Undergraduate STEM Education Initiative. Retrieved on September 22, 2015 from http://www.aau.edu/policy/article.aspx?id=12588. 127 Banta, T.W., & Ass ociates. (2002). Building a scholarship of assessment. San Francisco CA: Jossey -Bass. Barr, R. B., & Tagg, J. (1995, November/December). From Teaching to Learning: A New Paradigm for Undergraduate Education. Change, pp. 13-25. Bergquist, W. H. (1992). The four cultures of the academy: Insights and strategies for improving leadership in collegiate organizations . San Francisco, CA: Jossey -Bass. Bess, J. L. & Dee, J. R. (2008). Understanding college and university organization: Theories for effective policy and practice (v 1). Sterling, VA: Stylus. Bloom, B. S. (1994). Reflections on the development and use of the taxonomy . In Anderson, L. W. & L. A. Sosniak, eds. (1994), Bloom's Taxonomy: A Forty -Year Retrospective . Chicago National Society for the Study of Education. Bolman, L. G., & Deal, T. E. (1997). Reframing organizations: Artistry, choice, and leadership (2nd ed.). San Francisco, CA: Jossey -Bass. Bolman, L. G., & Deal, T. E. (2013). Reframing organizations: Artistry, choice, and leadership (5th ed.). San Francisco, CA: Jossey -Bass. Bok, D. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more . Princeton, NJ: Princeton University Press. Boyer, Ernest L. (1991). The scholarship of teaching. College Teaching 39 (1), 11-13. Bresciani, M. J. (2006). Outcomes -based academic and co -curricular program review: A compilation of institutional good practices . Stylus Publishing, LLC. Bresciani, M. J., Zelna, C. L., & Anderson, J. A. (2004). Assessing s tudent learning and development: A handbook for practitioners. Washington, DC: NASPA . Clark, J. E., & Eynon, B. (2009). E -portfolios at 2.0 -Surveying the Field. Peer Review , 11(1), 18. Cohen, M. D., & March, J. G. (1986). Leadership in an organized anarchy . Leadership and Social Change . Collins, K. M. (2011). The future of student learning in student affairs . In K.M. Collins & D. M. Roberts (Eds.), Learning Is Not a Sprint: Assessing and Documentin g Student Leader Learning in Coc urricular Involvement (185-196). Washington, D. C.: NASPA -Student Affairs Administrators in Higher Education. Creswell, J. W. (2008). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc. Cubarrubia, A. P . (2009). Exploring the influence of external standards of institutional effectiveness on program assessment in student affairs (Doctoral dissertation, The George 128 Washington University). UMI 3344843. Culp, M. M. & Dungy, G. J. (Eds.) (2012) Building a culture of evidence in student affairs: A guide for leaders and practitioners . Washington, DC: NASPA -Student Affairs Administrators in Higher Education. Denzin, N. & Lincoln, Y. 
(2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Lincoln, Y. (Ed s.), The Sage handbook of qualitative research (pp. 1-32). Thousand Oaks, CA: Sage Publications. Dill, D. D. (1982). The management of academic culture: Notes on the management of meaning and social integration. Higher Education, 11 , (3). 303- 320. Duders tadt, J. J. (2009). Aligning American Higher Education with a Twenty -first -century Public Agenda. Examining the National Purposes of American Higher Education: A Leadership Approach to Policy Reform. Higher Education in Europe , 34(3-4), 347-366. Dwyer, C.A . Millet, C.M. & Payne, D.G. (2006) A culture of evidence: Postsecondary assessment and learning outcomes. Retrieved August 27, 2014 , from http://www.ets.org/Media/Resources_For/Policy_Makers/pdf/cultureofevidence.pdf . Eaton, J. S. (2011). U.S. accreditat ion: Meeting the challenges of accountability and student achievement. Evaluation in Higher Education 5 (1). pp 1-20. Eisenhardt, K. & Graebner, (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50 (1), 25Ð32. Ewell, P. T. (1997). Strengthening assessment for academic quality improvement. In M. W. Peterson , D. D. Dill , L. A. Mets , and associates, (Eds.) Planning and Management for a Changing Environment: A Handbook on Redesigning Postsecondary Ins titutions . (360-381). San Francisco, CA: Jossey -Bass. Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. in T.W. Banta & Associates (Eds.) Building a scholarship of assessment (2002). San Francisco, CA: Jossey -Bass. Ewell, P. (2008). Assessment and accountability in America today: Background and c ontext . New Directions for Institutional Research, Assessment Supplement, 7-17. Ewell, P. T. (2009). Assessment, accountability and improvement: Revisiting the tension. NILOA Occasion al Paper #1 . Retrieved September 15, 2013 , from www.learningoutcomesassessment.or g. Fink, D. L. (2013). Significant learning experiences: An integrated approach to d esigning college c ourses. San Francisco, CA: Jossey -Bass. Flyvbjerg, B. (2006). Five misund erstandings about case -study research. Qualitative Inquiry, 12, 219-245. 129 Fuller, M. B. (2011). Preliminary results of the survey of assessment c ulture . Retrieved April 4, 2013, from http://www.shsu.edu/research/survey -of-assessment -culture/documents/2011Survey ofAssessmentCultureResults.pdf. Gardner, P . (2014). CERI Research Brief 2012 -4: Liberally Educ ated Versus In -Depth Training. Retrieved August 12, 2015, from http://www.ceri.msu. edu/home/attachment/ceri -research -brief -2012-4-liberally -educated -versus -in-depth -training/. Gasser, R. F. (2006). Character istics of persisting students utilizing the retention self -study framework: A case s tudy . (Doctoral Dissertation) Retrieved from ProQuest Dissertations and Theses. (137356263) Gerring, J. (2004). What is a case study and what is it good for? The American Political Science Review, 98 (2), 341 Ð 354. Glesne, C. (2006). Becoming qualitative researchers: An introduction. (4th ed.). Bost on, MA: Pearson/Allyn & Beacon. Green, A. S., Jones, E., & Aloi, S. (2008). An exploration of high -quality student affairs learning outcomes assessment practices. Journal of Student Affairs Research and Practice , 45(1), 133-157. Goralnik, L., Millenbah, K . F., Nelson, M. P., & Thorp, L. (2012). An environmental pedagogy of care: Emotion, relationships, and experience in higher education ethics l earning. 
Journal of Experiential Education 35 (3). 412-428. Gumport, P. (1993). The contested terrain of academic program reduction. Journal of Higher Education , 64(3), 283Ð311. Guba, E. G. & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications. Hatch, M. (1997). Organization the ory: Modern, symbolic, and postmodern perspectives . New York, NY: Oxford University Press. Heifetz, R. A. (1994). Leadership without easy answers. Cambridge, MA: The Belknap Press of Harvard University Press. Heinrich, W. F. & Rivera, J. E. ( in press ). Ass essing multiple dimensions of significant l earning. In Laura Wankel and Charles Wankel (Eds.) Integrating curricular and co -curricular endeavors to enhance student success. Bingley, United Kingdom: Emerald Publishing Group. Hill, R. C. & Levenhagen, M. (1995). Entrepreneurial activities metaphors and mental models: Sensemaking and sensegiving in innovative and entrepreneurial activities. Journal of Management 21 : 1057 -1074 . 130 Hirt, J. B. (2006). Where you work matters: Student affairs adm inistration at different types of institutions . Lanham, MD: University Press of America. Hoffman, J. (2010). Perceptions of assessment competency among new student affairs professionals (EdD dissertation, University of California, Los Angeles, United State sÑCalifornia). Retrieved Feb 18, 2013, from Dissertations & Theses: Full Text. UMI Number: 3437511. Hoffman, J. L., & Bresciani, M. J. (2010). Assessment work: Examining the prevalence and nature of assessment competencies and skills in student affairs jo b postings. Journal of Student Affairs Research and Practice, 47 (4), 495Ð512. Holland, J. H., Holyoak, K. J., Nisbett, R.E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery . Cambridge, MA: MIT Press. Hovland, K. & Schnei der, C. G. (2011). Deepening the connections: Liberal education and global learning in college. About Campus 16 (5). 2-8. Huber, G.P. (1991). Organizational learning: The contributing processes and the literatures. Organization Science, 2 , 88Ð115. Inkelas, K. K. & Soldner, M. (20 11). Undergraduate living Ðlearning programs and student outcomes. in J.C. Smart, M.B. Paulsen (eds.), Higher Education: Handbook of Theory and Research 26, New York, NY: Springer Science + Business Media B.V. Jessup -Anger, E. R (2009). Implementing innovat ive ideas: A multisite case study of putting Learning Reconsidered into practice . (PhD dissertation, Michigan State University, United StatesÑMichigan. Retrieved Feb 16, 2012, from Dissertations & Theses: Full Text. 242 pages, UMI Number: 3381261. Johnson -Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press. Johnson, T. E., OÕConnor, D. L., Pirnay -Dummer, P. N., Ifenthaler, D., Spector, J. M. & Seel, N. (2006). Comparative study of mental model research methods: Relationships among ACSMM, SMD, MITOCAR & DEEP methodologies. In Concept maps: Theory, methodology, technology (CaŒas, A. J. & J. D. Novak, Eds). Proceedings of the Second International Conference on Concept Mapping. San Jos”, Costa Rica. Habron, G., Goralnik, L ., & Thorp, L. (2012). Embracing the learning paradigm to foster systems thinking. International Journal of Sustainability in Higher Education, 13, 4, pp. 378 -393. Heinrich, W. & Rivera, J. E. (Forthcoming). Assessing Multiple Dimensions of Significant Learning. In Laura Wankel and Charles Wankel (Eds.) 
Integrating curricular and co -curricular endeavors to enhance student success. Bingley, United Kingdom: Emerald Publishing Group. 131 Kahn, S. (2014). E -Portfolios: A look at where weÕve been, where we are now, and where weÕre (possibly) going. Peer Review Winter 2014, 6 (1). Keeling, R. P. (2008). Assessment reconsidered: Institutional effectiveness for student success (1st ed.). Washi ngton, D.C.: NASPA -Student Affairs Administrators in Higher Education. Kezar, A. J. (2001) Understanding and facilitating organizational change in the 21st century: Recent research and conceptualizations. In ASHE -ERIC Higher Education Report Volume 28, Num ber 4. Kezar, A. J., Series Editor. San Francisco, CA: Jossey -Bass. Kezar, A. J., & Eckel, P. D. (2002). The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive concepts?. The Journal of Higher Education , 73(4), 435-460. Kezar, A. J. (2004). What is more important to governance: Relationships, trust, and leadership or structures and formal processes? In W. Tierney, & V. Lechuga (Eds.), New directions for higher education. pp. 35Ð46. San Francisco, CA: Jossey -Bass. Kezar, A., Carducci, R., & Contreras -McGavin, M. (2006). Rethinking the ÒLÓ word in higher education: The revolution of research on leadership . San Francisco, CA: Jossey -Bass. Kezar, A. & Dee. J. (2011). Conducting paradigm -cros sing analyses of higher education organizations: Transforming the study of colleges and universities. In J. C. Smart and Paulsen, Michael B. (Eds.) Higher Education: Handbook of Theory and Research, 26, 265-315. New York, NY: Springer. Kezar, A. J. & Ecke l, P. J. (2002). The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive co ncepts? The Journal of Higher Education, 73, (4). 435-460. Kim, S. E. (2005). Balancing competing accountability requirements: Challenges in performance improvement of the nonprofit human services agency. Public Performance & Management Review , 29(2), 145-163. Kirksey, M. J. (2010). Building and sustaining a culture of assessment: How student affairs program assess and contribute to student learning and development in the co -curricular and curricular environments (EdD dissertation, University of the Pacific, United States ÑCalifornia). Retrieved January 25, 2013 from Dissertations & Theses: Full Text. UMI Number: 3479 698. Koen, P. A., Bertels, H. M., & Elsum, I. R. (2011). The three faces of business model innovation: challenges for established firms. Research -Technology Management , 54(3), 52-59. Kolb, D. A. (2014). Experiential learning: Experience as the source of le arning and development . FT Press. 132 Kolb, D. & Boyatzis, R. (2001). Experiential learning theory: Previous research and new directions. p. 227 Ð248. In Perspectives on thinking, learning, and cognitive styles . Mahwah, NJ: Lawrence Earlbaum Associates. Kolb, A. Y., & Kolb, D. A. (2005). Learning styles and learning spaces: Enhancing experiential learning in higher education. Academy of management learning & education , 4(2), 193-212. Kuh, G. (2001). Assessing what really matters to student learning. Change, p. 10-18. Kuh, G. D. (2007). What student engagement data tell us about college readiness. Peer Review , 9(1), 4-8. Kuh, G. (2010) Forward. In Schuh, J.H., & Gansemer -Topf, A.M. (2010, December). The role of student affairs in student l earning assessment (NILOA Occasional Paper No.7). Kuh, G. D., & Ikenberry, S. O. 
(2009, October). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institut e for Learning Outcomes Assessment. Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Kuh, G. D. & Schneider, C. G. (2008). High -impact educational practices: What they are, who has access to them, and why they matter. AAC&U. Washington D.C. Kuh & Whitt, (1988). The invisible tapestry: Culture in A merican colleges and universities. ASHE -ERIC Higher Education Report 1. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Lattuca, L. & Stark, J. (2009). Shaping the college curriculum: Academic plans in context (pp.145-181). San Francisco, CA: Jossey -Bass. Lenning, O., & Ebbers, L. (1999). The powerful potential of learning communities: Improving education for the f uture. [ASH E-ERIC Higher Education Report, vol. 26, no. 6.] Washington, DC: Association for the Study of Higher Education, ERIC Clearinghouse on Higher Education, & George Washington University, Graduate School of Education and, Human Development. Love, P. G., & Est anek, S. M. (2004). Rethinking student affairs practice. San Francisco, CA: Jossey -Bass. Maki, P. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA, Stylus. Merriam, S. B. (1988). Case study research in education: A qualitative approach. San Francisco, CA: Jossey -Bass. 133 Miles, M. B. & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage Publications, Inc. Mintzberg, H. (1979). The professional b ureaucracy. The structuring of o rganizat ions -a synthesis of the r esearch . Upper Saddle River, NJ: Prentice Hall. Morgan, G. (2006). Images of o rganizations. Thousand Oaks, CA: Sage Publications Incorporated. National Center for Postsecondary Improvement (2014). Assessment policy models, types, considerations . Retrieved August 27, 2014 from http://web.stanford.edu/group/ncpi/unspecified/assessment_states/assessment.html National Institute for Learning Outcomes Assessment (2014). Providing evidence of student learning: A tra nsparency f ramework. Retrieved August, 26, 2014 , from http://www.learningoutcomeassessment.org/TFComponentSLOS.htm . Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and impr oving assessment in higher e ducation. Higher and Adult Education Series . San Francisco, CA : Jossey -Bass. Patton, M.Q. (2002). Qualitative research & evaluation methods (3rd ed). Thousand Oaks, CA: Sage Publications, Inc. Peterson, M. W., Einarson, M. K., Augustine, C. H., & Vaughan, D. S. (1999). Institutional support for student assessment: Methodology and results of a national survey. Stanford, CA: Stanford University, National Center for Postsecondary Improvement. Pike, G. R. (2008). Assessment matters: Learning about learning communities: Consider the variables. About Campus , 13(5), 30 -32. Provezis, S. (2010). Regional accreditation and student learning outcomes: Mapping the territory. Urbana, IL: University of Ill inois and Indiana University, National Institute for Learning Outcomes Assessment. Renn, K. A., & Reason, R. D. (2012). 
College students in the United States: Characteristics, experiences, and outcomes . San Francisco, CA: John Wiley & Sons. Rhodes, T. (ed. ) 2010. Assessing outcomes and improving achievement: Tips and tools for using rubrics . Washington, DC: Association of American Colleges and Universities. Seagraves, B., & Dean, L. A. (2010). Conditions supporting a culture of assessment in student affairs divisions at small colleges and universities. Journal of Student Affairs Research and Practice, 47 (3), 307Ð324. Scott, I. (2011). The learning outcome in higher education : time to think again? Worcester Journal of Learning and Teaching, 5, p.1Ð8. 134 Schein, E. (2009). Helping: How to offer, give and receive help. San Francisco, CA: Berrett -Koehler Publishers. Schuh, J. H. and Associates (2009) Assessment methods for student a ffairs. San Francisco, CA: Wiley & Sons. Schuh, J.H., & Gansa mer-Topf , A.M. (2010). The role of student affairs in student learning assessment (NILOA Occasional Paper No.7). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Sedlacek, W. E. (2011). Using noncognitive variables in assessing readiness for higher education. Readings on Equal Education. 25, 187-205. Senge, P. M. (1994). The Fifth discipline fieldbook: Strategies and tools for buildin g a learning organization . New York, NY: Currency: Doubleday. Senge, P. M. (1996). Leading learning organizations . Training & Development (50) ,12; p. 36. Shavelson, R. J. (2007). A brief history of student learning assessment: How we got where we are and a proposal for where to go n ext. Washington, D.C.: Association of American Colleges and Universities. Smith, B. L. & MacGregor, J. (2009). Learning communities and the quest for quality. Quality Assurance in Education 17 (2). P 118 -139. Spence, L. (2001, November/December). The case against teaching. Change, 11-19. Spohrer, J., Gregory, M., Ren, G. (2010). The Cambridge -IBM SSME white paper r evisited. In: Handbook of Service Science. Service Science: Research and Innovations in the Servic e Economy , pp. 677Ð706. Heidelberg, Germany: Springer, Stake, R. E. (2005). Qualitative case studies. In N. K. Denzen & Lincoln, Y. S. (Eds.), The sage handbook of qualitative research (pp. 443-466). Thousand Oaks, CA: Sage Publications. Steiber, A., & Al−nge, S. (2013). A corporate system for continuous innovation: the case of Google Inc. European Journal of Innovation Management , 16(2), 243-264. Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey -Bass. Terenz ini, P. T., & Pascarella, E. T. (1994). Living with myths: Undergraduate education in America. Change, 26 (1), 28-32. Thelin, J. R. (2004). A history of American higher education. Baltimore, MD: Johns Hopkins University Press. Thomas, G. (2011). A typology for the case study in social science research following a review of definition, discourse, and structure. Qualitative Inquiry, 17 (6), 511-521. 135 Tierney, W. G. (1988). Culture in higher education: Defining the essentials. The Journal of Higher Education. 5 9 (1), 2-21. Tierney, W. G. (1997). Organizational socialization in higher education. The Journal of Higher Education, 68 (1), 1-16. Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition. Chicago, IL: University of Chicago Press. Tinto, V. (2000). What have we learned about the impact of le arning communities on students? 
Assessment Update: Progress, Trends, and Practices in Higher Education, 12(2), 1-2, 12. Trowler, P., & Knight, P. T. (2000). Coming to know in higher educa tion: Theorising faculty entry to new work contexts. Higher Education Research and Development , 19(1), 27-42. Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for p ractitioners (1st ed.). San Francisco, CA: Jossey -Bass. U. S. Department of Education. (2006). A test of leadership: Charting the future of American higher education (Report of the commission appointed by Secretary of Education Margaret Spellings). Washington, DC: Author. Weick, K. E. (1976). Educational organi zations as loosely coupled systems. Administrative Science Quarterly , 21(1), 1-19. Weiss, R. S. (1994). Learning from strangers : The art and method of qualitative interview studies. New York, NY: The Free Press . Western Association of Schools and Colleges (2014). Senior College and University Commission, WASC Glossary. Retrieved August 25, 2014, from, http://www.wascsenior.org/lexicon/14#letter_s . Wheatley, M. (1992). Leadership and the new science. San Francisco, CA: Berrett -Koehler. Wiggins, G. & McTighe, J. (2006). Understanding by design: A framework for effecting curricular development and a ssessment (2nd ed.). Association for Supervision and Curriculum Development, Alexandria, VA. Wolcott, H. F. (2001). Writing up qualitative research 2nd Ed. Thousand Oaks, CA: Sage Publications, Inc. Wylie, J. P. (2012) . Residents' interaction with their college living -learning peer mentor: A grounded theory . Dissertation CLEMSON UNIVERSITY, 2012, 259 pages; Retrieve from ProQuest Dissertation and Theses ( 3512198). Youatt, J. P., McCully, W. and Blanshan , S. A. (2014), Welcome to the neighborhood ! Reinventing academic life for undergraduate s tudents. About Campus, 19: 24 Ð28. 136 Yielder, J. & Codling, A. (2004) Management and leadership in the contemporary univ ersity . Journal of Higher Education Policy and Management, 26 (3). Yin, R. (2003). Case study research: design and m ethods. 3rd Ed. Thousand Oaks, CA: Sage Publications, Inc.