TEACHERS’ SENSEMAKING ABOUT IMPLEMENTATION OF AN INNOVATIVE SCIENCE CURRICULUM ACROSS THE SETTINGS OF PROFESSIONAL DEVELOPMENT AND CLASSROOM ENACTMENT

By

Xeng de los Santos

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Curriculum, Instruction, and Teacher Education—Doctor of Philosophy

2017

ABSTRACT

TEACHERS’ SENSEMAKING ABOUT IMPLEMENTATION OF AN INNOVATIVE SCIENCE CURRICULUM ACROSS THE SETTINGS OF PROFESSIONAL DEVELOPMENT AND CLASSROOM ENACTMENT

By

Xeng de los Santos

Designing professional development that effectively supports teachers in learning new and often challenging practices remains a dilemma for teacher educators. Within the context of current reform efforts in science education, such as the Next Generation Science Standards, teacher educators face the dilemma of how to support a large number of teachers in learning new practices while also considering factors such as time, cost, and effectiveness. Implementation of educative, reform-aligned curricula is one way to reach many teachers at once. However, one question is whether large-scale curriculum implementation can effectively support teachers in learning and sustaining new teaching practices. To address this dilemma, this study used a comparative, multiple case study design to investigate how secondary science teachers engaged in sensemaking about implementation of an innovative science curriculum across the settings of professional development and classroom enactment. Drawing on the concept of sensemaking from organizational theory, I focused specifically on how teachers’ roles in social organizations influenced their decisions to implement the curriculum in particular ways, with differing outcomes for their own learning and for students’ engagement in three-dimensional learning.
My research questions explored: (1) patterns in teachers’ occasions of sensemaking, including critical noticing of interactions among themselves, the curriculum, and their students; (2) how teachers’ social commitments to different communities influenced their sensemaking; and (3) how sustained sensemaking over time could facilitate teacher learning of rigorous and responsive science teaching practices. In privileging teachers’ experiences in the classroom using the curriculum with their students, I used data generated primarily from teacher interviews with their case study coaches about implementation over the course of one school year. Secondary sources of data included artifacts such as teacher-modified curriculum materials, classroom observation notes, and video-recordings of classroom instruction and professional development sessions. Data analysis involved descriptive coding of the interview transcripts and searching for linguistic markers related to components of an occasion of sensemaking. Findings show that teachers engaged in sensemaking about curriculum implementation in multiple and different ways that were either productive or unproductive for their learning of rigorous and responsive science teaching practices. Teachers who had productive outcomes for teacher learning engaged in sustained sensemaking that involved critical noticing of interactions among the curriculum, themselves, and their students, with the goal of bridging the gap between what the curriculum offered and what their students could do. In contrast, teachers who had unproductive outcomes for teacher learning engaged in sensemaking that often involved critical noticing of only one aspect and were motivated by local obligations. Four themes emerged: sustained sensemaking over time, the influence of school communities, teacher learning of content, and the influence of teachers’ beliefs.
Using these findings and themes, I present a model for teacher sensemaking within the context of long-term professional development around implementation of an innovative curriculum, with a mechanism for how teacher learning could occur over time. Implications for science teacher professional development and learning, as well as directions for future research, are offered.

Copyright by
XENG DE LOS SANTOS
2017

ACKNOWLEDGMENTS

My dissertation committee was an invaluable source of support and encouragement. My advisor and dissertation chair, Dr. Charles W. (Andy) Anderson, challenged me to think through my conceptual framework in ways that have made me a more rigorous educational researcher. Andy was a generous and patient advisor and indulged me by listening to stories about salsa dancing. The members of my committee provided invaluable feedback. In particular, I thank Dr. Alicia Alonzo for her thoughtful attention to detail, Dr. Corey Drake for her insightful comments, and Dr. Kenneth Frank for his enthusiasm and optimism. A special thanks to Dr. William Penuel at CU Boulder for taking the time to support my scholarly work in various ways.

I acknowledge the teachers who agreed to participate in the larger Carbon TIME research project. These teachers graciously allowed us into their classrooms, which required vulnerability and risk. The amazing work that they do every day with their students gives me hope that equitable, rigorous, and responsive science education for all is a real possibility.

I acknowledge my colleagues in the Department of Teacher Education. To name but a few: Katie Cook, TJ Smolek, Lora Kaldaras, Meenakshi Sharma, Dr. Angie Kolonich, Dr. Wendy Johnson, Dr. Steve Bennett, and Dr. Joi Merritt. A special thanks to Dr. Hannah Miller at Johnson State College, who was my mentor in so many ways. My time at Michigan State would not have been as enjoyable without these delightful people.

Finally, I thank my family for their constant support and encouragement.
My mother came and watched the kids while I went to conferences. My kids, Eleanor and Noah, kept me grounded and focused on what is most important in life. And Timothy Gay came into my life at the right time and for the right reasons. I love you all.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Chapter One: Introduction
    Context and Focus of the Study
    Teachers’ Sensemaking About Implementation of an Innovative Curriculum
        Conceptual Framework for Investigating Teachers’ Sensemaking
    Subjectivities
        Professional Background
        Roles in Carbon TIME
    Overview of Chapters
Chapter Two: Literature Review
    Using PD to Support Teachers’ Implementation of Innovative Curricula
        Science Education Reform
        Curriculum Implementation
        Professional Development
    Literature on Boundary Objects and Sensemaking
        Boundary Objects
        Sensemaking
    Synthesis
    Conclusion
Chapter Three: Context and Methods
    Context: A Design-Based Implementation Research Project
    Methods
        Participants
            Case study teachers
            Case study coaches
        Data Collection
            Data from classroom enactment
                Teacher interviews
                    End-of-unit interviews
                    End-of-year interview
                    Y1-follow-up interview
            Data from PD
        Data Analysis
            Teacher interview transcriptions
                Preparing the data
                Coding the data
                    Reliability coding of descriptive codes
                Systematic analyses of the data set
                    Identifying patterns across cases
                    Identifying patterns within cases
                    Identifying occasions of sensemaking
            Defining an occasion of sensemaking
                Outcomes of sensemaking
                    Modifications
                    Reflections
                Goals and resources that influence sensemaking
                    Goals
                    Practical knowledge
                    Social communities
                Critical noticing
            Analyzing insufficient evidence of sensemaking
            Triangulation with other data sources
    Summary
Chapter Four: Findings
    Identifying Occasions of Sensemaking
        Results of Numerical Analyses: Identifying Potential Occasions of Sensemaking
            Variation across cases
            Variation within cases
            Combined results for numerical analyses
        Results of Content Analysis: Identifying Occasions of Sensemaking
            Other occasions of sensemaking
            Sufficient evidence of sensemaking
            Teachers’ comments about boundary objects when there was insufficient evidence of sensemaking
            Differences in patterns of sensemaking across cases
                Ms. Eaton’s landscape of sensemaking about Carbon TIME boundary objects
                Ms. Callahan’s landscape of sensemaking about Carbon TIME boundary objects
                Comparison of patterns of sensemaking about Carbon TIME boundary objects
        From Identifying to Describing Occasions of Sensemaking
    Occasions of Sensemaking: Narratives Situated Within Teachers’ Ecologies of Practice
        Theme 1: Sustained Sensemaking Over Time
            Ms. Nolan’s occasion of sensemaking about the Process Tools: “Putting myself in the shoes of my students”
            Ms. Callahan’s occasion of sensemaking about the Predictions Tool: “It’s important for the students”
            Summary
        Theme 2: Influence of School Communities
            Ms. Callahan’s occasion of sensemaking about the data spreadsheets
            Ms. Wei’s occasion of sensemaking about the Plants Unit investigation: “Something a little bit flashier”
                Influence of school communities
            Mr. Ross’s occasion of sensemaking about using student assessment data for teacher evaluation: “How to fit all of those things together”
                Influence of school communities
            Ms. Nolan’s decisions to modify her enactment
            Summary
        Theme 3: Teacher Learning of Content
            Mr. Harris’s occasion of sensemaking about the Pre- and Post-Tests: “Learning what they don’t know”
        Theme 4: Influence of Teachers’ Beliefs
            Ms. Barton’s occasion of sensemaking about discourse and grading of the Process Tools: “Knowing what kids think”
        Summary
    Synthesis
Chapter Five: Discussion
    Discussion
        Research Question 1: Occasions of Sensemaking
            Critical noticing
            Outcomes of sensemaking
                Decisions to modify curriculum materials
                Decisions to modify classroom enactment
                Reflections on modifications
                Reflections on goals and resources that influence sensemaking
        Research Question 2: Social Commitments to Various Communities
            Modifying to fit local contexts
            Navigating multiple PD initiatives
            Meeting the expectations of local contexts
        Research Question 3: Sustained Sensemaking Over Time
    Synthesis
    Limitations
        Participants
        Data Sources
        Sensemaking
    Implications
        Science Teacher PD and Learning
        Future Research
    Conclusion
APPENDICES
    APPENDIX A: TEACHER INTERVIEW PROTOCOLS
191 APPENDIX B: REFLECTION ON TEACHING PRACTICES SURVEY .................. 198 APPENDIX C: FRAMEWORK FOR DESCRIPTIVE CODING ................................. 200 APPENDIX D: RESULTS AND ANALYSES OF DESCRIPTIVE CODING ............ 201 APPENDIX E: SUFFICIENT AND INSUFFICIENT EVIDENCE OF SENSEMAKING IN THE DATA ............................................................................................................... 204 APPENDIX F: OCCASIONS OF SENSEMAKING ..................................................... 217 APPENDIX G: EVIDENCE OF SENSEMAKING OVER TIME ................................ 227 APPENDIX H: TEACHER-MODIFIED ARTIFACTS ................................................ 236 APPENDIX I: CURRICULUM MATERIALS ............................................................. 241 REFERENCES ........................................................................................................................... 246 ix LIST OF TABLES Table 1 Selected Carbon TIME Boundary Objects of Interest ..................................................... 33 Table 2 Content of Face-to-Face and Online PD Sessions in 2015-2016 School Year ............... 40 Table 3 Case Study Teacher Demographics and School Context in 2015-2016 .......................... 43 Table 4 Student Demographic Data for Ms. Nolan and Ms. Wei’s High Schools in the Northwest ....................................................................................................................................................... 44 Table 5 Case Study Coach Experiences in K-12 Education and Roles in Carbon TIME ............ 45 Table 6 Interview Dates and Interviewers for Case Study Teacher Interviews ........................... 47 Table 7 Summary of Primary and Secondary Data Sources ........................................................ 55 Table 8 Parent and Child Codes in the Descriptive Coding Framework ..................................... 59 Table 9 Descriptive Coding of Mr. 
Harris’s Excerpt About the Instructional Model and Process Tools ............................................................................................................................................. 60 Table 10 Ratio of Individual Teachers’ Talk to the Mean About Boundary Objects & Classroom Practices ........................................................................................................................................ 86 Table 11 Justifications for Identifying Occasions of Sensemaking About Boundary Objects & Classroom Practices Based on Numerical Analyses of Amount of Teacher Talk ....................... 87 Table 12 Identification of an Occasion of Sensemaking Based on Sufficient Evidence in Content Analysis ........................................................................................................................................ 88 Table 13 Summary of Teachers’ Sensemaking About Carbon TIME Boundary Objects ........... 91 Table 14 Teachers’ Comments About Boundary Objects When There Was Insufficient Evidence of Sensemaking in the Data .......................................................................................................... 96 Table 15 Primary Foci of Teachers’ Critical Noticing in Their Occasions of Sensemaking ..... 105 Table 16 Judgments About Teachers’ Approaches to Sensemaking About Carbon TIME Boundary Objects ....................................................................................................................... 107 Table 17 Selected Occasions of Sensemaking for Narrative Description .................................. 109 Table 18 Ms. Nolan’s Goal of Putting Herself in the Shoes of Her Students ............................ 112 x Table 19 Ns. Nolan’s Occasion of Sensemaking About the Evidence-based Arguments and Explanations Tools ..................................................................................................................... 114 Table 20 Ms. 
Callahan’s Focus on the Importance of Particular Science Topics and Skills ..... 118 Table 21 Ms. Callahan’s Occasion of Sensemaking About the Predictions Tool ...................... 122 Table 22 Ms. Callahan’s Occasion of Sensemaking About the Data Spreadsheets ................... 126 Table 23 Ms. Wei’s Goal of Having Something a Little Bit Flashier for Students ................... 136 Table 24 Ms. Wei’s Occasion of Sensemaking About the Plants Unit Investigation ................ 138 Table 25 Mr. Ross’s Goal of Fitting It All Together .................................................................. 144 Table 26 Mr. Ross’s Occasion of Sensemaking About the Pre- and Post-Tests ........................ 145 Table 27 Mr. Harris’s Learning Over Time About How Much His Students Don’t Know ....... 153 Table 28 Mr. Harris’s Occasion of Sensemaking About the Pre- and Post-Tests ...................... 154 Table 29 Ms. Barton’s Belief That Students Copy Written Work ............................................. 158 Table 30 Ms. Barton’s Sensemaking About Discourse and Grading of the Process Tools ....... 160 Table 31 Categories of Productive and Unproductive Sensemaking for Teacher Learning ...... 167 Table 32 Determination of Productive and Unproductive Sensemaking for Case Study Teachers ..................................................................................................................................................... 168 Table 33 List of Practices on the Reflection on Teaching Practices Survey .............................. 198 Table 34 Framework for Descriptive Coding of Teacher Interviews ......................................... 200 Table 35 Total Number of Excerpts Coded with Each Code for All Teacher Interviews .......... 201 Table 36 Number of Words, Means, and Totals for Teachers’ Talk About Boundary Objects & Classroom Practices .................................................................................................................... 
203 Table 37 Percentage of Individual Teachers’ Talk About Boundary Objects & Classroom Practices ...................................................................................................................................... 203 Table 38 Sufficient Evidence for Teachers’ Sensemaking About the Instructional Model ....... 204 Table 39 Insufficient Evidence for Teachers’ Sensemaking About the Instructional Model .... 205 xi Table 40 Sufficient Evidence for Teachers’ Sensemaking About the Expressing Ideas Tool ..................................................................................................................................................... 206 Table 41 Insufficient Evidence for Teachers’ Sensemaking About the Expressing Ideas Tool ..................................................................................................................................................... 207 Table 42 Sufficient Evidence for Teachers’ Sensemaking About the Predictions Tool ............ 208 Table 43 Insufficient Evidence for Teachers’ Sensemaking About the Predictions Tool .......... 209 Table 44 Sufficient Evidence for Teachers’ Sensemaking About the Evidence-based Arguments Tool ............................................................................................................................................. 210 Table 45 Insufficient Evidence for Teachers’ Sensemaking About the Evidence-based Arguments Tool .......................................................................................................................... 211 Table 46 Sufficient Evidence for Teachers’ Sensemaking About the Explanations Tool ......... 212 Table 47 Insufficient Evidence for Teachers’ Sensemaking About the Explanations Tool ....... 
213 Table 48 Sufficient Evidence for HS Teachers’ Sensemaking About the Pre- and Post-Tests ..................................................................................................................................................... 214 Table 49 Sufficient Evidence for MS Teachers’ Sensemaking About the Pre- and Post-Tests ..................................................................................................................................................... 215 Table 50 Insufficient Evidence for MS Teachers’ Sensemaking about the Pre- and Post-Tests ..................................................................................................................................................... 216 Table 51 Mr. Ross’s Occasion of Sensemaking About the Instructional Model ....................... 217 Table 52 Mr. Harris’s Occasion of Sensemaking About the Evidence-based Arguments Tool ..................................................................................................................................................... 218 Table 53 Ms. Callahan’s Occasion of Sensemaking About the Instructional Model ................. 218 Table 54 Ms. Callahan’s Occasion of Sensemaking About the Expressing Ideas Tool ............. 219 Table 55 Ms. Callahan’s Occasion of Sensemaking About the Pre- and Post-Tests ................. 219 Table 56 Ms. Apol’s Occasion of Sensemaking About the Predictions Tool ............................ 220 Table 57 Ms. Apol’s Occasion of Sensemaking About the Pre- and Post-Tests ....................... 220 Table 58 Ms. Wei’s Occasion of Sensemaking About the Instructional Model ........................ 221 xii Table 59 Ms. Wei’s Occasion of Sensemaking About the Expressing Ideas Tool .................... 221 Table 60 Ms. Wei’s Occasion of Sensemaking About the Evidence-based Arguments Tool ... 222 Table 61 Ms. Wei’s Occasion of Sensemaking About the Explanations Tool .......................... 222 Table 62 Ms. 
Wei’s Occasion of Sensemaking About the Pre- and Post-Tests .......... 223
Table 63 Ms. Nolan’s Occasion of Sensemaking About the Instructional Model .......... 223
Table 64 Ms. Nolan’s Occasion of Sensemaking About the Expressing Ideas Tool .......... 224
Table 65 Ms. Nolan’s Occasion of Sensemaking About the Pre- and Post-Tests .......... 224
Table 66 Ms. Eaton’s Occasion of Sensemaking About the Evidence-based Arguments Tool .......... 225
Table 67 Ms. Eaton’s Occasion of Sensemaking About the Explanations Tool .......... 225
Table 68 Ms. Eaton’s Occasion of Sensemaking About the Pre- and Post-Tests .......... 226
Table 69 Mr. Ross’s Goal of Fitting It All Together .......... 227
Table 70 Ms. Barton’s Persistent Belief Over Time About the Value of Students’ Written Work .......... 228
Table 71 Ms. Barton’s Belief about the Value of Talking .......... 229
Table 72 Ms. Nolan’s Goal of Putting Herself in Her Students’ Shoes .......... 231
Table 73 Ms. Wei’s Goal of Student Engagement and Personal Connection .......... 233
Table 74 Ms. Callahan’s Focus on the Importance of Science Topics and Skills .......... 234

LIST OF FIGURES

Figure 1. Context and focus of the study on the classroom enactment setting .......... 5
Figure 2. Conceptualization of teachers’ implementation of Carbon TIME as a disturbance in teachers’ ecology of practice .......... 9
Figure 3. Conceptual framework for investigating teachers’ sensemaking about Carbon TIME curriculum implementation over time .......... 11
Figure 4. Model of teacher sensemaking .......... 68
Figure 5. What counts as an occasion of sensemaking .......... 70
Figure 6. The landscape of teachers’ sensemaking about Carbon TIME boundary objects .......... 90
Figure 7. Ms. Eaton’s landscape of sensemaking about Carbon TIME boundary objects .......... 100
Figure 8. Ms. Callahan’s landscape of sensemaking about Carbon TIME boundary objects .......... 101
Figure 9. Ms. Nolan’s modification to embed the Evidence-based Arguments Tool into the Explanations Tool: The back side .......... 115
Figure 10. Feedback loop in Ms. Nolan’s sensemaking about the Evidence-based Arguments and Explanations Tools .......... 116
Figure 11. Timeline of Ms. Callahan’s Sustained Sensemaking About the Expressing Ideas and Predictions Tools in 2015-2016 .......... 121
Figure 12. Ms. Callahan’s modification to the data spreadsheet for the ethanol burning investigation in the Systems & Scale unit to include percent change in mass in Year One .......... 127
Figure 13. The influence of Ms. Callahan’s social commitment to her school colleagues and students on her occasion of sensemaking about the data spreadsheets .......... 129
Figure 14. The influence of Ms. Wei’s social commitment to her school colleagues on her occasion of sensemaking about the Plants unit investigation .......... 141
Figure 15. The influence of Mr. Ross’s social commitment to school administrators and students on his occasion of sensemaking about student assessment data .......... 147
Figure 16. Mr. Ross’s landscape of sensemaking about Carbon TIME boundary objects .......... 149
Figure 17. Results of descriptive coding: Matrix of code co-occurrences in Dedoose .......... 202
Figure 18. Ms. Nolan’s modification to the Expressing Ideas Tool in the Animals unit to substitute a panda growing for a boy growing as the phenomenon of interest .......... 236
Figure 19. Ms. Nolan’s modification to embed the Evidence-based Arguments Tool into the Explanations Tool: The front side .......... 237
Figure 20. Ms. Nolan’s Creation of a New Tool That Combines All Three Processes .......... 238
Figure 21. Ms. Callahan’s modification to the data spreadsheet for the ethanol burning investigation in the Systems & Scale unit to include percent change in mass in Year Two .......... 239
Figure 22. Ms. Wei’s modification to the Evidence-based Arguments Tool in the Animals Unit to match the Claim-Evidence-Reasoning framework .......... 240
Figure 23. Predictions Tool for the Systems and Scale Unit .......... 241
Figure 24. Expressing Ideas Tool for the Systems and Scale Unit .......... 242
Figure 25. Evidence-based Arguments Tool for the Systems and Scale Unit .......... 243
Figure 26. Explanations Tool for the Systems and Scale Unit .......... 244
Figure 27.
The Carbon TIME Instructional Model .......... 245

Chapter One

Introduction

Current science education reform efforts based on the Framework for K-12 Science Education (National Research Council, 2012) require fundamental shifts in teachers’ and students’ roles in science classrooms. Teachers are being asked to design instruction, develop assessments, and facilitate classroom discourse that engages students in three-dimensional science learning, which integrates disciplinary core ideas, scientific and engineering practices, and crosscutting concepts. This vision of three-dimensional science learning shifts students’ roles significantly from learning about science to figuring out phenomena; correspondingly, teachers’ roles shift from being the primary source of authority to playing a supportive role in developing students’ epistemic agency, or authority to shape the practice and knowledge of a community of learners (e.g., Roehl, 2012; Stroupe, 2014). The Next Generation Science Standards (NGSS Lead States, 2013) were developed based on this vision of three-dimensional science learning and provide guidance for teachers in the form of performance expectations that describe what students should know and be able to do by certain grade levels. For example, a performance expectation for high school biology is: “Develop a model to illustrate the role of photosynthesis and cellular respiration in the cycling of carbon among the biosphere, atmosphere, hydrosphere, and geosphere” (HS-LS2-5). Integrating the three dimensions in a way that supports students in building knowledge over time is just one challenge that this new vision of science education poses for teachers and students.
In addition to these demands, teachers are rightly being asked to attend to equitable science instruction for all students from a range of socioeconomic, language, ethnic, racial, gender, and academic ability backgrounds (Lee, Miller, & Januszyk, 2014). In order to support teachers and students in achieving this new vision of science teaching and learning, science teacher educators have already begun developing and implementing professional development (PD) programs designed to support teachers’ implementation of NGSS. One example is NGSX: The Next Generation Science Exemplar System for Professional Development, a web-based learning system that includes face-to-face PD and collaborative study groups (NGSX, 2017). PD programs such as NGSX can help teachers explore the ideas presented in the Framework for K-12 Science Education and NGSS and develop “tools and strategies to take this new vision back into their classrooms” (NGSX, 2017). For experienced teachers, however, exposure to new ideas, tools, and strategies may still not be enough to overcome the difficulty of translating understanding of NGSS into enactment of teaching practices that support students’ engagement in three-dimensional science learning. Once teachers are back in their classrooms and immersed in the day-to-day reality of determining what to do for the next day, they face a myriad of challenges to enacting new teaching practices, including having enough time to plan for instruction and adapt existing curriculum materials to support three-dimensional science learning. Furthermore, teachers often must meet local expectations, such as using common assessments and meeting teacher evaluation requirements. One dilemma, then, for science teacher educators is how best to use the limited time in PD to support teachers in developing a vision of three-dimensional science teaching and learning and enacting that vision successfully with students in classrooms.
Science teacher educators must change the ways in which they conceptualize science teacher PD and preparation related to NGSS (National Academies of Sciences, Engineering, and Medicine, 2015; Reiser, 2013; Windschitl & Stroupe, 2017). Therefore, in this study, I investigated secondary science teachers’ classroom enactment of an innovative curriculum as they participated in a large-scale, multi-year implementation project. My goal was to use the concept of sensemaking from organizational theory (Weick, 1995) to explore teachers’ sensemaking about their implementation of an innovative science curriculum across the settings of PD and classroom enactment over time. Opfer and Pedder (2011) called for PD studies to be situated in teachers’ contexts in order to recognize and investigate the complexity of teacher professional learning across multiple scales. Likewise, Kazemi and Hubbard (2008) called for researchers to attend to the coevolution of teachers’ participation between classroom practice and PD as a way to better understand why some teachers’ practices changed more than others. This multi-directional approach to studying teacher learning is a shift from a uni-directional approach that assumes that teachers will take what they learn from the PD setting and apply it to their classroom contexts. In the following sections, I describe the context and focus of the study and explain how exploring teachers’ sensemaking could provide insights into variations in outcomes of teachers’ professional learning experiences.

Context and Focus of the Study

The context of this study is a design-based implementation research (DBIR; Penuel, Fishman, Cheng, & Sabelli, 2011) project investigating secondary science teachers’ implementation of Carbon TIME, an innovative science curriculum developed by educational researchers at Michigan State University.
Within this larger project, my study focused on the classroom enactment experiences of eight case study teachers during their first year of implementation. Therefore, throughout this dissertation I use the term “project” to refer to the larger curriculum implementation project. Carbon TIME researchers designed the curriculum to support secondary science teachers in scaffolding students’ engagement in three-dimensional science learning. I describe relevant features of the curriculum in detail in Chapter Two; here, I note that Carbon TIME focuses on using the crosscutting concepts of energy and matter conservation to trace matter and energy through carbon-transforming processes such as photosynthesis and respiration. Carbon TIME uses an Instructional Model based on an inquiry-application sequence that engages teachers and students in two different forms of classroom discourse: divergent and convergent talk and writing. In the project and in my study, “discourse” refers to written as well as verbal communication. The Carbon TIME project used research-based principles to design PD to support teachers’ implementation of the curriculum. For example, the project used a cohort model to increase the likelihood of contact between researchers and teachers and among teachers in a designed network of Carbon TIME colleagues. One of Reiser’s (2013) recommendations for PD providers was to “structure teachers’ work to be collaborative efforts to apply NGSS to their own classrooms” (p. 16); the purpose of the designed network was to facilitate those collaborations through face-to-face and online PD. Thus, as teachers crossed the boundary between the PD and classroom enactment settings multiple times during their participation in the first year of the project, I investigated teachers’ sensemaking about their implementation of the curriculum and focused my study on the classroom enactment setting (see Figure 1).

Figure 1.
Context and focus of the study on the classroom enactment setting

In my study, I conceptualized teachers as boundary crossers, or people who crossed boundaries between settings, and curriculum materials as boundary objects, or artifacts that crossed boundaries between settings with potentially different meanings in those settings (Star & Griesemer, 1989; Star, 2010). Carbon TIME curriculum materials appeared in both the PD and classroom enactment settings, but each setting varied in terms of purpose and participants present. In the PD setting, researchers and teachers explored features and intended uses of the curriculum; in the classroom enactment setting, teachers and students could enact the curriculum in ways that differed from the intended curriculum. For example, the four Carbon TIME Process Tools were material artifacts (sheets of paper) that were intended to scaffold classroom discourse around inquiry and application activities in the units. They appeared in the PD setting when researchers showed teachers how to use them to support students’ writing and talk about the phenomena in the units. However, in the classroom enactment setting, teachers and students could use them in different ways for different purposes, and those ways could differ from the intended curriculum (e.g., using the Process Tools to record the “correct” answers). Furthermore, I conceptualized that teachers’ professional communities, including their school- and district-level colleagues and administrators, could influence how teachers engage in sensemaking about Carbon TIME boundary objects in both settings.
As teachers and Carbon TIME materials moved across the settings of PD and classroom enactment over time, I theorized that extended engagement in sensemaking about the curriculum materials in those settings had the potential to promote teacher learning, which I defined broadly as any shift in teachers’ goals for teaching and learning, practical knowledge (van Driel, Beijaard, & Verloop, 2001), or social commitments to their various communities. In the following section, I explain how investigating teachers’ sensemaking about implementation of Carbon TIME could provide insights into variations in outcomes of teacher professional learning experiences. Within the particular context of the Carbon TIME project, one desired outcome was for teachers to make progress towards learning rigorous and responsive science teaching practices associated with Carbon TIME boundary objects and supporting students’ engagement in three-dimensional science practices. While others have defined the terms “rigorous” and “responsive” in science education in particular ways (e.g., Thompson et al., 2016), I use the terms in my study in a more restrictive sense. I define rigorous science teaching practices as attending to the three dimensions of the Framework for K-12 Science Education (disciplinary core ideas, scientific and engineering practices, and crosscutting concepts) and responsive practices as attending to the development of students’ science ideas in classroom discourse around investigations of observable phenomena. Thus, teachers could be rigorous but not responsive if they enforce principles of matter and energy conservation but do not elicit and use students’ prior knowledge to inform instruction. On the other hand, teachers could be responsive but not rigorous if they support students in sharing their ideas but do not attend to whether those ideas conserve, conflate, or disregard matter and energy conservation.
Throughout this dissertation, I sometimes use the phrase “rigorous and responsive science teaching practices” as a shorthand; however, the complete statement is always restricted to those practices associated with Carbon TIME boundary objects. Thus, I do not intend to generalize the findings of this study to the entirety of a teacher’s practices nor to others’ definitions of rigorous and responsive. Ideally, by implementing an innovative science curriculum and engaging in sensemaking about particular Carbon TIME boundary objects, teachers will make progress towards learning how to enact both rigorous and responsive science teaching practices. Although I did not investigate whether changes in teacher learning were associated with student learning (as it was outside the scope of my study), I was able to determine whether shifts in teachers’ goals for teaching and learning, practical knowledge, or social commitments to various communities were likely to result in students’ engagement in three-dimensional science learning by examining whether those shifts were more or less aligned with reform visions of science education.

Teachers’ Sensemaking About Implementation of an Innovative Curriculum

I used the concept of sensemaking from organizational theory to investigate teachers’ implementation of Carbon TIME. According to Weick (1995), sensemaking is the process of making something “sensible” and is a social process that is ongoing, retrospective (and prospective), grounded in identity construction, focused on and extracted by cues in the external environment, and driven by plausibility rather than accuracy. In other words, sensemaking is a person’s answer to the question of “what’s the story?” about their critical noticing of an event, usually one that is extraordinary, unexpected, or disruptive (Weick, Sutcliffe, & Obstfeld, 2005).
In the context of this study, I defined sensemaking as critical noticing that involves action situated in context over time, with the contexts being the PD and classroom enactment settings. Researchers have argued that sensemaking is a crucial dimension of the implementation process (Spillane, Reiser, & Reimer, 2002) and can provide insight into how teachers make decisions (Drake & Sherin, 2006). I note that not all decisions require sensemaking. Investigating teachers’ sensemaking about curriculum implementation can yield insights about what teachers find unexpected or disruptive. For example, Allen and Penuel (2015) used sensemaking to investigate teachers’ responses to PD focused on NGSS and found differences in how well teachers managed the ambiguity and uncertainty inherent in using materials developed by researchers in one context and used by teachers in the context of their local settings. I conceptualized implementation of Carbon TIME as a disturbance in teachers’ ecology of practice that could trigger teachers’ sensemaking. As Figure 2 shows, participation in the Carbon TIME project involved implementation of three Carbon TIME units in one school year, with each unit spanning three to four weeks and including numerous instructional materials, investigations, and administration of student assessments before and after each unit. Teachers invested a considerable amount of time and resources to plan for implementation of three units, including determining what to exclude in order to make room for Carbon TIME or how to integrate Carbon TIME into their usual curriculum. In taking an ecological perspective on teachers’ practices, I follow Zhao and Frank’s (2003) example of conceptualizing teachers’ technology use in schools as an “organic, dynamic, and complex” phenomenon (p. 810) that considers relationships among factors and not just identification of factors.
Of particular relevance to this study is the idea that “ecosystems have the tendency or ability to achieve homeostasis or internal equilibrium, a key ecological phenomenon” (p. 811). In conceptualizing implementation of Carbon TIME as a disturbance in teachers’ usual practices that have achieved homeostasis, I theorized that teachers’ sensemaking about unexpected or disruptive events could provide insights into tensions between teachers’ usual practices and NGSS practices. Furthermore, in focusing on sensemaking as critical noticing that involves action situated in context over time, I could trace these tensions over time and analyze how they materialized and whether they got resolved or not, and why. Additionally, I could examine whether the disturbance created by teachers’ sensemaking about Carbon TIME implementation shifted teachers’ ecology of practice to a new state of equilibrium.

Figure 2. Conceptualization of teachers’ implementation of Carbon TIME as a disturbance in teachers’ ecology of practice

Therefore, I used the concept of boundary objects in conjunction with organizational theory to investigate how teachers, as boundary crossers, engaged in sensemaking about boundary objects across settings and over time. From a design perspective, one affordance of conceptualizing curriculum materials as boundary objects is the ability to highlight features of each setting that influence teachers’ reasoning and practices (see Cobb, Zhao, & Dean, 2009). Thus, exploring teachers’ sensemaking across the settings of PD and classroom enactment over time afforded the opportunity to examine how various settings gave rise to different ways of knowing and being in those settings (Kazemi & Hubbard, 2008; Putnam & Borko, 2000). These differences are crucial for understanding how context influences teachers’ sensemaking, including how local school settings enhance or inhibit teachers’ learning of rigorous and responsive science teaching practices.
Conceptual Framework for Investigating Teachers’ Sensemaking

Figure 3 shows my conceptual framework for investigating teachers’ sensemaking about their implementation of Carbon TIME, which I developed based on literature from organizational sensemaking as well as science education and teacher professional development. My central phenomenon of interest was teachers’ sensemaking as a result of implementing Carbon TIME, which I conceptualized as a disturbance in teachers’ ecology of practice. The goals and resources that influenced teachers’ sensemaking included: teachers’ goals for teaching and learning; teachers’ practical knowledge, which is defined as the integration of teachers’ beliefs, formal knowledge, and experiential knowledge (van Driel, Beijaard, & Verloop, 2001); and, teachers’ social commitments to various communities, which is connected to the organizational aspect of sensemaking in terms of considering individuals as social actors within social organizations that have particular purposes, norms, and expectations.

Figure 3. Conceptual framework for investigating teachers’ sensemaking about Carbon TIME curriculum implementation over time

The outcomes of sensemaking included: teachers’ decisions to use, not use, or modify curriculum materials or enactment; teachers’ reflections about outcomes of modifications or enactment; and, teachers’ reflections about any of the goals and resources that influenced sensemaking. I theorized that teacher learning could occur over time if teachers’ reflections flowed into a feedback loop that shifted any of the goals and resources. Like feedback loops in natural ecological systems, the nature of the reflection could serve to either reinforce or shift existing beliefs, goals, or social commitments in a way that either maintained existing practices or shifted them toward reform visions of science teaching and learning.
The development of this conceptual framework for investigating teachers’ sensemaking about implementation of an innovative science curriculum across the settings of PD and classroom enactment over time led me to formulate three research questions:

(1) What are patterns in teachers’ occasions of sensemaking?
    (a) What are the goals and resources that influence teachers’ sensemaking?
    (b) What are the outcomes of teachers’ sensemaking?
(2) How do teachers’ commitments to social relationships in various communities influence their sensemaking?
(3) How do the outcomes of teachers’ sensemaking contribute to feedback loops that support teachers in making progress toward learning of rigorous and responsive science teaching practices?

Research Question 1 focuses on the central phenomenon, including what teachers engage in sensemaking about. Research Question 2 focuses on the third goal and resource—teachers’ commitments to social relationships in various communities—as an affordance of using the concept of organizational sensemaking. Research Question 3 focuses on the feedback loop between the outcomes and goals and resources; that is, teachers had the opportunity to engage in sustained sensemaking over time as a result of agreeing to implement at least three Carbon TIME units over one school year. The question was: Would implementation of Carbon TIME disturb teachers’ ecology of practice in ways that were productive for their learning of rigorous and responsive science teaching practices? In summary, the context of the Carbon TIME project provided an opportunity to explore teachers’ sensemaking about an innovative science curriculum that provided teachers with a coordinated system of curriculum, assessments, and PD. Teachers’ implementation of Carbon TIME may illuminate key tensions around what teachers choose to engage in sensemaking about versus what researchers hoped teachers would engage in sensemaking about.
That is, researchers designed the curriculum materials and PD to influence teachers to focus on particular aspects of science teaching that they thought were important—for example, engaging students in academically productive talk (Michaels & O’Connor, 2012) through scaffolding classroom discourse around investigations of observable phenomena. However, teachers may choose to focus their attention elsewhere, missing an opportunity to engage in sensemaking about an aspect of the curriculum that could support them in making progress towards developing rigorous and responsive science teaching practices.

Subjectivities

As a qualitative researcher, I must identify, name, and describe how my own subjectivities have influenced particular aspects of the research process in this study. As Glesne (2006) wrote about becoming a qualitative researcher:

When you monitor your subjectivity, you increase your awareness of the ways it might distort, but you also increase your awareness of its virtuous capacity. You learn more about your own values, attitudes, beliefs, interests, and needs. You learn that your subjectivity is the basis for the story that you are able to tell. It is the strength on which you build. It makes you who you are as a person and as a researcher, equipping you with the perspectives and insights that shape all that you do as researcher, from the selection of topic clear through to the emphasis you make in your writing. Seen as virtuous, subjectivity is something to capitalize on rather than to exorcise. (p. 123)

My interpretation of becoming a qualitative researcher (in the sense that we are always becoming) is that my subjectivities at the time of the study and at this time, as I write this dissertation many months after data collection, can both help and hinder my interpretations of the data and research process.
I see at least two ways in which my subjectivities have influenced the research—my professional background and my role as a graduate student researcher in the Carbon TIME project—and I describe those subjectivities further in the following sections.

Professional Background

First, I am Asian-American and female, and these aspects of my identity influence ways in which others perceive me and I perceive others. As a petite woman, I imagine that I could be perceived as less threatening and authoritative, which could work in my favor as teachers may be more willing to open up to me about their concerns and frustrations. I attended all the face-to-face PD sessions in the Carbon TIME Midwest Network (described further in Chapter Three) and interacted with Carbon TIME teachers both formally and informally (for example, during meals). As a qualitative researcher, I approached my interactions with teachers through the lens of trying to understand their thoughts and feelings (as opposed to judging them). Thus, it was important that I was seen as someone they could trust, especially with their challenges and frustrations. Second, I had 11 years of secondary science teaching experience in public schools in Maryland and California before attending a graduate program in Michigan. Although I taught primarily 9th grade Earth Science due to my undergraduate degree in Earth and Planetary Sciences, I also taught Chemistry, Astronomy, 8th grade Physical Science, and 6th grade Earth Science. My science content knowledge mattered because Carbon TIME was taught primarily by biology teachers. Although my biology content knowledge was sufficient to understand the basic concepts and curriculum, I could not engage in conversation with high school biology teachers about deeper content, particularly at the Advanced Placement level.
However, my lack of deep biology content knowledge may have worked in my favor because I could not critique teachers’ content knowledge, which again may have led to teachers being more comfortable sharing their concerns with me. In particular, the case study teacher I worked with, Ms. Callahan, had a high level of content knowledge and was uncomfortable being video-recorded; I believe that my lack of deep content knowledge contributed towards her feeling more comfortable having me as an observer in her classroom. Although I enjoyed working with young people, I was unsatisfied with my experiences as a classroom teacher and left the profession, hoping to enter educational research as a way to discover other ways in which I could support the improvement of science education for all students. After some experiences working as a graduate student research assistant on several projects with science education faculty, I realized that I valued working closely with teachers in their classrooms to improve their teaching practices and to help their students learn science in meaningful ways. As an experienced teacher, I empathized with teachers’ teaching conditions and local expectations. Furthermore, I had attended many PD sessions at local, state, and national levels, yet I felt that even after a decade of teaching and a Master’s degree in Science Education, I wasn’t where I wanted to be with my teaching. Thus, when the opportunity arose to conduct a dissertation study with high school teachers participating in an extensive curriculum implementation project, I proposed a dissertation study in which I could examine teachers’ sensemaking as they crossed the boundaries of classroom enactment and PD over time. In other words, my choice of topic for this study resulted partly from my own curiosity and frustration about why PD had not seemed to work for me when I was a classroom teacher.
Finally, even though I am becoming a qualitative researcher, I am still at heart a classroom teacher. This strong identity has influenced my research in terms of how I want to work with teachers and how I want to portray teachers in my work. I believe in collaborating with teachers rather than studying teachers as subjects. During this study, I found that I sometimes had a tendency to interpret teachers’ actions and words from my perspective only; I had to remind myself to take a step back and try to view the situation from the perspective of the teacher in order to interpret their actions and words in terms of ways that might make sense to them and not necessarily to me. My analysis involved taking multiple perspectives, particularly when I had been looking at something for too long. I would consult with the case study coach of the teacher for a different perspective, including affirmation about my interpretations of the data. Often I would simply take a break and come back to the data at a later time, when I had perhaps gained a different understanding of what seemed to be going on by having looked at another case. My ultimate purpose was to portray teachers’ experiences through an empathetic lens.

Roles in Carbon TIME

First, I note that I was a case study coach for Ms. Callahan in this study. By working closely with her throughout the year, I gained a more visceral understanding of her and her teaching practices, and this understanding colored my analysis of her case study data. However, by having some time lapse between data collection and data analysis, I was able to see our interviews anew and realized that I had forgotten many things we had talked about in our interviews. For example, I had not remembered the extent to which she had talked about students’ Pre- and Post-Test scores. Thus, systematic analyses of the data and reliability coding with a set of second coders helped ensure that my interpretations of the data were valid.
Second, I was a member of two Carbon TIME research teams. In my second year of working on the project (2016-2017), I was a co-leader of the case study research team. In this role, I helped lead and set the agenda for team meetings, designed teacher interview protocols, and collected data at face-to-face PD sessions (video-recordings and field notes). Although I use “we” to refer to the research team, I led the development of the teacher interview protocols that produced the primary data source for my study. I was also a member of the network research team as one of the qualitative researchers. In this role, I analyzed a small portion of the survey data related to teaching practices, led coding and reliability coding for all the teacher interview transcriptions, and shared my ongoing analysis of how professional networks seemed to be influencing teachers’ sensemaking about curriculum materials as boundary objects. In addition, I was co-author on an invited book chapter written by network team members (three faculty and three graduate student researchers) about building networks to support effective implementation of science curriculum materials in the Carbon TIME project. These experiences have influenced my perspective in two ways. First, being a member of the case study research team for two years afforded me an insider perspective on the case study data. Not only was I a case study coach for one of the teachers, but I also had access to weekly meetings with the other case study coaches. This access afforded me the opportunity to check my interpretations of a teacher’s case with the person who was closest to the teacher in terms of having worked with them in their classrooms. Second, being a member of the network research team and being exposed to some of the quantitative data being collected and analyzed in the larger research project afforded me the opportunity to expand my perspective beyond the eight case studies.
That is, I could situate the case study data within the larger context of what seemed to be happening for all Carbon TIME teachers. This perspective helped me in particular with the two case study teachers in the Carbon TIME Northwest Network—Ms. Wei and Ms. Nolan—who often referenced other Carbon TIME teachers in their interviews. Finally, I note that I was not a part of the first Carbon TIME project, which had focused on curriculum and assessment development. Therefore, I did not feel a particular attachment to the success or failure of the curriculum during classroom enactment. This detachment enabled me to engage in data collection and analysis without an emotional stake in whether teachers liked the curriculum or thought it supported students’ learning and engagement. Rather, I could focus on understanding teachers’ experiences, reasoning, and sensemaking about their implementation of the curriculum. Thus, to this study I brought a combination of my professional experiences as a public school secondary science teacher and a qualitative educational researcher committed to working with teachers in the context of their local school settings.

Overview of Chapters

In Chapter Two, I review relevant literature on science education reform, professional development, and organizational sensemaking and relate it to my conceptual framework for investigating teachers’ sensemaking about implementation of an innovative curriculum. In Chapter Three, I describe my methods of data collection and data analysis. In Chapter Four, I present findings related to each of the research questions, including narratives of selected occasions of sensemaking that highlight particular teachers’ cases and show variations in teachers’ patterns of sensemaking. In Chapter Five, I discuss the findings within the context of current science education reforms and share implications for science teacher PD, teacher learning, and directions for future research.
Chapter Two

Literature Review

In this chapter, I review literature from science education, mathematics education, teacher professional development, and organizational sensemaking that is relevant to my conceptual framework for investigating teachers’ sensemaking about implementation of an innovative science curriculum across the settings of PD and classroom enactment over time. I begin with a combination of literature from science education, mathematics education, and teacher professional development that situates the Carbon TIME project and curriculum within scholarly conversations about challenges in supporting teachers’ implementation of reform-oriented innovations such as NGSS. Then, I review literature about boundary objects to inform my conceptualization of Carbon TIME curriculum materials as boundary objects that cross the boundary between PD and classroom enactment settings. Finally, I review literature from organizational sensemaking and studies in educational research that have used sensemaking to gain insight into the central phenomenon of this study. I end the chapter with a synthesis of the findings that led to my selection of particular Carbon TIME curriculum materials as boundary objects that had the most potential to trigger teachers’ sensemaking and therefore provide insights into the outcomes and the goals and resources of teachers’ sensemaking about implementation of the curriculum.

Using PD to Support Teachers’ Implementation of Innovative Curricula

Science Education Reform

Carbon TIME is a distinctive curriculum and worthy of study for several reasons. First, the curriculum is based on learning progressions for carbon cycling and energy flow in socioecological systems (Jin & Anderson, 2012a; Mohan, Chen, & Anderson, 2009).
Learning progressions (LPs) are descriptions of increasingly sophisticated ways of thinking about a topic (National Research Council, 2012) and hold promise as frameworks in science education that can help teachers focus their attention on the development of students’ ideas from more naïve to more scientific conceptions (Gotwals & Alonzo, 2012). This way of viewing students’ ideas is qualitatively different from the traditional “right or wrong” approach and from the commonly held perspective that students’ non-canonical ideas are “misconceptions” to be corrected. Alonzo (2011) used the metaphor of LPs as a roadmap to explain that, “by laying out where students have been and where they are going, learning progressions provide a view that is broader than a single instructional goal and can help to ensure that this bigger picture is ultimately what guides instruction” (p. 126). The few research studies that have examined teachers’ use of LPs suggest that teachers do not readily appropriate LPs in ways that researchers intend and that sustained PD is needed to support teachers’ enactment of LP-based formative assessment practices (Furtak, 2012; Furtak, Morrison, & Kroog, 2014). Thus, I expect that the teachers participating in the Carbon TIME project will need sustained support with new ideas, such as reasoning about students’ thinking in terms of LPs.
Researchers designed Carbon TIME to align three aspects of a coordinated LP-based system (Gunckel, Mohan, Covitt, & Anderson, 2012; Jin & Anderson, 2012b): (1) LP frameworks that describe a range of students’ ideas about carbon cycling and energy flow in socio-ecological systems, (2) student assessments that help teachers identify where students are on the progression, and (3) instructional resources and tools that support teachers’ and students’ learning, specifically around the NGSS scientific practices of analyzing and interpreting data, constructing explanations, and engaging in argument from evidence. By aligning these three aspects, researchers hoped to provide teachers with a coherent LP-based system that would fully support teachers’ implementation of the curriculum. Carbon TIME is also distinctive because of the amount of materials that it provides for teachers. Although the comprehensiveness of Carbon TIME is impressive, it may also be overwhelming for teachers, particularly as they try to coordinate use of all the materials while also managing incentives (e.g., grades) for students (Cohen & Ball, 2001). The curriculum includes six units—an introductory unit (Systems and Scale), three organismal units (Plants, Animals, Decomposers), and two large-scale units (Ecosystems, Human Energy Systems). Each unit follows the same Instructional Model of an inquiry and application sequence that helps students connect macroscopic observations to patterns in the observations to abstract atomic-molecular models of phenomena (see Figure 27 in APPENDIX I). Resources for each unit include educative teacher guides, videos, presentation slides, online student assessments, and Process Tools to scaffold classroom discourse around unit investigations.
Thus, teachers could engage in sensemaking about logistical and technical issues, such as how to manage materials or get their students online to take the assessments, as well as conceptual issues, such as what counts as more sophisticated reasoning. Validated student assessment data from pilot testing of Carbon TIME units from 2011-2014 showed that instruction using Carbon TIME could increase student learning (as measured using LP-based assessments). However, some teachers’ classrooms had little or no pre-post learning gains while others had significant gains (Doherty, Draney, Shin, Kim, & Anderson, in preparation). The variability in these assessment results raised a question: Why were some teachers more successful and others less so? Initial analyses of classroom video data indicated that some of the discrepancy could be attributed to differences in teachers’ classroom practices. That is, although teachers had access to the same curriculum materials, the ways in which they used, did not use, or modified the materials and their enactment of the curriculum may have influenced students’ learning gains. Thus, in the next section, I review literature related to teachers’ implementation of reform-oriented curricula in mathematics and science education.

Curriculum Implementation

The study of curriculum implementation and its effects on teacher learning has a long history in educational research. Ball and Cohen (1996), for example, argued about 20 years ago that curriculum materials could act as “agents of instructional improvement” (p. 6) since they were already scaled up in the form of textbooks, and textbooks are a routine part of schools. However, they cited several problems with teachers’ use of curriculum materials, such as a lack of strong curricular guidance: “teachers’ understanding of the material, their beliefs about what is important, and their ideas about students and the teacher’s role all strongly shape their practice” (p. 6).
And, ultimately, because “the curriculum that counts is the curriculum that is enacted” (p. 8), the authors recommended that curriculum developers design the intended curriculum with the enacted curriculum in mind. Since then, educational researchers have investigated various aspects of curriculum implementation, such as elementary teachers’ adaptations of a reform-oriented mathematics curriculum (Drake & Sherin, 2006) and elementary teachers’ ability to adapt science lessons (Davis, Beyer, Forbes, & Stevens, 2011). Based on their work with elementary teachers implementing a reform-oriented mathematics curriculum, Sherin and Drake (2009) developed a curriculum strategy framework that consisted of describing teachers’ particular orientations to three interpretive activities—reading, evaluating, and adapting curriculum materials—before, during, and after instruction. The authors found three general approaches towards adapting lessons: omitting, replacing, and creating new components. In connecting teachers’ approaches to evaluating and adapting curriculum materials, the authors found that teachers created new materials when they evaluated the material prior to instruction in terms of the teacher and during instruction in terms of the students. The authors determined that this connection between a teacher’s approach to evaluation and adaptation “illustrates a proactive sense of collaborating with the curriculum” (p. 488). This finding aligned with Remillard’s (2005) description of curriculum use as participation with the text, in which the assumption is that “teachers and curriculum materials are engaged in a dynamic interrelationship that involves participation on the parts of both the teacher and the text” (p. 221). She argued that the distinction between this perspective and the perspective of curriculum use as interpretation of text is that “researchers in this group seek to study and explain the nature of the participatory relationship” (p. 221).
More recently, Zangori, Forbes, and Biggers (2013) examined elementary teachers’ use and modifications of science curriculum materials to promote explanation construction. Using a matched set of lesson plans and video-recordings of lesson enactments, the authors analyzed the extent to which teachers supported students’ explanation construction and found that “teachers’ conceptions of explanation construction and concerns about the abilities of their students to engage in scientific explanations impacted their curricular adaptations” (p. 989). For example, they found that teachers emphasized giving priority to evidence over students’ formulation of evidence-based explanations because of their conception of scientific explanation construction as encompassing only engagement with the phenomena and data collection, not data analysis. That is, teachers prioritized the hands-on aspect of conducting investigations; thus, their modifications to the curriculum were not productive in supporting students’ formulation of evidence-based explanations. Together, these studies highlight that scholars have investigated teachers’ curriculum use in terms of their adaptations or modifications of the curriculum and found variations in how teachers’ approaches influence the type and productivity of the modification in terms of alignment with reform-oriented visions. In developing their curriculum strategy framework, Sherin and Drake (2009) showed how investigating the timing and nature of teachers’ interactions with the curriculum can illustrate connections among teachers’ practices as they planned for, enacted, and reflected on enactment. Studies with secondary science preservice teachers indicate that planning for enactment is an important activity in which teachers have the opportunity to design high-quality learning tasks (Kang, Windschitl, Stroupe, & Thompson, 2016; Kang, 2017).
Thus, one implication for my study is to consider how teachers interact with the curriculum before, during, and after enactment. Furthermore, Remillard (2005) presented a framework of the teacher-curriculum relationship in which she proposed the possibility that enacting curriculum might prompt teacher learning and change. However, because there were few studies of the participatory relationship in teachers’ curriculum use over time, she suggested that future studies might examine “whether use of unfamiliar curriculum materials might be viewed as a form of teacher development” (p. 239). Indeed, DeBarger et al. (2017) investigated purposeful science curriculum adaptation with middle school teachers as a strategy to improve teaching and learning in three key areas:

(1) eliciting and interpreting students’ ideas at the beginning and end of investigations, (2) creating a classroom culture for academically productive talk, and (3) adjusting teaching when students’ difficulties in understanding could not be easily overcome. (p. 72)

Using a quasi-experimental design, they found that purposeful adaptation targeted toward eliciting and responding to students’ ideas can lead to differences in student learning outcomes. As I have described previously in this chapter and illustrated in Figure 2, teachers’ implementation of Carbon TIME is a significant disturbance in their ecology of practice in terms of the amount of time and effort required to plan for enactment of three units. In addition to being unfamiliar to teachers as materials created by researchers, Carbon TIME may also represent an unfamiliar vision of science teaching and learning that, when enacted, creates discomfort. How teachers react to that discomfort can reveal tensions between teachers’ goals and Carbon TIME goals, thereby providing insight into why they enact the curriculum in particular ways. As Zangori et al.
(2013) found, teachers may have particular conceptions of scientific practices that influence their modifications of the curriculum or their enactment to be more or less productive in supporting students’ engagement in those practices.

Professional Development

Now that I have reviewed some literature related to teachers’ modifications of reform-oriented curriculum, I turn to how PD can support teachers’ implementation. As I illustrated in Figure 1, one goal of the Carbon TIME project was to support teachers’ implementation of the curriculum by using a cohort model with a designed network to facilitate communication among Carbon TIME teachers and the flow of information from experts to novices. Research on social networks within a school shows that the level of implementation of an innovation achieved by a school is related to teachers’ access to resources and expertise through formal school structures and informal social networks (Frank, Zhao, Penuel, Ellefson, & Porter, 2011; Penuel, Riel, Krause, & Frank, 2009; Penuel et al., 2010; Spillane, Min Kim, & Frank, 2012). Teachers’ professional networks are especially important as researchers scale up innovations to reach more teachers (e.g., Cobb & Jackson, 2011). Some Carbon TIME teachers, however, were the only teachers at their schools implementing Carbon TIME. Therefore, researchers aimed to support teachers by developing a network of Carbon TIME colleagues that they could access outside of the limited time in face-to-face PD. In my conceptual framework for investigating teachers’ sensemaking about implementation of Carbon TIME, I posited that one of the goals and resources that influence sensemaking is teachers’ social commitments to their various communities. The Carbon TIME networks could be one of those communities if teachers actively sought or provided help through their connections in the network.
Colleagues in Carbon TIME networks could also be a source of inspiration and emotional support, particularly for teachers who may be teaching in school settings in which they feel professionally isolated.

Literature on Boundary Objects and Sensemaking

Combining the concept of boundary objects with organizational sensemaking is useful for analyzing how boundary objects can coordinate teachers’ work between the PD and classroom enactment settings, given differences in the goals, purposes, and values of the people in those settings. In this section, I review literature related to boundary objects, organizational sensemaking, and educational research that has used sensemaking to investigate various issues.

Boundary Objects

In my conceptual framework for investigating teachers’ sensemaking about implementation of Carbon TIME, I conceptualized curriculum materials as boundary objects that crossed the boundary between the PD and classroom enactment settings. Boundary objects are material artifacts that people act toward and with, including the following dimensions: (1) temporality, (2) basis in action, (3) subjection to reflection and local tailoring, and (4) meanings distributed throughout any number of these dimensions (Star & Griesemer, 1989). For example, curriculum materials that include thinking tools for students can be boundary objects because they appear in both the PD and classroom enactment settings but may have different meanings in those settings due to different purposes and participants. The notion of boundary objects is most useful at the level of organizations because of the invisibility of coordinating work across settings in which different organizations have different social norms, goals, and purposes, resulting in differing meanings in those settings.
Star (2010) theorized that boundary objects have both ill-structured and well-structured aspects and that the meanings people make of these objects in various settings do not need to be the same in order to be useful in coordinating work across settings. In the context of this study, coordinating work across settings means that the curriculum materials are able to support students’ engagement in three-dimensional science learning even if teachers do not share the same understanding of the boundary object as the researchers in the PD setting or the students in the classroom enactment setting. For example, a well-structured aspect of the Carbon TIME Process Tools is the incorporation of the crosscutting concepts of matter and energy conservation in the form of The Three Questions: Where are molecules moving? How are atoms in molecules being rearranged into different molecules? What is happening to energy? These concepts are also represented in the form of two short phrases that accompany The Three Questions: Atoms last forever, and energy lasts forever. Thus, students have the opportunity to engage in three-dimensional science learning through their interactions with the Process Tools even if their teacher does not have the same understanding of the Process Tool as the curriculum developers and researchers. As boundary crossers, however, teachers have a unique opportunity to learn at the boundary. In their review of boundary objects and boundary crossings in educational research, Akkerman and Bakker (2011) stated simply that “all learning involves boundaries” (p. 132). They identified four dialogical learning mechanisms at the boundaries between different sociocultural worlds: identification, coordination, reflection, and transformation.
Briefly, they are:

(a) identification, which is about coming to know what the diverse practices are about in relation to one another; (b) coordination, which is about creating cooperative and routinized exchanges between practices; (c) reflection, which is about expanding one’s perspectives on the practices; and (d) transformation, which is about collaboration and codevelopment of (new) practices. (p. 150)

The authors stressed that boundaries are not only barriers but also potential resources for learning. Thus, one implication for this study is that teachers have the potential to learn at the boundary as they cross between the social worlds of the PD and classroom enactment settings. I imagined that, as teachers crossed multiple times over the course of implementation, they had the opportunity to make progress towards learning rigorous and responsive science teaching practices, especially if they had a participatory relationship with the curriculum and viewed it as something to collaborate with and not merely follow. In other words, multiple crossings may open up the possibility of teacher learning about themselves as teachers and their students as learners, with changes for both the boundary crosser and the boundary object.

Sensemaking

The concept of boundary objects aligns well with the concept of sensemaking from organizational theory (Weick, 1995; Weick, 2001; Weick, Sutcliffe, & Obstfeld, 2005), which I used to investigate how and why teachers do what they do. Sensemaking can be described as how a person answers the question of “what’s the story?” about their critical noticing of an event, usually one that is extraordinary, unexpected, or disruptive (Weick et al., 2005). In the case of teachers, I posit that sensemaking could occur as teachers engage in activities such as lesson planning in which they anticipate potential problems with enactment.
Sensemaking is simultaneously retrospective (one does not know one has made a mistake until it has been made) and prospective (what should I do next?). Understanding teachers’ sensemaking is crucial for improving teaching practices because sensemaking precedes decision making. That is, by the time a decision has been made, the most important action—sensemaking—has already been done (Weick, 2001). Sensemaking is literally the active process of making something sensible. Weick (1995) described sensemaking as a process that is: (1) grounded in identity construction, (2) retrospective, (3) enactive of sensible environments, (4) social, (5) ongoing, (6) focused on and by extracted cues, and (7) driven by plausibility rather than accuracy. Another way to conceptualize all seven properties of sensemaking is the phrase “how can I know what I think until I see what I say?” (Weick, 1995, p. 18). In other words, I know who I am by discovering how and what I think and how others respond to what I say. Sensemaking is social, ongoing, and retrospective because what I say is for a particular audience, my talk is spread across time, and to learn what I think, I look back over what I said earlier. Sensemaking is about what’s plausible (now that I see what I’ve said and how you’ve responded to it), not what’s accurate. This concept of organizational sensemaking is appropriate for studying school settings for two reasons. First, schools operate as organizations at multiple levels. At the classroom level, a teacher and her students establish and negotiate norms, expectations, and roles and responsibilities. At the school level, teachers (and students) are part of a larger social structure in which teachers are held accountable to other teachers and administrators, and vice versa. That is, rather than acting in isolation, teachers’ social interactions with their students and their colleagues and administrators over time create collective norms and expectations. Second, Weick et al.
(2005) proposed that sensemaking starts with chaos (in the sense that something unexpected happens). Although schools and classrooms may seem dominated by structure and routines, they are also places where unpredictability can cause teachers to identify something as a problem, thereby engaging in critical noticing of it. Identifying something as a problem is an important first step for the effective reflective practitioner (Loughran, 2002). By using sensemaking in my study, I could investigate teachers’ critical noticing of interactions between the curriculum and themselves and between the curriculum and their students. Although this notion of critical noticing seems similar to that of noticing developed by scholars in mathematics teacher education (van Es & Sherin, 2002, 2008), there are some important differences between the two concepts. van Es and Sherin proposed that learning to notice as a classroom teacher involves three key aspects:

(a) identifying what is important or noteworthy about a classroom situation; (b) making connections between the specifics of classroom interactions and the broader principles of teaching and learning they represent; and (c) using what one knows about the context to reason about classroom interactions. (p. 573)

The first aspect—identifying what is noteworthy about a classroom situation—is most similar to the notion of critical noticing in organizational sensemaking in terms of identification as the significant mental process. The second and third aspects include making connections and reasoning—processes that are also present in sensemaking. Therefore, critical noticing in organizational sensemaking is defined more narrowly than the notion of noticing that has been developed in mathematics education. For the purposes of this study, sensemaking is a more appropriate theoretical construct because of my conceptualization of curriculum implementation as an intentional disturbance in a teacher’s ecology of practice.
However, the notion of noticing may be useful in understanding how or why teachers engage in critical noticing because noticing was developed by researchers in mathematics education in the particular context of classroom teaching. Finally, sensemaking is useful for understanding implementation because teachers’ decisions are public and irrevocable, and therefore teachers’ sensemaking is often an attempt to justify their decisions (Weick, 2001). Teachers’ repeated justifications can be linked to their commitments to social relationships in their communities. Within classrooms, for example, teachers may be committed to social relationships with their students (e.g., doing fun experiments to catch students’ interest). Within schools, teachers may be committed to social relationships with their colleagues and administrators (e.g., sharing resources to build a reputation as someone to go to for help). And within Carbon TIME networks, teachers may be committed to social relationships with Carbon TIME colleagues or staff. Although educational researchers have not used the concept of sensemaking extensively, particularly in science education, some researchers have used it to investigate issues such as reading policy implementation (Coburn, 2001, 2005), teachers’ implementation of mathematics curriculum (Drake & Sherin, 2006; Marz & Kelchtermans, 2013), teachers’ sensemaking of student learning data (Bertrand & Marsh, 2015), teachers’ sensemaking about language policy implementation in bilingual classrooms (Palmer & Rangel, 2011), and teachers’ sensemaking in response to PD focused on the Next Generation Science Standards (Allen & Penuel, 2015). The use of sensemaking affords researchers the opportunity to examine teachers’ meaning making processes about new policies or curricula that could cause confusion about what a policy means or how to implement a curriculum.
For example, Allen and Penuel (2015) found that some teachers were able to manage ambiguity, uncertainty, and perceived incoherence about NGSS productively while others were not. Most recently, Marco-Bujosa, McNeill, Gonzalez-Howard, and Loper (2017) used sensemaking to explore teacher learning about argumentation using an educative reform-oriented science curriculum. They found variation in how five middle school teachers used the curriculum. Teachers who used it solely to support student learning did not seem to recognize that it could promote their own learning. In contrast, they found that “teachers who actively engaged in their own learning while adapting the curriculum to their context made learning gains, indicating a need for teacher active reflection to learn new practices” (p. 141). This finding raises the question of how dialogical learning mechanisms may work at the boundary for different teachers.

Synthesis

Based on this literature review, I hypothesized that, of the potential objects of sensemaking that could arise from teachers’ implementation of Carbon TIME, particular boundary objects were more likely to trigger teachers’ sensemaking due to ambiguity and uncertainty between their meanings in the PD setting versus the classroom enactment setting. Table 1 shows the six Carbon TIME boundary objects that I selected to focus on for my study, including descriptions of their intended purposes in the curriculum and their potential for triggering teachers’ sensemaking. The six boundary objects are: (1) Instructional Model, (2) Expressing Ideas Tool, (3) Predictions Tool, (4) Evidence-based Arguments Tool, (5) Explanations Tool, and (6) Pre- and Post-Tests. First, the Carbon TIME Instructional Model may trigger teachers’ sensemaking due to ambiguity and uncertainty about its structure.
The Instructional Model is represented by a triangle with a vertical component that starts with observations at the base, then moves to patterns, and finally models at the apex (see Figure 27 in APPENDIX I). The triangle shape represents the relative number of observations, patterns, and models: there are many observations that can be described by a few patterns, which can be explained by even fewer atomic-molecular scale models. Thus, the level of abstractness increases, which can pose problems for teachers and students in making connections between macroscopic scale observations and atomic-molecular scale models.

Table 1

Selected Carbon TIME Boundary Objects of Interest

Instructional Model
Intended purpose in the curriculum: Provides a broad overview of the structure of a unit, which includes the inquiry-application sequence and indicates where each Process Tool fits in the IM to support classroom discourse.
Potential for triggering teachers’ sensemaking: The graphic that represents the IM is complex and contains two dimensions: a horizontal dimension that shows the direction of student learning and a vertical dimension that shows the concrete-abstract nature of scientific knowledge about natural phenomena.

Expressing Ideas Tool
Intended purpose in the curriculum: Scaffolds divergent classroom discourse around what students think happens during a carbon-transforming process (e.g., ethanol burning, photosynthesis, respiration).
Potential for triggering teachers’ sensemaking: Students may have difficulty sharing their ideas about what materials go into or come out of the process and what questions they have about the phenomena.

Predictions Tool
Intended purpose in the curriculum: Scaffolds divergent classroom discourse around what students predict they will observe in the investigation using the Three Questions.
Potential for triggering teachers’ sensemaking: Students may have difficulty connecting macroscopic and atomic-molecular scales and distinguishing among matter movement, matter change, and energy.

Evidence-based Arguments Tool
Intended purpose in the curriculum: Scaffolds convergent classroom discourse around evidence generated from the investigation.
Potential for triggering teachers’ sensemaking: Students may have difficulty identifying evidence for matter and energy changes, drawing conclusions, and coming up with unanswered questions.

Explanations Tool
Intended purpose in the curriculum: Scaffolds convergent classroom discourse around constructing explanations about the phenomena.
Potential for triggering teachers’ sensemaking: Students may have difficulty differentiating between macroscopic scale observations and atomic-molecular scale mechanisms and between matter and energy changes.

Pre- and Post-Tests
Intended purpose in the curriculum: Assess students’ understanding of carbon-transforming processes using learning progression levels.
Potential for triggering teachers’ sensemaking: Students and teachers may have difficulty with the unconventional format of the questions and with how to score students’ responses for grading purposes.

The Instructional Model also has a horizontal component that identifies the direction of student learning, from divergent talk and writing during the inquiry stage (“moving up” the triangle from left to right) to convergent talk and writing during the application stage (“moving down” the triangle). The unit investigations occur towards the end of the inquiry stage and center around an observable phenomenon (e.g., ethanol burning, plants growing, mealworms eating and gaining mass). Students collect data, or evidence of matter and energy changes (e.g., mass loss or gain in the ethanol, plants, and mealworms). Then, they use these data to construct arguments from evidence and explanations of phenomena that connect their macroscopic scale observations with atomic-molecular scale explanations. Thus, as classrooms move from inquiry to application, teachers and students must shift from divergent to convergent discourse, which can pose problems if teachers do not provide explicit signals that the nature of classroom discourse is shifting from sharing many ideas to converging on one idea based on evidence from the unit investigations. Second, Carbon TIME researchers designed the Process Tools (PTs) to scaffold two types of classroom discourse.
The Process Tools are: (1) Expressing Ideas, (2) Predictions, (3) Evidence-based Arguments, and (4) Explanations (see Figures 23 through 26 in APPENDIX I for the PTs from the Systems & Scale Unit). Each Carbon TIME unit uses the Instructional Model to scaffold teachers' and students' discourse practices around scientific ideas and evidence—a unit begins with divergent thinking, in which many ideas about the phenomenon of interest are expressed and considered, and ends with convergent thinking, in which ideas are clarified and consolidated. Some or all of these discourse practices may be unfamiliar or uncomfortable for teachers and students, thus disturbing a teacher's ecology of practice and potentially promoting sensemaking about the disturbance. The topic of classroom discourse in secondary science classrooms has a rich and extensive literature base that I mention only briefly here, from Lemke's (1990) descriptions of IRE (initiation-response-evaluation) patterns, to studies of the effects of teacher talk (e.g., Moje, 1995), to making meaning through tensions between dialogic and authoritative discourse (Mortimer & Scott, 2003). The Carbon TIME project specifically used Michaels and O'Connor's (2012) Talk Science Primer in the PD to support teachers' learning of academically productive talk, which includes the following elements: a belief that students can do it, well-established ground rules, clear academic purposes, deep understanding of the academic content, a framing question and follow-up questions, an appropriate talk format, [and] a set of strategic "talk moves" (pp. 1-2). Some of these elements are related to components in my conceptual framework for investigating teachers' sensemaking about Carbon TIME. In my framework, teachers' practical knowledge is a driver of sensemaking and includes teachers' beliefs and formal knowledge.
van Driel, Beijaard, and Verloop (2001) argued that long-term PD programs should elicit, monitor, and account for teachers' practical knowledge. For example, a teacher's use of the Process Tools in her classroom may trigger sensemaking if she believes that students can't do it (engage in academically productive talk), which may cause tensions between what the tool was designed to do and what she believes her students can do. Or, a teacher's shallow understanding of the academic content may inhibit his ability to support students' deep understanding of that content. Finally, Carbon TIME researchers designed LP-based assessments to provide teachers with information about their students' LP levels. Teachers administered Pre- and Post-Tests for each unit and an overall curriculum Pre- and Post-Test that covered all six units, regardless of whether or not teachers taught those units. Thus, over the course of a year of implementation, teachers administered eight assessments and were encouraged to use the results of those tests for grading and assessment purposes. The Carbon TIME Pre- and Post-Tests could potentially trigger teachers' sensemaking for several reasons. First, the tests were administered online and could pose technical and logistical problems for teachers in terms of securing computer lab space and time for all their students to take the tests. Second, the format of the test questions was unconventional. The first part of a question asked students to select from several choices, including "all, some, or none" (forced-choice); the second part asked students to write an explanation. At the time of the study, the Carbon TIME project could provide automatic scoring only for the forced-choice part; teachers had to grade the written explanations themselves if they wanted to use students' scores for grading purposes. Grading written explanations is not necessarily a straightforward process.
For example, Talanquer, Bolger, and Tomanek (2015) explored prospective teachers' assessment practices around interpreting students' written work and found differences in how teachers framed the assessment of student understanding and attended to relevant disciplinary ideas. They found that few teachers attempted to make sense of students' ideas. Sensemaking could be triggered in my study, then, as teachers attempted to get their students online to take the assessments, score responses for grading and accountability purposes, or understand the format and content of the questions for themselves.

Conclusion

Current reforms in science education require significant shifts in teachers' and students' roles in the classroom. One challenge for science teacher educators is how to develop effective systems of support for teachers to learn new practices that engage students in three-dimensional science learning. Implementation of innovative science curricula has the potential to reach large numbers of teachers; however, differences in how teachers approach curriculum use could affect how productive teachers' modifications are for their own learning of rigorous and responsive science teaching practices and for students' engagement in three-dimensional science learning. In this study, I aimed to understand what teachers engaged in sensemaking about; what they critically noticed about their implementation of the curriculum; how their social commitments to various communities, including their Carbon TIME network, influenced their sensemaking; and how multiple boundary crossings over time could promote teacher learning of new teaching practices associated with the curriculum materials.
Chapter Three

Context and Methods

This qualitative comparative case study (Yin, 2014) explored how eight secondary science teachers engaged in sensemaking about Carbon TIME materials as they participated in professional development and enacted curriculum units with their students over the course of a year. Case study was an appropriate methodology for this study because the research questions began with "how," there was no control of behavioral events, and the study focused on how contemporary events unfolded over time (Flyvbjerg, 2011; Yin, 2014). In this chapter, I begin by describing the context of the study within a larger research project. Next, I describe the methods in detail, including information about the case study teacher participants and their respective coaches, data collection, and data analysis.

Context: A Design-Based Implementation Research Project

The context for this study was a large-scale, design-based implementation research (DBIR; Penuel, Fishman, Cheng, & Sabelli, 2011) project investigating teachers' implementation of Carbon TIME (Carbon: Transformations in Matter and Energy), a curriculum developed by science education researchers at Michigan State University. The project used a cohort and network PD model to support secondary science teachers in multiple states. In the 2015-2016 school year, the first cohort had 32 teachers participating from three states corresponding to three Carbon TIME networks (pseudonyms): Midwest (MW), Northwest (NW), and West (WE). Of note, most Midwest Network teachers were the sole participant from their school or district; most Northwest Network teachers taught in the same large urban school district; and there was only one teacher in the West Network due to issues with recruitment and timing.
Both the Midwest and Northwest Networks included veteran Carbon TIME teachers who had participated in the first Carbon TIME curriculum and assessment development project and therefore had several years of experience working with the Carbon TIME curriculum, assessments, and research team. Teachers consented to participate in the Carbon TIME project for two consecutive years. Researchers invited all teachers to volunteer as case study teachers, which entailed more intensive data collection (e.g., teacher interviews, video-recordings of classroom instruction, collection of focus student work samples, focus student interviews) beyond that of basic participation in the larger project. In return, teachers were offered one-on-one support in the form of a case study coach and additional compensation for the extra time and effort. Of the teachers who volunteered, eight were selected to participate as case study teachers. All of the selected participants had minimal or no prior experience with Carbon TIME; they were purposefully selected in order to explore how teachers new (or relatively new) to an innovative science curriculum engaged in sensemaking about implementing it in their classrooms. All teachers agreed to implement at least three Carbon TIME units during the 2015-2016 school year and decided when and how they would implement the units. Case study teachers worked with their dedicated case study coach to plan for classroom visits and data collection. Coaches were supported in their work through weekly case study research team meetings, where they shared their challenges and experiences and, as a group, discussed how to resolve logistical and theoretical issues that arose from working with the case study teachers. I provide more information about the case study coaches in the Methods section on participants. From August 2015 to August 2016, teachers attended three face-to-face PD sessions and participated in an asynchronous online PD course.
The content of the face-to-face and online PD sessions is shown in Table 2. The first face-to-face PD session was designed to introduce teachers in a Carbon TIME Network to each other and to important features of the project and curriculum, such as research goals and findings, expectations of participation, the inquiry and application sequences in the Instructional Model, how the Process Tools were designed to scaffold classroom discourse, and learning progression views of students' ideas.

Table 2
Content of Face-to-Face and Online PD Sessions in 2015-2016 School Year

Face-to-face PD, by month and year:
July & August 2015: Introduction to other teachers in the Network and to the curriculum, including research goals and findings and the Instructional Model and Process Tools to scaffold discourse; modeling of unit investigations; analysis of student responses on tests; time to select and plan for unit implementation; and distribution of materials and resources.
December 2015 (Northwest Network only): Discussion of how the Process Tools were working for teachers and students, including sharing challenges, scaffolds, modifications, strategies for student engagement, and grading and assessment practices.
February 2016: Mid-year reflections on what worked well, favorite strategies, challenges, and what teachers were most proud of; discussion of grading and assessment practices related to the Process Tools and Pre- and Post-Tests; how students were responding to Unanswered Questions; a synthesis exercise to think about the science storyline across units; and an overview of large-scale units.
August 2016: Overview of changes to units, including one- and two-turtle pathways; revisions to the Tools for clarity; divergent and convergent classroom discourse; discussion of video of Carbon TIME instruction using the lens of epistemic framing; the new online testing website; and discussion and selection of PLCs for the second year of implementation.

Online PD topics over the year: Introductions, including teaching context and experiences; reflections about the Carbon TIME units, learning progressions, and what teachers hoped to gain from being a member of the Network; reflections on what teachers noticed about students' ability to explain how matter and energy are transformed when organic matter burns, how their understanding changed, and how they could use what they learned about student understanding in one unit to plan for instruction in the next unit; how well Carbon TIME goals aligned with teachers' goals or district goals, how well the Instructional Model supported student learning, and how engaging the unit was for expanding teachers' repertoire of strategies and supporting students in classroom scientific discourse; grading and assessment strategies for formative and summative purposes; managing materials for the unit investigations; questions or suggestions about managing implementation and time; student learning over time, including identification of learning progression levels; and how particular students completed the Process Tools.

The Northwest Network had an additional face-to-face session in December 2015 to discuss challenges and successes of using the Process Tools with their students and to share ideas about enactment. Carbon TIME researchers and staff designed the online PD so that teachers in a Network could interact with each other asynchronously, including by responding to prompts at various times of the year designed to support teachers in preparing for and reflecting on implementation of the units. For example, prompts asked teachers to reflect on what they noticed about students' learning, strategies for engaging students in discourse around the Process Tools, grading and assessment practices, and plans for implementation of future units.
The Northwest and Midwest Networks used different learning management systems to deliver the online PD (Schoology and Desire2Learn, respectively). In addition, the Northwest Network teachers often communicated via email because they were in the same school district and had a history of district-wide collaborations beyond their participation in the Carbon TIME project. Because the West Network had only one teacher, she typically did not attend PD with others (except for the August 2016 PD with the Northwest Network) and instead worked closely with her case study coach to plan for and reflect on implementation during the school year. Both the face-to-face and online PD sessions were designed to trigger teachers' sensemaking about potential challenges associated with implementation of Carbon TIME. The goal of this study was to take a deeper look at what happens when teachers implement Carbon TIME in their classrooms: How did teachers engage in sensemaking about Carbon TIME curriculum materials? Through the PD activities, teachers had the opportunity to engage in sensemaking around four challenges: (1) aligning their goals for teaching and learning with Carbon TIME goals, (2) managing and coordinating curriculum resources and tools, (3) enacting an Instructional Model that may differ from their routine teaching model, and (4) building a repertoire of teaching strategies, such as scaffolding different types of classroom discourse (e.g., divergent and convergent talk). These four challenges are a set of hypotheses about occasions for sensemaking. The first two challenges are hypotheses about teacher-driven sensemaking, or things that are likely to disrupt teachers' usual routines and relationships. For example, teachers' adaptations can be influenced by their learning goals for students, which may not be aligned with the learning goals of the curriculum (Allen & Penuel, 2015; Davis, Beyer, Forbes, & Stevens, 2011).
The last two challenges are hypotheses about researchers' aspirations, or what researchers hope teachers will devote their time and energy to making sense of. For example, Carbon TIME researchers hope teachers will build a repertoire of teaching strategies that includes eliciting and responding to students' ideas in ways that both honor students' contributions and push students to justify their ideas using scientific evidence (data collected from observations and measurements in the unit investigations) and scientific principles (e.g., matter and energy cannot be created or destroyed). Researchers intended for teachers to use Carbon TIME materials as a toolkit rather than as a script to be followed. Therefore, the context of the large-scale curriculum implementation project provided multiple opportunities across the settings of PD and classroom enactment for teachers to engage in sensemaking over time about how the curriculum materials interacted with their own teaching practices and practical knowledge and with their students' learning within and across units. Finally, I note that as a graduate student member of the Carbon TIME research project, I use the word "we" in this chapter to differentiate actions taken as a result of research team decision-making from my own thinking.

Methods

Participants

Case study teachers. Teachers volunteered and consented to participate as Carbon TIME case study teachers and were spread across three Carbon TIME networks, including five high school teachers and three middle school teachers in rural, suburban, and urban school contexts (see Table 3). Five teachers were in the Midwest Network, two were in the Northwest Network, and one was in the West Network. Distinctions about Carbon TIME Network membership are important because teachers in a network met each other in person at the face-to-face PD sessions and participated in the same online PD course.
All of the participants were experienced teachers, with years of teaching experience ranging from 7 to 28. Five teachers were White and female. Teachers varied in their undergraduate emphasis in biology: five teachers had a major emphasis, and three teachers had a minor emphasis. Ms. Nolan was the only participant who was National Board Certified (by the National Board for Professional Teaching Standards) at the time of the study.

Table 3
Case Study Teacher Demographics and School Context in 2015-2016

Pseudonym | Gender | Race/Ethnicity | Carbon TIME Network | Years Teaching | Undergraduate Emphasis in Biology | School Context
Mr. Ross | M | White & other | MW | 8 | Major | Suburban HS
Mr. Harris | M | White | MW | 14 | Major | Suburban-Rural HS
Ms. Callahan | F | White | MW | 13 | Major | Urban-Suburban HS
Ms. Barton | F | White | MW | 20 | Minor | Rural MS
Ms. Apol | F | White | MW | 28 | Minor | Rural MS
Ms. Wei | F | Chinese | NW | 12 | Minor | Urban HS
Ms. Nolan | F | White | NW | 7 | Major | Urban HS
Ms. Eaton | F | White | WE | 22 | Major | Suburban MS
Note. MW = Midwest, NW = Northwest, and WE = West.

School context varied, from rural middle schools (Ms. Barton and Ms. Apol) to a math and science magnet school located in an urban-suburban high school (Ms. Callahan). Ms. Wei and Ms. Nolan taught in the same large urban district but at different schools with slightly different student populations (see Table 4 below). Data were obtained from publicly available online school report cards for 2015-2016.

Table 4
Student Demographic Data for Ms. Nolan and Ms. Wei's High Schools in the Northwest

Measure | Ms. Nolan | Ms. Wei
Percent White students | 55 | 40
Percent free and reduced lunch | 34 | 38
Percent English language learners | 6 | 5
Percent special education | 17 | 7
Percent advanced learning eligible | 6 | 34
Percent 10th graders demonstrating proficiency in ELA | 89 | 84

Case study coaches. Each case study teacher worked with a dedicated coach who supported them with implementation and data collection throughout the year.
Seven coaches were paired with eight case study teachers; Mackenzie worked with two case study teachers, Ms. Wei and Ms. Nolan (see Table 5). Four of the coaches were graduate student researchers with 6 to 11 years of K-12 teaching experience. I note that I was the case study coach for Ms. Callahan. Three of the coaches—Sabrina, Mackenzie, and Daisy—were Carbon TIME Network Leaders and had other experiences in education, especially with curriculum development and PD. Network Leaders helped design and lead the face-to-face and online PD and were the primary point of contact for teachers in their networks. Case study coach experiences and roles in the Carbon TIME project are important to note because teacher-coach pairs co-constructed the teacher interviews that were the primary data source for my study.

Table 5
Case Study Coach Experiences in K-12 Education and Roles in Carbon TIME

Case Study Teacher | Coach Pseudonym | Years of K-12 Experience | Roles in Carbon TIME and Other Organizations
Mr. Ross | Caitlin | 10 | Graduate student researcher
Mr. Harris | Winnie | 7 | Graduate student researcher
Ms. Callahan | Evelyn | 11 | Graduate student researcher
Ms. Barton | Sierra | 10 | Graduate student researcher
Ms. Apol | Sabrina | 6 | MW Network Leader, NSF GK-12 Fellow
Ms. Wei, Ms. Nolan | Mackenzie | 30+ | NW Network Leader, district science program manager
Ms. Eaton | Daisy | 16+ | WE Network Leader, educational consultant

Of note, coaches and teachers developed close working relationships over time, building trust as teachers shared their frustrations, challenges, and concerns about implementation and coaches listened to their thoughts and feelings. For example, Caitlin shared that she did not feel like a coach in terms of having more expertise because she and Mr. Ross had similar backgrounds and numbers of years teaching and were both new to Carbon TIME.
Data Collection

Data gathered for each case study teacher included: (1) transcriptions of five semi-structured interviews (one after teaching each of the three units, one after teaching all of the units, and one at the beginning of the second year of implementation), each between 30 and 60 minutes in length; (2) at least nine video-recordings of classroom enactment (three for each unit) and coaches' accompanying observation notes; (3) video-recordings of face-to-face PD sessions; (4) artifacts from the face-to-face PD and the online course of study; (5) teacher-created artifacts; and (6) teachers' responses to survey items. Collecting multiple sources of data served to ensure construct validity (Yin, 2014).

Data from classroom enactment. Case study coaches visited teachers' classrooms at least three times per unit to provide general implementation support and to collect data, including video-recordings of classroom instruction, observation notes, and interviews with teachers and focus students. To address issues of reliability and validity, all case study coaches were trained to use the same data collection protocols to ensure consistent data collection across researchers (Yin, 2014). The primary data source for this study was transcriptions of teacher interviews (see Table 6). All case study teachers taught the same three units: Systems and Scale (SS), Animals (AN), and Plants (PL). Secondary data sources included: teacher-created artifacts, particularly modified curriculum materials; video-recordings and coaches' observation notes of classroom enactment; video-recordings and field notes from face-to-face PD; teachers' discussion and reflection posts from the online course of study; and teachers' responses to online surveys. I obtained data such as teachers' modifications to curriculum materials directly by asking teachers or their coaches to provide the modified artifact via email (e.g., an electronically modified artifact).

Teacher interviews.
Teachers signed IRB-approved consent forms when they initially agreed to participate as Carbon TIME case study teachers. Consent included data collection such as video-recordings of their classroom instruction and teacher interviews. Before every interview, coaches asked teachers if they had any questions before starting to record the interview, which could include questions about consent, participation, and compensation.

Table 6
Interview Dates and Interviewers for Case Study Teacher Interviews

Mr. Ross (interviewed by Caitlin): Systems & Scale 10.28.2015; Animals 12.17.2015; Plants 05.18.2016; Post end-of-year 06.09.2016; Y1 follow-up 10.12.2016
Mr. Harris (Winnie): Systems & Scale 10.15.2015; Animals 11.06.2015; Plants 11.19.2015; Post end-of-year 05.25.2016; Y1 follow-up 11.02.2016
Ms. Callahan (Evelyn): Systems & Scale 11.11.2015; Plants 04.04.2016; Animals 04.04.2016; Post end-of-year 06.08.2016; Y1 follow-up 10.07.2016
Ms. Barton (Sierra): Systems & Scale 11.13.2015; Animals 05.24.2016; Plants 05.24.2016; Post end-of-year 06.13.2016; Y1 follow-up 11.14.2016
Ms. Apol (Sabrina): Systems & Scale 11.17.2015; Animals 04.12.2016; Plants 04.21.2016; Post end-of-year 04.26.2016; Y1 follow-up 09.23.2016
Ms. Wei (Mackenzie): Systems & Scale 10.30.2015; Animals 12.05.2015; Plants 12.19.2015; Post end-of-year 06.08.2016; Y1 follow-up 10.13.2016
Ms. Nolan (Mackenzie): Systems & Scale 10.30.2015; Animals 12.09.2015; Plants 01.29.2016; Post end-of-year 06.05.2016; Y1 follow-up 10.22.2016
Ms. Eaton (Daisy): Systems & Scale 01.19.2016; Animals 03.08.2016; Plants 04.21.2016; Post end-of-year 04.21.2016; Y1 follow-up 09.26.2016

Because sensemaking is most importantly an issue of language, talk, and communication (Weick et al., 2005), one way researchers can access sensemaking is through in-depth interviews with teachers (e.g., Coburn, 2005).
Semi-structured interviews were designed to elicit teachers' sensemaking about their implementation of the curriculum, including the rationale behind decisions they made to use, not use, or modify curriculum materials or their enactment. The interviews started with open-ended questions to elicit teachers' areas of concern (e.g., "What have you been thinking about lately? How has that affected your implementation of Carbon TIME units?") and questions that invited teachers to tell stories about their experiences using the curriculum (e.g., "Tell me about a lesson that was challenging for you"). Later, coaches asked more specific questions that were tailored to teachers' experiences implementing Carbon TIME (e.g., "Why did you decide to modify the data spreadsheet?"). Another way to access teachers' sensemaking is to show teachers samples of student work or video clips from their classroom instruction (see Borko, Jacobs, Eiteljorg, & Pittman, 2008) or from the PD sessions as prompts to foster a discussion about what they were engaged in sensemaking about. All interviews were transcribed by a commercial transcription service. In the following sections, I describe each of the interview protocols (see APPENDIX A for all interview protocols).

End-of-unit interviews. After teaching a unit, the coach would interview the teacher about their experiences implementing the curriculum. This interview began with the teacher's viewpoint and prompted teachers to reflect on: what they felt was important to share with researchers about their experiences, a high point in the unit, a challenging lesson in the unit, how Carbon TIME fit in with their usual curriculum, what they decided to leave out or modify, concerns or worries about implementation, and who they talked to about Carbon TIME, including who they asked for help.
The protocol provided guidance for coaches to ask general probing questions such as "Can you give an example of that?" and "What happened after that?" After probing for what teachers thought was important to share, the protocol had two sections for coaches to probe what researchers thought was important to know about: (1) exploring teachers' reasoning related to artifacts such as student work samples or video-recordings of classroom enactment; and (2) exploring teachers' reasoning related to selected research areas, including networks and sensemaking, teacher and student curiosity, diversity (or how the curriculum materials supported students from diverse backgrounds), and principles (or how the curriculum materials supported students in tracing matter and energy through carbon-transforming processes). Since they were most knowledgeable about the teachers' classroom practices and enactment, coaches were given discretion to select artifacts and probe teachers' reasoning in areas that they thought would be most productive for supporting teacher reflection and providing insight into selected research areas. For example, Evelyn decided to transcribe a portion of Ms. Callahan's classroom discourse in the Plants Unit for discussion at an interview because she noticed that Ms. Callahan enacted traditional discourse patterns (such as IRE, or initiation-response-evaluation) and wanted to elicit Ms. Callahan's reasoning about her classroom discourse patterns. Finally, this interview protocol ended with a wrap-up question ("What else would you like for me to know about your experience using Carbon TIME?") that was designed to provide space for the teacher to share a final thought about their experiences implementing the curriculum.

End-of-year interview. This interview was also called the Final or Post interview because, at the time, the case study research team, including myself, thought this would be the last interview for this cohort of case study teachers.
Coaches conducted these interviews in April, May, and June 2016. The goal of this interview was to elicit teachers' responses about how they used the curriculum, how their students responded to the curriculum in terms of engagement and learning, and how teachers' professional networks influenced their implementation. The protocol began with an open-ended question asking teachers to compare what was alike and different when they were teaching Carbon TIME versus other units. Probing follow-up questions asked teachers to elaborate on how Carbon TIME fit into their existing curriculum (or not), their role when teaching Carbon TIME, and how Carbon TIME changed the way they taught or thought about teaching. The next set of questions focused on students' roles when they were learning Carbon TIME. Follow-up questions asked teachers to reflect on student engagement, their goals for students, and whether Carbon TIME changed the way their students were engaged in science learning. At the beginning of the year, each case study teacher had selected at least four focus students to follow during implementation. This section of the protocol provided prompts for coaches to ask about particular focus students, including struggling or less successful students (e.g., "Is there anything you would do differently next time with Carbon TIME to better support students like this?"). The next section of the protocol focused on particular curriculum materials. Coaches asked teachers to reflect on how each of the Process Tools (Expressing Ideas, Predictions, Evidence-based Arguments, Explanations) worked for them and their students, the online assessment website, and their grading and assessment practices related to the Process Tools and Pre- and Post-Test results. Follow-up questions probed for how these assessment data informed teachers' instruction, how they helped or didn't help students' learning, and how researchers could further improve them.
The case study research team decided to dedicate a section of the end-of-year interview protocol to teachers' grading and assessment practices because grading and assessment using Carbon TIME emerged as an important issue for all Carbon TIME teachers over the course of the year, including case study teachers. In other words, teachers were spending a significant amount of time reviewing students' responses on the Pre- and Post-Tests and Process Tools and deciding how they were going to hold students accountable for learning. Thus, the case study research team wanted to elicit teachers' sensemaking about their grading and assessment practices related to these Carbon TIME materials. The next section of the protocol focused on teachers' professional networks and relationships. Coaches asked teachers to reflect on whether Carbon TIME aligned (or not) with how they were evaluated by their schools or districts. Follow-up questions probed how teachers used the Pre- and Post-Test results, standardized testing, and requirements for showing student growth. As with teachers' grading and assessment practices, these questions arose from case study coaches' observations that some teachers were spending a significant amount of time thinking about how Carbon TIME could help them with local teacher evaluation requirements. Additional questions in this section asked teachers to share how they felt about interactions with other people in their school, including colleagues and administrators, and with people in their Carbon TIME Network. Finally, this interview concluded with questions about how teachers felt about participating as a case study teacher and what changes they would want to make in the second year of implementation, including ways in which the Carbon TIME project could support them in face-to-face or online PD.

Y1 follow-up interview.
This interview was conducted in Fall 2016 after case study teachers had attended the Summer 2016 face-to-face PD session and were beginning their second year of Carbon TIME implementation. After initial analyses of the end-of-unit and end-of-year interview transcriptions, along with analyses of teachers' responses to survey items about their teaching practices from Y0 to Y1, the case study research team decided to conduct one more interview. Miles, Huberman, and Saldana (2014) "strongly advise analysis concurrent with data collection" in order to "cycle back and forth between thinking about the existing data and generating strategies for collecting new, often better, data" (p. 70). Thus, after completing initial analyses of the existing interview and survey data set, we determined that we needed more data regarding teachers' reasoning about content presented in the Summer 2016 PD and shifts in their survey responses. All Carbon TIME teachers, including the case study teachers, were asked to complete surveys at multiple points in the project. Since my focus was on teachers' classroom enactment, I examined survey data that was related to teachers' perceptions of their teaching practices (see Table 33 in APPENDIX B for a List of Practices on the Reflection on Teaching Practices Survey). Thus, here I do not include extensive information about the full survey contents except to note that items asked teachers about their demographics, goals, content knowledge, and pedagogical content knowledge. For the teaching practices section, the survey items asked teachers to select their top two teaching practices from a list of 8-12 practices for each area: Formative Assessment, Inquiry, Explanations, and Decision Making. Teachers took the Y0 survey after they had signed up to participate in the project but before any PD; case study teachers took this survey in May, June, or July 2015. The exception to this timing was Ms.
Eaton in the West Network, who took the survey in October 2015 due to later enrollment in the project. Teachers took the Y1 survey after they had taught Carbon TIME for one year; case study teachers took this survey in April, May, or June 2016. Data from these surveys showed that some responses shifted from Y0 to Y1; however, the research team, including me, was uncertain of teachers' interpretations of the survey items and why they did or did not shift. Thus, we decided to conduct a Y1 follow-up interview with case study teachers to elicit teachers' reasoning and perceptions about their shifts, if any, in their survey responses. The protocol began with an open-ended question designed to elicit teachers' reasoning about curriculum materials as boundary objects. Coaches asked teachers to reflect on what they spent time thinking about when they were preparing to teach Carbon TIME. Follow-up questions probed for sensemaking (what were they uncertain or concerned about?), agency (what did they feel they had control over?), and networks (who helped or hindered them?). Because the Summer 2016 PD sessions were designed to address concerns and challenges that had come up the year before (in preparation for teachers' second year of implementation), we designed the protocol to elicit information about teachers' responses to particular content in the PD sessions. This section asked teachers to reflect on the Instructional Model, including what they liked about it or what puzzled them about it, and how they were thinking of using the Instructional Model differently, if at all, in the second year; the Process Tools, including which Tool they thought best supported students' learning, and whether they had made any modifications to the Tools; and the redesigned online assessment system, including how they were thinking of using assessment results differently in the second year, if at all.
As with previous protocols, coaches were given discretion to probe areas or bring up issues that they felt were important for the teacher to discuss. The second section of the interview protocol was customized to each teacher based on their survey results. Teachers were provided with a written copy of their survey results so that they could refer to it as they talked about why their responses shifted (or not). Coaches prepped teachers for this section by stating that we (the research team) were interested in the shift, if there was one, in their responses from Y0 to Y1. Coaches asked about the four teaching practices: Formative Assessment, Inquiry, Explanations, and Decision Making. Each question prompt was structured in a similar way. For example: Let's start with your Formative Assessment practices. In Y0, you selected "Ask students to respond to each other's ideas (agree/disagree, add on, evaluate, etc.)" AND "Ask students to explain their reasoning in support of correct answers." In Y1, you selected "Search for better ways to elicit and respond to students' ideas" AND "Ask students to explain their reasoning in support of correct answers." Tell me about this shift in your selections. Coaches were again prompted to probe for the selected research areas of sensemaking, agency, and networks. Finally, the protocol asked teachers to reflect on their involvement in professional organizations, such as local conferences or National Board Certification. This question was designed to elicit teachers' motivation for pursuing voluntary PD and enrichment activities. The protocol ended with an invitation for teachers to share any questions they had with the coach. Data from PD. Data from the face-to-face PD sessions included video-recordings, field notes, and artifacts.
Field notes were taken electronically on laptops by graduate student researchers, including myself, and included descriptions of the PD setting, lists of Carbon TIME staff and teachers present, and transcriptions of staff and teacher talk in the sessions (as much as could be captured in-the-moment by the note taker). Field notes were reviewed by the network research team, and suggestions for improvement or adjustment were incorporated into the next set of field notes. Artifacts from the PD sessions included photographs of the physical settings and PD activities such as posters of teachers' responses to reflection prompts. Throughout the school year, teachers engaged in an online course of study led by Carbon TIME Network Leaders that supported their implementation of specific units. Data from the online course of study included artifacts such as Portable Document Format (PDF) files of teachers' discussion and reflection posts. All data were stored on a university server. In summary, primary and secondary data from classroom enactment and PD are shown in Table 7.

Table 7
Summary of Primary and Secondary Data Sources

              Primary Sources                              Secondary Sources
Data          Five semi-structured interviews per          Field notes, video-recordings of PD
              teacher and teacher-created or               sessions, PDF files of online PD
              modified artifacts                           discussion and reflection posts,
                                                           video-recordings of classroom instruction
Collected By  Case study coach                             Carbon TIME researchers
Dates         2015-2016                                    2015-2016

Data Analysis

The purpose of this study was to examine teachers' sensemaking about implementation of an innovative science curriculum across the settings of PD and classroom enactment. My primary data sources were transcriptions of teacher interviews over the course of one year of implementation. Secondary data sources included artifacts such as teacher-modified curriculum materials, field notes from face-to-face PD, and video-recordings of classroom enactment.
To analyze the data, I first had to prepare the data for coding, then conduct systematic analyses of the data to identify occasions of sensemaking for more in-depth analyses. I used my conceptual framework for investigating teachers' sensemaking about Carbon TIME implementation as a guide, including defining what counts as an occasion of sensemaking and how I identified occasions in the data. I combined three approaches to searching for evidence of sensemaking in the data: (1) a pragmatic approach by searching for a sufficient quantity of data in the form of amount of teacher talk about a boundary object or classroom practice, (2) a conceptual approach by searching for the presence of key components in my framework for investigating teachers' sensemaking (goals and resources, outcomes, and critical noticing), and (3) a theoretical approach by searching for teachers' expressions of surprise generated by an unexpected or disruptive event (Weick et al., 2005). Generally, I took an interpretive approach to my analysis and subjected every assumption about meaning to critical scrutiny (Erickson, 1986). In the following sections, I describe in detail the methods I used to analyze the data. Teacher interview transcriptions. A total of 40 interviews (five each for eight case study teachers) were transcribed by a commercial transcription service. These transcriptions were stored as Word files on a university server and could be accessed by Carbon TIME researchers online. The files were named by date of interview, case study teacher, case study coach, and Carbon TIME unit or interview. Next, I describe preparation of the data for coding and systematic analyses. Preparing the data. Before the data could be uploaded to an online data analysis website, the transcriptions had to be cleaned in terms of replacing real names with pseudonyms. This cleaning process was done by undergraduate student researchers.
Once the transcriptions had been cleaned, I uploaded them as text files to Dedoose (www.dedoose.com), an online application for analyzing qualitative and mixed methods research data. Once the files were uploaded in Dedoose, I linked each file to a set of descriptors that would allow me to filter and search among all data files. The five descriptors included: case study teacher ID number, network and cohort, unit, focus (teacher), and location (interview). Using the descriptors, I could search for all files that were about the Plants Unit, or I could use the filter to view only files associated with a particular teacher. Coding the data. The first step in using Dedoose to code the data was to create excerpts. In Dedoose, excerpts are chunks of data that can then be coded using the coding framework. With some input from the network research team, I created excerpts based on topic. Thus, some excerpts were as short as two turns of talk; for example, the coach asked a question and the teacher responded. If the coach then moved on to another question that was unrelated to the first question (i.e., it was not a probing or follow-up question that stayed on the same topic), then I determined that a new topic was starting, and I created an excerpt from only the first two turns of talk. For example, at the beginning of Ms. Barton's first interview about her implementation of the Systems & Scale Unit, I identified two excerpts: SIERRA: What happened while you were teaching this unit that you think is really important to talk about? MS. BARTON: I think one thing is that you—like you think that kids know certain things, and they don't necessarily know certain things. Like I think even when the Systems and Scale size-wise I guess… just thinking about the different size of things and how they fit, the fact that the kids are coming in with all the different, you know, some kids come in with certain experiences, or they've done certain things, and other kids haven't.
And so you might think, "Well, kids are all ready to move on," or "Of course they know this piece of information," and they don't. So I think that the unit made that pretty clear. Like it was a good clarifying. SIERRA: Okay. And so what was a high point? MS. BARTON: I wish we would've done this earlier, because now I'm like, I have—all of the stuff kind of runs together for me. But… I think that the high point is that, like the focus on talking, and that the kids were actually, it seemed like the kids were caring about their learning, and they were taking responsibility for like wanting to know. And… the fact that… we were doing things that seemed relevant to them or at least exciting to them…. I think that that was a high point. I mean, it certainly made it easier, because some of the things that we have normally during that time period are fairly dry. So, you know, that was definitely helpful. In this example, I created an excerpt from the first two turns of talk. Then, because Sierra shifted the conversation to talking about a high point, I created another excerpt from the next two turns of talk. In summary, for the entire data set, I created a total of 1067 excerpts across all 40 interviews, varying in length from as short as 89 characters to as long as 4,921 characters (this information was obtained using the Length column in the Excerpts tab in Dedoose). Once excerpts were created, they could then be coded using the coding framework. Because I had chosen to focus my analysis on teachers' sensemaking about boundary objects, including classroom practices associated with using those boundary objects, I created a descriptive coding framework (Miles, Huberman, & Saldana, 2014) that would identify those topics in the data. Initially, the coding framework was too narrow. For example, I tried to mark students' engagement, learning, and attributes as separate codes.
After some trial coding and several rounds of discussion and revision with the Carbon TIME network research team, I finalized the framework for descriptive coding of teacher interviews (see Table 33 in APPENDIX C for descriptions of each code). In the previous example, we eventually decided to combine all talk about students into a single code in order to facilitate ease of coding. Furthermore, for the purposes of descriptive coding, we did not need a high level of differentiation. The coding framework included three parent codes: Boundary Objects, Classroom Practices, and Networks and Obligations (see Table 8). The Boundary Objects parent code had six child codes for curriculum materials that appeared in both the PD and classroom enactment settings: (1) Evidence-based Arguments Tool, (2) Explanations Tool, (3) Expressing Ideas Tool, (4) Instructional Model, (5) Pre- and Post-Tests, and (6) Predictions Tool. The Classroom Practices parent code had six child codes based on classroom practices that researchers wanted to know about but that had also emerged from teachers' shared concerns during the year, particularly around classroom enactment with the boundary objects: (1) Discourse, (2) Grading and Assessment, (3) Modifications, (4) Non-CTIME, (5) Reasons, and (6) Students. The Networks and Obligations parent code had six child codes based on information about teachers' professional networks, particularly who they talked about or mentioned, and obligations to local or national level entities: (1) All Others, (2) CTIME Network, (3) CTIME Obligations, (4) CTIME Staff, (5) Local Obligations, and (6) Policies and Standards. Finally, the coding framework contained a Concern code, applied to any excerpt that contained teachers' talk about a concern or challenge they or their students faced in using the curriculum.
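The parent-child structure of the coding framework can be sketched as a simple data structure. This is purely illustrative (the actual framework was maintained in Dedoose, not in code); the code names come from the framework described above.

```python
# Sketch of the descriptive coding framework as a data structure.
# Illustrative only: the real framework lived in Dedoose.
CODING_FRAMEWORK = {
    "Boundary Objects": [
        "Evidence-based Arguments Tool", "Explanations Tool",
        "Expressing Ideas Tool", "Instructional Model",
        "Pre- and Post-Tests", "Predictions Tool",
    ],
    "Classroom Practices": [
        "Discourse", "Grading and Assessment", "Modifications",
        "Non-CTIME", "Reasons", "Students",
    ],
    "Networks and Obligations": [
        "All Others", "CTIME Network", "CTIME Obligations",
        "CTIME Staff", "Local Obligations", "Policies and Standards",
    ],
}

# Concern sat outside the parent-child hierarchy and could be applied
# to any excerpt alongside the child codes.
STANDALONE_CODES = ["Concern"]

def applicable_codes():
    """Every code that could be applied to an excerpt: child codes plus
    standalone codes (parent codes themselves were not applied)."""
    children = [c for kids in CODING_FRAMEWORK.values() for c in kids]
    return children + STANDALONE_CODES
```

Because only child codes were applied to excerpts, a structure like this makes explicit that there were 19 applicable codes (18 child codes plus Concern).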
Table 8
Parent and Child Codes in the Descriptive Coding Framework

Parent Code: Boundary Objects
Child Codes: Instructional Model, Expressing Ideas Tool, Predictions Tool, Evidence-based Arguments Tool, Explanations Tool, Pre- and Post-Tests
Purpose: To identify teachers' talk about selected Carbon TIME boundary objects

Parent Code: Classroom Practices
Child Codes: Discourse, Grading and Assessment, Modifications, Non-CTIME, Reasons, Students
Purpose: To identify teachers' talk about their classroom practices

Parent Code: Networks & Obligations
Child Codes: All Others, CTIME Network, CTIME Obligations, CTIME Staff, Local Obligations, Policies and Standards
Purpose: To identify teachers' talk about people in their social networks at multiple levels

Descriptive coding allowed me to identify and then search for excerpts that contained data about particular boundary objects, classroom practices, professional networks and obligations, and concerns. I coded excerpts using only the child codes. In Dedoose, excerpts could be coded with more than one code. For example, an excerpt from the Y1 follow-up interview with Mr. Harris had the most code co-occurrences with nine child codes applied (see Table 9). In this example, the excerpt was 2,306 characters in length, and Mr. Harris mentioned all four Process Tools and shared some reasons he was concerned about using the Instructional Model with his students. Most excerpts, however, had fewer code co-occurrences. Using the Excerpts tab in Dedoose and filtering for Codes Count, I report that 989 out of 1066 (or 93%) of the excerpts had between 0-3 code applications.

Table 9
Descriptive Coding of Mr. Harris's Excerpt About the Instructional Model and Process Tools

Excerpt: EVELYN: Okay, great. So you know the instructional model that we have in Carbon Time. And so we're interested in what you're thinking about the instructional model now that you've taught Carbon Time for one year and you're teaching it again this year.
So in the document I sent you there is a picture of the instructional model in case you forgot what it looks like. MR. HARRIS: Yup, I’ve got it. EVELYN: Okay. So what are some things that you like about it, what are some things that puzzle you about it, and how are you thinking about it differently this year versus last year? MR. HARRIS: I definitely understand the model and how it all works better this year. I’ve noticed all the PowerPoints now have the instructional models at the beginning, which I like because I wish I would spend time going over it with the kids more. But I feel like as a teacher I’m understanding it after teaching it for two years. Because I don’t know how well a 14 year old is going to really engage in trying to get to understand it. So I appreciate it and I understand more now, and I wish I could… Like part of me is like, “Oh, I’d like to take the time to explain this to kids,” and I did at the beginning a little better, but then again I also see the kids start to fade out as well. You know, you’re 14, you’re 15, you don’t care, you want to do something cool. And so it’s hard to get them to think about that. But I think if I used it… Again, as I do it more I think I’ll get better explaining it, and then hopefully through that they can see that it’s not the same worksheet. Because sometimes they’ll see… you’ve got expressing ideas, the predicting, evidence-based argument tool and then explanation, you have all four of those, and in their minds sometimes they can just view that as like, “Why are we doing the same worksheet again?” And you’re like, “No, this is the point of trying to see how they all work together and look at how there’s been a change over time.” So with the model I think that can help to it, but again, for myself I’m still scrambling to try to get things in time. We also have a time crunch. We have trimesters, and so I just have Biology A and then those students are gone before Thanksgiving. 
So that's a big challenge to get through three units before that time.

Child Codes Applied: Instructional Model, Reasons, Concern, Students, Expressing Ideas, Predictions, EBA, Explanation, Local Obligations

Reliability coding of descriptive codes. To ensure reliability of the descriptive coding, I used the Dedoose Training Center to create code application tests for a set of second coders that included four graduate student researchers from the Carbon TIME network research team. I note that one of the second coders, Sierra, was Ms. Barton's case study coach in this data set. One coder was a new graduate student working on quantitative analyses of the network survey data, and the other two coders were new graduate students to the project and also first-year case study coaches for Carbon TIME Cohort 2 teachers (i.e., their case study teachers' data were being collected in 2016-2017 and not included in this set). Second coders took reliability tests that I created in the Training Center. A test included excerpts from one teacher's unit or interview. Code application test results reported the Pooled Kappa, which is based on Cohen's kappa statistic, a widely used and respected measure to evaluate inter-rater agreement as compared to the rate of agreement expected by chance. The Pooled Kappa summarizes rater agreement across many codes (e.g., see de Vries, Elliott, Kanouse, & Teleki, 2008). Miles and Huberman (1994) suggested that inter-rater reliability should approach 0.90. The code application test result in Dedoose reports the Pooled Cohen's Kappa and individual kappa for each code. In order to calculate a kappa, a test needed to include at least two excerpts for a code; thus, some codes could not be tested if the set of excerpts in a test did not include at least two. I began creating sets of tests by teacher to be coded by second coders weekly.
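The relationship between per-code Cohen's kappa and the pooled statistic can be sketched as follows. This is my own minimal reimplementation of the pooled-kappa idea (averaging observed and expected agreement across codes before computing a single kappa, following de Vries et al., 2008), not Dedoose's actual code.

```python
def cohen_kappa(rater1, rater2):
    """Cohen's kappa for one code: two parallel lists of 0/1 code
    applications (1 = code applied to that excerpt, 0 = not applied)."""
    n = len(rater1)
    # observed agreement: proportion of excerpts where raters match
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # expected agreement from each rater's marginal application rates
    p1a, p1b = sum(rater1) / n, sum(rater2) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)
    return 1.0 if pe == 1.0 else (po - pe) / (1 - pe)

def pooled_kappa(per_code_pairs):
    """Pooled kappa across codes: average the observed and expected
    agreements over all codes, then compute a single kappa."""
    pos, pes = [], []
    for rater1, rater2 in per_code_pairs:
        n = len(rater1)
        pos.append(sum(a == b for a, b in zip(rater1, rater2)) / n)
        p1a, p1b = sum(rater1) / n, sum(rater2) / n
        pes.append(p1a * p1b + (1 - p1a) * (1 - p1b))
    po_bar = sum(pos) / len(pos)
    pe_bar = sum(pes) / len(pes)
    return (po_bar - pe_bar) / (1 - pe_bar)
```

Pooling in this way explains why a test needed at least two excerpts per code: with fewer observations, the marginal rates (and thus the expected agreement) for that code cannot be estimated meaningfully.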
Initial results of the reliability tests showed Pooled Kappa scores ranging from 0.40 to 0.60. Thus, to improve inter-rater reliability, I met with second coders both individually and as a group to discuss and resolve discrepancies until we all reached agreement about the meanings and application of codes. I exported test results as reports in Word files in which the text of the excerpt was followed by a table showing trainer and trainee codes. Then, we discussed discrepancies by highlighting evidence in the text that we each thought justified code application. When meeting with second coders individually, sometimes for 1-2 hours at a time, I kept track of our discussion using the Comments function in Word. The codes that were most problematic across all second coders were the codes that involved more interpretation, including Concern, Reason, Discourse, and Students. For example, because teachers had different ways of talking about their concerns, language about concern was not consistent across teachers; some teachers used phrases such as "I'm concerned about X," and others used phrases such as "I wish," or "I'm worried about." Thus, there was some ambiguity among the second coders and me about when an excerpt justified code application. We eventually decided that some codes needed to be applied more narrowly, such as Non-CTIME for only things that were not related to Carbon TIME at all, and some codes needed to be applied more broadly, such as Grading and Assessment, because we wanted descriptive coding to err on the side of capturing more rather than less data. Codes for Boundary Objects, in contrast, often had kappas of 0.9 to 1.0. Because I had discussed, resolved, and recorded notes about discrepancies in the reports, I did not ask second coders to spend time re-taking the tests.
Instead, as a final step in this process, I went back to the Code Application window and changed the original codes that needed changing (because sometimes I had missed something), as these were the only codes that counted in the data set. Systematic analyses of the data set. Table 35 in APPENDIX D shows the total number of excerpts coded with each code for all teacher interviews. Because there was some consistency in the semi-structured teacher interview protocols in terms of asking about particular topics, patterns of descriptive coding across teachers were similar. However, there was some variation. For example, EBAT had fewer excerpts compared to other codes, but across the teachers, the number of excerpts coded varied from 1 to 11. Because the excerpts varied in length and sensemaking is primarily about teachers' talk in terms of both what they talk about and how long they talk about it, a faculty member with whom I met weekly to discuss my ongoing analysis suggested that I use word count as a gross approximation of the amount of time spent talking about a topic. Because the interviews were co-constructed by the teacher-coach pair, I decided to count all the words in an excerpt (using the Word Count function in Word), including coaches' words and transcriptionists' notes, such as marking crosstalk or pauses. My goal was to be able to identify patterns within and across a case for more in-depth analysis; thus, similarities across all cases such as names before the text of the talk (i.e., "EVELYN:") would in effect be canceled out.
I exported files for all of the Boundary Objects child codes, two of the Classroom Practices child codes, Concern, and one Networks and Obligations child code: EBAT, EXPL, EXPR, INMO, PPTS, PRED, DISC, GRAD, CONC, LOCA. Then, I used these Word files to count the number of words for all the excerpts coded with that code. Because there was some overlap in terms of code co-occurrences, I decided to count words only for the six Boundary Objects child codes and two of the Classroom Practices codes: DISC and GRAD. I also calculated the totals and means for each teacher's talk. The results of this analysis are in Table 36 in APPENDIX D. Finally, in order to identify occasions of sensemaking systematically for more in-depth analysis, I had to first find patterns in the amount of talk across and within cases. Identifying patterns across cases. To find patterns across cases, I calculated the ratio of individual teachers' talk to the mean for boundary objects and classroom practices by dividing the number of words by the mean. With 1.00 representing the mean, I could then identify teachers' talk that was above (more than 1.00) or below (less than 1.00) the mean to determine how much more or less individual teachers talked about the boundary object and classroom practice compared to other case study teachers. I also calculated the overall amount of talk for each teacher to get a sense of how much more or less they talked about these topics in general compared to other case study teachers. Identifying patterns within cases. To find patterns within cases, I calculated the percentage of individual teachers' talk about boundary objects and classroom practices by dividing the number of words for that topic by the total number of words for the teacher. My goal was to get a sense of the distribution of individual teachers' talk about boundary objects and classroom practices.
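The two calculations just described (across-case ratios to the mean and within-case percentages) can be sketched as follows. The word counts here are illustrative placeholders, not the actual values from Table 36; only the teacher pseudonyms and code abbreviations come from the study.

```python
# Illustrative word counts per teacher per coded topic
# (placeholder numbers, not the real data in Table 36).
word_counts = {
    "Ms. Barton": {"EXPR": 700, "INMO": 1200, "GRAD": 2100},
    "Mr. Ross":   {"EXPR": 617, "INMO": 900,  "GRAD": 1500},
    "Ms. Wei":    {"EXPR": 740, "INMO": 1100, "GRAD": 1800},
}

def ratios_to_mean(counts, topic):
    """Across cases: each teacher's word count divided by the topic mean.
    A ratio above 1.00 means the teacher talked more than average."""
    values = [c[topic] for c in counts.values()]
    mean = sum(values) / len(values)
    return {teacher: c[topic] / mean for teacher, c in counts.items()}

def within_case_percentages(counts, teacher):
    """Within a case: each topic's percentage share of that teacher's
    total counted words."""
    total = sum(counts[teacher].values())
    return {topic: 100 * n / total for topic, n in counts[teacher].items()}
```

By construction, the ratios for a topic average to 1.00 across teachers, and each teacher's within-case percentages sum to 100.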
Unsurprisingly, the two classroom practices—Discourse and Grading and Assessment—accounted for a large percentage (double digits) of individual teachers' talk because they were broader and the codes had been applied more broadly, sometimes but not always co-occurring with one or more of the Boundary Objects codes. For example, PPTS (Pre- and Post-Tests) sometimes co-occurred with GRAD (Grading and Assessment), which is reasonable given that teachers use test scores for grading and assessment purposes. More specifically, the co-occurrences were 44% of the PPTS coded excerpts and 33% of the GRAD coded excerpts (39 co-occurrences; 88 PPTS total, 120 GRAD total), meaning that some of teachers' talk about PPTS and GRAD was about other things, such as problems with the online assessment system or other grading and assessment practices unrelated to the tests (see Table 37 in APPENDIX D). Identifying occasions of sensemaking. Finally, in order to identify occasions of sensemaking for more in-depth analysis, I used a combination of the patterns across and within cases to identify potential occasions that were significant both for individual teachers and across teachers. This decision was pragmatic in terms of searching for excerpts of transcript data that contained a sufficient amount of text to warrant a detailed analysis. Using the numerical analyses described above, I identified potential occasions based on a higher than average number of words compared to other teachers (defined as a ratio more than 1.00) and a large percentage of individual teachers' talk (defined as double-digit percentages). Across the cases, the number of these occasions varied from 2 to 7 occasions per teacher with a total of 41 occasions. Next, to further refine and reduce the number of occasions to a reasonable amount for in-depth analysis, I reviewed the contents of the excerpts for substantial evidence of sensemaking that would warrant further analysis.
This decision was conceptual in terms of identifying components of sensemaking (outcomes, goals and resources, and critical noticing) that were consequential for teacher and/or student learning (e.g., modifications to the curriculum materials or enactment based on teachers' goals, beliefs, or social commitments that did not necessarily support students' engagement in three-dimensional science learning). Content analysis reduced the number of potential occasions from a total of 41 (based solely on the numerical analysis) to 26. In addition, as I reviewed these data, I identified occasions of sensemaking that were unrelated to boundary objects or classroom practices and labeled these occasions in a seventh category as Other. In taking a theoretical approach to finding occasions of sensemaking, I looked for teachers' expressions of surprise due to an unexpected or disruptive event. According to Weick (1995), "interruptions are consequential occasions for sensemaking" (p. 105). Therefore, besides having a sufficient quantity of data to search for indicators of teachers' sensemaking about particular boundary objects, I also used linguistic markers related to teachers' expressions of surprise or concern because they often represented an interruption in teachers' expectations (i.e., teachers were surprised because something happened that they did not expect or something did not happen that they expected would happen). These discrepancies were often related to teachers' goals and resources (goals, beliefs, social commitments). Here I provide examples of how I combined a pragmatic, conceptual, and theoretical approach to determine whether or not there was substantial evidence in the data to warrant pursuing further analysis. Numerical analysis based on higher than average number of words yielded a total of 41 potential occasions of sensemaking (pragmatic approach); I use the Expressing Ideas Tool as an example here.
In analyzing the contents of the excerpts coded with EXPR, I determined that the data in Mr. Ross's excerpts did not have sufficient evidence of sensemaking to pursue further analysis. First, the ratio of Mr. Ross's word count to the mean was 1.00, which meant that the number of words counted for his excerpts about the Expressing Ideas tool was exactly the mean, which was 617 words. In re-examining his excerpts, I noted that he had only two excerpts and that in both he stated that he liked using the Expressing Ideas Tool: I certainly still like the expressing ones because I believe from these we are going to start building the questions that we're going to want to talk about and, again, like some of them are left at the end unanswered. That's okay. I guess that may favorites are the expressing and then the evidence. (Y1 follow-up interview) In this example excerpt, Mr. Ross does not say more beyond what is captured here about the Expressing Ideas tool, particularly in terms of challenges or concerns he had about using the tool. Thus, using the conceptual and theoretical approaches, I determined that there was insufficient evidence of sensemaking about the Expressing Ideas tool to warrant identifying an occasion of sensemaking for Mr. Ross. In contrast, I determined that Ms. Wei had an occasion of sensemaking about the Expressing Ideas tool, despite only a slightly larger ratio of 1.20. Five excerpts were coded for EXPR, and there was evidence in the data of aspects of sensemaking (which I describe in more detail in the next section on defining an occasion of sensemaking). First, Ms. Wei expressed concern about her use of the tool related to students' prior knowledge: MS. WEI: The Expressing Ideas tool. So I always used that but to varying degrees of success, I think, just based on students' prior knowledge.
And the way that it was presented once they, for some reason, they really wanted to get a correct answer on the Expressing ideas tool instead of just expressing ideas. So I had a hard time with getting them out of that and saying, “This is just your ideas right now.” And then I think that there was a lot of similarity between the Expressing Ideas tool and the Explanation tool. And so when… Or maybe it was the Predictions tool. Now I can’t even remember…. MACKENZIE: The Predictions tool is more like hypothesizing at the beginning of an investigation. Where kids thought about the investigation and they thought about potential outcomes. MS. WEI: Okay. So no, that is different. It’s the Expressing Ideas tool and Explanations tool. Those two things were so similar that when kids were filling out the Explanations tool, they were like, “Wait a second, we did this already.” There was also a little bit of disconnect on the Explanations tool in particular about the graphic; what they were doing on their graphic versus what they were doing in the written explanation. So this was another place where I felt like, “Gosh, I really needed to give sentence stems to kids.” (Post end-of-year interview) Second, Ms. Wei took the time to make a modification to her enactment of the tool by giving sentence stems to students to support them in expressing their ideas. Furthermore, Ms. Wei was able to articulate with precision the contents of the Expressing Ideas Tool: Like Expressing Ideas tool, you know, there were things like, “What goes in and out of this plant?” or, “What does the plant take in and how does it take those things?” And my struggling learners were kind of like, “I just don’t know what comes out of a plant. I really don’t know.” And even when they got to the place where they were able to draw arrows to show the exchange of gases, that didn’t necessarily transfer over to their explanation.
So I ended up having to do some sentence frames for them to help them with that. (Post end-of-year interview) Thus, I determined that there was sufficient evidence in the data to warrant describing in detail an occasion of sensemaking for Ms. Wei about the Expressing Ideas Tool. I did not search for occasions of sensemaking for teachers around the classroom practices because such occasions often overlapped with those for the Carbon TIME boundary objects. Defining an occasion of sensemaking. I used my conceptual framework for investigating sensemaking in Carbon TIME to define an occasion of sensemaking as including the following components: critical noticing, interactions among goals and resources (goals, practical knowledge, social communities), and outcomes of sensemaking (decisions and reflections). Figure 4 shows my model of teacher sensemaking, in which teachers’ critical noticing of interactions among the curriculum materials, their students, and themselves is influenced by teachers’ goals and resources and results in outcomes of sensemaking that can feed back into the goals and resources, particularly as teachers move across the boundary of PD and classroom enactment over time. In this model, teachers’ sensemaking is influenced by their vision of three-dimensional science teaching and learning and the presence (or absence) of a supportive school community and professional network. Figure 4. Model of teacher sensemaking My next step in conducting a systematic, in-depth analysis was to use this model to complete tables for all 26 occasions with evidence from the data set. Due to the semi-structured and co-constructed nature of the teacher-coach interviews, I decided that a sufficient description of an occasion of sensemaking did not have to include identification of all components. For example, if there was insufficient evidence of a teacher’s commitment to a particular social community, then I did not describe that component in the occasion of sensemaking.
In addition, I used teachers’ own words as much as possible in each description to give a sense of how the teacher was talking about it. Teachers’ words are placed within quotation marks and cited with the corresponding interview; therefore, identification of a component marked with quotations in each description of an occasion of sensemaking was tied directly to evidence in the coded data. An important aspect of sensemaking that affected how I conducted my analysis was the issue of time. Due to the nature of Carbon TIME curriculum implementation, with multiple units spread across a school year, an occasion of sensemaking could span several months. Teachers might engage in several cycles of planning for and enacting implementation but not be able to share their reflections on either of those processes until discussing them with their coach at a later time (when they participated in the end-of-unit or post interviews). Thus, my definition of what counts as an occasion of sensemaking is critical noticing that involves action situated in context over time, which could occur during planning for implementation, classroom enactment with students, or interviews with the case study coach (see Figure 5). Although researchers may differ in their definitions of action, for my purposes I took a broad view and considered action in the context of this study as “the process of doing something” related to teachers’ implementation of Carbon TIME units. Thus, I considered both making a modification to a boundary object and reflecting on (i.e., talking about) enactment of a boundary object in the classroom with students as actions in terms of teachers doing something. Figure 5.
What counts as an occasion of sensemaking Furthermore, I note that teachers’ critical noticing sometimes occurred when teachers were not with coaches, either during planning or during enactment; thus, in my analysis, I could infer only that critical noticing had taken place at some time prior to the interview, but I could not pinpoint exactly when it occurred. I marked these inferences as such in the tables for occasions of sensemaking by using italic text. At other times, critical noticing occurred in the moment during the interview, particularly when teachers and coaches were examining artifacts such as student work, video-recordings of classroom instruction, or transcriptions of classroom discourse. In these instances, I marked teachers’ words from the interview data with quotation marks and cited the corresponding interview. The order in which I identified the components of sensemaking was often: (1) identifying outcomes such as decisions and reflections that were shared during the teacher-coach interview; (2) then identifying the goals and resources influencing sensemaking that were tied to those outcomes; and (3) finally, making inferences about what teachers were critically noticing in that occasion. Rather than try to set up a second interpretive coding scheme in Dedoose, I marked data corresponding to components of sensemaking in the Word files of exported excerpts. Because I had already reduced the data that I needed to analyze, I could focus on data for particular boundary objects or classroom practices for each teacher. In addition, I used codes such as Concern to search for patterns in teachers’ concerns or Local Obligations to search for patterns in teachers’ obligations to local contexts. In the following sections, I describe in more detail how I analyzed the data for each of these components and provide illustrative examples. Outcomes of sensemaking.
Generally, I used discourse analysis (Gee, 2005; Johnstone, 2008) and qualitative analysis (Miles, Huberman, & Saldaña, 2014) techniques to identify words, phrases, or ways of talking about something that seemed significant to the teacher. To identify outcomes of sensemaking, I used Weick’s (2001) framework as a starting point to identify meaning in teachers’ talk: labels, or what teachers talked about, and metaphors, or how teachers talked about it. When analyzing labels and metaphors, I was conscious of Tannen and Wallat’s (1987) notion of polysemy, or multiple meanings. A statement could have multiple meanings, including the content of the statement, the social need it represents (e.g., the need for connection), and the norms for social interactions. I was also conscious of how people face multiple interpretations of their actions and may choose to ignore negative aspects while highlighting positive ones (Weick, 2005). That is, language is a performance of social identity (Gee, 2005), and people may choose to portray themselves in particular ways for particular reasons. In this study, performances of social identity occur as teachers enact the curriculum in their classrooms (i.e., perform the role of a science teacher for students) and talk about their enactment of the curriculum with their coaches (i.e., perform the role of a science teacher implementing Carbon TIME for researchers). Modifications. To identify modifications, I marked teachers’ talk about modifications to curriculum materials, which included phrases such as: “I ended up modifying the tool, I switched the order, I embedded the X tool into the Y tool, I did them in groups, I decided not to use them, I didn’t do it.” For example, Ms. Wei explained that she had modified a Process Tool: “And I think that Plants was where I ended up modifying the Evidence-based Arguments Tool to fit CER because we had just done the yeast lab” (Plants Unit interview).
Sometimes coaches would bring up modifications they had noticed in teachers’ classroom enactment and ask teachers to talk about them, including their reasoning for making the modification. Modifications were most often about adding or changing a curriculum material or about enacting the curriculum in ways that were different from the teacher’s guide. I found evidence of teachers’ modifications in excerpts coded with the Modifications code. Reflections. To identify reflections, I marked teachers’ talk about their serious thought or consideration of a topic. Although the topic of reflection in teaching and teacher education has a rich and extensive literature base (e.g., Clark & Lampert, 1986; Fishman, Marx, Best, & Tal, 2003; Korthagen & Vasalos, 2005; Schön, 1987; Shoffner, 2011), I took a broad view and considered reflection as “serious thought or consideration,” particularly in the sense of reflecting or “throwing back” things that had happened during planning and enactment. Thus, because reflections were about things that had already happened, they often involved teachers’ conclusions about something. Teachers could reflect that a tool was useful for a particular reason or that students’ low test scores were due to a particular factor. For example, Ms. Apol shared that “I like the Evidence-based Arguments [Tool] because, like what I’ve said, we’ve done a lot of Claim-Evidence-Reasoning writing” (Animals Unit interview). The key linguistic marker in this example is the word because, which signals a teacher’s reasoning or justification for an expressed belief. Sometimes a reflection captured changes in teachers’ thinking, such as articulating different feelings about something before and after they had implemented the unit. Finally, reflections could also be conclusions about any of the goals and resources that influence sensemaking: teachers’ goals, practical knowledge (including beliefs), and commitments to their social communities.
For example, Mr. Harris expressed a goal about the Process Tools: “I don’t want them [Process Tools] stapled because I want them [students] all to be able to see them at once” (Y1 follow-up interview). The key linguistic marker in this example is the phrase I want, which signals a teacher’s goals. Goals and resources that influence sensemaking. My definition of sensemaking includes three categories of goals and resources: teachers’ goals, practical knowledge, and commitments to social communities. The primary communities I was interested in were teachers’ classrooms (students), schools (colleagues and administrators), and Carbon TIME networks (coaches, teachers, and staff). Goals. To identify teachers’ goals, I marked teachers’ talk about things that they wanted or seemed to want. Often goal statements were explicit and easy to identify, starting with phrases like “I want/wish” or “I do not want/wish.” Sometimes goal statements were implicit and required more interpretation on my part, such as one teacher who stated that she thinks about how she is going to get students to a certain place in their thinking; I inferred that “getting students to a certain place” was her goal even though she had not stated it explicitly as such. Other goal statements were explicit, such as Mr. Ross’s goals for his students: “Like, as I was saying, that’s the goal of my class. I want you [students] to be able to evaluate the world around you” (Y1 follow-up interview). Practical knowledge. Using van Driel, Beijaard, and Verloop’s (2001) definition of practical knowledge as the integration of formal knowledge, experiential knowledge, and beliefs, I identified teachers’ practical knowledge most often by identifying their beliefs about something. Like teachers’ goals, beliefs were sometimes more or less explicit. Explicit statements of beliefs tended to start with phrases like “I believe” or “I think.” For example, Ms.
Wei expressed a belief about student engagement: “I think it’s really important to have some of these wow things for students so that they feel like, oh my gosh, this class is amazing” (Systems & Scale Unit interview). Implicit statements of beliefs required more interpretation on my part and were often identified through teachers’ reasoning about why they thought something. For example, a teacher thought that working one-on-one with students better supported their learning; although she did not state that explicitly as a belief, I inferred that her belief was “one-on-one better supports learning” because that was her reasoning for why she chose to do something in her classroom. Social communities. Weick’s (2001) framework also included platitudes, or norms that teachers appealed to in their justifications. In the context of organizational sensemaking, repeated justifications represent commitments to social relationships and are then identified as frames of reference, or ways that teachers view and reason about their teaching practices and curriculum enactment with their students. Explicit statements of commitments to different social communities often started with phrases like “I’m committed to” or “I talked with this person.” Explicit statements about obligations to social norms started with phrases like “I felt I had to” or “We’re expected to.” For example, Mr. Ross expressed obligations to his district: “Most of my anxiety actually just comes from district scheduling and district stuff that they tell me I have to do” (Systems & Scale Unit interview). The key linguistic marker in this example is the phrase I have to, which indicates an obligation. I found evidence of teachers’ commitments to their social communities in excerpts coded with codes such as Concern, Local Obligations, or All Others. Critical noticing.
Critical noticing is at the center of sensemaking and was the component I most often had to infer from my interpretations of the data, primarily because teachers did not necessarily state their critical noticing of something as such. Additionally, the interview protocols that we designed rarely asked teachers what they “noticed” about something. Thus, critical noticing was the last piece of an occasion of sensemaking that I identified and described, but every occasion of sensemaking includes a description of critical noticing. Despite these challenges, there were particular markers that I was able to use to identify teachers’ critical noticing. First, I searched for teachers’ expressions of surprise, particularly concerning students’ engagement and learning. These statements started with phrases like “I was surprised” or “Students were frustrated.” For example, Ms. Callahan expressed surprise about her students’ assessment results: “It still surprised me whenever they were asked to elaborate, to describe, to write out some of these processes that they will miss key points” (Animals Unit interview). Ms. Callahan used this expression multiple times in this interview and talked at length and in detail about her students’ test results, which further strengthened my inference that she was critically noticing her students’ interactions with the Pre- and Post-tests. I inferred that these statements were critical noticing because teachers were able to articulate their surprise, or reaction to something unexpected. My rationale is that we may critically notice things that are unexpected rather than expected and thus engage in sensemaking about them. Implicit statements of critical noticing started with phrases like “It was challenging for me to do X” or “I struggled with Y.” With these types of statements, I inferred that teachers were stating what they had critically noticed about an unexpected or challenging event. For example, Ms.
Eaton expressed her struggles with the Pre-Test: “I really struggled with getting their Pre-Test information to help guide my teaching. This [Process Tools] really helped guide my teaching, the other Tools helped guide my teaching” (Systems & Scale Unit interview). In this example, the key linguistic marker is I really struggled, and I inferred that Ms. Eaton must have critically noticed her own reaction to students’ Pre-Test results. Finally, I differentiated between critical noticing about the curriculum, students, or the teacher. Sometimes the focus was on all three, for example, when teachers were critically noticing their own reactions to students’ interactions with the curriculum materials. For example, I inferred that Mr. Harris critically noticed his own reaction to students’ results on the Post-Test: “And I want our kids to get good grades… but with these [Tests] it was like some of these were really simple that they would still get wrong and the—but they’re foundational to science… I like that—that it opens our eyes to the things that we all think that they know, and they don’t” (Systems & Scale Unit interview). A linguistic marker of Mr. Harris’s critical noticing of his own reaction was the phrase it opens our eyes; a linguistic marker of his critical noticing of students’ interactions with the Post-Test was the phrase they would still get wrong. Sometimes there were two foci, as when teachers were critically noticing the interaction of students with the curriculum. Finally, teachers’ critical noticing could also focus on any one of these alone. Analyzing insufficient evidence of sensemaking. I did not describe in detail occasions for which there was insufficient evidence of sensemaking in this data set. However, because the interview protocols asked teachers about their interactions with boundary objects and their classroom practices, all teachers had something to say about each of the boundary objects.
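The marker-based searches described in the preceding sections can be summarized as a simple pattern-matching routine. The sketch below is my own illustrative reconstruction, not a tool used in the study (the actual identification was interpretive, done by hand in the exported Word files); the marker phrases are drawn from the examples given above.

```python
import re

# Illustrative only: marker phrases for each component of sensemaking,
# taken from the examples discussed in this chapter.
MARKERS = {
    "goal":       [r"\bI want\b", r"\bI wish\b"],
    "belief":     [r"\bI believe\b", r"\bI think\b"],
    "obligation": [r"\bI have to\b", r"\bI felt I had to\b",
                   r"\bwe'?re expected to\b"],
    "noticing":   [r"\bI was surprised\b", r"\bsurprised me\b",
                   r"\bI (?:really )?struggled\b",
                   r"\bit was challenging\b"],
}

def tag_markers(excerpt):
    """Return the set of sensemaking components whose marker phrases
    appear in an interview excerpt (case-insensitive)."""
    found = set()
    for component, patterns in MARKERS.items():
        if any(re.search(p, excerpt, re.IGNORECASE) for p in patterns):
            found.add(component)
    return found
```

Such a routine can only surface candidate excerpts; deciding whether a flagged statement actually constitutes a goal, belief, obligation, or instance of critical noticing still requires interpretation in context, as the implicit examples above make clear.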
Thus, I did some interpretive coding of teachers’ talk by identifying reasons why a boundary object did not seem to be an occasion of sensemaking for them. Triangulation with other data sources. For modifications to curriculum materials, I used artifacts as a secondary data source, and for reflections on planning and/or enactment, I used video-recordings of classroom enactment (when available) to confirm that teachers had modified or enacted the curriculum in the ways they said they had during the interviews. Data from the online PD were sparse; what I found in that data source mostly corroborated the interview data and did not provide new understandings of teachers’ sensemaking. Triangulating among data sources contributed to the reliability of the interview data. Summary In this chapter, I have described my research design, data collection, and data analysis methods. I took a qualitative approach and used a comparative case study design to explore teachers’ sensemaking about their implementation of an innovative science curriculum. Data sources included primarily transcriptions of teacher interviews, teacher-created or teacher-modified artifacts, and video-recordings of classroom instruction. Data analysis used qualitative and discourse analysis methods and a combination of pragmatic, conceptual, and theoretical approaches to searching for occasions of sensemaking, including numerical and content analyses of teachers’ talk and identification of goals and resources and outcomes of sensemaking.
The results of my analyses were: (1) identification of teachers’ occasions of sensemaking, including sensemaking about Carbon TIME boundary objects of interest; (2) descriptions of occasions of sensemaking, including goals and resources, outcomes, and foci of teachers’ critical noticing; (3) descriptions of how teachers’ sensemaking is influenced by their social commitments to various communities; and (4) descriptions of how the outcomes of teachers’ sensemaking contribute to feedback loops that can result in teacher learning of rigorous and responsive science teaching practices associated with Carbon TIME boundary objects.

Chapter Four

Findings

In this chapter I present findings related to teachers’ sensemaking about their implementation of an innovative science curriculum across the settings of PD and classroom enactment over time. Carbon TIME researchers designed the curriculum, student assessments, and instructional resources to support secondary science teachers and their students with engagement in rigorous and responsive science teaching and three-dimensional science learning. The larger curriculum implementation research project within which this study was situated was designed using research-based principles of PD, such as engaging teachers with content using active learning strategies and building a networked community of support. Teachers who volunteered to participate in the project for two years were provided with material resources, such as equipment and supplies, and human resources, such as a cohort model of networked Carbon TIME colleagues. Data for this study were obtained from the first cohort’s first year of implementation. The research literature shows that teachers take up PD in different ways and that PD is often not as “effective” as PD providers would hope.
With the adoption of the Next Generation Science Standards, PD providers and curriculum and assessment developers face the challenge of how to support teachers in shifting their teaching practices from more traditional methods to a new and different vision of rigorous and responsive science teaching that attends to three-dimensional science learning. This shift can be especially challenging for experienced teachers whose practical knowledge developed over many years under the old regime, when science content was often divorced from process skills. Therefore, even with all of the PD support and continual engagement with the Carbon TIME Project over time, teachers may not take up ideas or practices from PD in ways that are productive for learning how to teach in this new way. Carbon TIME researchers hoped that a network of colleagues implementing the same curriculum would provide teachers with an opportunity to gain a new perspective on science teaching and learning, with the possibility of supporting teacher learning of new practices. This study aimed to address these issues by examining teachers’ sensemaking about their implementation of Carbon TIME, privileging the classroom enactment setting as an important site of learning for teachers because of the extensive amount of time that teachers spend in classrooms with their students and in schools with their colleagues and administrators. Even so, the hope was that teachers’ less frequent interactions with Carbon TIME researchers and colleagues would be important experiences that could influence their sensemaking about implementation in ways that would help them make progress towards learning rigorous and responsive science teaching practices. In my study, I define sensemaking as critical noticing that involves action situated in context, and I use this definition to explore teachers’ sensemaking about their implementation of Carbon TIME.
One affordance of using organizational sensemaking was the focus on how crossing boundaries between organizations with different purposes and goals, such as between PD and classrooms, elucidates features of those organizational settings that influence teachers’ sensemaking. My research questions were:
(1) What are patterns in teachers’ occasions of sensemaking? (a) What are the goals and resources that influence teachers’ sensemaking? (b) What are the outcomes of teachers’ sensemaking?
(2) How do teachers’ commitments to social relationships in various communities influence their sensemaking?
(3) How do the outcomes of teachers’ sensemaking contribute to feedback loops that support teachers in making progress toward learning of rigorous and responsive science teaching practices?
To answer these questions, I used discourse analysis and qualitative analysis methods to examine 40 interview transcriptions from eight case study teachers. I focused my analysis on Carbon TIME boundary objects, which, by definition, appeared in both the PD and classroom enactment settings but with potentially different meanings because of differences in the participants present and the purposes for engaging in sensemaking. Additionally, I focused my analysis on two classroom practices related to those boundary objects: discourse, or students’ talk and writing, and teachers’ grading and assessment practices. By identifying and describing occasions of sensemaking for the secondary science teachers in this study, I was able to analyze patterns across cases, looking for similarities and differences in what teachers were engaged in sensemaking about, how they were engaged in sensemaking about it, and why. I chose to focus my analysis on Carbon TIME boundary objects because they had the potential to trigger teachers’ sensemaking: they required teaching practices that may have differed from what teachers were used to.
For example, a teacher who was not used to having students express their ideas about phenomena might engage in sensemaking about the Expressing Ideas Tool because they were puzzled by their students’ reactions to it. After identifying patterns, I then connected these patterns to the question of whether participating in the Carbon TIME curriculum implementation project, with its associated PD and designed network, triggered occasions of sensemaking in ways that were productive or unproductive, with productive defined as sensemaking that helped teachers make progress towards learning rigorous and responsive science teaching practices associated with those selected boundary objects. Of the 48 potential occasions of sensemaking based on numerical analyses, I found sufficient evidence of sensemaking in the data to warrant identification of 23 occasions, with variation across cases in terms of number of occasions and objects of sensemaking. Furthermore, I found that the nature of teachers’ sensemaking, which included interactions among teachers’ goals and resources (goals, beliefs, and social commitments) for engaging in sensemaking about particular objects, was more important to understand than what they were engaged in sensemaking about. These occasions of sensemaking could encompass long time spans (e.g., several months) as teachers engaged in multiple rounds of planning for and reflecting on enactment while also moving across the boundary between PD and classrooms. Finally, I found that in all cases teachers’ social commitments to their various communities, particularly their local school obligations, influenced their sensemaking, but not always in the same way.
In the first major section of this chapter, I present findings from the process of identifying occasions of sensemaking, including the following results: (1) numerical analyses of teachers’ talk to identify potential occasions of sensemaking; (2) content analyses of teachers’ talk to identify occasions that warranted in-depth description, based on conceptual and theoretical approaches to searching for occasions of sensemaking, including sufficient and insufficient evidence of sensemaking in the data; and (3) patterns of sensemaking within and across cases to illustrate variations in teachers’ occasions of sensemaking. In the second major section, I use my model of teacher sensemaking to describe patterns across teachers’ occasions of sensemaking and present narrative descriptions of specific occasions of sensemaking that exemplify these patterns, including components of sensemaking such as goals and resources, outcomes, and teachers’ critical noticing. I situate these occasions of sensemaking in teachers’ ecologies of practice to illustrate how implementing Carbon TIME acted as a disturbance, thereby triggering teachers’ sensemaking. These narrative descriptions use teachers’ own words as much as possible in order to provide readers with a sense of how teachers were talking about their experiences implementing the curriculum. Using the narrative descriptions of selected occasions of sensemaking, I answer the second research question by describing how teachers’ social commitments to various communities influenced their sensemaking, and I answer the third research question by describing how feedback loops in teachers’ sensemaking could lead to teacher learning of rigorous and responsive science teaching practices. To support these findings, I highlight occasions of sensemaking in which the data show that teachers’ beliefs, practices, or goals were either shifting or staying the same over time.
Finally, I end the chapter with a synthesis of the findings that groups teachers into four categories based on the nature of their sensemaking. Identifying Occasions of Sensemaking The first findings that I present are related to identifying occasions of teachers’ sensemaking. What were teachers engaged in sensemaking about, and why? First, I conducted systematic numerical analyses of the data to identify potential occasions of sensemaking; then, I combined a conceptual and theoretical approach to reduce the number of potential occasions to those that warranted more in-depth description based on content analysis of sufficient evidence of sensemaking in the data. In the following sections, I describe findings from the process of identifying occasions of sensemaking, then I describe patterns of sensemaking within and across cases. I found sufficient evidence that all the case study teachers except Ms. Barton were engaged in sustained sensemaking focused on at least two of the six Carbon TIME boundary objects. Finally, I discovered that in the process of exploring teachers’ sensemaking, it was also important for me to understand what teachers were not engaged in sensemaking about, and why. Thus, I also report findings related to what teachers said about boundary objects when there was insufficient evidence of teachers’ sensemaking. This enabled me to develop a more complete picture of how individual teachers understood and used boundary objects, and why. Results of Numerical Analyses: Identifying Potential Occasions of Sensemaking The first step in identifying potential occasions of sensemaking was to conduct a systematic numerical analysis of excerpts coded with teachers’ talk about boundary objects. In addition to a pragmatic decision to have a sufficient quantity of text in which to search for linguistic markers of sensemaking, I assumed that teachers would spend more time talking about what they were engaged in sensemaking about.
Therefore, quantifying the amount of teachers’ talk about Carbon TIME boundary objects and associated classroom practices served as a rough approximation of teachers’ occasions of sensemaking. The descriptive coding scheme identified teachers’ talk about boundary objects, their classroom practices, their professional networks and local obligations, and their concerns. The results of the descriptive coding for all codes and all teachers are shown in Table 35 in APPENDIX D. Briefly, results showed that patterns across cases were similar because the interview protocols elicited teachers’ talk about the same topics. For example, a high number of excerpts were coded for STUD (Students) because teachers often talked about students’ learning, engagement, or attributes in their reflections on classroom enactment. The highest numbers of excerpts coded for STUD were for Ms. Wei and Ms. Nolan at 75 and 73 excerpts, respectively. On the other end, the lowest numbers of excerpts coded for STUD were for Ms. Barton and Ms. Eaton at 33 and 34 excerpts, respectively. Some codes had no associated excerpts; for example, COBL (Carbon TIME obligations) was not applied to any excerpts for Mr. Ross, Ms. Apol, or Ms. Eaton. Because excerpts varied in length, however, a more accurate measure of time spent talking about a topic was the total number of words in all excerpts for a particular code. The results of that analysis, which included only the codes for boundary objects and the two classroom practices (Discourse and Grading and Assessment), are shown in Table 37 in APPENDIX D. I constrained the analyses to these codes because codes such as Concern, Students, or Modifications co-occurred with codes for boundary objects and classroom practices. For example, a total of 48 excerpts (among all teachers) were coded for both Students and Pre- and Post-Tests; 62 excerpts were coded for both Students and Discourse; and 43 excerpts were coded for both Students and Grading and Assessment.
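Co-occurrence counts like these can be computed directly from the coded excerpts. The sketch below is my own illustrative reconstruction under simplifying assumptions, not the procedure as actually carried out in Dedoose; the code labels and the example excerpts are hypothetical, following the abbreviations used in this chapter.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(excerpt_codes):
    """Count, for every pair of codes, how many excerpts carry both.
    excerpt_codes is a list with one set of code labels per excerpt."""
    counts = Counter()
    for codes in excerpt_codes:
        for pair in combinations(sorted(codes), 2):
            counts[pair] += 1
    return counts

# Hypothetical coded excerpts (one set of codes per excerpt).
excerpts = [
    {"STUD", "PPTS"},          # students and the Pre- and Post-Tests
    {"STUD", "DISC"},          # students and discourse
    {"STUD", "PPTS", "CONC"},  # students, the tests, and a concern
]
pair_counts = cooccurrence_counts(excerpts)
```

Here `pair_counts[("PPTS", "STUD")]` would give the number of excerpts coded with both Students and Pre- and Post-Tests, which is the kind of figure reported above.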
Furthermore, Students was applied broadly and therefore could capture teachers’ talk about anything related to students, not necessarily any of the boundary objects or classroom practices of interest. Variation across cases. To get a sense of individual teachers’ amount of talk relative to others, I calculated the ratio of each teacher’s talk about boundary objects and classroom practices to the mean across teachers (see Table 10). Overall, Ms. Barton, Ms. Apol, Ms. Wei, and Ms. Nolan’s total amounts of talk were below the mean; Mr. Harris, Ms. Callahan, and Ms. Eaton’s total amounts of talk were above the mean; and Mr. Ross’s total amount of talk was near the mean at a value of 1.02. There was variation both across and within teachers for particular boundary objects or classroom practices. For example, Ms. Apol’s ratio of talk about the Predictions Tool was 3.05, which means that she talked about the Predictions Tool about three times as much as the average case study teacher. Thus, I identified this boundary object as a potential occasion of sensemaking for Ms. Apol. Other similar cases included the Predictions Tool for Ms. Callahan, the Evidence-based Arguments Tool for Mr. Harris, the Instructional Model for Ms. Nolan and Mr. Ross, and the Explanations Tool for Ms. Eaton and Ms. Wei. Of note is that Ms. Barton was the only case study teacher who talked a lot about discourse (highest ratio at a value of 1.49) without also having a higher than average ratio for any of the boundary objects. This result meant that Ms. Barton was talking about discourse, or students’ writing and talking, without also talking a lot about the Carbon TIME boundary objects associated with supporting classroom discourse (i.e., all the Process Tools).
Table 10

Ratio of Individual Teachers’ Talk to the Mean About Boundary Objects & Classroom Practices

Code      Ross   Harris  Callahan  Barton  Apol   Wei    Nolan  Eaton
INMO      1.77   0.52    1.39      0.72    0.39   0.72   2.07   0.42
EXPR      1.00   0.47    1.73      0.48    0.78   1.20   1.48   0.87
PRED      0.22   0.56    2.59      0.84    3.05   0.33   0.06   0.35
EBAT      0.25   2.03    0.19      0.20    1.36   0.90   1.69   1.36
EXPL      0.89   0.38    1.07      0.32    0.37   1.76   1.24   1.97
PPTS      0.81   1.77    1.74      0.23    0.74   0.49   0.90   1.32
DISC      1.09   1.00    0.88      1.49    1.38   0.39   0.73   1.04
GRAD      1.33   1.15    0.93      1.17    0.58   0.58   0.57   1.69
Overall   1.02   1.14    1.13      0.86    0.92   0.69   0.92   1.33

Note. Bolded text indicates the two highest ratios for each boundary object or classroom practice.

Variation within cases. To get a sense of variation within a case, I calculated the percentage of individual teachers’ talk about each boundary object and classroom practice relative to their total talk. The results showed that some teachers had very little talk about some boundary objects. For example, Mr. Ross, Ms. Nolan, and Ms. Eaton’s talk about the Predictions Tool was 1% or less of their total talk. Unsurprisingly, teachers’ talk about the two classroom practices—Discourse and Grading and Assessment—was in the double digits for all cases because the coding for classroom practices was broader than for boundary objects. Combined results for numerical analyses. Combining the two numerical analyses (ratios of teachers’ talk relative to other teachers and percentages of talk within cases) yielded a total of 41 potential occasions of sensemaking (see Table 11). Justifications were based on a ratio indicating a higher than average number of words as compared to other teachers (a ratio of 1.00 or higher) and/or a double-digit percentage of the individual teacher’s talk.
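The two justification criteria amount to a simple decision rule over word counts per teacher and code. As a minimal illustrative sketch only: the teacher names below are the study’s, but the word counts are invented for demonstration, not the actual coded transcript data. A ratio to the mean of at least 1.00 earns an n; a double-digit within-case percentage earns a p.

```python
# Hypothetical word counts per (teacher, code); real values came from
# descriptive coding of interview transcripts.
word_counts = {
    "Apol":     {"PRED": 2440, "DISC": 3100, "GRAD": 1300},
    "Callahan": {"PRED": 2070, "DISC": 1980, "GRAD": 2090},
    "Ross":     {"PRED": 180,  "DISC": 2450, "GRAD": 2990},
}

def ratio_to_mean(counts, teacher, code):
    """Teacher's word count for `code` divided by the mean count across teachers."""
    mean = sum(t[code] for t in counts.values()) / len(counts)
    return counts[teacher][code] / mean

def percentage_within_case(counts, teacher, code):
    """The code's share (in percent) of the teacher's total coded talk."""
    total = sum(counts[teacher].values())
    return 100 * counts[teacher][code] / total

def justifications(counts):
    """Flag potential occasions: 'n' if ratio >= 1.00, 'p' if percentage >= 10."""
    flagged = {}
    for teacher, per_code in counts.items():
        for code in per_code:
            marks = ""
            if ratio_to_mean(counts, teacher, code) >= 1.00:
                marks += "n"
            if percentage_within_case(counts, teacher, code) >= 10:
                marks += "p"
            if marks:  # only record cells with at least one justification
                flagged[(teacher, code)] = marks
    return flagged
```

A cell can thus carry n, p, or both, mirroring the n, p entries in Table 11; cells with neither mark are not treated as potential occasions of sensemaking.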
Table 11

Justifications for Identifying Occasions of Sensemaking About Boundary Objects & Classroom Practices Based on Numerical Analyses of Amount of Teacher Talk

Code    Ross   Harris  Callahan  Barton  Apol   Wei    Nolan  Eaton  Total
INMO    n      -       n         -       -      -      n, p   -      3
EXPR    n      -       n         -       -      n      n      -      4
PRED    -      -       n         -       n, p   -      -      -      2
EBAT    -      n, p    -         -       n, p   p      n, p   n      5
EXPL    -      -       n, p      -       -      n, p   n, p   n, p   4
PPTS    p      n, p    n, p      -       p      p      p      n, p   7
DISC    n, p   n, p    p         n, p    n, p   p      p      n, p   8
GRAD    n, p   n, p    n, p      p       p      p      p      n, p   8
Total   5      4       7         2       5      6      7      5      41

Note. n = justification based on higher than average number of words as compared to other teachers; p = justification based on double digit percentage of talk for the individual teacher.

In summary, there was variation across and within cases. I found the highest number of potential occasions for Ms. Callahan and Ms. Nolan at 7 occasions each. Next, Ms. Wei, Ms. Eaton, Mr. Ross, Ms. Apol, and Mr. Harris had between four and six occasions each. Ms. Barton had the lowest number of potential occasions at 2, both for classroom practices. Unsurprisingly, in all cases, teachers talked a lot about their discourse and their grading and assessment practices. Among the Carbon TIME boundary objects, the Pre- and Post-Tests had the highest number of potential occasions at 7, and the Predictions Tool the lowest at 2.

Results of Content Analysis: Identifying Occasions of Sensemaking

Next, in order to reduce these 41 potential occasions of sensemaking to a smaller number for in-depth analysis and description, I combined a conceptual and theoretical approach to analyze the contents of the excerpts for sufficient evidence of sensemaking, including evidence of components of sensemaking such as teachers’ goals, practical knowledge, social communities, critical noticing, and outcomes (see Chapter Three for examples of how I determined sufficient and insufficient evidence of sensemaking). The results of that analysis are shown in Table 12.
Of the 41 potential occasions of sensemaking based on the numerical analysis, I found 26 that warranted more in-depth description based on the content analysis. Tables showing representative excerpts for sufficient and insufficient evidence of teachers’ sensemaking about each of the boundary objects are located in APPENDIX E. I did not conduct analyses of the classroom practices because those codes co-occurred with the boundary objects codes; excerpts that did not co-occur were about practices not associated with these boundary objects.

Table 12

Identification of an Occasion of Sensemaking Based on Sufficient Evidence in Content Analysis

Code      INMO  EXPR  PRED  EBAT  EXPL  PPTS  Other  Total
Ross      X     -     -     -     -     X     -      2
Harris    -     -     -     X     -     X     -      2
Callahan  X     X     X     -     -     X     X      5
Barton    -     -     -     -     -     -     X      1
Apol      -     -     X     -     -     X     -      2
Wei       X     X     -     X     X     X     X      6
Nolan     X     X     -     X     X     X     -      5
Eaton     -     -     -     X     X     X     -      3
Total     4     3     2     4     3     7     3      26

From lowest to highest number of occasions of sensemaking by teacher, the results show: Ms. Barton with 1 occasion; Mr. Ross, Mr. Harris, and Ms. Apol with 2 occasions each; Ms. Eaton with 3 occasions; Ms. Callahan and Ms. Nolan with 5 occasions each; and Ms. Wei with 6 occasions. For teachers who had fewer occasions of sensemaking (particularly Ms. Barton, Mr. Ross, and Mr. Harris), the data show sustained sensemaking about a topic over time; thus, fewer occasions of sensemaking did not necessarily indicate less sensemaking overall. One pattern to note across cases is that most of the teachers, including all the high school teachers, were engaged in sensemaking about the Pre- and Post-Tests, particularly within the context of their grading and assessment practices. In contrast, only two teachers had sufficient evidence of sensemaking about the Predictions Tool (Ms. Callahan and Ms. Apol), which indicated that the Predictions Tool was not an occasion of sensemaking for most of the teachers. Other occasions of sensemaking.
Using a theoretical approach based on Weick’s (1995) definition of sensemaking, I identified Other occasions that seemed significant for teachers. Ms. Barton spent a lot of time talking about discourse mostly unrelated to the boundary objects, so I identified an occasion of sensemaking for her about discourse; Ms. Callahan and Ms. Wei made modifications to curriculum materials that had what I considered strong evidence of being driven by social commitments to their school communities, so I identified occasions of sensemaking for them about those modifications. Although I primarily used amount of talk to systematically search for occasions of sensemaking, I also identified these two occasions for Ms. Callahan and Ms. Wei based on their importance to my conceptual framework for investigating teachers’ sensemaking: they were illustrative examples of sensemaking driven by social commitments in school settings. These two occasions had extensive amounts of talk that were not necessarily captured in the descriptive coding for boundary objects or classroom practices. The findings related to identifying occasions of sensemaking indicated that, of the 48 potential occasions of sensemaking (if every case study teacher were to engage in sensemaking about all six boundary objects), there was sufficient evidence in the data to warrant identification of 23 occasions of sensemaking about Carbon TIME boundary objects. Taking a broad view, then, the landscape of teachers’ sensemaking about Carbon TIME boundary objects is visualized in Figure 6, with larger text size corresponding to more occasions of sensemaking. Figure 6.
The landscape of teachers’ sensemaking about Carbon TIME boundary objects

The findings show that, over the course of a Carbon TIME unit that follows the Instructional Model, teachers were engaged in sensemaking about the Instructional Model itself and the Process Tools (Expressing Ideas, Predictions, Evidence-based Arguments, and Explanations) that were designed to support classroom discourse around the unit investigations. At the end of a unit, all teachers except Ms. Barton were engaged in sensemaking about their students’ responses on the unit Post-Test (see Tables 49 and 50 in APPENDIX E). One pattern of sensemaking across cases was the high number of occasions of sensemaking about the Carbon TIME Pre- and Post-Tests: teachers’ sensemaking focused on the wording of the tests and on inconsistencies between students’ responses on the forced-choice questions and their written explanations. This pattern was not entirely unexpected given that teachers administered an overall Pre- and Post-Test (covering all six Carbon TIME units, whether or not teachers taught those units) and individual unit Pre- and Post-Tests. Thus, students took a total of eight tests over the course of implementing three Carbon TIME units. The first goal of this study was to identify occasions of teachers’ sensemaking. The findings presented thus far indicate the landscape of teachers’ sensemaking, but what specifically were teachers engaged in sensemaking about, and why? What were they not engaged in sensemaking about, and why? The answers to these questions are important in terms of addressing issues in science education about how best to support teachers to engage students in three-dimensional science learning. To answer these questions, I next present findings from content analyses of sufficient and insufficient evidence of sensemaking in the data. Sufficient evidence of sensemaking.
Table 13 shows a summary of teachers’ sensemaking about Carbon TIME boundary objects based on content analyses of sufficient evidence of sensemaking in the data, organized from highest to lowest number of occasions.

Table 13

Summary of Teachers’ Sensemaking About Carbon TIME Boundary Objects

Pre- and Post-Tests
  Case study teachers: Ross, Harris, Callahan, Apol, Wei, Nolan, Eaton
  Foci of sensemaking: MS teachers: wording and vocabulary on the tests. HS teachers: students’ inconsistencies between forced-choice and written explanations, precision of language, and knowledge of foundational concepts

Instructional Model
  Case study teachers: Ross, Callahan, Wei, Nolan
  Foci of sensemaking: Horizontal and vertical structure of the Instructional Model, particularly going back down the triangle or pyramid

Evidence-based Arguments Tool
  Case study teachers: Harris, Wei, Nolan, Eaton
  Foci of sensemaking: Supporting students in using evidence from the investigations to construct arguments and identifying Unanswered Questions, and ordering of the columns for Evidence and Conclusions

Expressing Ideas Tool
  Case study teachers: Callahan, Wei, Nolan
  Foci of sensemaking: Student engagement, willingness to be “wrong” in expressing their initial ideas about natural phenomena, and growth over time

Explanations Tool
  Case study teachers: Wei, Nolan, Eaton
  Foci of sensemaking: Modification of the Tool or enactment to support students in constructing explanations of phenomena

Predictions Tool
  Case study teachers: Callahan, Apol
  Foci of sensemaking: Supporting students in being okay with not knowing and being wrong and showing growth in learning across the unit

Results show that the high school teachers were engaged in sensemaking about students’ responses on the Post-Test, whereas the middle school teachers except Ms. Barton were more engaged in sensemaking about the wording on the tests. For example, both Ms. Apol and Ms. Eaton were surprised by their students’ test scores and attributed the low scores to wording that was unfamiliar to middle school students (see Table 49 in APPENDIX E). In her Y1 follow-up interview, Ms.
Apol stated that she was “shocked at how poorly some of them did” and that she didn’t think “it reflects what they know” because “they’re different tests than they’ve ever taken before, not just it being online, but the way they’re worded.” There was no evidence, however, that Ms. Apol decided to change her enactment around the tests. Rather, she supported her reasoning that the Carbon TIME tests were “not the type of test” her students were used to taking by citing another Carbon TIME teacher from the mid-year face-to-face PD session: “Somebody said it in our Carbon TIME workshop in February that these tests are nothing like any other tests 7th graders take” (Plants Unit interview). In contrast, all the high school teachers critically noticed their students’ responses on the tests, particularly the Post-Test, albeit in slightly different ways (see Table 48 in APPENDIX E). For example, Mr. Harris expressed frustration that his students were doing poorly on the foundational knowledge questions on the Post-Test. After teaching the Systems & Scale Unit, he stated that it was “very disheartening to see that [kids getting foundational knowledge like atoms last forever wrong].” Similarly, Mr. Ross thought that his students’ responses on the “essays,” or written explanations, showed better understanding than the “all,” “some,” or “none” forced-choice questions. And Ms. Wei noticed that her students were “still holding onto” particular ideas on the Post-Test, such as plants getting their mass from the soil. After teaching the Plants Unit, she reflected that “even though like, in some of their responses, they would actually circle Plants get their mass from the air and from water. But when they wrote out their explanations, they said, Plants get their mass from soil and nutrients.” Thus, like the other high school teachers, Ms. Wei was also engaged in sensemaking about inconsistencies in students’ responses on the forced-choice questions and written explanations.
The next highest number of occasions of sensemaking was for the Instructional Model. Four teachers were engaged in sensemaking about the vertical and horizontal structure of the Instructional Model (see Table 38 in APPENDIX E), which is an important Carbon TIME boundary object because it provides teachers with a complete view of the trajectory of a unit, including how the other boundary objects (Process Tools and Pre- and Post-Tests) fit into the unit. The Instructional Model was discussed extensively at the Summer 2015 face-to-face PD session: researchers described where each Process Tool fit into the inquiry-application sequence to support classroom discourse around students expressing ideas, making predictions, using evidence from the investigations to construct arguments, and constructing explanations. The Instructional Model has a vertical component showing observations, patterns, and models (from more concrete evidence to more abstract representations) and a horizontal component showing the inquiry-application sequence, with the apex of the triangle marking the turning point in the unit. The results show that three of the four high school teachers were engaged in sensemaking about the structure of the Instructional Model (which teachers often referred to as the triangle, pyramid, or mountain). For example, although Mr. Ross said that he used the Instructional Model to help him organize what he was thinking and doing, he critically noticed that students were “having a hard time on the back slope, like, when they’re supposed to apply” (Y1 follow-up interview). Ms. Wei was confused about the levels of the triangle and stated that she wanted to make the sequence “linear in some way” (Y1 follow-up interview). Next, all the teachers were engaged in sensemaking about supporting students’ engagement in or completion of the Process Tools.
For example, there was sufficient evidence in the data that four teachers were engaged in sensemaking about the Evidence-based Arguments Tool (see Table 44 in APPENDIX E), which was the third tool in the Instructional Model and supported students in constructing arguments based on evidence from the investigation. The three columns in the Tool are: (1) Class Evidence (what patterns did we find in our class evidence about each of the Three Questions?), (2) Conclusions (what can we conclude about each of the Three Questions using this evidence?), and (3) Unanswered Questions (what do we still need to know in order to answer each of the Three Questions?). Two of the teachers—Ms. Wei and Mr. Harris—switched the columns so that Conclusions came before Evidence. Mr. Harris did not provide a reason in the available data, but Ms. Wei stated that she modified the Evidence-based Arguments Tool “to fit CER (claim-evidence-reasoning framework)” (Plants Unit interview). Finally, there was sufficient evidence in the data that two teachers were engaged in sensemaking about the Predictions Tool (see Table 42 in APPENDIX E), which supports students in making macroscopic-scale predictions about what they will observe in the unit investigations. The Tool differentiates between matter movement (what will gain or lose mass due to movement of matter?), matter change (how will matter changes in this system affect CO2 in the air and the color of the BTB, an indicator for CO2?), and energy change (what evidence of energy change will you be able to observe?). For example, Ms. Apol critically noticed that, unlike in the Carbon TIME investigations, she had not always started her labs with predictions. In her Post interview, she stated that “I think it [Carbon TIME implementation] is changing the way I think—I don’t know how fast it’s going to happen.
Like, with NGSS, I think kids have to get used to being able to know that it’s okay to be wrong.” Her insight was that, “without the Predictions Tool, you have nothing to compare it [the Evidence-based Arguments Tool]… that Prediction Tool becomes important once they do the Evidence-based Argument and their explaining” (Y1 follow-up interview). Ms. Apol’s reflections indicate that she was thinking about the tools in relation to each other and engaged in sensemaking about how making predictions earlier in the unit would support her students later in seeing how much growth they had made during the unit. In summary, these findings provide a broad overview of what teachers were specifically engaged in sensemaking about. Generally, teachers were critically noticing features of the boundary objects, such as the columns of the Evidence-based Arguments Tool, and their students’ interactions with the boundary objects, such as students’ struggles with wording on the tests or with expressing their ideas about phenomena using the Expressing Ideas Tool. However, to obtain a more complete picture of teachers’ sensemaking, it was just as important to understand what teachers were not engaged in sensemaking about, and why. Therefore, in the next section I present findings about potential occasions of sensemaking for which there was insufficient evidence of sensemaking in the data. Teachers’ comments about boundary objects when there was insufficient evidence of sensemaking. The PD sessions that focused on teachers’ use of Carbon TIME boundary objects were designed to support teachers in sensemaking about how to engage students in three-dimensional science learning. Although all the boundary objects were essential to the curriculum in meeting that goal, it was reasonable to assume that teachers may not have engaged in sensemaking about all of them, and it is important to know why.
Therefore, I conducted content analyses and found four reasons why teachers did not engage in sensemaking about Carbon TIME boundary objects: (1) they expressed liking and/or perceptions of understanding it; (2) they expressed neutral feelings about it; (3) they noticed a problem but there was no evidence of extended talk about it; or, (4) they noticed a problem and gave a reason for not trying to engage in sensemaking about it (see Table 14).

Table 14

Teachers’ Comments About Boundary Objects When There Was Insufficient Evidence of Sensemaking in the Data

Boundary Object                (1) Liked and/or      (2) Neutral        (3) Problem, no    (4) Problem, reason
                               understood it         feelings           extended talk      for not engaging
Instructional Model            Harris, Apol, Eaton   --                 --                 Barton
Expressing Ideas Tool          Ross, Barton, Apol,   Harris             --                 --
                               Eaton
Predictions Tool               Barton, Eaton         Ross, Wei, Nolan   Harris             --
Evidence-based Arguments Tool  --                    Ross, Callahan     --                 Barton, Apol
Explanations Tool              Ross, Callahan,       --                 Harris             --
                               Barton, Apol
Pre- and Post-Tests            Barton                --                 --                 --

Note. Column numbers correspond to the four reasons listed above.

All of these reasons could be problematic in terms of potential for teacher learning. For example, in the first and second categories, teachers’ perceptions of understanding a boundary object, liking it, or feeling neutral about it could inhibit productive sensemaking because teachers would not see a need to spend time sensemaking about something they thought they already understood. Of the 25 potential occasions that had insufficient evidence of sensemaking, 14 (or 56%) were due to teachers’ expressions of liking and/or perceptions of understanding it. For this category, I note that it was possible for teachers to both like and engage in sensemaking about a boundary object (for example, see Ms. Nolan and the Expressing Ideas Tool in Table 40 in APPENDIX E).
However, to be identified in this category, a potential occasion of sensemaking had to have evidence only of teachers’ expressions of liking or understanding it and insufficient evidence of sensemaking or extended talk about it. Six of the 25 potential occasions (or 24%) were attributed to teachers’ expressions of neutral feelings about a boundary object, indicated in teachers’ talk by a lack of either strong positive or negative statements (e.g., “I like,” “I loved,” “I really didn’t like”). For example, in her Post interview, Ms. Nolan said: “The Predictions Tool. I mean, it served its purpose. It’s fine. I don’t think there’s anything great [about it].” Similarly, in her Post interview Ms. Wei said: “The Predictions Tool—I don’t plan on changing too much. I think that’s simply just a way of getting at prior knowledge again.” Both Ms. Nolan and Ms. Wei acknowledged the existence of the boundary object and its role in the curriculum but expressed neutral feelings about it rather than liking it or having a lot to say about it. There was no evidence in Ms. Nolan’s or Ms. Wei’s data of extended talk about the Predictions Tool. Unlike the first two categories, the third category contained potential occasions of sensemaking in which the teacher identified a problem but there was no evidence of extended talk about it. Both of the potential occasions in the third category were from Mr. Harris. He seemed to notice a problem with the Predictions and Explanations Tools, but there was insufficient or no evidence of extended talk about them (see Tables 43 and 47 in APPENDIX E). For example, when talking about the Predictions Tool, Mr. Harris said:

I like the predicting, but that is like pulling teeth sometimes. You know, with the kids, it’s hard to get them to take the time. Like yesterday we were predicting, and I felt like I was giving them too much information trying to get them… So I kinda give them some information to get them there.
So I like predicting, but I feel like sometimes I give too much to them. I’m not giving them the right answer, I’m just trying to get them to write down what their thoughts are. (Y1 follow-up interview)

Mr. Harris acknowledged challenges he had with supporting students in making predictions about the phenomena in the units. However, these concerns were captured only briefly in the last Y1 follow-up interview; therefore, there was insufficient evidence of extended talk about them. Similarly, for the Explanations Tool, Mr. Harris described challenges with using it in his classroom:

Sometimes I feel like the explanations and the arguments, they’re so close to each other, that’s where they can feel like it’s pretty redundant. And so that can be a challenge I guess. Because when they’re doing the evidence-based, they’re looking at their data and making a case for something, and then when they’re doing their Explanation Tool I feel like that’s generally the same kind of concept, but then maybe the modeling and trying to tie that in there. So sometimes the evidence-based and explanation, that’s again a challenge for me to get them to still put full effort into putting their ideas out there. If that makes sense. (Y1 follow-up interview)

Again, Mr. Harris acknowledged challenges he had with using the tool with his students, but there is insufficient evidence of extended talk about it. In this case, the interviewer (Evelyn) did not probe him further and moved on to the next question in the protocol. Finally, three of the potential occasions were in the last category, where teachers noticed a problem and gave a reason for not trying to engage in sensemaking about it.
Reasons in this category are most problematic in terms of potential for teacher learning because, in explicitly stating reasons for not trying to engage in sensemaking about a boundary object (often by placing responsibility on others), teachers foreclosed opportunities to learn how to improve their teaching practices. For example, Ms. Apol gave a reason for not trying to engage in sensemaking about the Evidence-based Arguments Tool based on her perceptions of students’ behavior and abilities. She said:

I like the evidence based arguments because like, what I’ve said, we’ve done a lot of claim-evidence-reasoning writing…. I think Unanswered Questions [on the EBA Tool] in 7th grade, I’m not going to consider it to be like my goal because a lot of them aren’t going to do it or they don’t know how to do it, or afraid that they’re getting it wrong. So I like the evidence and conclusion, like I said, because it ties into how we write in Science. (Animals Unit interview)

Ms. Apol stated explicitly that she was not going to consider Unanswered Questions her goal because she thought her students weren’t going to do it or didn’t know how to do it, and the implication seemed to be that she was not going to invest the time to support her students in doing so. Unlike Ms. Barton, she was more precise in her language about the Evidence-based Arguments Tool, including noticing how the Evidence and Conclusions columns were similar to the Claim-Evidence-Reasoning framework. However, there is no evidence in the data that she had extended talk about the Evidence-based Arguments Tool. In summary, the findings indicate four reasons why there was insufficient evidence of teachers’ sensemaking about a boundary object. For the majority of potential occasions (80%), teachers either expressed liking and/or understanding it or expressed neutral feelings about it. For the remaining occasions, Mr.
Harris noticed a problem with his enactment of two of the Process Tools but there was insufficient evidence of extended talk about it, and Ms. Barton and Ms. Apol noticed problems and gave reasons for not trying to engage in sensemaking about them, primarily due to beliefs about their students’ behavior or abilities. Differences in patterns of sensemaking across cases. Combining the occasions of sensemaking with sufficient and insufficient evidence for a particular case yielded a more complete picture of what individual teachers were engaged in sensemaking about, what they were not engaged in sensemaking about, and why. In this section, I describe patterns of sensemaking within and across two cases and postpone providing examples of teachers’ talk about Carbon TIME boundary objects and classroom practices until the narrative descriptions of sensemaking in the next major section of this chapter. To illustrate variation in patterns of sensemaking across cases, I contrast Ms. Eaton with Ms. Callahan to show differences in occasions of sensemaking across the landscape of a Carbon TIME instructional unit. Ms. Eaton’s landscape of sensemaking about Carbon TIME boundary objects. Ms. Eaton was a middle school teacher in the Carbon TIME West Network. Due to timing and recruitment issues, she was the only teacher in her network and therefore worked closely with her case study coach, Daisy, to plan for, enact, and reflect on implementation. There was sufficient evidence in the data that Ms. Eaton was engaged in sensemaking about the Evidence-based Arguments Tool, Explanations Tool, and Pre- and Post-Tests in terms of how best to support her middle school students in successfully demonstrating learning and growth; she critically noticed that they were struggling with these practices and assessments. For these boundary objects, she decided to modify the tool or her enactment to bridge the gap between how her students were performing and how she wanted them to perform.
Figure 7 below shows the landscape of Ms. Eaton’s sensemaking about Carbon TIME boundary objects (bolded text shows occasions of sensemaking).

Figure 7. Ms. Eaton’s landscape of sensemaking about Carbon TIME boundary objects

For the other boundary objects, there was insufficient evidence in the data of occasions of sensemaking for Ms. Eaton about the Instructional Model, the Expressing Ideas Tool, or the Predictions Tool because she expressed either liking for these tools or perceptions of understanding the tools and how to use them in her enactment of the curriculum. Therefore, the findings show that Ms. Eaton was engaged in sensemaking about the boundary objects that occurred in the second half of the unit, when she was using these boundary objects for grading and assessment purposes and critically noticing how well her students were doing or not doing with constructing arguments and constructing explanations of phenomena. Ms. Callahan’s landscape of sensemaking about Carbon TIME boundary objects. In contrast, Ms. Callahan was engaged in sensemaking about the two Process Tools at the beginning of a Carbon TIME unit. Combining occasions of sensemaking with sufficient and insufficient evidence for her case yielded a picture of Ms. Callahan as a high school biology teacher experienced with supporting students in constructing arguments and explanations but not necessarily with expressing their ideas and making predictions about the results of an investigation (Figure 8).

Figure 8. Ms. Callahan’s landscape of sensemaking about Carbon TIME boundary objects

For the other boundary objects, there was insufficient evidence in the data that Ms. Callahan was engaged in sensemaking about the Evidence-based Arguments Tool, because she expressed neutral feelings about it, or the Explanations Tool, because she expressed liking or understanding it. Comparison of patterns of sensemaking about Carbon TIME boundary objects. In summary, a comparison of Ms.
Eaton’s landscape of sensemaking with Ms. Callahan’s shows that they were engaged in sensemaking about different Carbon TIME boundary objects based on what was puzzling or challenging for them. The results show that Ms. Eaton was engaged in sensemaking about the last two Process Tools whereas Ms. Callahan was engaged in sensemaking about the first two. Both teachers were working with high-ability student populations. For Ms. Callahan, however, having students take a more active role in expressing and sharing their ideas differed from her usual teaching practices. After the experience of teaching three Carbon TIME units, Ms. Callahan articulated that the way she taught in Carbon TIME was different from the way she normally taught, which usually involved more direct instruction. For Ms. Callahan, then, the first two Process Tools were occasions of sensemaking because they involved different teaching practices than she was used to or comfortable with. At first she justified her challenges with using these tools by stating her belief that students did not like to express their ideas or make predictions. However, by the end of the year, she had changed her thinking about the value of classroom discourse around these tools and made modifications in her enactment to better support students’ use of the tools. Although Ms. Eaton had a different landscape of sensemaking, her reasons for engaging in sensemaking about particular boundary objects were similar. She was not spending time and effort engaged in sensemaking about boundary objects that she liked or understood; instead, she focused her sensemaking on boundary objects that presented struggles and challenges for her students. As a classroom teacher with 20 years of experience, Ms. Eaton recognized and articulated how Carbon TIME was different from her normal way of teaching, including how it “shakes up the order” in which she taught topics and “added different vocabulary” to the way she taught. Ms.
Eaton articulated that Carbon TIME seemed to change her beliefs about what her students were capable of, which was similar to how Ms. Callahan changed her beliefs about what her students enjoyed doing or not doing in a science classroom. This comparison of Ms. Eaton’s and Ms. Callahan’s landscapes of sensemaking shows how teachers’ different occasions of sensemaking could have similar underlying reasons. Therefore, I argue that understanding teachers’ reasons for engaging in sensemaking, or why teachers were engaged in sensemaking about particular boundary objects and not others, was more important than what they were engaged in sensemaking about. To identify teachers’ particular reasons for engaging in sensemaking and relate them to their local contexts, then, I needed to describe occasions of sensemaking in more detail, including specifying the goals and resources that influenced teachers’ sensemaking.

From Identifying to Describing Occasions of Sensemaking

The first major section of this chapter described findings related to occasions of teachers’ sensemaking. What were teachers engaged in sensemaking about, and what were they not engaged in sensemaking about? Why were individual teachers engaged in sensemaking about some boundary objects and not others? One assumption of this study was that teachers had limited resources and time; therefore, what they chose to attend to, or their occasions of sensemaking, signified issues that were of primary importance to them. Within the context of this study, sensemaking was accessed through teachers’ talk with their case study coaches about their experiences using the curriculum materials in the classroom with their students. In summary, I found variation within and across cases in terms of teachers’ patterns of sensemaking about Carbon TIME boundary objects.
The findings presented thus far show the landscape of teachers’ sensemaking, or a broad view of teachers’ occasions of sensemaking and teachers’ reasons for engaging in sensemaking about some boundary objects and not others. First, results show that teachers were engaged in sensemaking about the structure of the Instructional Model, how to use the Process Tools with their students to enact different types of classroom discourse, and how implementation of Carbon TIME could serve their own goals for teaching and learning or alignment with local obligations such as school and district-level initiatives. Second, I found four reasons for insufficient evidence of sensemaking in the data, including teachers’ perceptions of understanding or liking the boundary object or expressing neutral feelings about it. Three of the case study teachers noticed problems, but there was either no evidence of extended talk about the problem or the teachers gave a reason for not trying to engage in sensemaking about it. Third, teachers were engaged in sensemaking about different boundary objects but in similar ways; conversely, they were also engaged in sensemaking about the same objects but in different ways and for different reasons.

The findings presented thus far allude to differences in particular components of my model of teacher sensemaking (see Figure 4 in Chapter Three). A full analysis of an occasion of sensemaking includes the following components: (1) critical noticing; (2) goals and resources, including teachers’ goals, practical knowledge, and social commitments to communities; and, (3) outcomes, including teachers’ decisions to modify curriculum materials or enactment and reflections on any of the goals and resources or other topics. In identifying components of occasions of sensemaking, I found that the focus of teachers’ critical noticing varied. Teachers’ critical noticing focused on either themselves (T), their students (S), and/or the curriculum materials (C).
Bolded text in Table 15 indicates the six occasions that I selected for in-depth description; Ms. Nolan’s sensemaking about the Evidence-based Arguments and Explanations Tools was combined into one occasion because the two were inter-related (i.e., she had embedded the Evidence-based Arguments Tool into the Explanations Tool). Tables describing these occasions of sensemaking are in APPENDIX F.

Table 15
Primary Foci of Teachers’ Critical Noticing in Their Occasions of Sensemaking

Object of Sensemaking    Ross   Harris    Callahan   Barton   Apol   Wei    Nolan   Eaton
Instructional Model      S, C   -         T, C       -        -      T, C   T, C    -
Expressing Ideas Tool    -      -         T, S, C    -        -      S, C   S, C    -
Predictions Tool         -      -         T          -        T, C   -      -       -
EBA Tool                 -      T, C      -          -        -      C      S, C    S, C
Explanations Tool        -      -         -          -        -      S, C   S, C    S, C
Pre- and Post-tests      S, C   T, S, C   S, C       -        S      S, C   S, C    S, C
Other                    -      -         C          S        -      S      -       -

Note. T = teacher, S = students, C = curriculum materials; bolded text indicates occasions that were selected to be described in further detail in the following sections

A common pattern in the foci of teachers’ critical noticing was the combination of students interacting with the curriculum materials (S, C); of the 26 occasions of sensemaking, this combination of critical noticing occurred for 13, or half of the occasions. The second most common pairing was the teacher interacting with the curriculum material (T, C), occurring for 4 occasions. For two occasions involving Mr. Harris and Ms. Callahan, the combination of critical noticing involved all three foci (T, S, C). These variations in foci were not necessarily correlated with goals and resources or outcomes of sensemaking or with judgments about productive or unproductive sensemaking in terms of potential for teacher learning; however, they do indicate what teachers were critically noticing in these occasions of sensemaking. Thus, I argue that teachers’ reasons for engaging in sensemaking about particular boundary objects were more important to understand than what they were engaged in sensemaking about.
Teachers’ reasons for engaging in sensemaking mattered in terms of how they reflected on and framed interactions between the curriculum, their students, and themselves, which I will argue then influenced how productive the occasion of sensemaking was for teacher learning of rigorous and responsive science teaching practices associated with the boundary object. In the next major section of this chapter, I present narrative descriptions of selected occasions of sensemaking to provide readers with a sense of how these occasions of sensemaking were situated within teachers’ ecologies of practice, including how teachers’ goals and social commitments to various communities influenced their critical noticing of features of the boundary objects and their enactment of the curriculum with their students.

Occasions of Sensemaking: Narratives Situated Within Teachers’ Ecologies of Practice

My analysis of teachers’ occasions of sensemaking indicated that individual teachers’ approaches to sensemaking about Carbon TIME boundary objects were fairly consistent across boundary objects. That is, the way a teacher reflected on and framed interactions between the curriculum, their students, and themselves was fairly consistent for boundary objects for which there was sufficient evidence of sensemaking in the data. However, there were differences across cases in teachers’ approaches to sensemaking, including their reasons for engaging in sensemaking, that mattered in terms of potential for future learning about rigorous and responsive science teaching practices and students’ engagement in three-dimensional science learning. (I note that measuring student learning was outside the scope of this study; however, I made judgments about whether teachers’ decisions or reflections were productive for engaging students in three-dimensional science learning.)
Thus, I selected particular occasions to describe in more detail because they were divergent, or different for different teachers, and consequential, or had direct effects on teacher and/or student learning. Table 16 shows judgments about teachers’ approaches to sensemaking.

Table 16
Judgments About Teachers’ Approaches to Sensemaking About Carbon TIME Boundary Objects

Judgment: Productive for student engagement
Approach to sensemaking about boundary objects: Focused on bridging the gap between perceptions of what the boundary objects offered and what students needed
Teachers: Nolan, Callahan, Eaton

Judgment: Productive for teacher learning
Approach to sensemaking about boundary objects: Focused on the gap between perceptions of what the boundary objects offered and teachers’ understanding of the objects or students’ interactions and reactions to the objects
Teachers: Nolan, Callahan, Eaton, Harris

Judgment: Unproductive for teacher learning or student engagement
Approach to sensemaking about boundary objects: Focused on perceptions of what the boundary objects offered and engaged in reasons for sensemaking about the objects that were unrelated to student engagement or teacher learning; for example, alignment with local obligations
Teachers: Ross, Wei, Barton, Apol

For the purpose of being strategic in selecting illustrative examples, I describe one occasion for each case study teacher with the exceptions of Ms. Apol, because her case was similar to Ms. Barton’s, and Ms. Eaton, because her case was similar to Ms. Callahan’s (see Tables in APPENDIX F for occasions of sensemaking that are not described here). Each description includes: (1) a summary table showing components of an occasion of sensemaking using the teachers’ own words in quotation marks and my inferences in italic text, and (2) a narrative situating the occasion within teachers’ ecologies of practice, including stories that teachers tell about themselves, their schools and students, and their teaching practices.
Each narrative begins with a description of the teacher and their school context, including teachers’ perceptions of the backgrounds and abilities of their student populations when available. In my descriptions I strive to tell stories about teachers using their own words as much as possible to provide readers with a sense of who these teachers were as sensible beings trying to address the persistent challenges of how to simultaneously portray the curriculum, enlist student participation, expose student thinking, and contain student behavior while also accommodating their own personal needs (Kennedy, 2016). Thus, I aim to tell stories about teachers in a way that portrays their sensemaking as being sensible to them given their personal and professional backgrounds, goals, beliefs, and teaching contexts. When possible, I include stories about teachers from the PD setting to contribute to a picture of how teachers navigated the settings of PD and classroom enactment. Furthermore, in my analysis and descriptions of the cases, I aim to resist neat narratives:

Through analysis we are not on the trail of singular truths, nor of overly neat stories. We are on the trail of thematic threads, meaningful events, and powerful factors that allow us entry into the multiple realities and dynamic processes that constitute the everyday drama of language used in educational sites…. It is, in fact, the competing stories, put into dynamic relation with one another, that allow insight into participants’ resources and challenges and, moreover, into the transformative possibilities of social spaces for teaching and learning. (Dyson & Genishi, 2005, p. 111)

In other words, when there was evidence in the data to identify and describe conflicting stories, I present those conflicts.
Table 17 shows a snapshot of the six occasions of sensemaking I selected for narrative description, including a distinctive phrase that each teacher used that seemed to characterize their sensemaking about implementation of Carbon TIME. I begin my stories about teachers’ occasions of sensemaking with Ms. Nolan, who was notable for engaging in sensemaking about the boundary objects in ways that consistently considered her students’ needs and resulted in modifications to the boundary objects and her enactment that supported students’ engagement in three-dimensional science learning. I end the stories with Ms. Barton, who had insufficient evidence in the data about sensemaking of any of the Carbon TIME boundary objects but lots to say about classroom discourse, conditions in her local context, and students’ behavior.

Table 17
Selected Occasions of Sensemaking for Narrative Description

Teacher    School Context      Location   Occasion of Sensemaking                                Distinctive Phrase
Nolan      Urban HS            Northwest  Modification to the Process Tools                      “putting myself in the shoes of my students”
Wei        Urban HS            Northwest  Students’ engagement in the Plants unit investigation  “something a little bit flashier”
Ross       Suburban HS         Midwest    Students’ scores on the Pre- and Post-Tests            “how to fit all of those things together”
Callahan   Urban-suburban HS   Midwest    Modification to the data spreadsheets                  “it’s important for the students”
Harris     Suburban-rural HS   Midwest    Students’ scores on the Pre- and Post-Tests            “learning what they don’t know”
Barton     Rural MS            Midwest    Discourse and grading of the Process Tools             “knowing what kids think”

The following section is organized by themes according to the nature of teachers’ sensemaking:
• Theme 1: Sustained Sensemaking Over Time
• Theme 2: Influence of School Communities
• Theme 3: Teacher Learning of Content
• Theme 4: Influence of Teachers’ Beliefs

Because the research questions are interrelated, the occasions of sensemaking described using these themes capture how what teachers
engaged in sensemaking about (RQ1) was related to specific goals and resources, particularly their social commitments to various communities (RQ2), and outcomes such as reflections and decisions about enactment that could contribute to productive or unproductive feedback loops over time (RQ3).

Theme 1: Sustained Sensemaking Over Time

An important research-based design feature of the Carbon TIME project was teachers’ long-term participation in the project. By having teachers agree to implement at least three Carbon TIME units in one school year, researchers aimed to have teachers and their students engage with the curriculum materials multiple times, with the hope that teachers and students would become not only more comfortable with using the materials but also more proficient with using them for their designed purposes. Thus, with my study I had the opportunity to investigate whether or not teachers would engage in sustained sensemaking over time about Carbon TIME boundary objects. My third research question was about how feedback loops in teachers’ occasions of sensemaking could lead to teacher learning over time.

Both Ms. Callahan and Ms. Nolan showed evidence of sustained sensemaking over time about the Process Tools. Ms. Callahan was engaged in sensemaking about the first two Process Tools in the Instructional Model because she was not used to having students express their ideas and share their predictions about the results of investigations. Ms. Nolan was engaged in sensemaking about the last two Process Tools because she critically noticed her students’ struggles with constructing explanations and modified the tools to support students in using evidence to construct their explanations.
In both of these cases, outcomes of sensemaking contributed to feedback loops that either reinforced or changed teachers’ goals, practical knowledge, or social commitments in ways that were generally productive for teacher learning and student engagement in three-dimensional science learning.

Ms. Nolan’s occasion of sensemaking about the Process Tools: “Putting myself in the shoes of my students.” Ms. Nolan was a White female teacher with 7 years of experience and a major undergraduate emphasis in biology. She taught in the same large urban district as Ms. Wei but at a different high school and was part of the Carbon TIME Northwest Network. Of note is that Ms. Nolan completed National Board Certification in 2014, the year prior to her participation in the Project (personal communication, February 22, 2017). Because Ms. Nolan’s classroom teaching was deemed exemplary by the Carbon TIME research team, she joined Carbon TIME as a case study coach and a member of the student research team during her second year of participation in the project. Ms. Nolan’s case study coach was Mackenzie, who was also the Carbon TIME Northwest Network Leader and local school district science program manager; thus, in these roles, Mackenzie facilitated the face-to-face and online PD sessions, collected case study data, provided general Carbon TIME implementation support, and facilitated other district-level PD initiatives not related to Carbon TIME. For example, Ms. Nolan explained that her district and school “worked really hard… to do Claim-Evidence-Reasoning” (Systems & Scale interview). Although Ms. Nolan did not have a high opinion of the success of her school’s teacher evaluation system, she noted that to be a “distinguished educator at the top level” required “a lot of student voice” or “being an active learner.” Ms. Nolan was singular for engaging in sensemaking about all but one of the Carbon TIME boundary objects in ways that consistently considered her students’ needs.
Table 18 shows Ms. Nolan’s goal of being student-centered, or putting herself in the shoes of her students in order to better visualize what she could do to support students’ engagement in three-dimensional science learning. Thus, a distinctive phrase for Ms. Nolan was, “putting myself in the shoes of my students.”

Ms. Nolan was a distinctive teacher in several ways. First, in contrast to all the other case study teachers, she had sustained sensemaking about the Instructional Model across the year. In the first end-of-unit interview, after teaching Systems & Scale, Mackenzie asked Ms. Nolan to describe how she had used the Instructional Model in her classroom. Ms. Nolan explained that she had included the Instructional Model in her presentation slides to show students where they were in the unit in order to decrease their anxiety about learning expectations. For example, if the students were still on the left side, including learning foundational knowledge, then students could still be confused. But, if they were on the other side of the pyramid, then Ms. Nolan expected them to know that a Post-Test was coming soon, and she wasn’t going to give them any more practice. She wanted to be “transparent” with her students and to show them where they were in the Instructional Model so they would know her expectations for their performance; this reasoning is an example of how Ms. Nolan attempted to put herself in the shoes of her students and imagined how much more comfortable students would be if they knew what the expectations were.

Table 18
Ms. Nolan’s Goal of Putting Herself in the Shoes of Her Students

Interview: SS unit, Oct 2015
Excerpt: “I don’t know if they’re doing that so much yet, but at least it’s – I feel like I’m being as transparent as I can with the kids, to show them like this is where we’re at.”

Interview: AN unit, Dec 2015
Excerpt: “I think I want to get better at this, but always seeing that step back again to; what are the objectives.”

Interview: PL unit, Jan 2016
Excerpt: “I’m trying to get better at putting myself in the shoes of my students, and what’s their experience like? They like breath like a sigh of relief like ‘Oh why didn’t you just tell us that before. Like that’s not hard. Like oh that is so clear now. I get it.’”

Interview: Post, June 2016
Excerpt: “So now that I feel like I’ve gotten my feet wet with what the tools are and where we’re trying to go with this, now I want to think more about the student experience in this process.”

Interview: Y1, Oct 2016
Excerpt: “I think about where the students are coming from. So, like what did they just finish and where are they going next.”

Furthermore, in her Y1 follow-up interview, Ms. Nolan stated explicitly that she did “a lot of thinking about… the Instructional Model.” In doing so, she recognized ways in which she did not seem to fully understand the Instructional Model even though she had clear ideas about how she wanted to use it with her students. She reflected that:

So, in the triangle there is the observations, the patterns, and the models and I feel like there should be an element of like when you’re going up the triangle you are looking at observations, and patterns, and models and then coming down the triangle you are doing the same thing because you are going past those bands. So I don’t think I fully understand the integrity of that. How does that tie in? (Y1 follow-up interview)

Ms. Nolan was critically noticing and trying to engage in sensemaking about how the vertical structure of the Instructional Model (observations, patterns, models) was supposed to work in concert with the horizontal dimension (progression in student learning through the inquiry-application sequence). Furthermore, in her reflection, Ms. Nolan recognized and articulated her own perceived level of understanding of the built-in or designed structure of the Instructional Model. This occasion of sensemaking (see Table 66 in APPENDIX F) for Ms.
Nolan is an example of how her reflections were productive for her own learning about rigorous and responsive science teaching practices associated with Carbon TIME boundary objects.

Similarly, Ms. Nolan was singular in her sensemaking about the Expressing Ideas Tool. She reflected that the tool was “attempting to be sort of the puzzling phenomenon that you then hang the rest of your ideas on” (Post interview; see Table 40 in APPENDIX F). Even though she thought it was “really cool” for students to realize that there were “holes in their understanding of something as basic as burning,” she thought that the scenarios in the Expressing Ideas Tools were not interesting or engaging enough for her students, so she decided to modify them. For example, she substituted a boy growing with a panda bear growing in the Animals Unit (see Figure 18 in APPENDIX H). Another modification she made in her enactment was to record students’ Top 10 questions during the expressing ideas stage to return to later at the end of the unit for students to see their own growth over time. Again, these examples illustrate how Ms. Nolan was putting herself in the shoes of her students: What would be an engaging phenomenon for them? Why might it be important for students to recognize their own growth in learning?

Of all the occasions of sensemaking for Ms. Nolan, I describe her occasion of sensemaking about the Evidence-based Arguments and Explanations Tools (see Table 19).

Table 19
Ms. Nolan’s Occasion of Sensemaking About the Evidence-based Arguments and Explanations Tools

Interactions Among Goals & Resources — Goal: “If they can do a good response, there then they are getting it” (Y1 follow-up) + Social Communities: “So I know that various teachers, myself included, we came out with our own to try and put the whole big picture together. And we did that and then it was much more successful” (Post)
Critical Noticing — S and C: “So they really struggled with the evidence for each of these” (AN unit)
Outcomes of Sensemaking — Decision: “I took the Evidence-Based Arguments tool and embedded it into the Explanations tool” (SS unit); Reflection: “That’s why I haven’t made the switch yet. I actually think that evidence first, then claim is perhaps better but I don’t know if it fits with the way students think about it” (Y1 follow-up)

Ms. Nolan critically noticed that her students were struggling to construct explanations using evidence from the Evidence-based Arguments Tool. Her goal was to support students in using the Three Questions (about matter and energy changes) to construct explanations about the phenomena in the units because she wanted students to construct “good” responses that included evidence. One outcome of her sensemaking was a decision to modify the tools by “embedding” the Evidence-based Arguments Tool into the Explanations Tool in the form of a scaffold—a checklist that students could use to keep track of what evidence they needed to include in their explanations (see Figure 9 and Figure 19 in APPENDIX H for modifications to the front side of the tool).

Figure 9. Ms. Nolan’s modification to embed the Evidence-based Arguments Tool into the Explanations Tool: The back side

Another outcome was Ms. Nolan’s reflection about how she was unsure of whether to modify the Evidence-based Arguments Tool to match the Claims-Evidence-Reasoning framework (a modification that Ms. Wei made and that Ms. Nolan knew about from interactions with her Carbon TIME network colleagues). What is notable is that Ms.
Nolan expressed uncertainty about making the modification because she was unsure about which one was better for her students—Claims-Evidence-Reasoning, or the order in the Evidence-based Arguments Tool, which started with evidence from the investigation, then moved on to conclusions that students could make based on that evidence. This uncertainty was a consistent feature of Ms. Nolan’s sensemaking—that is, she continued to wonder about how her students were doing even after she had made modifications based on her initial critical noticing of students’ interactions with the boundary objects. For example, Ms. Nolan wondered whether students would still be able to construct explanations if she took away the scaffold. What is also notable about Ms. Nolan’s sensemaking is that her reflections, which were an outcome of her sensemaking, contributed to a feedback loop that then influenced the goals and resources that influenced her sensemaking (see Figure 10).

Figure 10. Feedback loop in Ms. Nolan’s sensemaking about the Evidence-based Arguments and Explanations Tools

Because of Ms. Nolan’s strong commitment to her students’ learning (as evidenced by her distinctive phrase of “putting myself in the shoes of my students”), she continued to reflect on how her students were interacting with the curriculum even after she had already made modifications based on initial interactions. She was exposed to CER in school- and district-level PD and knew that other Carbon TIME teachers were modifying the tool to fit CER; however, she had a strong sense of commitment to her students’ learning and resisted conforming for the sake of aligning with colleagues. Again, this consistent focus on students’ experiences set Ms. Nolan apart from the other teachers in this study.

In conclusion, Ms. Nolan’s sensemaking was productive for her own learning about rigorous and responsive science teaching practices associated with Carbon TIME boundary objects. Ms.
Nolan was sensemaking about particular features of the objects that mattered for students’ engagement in three-dimensional science learning. What set Ms. Nolan apart from the other case study teachers was a sense of agency in modifying the boundary objects and her enactment, a focus on students’ experiences using the curriculum from the viewpoint of the students, and a vision of science teaching and learning that aligned well with that of Carbon TIME and the Next Generation Science Standards. For example, at the end of the Plants Unit (which was her third unit), she wanted students to “put all three pieces [photosynthesis, biosynthesis, respiration] together,” and created an additional Explanations Tool that did that (see Figure 20 in APPENDIX H). Thus, her sensemaking about Carbon TIME implementation was focused on taking an active role in bridging the gap between her perceptions of what the curriculum offered in terms of supporting students’ engagement in three-dimensional science learning and what her students needed in order to do so.

Ms. Callahan’s occasion of sensemaking about the Predictions Tool: “It’s important for the students.” Ms. Callahan was a White female teacher with 13 years of experience and a major undergraduate emphasis in biology. She taught in a math and science magnet program that was situated in an urban setting and enrolled students from the surrounding high schools and suburban areas. Students attended the program in the morning for their math, science, and technology classes and returned to their home schools in the afternoon. Of note is that Ms. Callahan was in her first year at the magnet program after having been selected for the position through a competitive process. The director of the program was familiar with the Carbon TIME Project because one of Ms. Callahan’s colleagues, a senior teacher, was a veteran Carbon TIME teacher and had helped lead some of the Midwest Network face-to-face PD sessions. I also note that I was Ms.
Callahan’s case study coach and, rather than being assigned to be her coach, I had asked her at the first face-to-face PD if she wanted to be a case study teacher with me as her coach; I do not recall any particular reason for selecting her to potentially work with me other than that I felt an affinity with her that I did not feel with other teachers in the Midwest Network.

A notable characteristic of Ms. Callahan was her verbosity and ability to articulate her reasons for her beliefs, decisions, and actions. A recurring phrase throughout her interviews was, “it’s important for the students” (see Table 20). For example, she thought it was important for students to understand particular advanced science topics (e.g., the electron transport chain), develop particular science skills (e.g., communicating results to others, using evidence to support their answers, using statistics), and develop particular outlooks (e.g., connections between all living things).

Table 20
Ms. Callahan’s Focus on the Importance of Particular Science Topics and Skills

Interview: SS unit, Nov 2015
Excerpt: “They need to understand, I think it’s important. I really think it’s important that the students account for their differences in their materials. That’s a great skill for all students to have.”

Interview: AN unit, April 2016
Excerpt: “I think it’s important for the students to be able to understand a little bit more about the citric acid cycle to be able to understand more about the electron transport chain and chemiosmosis and all [those] components.”

Interview: PL unit, April 2016
Excerpt: “You have to make observations every day, and all that can lead to the point where it’s very important for them…. I think that really gives them that ownership of their education.”

Interview: Post, June 2016
Excerpt: “The similarity is that there are a lot of student collaborations. That there’s a lot of interaction between the students, and I think that’s really important for science.”

Interview: Y1, Oct 2016
Excerpt: “I think in both cases, it’s really important that students are explaining their reasoning, that they’re understanding why they’re thinking certain ways, and that they’re absolutely using evidence to support those answers.”

In the previous section of this chapter, I described how Ms. Callahan’s landscape of sensemaking about Carbon TIME boundary objects had included the first two Process Tools because they engaged her and her students in discourse (divergent talk) that was unfamiliar and uncomfortable for her. For example, Ms. Callahan stated that she “had a hard time with the students… with the Expressing Ideas, initially… because they really, really wanted the right answer” (Systems & Scale Unit interview; see Table 40 in APPENDIX F). She critically noticed that students were struggling with expressing their ideas and believed that students struggled because they wanted to know the “right” or “correct” answers. In this case, she placed responsibility for the challenges she faced as a teacher on students’ desire to know the right answer.

However, Ms. Callahan changed her classroom enactment with the Expressing Ideas Tool over the course of the year with help from me and ideas that she got from other teachers in her network at the mid-year PD session. In one of the interviews, I shared with Ms. Callahan that the focus students in the student group interview had stated that they liked being able to be wrong in the expressing ideas stage of the unit. Consequently, by the beginning of the second year of implementation, Ms. Callahan stated that the Expressing Ideas Tool was:

the most important tool… for teachers, because it really helps us to gather so much information about the students’ background, and to see where they’re coming from, and how did their ideas change over time. (Y1 follow-up interview)

Furthermore, Ms.
Callahan changed her enactment to include the use of different colored pens to mark changes in ideas over time on the Tools (a strategy she had picked up from another Carbon TIME teacher at the mid-year PD session) and, at the end of a unit, she asked students to compare their Explanations Tools with their Expressing Ideas Tools to see how much their ideas and language had changed over time. Similarly, Ms. Callahan showed evidence of changes in her thinking and enactment over time with the Predictions Tool. Initially, after the Systems & Scale unit, she stated a belief that students “don’t like the unknown,” and “do not like to predict,” which made it challenging for her to use the tools in her classroom. She reflected in the Post interview that she thought it was “problematic for students who started out [at a] pretty high level of understanding… as they already know what’s going to happen.” However, by the beginning of her second year of implementation, Ms. Callahan stated that she liked the Predictions Tool for learning because:

they maybe know what the pieces are in some ways, and I really like the fact that it’s very authentic, it’s very much students just being able to discuss and be okay with making mistakes and sort of predicting and not knowing. (Y1 follow-up interview)

In addition, Ms. Callahan was sensemaking about the structure of the Instructional Model itself and expressed concerns about “going back down the pyramid.” Ms. Callahan critically noticed that she had difficulty with “going back down the pyramid” because she wanted to “keep moving up instead of going back down” (Y1 follow-up interview; see Table 38 in APPENDIX F). I note that she had brought up her confusion during the Summer 2015 face-to-face PD, and there ensued a brief discussion among the teachers and Carbon TIME researchers about what it meant for students to “go down the pyramid” during the application sequence of the unit. Ms.
Callahan was also sensemaking about the Post-Test and expressed surprise at her students’ results, critically noticing their lack of precision in language. After teaching the Animals Unit, Ms. Callahan articulated that “they will miss key points or they’ll put the word it, or they won’t be descriptive enough… they’re still not—not all of them—but there are still enough students who are not being precise enough with their language.” For the forced-choice questions, Ms. Callahan believed that students were not comfortable with the extreme choices of “all” or “none” due to general advice they had received about taking standardized tests such as the ACT or SAT (i.e., being careful about selecting extreme choices):

In so many ways, Carbon TIME is really distinct and different from what I typically would do in a classroom. It’s more--I don’t want to say inquiry-based, but it’s certainly not as direct with information transfer. The students are the ones that are exploring and they’re coming up with ideas. They’re sort of using the evidence that they are collecting to make their claims. That’s very different from how I normally run a lot of the information. Mostly, especially in biology, these are like some facts: there’s four stages of mitosis. The students are getting to develop those. Carbon TIME is a neat step away from sort of memorization but just gaining and taking in information that you’re seeking and understanding. A lot of differences in that way. (Post interview)

In summary, Ms. Callahan was sensemaking about Carbon TIME boundary objects in ways that were productive for her own learning of responsive science teaching practices—she was making progress towards changing her enactment to be more responsive to students’ ideas.
She was also sensemaking about the Instructional Model and Pre- and Post-Tests with a depth of precision that showed that she was attending to particular features of those boundary objects and how she and her students were interacting with them. As I presented in the narrative descriptions of occasions of sensemaking in the previous section of this chapter, Ms. Callahan engaged in sustained sensemaking about the first two Process Tools in the Carbon TIME Instructional Model. Figure 11 shows a timeline of Ms. Callahan’s sustained sensemaking about the Expressing Ideas and Predictions Tools in 2015-2016. She attended the first face-to-face PD session in August 2015; she implemented the Systems & Scale unit in October 2015. For the next three months in the winter, she “took a break” from Carbon TIME to have her students conduct outdoor explorations about duckweed, an aquatic plant, in a nearby wetland area. She resumed teaching the next two Carbon TIME units in February and March 2016 by starting the Plants unit at the beginning of February, switching to the Animals unit while the radish plants were growing, and then finishing up the Plants unit in March. During these units, she attended the mid-year PD in February.

Figure 11. Timeline of Ms. Callahan’s Sustained Sensemaking About the Expressing Ideas and Predictions Tools in 2015-2016

Table 21 shows the components of Ms. Callahan’s occasion of sensemaking about the Predictions Tool. She critically noticed that it was challenging for her to use the tool with her students because she perceived that students wanted to know what was happening and did not like to express their ideas or make predictions. Initially she reflected that she thought the Predictions Tool was problematic for more advanced students because they would make the “right” prediction. When I heard that as her coach, I interjected and shared with her that students said that they liked being able to be wrong at this stage of the unit.
I shared her students’ comments with her after the first unit. Then, we did not see each other for three months when she took a break from Carbon TIME.

Table 21
Ms. Callahan’s Occasion of Sensemaking About the Predictions Tool

Interactions Among Goals & Resources:
Goal: “I really like them being comfortable with the fact they’re not going to know” (Post)
+ Practical Knowledge: “They don’t like the unknown” (SS unit); “I think from my end, the students do not like to predict” (AN unit)
+ Social Communities: “During the group interview, the students said that they liked the Predictions and Expressing Ideas tools, because they liked being able to be wrong” (Evelyn; AN unit)

Critical Noticing:
T: “So that was the challenging part for me” (SS unit)

Outcomes of Sensemaking:
Decision: To incorporate new practice of using different colored pens on tools (field notes)
Reflection: “I think [predicting is] problematic for students who started out pretty high level of understanding at the beginning” (Post)
Reflection: “So I really like the Predictions tool for student learning because I think it opens up their minds a little bit” (Post)

At the mid-year Carbon TIME PD in February, Ms. Callahan picked up an idea that she had heard from a colleague and implemented it in the two remaining units that she taught in February and March. She decided to incorporate a new practice of using different colored pens on the tools to differentiate between students’ individual ideas, ideas they got from their “shoulder partner,” and ideas they got from the whole-class discussion. In my observation notes for that class, I noted that Ms. Callahan seemed to be enacting a new practice, and I asked her about it during our post-observation reflection. She confirmed that she had gotten the idea from the mid-year PD when she was talking with Mr. Harris in a small group and that she had never used the strategy before but could see changes in how students were interacting with the tool. Ms.
Callahan continued this new practice into her second year of implementation. There were two factors at work in this occasion of sensemaking for Ms. Callahan about the Predictions Tool. The first factor involved my role as her case study coach—as the person most familiar with Ms. Callahan’s teaching practices when she was teaching Carbon TIME, I was able to note changes in her practice and ask her to reflect on them, thereby capturing them for this data set. Additionally, I offered not only a contrary perspective, but students’ perspective, which may have influenced Ms. Callahan to re-consider her initial reflections about the tool. The second factor involved the time lag between when she first started teaching Carbon TIME, when she was exposed to others’ experiences at the mid-year PD, and then when she started teaching Carbon TIME again and could incorporate a new strategy into her teaching repertoire. Incorporation of this new strategy was particularly significant for Ms. Callahan because I noted in my observation notes that her classroom discourse patterns tended to be the traditional IRE (initiate-respond-evaluate) pattern. Ms. Callahan also tended to speak very rapidly. Thus, incorporating the new strategy not only helped her students keep track of where ideas came from and how they changed over time (across the four Process Tools of a unit) but also helped her slow down and give students the space and time to think and process their ideas. The opportunity to engage in sustained sensemaking over time, with targeted support from a coach and exposure to colleagues’ strategies, enabled Ms. Callahan to incorporate a new strategy that supported her learning of responsive science teaching practices associated with divergent classroom discourse.

Summary. How did feedback loops in teachers’ occasions of sensemaking contribute to teacher learning over time?
Teachers in this study had the opportunity to engage in sustained sensemaking about interactions between the curriculum, their students, and themselves. One teacher learned how to enact and sustain a new practice around classroom discourse by being exposed to new strategies from her colleagues and supported by a coach; the feedback loop contributed to changing her practical knowledge around how to use the tools. Another teacher already had a reflective stance and engaged in sensemaking that created feedback loops in which outcomes of sensemaking contributed to reinforcing student-centered goals and social commitments that she already had. Occasions of sensemaking in which teachers’ reflections and decisions were close-ended inhibited potential learning of new science teaching practices.

Theme 2: Influence of School Communities

In using organizational sensemaking to construct my conceptual framework for investigating teachers’ sensemaking, I consider the role of teachers’ social commitments to their various communities. In the classroom enactment setting, teachers may be committed to social relationships with their students (e.g., developing and maintaining an image as a “fun” teacher), their colleagues (e.g., developing and maintaining an image as a supportive colleague), and their school- and district-level administrators (e.g., developing and maintaining an image as an effective teacher). Of the occasions of sensemaking that were described in the previous section of this chapter, I now highlight four of those occasions to illustrate the influence of teachers’ school communities on their sensemaking about implementation of Carbon TIME. This theme addresses patterns in occasions of sensemaking about the influence of school communities on teachers’ sensemaking. Ms. Callahan, Ms. Wei, and Mr.
Ross all engaged in sensemaking about particular aspects of Carbon TIME implementation because of social commitments to their school communities, particularly their colleagues. Ms. Nolan was singular in being socially committed to her students consistently across her occasions of sensemaking.

Ms. Callahan’s occasion of sensemaking about the data spreadsheets. As one of the three 9th grade teachers in the math and science magnet program, Ms. Callahan worked closely with her colleagues to coordinate students’ experiences across courses in science, math, and information technology (IT). For example, she explained that the IT teacher had shown students how to use Excel spreadsheets to create tables, do correlations, and come up with regression lines while the math teacher taught a two-week unit on statistics. Ms. Callahan shared the 9th grade team’s goals:

We want them to, the 9th graders, we want them to see that math, science, and technology are the same. You really aren’t going to be a great scientist unless you incorporate the math. You’re not going to be a great mathematician unless you can apply your concepts. And if you can’t communicate using appropriate forms of technology, no one will know what you’re doing. So we’re really sort of trying to set that standard here at [the program] for the integrative approach to student growth. That’s nice. (Systems & Scale unit interview)

I identified an additional occasion of sensemaking for Ms. Callahan about her modification of the data spreadsheets based on my observations of her classroom instruction as her case study coach (see Table 22). As I was collecting video-recordings and observation notes of her enactment in the Systems & Scale unit, I noticed a slight shift in her tone of voice as she switched from talking about information on the Carbon TIME PowerPoint slides to talking about the columns of the data spreadsheet that students were going to fill in with their data.

Table 22 Ms.
Callahan’s Occasion of Sensemaking About the Data Spreadsheets

Interactions Among Goals & Resources:
Practical Knowledge: “That’s so you could do sort of a basic essential science and math skill that they need to have when they’re doing any sort of data work” (SS unit)
+ Social Communities: “And so we talked with their math teacher… And so percent change of mass is kind of the common skill we want our 9th graders to walk out with” (SS unit)

Critical Noticing:
C: Whole-class data spreadsheet has a column for change in mass

Outcomes of Sensemaking:
Decision: Modify data spreadsheets to include a column for percent change in mass (Figure 12)
Reflection: “But I think for high school students they need to be accountable for that and understand the basic statistical analysis, if they’re going to work with data and really good science research” (SS unit)

Later, during the post-observation reflection, I asked her about the modification and she explained that she decided to modify the spreadsheet to include a column for percent change in mass of the ethanol (see Figure 12). Like all the curriculum materials, the data spreadsheets were in electronic editable form for teachers to modify as they wished.

Figure 12. Ms. Callahan’s modification to the data spreadsheet for the ethanol burning investigation in the Systems & Scale unit to include percent change in mass in Year One

When I asked Ms. Callahan about this modification again at the end-of-unit interview, she explained:

And so we talked with their math teacher who also has some of the 9th graders, and he’s like, “Well let’s make sure they really are familiar with this because our students are unique in that they take a research test. And so percent change of mass is kind of the common skill we want our 9th graders to walk out with. So being able to reiterate that through Carbon TIME was really supportive of what we normally do, or what is normally done here at [our program].” So that worked out quite well.
(Systems & Scale unit interview)

Her math colleague explicitly stated that knowing how to calculate percent change in mass was a necessary skill for their particular student population. When I offered Carbon TIME researchers’ perspective that they had not included percent change in mass because they did not consider it essential to understanding the phenomena in the unit, Ms. Callahan stayed firm and explained that:

At the middle school level, go Carbon TIME people. I can see where you leave out percent change in mass, but if you can work it into an Excel spreadsheet and just have the numbers appear and the teachers can take that information and use it to grow and explain some basics of statistics and science, then they should. (Systems & Scale unit interview)

Thus, the influence of Ms. Callahan’s school community, as well as her own beliefs about what students should know and be able to do in a specialized math and science magnet program, contributed to her sensemaking about the data spreadsheets. In this case, I determined that Ms. Callahan’s sensemaking was unproductive for this occasion in terms of supporting students’ engagement in three-dimensional science learning—knowing the percent change in mass of the ethanol did not necessarily contribute to the big idea that there was less ethanol after they had burned it in the investigation and therefore they had to figure out where the “missing” ethanol had gone. I identified an occasion of sensemaking for Ms. Callahan about the data spreadsheets that can be visualized in Figure 13. During planning for implementation, she critically noticed that she could modify the data spreadsheet for the unit investigation by adding an additional column for percent change in mass. When she enacted the lesson with her students, I noted, as her case study coach, a slight shift in her tone of voice.
During the post-observation reflection, I asked her about the modification, and she explained that percent change in mass was an important statistical concept for her students to know. Later, I brought up the modification again during the post-unit interview, and Ms. Callahan reflected that, in talking with her math colleague, she had learned that it was an important statistical concept for her particular students to know since they were in a special math and science magnet program. Furthermore, data from the beginning of Ms. Callahan’s second year of implementation shows that Ms. Callahan added an additional column for percent change in mass of BTB (see Figure 21 in APPENDIX H); thus, Ms. Callahan’s modified data spreadsheet had two columns for percent change in mass. Neither of these additions necessarily enhanced students’ understanding of the big idea that ethanol loses mass when you burn it.

Figure 13. The influence of Ms. Callahan’s social commitment to her school colleagues and students on her occasion of sensemaking about the data spreadsheets

The actions that Ms. Callahan took were situated in a context in which she was a new teacher in the program and therefore socially committed to proving herself as a “team player.” She took the time to modify the spreadsheet in order to reinforce what her math colleague had taught and reinforce to students the importance of being more precise in their calculations of how much mass was lost—she wanted them to know that scientists use percent change in mass in order to account for differences in initial mass. Furthermore, she was committed to the success of her students on the special exams that they had to take as part of the magnet program to prepare them to conduct their own research projects and collaborate with local scientists. Ms.
Callahan’s actions to establish her status as a team player in her school community were reasonable given that she was new to the program and wanted to establish good relationships with her colleagues and students.

In conclusion, Ms. Callahan made a modification to the curriculum materials that was reasonable to her given her local context, even if it did not necessarily contribute to students’ engagement in three-dimensional science learning. Her approach to sensemaking was generally productive for her own learning in the case of sensemaking about boundary objects that supported more responsive teaching practices. Ms. Callahan’s growth and learning over the course of the year was mediated by interactions with her coach and her Carbon TIME colleagues. However, by the end of her first year of implementation, Ms. Callahan was still left with concerns about her understanding and use of the Instructional Model, concerns which she had raised at the first PD.

Ms. Wei’s occasion of sensemaking about the Plants Unit investigation: “Something a little bit flashier.” Ms. Wei was a Chinese female with 12 years of experience and a minor undergraduate emphasis in biology. She taught in an urban high school in the Northwest and had recently moved to the area. At the time of the study, Ms. Wei was in the process of completing her National Board Certification, including using student data from Carbon TIME as evidence of student growth for the certification requirements. Ms. Wei’s case study coach was also Mackenzie. Because Ms. Nolan and Ms. Wei were in the same district, had the same case study coach, and interacted with each other, I contrast their approaches to sensemaking to show how different their approaches could be even though they had some commonalities. Like Ms. Nolan, Ms. Wei had sufficient evidence of sensemaking in the data for all boundary objects except the Predictions Tool. However, Ms.
Wei’s sensemaking about the Instructional Model was markedly different; she initially expressed understanding of the Instructional Model but later shared her confusion about the vertical and horizontal structure. In the Post interview, she said:

The Instructional Model. Now that I understand the Instructional Model, I’m really bought into it. And I was actually talking with Ms. Nolan yesterday about how we can incorporate that Instructional Model in the other units that we do.

However, four months later at the beginning of her second year of implementation, she expressed confusion about the levels of the triangle and stated that she wanted to make the sequence “linear in some way” (Y1 follow-up interview; see Table 38 in APPENDIX F). She explained:

The triangle was confusing to me because I think that the levels of the triangle somehow was supposed to fit in with specifically the observations and gathering evidence but then I also associated the triangle with the downward part of the like model building and then other examples. (Y1 follow-up interview)

And although Ms. Wei expressed wanting to use the Instructional Model in the same way as Ms. Nolan, she shared that she was confused by it:

Well, I definitely want to use the Instructional Model to show kids their progression in the unit but when I sat down and looked at it and was like, reading the individual texts for each one. I was feeling a little confused just visually by it because it has things like modeling, patterns, and something on the triangle. I didn’t see how that fit in. I think that in my mind I was thinking “Oh, I actually need to revise the Instructional Model so that when I show it in front of kids I can have a poster on the side of my wall with my revision of the Instructional Model” but really to show kids like their progression.

Thus, both Ms. Nolan and Ms. Wei were sensemaking about the structure of the Instructional Model but for different reasons. Ms.
Nolan was comfortable using the Instructional Model with her students to show them where they were in the unit while also sensemaking about how the vertical and horizontal structures worked in concert to support students’ learning over time; she was sensemaking about the Instructional Model as it was. In contrast, Ms. Wei, perhaps by talking with Ms. Nolan and being exposed to what she was doing in her classroom, seemed uncomfortable using the Instructional Model with her students until she understood the structure for herself. Ms. Wei, however, was sensemaking about the Instructional Model not as it was but as she wanted it to be. In expressing that she wanted the model to be “linear in some way,” she seemed not to fully recognize the utility of having a two-dimensional instructional model that attended both to concrete and abstract forms of knowledge (vertical structure) and progression in students’ learning over time as they engaged in inquiry and application activities (horizontal structure). In other words, in wanting the Instructional Model to be linear, Ms. Wei seemed to want to reduce the complexity of the model from two dimensions to one dimension. Similarly, Ms. Wei reflected that she had challenges (that Ms. Nolan did not have) with using the Expressing Ideas Tool with her students:

The Expressing Ideas tool. So I always used that but to varying degrees of success I think, just based on students’ prior knowledge. And the way that it was presented once they… For some reason they really wanted to get a correct answer on the Expressing Ideas tool instead of just expressing ideas. So I had a hard time with getting them out of that and saying, “This is just your ideas right now.” (Post interview)

Ms. Wei modified her enactment to include sentence frames to support students in expressing their ideas.
At the same time, she attributed students’ performance on the tool to differences in prior knowledge and the belief that students wanted to get the correct answer on the tool. Thus, Ms. Wei’s reasons for sensemaking about the Expressing Ideas Tool were about the gap between what the tool was designed to do and what she perceived her students were willing to do or capable of doing; she exercised agency in modifying her enactment to better bridge the gap by providing sentence stems for students. Finally, the last example I elaborate on is Ms. Nolan and Ms. Wei’s differences in their reasons for sensemaking about the Evidence-based Arguments Tool. The three columns in the Tool are: (1) Class Evidence (what patterns did we find in our class evidence about each of the Three Questions?), (2) Conclusions (what can we conclude about each of the Three Questions using this evidence?), and (3) Unanswered Questions (what do we still need to know in order to answer each of the Three Questions?). Although they were exposed to the same district-wide PD initiatives, Ms. Nolan and Ms. Wei seemed to have taken up the various initiatives in different ways. One of those initiatives was using the CER, or claim-evidence-reasoning, framework (in some cases, across subject areas) to support students in constructing arguments from evidence. Ms. Wei stated that she modified the Evidence-based Argument Tool “to fit CER” because she wanted to (Plants Unit interview; see Figure 22 in APPENDIX H). She explained that she switched the Evidence and Conclusions columns in the tool to match the order of claim-evidence-reasoning (rather than having evidence first). Ms. Nolan, however, did not make the same modification even though she was aware of the CER framework and noted that her school and district were working hard to use CER; in fact, she questioned whether the CER framework was better for her students’ learning. Ms.
Nolan critically noticed that her students were struggling to construct explanations, particularly in using evidence from the investigation, and decided to embed the Evidence-based Arguments tool into the Explanations Tool in the form of a checklist. Furthermore, Ms. Nolan had a different perspective about the last column of the Evidence-based Arguments Tool and realized that “students did not know how to come up with the Unanswered Questions, and then my realizing that that’s so important for them to come up with those Unanswered Questions” (Y1 follow-up interview). Ms. Wei, on the other hand, expressed challenges in using the tool with her students. She said that they were working on the tool and “that column that says Unanswered Questions, part of that column I think is in a way easier if you ask questions all the time and are curious about why things happen, and my students really struggle with that column” (Systems & Scale Unit interview). As with the Expressing Ideas Tool, Ms. Wei attributed students’ performance on the tool to students’ familiarity with the practice of asking questions and being curious about why things happen. Thus, Ms. Wei’s reasons for sensemaking about the tool were about how to streamline her instruction by making all curriculum materials look the same (i.e., modifying the Evidence-based Arguments Tool to look more like CER) and how students’ abilities or capabilities influenced their interactions with the tools. Ms. Nolan’s reasons for sensemaking about the Evidence-based Arguments and Explanations Tools were about how to modify the tools to better support her students in reaching the level of performance she wanted them to reach or thought they would be capable of with appropriate scaffolding. At the same time, Ms. Nolan expressed concern about the continued use of the scaffold:

I looked at some of them [tools], for sure, I’ve looked at probably a quarter of them or so, and they’re good.
My fear though is, if I took this [scaffold] away, what would they write? (Systems & Scale unit interview)

Thus, Ms. Wei and Ms. Nolan were sensemaking about the same Carbon TIME boundary objects but in different ways. Ms. Wei made modifications to the boundary objects and her enactment primarily in order to create more coherence for students and for herself; Ms. Nolan, on the other hand, made modifications primarily in order to bridge the gap between what the curriculum offered and how her students were performing or interacting with the boundary objects. One difference between the two teachers is that the nature of coherence for Ms. Wei seemed to be more linear and one-dimensional whereas Ms. Nolan recognized the complexity of the boundary objects and strived to make modifications that maintained the integrity of the curriculum while also supporting her particular students’ performances. Therefore, I determined that Ms. Wei’s sensemaking was generally unproductive for her own learning of rigorous and responsive science teaching practices associated with Carbon TIME boundary objects. Although, like Ms. Nolan, she critically noticed her students’ interactions with the curriculum materials, she attributed students’ interactions to deficits in their abilities and capabilities. One difference between Ms. Wei and Ms. Nolan, however, was their school context. Ms. Wei explained that part of her challenge with teaching in general was differences in her students’ prior knowledge:

And so even that Systems & Scale takes you through some of the basic knowledge they expect students to have, the students that don’t have prior knowledge or can’t remember what an atom is and the difference between an atom and a molecule, or they don’t, or if they don’t have, if they haven’t had prior experience, or if they have misunderstandings about scale, then it’s really hard for them to catch on with the lessons as is. (Systems & Scale unit interview)

When Mackenzie asked Ms.
Wei what she was doing to scaffold that, Ms. Wei replied that she didn’t have a good way of scaffolding, so she would tell students to come in for extra time with her. She stated explicitly: “I don’t currently have any structures in the classroom itself to help those students out” (Systems & Scale unit interview). However, Ms. Wei made an effort to engage students when they were in the classroom—she used the Whoosh bottle demonstration (a dramatic Whoosh sound caused by lighting vaporized ethanol in a five-gallon water bottle) to make the phenomenon more engaging for students. As Ms. Wei explained to Mackenzie:

Just in terms of a demo, so at the beginning of the year I think it’s really important to have some of these wow things for students, so that they feel like, oh my gosh, this class is amazing. And when you burn the ethanol in the little dishes, it’s not very exciting, and they can’t see it because it’s 32 kids. So the Whoosh Bottle we used because it’s something really flashy that still uses ethanol and you can walk around with the Whoosh Bottle, so it’s just a lot more engaging. It was the same idea behind the Methane Bubbles that we went from the Burning Ethanol lab and then we were doing a lot of paperwork and direct instruction, like them listening and taking notes and then trying to fill out all of these Explanation Tools. And so they just needed something in there that told them, oh, this is the thing that we’re talking about, because there has just been a disconnect. (Systems & Scale unit interview)

Ms. Wei expressed her belief that it was important to have “these wow things” for students in order to make the phenomena in the lessons more engaging. Part of her thinking about student engagement also seemed to involve students’ personal connections to the curriculum in the form of student voice. Ms.
Wei talked about flashy demonstrations and student ownership of their own learning through student engagement that was triggered by these "wow things" (see Table 23).

Table 23
Ms. Wei's Goal of Having Something a Little Bit Flashier for Students

SS unit (Oct 2015): "Now I would like something a little bit flashier, if possible, because (Mackenzie: It is from the woman who used the Whoosh Bottle.) Right. It's just really hard to sell to the kids."
AN unit (Dec 2015): "There's not a real big wow factor in like… in these processes that much. And so that was part of the reason why we did the dissection…"
PL unit (Dec 2015): "Well, I wish that there was just some way, I like dry mass; I just wish there was some way to make it somehow a little bit more real for students. I don't know…"
Post (June 2016): "I think ideally Carbon TIME wants them to take a little bit more ownership of their learning and I don't know if that happened this year."
Y1 (Oct 2016): "Thinking about like how to bring student voice to some of the Carbon TIME curriculum. So it's actually been great to have other people at other schools to talk to about this stuff."

Thus, a distinctive phrase for Ms. Wei was, "something a little bit flashier." Ms. Wei critically noticed ways in which her students were engaged or not with the curriculum materials in the units, especially if they were contrary to her expectations. Thus, I identified an occasion of sensemaking for Ms. Wei about the Plants unit investigation, and I now describe how her social commitments to her school community seemed to influence her sensemaking in ways that were generally unproductive for her own learning and students' engagement in three-dimensional science learning. Ms. Wei was in a district and school that was pushing multiple initiatives, including common assessments (called end-of-course exams in her school) and the CER framework for supporting students' argumentation practices. Thus, Ms.
Wei's local obligations included common school-level assessments and alignment to district-level initiatives such as standards-based grading and learning targets. Ms. Wei was committed to making standards-based grading work with Carbon TIME. As she explained to her case study coach, Mackenzie: I like the idea that students can be assessed through these formative assessments constantly, and I think that ultimately standards-based grading will help in this integrated class model that we have of honors and regular students together. So Ms. Nolan and I got together yesterday specifically to talk about honors and regular integration into one class. And I think that standards-based grading is like one of the really great ways that we can establish equity among students in a class, if we can execute it in that way. (Post interview) That's [the network] been working really well, actually, because I have been networking like with Isabelle (pseudonym) a lot specifically about assessments and standards-based grading within Carbon TIME. (Y1 follow-up interview) Ms. Wei was spending time thinking about how to use and adapt Carbon TIME to meet multiple objectives in her local context. In addition, she was making use of her colleagues in the Carbon TIME Northwest Network to discuss ideas about how to meet these goals. In particular, through working closely with Mackenzie as her case study coach, she strengthened her connection to Ms. Nolan. I identified an occasion of sensemaking for Ms. Wei about the Plants Unit investigation in which she wished that there was a way to make the investigation "somehow a little bit more real for students" because she critically noticed that they were not as engaged with the results (see Table 24). During planning for enactment, Ms.
Wei had decided to grow and collect data about the radish plants herself instead of having her students do it, and she used Vernier probeware to measure the CO2 and O2 concentration levels quantitatively rather than using the BTB indicator to obtain qualitative measures.

Table 24
Ms. Wei's Occasion of Sensemaking About the Plants Unit Investigation

Interactions Among Goals & Resources:
Goals: "I just wish there was some way to make it somehow a little bit more real for students" (PL unit) + Social Communities: "I'm trying to figure out like number of plants that's going to yield the best data [to give to my colleagues]" (PL unit)

Critical Noticing:
S: "They just totally took my word for it" (PL unit)

Outcomes of Sensemaking:
Decision: "Because I was doing a lot of things behind the scenes. I was doing all of the massing, and the drying and everything like that" (PL unit)
Reflection: "So I think that's part of the reason why that students maybe have felt some disconnect to it" (PL unit)

During reflection on enactment of the unit investigation with her coach, Ms. Wei reflected that her students may not have been as engaged as she had hoped them to be because she had not allowed them to collect the data themselves: But I continue to wonder how connected the students felt to that data…. Because they did, I mean, they saw me for four weeks dumping the dry mass of the crystals into the water and doing that whole thing…. But they didn't, they just kind of totally took my word for it (Plants unit interview).
As she reflected on this issue with Mackenzie, she justified her decision to collect the data herself with a variety of reasons, including logistical issues ("I couldn't figure out an easy way to distribute and then recollect"); issues with the plants themselves being "fragile" ("they were being kids and they [plants] got knocked over and gel on the floor"); and, issues with getting "good data" to share with her school colleagues, who had not done the dry mass lab and were therefore using her data with their students. Ms. Wei recognized that her decision to collect the data herself may have contributed to students' disengagement: Well, I wish that there was just some way—I like dry mass—I just wish there was some way to make it somehow a little bit more real for students. Because I was doing a lot of things behind the scenes. I was doing all of the massing, and the drying and everything like that. So I think that's part of the reason why that students maybe have felt some disconnect to it. (Plants unit interview) Ms. Wei was concerned about her perceptions of students' feelings of disconnect to the lesson. Yet, she ultimately justified her decision by sharing that since her colleagues had not done the lab ("their kids didn't do dry mass because they'd never grew their radishes"), she had shared her data with them and I inferred that she wanted good data to share with her colleagues: I also think that the plants in the light and dark; it took me probably like four different 24-hour periods to actually get good data. Partially because of our probes, but also partially because I'm trying to figure out like number of plants that's going to yield the best data. And from everyone that I've talked to no one has been able to be successful with the actual BTB when it comes to plants; I think it was plants in the light. I can't remember which one it was. Everyone's manipulating their BTB.
So I feel like if there's a lab that we're going to do, we should be able to do it without, and collect legitimate data without manipulating it. (Plants unit interview) This excerpt shows that Ms. Wei was concerned about getting legitimate data. After hearing from others in her network that they were struggling to use the BTB correctly, she decided to use the Vernier probes to measure CO2 levels directly and was experimenting with how many plants were required to get good data: "Well, I could not put in too many plants because, for some reason, it got too wet in there and the probes won't work with too much moisture." In conclusion, Ms. Wei's approach to sensemaking was generally unproductive for her own learning and her students' engagement in three-dimensional science learning. A core conflict for her was her focus primarily on students' interest in the phenomena, which caused her to select demonstrations for their wow factor and flashiness and which did not necessarily contribute to students' understanding of how principles of matter and energy conservation could be used to explain discrepancies in mass data measured in the investigations. Ms. Wei recognized her own contributions to students' disengagement yet rationalized her decisions and ultimately placed responsibility on students (e.g., lack of prior knowledge, lack of curiosity) and the curriculum materials (e.g., fragility of radish plants, two-dimensional Instructional Model). Influence of school communities. I identified an occasion of sensemaking for Ms. Wei about her enactment of the Plants unit investigation that can be visualized in Figure 14. Ms. Wei's social commitment to her school colleagues influenced her initial decision during planning for implementation to collect data herself.
During classroom enactment with her students, she critically noticed students' disengagement, and during the unit interview with her case study coach, she reflected that she wanted to make the investigation "more real" for her students. Thus, in this occasion, her reflection, which was an outcome of her sensemaking about the Plants unit, cycled back and became a new goal for her, which was connected to her overarching goals about student engagement and ownership of learning. Although student engagement is important in terms of "being interested in" and "motivated to learn" about the phenomena in the unit investigations, a focus on students' interest, motivation, or curiosity by itself is not sufficient to engage students in three-dimensional science learning. The actions that Ms. Wei took were situated in a context in which she was socially committed to her school colleagues in the form of being a "good colleague" by providing them with data that they needed because they had not done the investigation in their classrooms. Furthermore, Ms. Wei was committed to students' engagement in terms of fostering their motivation, interest, or curiosity in the unit investigations. Because of these two social commitments, Ms. Wei critically noticed when her students were disengaged with the data in the investigation because she had chosen to collect the data herself.

Figure 14. The influence of Ms. Wei's social commitment to her school colleagues on her occasion of sensemaking about the Plants unit investigation

Yet, in the end, Ms. Wei justified her decision with the need to provide data to her colleagues and reflected that she was still committed to her students' interest in the investigations and wanted to make them more real for students. Ms. Wei's actions to maintain or improve her status with people in her school community were reasonable given that, to fit into a community, teachers often need to be seen as "valuable team players" and as "fun teachers." Mr.
Ross's occasion of sensemaking about using student assessment data for teacher evaluation: "How to fit all of those things together." Mr. Ross was a White male teacher who had 8 years of teaching experience and a major undergraduate emphasis in biology. He taught in a suburban high school and was part of the Carbon TIME Midwest Network. Of note is that Mr. Ross worked with his case study coach, Caitlin, during the previous year (2014-2015) to pilot Carbon TIME case study data collection methods, including testing protocols and equipment for video-recording of classroom instruction, classroom observation tools, and teacher and student interviews. Mr. Ross was concerned about how his students perceived his teaching and his class. For example, he shared with Caitlin that: Sometimes there are students that… When I read my evaluations, there are students who are like, "I love Mr. Ross's notes." In the same hour, "I hate notes." And so I do like to offer these different things. (Post interview) Later in the same interview, he talked about how his physics students had noticed and commented that he was unprepared for class (because he had spent the time preparing for his biology classes in which Carbon TIME was being taught): But I certainly am not… I am a pretty thin-skinned person when it comes to my craft, so that was hard but at the end of our interactions the students left saying that it was a great class and their evaluations were sterling. They were just great evaluations with really good comments. So then it was okay. It was difficult to juggle just in that one respect, but next year if I was teaching all bio it wouldn't be a problem. Mr. Ross was eventually okay with their criticisms since the students gave him "sterling" evaluations at the end. He also mentioned looking at his ratings on the RateMyTeachers site: I also don't put any stock into RateMyTeacher.com at all, right? Because it's not vetted in any way. You can put anything that you want on there.
And last year, one of my bio students was like, "He needs to stick to the district curriculum." (Systems & Scale unit interview) Thus, despite Mr. Ross's claim that he didn't put any stock into an anonymous online teacher evaluation site, Mr. Ross seemed to be concerned about students' evaluations of his teaching. At the same time, he was also trying to meet other local obligations, including common assessments and school or district-wide PD initiatives. In this way, he was similar to Ms. Wei in terms of having to navigate multiple PD initiatives in his local context. When Caitlin asked Mr. Ross what was most important for him to talk about after teaching the first Carbon TIME unit, he said: "Well, you know at the same time I'm doing Systems and Scale, I'm also doing biomolecules and biochemistry" (Systems & Scale unit interview). During the first year of implementation, Mr. Ross had a consistent goal of trying to incorporate all the different PD initiatives at his school; thus, a distinctive recurring phrase for him was, "how to fit all of those things together." Across his interviews, he mentioned the following initiatives in addition to his participation in the Carbon TIME Project and his local obligation to common assessments: Accountable Talk, Claim-Evidence-Reasoning (CER), and International Baccalaureate (IB). Although Mr. Ross expressed difficulties and worries about trying to meet all of his local obligations, he also seemed to value each initiative. For example, he stated that he thought working on scientific argumentation (i.e., CER) was "a decent goal" and his teaching evaluation required showing students' progress in scientific argumentation: The evaluation required me to show a student's progress in scientific argumentation. And they gave us a rubric; a four, a zero, one, two, three or four point rubric. It was a five point rubric really because there's a zero. And the rubric was pretty specific….
So we did a great job with claim, evidence and reasoning but we didn't do any referencing. So it should be CERR. There should be some referencing. (Post interview) Thus, a core conflict for Mr. Ross was "just trying to figure out how to fit all of those things together" (see Table 25). Mr. Ross used several colorful analogies to express what he was trying to do, including crossing t's and dotting i's, killing two birds with one stone, and double dipping. At the beginning of his second year of implementation, he was still thinking about how he could use Carbon TIME to meet his other obligations: So right now I keep trying to think how I can double dip, if you will. Like how I can make sure that I'm doing, I like the stuff from Carbon TIME and continue to use it but then also make it count for this other area. (Y1 follow-up interview) In this example, Mr. Ross uses the analogy of double dipping to indicate how he is trying to fit Carbon TIME with local obligations.

Table 25
Mr. Ross's Goal of Fitting It All Together

SS unit (Oct 2015): "Just making sure that I cross all my district t's and dot all district i's is the only thing that I worry about…. Mostly related to the common assessment."
AN unit (Dec 2015): "It's just difficult to kind of wade through all of that at the same time as I'm doing everything else that I normally have to do."
PL unit (May 2016): "All we have ever been working on is scientific argumentation, basically. That's like our district goal, and it's a decent goal, right?… So what I like about the Plants thing is it really blends itself well to the framework that we've been using—the claim, evidence, and reasoning."
Post (June 2016): "I feel like it ties in very well with some of the IB stuff. All I have to do is take IB jargon and apply it to this."
Y1 (Oct 2016): "With the IB, inquiry is like a huge focus, and so I'm just trying to figure out how to use the Carbon TIME stuff to cover what I have to teach and how I have to assess it…. So I'm just trying to figure out how to fit all of those things together."

He tried to use the Carbon TIME Post-Test data to show growth in student learning and spent time grading students' written explanations using learning progression levels: Yeah, so I try to use it. I tried to kill two birds with one stone. I literally didn't get the data crunched until January, which we finished Systems and Scale in like, November. So it took me forever to actually get it all done and it didn't… And the evaluation process, I did all of that and then they send us this thing like, "Just put these into the boxes." So all the information that I wrote into the boxes, they sent us a pre-filled out form and then ultimately said, "You don't need to do it this way this year. Any form of growth you want to show," after I… But on purpose I wanted to… I'm sure that I'll get better at it doing it more. So I wanted to experience the whole thing. (Post interview) Thus, an occasion of sensemaking for Mr. Ross was about the Carbon TIME Pre- and Post-Tests (see Table 26). Mr. Ross critically noticed that his students' responses on the "essay," or written explanations, showed better understanding than the "all, some, or none" forced-choice questions. Due to technology and human resource limitations, the Carbon TIME online student assessment system provided scores only for the forced-choice questions. Mr. Ross, then, came up with an elaborate grading scheme and decided to grade every short-answer response using the learning progression levels. This level of detail was necessary if he was going to use the assessment data to show student growth for his teaching evaluation.

Table 26 Mr.
Ross's Occasion of Sensemaking About the Pre- and Post-Tests

Interactions Among Goals & Resources:
Goal: "I want to know what your evidence is, and I want to know how you use that evidence with your reasoning to support that" (PL unit) + Social Communities: "I have to do something called a SLO—student learning objective" (AN unit)

Critical Noticing:
S and C: "So in some cases I think that when they wrote their essays, it shows they understand better than the all-some-or-none questions" (PL unit)

Outcomes of Sensemaking:
Decision: "I'm also going in and grading every single short answer question on a 1, 2, 3 or 4" (AN unit)
Decision: "I'm using it for my SLO goals, my evaluation for this year" (AN unit)
Reflection: "Because I gave my students a test grade based on those questions, it makes me a little bit worried because… some students are thinking on that deep of level" (PL unit)

One outcome of his sensemaking was his worry that giving students a test grade based on the forced-choice questions would not accurately assess students' understanding because some of his students were "thinking on that deep of level" and thus overthinking the "all, some, or none" choices. Goals and resources that influenced his sensemaking included his goal of wanting students to use evidence to support their written explanations and his local obligation to show student growth on his teacher evaluation. Influence of school communities. This occasion of sensemaking for Mr. Ross highlights how his overarching concern and time spent sensemaking about the student Pre- and Post-Test data for his teacher evaluation was situated in an ecology of practice in which he was trying to do everything. Local obligations required that he use the same common assessments as his colleagues, use the IB and CER frameworks to design instruction and assess students' learning, and show student learning gains for his teaching evaluation. With all of these obligations, Mr.
Ross felt pulled in many different directions yet believed that he could somehow make it all fit. Mr. Ross was not required to use student assessment data from Carbon TIME in any particular way, but he was required to provide evidence of student growth for his teacher evaluation. Understandably, he wanted to "double dip" since he was already involved with Carbon TIME, and he decided to use students' Pre- and Post-Test scores to show student learning gains. Notably, he developed an elaborate grading scheme to score the forced-choice questions using points and written responses using learning progression levels. This focus on using the data for teacher evaluation purposes influenced his critical noticing of students' responses in terms of grading for accountability and not necessarily assessment of student understanding. This is not to say that Mr. Ross never engaged in sensemaking about students' learning or understanding. Rather, his focus on showing student growth seemed to enhance his critical noticing of how to score students' responses in a way that would show student learning gains for his teacher evaluation and seem "fair" to students but not what students' responses were telling him about their understanding of the content (see Figure 15).

Figure 15. The influence of Mr. Ross's social commitment to school administrators and students on his occasion of sensemaking about student assessment data

The actions that Mr. Ross took were situated in a context in which he was socially committed to his school administrators in the form of "being an effective teacher" by showing student learning gains and to his students in the form of "being a fair teacher" by grading fairly. Mr. Ross's actions to maintain or improve his status with people in his school community were reasonable given that teachers are accountable to the people who evaluate their performance—administrators and students—whether formally or informally.
Through conversations with his case study coach, who had observed his classes, Mr. Ross recognized, however, that there were things he could do to be more responsive to students during classroom instruction. For example, he expressed wanting to have students reflect at the end of class because he valued having students make connections between what they were learning today and what they had learned before, but he explained that he couldn't do that because students at his school were too used to lining up at the door: Like, they don't want to use that time to reflect. They just want a mental break, and they try to take it, and they try to line up at the door. So it's really unfortunate, because I do believe that the best time to reflect is right at the end. But the way that the kids see things and the way that they've been in school for a while, you know, multiple teachers allowing kids to line up at the door has just fed into this, you know, "the last five minutes is time for me to pack my stuff and get ready to go," when the last five minutes is often the most quality, you know, you've just got all the materials you need to actually synthesize something. (Systems & Scale unit interview) Later in the year, Mr. Ross still had the same problem and attributed it again to students wanting to line up at the door: And if I were doing a better job, a little bit of reflection at the end of each day would help that [connect what came before with what's happening now] occur again. Again, the difficulty with having reflection at the end is that everyone just packs up and wants to line up at the door and go. (Animals unit interview) This example shows that Mr.
Ross was willing to let responsibility for some aspects of his teaching practice, such as taking the time to reflect with his students at the end of the day, fall to others—in this case, to his students and school colleagues (e.g., their learned behavior of lining up at the door as a result of having multiple teachers in the school allowing them to do it). At one point, Mr. Ross told Caitlin that: "I've decided for sure I'm not doing anything that won't actually be of benefit to me teaching, right?" (Animals unit interview). Here, then, is the heart of the matter, so to speak: Mr. Ross was teacher-centered and his sensemaking was more about his teaching than his students' learning. In this occasion of sensemaking for Mr. Ross, he critically noticed inconsistencies in students' responses on the forced-choice and explanation questions, but his worry was not about what that inconsistency might mean in terms of student understanding but about how grading could accurately reflect students' understanding. From his perspective, this worry made sense given his concerns about students' evaluations of his teaching and his goal of using the data to show student growth. In conclusion, Mr. Ross was concerned primarily with how to use Carbon TIME to meet other obligations, such as his teacher evaluation. There was insufficient evidence of sensemaking in the data for Mr. Ross for any of the Process Tools because he either expressed liking and/or understanding them or had neutral feelings about them. Figure 16 shows Mr. Ross's landscape of sensemaking about Carbon TIME boundary objects: there was sufficient evidence of sensemaking in the data for Mr. Ross for only the Instructional Model and Post-Tests.

Figure 16. Mr. Ross's landscape of sensemaking about Carbon TIME boundary objects

Yet, in reflecting on his enactment with his coach, he recognized to some extent that his teaching was not as responsive as it could be.
Rather than really listening to students' ideas and using them to inform his instruction, he was marking them primarily for participation and accountability purposes. Thus, Mr. Ross's sensemaking was unproductive in terms of moving his practice towards rigorous and responsive science teaching because his goal of fitting it all together inhibited his sensemaking about how his students' interactions with the curriculum were or were not supporting engagement in three-dimensional science learning. Ms. Nolan's decisions to modify her enactment. Of all the modifications that Ms. Nolan made, here I highlight her decision to modify her enactment with the Expressing Ideas Tool. Ms. Nolan decided to record students' questions during class discussion of students' responses on the Expressing Ideas Tool. Recall that the Expressing Ideas Tool ends with a spot for students to write what questions they currently have about the phenomena (e.g., how ethanol burns). At this point in the unit, students are still expressing their initial ideas about the phenomena based on prior knowledge and are not expected to construct atomic-molecular, mechanistic explanations; therefore, prompting students to ask questions about the phenomena scaffolds their engagement in Asking Questions, one of the NGSS scientific practices. Because of her commitment to empowering students to see how their understanding has changed over time, Ms. Nolan recorded students' Top Ten questions at this stage in the unit and returned to them later, at the end of the unit, in order for students to see that they had made progress by now being able to answer these questions. The actions that Ms. Nolan took were situated in a context in which she was socially committed to her students' motivation, learning, and awareness that their understanding improves over time as a result of engaging in Carbon TIME activities.
Rather than being influenced to conform to local norms and obligations, it seemed that she felt free to "just do our own thing" (Post interview). When Mackenzie asked her who she talked to about Carbon TIME, she named her two in-school colleagues and stated that: "So we're all in it. They are all in. So we were talking about our day-to-day lessons and helping each other out with that" (Animals unit interview). In contrast to the other teachers, it seemed that Ms. Nolan already had good relationships with her colleagues and administrators and did not need to take actions to prove herself to her school community. Summary. How did teachers' social commitments to their various communities influence their sensemaking about Carbon TIME implementation? The findings show that teachers' social commitments to their school communities, particularly administrators and colleagues, did not necessarily support teacher or student learning but helped teachers maintain, improve, or establish their status within the school community. Although these actions were reasonable given that teachers generally want to be seen as collaborative, fair, and fun, they did not result in outcomes that were necessarily helping teachers make progress towards learning rigorous and responsive science teaching practices or supporting students' engagement in three-dimensional science learning. Ms. Nolan was an exception in terms of her responsiveness to students' needs and commitment to empowering students to be responsible for their own learning in ways that placed responsibility on her to support them in doing so.

Theme 3: Teacher Learning of Content

Although the Carbon TIME PD focused on supporting teacher learning of rigorous and responsive science teaching practices, I found that one of the case study teachers was engaged in sensemaking about his own learning of the content. Mr.
Harris was singular among the case study teachers for engaging in sensemaking about what he did or did not know as a result of interacting with Carbon TIME curriculum materials. In this section, I describe an occasion of sensemaking for Mr. Harris in which he was engaged in sensemaking about his students' responses on the Pre- and Post-Tests. Mr. Harris's occasion of sensemaking about the Pre- and Post-Tests: "Learning what they don't know." Mr. Harris was a White male teacher with 14 years of experience and a major undergraduate emphasis in biology. He taught in a suburban high school that also drew students from the surrounding rural areas and was part of the Carbon TIME Midwest Network. Mr. Harris had missed the first face-to-face PD session in August 2015, so he worked closely with his coach, Winnie, to learn about Carbon TIME and how he could implement three units within the context of his trimester schedule. Mr. Harris was notable for freely expressing his concerns to Winnie; results of the descriptive coding showed 88 excerpts coded for Concern (the next highest teacher was Ms. Barton at 23 excerpts). For example, after teaching one unit, Mr. Harris expressed his challenges with using the Carbon TIME PowerPoint slides: There's a lot of pressure. You're trying to… I mean, just going again through the directions when I looked at the slides. Like, "All right, do slides one through three, then ask this question. Now I'll show slide four and have them fill out the tool. Now do slides five, six, seven. Ask this question." And just trying to… You don't realize I felt like a student teacher where I'm like, "My goodness, this is somebody else's…" Like they put these slides exactly here for this reason. And you move to a slide and then you're like, "Yeah, what was that for?" And then you're looking back at the notes. "Okay, for this slide say this." All right. And you can tell that you're not owning the curriculum. And that is a challenge.
(Systems & Scale unit interview) Understandably, Mr. Harris was concerned about his challenges using a curriculum that he had not created or did not feel as if he "owned." He was also concerned about his students' performance on the Process Tools and online assessments. A distinctive and recurring phrase for Mr. Harris was, "learning what they don't know." Throughout his implementation of the units, Mr. Harris was disheartened and frustrated to learn over time that his students were not understanding what he thought they were understanding (see Table 27). He shared these concerns with Winnie during the post-unit interviews, but he also shared them with his school colleagues.

Table 27
Mr. Harris's Learning Over Time About How Much His Students Don't Know

SS unit (Oct 2015): "And I'm like, 'how many times did we say that?' you know? And it was just—that was very disheartening to see that."
AN unit (Nov 2015): "But I'm realizing, it doesn't matter if you get through everything if they don't know the foundational—they don't understand the foundational stuff. It doesn't matter if they can fill in a box."
PL unit (Nov 2015): "It's really, again, the eye-opening frustration of learning what they don't know is very frustrating."
Post (May 2016): "So that's a big difference, is having teachers—we like to think the students know everything we teach them. To try to step back and realize they don't and taking the time to do that is good. It's also hard for us."
Y1 (Nov 2016): "Because we talked about—as a staff last week—about how as teachers we don't like to ask why because then we find out how much students don't know. It's easier to give the multiple-choice question and say like, 'They pretty much have it.'"

There was sufficient evidence of sensemaking for Mr. Harris for only the Evidence-based Explanations Tool and the Pre- and Post-Tests. As the reader may recall from the previous section in this chapter, Mr.
Harris was the only teacher who noticed a problem with his enactment of the Predictions and Explanations Tools, but there was insufficient evidence of extended talk about the tools; therefore, I describe an occasion of sensemaking for Mr. Harris about the Pre- and Post-Tests and note that Mr. Harris seemed to have many concerns and challenges that were not necessarily captured in the data. In this occasion of sensemaking, Mr. Harris critically noticed his own reaction to his students' poor responses on the Post-Tests, particularly on the foundational questions (see Table 28). For example, Mr. Harris shared with Winnie that he was frustrated with his students' performance:

And that was my big frustration. I looked at the post quizzes. I was like, "Oh, it makes so much sense. High energy bonds, low energy bonds. Like when we burn the paper in class and that's organic just like the ethanol and the water's not." And then they take the post quiz and I was like, "Oh my goodness."…. So to see some kids get some of the foundational questions wrong was just, how did that happen? Because obviously there's no wonder they get it wrong when I'm just like, "Copy this note down." (Systems & Scale unit interview)

The outcomes of his sensemaking were reflections about how part of the problem could be how he presented the materials ("copy this note down"), that the goal of getting through everything didn't matter if students were not understanding the foundational stuff, and that it was eye-opening for him to realize his own misconceptions. An important influence on his sensemaking was his social commitment to his colleagues in using common assessments.

Table 28
Mr. Harris's Occasion of Sensemaking About the Pre- and Post-Tests

Interactions Among Goals & Resources:
Goal: "I want our kids to get good grades" (SS unit)
+ Social Communities: "But then, it still comes back to a common assessment.
It's easiest to have data that's easily crunchable" (PL unit)

Critical Noticing:
T, S, and C: "But I'm realizing, it doesn't matter if you get through everything if they don't know the foundational—they don't understand the foundational stuff" (AN unit)

Outcomes of Sensemaking:
Reflection: "It's really, again, the eye-opening frustration of learning what they don't know is very frustrating" (PL unit)
Reflection: "Part of it I feel like is how do I present the material?" (SS unit)
Reflection: "As teachers we don't like to ask why because then we find out how much students don't know. It's easier to give the multiple-choice question and say like, they pretty much have it" (Y1 follow-up)

Furthermore, Mr. Harris was aware of his own developing content knowledge. For example, Mr. Harris talked at length with Winnie about the concept of energy and how he did not know how to answer the question on the Post-Test:

What forms of energy are needed for this chemical change? Because it is chemical change, so I don't know, I guess you'd call it chemical. And where does the energy come from?… And I still don't know that answer: Where does the energy come from to break that food apart when it digests?… I guess just thinking about all those things, so it's like, oh boy. Again, the more you know, the less you know. (Animals unit interview)

Mr. Harris stated that "the more you know, the less you know" in reference to his own realization that he didn't know as much as he thought he knew. Thus, a recurrent pattern for Mr. Harris was to talk about his own learning or his students' learning of the science concepts in the curriculum. His depth of precision showed that he was attending to the Post-Test in a detailed way—for example, noting that students were not answering questions about foundational knowledge correctly (e.g., "carbon is an atom"). In conclusion, Mr.
Harris's approach to sensemaking was generally productive for his own learning of the content and not necessarily of rigorous and responsive science teaching practices or of students' engagement in three-dimensional science learning. However, I note that Mr. Harris was sensemaking about an important concern for teachers—how can they support students in learning science content if they do not understand it for themselves? Put differently, Mr. Harris was engaged in sensemaking about something that he needed to be engaged in sensemaking about—his own understanding of the content—before he could perhaps engage in sensemaking about other issues, such as how to be more responsive to his students' ideas or how to support their engagement in three-dimensional science learning.

Theme 4: Influence of Teacher Beliefs

The last theme that emerged from my data analysis was about the influence of teachers' beliefs on their sensemaking. Using van Driel, Beijaard, and Verloop's (2001) definition of practical knowledge as the integration of teachers' formal knowledge, experiential knowledge, and beliefs, I found that I was able to most easily identify teachers' beliefs in their talk about how and why they implemented the curriculum in particular ways. Among the case study teachers, Ms. Barton was singular for expressing beliefs in both the interviews and PD sessions that seemed counter to the vision of three-dimensional science teaching and learning embodied in Carbon TIME. In this section, I describe how Ms. Barton's beliefs influenced her sensemaking about discourse and grading of the Process Tools.

Ms. Barton's occasion of sensemaking about discourse and grading of the Process Tools: "Knowing what kids think." Ms. Barton was a White female teacher with 20 years of teaching experience and a minor undergraduate emphasis in biology. She taught in a rural middle school in the Midwest. Her school had a student population with more economic than racial or ethnic diversity.
In her first interview with her case study coach, Sierra, she shared that: We have kids that live, you know, over here in Apple Valley (pseudonym) Trailer Park. We have kids that live—that are actual millionaires, like actual, not exaggerating, millionaires. And so, a lot of it is socioeconomic. And it’s like the aesthetic experiences that they come in with… and I found in researching it, that aesthetic education a lot of times can be a good push for them, because they’re not able to like, go to museums on the weekend, or you know. So to me, I feel like the diversity is the amount of time that they’re able to spend with some kind of adults that’s supporting them and the amount of money or, you know, options I guess, or choices that they feel like they have. (Systems & Scale interview) In a later interview, Ms. Barton returned to this idea of socioeconomic diversity by telling Sierra about how her own background influenced her ideas about learning: My family was really, really poor, but I was outside all the time. My grandma watched my cousins and I. She locked us outside, threw peanut butter and jelly sandwiches on the picnic table…. So I was outside all the time. I feel like that helped me learn. For other kids now, they’re just indoors all the time, and they don’t have anyone prompting them to make the connections or to be engaged with the world, I guess. (Plants Unit interview) She talked about using her school’s outdoor classroom as a way to have students use hand lenses to look at and draw pictures of different plants because she thought that forcing them to draw pictures would make them notice more: It [the outdoor classroom] has a pond. There’s actually one muskrat living in there, but there’s bugs and birds and trees, and they can sit down. I just had them draw some plants on a clipboard, and then, I mean, at least that forced them to just be around plants. 
So I think that was one of my intentions of doing that, was just so that everybody would have a shared experience that we could then talk about. (Plants unit interview)

Thus, Ms. Barton's personal background and belief about learning from being outdoors influenced her thinking about what would best support students' learning. Ms. Barton was notable in the first face-to-face summer PD in 2015 for having an extended and slightly contentious public debate with Winnie (Mr. Harris's case study coach) about how her use of interactive science notebooks would be incompatible with the Process Tools in the form of "handouts." The notion of using interactive science notebooks, which are typically spiral-bound or bound composition notebooks, to help students organize their ideas has grown in popularity with science teachers, particularly at the elementary and middle school levels. Thus, Ms. Barton's concern about how she would fit the Carbon TIME tools with her current teaching practices was reasonable, and she was vocal about bringing it up before the whole cohort and Carbon TIME team at the PD session. Yet, Ms. Barton was also notable for being the only case study teacher to consistently refer to the tools as "worksheets." Although a Carbon TIME researcher had tried to correct her language at the first face-to-face summer PD by stating that "we call them the tools" (Midwest Network field notes), Ms. Barton continued to refer to the tools as worksheets. She used the word worksheet a total of 46 times across all her coded interview excerpts. For example, she explained that "like, I know they're called thinking tools, but the kid looks at it and sees worksheet, you know?" (Systems & Scale unit interview). After one year of implementation, she reflected that she still didn't think that "writing on a worksheet necessarily lends itself to talking" (Y1 follow-up interview). Ms.
Barton's conception of the Process Tools as worksheets seems related to her belief that students simply copy each other and therefore the Process Tools as written work were problematic, or perhaps even invalid, evidence of students' thinking. Table 29 below shows her persistent belief that students copy written work (see Table 70 in APPENDIX G for extended excerpts). Instead, she valued talking as a way to better assess students' thinking (see Table 71 in APPENDIX G for extended excerpts) and did not take responsibility for supporting students' writing because enforcing independent writing would "take up all [her] energy" (PL unit interview) and "it's not worth the amount of time" (Y1 follow-up interview).

Table 29
Ms. Barton's Belief That Students Copy Written Work

SS unit (Nov 2015): "I'd rather have them just sit and talk with their friend about it and then write it together. Because they're just going to copy. I mean, if you had them just sit by themselves, they'll just copy to get it done."
AN unit (May 2016): "Well, a lot of them [Process Tools] we did as groups because there was complex thinking, and they're just going to copy off each other anyway."
PL unit (May 2016): "They're just going to copy off each other anyways. I mean, there's nothing that can be done about it."
Post (June 2016): "The problem of teaching is really knowing what kids think because when you give them a written thing, they mainly just copy off of each other."
Y1 (Nov 2016): "I mean, I'd like to have them all write their own, but I don't know if the value equals the amount of time, because if they're all just going to copy off one person anyway, then it's really not worth the amount of time."

Thus, a distinctive phrase for Ms. Barton was, "knowing what kids think." In response to a question about whether and how Carbon TIME had changed the way her students think, Ms.
Barton said, "the problem of teaching is really knowing what kids think because when you give them a written thing, they mainly just copy off of each other" (Post-teaching interview). This quote illustrates Ms. Barton's core conflict—she was able to state a key problem of teaching ("knowing what kids think") yet she failed to critically notice ways in which Carbon TIME was designed to support her in doing exactly that because of a persistent belief that students were just going to copy. Researchers had designed the Process Tools to support teachers in "knowing what kids think" by eliciting and sharing students' ideas, and both the face-to-face and online PD sessions were designed to focus teachers' attention on features of the tools and how to scaffold classroom discourse using them. Despite these PD efforts over the course of a year, Ms. Barton did not take up ideas or teaching practices that would have supported her in moving productively towards rigorous and responsive science teaching. When her case study coach, Sierra, asked her if she had any concerns about using Carbon TIME, she said:

Well, I don't know. I mean, I didn't really have any worries, but I think that I definitely am not going to give out a bunch of individual worksheets. I mean, I just don't think that's a way of learning science, because you're not interacting really. You're just like, "I'm going to silently do this worksheet, and then the teacher's going to silently grade it at home." What is that? You know what I mean? There's no connectedness that way. (Plants unit interview)

In the excerpt above, Ms. Barton showed that she did not pick up on ideas offered in the PD about how to structure classroom discourse using the Process Tools. She stated that worksheets were to be done silently and the teacher would then grade the worksheet silently at home.
Although she expressed beliefs about the low value of individual writing because they're just going to copy, she found ways for her students to complete the Process Tools in pairs, groups, or as a whole-class activity. Thus, an occasion of sensemaking for Ms. Barton involved discourse and grading of the Process Tools (see Table 30 below). I inferred that Ms. Barton critically noticed student engagement during activities that involved talking. Outcomes of her sensemaking included her decision to complete the Process Tools in pairs or groups and to focus more on students' talk than writing; her reflection that she would continue to focus on students' talk as an accountability measure; and her reflection that the amount of worksheets continued to be overwhelming for her. Goals and resources that influenced her sensemaking included her belief that students copy.

Table 30
Ms. Barton's Sensemaking About Discourse and Grading of the Process Tools

Interactions Among Goals & Resources:
Practical Knowledge: Belief that "they're just going to copy" (SS unit); "I just don't think that's [writing] a way of learning science, because you're not interacting really" (PL unit)
+ Social Communities: "I'm really isolated" (PL unit)

Critical Noticing:
S: Student engagement during activities that involved talking and disengagement during activities that involved writing

Outcomes of Sensemaking:
Decision: "a lot of them [Process Tools] we did as groups" (AN unit)
Reflection: "To me, it's better to just at least if you say you're going to do it together, at least when you walk around and you see them talking about it, that's something" (AN unit)
Reflection: "It feels overwhelming to me, the amount of worksheets" (SS unit)

In terms of her social commitments to various communities, Ms. Barton talked mostly to her family members and found it difficult to talk about her ideas with her school colleagues even though she was the department chair.
Her perception was that her colleagues got "irritated" when she wanted to talk about an idea "because I like to talk about things, like how I'm talking about them right now, and a lot of people don't like to talk about things like that" (Systems & Scale unit interview). Despite these concerns, Ms. Barton expressed a desire to talk about her ideas and students' learning with others yet felt constrained by local obligations. In response to a question about who she talked with about Carbon TIME, she responded:

Really, no one because I'm really isolated. I mean, there's really no talking time. The other science teachers and I all have different plan times. I mean, we have PLC, but we have this big agenda of basically…. I mean, I'm the one that sets the agenda for it, but it's really not really set by me. The curriculum director and the principal tell you, "You have to talk about this, this, this, this." You know what I mean? So there's really not a tremendous amount of just open-up time to be like, "Hey, guess what." I mean, we have brought student work before, and I brought some work from the Systems & Scale. (Plants unit interview)

Ms. Barton rarely reached out to colleagues in her Carbon TIME network and, when she did, it was about technical problems using the BTB indicator in the unit investigations. Although Ms. Barton believed that students were just going to copy, she thoughtfully reasoned that writing had different purposes:

I mean, because you have to think about, what is the purpose of writing? What is the purpose of it? If you're going to write—if a student's going to write, why are they writing? And I don't always know the answer to that. Are they writing so that they can process out their own ideas? Are they writing just to show me what they know? Are they writing to kind of like, argumentative writing? Or they're presenting a point of view. Those are all different ways of writing.
But I guess I feel like if they're going to be writing, I want them to somehow be writing to make meaning. And I guess I just have more thinking to do in that area. (Y1 follow-up interview)

This excerpt shows that Ms. Barton recognized her own limitations—she wasn't sure if she always knew students' purposes for writing, yet she wanted students' writing to be for the purpose of making meaning, so she thought she might have more thinking to do in that area. Thus, Ms. Barton did not seem to take responsibility or exercise her authority as the teacher in determining the purposes for student writing in her classroom. Similarly, Ms. Barton identified a problem with the Evidence-based Arguments Tool and gave a reason for not trying to engage in sensemaking about it. She said:

The Evidence-based Arguments; I mean the thing that I like about them is if it were, and this is kind of the problem with this or any other curriculum; it's assuming that all kids want to learn, or are interested in the topic or will do what they're asked to do. And I found with Evidence-based Arguments that one key person would do it, and everybody else looked happy. So I guess I feel like this year, I'll probably just do more talking about the Evidence-based Arguments, and trying to get student input. I mean I'd like to have them all write their own, but I don't know if the value equals the amount of time, because if they're all just going to copy off one person anyway, then it's not really worth the amount of time. I don't know; I like it, and if you had all students that were going to take it seriously, and do their own work, then I think it's really great. On paper, it's really great. (Y1 follow-up interview)

Ms. Barton's reason for not trying to engage in sensemaking about the tool was that students did not take the work seriously and were just going to copy off one person anyway, so she was not going to invest the time to hold students accountable for writing on the tool. Again, Ms.
Barton's language about the tool was vague in the sense that there is no evidence in the data that she noted particular features of the Evidence-based Arguments Tool designed to support students in constructing arguments from the evidence collected during the unit investigations. Likewise, Ms. Barton did not attempt to engage in sensemaking about the Instructional Model. When asked what she liked about the Instructional Model, she responded:

MS. BARTON: I really like the expressing ideas and the predicting…. [pause]
SIERRA: What puzzles you about the Instructional Model, if anything?
MS. BARTON: The amount.... It's a different way of teaching. It's really like a thought process about one thing. The amount is a struggle for me, and maybe it's easier for a high school or older students that can sustain more on one topic. But for me, maybe because I feel pressure to keep moving, and it's kind of a pressure situation. It's hard for me to stay with one thing for that amount. (Y1 follow-up interview)

The first thing to note in this exchange is that Ms. Barton did not initially answer directly about the Instructional Model but rather talked about features of the unit that she liked. When Sierra, her coach, asked her about what puzzled her, she responded with a vague answer about "the amount." Ms. Barton acknowledged that it was a struggle for her and that she felt pressure to keep moving. Finally, she offered a reason for not trying to engage in sensemaking about it—she explicitly stated that maybe it was easier for high school students to sustain more on one topic, meaning implicitly that because she taught middle school students and because it was hard for her to stay with one thing for that amount, she did not seem to value time spent thinking about the Instructional Model. Furthermore, Ms. Barton's language about the Instructional Model was vague in the sense that Ms.
Barton's conception of the curriculum in terms of "amount" of sustained attention required for enacting one Carbon TIME unit did not differentiate between different parts of the unit (i.e., inquiry and application sequences in the Instructional Model). In conclusion, Ms. Barton was concerned primarily with ways in which Carbon TIME did not match her goals or teaching context, which in turn influenced what and how she was sensemaking about the boundary objects. For example, in terms of content, Ms. Barton remarked that Carbon TIME did not address topics that she felt needed to be addressed for her middle school students, such as the differences among compounds, elements, and mixtures, and the concept of density. In talking about density, she told Sierra that:

Density is something that is like, something that you need a deep understanding of. People need to be able to wrap their minds around it; it is something that needs to be done. If you were saying that you were going to use this curriculum, and it was going to enhance students' deep understanding, it should somehow address density. I just feel like it should. (Animals unit interview)

Similarly, Ms. Barton was concerned about skills such as measuring using rulers and identifying dependent and independent variables and felt that those skills "could easily be put" in the curriculum, and so she didn't know "why they don't just like, put it in" (Systems & Scale unit interview). Rather than view the curriculum as a toolkit that she could adapt to her teaching context, she wanted curriculum developers to provide her with a complete curriculum that had everything she would need, including lessons that were clearly aligned to the standards. She stated that she did not have time to look at the teacher's guides or online curriculum materials. Remarkably, she did not realize for a full year that she could modify the curriculum materials:

I wish that the worksheets had the NGSS standard on the worksheet.
And if I had the edit rights to the worksheets, I'd probably go through and do that. I really feel it needs to be on there…. But it would make it way easier for me; I could look it up, and then I would write—my "I can statements" all have to be NGSS referenced, with the actual standard; the code on there, on the board. So if I had the direction it'd be way easier. (Sierra: You should have the Word versions of them to make modifications. Do you have those?) Oh, I didn't know about that. (Y1 follow-up interview)

Thus, for Ms. Barton, the interactions among her goals and resources influenced her to critically notice ways in which Carbon TIME supported students' talk but not their writing. Her beliefs about students' behavior, abilities, and capabilities inhibited her sensemaking in ways that were unproductive for moving her practice towards rigorous and responsive science teaching, yet her decisions and reflections made sense to her given her isolation and perceptions of being in an unsupportive teaching context.

Summary

Before summarizing the findings presented in this section, I reiterate that these narrative descriptions were intended to illustrate particular occasions of sensemaking for teachers with the limitation that they were derived from the evidence available in the data set and are not meant to be representative of a teacher's entirety of practice. In summary, it was important for teachers to critically notice interactions between the curriculum and themselves or the curriculum and their students in order to engage in sensemaking about boundary objects in ways that were productive for their own learning of rigorous and responsive science teaching practices and students' engagement in three-dimensional science learning.
Critical noticing that was one-dimensional was driven by teachers' social commitments to their school communities in ways that helped teachers align with their school- or district-level obligations but did not necessarily align with reform visions of science teaching and learning. Variations in teachers' patterns of sensemaking were due to differences in teachers' reasons for engaging in sensemaking about particular boundary objects or issues.

Synthesis

First, I note that the findings reported in this study are for teachers' first year of implementation only; therefore, they should not be interpreted as representative of a teacher's entirety of practice. Case study teachers' participation in the Carbon TIME Project was a unique experience in terms of the level of support and engagement they had with Carbon TIME staff and researchers. Thus, as teachers continued to participate in implementation for a second year, whether they were continuing as case study teachers or not, they may have shifted their implementation in ways that were not captured in this data set. In summary, the findings show that teachers were engaged in sensemaking about Carbon TIME boundary objects in ways that were productive or unproductive for teacher learning about rigorous and responsive science teaching practices and students' engagement in three-dimensional science learning. The foci of teachers' critical noticing were important in terms of whether they noticed interactions between the curriculum and themselves or the curriculum and their students in ways that considered their own needs and their students' needs. For example, both Ms. Wei and Ms. Nolan critically noticed interactions between their students and the Explanations Tool—they critically noticed that their students were struggling to construct explanations. One outcome of Ms. Wei's sensemaking was the decision to not use some of the Explanations Tools. In contrast, one outcome of Ms.
Nolan's sensemaking was the decision to modify the tool to include a scaffold for students. I determined that Ms. Wei's sensemaking was unproductive for teacher learning and student engagement because she avoided the challenge of figuring out how to use the tool with her students by citing her concerns about repetitiveness of the tools. The outcomes of her sensemaking foreclosed future opportunities for her to learn about how to use the tools to support students' engagement in three-dimensional science learning. In contrast, Ms. Nolan's sensemaking was productive for student engagement because her modification considered students' needs and for teacher learning because she then continued to reflect on how well the modification was working for her students. A factor in whether teachers modified boundary objects was teachers' sense of agency in being able to do so. For example, Ms. Callahan did not feel as though she could modify any of the Process Tools because of the authority of Carbon TIME as a research-based curriculum and her reasoning that it was her first time using the curriculum, so she wanted to implement the tools as they were. Other teachers, however, felt like they could modify the tools if they wanted to—not only had Carbon TIME researchers presented the curriculum as a toolkit for teachers to use and adapt to their classrooms, but they provided teachers with electronic editable versions of the curriculum materials. Ms. Nolan was notable in making several modifications to boundary objects; however, they were not all necessarily helpful. For example, her modification to the Explanations Tool was helpful because it provided a scaffold for students to use evidence about matter and energy changes from the investigation and the Evidence-based Arguments Tool to construct explanations about carbon-transforming processes.
On the other hand, her modification to the Expressing Ideas Tool to replace what she perceived as a less engaging phenomenon with a more engaging one—a panda growing instead of a boy growing—did not necessarily change the nature of the tool in ways that would markedly affect students' interactions with it. Ms. Nolan expressed that she wanted to maintain the integrity of the tools because she respected the authority of Carbon TIME as a research-based curriculum; at the same time, she exercised agency in modifying the materials to fit her students' needs. Another outcome of sensemaking was teachers' decisions to modify their enactment. For example, Ms. Eaton was notable for modifying her enactment around the Process Tools and Pre- and Post-Tests purposefully. The nature of her modifications was to orient students to the boundary objects in ways that would enhance their performance. In an effort to support students' performance on the Process Tools, she talked with them about how the tools were like graphic organizers to help them organize their thinking. For the student assessments, Ms. Eaton talked with them about the language of the tests and what the questions were really asking them to do. In other words, Ms. Eaton was purposefully modifying her instruction to bridge the gap between what the curriculum offered and demanded of students and her students' performances. The result of my synthesis of these findings is shown in Table 31. Based on the findings, I have determined two categories of sensemaking in terms of being productive or unproductive for teacher learning. Differences include variations in critical noticing and modifications to or reflections on enactment that did not necessarily support teachers' learning of rigorous and responsive science teaching practices and students' engagement in three-dimensional science learning.
Reasons for unproductive sensemaking were sometimes related to the influence of teachers' social commitments to people in their school communities in order to maintain, improve, or establish status. And, teachers' engagement in sustained sensemaking over time was productive, especially for teacher learning.

Table 31
Categories of Productive and Unproductive Sensemaking for Teacher Learning

Productive Sensemaking: Critical noticing of interactions between the teacher and the curriculum materials; reflections on implementation that supported their own learning of rigorous and responsive science teaching practices
Unproductive Sensemaking: Critical noticing of the curriculum materials or students' reactions to the curriculum in ways that did not support their own learning of rigorous and responsive science teaching practices

Based on these categories, I have determined that the case study teachers' overall sensemaking, particularly about Carbon TIME boundary objects, placed them in two broad groups (see Table 32). First, Ms. Callahan, Ms. Eaton, and Ms. Nolan engaged in sensemaking that was generally productive for both teacher learning and student engagement in three-dimensional science learning. Mr. Harris also engaged in sensemaking that was productive for his own learning, particularly of the content associated with the curriculum; however, his sensemaking was less influenced by attention to student engagement.

Table 32
Determination of Productive and Unproductive Sensemaking for Case Study Teachers

Productive sensemaking for teacher learning: Ms. Callahan, Ms. Eaton, Ms. Nolan, Mr. Harris
Unproductive sensemaking for teacher learning: Ms. Wei, Mr. Ross, Ms. Apol, Ms. Barton

In contrast, Ms. Wei, Mr. Ross, Ms. Apol, and Ms. Barton engaged in sensemaking that was generally unproductive for teacher learning. Ms. Wei and Mr.
Ross were situated in contexts where they were navigating multiple PD initiatives and, even though they had access to a Carbon TIME network for support, they did not seem to successfully prioritize among the initiatives in ways that supported their own learning or students’ engagement. Ms. Apol and Ms. Barton were both situated in rural middle school contexts and gave reasons for not trying to engage in sensemaking about particular boundary objects. In the next chapter, I discuss these findings within the context of current reform efforts in science education.

Chapter Five

Discussion

In this chapter, I discuss the findings of this study within the context of current reform efforts in science education. I posited that teacher learning of rigorous and responsive science teaching practices associated with Carbon TIME boundary objects could occur when teachers’ goals, practical knowledge, and social commitments changed over time as a result of engaging in sensemaking about Carbon TIME implementation. The potential and need for teacher learning was greatest for teachers whose goals, beliefs, and practices did not align well with those of Carbon TIME. Although face-to-face and online PD sessions were designed to support teachers in engaging in sensemaking about particular features of Carbon TIME, ultimately it was up to individual teachers to critically notice those features. Within the context of current reform efforts in science education, most teachers’ accustomed practices were not entirely consistent with reform visions of science teaching and learning. Therefore, the purpose of this study was to analyze teachers’ sensemaking to provide insights into why teachers chose to implement Carbon TIME in particular ways. My research questions addressed the following interrelated aspects: 1) What were patterns in teachers’ occasions of sensemaking? 2) How did teachers’ social commitments to their communities influence their sensemaking?
And, 3) How did teachers’ engagement in sustained sensemaking support teacher learning? For all of these aspects, I discuss the findings in relation to the reform goal for teachers to make progress towards learning rigorous and responsive science teaching practices.

First, I found variations in patterns of teachers’ occasions of sensemaking, which indicated that teachers were engaged in sensemaking about Carbon TIME boundary objects and issues that were important to them given their particular goals, practical knowledge, and local teaching context, with differences in outcomes of sensemaking, some of which directly affected students’ classroom experiences. Findings also indicated that teachers’ school communities, particularly their social commitments to colleagues and administrators, influenced their reasons for engaging in sensemaking about particular boundary objects. Finally, the findings show that teachers engaged in sustained sensemaking, or multiple cycles of sensemaking over time, that enhanced their learning of rigorous and responsive science teaching practices. A synthesis of these findings yielded judgments about teachers’ sensemaking as productive or unproductive based on their patterns of occasions of sensemaking and success in making progress towards teacher learning of rigorous and responsive science teaching practices:

• Productive sensemaking for teacher learning: Teachers who critically noticed interactions between themselves and the curriculum materials and reflected on implementation in ways that supported their own learning of rigorous and responsive science teaching practices.

• Unproductive sensemaking for teacher learning: Teachers who critically noticed features of the curriculum materials or students’ reactions to the curriculum in ways that did not support their own learning of rigorous and responsive science teaching practices.
In the sections that follow, I discuss the findings for each of the research questions in relation to these two categories. This chapter begins by addressing each of the research questions and findings in relation to the goal of developing rigorous and responsive science teaching practices that engage students in three-dimensional learning. Then, I discuss limitations related to participants, data collection, and data analysis that constrain interpretations of the findings. Finally, I end this chapter with a discussion of implications for science teacher PD and learning and directions for future research based on the four themes of sustained sensemaking over time, influence of school communities, teacher learning of content, and influence of teachers’ beliefs.

Discussion

Research Question 1: Occasions of Sensemaking

First, I found that the landscape of teachers’ sensemaking about Carbon TIME boundary objects varied across teachers. Based on sufficient evidence of sensemaking in the data, teachers had more or fewer occasions of sensemaking. Therefore, identification of teachers’ occasions of sensemaking provided insight into what teachers found challenging or puzzling; however, the particular occasions of teachers’ sensemaking did not seem to matter as much as teachers’ reasons for engaging in sensemaking, including what they critically noticed. In the following sections, I discuss how teachers’ critical noticing and outcomes of sensemaking relate to productive or unproductive sensemaking for teacher learning and student engagement in three-dimensional science learning.

Critical noticing. Critical noticing is at the center of sensemaking, and the findings show that teachers’ critical noticing varied in terms of what they critically noticed and the precision with which they noticed it. Teachers noticed either interactions among themselves, their students, and the curriculum materials or each of these foci separately.
How was teachers’ critical noticing related to teacher learning of rigorous and responsive science teaching practices and students’ engagement in three-dimensional science learning? First, I determined that occasions of sensemaking in which teachers critically noticed individual foci rather than interactions among foci resulted in unproductive sensemaking for teacher learning and student engagement. For example, Ms. Wei critically noticed the columns of the Evidence-based Arguments Tool and modified them in order to align with a district-level initiative about use of the Claims-Evidence-Reasoning framework in constructing arguments from evidence. The result of this occasion of sensemaking was unproductive for student learning because Ms. Wei’s modification did not take into consideration her students’ needs but only her desire to align her instruction with district initiatives.

Furthermore, the depth of critical noticing varied and affected teachers’ engagement in sensemaking about the boundary objects. Ms. Barton was notable for being the only case study teacher to have insufficient evidence of sensemaking about any of the Carbon TIME boundary objects; this lack of depth, or imprecision, was evident in her language and conception of the Process Tools as “worksheets,” which likely inhibited her critical noticing of features of the Process Tools that could have supported her students’ engagement in three-dimensional science learning. Therefore, Ms. Barton’s sensemaking about classroom discourse was unproductive for both student engagement and her own learning. This lack of depth in critical noticing may be due to teachers’ unwillingness or inability to examine boundary objects closely, including a legitimate lack of time or resources. Indeed, Ms.
Barton often cited lack of time as a reason for not spending more time planning for enactment and expressed a desire to have a complete curriculum that would provide her with everything she needed, including alignment to standards.

Second, I determined that occasions of sensemaking in which teachers critically noticed interactions between foci, such as between the curriculum materials and students, resulted in productive or unproductive sensemaking depending on the outcomes of sensemaking. These outcomes then either inhibited or enhanced the potential for future teacher learning and may be due to teachers’ willingness and ability to resolve challenges posed by their critical noticing of interactions between the curriculum and their students. Next, I discuss outcomes of sensemaking in more detail.

Outcomes of sensemaking. In my conceptual framework for investigating teachers’ sensemaking about their implementation of Carbon TIME, I identified two types of outcomes: decisions to use, not use, or modify the curriculum materials or classroom enactment; and reflections on those decisions or any of the goals and resources that influence sensemaking (teachers’ goals, practical knowledge, or social commitments). The findings show that teachers made modifications to the curriculum materials, boundary objects of interest, and/or their classroom enactment. But the question is, did those modifications help teachers and their students make progress towards rigorous and responsive science teaching and three-dimensional science learning? And did teachers’ reflections on any of the goals and resources support their progress?

Decisions to modify curriculum materials. Teachers’ decisions to modify curriculum materials involved a certain kind of agency in terms of feeling like they could modify the materials.
To modify curriculum materials, including Carbon TIME boundary objects, teachers had to take the time to download and edit the material and feel like they had the agency to do so. During this process, they may have critically noticed features of the curriculum material that they wanted to modify, perhaps based on prior critical noticing of their students’ interactions with the material. An example of this kind of sensemaking was Ms. Nolan, who seemed to have a strong sense of who she was as an educator and a vision of science teaching and learning that aligned well with that of Carbon TIME. Modifications helped support students’ engagement in three-dimensional science learning if they attended to students’ ability to use a crosscutting concept to engage in a scientific practice about a core disciplinary idea. In the case of decisions to modify curriculum materials, teachers’ perceptions of the gap between what the curriculum provided and what students needed were mediated by their critical noticing of features of the materials and aspects of students’ behavior and performance.

Decisions to modify classroom enactment. Teachers’ decisions to modify classroom enactment purposefully also involved a certain sense of agency. Of course, secondary science teachers’ enactment may vary from class period to class period as they adjust instruction throughout the day, so by purposeful I mean teachers’ intentional changes in enactment to achieve a desired outcome. Usually this purposeful modification involved some planning on the part of the teacher to prepare for enactment. An example of this kind of sensemaking was Ms. Eaton, who had the benefit of working closely with her case study coach, Daisy, to plan for and reflect on enactment.

Reflections on modifications.
Teachers’ reflections on modifications to the curriculum materials or classroom enactment had the potential to contribute to feedback loops that led to teacher learning, or changes in teachers’ goals, beliefs, or practices over time. Examples of this type of sensemaking were Ms. Callahan and Ms. Nolan. Factors that contributed to Ms. Callahan’s learning were a coach who offered different and sometimes contrary perspectives and a time lag between implementation of the units. For Ms. Nolan, continual sensemaking about the success of students’ interactions with the curriculum contributed to her continual learning about how best to support students’ engagement in three-dimensional science learning.

Reflections on goals and resources that influence sensemaking. Teachers’ reflections on any of the goals and resources had the potential to support teacher learning. An example of this kind of sensemaking was Mr. Harris, who critically noticed his students’ responses on the Post-Tests and reflected that their poor scores could have been due to his own teaching and the common practice of using multiple-choice exams to assess students’ understanding. He did not necessarily implement the curriculum in ways that supported students’ engagement in three-dimensional science learning; however, he was engaged in sensemaking about his own understanding of the content and questioned his own formal knowledge. Thus, his sensemaking resulted in shifts in his practical knowledge, particularly his beliefs about the value of multiple-choice tests for assessing students’ understanding.

Research Question 2: Social Commitments to Various Communities

How did teachers’ social commitments to various communities influence their sensemaking in ways that were productive or unproductive for teacher learning and student engagement? I find Frank, Kim, and Belman’s (2010) notion of utility theory and teacher decision making useful in interpreting the findings.
That is, teachers’ decisions about how to use the curriculum materials are a function of maximizing the utility of either their own teaching efficacy or compliance with the norms of others in the school. I use this notion of utility theory to interpret why teachers may have made modifications to fit their local contexts, how they navigated multiple PD initiatives, and how they met the expectations of local contexts.

Modifying to fit local contexts. Several teachers were influenced by social commitments to their school communities—namely administrators, colleagues, and students—to engage in sensemaking about particular aspects of Carbon TIME implementation. For Ms. Wei, deciding to change the columns of the Evidence-based Argument Tool to fit with CER helped her comply with the norms of others in her school; it was a simple modification that did not require much time, yet the benefit to Ms. Wei would be compliance with school and district initiatives. Ms. Wei shared that she was relatively new to the area; it seems reasonable that she would want to fit into the local culture. This type of sensemaking indicates that teachers’ positions within their school communities can influence their sensemaking. Similarly, Ms. Callahan was new to her school community and benefited from making a simple modification to a data spreadsheet that would help her comply with the norms of others in her school. In both of these cases, prioritizing compliance over teaching efficacy was reasonable and desirable from the perspectives of the teachers.

Navigating multiple PD initiatives. The findings show that Mr. Ross and Ms. Wei in particular were trying to navigate multiple PD initiatives and faced challenges in prioritizing those initiatives. Mr. Ross resolved this challenge by stating a goal of trying to fit everything together, with the result that he was trying to do everything but in shallow ways. Similarly, Ms.
Wei faced challenges in navigating multiple PD initiatives, but she resolved them by choosing among initiatives and among Carbon TIME materials. For example, she chose not to use particular pieces of Carbon TIME that she perceived as unhelpful in reaching her goals. In both of these cases, teachers were maximizing compliance with the norms of others in their school. In contrast, even though Ms. Nolan was in the same district as Ms. Wei and therefore was exposed to the same district obligations, she did not seem to face the same challenges in prioritizing among multiple PD initiatives. Although Ms. Nolan was the only teacher at her school implementing Carbon TIME, she expressed that her school colleagues were supportive of her endeavor and willing to listen when she wanted to share materials or stories with them. Therefore, Ms. Nolan did not seem to face the same pressure to conform as the other teachers. Along with her strong sense of who she was as a teacher, Ms. Nolan felt that she had the agency to modify what she needed to in order to maximize her teaching efficacy.

Meeting the expectations of local contexts. Both Ms. Apol and Ms. Barton taught in rural middle schools in the Midwest. In this context, both teachers faced multiple challenges in implementing Carbon TIME. Ms. Barton expressed her feelings of isolation and wanting to talk about ideas and student learning with her colleagues but faced resistance from her school community. Ms. Apol had decided to implement the curriculum haphazardly, blending it into her usual instruction without a coherent storyline. Both teachers gave reasons for not trying to engage in sensemaking about Carbon TIME boundary objects due to their perceptions of students’ abilities, capabilities, or behavior. Ms. Apol and Ms. Barton were veteran teachers and leaders at their schools; they were meeting the expectations of their local contexts.
In this case, it may not be so much complying with the norms of others as having a different vision of what effective science teaching looked and sounded like. Ms. Barton, for example, valued talking more than writing and did not seem to critically notice how the Process Tools could support her students in doing both.

Research Question 3: Sustained Sensemaking Over Time

Finally, teachers had the opportunity to engage in sustained sensemaking over time about Carbon TIME boundary objects because they implemented at least three Carbon TIME units over the course of one school year. Due to local constraints, some teachers taught three units in succession (e.g., Mr. Harris’s trimester schedule); others made the decision to blend the units with their usual instruction (e.g., Ms. Apol) or even with each other (e.g., Ms. Callahan). Regardless of how teachers chose to implement the units, they engaged with curriculum materials repeatedly in both the PD and classroom enactment settings and had the opportunity to engage in multiple cycles of sensemaking about Carbon TIME boundary objects over time, with feedback loops from outcomes of sensemaking potentially contributing to the goals and resources that influence sensemaking. Because Carbon TIME offered a vision of three-dimensional science teaching and learning that was different from what most teachers and students were used to, one question was whether teachers would engage in sustained sensemaking about Carbon TIME boundary objects in ways that would support their learning of rigorous and responsive science teaching practices associated with those boundary objects.
Based on the findings of this study, the answer to that question is that the outcomes of teachers’ sensemaking—particularly their reflections about modifications they had made to boundary objects or their enactment of classroom practices associated with those boundary objects—contributed to feedback loops that either maintained or shifted teachers’ goals, practical knowledge, or social commitments, resulting in teacher learning over time. Ms. Callahan was an example of a teacher who learned how to enact more responsive classroom discourse teaching practices over the course of Carbon TIME implementation. Multiple factors contributed to her learning, including teaching at a school where phenomena and the use of evidence from investigations to construct explanations were valued. Ms. Callahan was most likely selected for this position at her school because her values and ways of thinking about science teaching and learning aligned with those of the magnet program. Thus, she had a high level of content knowledge and easily appropriated the language of Carbon TIME even during her enactment of the first unit. Ms. Callahan’s teaching practices were rigorous in terms of pressing students for explanations, supporting them in connecting macroscopic-scale observations with atomic-molecular models, and focusing on students’ precision in language. What Ms. Callahan needed to learn were more responsive teaching practices, such as really listening to her students’ ideas and using them to inform her instruction. Her Carbon TIME network contributed to her learning in the form of interactions with her coach and exposure to strategies from other Carbon TIME teachers while she still had time to use them in the units.

Synthesis

Teachers’ reasons for engaging or not engaging in sensemaking about particular Carbon TIME boundary objects influenced the nature of their sensemaking and resulting outcomes in terms of potential for future teacher learning.
For example, where did teachers locate responsibility for supporting students’ engagement in three-dimensional science learning? Did they place responsibility on themselves, the curriculum, or the students? Did they have a deficit view of students’ abilities and capabilities? Did they feel like they could exercise their authority and agency in modifying the boundary objects or their enactment? Did they seek help from others in their social networks? Or did they feel isolated and alone, despite having access to colleagues in a designed network? The answers to all of these questions contributed to variations in teachers’ engagement in sensemaking.

In synthesizing the results of this study, I present a model for productive sensemaking based on the potential for teacher learning of rigorous and responsive science teaching practices that engage students in three-dimensional science learning. Productive sensemaking occurred when teachers engaged in sustained sensemaking over time about Carbon TIME boundary objects and critically noticed interactions between the curriculum materials, their students, and themselves (see Figure 4 in Chapter Three). Productive sensemaking resulted in outcomes, particularly reflections about modifications or enactment, that flowed into feedback loops that resulted in changes in teachers’ goals, practical knowledge, or social commitments to their communities. These feedback loops were important for “restructuring” experienced teachers’ knowledge and beliefs. As van Driel et al. (2001) suggested:

It seems that long-term staff development programs are needed to actually change experienced teachers’ practical knowledge. Given the nature of practical knowledge as integrated knowledge which guides teachers’ practical actions, this is not surprising.
From this perspective, an innovation implies not adding new information to existing knowledge frameworks; in fact, teachers need to restructure their knowledge and beliefs, and, on the basis of teaching experiences, integrate the new information in their practical knowledge. (p. 148)

One contribution of this study is to provide a mechanism by which new information is incorporated into the restructuring process. Productive sensemaking was driven by goals, beliefs, and social commitments that focused on student engagement in three-dimensional science learning. These cycles of sensemaking may be a result of teachers’ willingness and ability to spend time and effort examining curriculum materials closely and resolving challenges posed by their critical noticing of students’ interactions with the curriculum. Productive sensemaking was situated within the setting of a supportive school community and professional network.

In contrast, some teachers focused their sensemaking on aspects of implementation that were less related to rigorous and responsive science teaching practices and student engagement in three-dimensional science learning. Other unproductive foci included complying with school norms (which usually did not align with reform visions of science teaching and learning) or focusing on student motivation at the expense of three-dimensional science learning. Variations in the productivity of teachers’ sensemaking bring up the question of what personal goals and resources teachers bring with them to their classrooms and how educational systems such as local schools and districts can influence teaching at the classroom level. Before I discuss implications of the findings for science teacher PD and teacher learning, however, I discuss limitations of the study that affect interpretations of the findings.

Limitations

I discuss three limitations of this study related to participants, data sources, and the concept of sensemaking from organizational theory.
Participants

First, the teachers volunteered to participate in this study as case study teachers, so teachers’ motivation to engage in long-term PD should be considered. Second, five of the eight case study teachers were White and female, and although they taught in a variety of school contexts, future studies should include a wider demographic range of teachers and schools in order to make broader claims about generalizability. In addition, four of the seven coaches were graduate student researchers, and although they all had classroom teaching experience, they had not all necessarily been trained as coaches except through learn-as-you-go experiences in the weekly case study research team meetings. Each of the teachers and coaches brought particular backgrounds, experiences, and perspectives to the teacher-coach pairing. Thus, the pairings were idiosyncratic (and often based on geographic proximity between teacher and coach) and unlikely to be replicated in future studies. For example, out of all the case study teachers, Mr. Harris seemed to struggle most with his own content knowledge and pedagogical content knowledge, and it was fortuitous that, out of all the coaches he could have had, he was paired with Winnie, who had a high level of content knowledge (having taught Advanced Placement Biology) and who was familiar with the curriculum from piloting Carbon TIME materials as a classroom teacher before attending graduate school. Likewise, Ms. Eaton was the only teacher in the Carbon TIME West Network, and it was fortuitous that she was paired with Daisy, who was not only the West Network Leader but also had many years of professional experience as a coach, educational consultant, curriculum developer, and PD provider; Daisy could support Ms. Eaton’s implementation of the curriculum in ways that others were unlikely to be able to.
Data Sources

These data were captured only for teachers’ first year of implementation; all teachers in this study continued to participate in the larger Carbon TIME project for a second year in 2016-2017 (although not all as case study teachers), and any sensemaking or shifts in teachers’ reasoning or practices in the second year were not captured for this study. Anecdotal stories from teachers’ case study coaches indicate that some teachers did shift their reasoning or practices in the second year. The findings presented in this study, however, are limited to teachers’ sensemaking about their first year of implementation only and should not be interpreted as representative of the entirety of a teacher’s sensemaking about their implementation of Carbon TIME.

Next, I note that the case study teachers and coaches co-constructed the primary data source for this study—the teacher interviews. Russ, Lee, and Sherin (2012) found three types of framing in cognitive clinical interviews with students: inquiry, an oral examination, or an expert interview. The authors noted that students can shift between frames during the course of individual interviews depending on frame-shifting cues from the interviewer. Similarly, I assert that interviews conducted between teachers and coaches could follow similar patterns. As a member of the case study research team, I led the design and implementation of the case study teacher interview protocols. My intent was to frame the interview questions in a way that positioned teachers as experts “in which they take their task to be that of discussing their own thinking, on which they are the experts, and that is relatively unproblematic for them” (Russ et al., 2012, p. 587). The goal was to elicit teachers’ reasoning about their own teaching practices, their students’ learning, and what they thought about the curriculum materials in a way that only they could tell us about.
Sensemaking concerns how an individual makes sense of an event, including their own thoughts, actions, and reactions. Thus, we designed open-ended questions that started with phrases like, “What do you think about this?” or “How do you feel about that?” However, the case study coach could choose to probe or not probe the teacher for more information, depending on how the coach thought the interview was going at the time. In contrast, we wanted to avoid the oral examination frame in which the teacher would be “expected to produce a desired response in a clear and concise fashion” (p. 586). In other words, we did not want the teacher to feel as though there was a “right” response that we were expecting to hear. Furthermore, for teachers’ sensemaking, an inquiry frame can be productive. Russ et al. (2012) described an inquiry frame as follows: “students may choose to frame the clinical interview activity as one in which they should engage in inquiry to construct an explanation. Rather than saying ‘I don’t know’ students may attempt to figure out an appropriate answer in the moment of the interview” (p. 582). This frame can be productive for teachers’ sensemaking because coaches may press teachers to construct an appropriate explanation in the moment. For example, I noticed that Ms. Callahan had modified an Excel data spreadsheet, so in the interview I pressed her to provide an explanation of why she did so and offered an alternative perspective from the Carbon TIME curriculum development team (that the modification was unnecessary) for her to engage in sensemaking about. Although all coaches pressed teachers for explanations, they varied in terms of the extent to which they pursued particular topics.

The co-construction of the interview data by teachers and their coaches is a limitation in the sense that the particular topics, framing, and extensiveness of interviews could be somewhat inconsistent across the teacher-coach pairs.
The semi-structured nature of the interview protocols ensured some consistency but also allowed coaches to bring up things they had seen in teachers’ classrooms and teachers to spend time talking about what they wanted to talk about. For example, some coach-teacher pairs talked more about artifacts such as student work samples or viewed more videos (or transcriptions) of classroom enactment. Furthermore, although coaches were instructed to probe teachers’ reasoning as much as possible, there was variation in how coaches enacted this directive; all coaches asked follow-up questions but not necessarily about the same topics or to the same extent, and even these follow-up probes varied by coach from interview to interview across the year. In a way, these variations in the amount of time spent on particular topics were an indication of what the teacher-coach pair thought was most important to talk about and therefore what was most important to engage in sensemaking about. At the same time, these variations in the data presented a limitation for analysis that I discuss next.

Sensemaking

A key limitation of using the construct of sensemaking from organizational theory is that I have access to teachers’ sensemaking only if they share it in their talk. In other words, teachers could have engaged in sensemaking that was not captured in the data, either because the interview protocols did not elicit it, because the coach did not ask about it, or because of other reasons, such as running out of time to conduct the interview. To address this limitation, interview data were collected at five points in time, spread across a year, and I focused my analysis on recurring ideas, beliefs, and topics. Each of the teachers had their own distinctive patterns of talk, including particular words or phrases that they would use.
However, I note that, despite these efforts to capture teachers’ sensemaking at various points in time, the findings are limited to the sensemaking that teachers revealed to their coaches in the interview data. Furthermore, in defining what counts as an occasion of sensemaking for the context of this study, I identified the components of sensemaking (goals and resources, and outcomes) a priori and constrained my analyses to those components. Given that the study participants were experienced classroom teachers, I identified the goals and resources that influence sensemaking as teachers’ goals, practical knowledge, and social commitments to their communities. Therefore, it is possible that teachers were engaged in sensemaking about issues that were not captured using my conceptual framework.

Implications

First, I note that educational researchers have emphasized the need for studies of PD to connect to teacher and student learning (e.g., Fishman, Marx, Best, & Tal, 2003), and I agree that this connection is important for educational researchers to pursue. However, connecting the experiences of the teachers in PD to student learning gains is outside the scope of this study (although there are data on Carbon TIME student learning gains that can be used in future studies). My focus on teachers’ sensemaking about particular Carbon TIME boundary objects constrained my analyses of teachers’ practices to those practices associated with the boundary objects. Furthermore, the purpose of this study was not to examine fidelity of teachers’ implementation or effectiveness of PD, however they are defined and measured. Rather, my purpose was to explore teachers’ sensemaking about their implementation of an innovative science curriculum as they navigated the settings of PD and classroom enactment over time.
Within the context of a large-scale, design-based implementation research project, one aim of this study was to contribute to researchers’ goal of continually refining the curriculum materials and developing PD and teachers’ social networks in response to issues that emerged as teachers implemented the curriculum. Therefore, using the four themes from the findings, I discuss implications for science teacher PD and teacher learning and end this section with directions for future research.

Science Teacher PD and Learning

There are several implications for science teacher PD, particularly around teacher learning of how to support students’ engagement in three-dimensional science learning. The results of this study show that some teachers were more successful than others in engaging in productive sensemaking about their implementation of the curriculum. Four themes about teachers’ sensemaking emerged from my findings: (1) sustained sensemaking over time, (2) influence of school communities, (3) teacher learning of content, and (4) influence of teachers’ beliefs. Teachers like Ms. Nolan came to the project with a reform vision of science teaching and learning already in place; thus, she was able to engage in sustained sensemaking about how best to use and modify the curriculum to meet her students’ needs. Other teachers needed support in other areas, including developing their own practical knowledge, rigorous and/or responsive classroom discourse practices, and a supportive professional network of reform-oriented colleagues. One implication of the need to provide teachers with opportunities to engage in sustained sensemaking over time involves the timing of PD. Teachers need to cross the boundary between classroom enactment and PD multiple times in order for the PD to have more of an influence on their teaching practices. For example, due to the trimester schedule, Mr.
Harris taught all three units within the fall trimester; by the time the mid-year PD happened in February, Mr. Harris was no longer teaching Carbon TIME and did not have an opportunity to try out new strategies. This implication is important in terms of supporting teachers’ sustained sensemaking about boundary objects that can facilitate reflection on how best to scaffold students’ engagement in three-dimensional science learning. One implication of teacher learning of content is that PD needs to be responsive to teachers’ needs. One teacher may need more support with accessing professional networks, while another may need more support with deepening their own content knowledge. As the case of Mr. Harris showed, teacher learning of content was an important first step that enabled him to then focus on student learning. In order to be responsive, PD providers need to develop ways to identify teachers’ challenges and provide targeted support. Teachers’ professional networks could serve that role; however, as the findings of this study suggest, teachers must exercise some agency in seeking help. Furthermore, receiving help does not necessarily mean that teachers’ challenges have been resolved. The influence of school communities on teachers’ sensemaking suggests implications for systemic change. Teachers cannot meet the demands of reform efforts alone; they need the support of reform-minded colleagues, administrators, and teacher educators. Additionally, PD providers must understand that teachers’ positions in their schools strongly influence their teaching practices, and they must find ways to help teachers negotiate local norms and requirements. Teachers can feel pressure to conform to the norms of their school communities in ways that are often not supportive of teacher learning and student engagement in three-dimensional science learning.
I wonder if there are ways to bring up these issues in the PD setting and discuss how teachers might, for example, navigate multiple PD initiatives. Finally, the influence of teachers’ beliefs suggests that reform efforts should address teachers’ beliefs about students’ abilities and capabilities. As the case of Ms. Barton showed, teachers’ beliefs about what their students were able or capable of doing influenced how they engaged in sensemaking about the utility of the curriculum materials for their classrooms. PD providers could consider ways to elicit teachers’ implicit beliefs and support teachers in reflecting on how they could bridge the gap between what the curriculum offers and what their students need in order to engage in three-dimensional science learning.

Future Research

One aspect of this study that I did not investigate in depth was how the curriculum materials functioned as boundary objects across the settings of PD and classroom enactment over time. This question is better suited for a longitudinal study that collects data over multiple years of implementation and multiple crossings between the two settings. Although I identified particular Carbon TIME boundary objects as a subset of curriculum materials that warranted special attention due to their importance in scaffolding classroom discourse, I did not examine in depth teachers’ and students’ interactions with the boundary objects except as they appeared in teachers’ talk. Examining these interactions may contribute to knowledge about what teachers critically notice, what meaning they make of the features that they critically notice, and whether those meanings change over time and for what reasons. Knowing this would help science teacher educators design PD that supports teachers’ critical noticing of features that lead to productive and sustained sensemaking.
An aspect of implementation that emerged as a concern for teachers was how to assess students’ performances and learning. The Carbon TIME Pre- and Post-Tests were an occasion of sensemaking for seven of the eight case study teachers. Because teachers had not created these tests themselves, they first had to understand what the “right” answers were in order to support their students’ performances on the assessments. Although the PD addressed learning progressions-based assessments and actively engaged teachers in assessing student responses using learning progression levels, teachers still had difficulty with grading and assessing students’ responses. The Carbon TIME project addressed teachers’ concerns at the beginning of the second year of implementation by creating rubrics to help teachers score students’ written responses, but I wonder whether providing teachers with another tool addresses the deeper issue of how teachers view and use assessment data to inform their instruction. Future research could explore the relationship between teachers’ assessment and instructional practices within the context of implementing a reform-oriented curriculum.

Conclusion

In this study, I aimed to explore teachers’ sensemaking about their implementation of an innovative science curriculum across the settings of PD and classroom enactment. I analyzed teachers’ talk to identify occasions of sensemaking about Carbon TIME boundary objects that had the potential to support teacher learning of rigorous and responsive teaching practices that engaged students in three-dimensional science learning. Findings indicated that productive sensemaking involved teachers’ critical noticing of interactions between the curriculum, their students, and themselves. Feedback loops from outcomes to goals and resources could enhance teacher learning by prompting teachers’ goals, practical knowledge, and social commitments to various communities to shift over time.
Variations in teachers’ patterns of sensemaking and success in engaging in productive sensemaking indicate a need to understand more about the factors that influence teachers’ sensemaking.

APPENDICES

APPENDIX A

TEACHER INTERVIEW PROTOCOLS

End-of-Unit Teacher Interview Protocol

Beginning the Interview with the Teacher’s Viewpoint
(The three main questions are open-ended to allow the teacher to talk about what is most important. Try to address each of the sub-questions too.)
1. What happened while you were teaching this unit that you think is really important to talk about?
   a. Tell me about a lesson in this unit that was a high point for you.
   b. Tell me about a lesson in this unit that was challenging for you.
   c. How does CTIME fit in with your usual curriculum? (e.g., did you teach CTIME straight through or add in other lessons in between)
   d. What did you decide to leave out in this unit, and why?
   e. How (or why) did you decide to modify ___ ? (e.g., investigation or tool specific to the unit and to teacher’s use of it; could be asked multiple times)
2. What concerns / worries do you have when implementing and using CTIME?
3. Who are you talking to about CTIME?
   a. What did you talk about?
   b. Who, if anyone, helped you decide to use / not use / modify ___ ? (e.g., tool, experiment)

Opportunities to Address Specific Research Areas
(Look for opportunities to probe each area as teachers respond to the opening questions.)

Networks / Sensemaking
Have you asked anyone for help regarding CTIME?
(If they asked for help) How did you decide who to ask for help about ____ ?
Tell me about how you approached the person you asked for help. (interested in negotiation process and circumstances under which the teacher asked for help)

Curiosity
Did it seem like students were engaged and interested about ___ ?
Did the students have any opportunities to talk about this and listen to one another?
Diversity
In what ways are your students different from one another that matter for Carbon TIME teaching and learning?
How did you see this Carbon TIME unit meeting the needs of diverse learners in your classroom?
What types of support did you put into place in order to facilitate and/or better scaffold their learning?

Principles
How well did the curriculum support your students in tracing matter and/or energy?
How well did the curriculum support your students in thinking about how matter and energy are transformed at different scales when ___? (e.g., things burn, plants grow, animals move, things decay)
Can you tell me about a time when students used OR struggled to use matter and energy tracing to answer the Three Questions?

Exploring Teacher’s Reasoning Related to Artifacts (student work samples or video clips of classroom instruction)
This is an opportunity to compare interpretations between the teacher and the case study coach. Give the teacher the copies of the focus students’ process tools or show a video clip from one of their lessons. Start with these initial questions (use the probes below to follow up on their responses).
1. What stands out to you about the work/lesson that we are looking at here?
2. What was your purpose in using this tool or doing this activity? How well did the students achieve it?

Curiosity
What is interesting to you about these answers?
What makes that answer “better” than the others?
Why do you think that student was able to give a better answer than other students?
How well do these responses represent the entire class?
What is missing from this student’s response? What might help them make that connection?
Were the students aware of and/or interested in the scientific question that the lesson was addressing? (e.g.
were they interested in “how does ethanol burn?” did they realize that the modeling activity was also addressing the same question?)

Diversity
Work Sample Questions
How did your (diverse) students use this tool differently than others?
What additional support did you provide to these students to facilitate and/or scaffold their learning? (Use what you know about diversity in the classroom to be specific here about students with IEPs, ELL students, racially diverse students, girls, etc. I.e., “How did your ELL students use this tool differently than others?”)
Video Clip Questions
How did your (diverse) students engage in the classroom discussion here differently than others?
What support did you (or could you have) provided to these students to facilitate their engagement in this discussion? (Use what you know about diversity in the classroom to be specific here about students with IEPs, ELL students, racially diverse students, girls, etc. I.e., “How did your ELL students use this tool differently than others?”)

Principles
One of the things Carbon TIME is trying to do is to help students trace matter and energy and connect scales. How well do you think students are doing with those [in this video] or [in this worksheet]?
• What about their [answers on this worksheet/actions in this video] leads you to say that?
• How could this activity we are watching/worksheet have better supported students’ tracing of matter and energy?
• Did this activity/worksheet help you understand more about what your students think about matter and energy at different scales? Why/why not?

Networks
Have you shared or discussed these student work samples with anyone else? If so, whom? What did you talk about? What are the circumstances under which you shared it? (e.g., faculty meeting, break, did the person come to you?)

Wrapping up the Interview
1. What else would you like for me to know about your experience using CTIME?
Final End-of-Year Teacher Interview Protocol

Carbon TIME in your classroom
1. Let’s start with a comparison. Please talk about what’s alike and different when you’re teaching a Carbon TIME unit, compared to other units that you teach.
   a. How does Carbon TIME fit or not fit into your existing curriculum? Why do you think so?
   b. Tell us about your role when you are teaching Carbon TIME compared to other curricular materials.
   c. How has Carbon TIME changed the way you teach or think about teaching? Can you give a specific example?
2. Tell us about the role of students when they’re learning Carbon TIME.
   a. Compared to other curricular materials …
      i. Did the students engage differently with Carbon TIME than with other curricula? If so, how? If not, why don’t you think they did?
      ii. Are your goals for your students in science the same or different when you teach Carbon TIME?
   b. Has Carbon TIME changed the way your students think? Can you give a specific example?
   c. How are your focus students alike and different?
      i. Follow up with probes about one or two focus students, based on coach’s perceptions of who is excluded or rarely expressing their ideas or somebody who is dominant.
   d. Discuss a specific (struggling or less-successful) focus student.
      i. In what ways did your implementation of Carbon TIME support their learning? In what ways did it seem to fall short?
      ii. Is there anything you would do differently next time with Carbon TIME to better support students like this?
      iii. Is there additional support you wish you’d received from Carbon TIME to better meet the needs of this student?
   e. What is your perception of how Carbon TIME provided opportunities (or did not) for participation to students with different backgrounds and histories of academic success? (Give specific example)
      i. Was this different in any way from when you were teaching non-Carbon TIME units? How so?
3.
There are some specific parts of the Carbon TIME units that we’re trying to study and understand, and we want to learn how they worked for you. These include:
   • the 4 Process Tools: Expressing Ideas, Predictions, Evidence-based Argument, and Explanations
   • the NREL site
   • your classroom grading & assessment practices
   For each of these, we’d like to know …
   a. how they informed your teaching
   b. how they helped or didn’t help your students’ learning
   c. how they could be improved
   Use as prompts:
      i. How useful do you think the pre-test scores are? How did you use these results? Would these results influence your course design or any other teaching methods? Is there any particular form of report based on pre-test scores that you prefer?
      ii. Why are you applying certain assessment tools/grading methods? How do you incorporate Carbon TIME assessments into your grading of students?

Professional networks and relationships
4. How does Carbon TIME align or not align with how you are evaluated?
   a. What are other ways that you are evaluated?
   b. Listen for, bring up, and/or probe as needed:
      i. student test scores
      ii. standardized testing
      iii. requirements for showing student growth
5. As they relate to Carbon TIME, how do you feel about your interactions with other people in your school, and with people in your Carbon TIME network?
   a. How is your implementation of Carbon TIME supported by your co-workers? Or, how do you not feel supported? Why?
   b. Have you ever talked about what you learned in Carbon TIME with teachers who are not participating in Carbon TIME? What about? How did the teacher respond to this? What do you think about this response?
   c. How about your principal and/or other administrators?
   d. How do you feel about your participation in the Carbon TIME network this year?
   e. Have you made any new friends through Carbon TIME? Apart from PD, have you communicated with Carbon TIME teachers who are not from your same school?
If so, what did you talk about?
6. How have you felt about participating as a Carbon TIME Case Study Teacher this year?
   a. In what ways has being a Case Study Teacher been helpful/supportive?
   b. In what ways has being a Case Study Teacher been a disturbance/problematic?

Thinking about next year
7. What would you like to be different next year, in your Carbon TIME teaching and PD?
   a. What changes will you make next year in terms of implementing Carbon TIME units? For example, which units will you teach, the order of units, the materials you’ll use, or how you will use them?
   We have future online and F2F PD – how can we use this time to support you as best as possible?

Y1 Follow-Up Teacher Interview Protocol

Part 1: Curriculum Materials as Boundary Objects
1. What do you spend time thinking about when you prepare to teach Carbon TIME?
   Probe for: sensemaking (what are they uncertain or concerned about?), agency (what do they feel they have control over?), and networks (who helps or hinders them?).
2. We’re interested in what you think about the Instructional Model now that you’ve taught Carbon TIME for one year and are planning to teach it again this year. [Show or have available a diagram of the IM.]
   a) What do you like about the Instructional Model?
   b) What puzzles you about the Instructional Model?
   c) Are you thinking about the Instructional Model differently this year? And, does that mean you’ll do anything differently in your classroom?
3. We’re interested in how you’re thinking about the Process Tools now. [Show or have available examples of the PTs.]
   a) Which Process Tools do you think best support student learning? (Expressing Ideas, Predictions, Evidence-based Arguments, Explanations)
   b) You didn’t mention X tool—can you elaborate on why you think it doesn’t support student learning as well as the other tools?
   c) What materials do you feel free to modify? On the other hand, what’s important to keep?
   d) Did you make any modifications to the process tools?
      a. Can you tell me about one modification that you made?
      b. Are there modifications that you wish that you could make?
         i. What challenges you in making this modification?
      c. Are there changes that you feel that you cannot make?
         i. Can you tell me more about that?
      d. Are there changes that you felt that you had to make?
         i. Can you tell me more about that?
   Probe for: agency (what do they feel they have control over?)
4. At the August PD, we spent time discussing discourse routines around the Process Tools, such as divergent and convergent discussions. What do you see as the relationship between writing on the Process Tools and talking about science ideas in your classroom?
5. We’re interested in how you’re thinking about Grading and Assessment.
   a) What do you like about the new NREL Assessment system?
   b) Do you have any concerns about using the NREL Assessment system?
   c) How do you think you will use the assessment results differently in your classroom this year?
   d) How do you determine the grades for process tools?

Part 2: Y0 and Y1 Survey Results
This section will be customized for each case study teacher based on their survey results.
Now we’ll take a look at your survey results from Y0 and Y1. We’d like to know more about why you chose particular practices as your “top two” practices, and we’re interested in the shift, if there was one, in your responses from Y0 to Y1. [Show T their responses.]
1. Let’s start with your Formative Assessment practices. In Y0, you selected “Ask students to respond to each others’ ideas (agree/disagree, add on, evaluate, etc.)” AND “Ask students to explain their reasoning in support of correct answers.” In Y1, you selected “Search for better ways to elicit and respond to students’ ideas” AND “Ask students to explain their reasoning in support of correct answers.” Tell me about this shift in your selections.
   Probe for: sensemaking (what are they puzzling or uncertain about?), agency (what do they feel they have control over?), and networks (who helps or hinders them?).
2. Now let’s look at your Inquiry practices. In Y0, you selected “Encourage students to pose questions about the natural world” AND “Ask students to make a prediction about what will happen in an experiment they are about to conduct.” In Y1, you selected “Encourage students to pose questions about the natural world” AND “Have students generate questions about observations or patterns in data.” Tell me about this shift in your selections.
   Probe for: sensemaking (what are they puzzling or uncertain about?), agency (what do they feel they have control over?), and networks (who helps or hinders them?).
3. Now let’s look at your Explanations practices. In Y0, you selected “Practice using a model to explain different phenomena” AND “Present data or their conclusions from investigations to their peers.” In Y1, you selected the same two practices. Tell me about what these practices mean to you.
   Probe for: sensemaking (what are they puzzling or uncertain about?), agency (what do they feel they have control over?), and networks (who helps or hinders them?).
4. Finally, let’s look at your Decision Making practices. In Y0, you selected “Distinguish between claims and evidence in an information source” AND “Pose questions about the impacts of human activity on the natural world.” In Y1, you selected “Evaluate the credibility of scientific claims” AND “Identify the relevant science in an article or media report.” Tell me about this shift in your selections.
   Probe for: sensemaking (what are they puzzling or uncertain about?), agency (what do they feel they have control over?), and networks (who helps or hinders them?).

Part 3: Teacher Agency
Now I would like to talk to you about your involvement in professional organizations.
1. Do you attend any local or national professional conferences? If so, which ones?
   a.
Why do you attend [named conference]?
   b. What do you gain from [named conference]?
2. Are you National Board Certified, or are you currently pursuing certification?
   a. (If “Yes”) Why did you decide to earn National Board Certification?

Part 4: Concluding the Interview
Thank you for taking the time to talk with me about how you plan on implementing Carbon TIME this year. Do you have any questions for me?

APPENDIX B

REFLECTION ON TEACHING PRACTICES SURVEY

Table 33 List of Practices on the Reflection on Teaching Practices Survey

Category: Formative Assessment
FA1. Ask students to explain potentially incorrect ideas at the beginning of a unit
FA2. Record students’ ideas to use again in later lessons
FA3. Ask students to respond to each others’ ideas (agree/disagree, add on, evaluate, etc.)
FA4. Ask students to explain their reasoning in support of correct answers
FA5. Use assessment data to make decisions about how or what to teach next within a unit
FA6. Use assessment data to make decisions about how or what to teach next between units or topics
FA7. Search for better ways to elicit and respond to students’ ideas
FA8. Reflect on the ways in which you interpret students’ ideas

Category: Inquiry
IN1. Encourage students to pose questions about the natural world
IN2. Have students generate questions about observations or patterns in data
IN3. Ask students to identify the types of data that need to be collected to answer a particular question
IN4. Ask students to make a prediction about what will happen in an experiment they are about to conduct
IN5. Ask students to explain a prediction they make about what will happen in an experiment before they conduct it
IN6. Ask students to explain patterns in data that they have collected
IN7. Have students identify unanswered questions at the end of an investigation
IN8. Ask students to assess how well the evidence supports a given claim
IN9.
Ask students to find patterns in data collected through multiple observations (Y0 only)

Category: Explanations
EX1. Describe the mechanism involved in explaining a phenomenon
EX2. Create or draw on a visual diagram (Y1 only)
EX3. Present data or their conclusions from investigations to their peers
EX4. Revise an explanation in light of new evidence
EX5. Critique a presented explanation using scientific reasoning
EX6. Use a scientific law that applies to the microscopic scale to explain a phenomenon at the macroscopic scale
EX7. Describe the assumptions or limitations of a scientific model
EX8. Conduct whole-class discussions with the goal of collective consensus
EX9. Practice using a model to explain different phenomena
EX10. Create a visual diagram and explain in text form (Y0 only)

Table 33 (cont’d)

Category: Decision Making
DM1. Pose questions about the impacts of human activity on the natural world
DM2. Discuss the relationships between science and society
DM3. Evaluate the credibility of scientific claims
DM4. Discuss the limitations of scientific methods and knowledge
DM5. Read and discuss articles in news media or on the web
DM6. Identify criteria and constraints for solutions to a problem
DM7. Consider possible barriers to implementing a solution such as cultural, economic, or other sources of resistance
DM8. Analyze the validity and reliability of a source of information
DM9. Compare and contrast information from various sources
DM10. Distinguish observations from inferences in an information source
DM11. Distinguish between claims and evidence in an information source
DM12. Identify the relevant science in an article or media report
DM13.
Identify sources of error or methodological flaws in an information source (Y0 only)

APPENDIX C

FRAMEWORK FOR DESCRIPTIVE CODING

Table 34 Framework for Descriptive Coding of Teacher Interviews

Parent Code: Boundary Objects
   EBAT (Evidence-based Arguments tool): T talks about the Evidence-based Arguments PT
   EXPL (Explanations tool): T talks about the Explanations PT
   EXPR (Expressing ideas tool): T talks about the Expressing Ideas PT
   INMO (Instructional model): T talks about any aspect of the Instructional Model (must name it)
   PPTS (Pre- & post-tests): T talks about CTIME pre and post unit or overall tests, or NREL assessment system
   PRED (Predictions tool): T talks about the Predictions PT

Parent Code: Classroom Practices
   DISC (Discourse): T talks about students’ writing or talking OR strategies for getting students to talk and write individually in small group or class discussions
   GRAD (Grading & assessment): T talks about grading and assessment practices, including formative assessment
   MODI (Modifications): T talks about modifications (changes, additions, deletions) to CTIME curriculum materials, including boundary objects OR enactment
   NONC (Non-CTIME): T talks about what they do that’s not related to CTIME at all or non-CTIME curriculum they use
   REAS (Reasons): T talks about their goals, purposes, or justifications for a decision they made or why they think something
   STUD (Students): T talks about Ss’ ability, engagement, learning, or attributes
   CONC (Concern): T talks about concerns or challenges using CTIME, including time, materials, website, PD, tests OR talks about things they spend time thinking about

Parent Code: Networks & Obligations
   ALLO (All others): T talks about a person not in the other categories
   CNET (CTIME network): T talks about colleagues (usually other teachers teaching CTIME) in their CTIME network
   COBL (CTIME obligations): T talks about obligations to CTIME staff, curriculum materials, or the research process
   CSTA (CTIME staff): T talks about CTIME staff, including PIs, researchers, PD providers
   LOCA (Local obligations): T talks about school or local level expectations, including common curriculum and tests, teacher evaluation, administration, or standards
   POLI (Policies & standards): T talks about state or national level policies and standards, like NGSS, MSTA, or National Board

APPENDIX D

RESULTS AND ANALYSES OF DESCRIPTIVE CODING

Table 35 Total Number of Excerpts Coded with Each Code for All Teacher Interviews

Code      EBAT EXPL EXPR INMO PPTS PRED DISC GRAD MODI NONC REAS STUD CONC ALLO CNET COBL CSTA LOCA POLI
Ross         1    6    2    5    9    2   12   18    2   21   37   39    6   14   11    0    2   27    1
Harris      11    5    3    1   15    5   12   15    8    3   48   64   88   35    4    2    2   22    1
Callahan     1    6    5    4   16    8   14   14   10    7   48   54   22    9    4    2    7   10    3
Barton       1    4    3    2    2    4   22   18   12   11   58   33   23   14    1    1    4    6    7
Apol         5    3    1    1    6    9   23    8    6   25   36   47   13   15    9    0    5   11    2
Wei         10   21    5    3   11    3   11   15   22   20   36   75   19   11   32    2    8   15    8
Nolan       11   13    6    9   16    3   16   11   15   17   26   73   14   16   22    1    6    8    1
Eaton        6   12    5    2   13    3   12   21   12    2   43   34   21    7    2    0    7    5    1

Figure 17. Results of descriptive coding: Matrix of code co-occurrences in Dedoose

Table 36 Number of Words, Means, and Totals for Teachers’ Talk About Boundary Objects & Classroom Practices

          INMO  EXPR  PRED  EBAT  EXPL  PPTS  DISC  GRAD  Total
Ross      1478   617   115   316  1495  2433  4416  5870  16740
Harris     438   291   292  2586   636  5293  4067  5076  18679
Callahan  1164  1065  1344   243  1792  5194  3558  4097  18457
Barton     604   297   434   260   534   693  6035  5175  14032
Apol       322   481  1578  1732   625  2202  5589  2573  15102
Wei        600   739   171  1147  2963  1471  1572  2562  11225
Nolan     1725   914    29  2150  2076  2690  2976  2533  15093
Eaton      350   535   181  1730  3317  3934  4232  7478  21757
Mean       835   617   518  1271  1680  2989  4056  4421  16386

Table 37 Percentage of Individual Teachers’ Talk About Boundary Objects & Classroom Practices

          INMO  EXPR  PRED  EBAT  EXPL  PPTS  DISC  GRAD
Ross         9     4     1     2     9    15    26    35
Harris       2     2     2    14     3    28    22    27
Callahan     6     6     7     1    10    28    19    22
Barton       4     2     3     2     4     5    43    37
Apol         2     3    10    11     4    15    37    17
Wei          5     7     2    10    26    13    14    23
Nolan       11     6    <1    14    14    18    20    17
Eaton        2     2     1     8    15    18    19    34
APPENDIX E

SUFFICIENT AND INSUFFICIENT EVIDENCE OF SENSEMAKING IN THE DATA

Table 38
Sufficient Evidence for Teachers’ Sensemaking About the Instructional Model

Ross: Mostly, I would just show them where we’re at currently on this model…. So often in school kids fail to connect what came before with what’s happening now. And that’s part of our job as educators, is making sure that they know, like, find that connection…. It helps me organize what I’m thinking and doing (AN unit). The kids still have a hard time on the back slope, like, when they’re supposed to apply…. So again having a model, referring to it, and thinking about it helps me in the long run. Helps me help them…. Like, at the beginning of last year, I did a really good job of putting down a lot of ideas and then bringing them back, and I hope to keep doing that and using this to remind me to do that (Y1).

Callahan: So I still have issues with it going back down the pyramid, and I know I need to let that go…. So I do want to use more, using this this sort of model with the students so they have an idea where we’re going, anything that helps them not get lost in the trajectory of what we, every day, being a little uncertain of what we’re doing…. hopefully it’ll help me keep them more focused because I do feel like it was very disjointed for me last year, not including that module, the—what do we call this again? I’m sorry, the—instructional model…. It’s [going back down the pyramid] just getting more information and resupporting and applying it and just building a stronger foundation again. I understand that, but part of me just wants to keep moving up instead of going back down (Y1).

Wei: I was feeling a little confused just visually by it because it has things like modeling patterns and something on the triangle. I didn’t see how that fit in….
The triangle was confusing to me because I think that the levels of the triangle somehow was supposed to fit in with specifically the observations and gathering evidence but then I also associated the triangle with the downward part of the like model building and then other examples. I need to make that just linear in some way…. Because then when I went down the other side of the triangles, it was like, “Oh and then they should be modeling here, and then they should be making more observations here, but wait a minute that’s not actually what we are doing. We are working with a model instead” (Y1).

Nolan: What I do is, at the beginning of each day, I always have a slide. My first slide is our goals for the day, and the next slide shows the instructional model, with the “you are here” arrow…. Which I think is really nice for a couple of reasons. One, I think it’s going to be comforting to them, and like, grounding for them to see the patterns, to see, “Oh, I’m here, I’m doing this again, I’ve done this before; I was successful before, I can do it again.” And it also helps them, like if they’re on the left side of the pyramid, in foundational knowledge or whatever, they know it’s sort of a free pass, like you can you be confused, and you haven’t gotten very far, and I don’t expect your learning to be very complex yet. So it sort of, it takes off some of that anxiety they might have. Whereas if they have already come down the other side of the pyramid, and they’re still like, “I don’t know, I don’t know what’s going on,” like, that’s a good place for them to reflect and say, “Oh shoot, we’ve got a post-test soon, and she’s not going to give us any more practice” (SS unit). This year, I am being way more thoughtful about thinking like, what skill were they looking at before. If they are trying to explain something so there is a lot of lurching back and forth between the scales. So I’m thinking, where were they before?
What evidence did they have, and so where are we going to go next? How am I going to help them make that connection to make that leap? Yeah. A lot of thinking about the—what’s that?— instructional model (Y1).

Table 39
Insufficient Evidence for Teachers’ Sensemaking About the Instructional Model

Harris: I definitely understand the model and how it all works better this year. I’ve noticed all the PowerPoints now have the Instructional Models at the beginning, which I like because I wish I would spend time going over it with the kids more. But I feel like as a teacher I’m understanding it after teaching it for two years…. Because sometimes they’ll see, you’ve got expressing ideas, the predicting, evidence-based argument tool and then explanation, you have all four of those, and in their minds sometimes they can just view that as like, “Why are we doing the same worksheet again?” And you’re like, “No, this is the point of trying to see how they all work together and look at how there’s been a change over time” (Y1).

Barton: [Sierra: what do you like about the Instructional Model?] I really like the expressing ideas and the predicting…. [Sierra: what puzzles you about the Instructional Model, if anything?] The amount.... It’s a different way of teaching. It’s really like a thought process about one thing. The amount is a struggle for me, and maybe it’s easier for a high school or older students that can sustain more on one topic. But for me, maybe because I feel pressure to keep moving, and it’s kind of a pressure situation. It’s hard for me to stay with one thing for that amount (Y1).

Apol: I like that it’s [Instructional Model] laid out for us.
Even in the PowerPoints, it shows them kind of where we are, and it doesn’t—I have not gone through this, and I probably should with them just to show how you gain knowledge and then how you use it and then how you come up with conclusions, and draw conclusions and so I do think it’s laid out very well, but yeah (Y1).

Eaton: Well, I like the Instructional Model because it really helps me understand the stuff I need to grade…. And it’s not about assessing them on their way up. It’s more about on the way down. So, it just helps me understand where my expectations should be of them and how much I need to hammer into them (Y1).

Table 40
Sufficient Evidence for Teachers’ Sensemaking About the Expressing Ideas Tool

Callahan: I had a hard time with the students for me with the Expressing Ideas, initially. That part was a little, I think, frustrating for them. So I tried to be very positive with it, but it was still hard because I really, they almost… I don’t know if it’s the student population here, but they really, really wanted the right answer (SS unit). I think one of the great things about Carbon TIME is really trying to glean information from the students before we do take on. All that expressing ideas and sort of getting their preconceptions and sort of figuring out where they’re coming from has been a great useful tool (Post). The most important tools on my end is Expressing Ideas for teachers, because it really helps us to gather so much information about the students background, and to see where they’re coming from, and how did their ideas change over time (Y1).

Wei: Like Expressing Ideas tool, you know, there were things like, “What goes in and out of this plant?” or, “What does the plant take in and how does it take those things?” And my struggling learners were kind of like, “I just don’t know what comes out of a plant.
I really don’t know.”… So I always used that [Expressing Ideas tool] but to varying degrees of success I think, just based on students’ prior knowledge. And the way that it was presented once they—for some reason they really wanted to get a correct answer on the Expressing Ideas Tool instead of just expressing ideas. So I had a hard time with getting them out of that and saying, “this is just your ideas right now.” And then I think that there was a lot of similarity between the Expressing Ideas Tool and the Explanation Tool…. Those two things were so similar that when kids were filling out the Explanations tool, they were like, “Wait a second, we did this already” (Post).

Nolan: I really liked the elicitation day; the one, what is it called? Expressing Ideas. Because burning is something that they’ve all had experience with. But it was clear they’d never really thought about it at any level other than just like stuff burns…. That was really cool for me to see them bumping against that, and I think it was cool for them as well to realize there’s a lot of holes in their understanding of something as basic as burning, which actually isn’t that basic at all. It’s quite complex for their brains…. There’s the part on the elicitation where they have to – they write their ideas and then their questions about it. And no, I think I took them down. There was we had the top ten questions that were asked about that. So that was really cool to have that hanging there, and I never referenced it, and I think I’m okay with myself for not referencing throughout the unit. But then we came back to it at the end, and it was our last review activity, which was really fun (SS unit). I think the Expressing Ideas tool that kick off a unit is a really nice idea, and I think it’s attempting to be sort of the puzzling phenomenon that you then hang the rest of your ideas on. For me they fell a little short of being engaging….
So really nice idea, really nice thing for them to come back to at the end, but I don’t think that the scenarios were interesting enough (Post). I mean I’ve modified the Expressing Ideas one a little bit just to put something a little bit more engaging in there (Y1).

Table 41
Insufficient Evidence for Teachers’ Sensemaking About the Expressing Ideas Tool

Ross: When they were expressing their ideas already, they already had some really well-formed ideas that I don’t think they would’ve had had they not had two units of Carbon TIME already (PL unit).

Harris: Well, I certainly still like the Expressing ones because I believe from these we are going to start building the questions that we’re going to want to talk about and again like some of them are left at the end unanswered (Y1). And so sometimes as I use the model I would like to get to a stage—and I said I was going to do this this year—where I can put the expressing ideas, predicting, Evidence-based Arguments and explanations all in one like packet that would be color coded (Y1).

Barton: I really like the Expressing Ideas and the predicting. The Expressing Ideas one, I think that we do more with the whiteboards and stuff. I like to do, with those tools being worksheets or whatever, I like to have the worksheet ones be after we’ve talked as a group, and after they’ve done the activity, or done whatever (Y1).

Apol: I think Carbon TIME encourages them to ask each other questions and to listen to each other’s answers. So it gives them a chance to think through their own thoughts and ideas with Predictions and Expression tools. Then it also encourages them to share their ideas and compare their ideas to others (Post).

Eaton: I think that is a scaffold. To me, that’s like… It’s like giving them closed notes.
It’s giving them something to… And I think that… For Expressing Ideas tools and for the prediction, it’s a really good place for them to go back and make modifications to their thinking. But, really, it wasn’t even for me to know how their ideas have changed. It was for them to know how their ideas have changed because I think they’re learning a lot about how this works. And I just want them to identify it (Y1).

Table 42
Sufficient Evidence for Teachers’ Sensemaking About the Predictions Tool

Callahan: The Predictions Tool, where they’re basically like, “hey, what’s going on here?”…. They want to know instantly what’s happening. They don’t like the unknown. So that was a bit challenging…. So they did it [Predictions Tool] at least twice, and both times, they just really wanted to know what’s going on before they did it. So that was the challenging part for me (SS unit). I think from my end, the students do not like to predict. They like to not know, learn, and answer. They don’t want to, like, think their way through—what might happen and when potentially, especially, if they don’t feel confident in the background information. [Evelyn: During the group interview, the students said that they liked the Predictions and Expressing Ideas Tools because they liked being able to be wrong.] Oh good! I’m so happy!.... So, my guess is they do probably enjoy the Predictions Tools in class because when we go through them, it gets to be highly entertaining…. Like you said, their ability to be just totally out there is pretty, it’s important. They get, they don’t really get a chance to be that way. So I’m glad that they liked that (AN unit). When I’m predicting—but I already know, and then I already knew this.
It sort of becomes, I think problematic for students who started out pretty high level of understanding at the beginning, becomes a little bit more challenging to be engaging as they already know what’s going to happen (Post). I sort of like the Predictions Tool the most partly because I think it actually is learning, even like the Evidence-based Arguments, some of that, the ones that we’ve talked about, they seem not enough or it just seems like they’re putting in the pieces. The Predictions Tool, they maybe know what the pieces are in some ways, and I really like the fact that it’s very authentic, it’s very much the students just being able to discuss and be okay with making mistakes and sort of predicting and not knowing (Y1).

Apol: And now, we’ve always done labs, but I think these labs are a little bit different. These labs always start with prediction, which I have not always done…. I think it [Carbon TIME] is changing the way they think. I don’t know how fast it’s going to happen. Just like with NGSS, I think kids have to get used to being able to know that it’s okay to be wrong. So making predictions and just going all out and saying, “This is what I think is going to happen.” Some kids don’t want to do that. They don’t want to be wrong. So I think it’s going to force them to change…. I liked the Predictions Tool because, as I said, that’s kind of started me thinking that before I teach other units is, “What do you think is going to happen, why do you think it’s going to happen? Were you right, were you wrong?” (Post) I think the Evidence-based Arguments is one of the best ones because they have to look at what they got from the lab, but without the Predictions Tool, you have nothing to compare it to. So I really like this, “What do you know now that you didn’t know before?” because not only, “Did you fill out the Predictions Tool on Monday?
Look at what you know today that you had no clue about on Monday.” So it makes them really—that Prediction Tool becomes important once they do the Evidence-based Argument and their explaining (Y1).

Table 43
Insufficient Evidence for Teachers’ Sensemaking About the Predictions Tool

Ross: The Predictions can, like, can help with the misconceptions or whatever, but yeah, I don’t know (Post).

Harris: I like the predicting, but that is like pulling teeth sometimes. You know, with the kids it’s hard to get them to take the time. Like yesterday we were doing predicting, and I felt like I was giving them too much information trying to get them… Because if not, they just say, “Uhhhh, I think it’s going to be carbon, hydrogen and oxygen.” And I’m like, “What? That’s not even a thought.” So I kinda give them some information to get them there. So I like the predicting, but I feel like sometimes I give too much to them. I’m not giving them the right answer, I’m just trying to get them to write down what their thoughts are (Y1).

Barton: For one thing, I think that it’s a really safe way for kids to get their ideas out. And kids have a lot of hidden ideas that they’re holding onto from past experiences, especially with science; sometimes Carol will say something, or for whatever reason, they have past experience and they’re holding onto that. And I think the Expressing Ideas and Predicting do a really good job of helping the teacher get that out, in a way that I don’t think I was doing before. I think you’re not just putting some information into an empty ball; you’re building information with the information they’re already carrying around. And especially if you have to get rid of something, or try to help them see that something that they are carrying around doesn’t make sense, using the Expressing Ideas and Predicting is really helpful with that….
As far as the writing, kind of predicting and stuff, I’ve been doing more of that on the whiteboards (Y1).

Wei: The Predictions Tool I don’t plan on changing too much. I think that’s simply just a way of getting at prior knowledge again (Post).

Nolan: The Predictions Tool. I mean, it served its purpose. It’s fine. I don’t think there’s anything great (Post).

Eaton: For Expressing Ideas tools and for the Prediction, it’s a really good place for them to go back and make modifications to their thinking. But, I wouldn’t just use those because… you know what I mean? By themselves unless we’re moving forward… It’s a scaffold at the beginning to help them just try to make sense of what they’re talking about. But, they don’t have enough information at that point (Post).

Table 44
Sufficient Evidence for Teachers’ Sensemaking About the Evidence-based Arguments Tool

Harris: I’m looking at it, so an Evidence-based Argument tool. I think I mentioned before I interchange like conclusion to be first. I think it’s really... I still always kind of struggled with the three questions and trying to understand energy with my own personal learning about energy through the process of this and the fact that there are high-energy bonds but really it’s the fact, I don’t know. That whole energy we talked about before. That’s random in my brain the fact that when you break a bond it’s not giving you energy but we teach it that way, but it’s really the fact that the new product has less energy. So, energy is released (Post). So sometimes the Evidence-based and Explanation, that’s again a challenge for me to get them to still put full effort into putting their ideas out there (Y1).

Wei: And I think that plants was where I ended up modifying the Evidence-based Argument Tool to fit CER because we had just done the yeast lab; that was EOC write-up….
I switched the order of the columns, so it’s been Evidence, Conclusion, Unanswered Questions…. So the first column was, I think what Carbon TIME calls the Conclusion would be the Claim; second column is Evidence, and then I added in there like, Qualitative and Quantitative Data. And then the third column was Connecting Evidence to Atoms or whatever when they provide the statements, that was the third column. And then the fourth column was Unanswered Questions (PL unit).

Nolan: This is the main modification I guess I made though, is I took the Explanation tool, and really put, embedded the evidence in the rules into there. So I took the Evidence-based Arguments tool and embedded it into the Explanations tool (SS Unit). That unanswered questions I’ve gone from hating it to loving it because it really does draw out that turning point of what you can observe in an investigation and what you can’t observe…. I’ve been playing around with the idea of modifying the EBA tool…. I kind of go back and forth and I haven’t made the switch yet because I kind of like them to say what they—like that’s how you build a claim is by looking at the evidence first but it’s just the way we’ve been teaching them all through school is just to do claim first then evidence. So, I just I don’t know which one is better. That’s why I haven’t made the switch yet. I actually think that evidence first, then claim is perhaps better but I don’t know if it fits with the way students think about it…. I think my shift for that having students generate questions about observations about patterns or data is directly related to the evidence based arguments tool. Realizing that they didn’t know how to come up with the unanswered questions and then my realizing that that’s so important for them to come up with those unanswered questions (Y1).

Eaton: Actually, they’re doing so much better because that was one of the things that I did this time. They had to answer their argument together.
They have to come up with “What do you think?” together, by themselves. Then, I put them with their group and I said, “Look for things that are different in your responses because that’s the unanswered question. Look for the things that you think this is going to happen, and you think this is going to happen, while you can’t both be right. Right? Or maybe you want to be both right, but that’s kind of where you need to go with your unanswered question” (SS unit). The Evidence-based Argument needs to be more tied to the data because the data’s more concrete, and the Evidence-based Argument tool for a lot of them was, “what am I supposed to be writing down? It’s a completion grade. Let me just write something down.” So I did not always see a lot of application of what did you really just learn, you just did this experiment…. I think the Evidence-based Argument is where it clicks, and then, the explanations is where I figure out if they can apply it (Y1).

Table 45
Insufficient Evidence for Teachers’ Sensemaking About the Evidence-based Arguments Tool

Ross: Then the evidence again like as I was saying that’s the goal of my class. I want you to be able to evaluate the world around you (Post).

Callahan: And a lot of discussion and discourse, especially when they’re going through their Evidence-based Arguments—say, “Hey, what did you put down for your conclusion? What evidence did you use and why? (AN unit).

Barton: The Evidence-based Arguments; I mean the thing that I like about them is if it were, and this is kind of the problem with this or any other curriculum; it’s assuming that all kids want to learn, or are interested in the topic or will do what they’re asked to do. And I found with Evidence-based Arguments that one key person would do it, and everybody else looked happy.
So I guess I feel like this year, I’ll probably just do more talking about the Evidence-based Arguments, and trying to get student input. I mean I’d like to have them all write their own, but I don’t know if the value equals the amount of time, because if they’re all just going to copy off one person anyway, then it’s not really worth the amount of time. I don’t know; I like it, and if you had all students that were going to take it seriously, and do their own work, then I think it’s really great. On paper, it’s really great (Y1).

Apol: I like the evidence based arguments because like what I’ve said, we’ve done a lot of claim evidence reasoning writing…. I think unanswered questions in 7th grade, I’m not going to consider it to be like my goal because a lot of them aren’t going to do it or they don’t know how to do it, or afraid that they’re getting it wrong. So I like the evidence and conclusion, like I said, because it ties into how we write in Science…. Do you think that having the conclusion first and the evidence second would be better for them since they’re used to making a claim and then evidence? “Here’s what happened, here’s why it happened, here’s the evidence to back it up?” (AN unit). I like the evidence-based one especially because like I said it goes with our writing claim evidence reading. So it kind of tied into something that was similar to what I’d done throughout the years that was linked with that (Post).

Table 46
Sufficient Evidence for Teachers’ Sensemaking About the Explanations Tool

Wei: When we were working with the Explanation Tool, that column that says unanswered questions, part of that column I think is in a way easier if you ask questions all the time and are curious about why things happen. And my students really struggle with that column….
And then there’s also, for every explanation tool, there is a section at the bottom that says, now write out your explanation on the back. And I’ve always had to modify that for my regular students to, instead of one whole paragraph to sort of write out each individual question (SS unit). So instead of using the explanation tools, I was… for cellular respiration, I was using their yeast lab conclusion as their explanation for cellular respiration…. I started to get really concerned about the repetitiveness of the explanation tools, and that’s why we did the poster project instead of the “Explain Again” tools, and that’s also why we modified the scenarios (AN unit). I used the explanation tool sometimes as a summative. This year I want to try using it more as a formative (Y1).

Nolan: This is the main modification I guess I made though, is I took the explanation tool, and really put, embedded the evidence in the rules into there. So I took the Evidence-based Arguments tool and embedded it into the explanations tool (SS unit). So there was one additional explanations tool that I put in there where they have to put all three pieces together. So it’s usually photosynthesis, biosynthesis, respiration. I wanted one to put the whole story together…. Which is the beauty of them. They are all the same (PL unit). I would use the Explanations tools actually [for grading]. That is one that I looked at because I feel like that’s when you really start to see what they got (Post). I have been working with a slightly modified version of it this year and we worked with it last year too. Putting in evidence from the investigation as well (Y1).

Eaton: One of the biggest things that they’re struggling with is, they think if they answered the movement question, they may have answered the carbon question (SS unit). I think one of the things that was pretty difficult during the “Animals” unit was trying to get them to understand the difference between digestion, biosynthesis, and cellular respiration.
I really struggled with just giving them the time to process it, and they kind of had to fail at it a little bit in order to make sense of it. When we were grading their explanation tools, a lot of them were like, “Oh, yeah. Okay.” Then I would let them go back and fix it, and then help to reinforce their ideas, the difference between the three processes (AN unit). That was the explanation tools I correct. It was the explanation tool and then they really saw the value and the pictures and the way to explain it. It helped them to really understand that this was a tool than more than it was an assignment. That was the conversation we had a lot…. Well, they had to fill out that explanation tool. Just the fact, the graphic organizer. They didn’t have to do the explanation on the back where they put it all together. But they have to have some understanding of the process (PL unit).

Table 47
Insufficient Evidence for Teachers’ Sensemaking About the Explanations Tool

Ross: Again, a lot of students only need this much space. She really went further, and I think if the goal is to get the students talking with accountable talk or science talk, like she mentions distinct pieces of evidence, and she mentions distinct ideas that we have, over and over, went back to. So I certainly think that she grew and has done a really good job explaining what’s occurring and can back it up. You know, she has claims, she has evidence, and then she’s giving great reasoning (SS unit).

Harris: I guess that my favorites are the expressing and then the evidence are my favorites but I mean obviously the last one the explanation is pretty important…. Yeah like now take all of this and tell me a story. A real story with a start, a middle, and an end that like covers these things (Y1).
So once we had talked about where the atoms are moving from and to then with the next part we had the carbon question, “how are atoms rearranged?” So what molecules are carbon atoms in before? Yeah. So I guess it’s just the process of learning how this is filled out that was just for me I’m like, “Wait, did I cover what they wanted or not?” I think I know what I’m talking about but finding a way to… I don’t want to give them the answers but we’re trying to draw it for them but there was almost too much I guess (SS unit). Sometimes I feel like the explanations and the arguments, they’re so close to each other, that’s where they can feel like it’s pretty redundant. And so that can be a challenge I guess (Y1).

Callahan: So, in some ways it’s the right answer, which the Explanation tools—not that there really is a right answer to science questions—but there’s a more focused, concrete information, and they thrive on knowing that this is what I’m supposed to say—this is what I was supposed to see—I was right on track—or I didn’t explain this part. But I think that it really puts things together—ties them together nicely for them (AN unit).

Barton: I think for me it was just, for the once again for the students and the explanations tool of being able to see what’s happening inside the plant cell…. I thought that was neat (PL unit). The second most important one I think is the explanations. Being able to explain what’s happening right now on the evidence that they’ve gained (Post).

Apol: I think it worked really well to do the kits with the molecules; to do a kit and then the explanations tool where they talked with each other then go over it together. So maybe go over a concept together, then they do a kit thing while you walk around and talk to them. Then they do an explanations tool with their partner; that seemed to be a good like for them, and activity level where they could – you know what I mean; stay focused….
Because then I just saved them, and then what we did was I kind of used them like an end for the next day. When we were going to introduce let’s say when we did the mealworms, then I took the explanations to all that they did as pairs, and would say “Okay, well this is what you guys said about whatever was before that.” And then we’d talk about it like as a group, and then did the activity (AN unit). We did like the explanations to it. I included like two or three of the worksheets from Carbon TIME towards the graph. Whatever we graphed I can’t remember. It was a while ago…. My lab grades are usually electrograph and a data table. So that was their data from the lab and their graph, plus their explanations and prediction tools per that last grade…. But if you were looking at like the lab for the big Carbon TIME and have them do the explanations tool. So you’re claim, evidence, reasoning. So they could show; here’s what we got, here’s the evidence to prove it. I think you could use something like that [for demonstrating student growth]. I don’t see why not (Post).

Table 48
Sufficient Evidence for HS Teachers’ Sensemaking About the Pre- and Post-Tests

Ross: And I put it all in the grade book for what I wanted to do for the pre- and post-test, but I haven’t had a chance to go back. So for the pre I’m just putting just the multiple choice answer that they scored, for the post I’m putting in the multiple choice and then I’m also going in and grading every single short answer question on a 1, 2, 3 or 4. And so I’m going to give 36 points basically, because every part of the short answer can be worth up to four points. And also going over and showing them the rubric and showing the level 4 thing, I think it really helps (AN unit). They are so used to getting the answers immediately. So you know I’m hoping to use that assessment as a thing to generate interest in questions (Y1).
And that was my big frustration. I looked at the post quizzes. I was like, “Oh, it makes so much sense. High energy bonds, low energy bonds. Like when we burn the paper in class and that’s organic just like the ethanol and the water’s not.” And then they take the post quiz and I was like, “Oh my goodness.”… Well, I did look at some of their responses and there were just things like, ‘carbon is an atom.’ And kids getting that wrong. And I’m like, “What?” Or ‘atoms last forever.’ No, false. And I’m like, “How many times did we say that?” you know? And it was just, that was very disheartening to see that. And again, part of it I feel like is, how do I present the material? … But with these it was like some of these were really simple that they would still get wrong…. I like that, that it opens our eyes to the things that we all think that they know and they don’t (SS unit). But my students—I think I even saw their post-test. They really still had a difficult time with, “Alright, so we have to digest everything first. And then that material can be used for cellular respiration. Or the material can be used for biosynthesis”… And, but then what I’ve found is a lot of those students, actually on their post test—even though I know they understand the material and what was happening—because they haven’t really put those ideas into words very well and they haven’t really thoughtfully reflected on them, that they made pretty flagrant, I would call, mistakes or missed terms when they were using their explanations on the post tests (AN unit). I’m interested to see their unit post tests because I have a feeling that some of them really seemed to use this Plant unit as a way to put things together (PL unit). So I think that they’re still having a hard time because we haven’t, we still have not yet really defined energy. We still don’t really know what energy is (AN unit). 
But I did notice that on the posttest, that there were still I would say the majority of students would now say that plants don’t get their mass from the soil. But there are definitely those students who are like really holding onto that…. Even though like in some of their responses, they would actually circle “Plants get their mass from the air and from water.” But when they wrote out their explanation, they said “Plants get their mass from soil and nutrients” (PL unit). Thinking about pre-assessments and post-assessments. That’s something that I want to pay more attention to (Post). What is interesting to me is that on the System and Scale test, which I haven’t thought about in a while now, they would be inconsistent within the same test. So on one question, they would show oh they’ve got it, they’re keeping those things separate. And in another question they wouldn’t (SS unit). And I’m frustrated by that, and a part of me was a little bit crushed when I skimmed over the results from last Friday’s test. I was like, “Oh, they’re still not getting it.” But on the optimistic side, they get another chance in plants…. So it’s almost like standards-based grading. You have to have multiple opportunities to practice something until you get it (AN unit).

Table 49
Sufficient Evidence for MS Teachers’ Sensemaking About the Pre- and Post-Tests
Teachers: Apol, Eaton
Representative Excerpts from Interview Transcriptions:
I think my biggest thing, like I said, looking at the test scores. It breaks my heart because when you listen to them talk and when you do it as a class, they have good conversations amongst groups, they seem – they do know the answers. Then when they fill out the test, I’m shocked at how poorly some of them did. I need to look at the best, my smartest kids’ scores, too. Maybe that would make me feel better (laughter) doing that, maybe I won’t. I think that they’re learning it.
Somebody said it in our Carbon TIME workshop in February that these tests are nothing like any other tests 7th graders take. So I don’t think that it’s a good – like I said, I don’t think it reflects what they know. I think they’re different tests than they’ve ever taken before, not just it being online, but the way they’re worded, and the way they’re – “Here’s the thing. You tell me what’s wrong with it, or is it right or wrong.” They don’t do that. So maybe somebody needs to – I don’t know. I don’t think it’s reflective of what they know (PL unit). As far as the tests themselves, I don’t think so. I just still think they’re up there, different. We said they’re not the type of test these kids are used to taking (Y1). They didn’t understand that the question was not about the bonds, the question was about the atoms; so, just their inferences of the question. So, helping them understand it more... we’re going to spend a little more time going over the pretest and talking about what are they asking you in this question, so that hopefully they can be able to answer it with the evidence…. I think the posttest would have even gone better if I had just a little bit more time talking to them using the same vocabulary. I think I tried to remember the questions and be like, “Okay, remember when they ask this question,” but I need to do more of that; just to help them be more successful since I’m not writing the test (SS unit). The thing that comes to mind, and I was just going to ask you about this, the thing that comes to my mind is the wording on the test. A lot of times, they’re using words on the test that have not come up during the lesson. So they’re referring to things, and I can’t think of one off the top of my head, but just making sure that the kids are able to explain what they know and understand what’s being asked of them (AN unit).
If we do both of those at the same time because we’re assessing these kids on questions I did not write, and as teacher, that’s difficult because yes they took the pre-test, yes we looked at their data on their pre-test, but I wasn’t expecting them to do well on the data…. I think that that was one way that I did. So we spent time looking over the test, talking about what was being asked of them, and getting them familiar with the wording and the vocabulary more than anything else (Post).

Table 50
Insufficient Evidence for MS Teachers’ Sensemaking about the Pre- and Post-Tests
Teacher: Barton
Representative Excerpts from Interview Transcriptions:
The kids actually really like it for a computer test. A couple of kids told me that they liked them. So I liked that the kids liked them because I hate giving the kids tests…. But at least they are somewhat entertaining tests; they’re not like super-boring dry tests. So I guess I like that part…. Instead of last year, I just kind of looked at them and thought about them. But this year I’m using the posttest ones as a class grade. And the kids are allowed to use their science notebooks. The science notebooks have – I made them write the questions; they have the statements in them; they have some of the worksheets that we used. But they also have other things in their notebook, like when we did the molecule kits, I made them draw the molecules; draw exactly what they look like, and the product, and the reactants and whatnot. So I mean I think that that helps too, to lend itself to being more serious, because now I’m having it be a class grade (Y1).

APPENDIX F
OCCASIONS OF SENSEMAKING

Table 51
Mr.
Ross’s Occasion of Sensemaking About the Instructional Model
Interactions Among Goals & Resources: Goals: “that’s part of our job as educators is, making sure that they know, like find that connection” (AN unit) + Social Communities: Noted discussion of IM at PD (field notes)
Critical Noticing: Students “still have a hard time on the back slope” (Y1 follow-up)
Outcomes of Sensemaking: Decision: “show them where we’re at currently on this model” (AN unit); Reflection: “it helps me organize what I’m thinking and doing” (AN unit); Reflection: “I’m becoming more comfortable with the idea of guided inquiry and it still being inquiry” (Y1 follow-up)

Table 52
Mr. Harris’s Occasion of Sensemaking About the Evidence-Based Arguments Tool
Interactions Among Goals & Resources: Goal: “So, you’re trying to get them to think deep” (Post) + Social Communities: “That whole energy we [with coach] talked about before” (Post)
Critical Noticing: T and C: “I still always kind of struggled with the Three Questions and trying to understand energy with my own personal learning about energy” (Post)
Outcomes of Sensemaking: Decision: “I interchange like, conclusion to be first” (Post); Reflection: “It’s like they have to learn the unanswered questions that they’re supposed to be asking” (Post); Reflection: “I want to be more familiar with what the Unanswered Questions are and be more familiar with what the correct answers are” (Post)

Table 53
Ms.
Callahan’s Occasion of Sensemaking About the Instructional Model
Interactions Among Goals & Resources: Goals: “I don’t want them to feel like they’re going back down to a lower level again” (Y1 follow-up) + Practical Knowledge: “part of me just wants to keep moving up instead of going back down” (Y1 follow-up)
Critical Noticing: “I still have issues with it going back down the pyramid” (Y1 follow-up)
Outcomes of Sensemaking: Reflection: “Using the Instructional Model last year… it was hard for them and for me” (Y1 follow-up); Reflection: “I’m proud of myself in some ways because I am using the Instructional Model more” (Y1 follow-up)

Table 54
Ms. Callahan’s Occasion of Sensemaking About the Expressing Ideas Tool
Interactions Among Goals & Resources: Goal: “It really helps us to gather so much information… to see where they’re coming from, and how did their ideas change over time” (Post) + Practical Knowledge: Belief that “they really, really wanted the right answer” (SS unit)
Critical Noticing: T, S, and C: “I had a hard time with students for me with the expressing ideas, initially” (SS unit)
Outcomes of Sensemaking: Decision: To modify enactment to look back at the tool at the end of the unit (field notes); Reflection: “All that expressing ideas and sort of getting their preconceptions… has been a great useful tool” (Post)

Table 55
Ms.
Callahan’s Occasion of Sensemaking About the Pre- and Post-Tests
Interactions Among Goals & Resources: Goal: “I want them to make really sure that when we are doing these activities that they’re storing information because they know there’s going to be some sort of assessment” (AN unit) + Practical Knowledge: “I think they’ve been taught a long time ago that when they take their ACTs and SATs that usually the extremes—none and all—are not the right answer” (AN unit)
Critical Noticing: T, S, and C: “I was surprised because I’ve had probably about 10 students who did not do well at all” (AN unit)
Outcomes of Sensemaking: Decision: “I did actually use their posttest scores as their—an actual score” (Post); Reflection: “I wanted them to be acknowledged moving forward for growing in their academics. That’s really important—the assessment practices” (Post); Reflection: “I want to see them growing, be able to answer more coherently, and understanding not only the multiple choice answers but more importantly their explanations being stronger, using better vocabulary” (Y1 follow-up)

Table 56
Ms. Apol’s Occasion of Sensemaking About the Predictions Tool
Interactions Among Goals & Resources: Goal: “So making predictions… some kids don’t want to do that, they don’t want to be wrong; so I think it’s going to force them to change” (Post) + Practical Knowledge: “Which in the past has not been how science was taught” (Post)
Critical Noticing: T and C: “I think these labs are a little bit different; these labs always start with prediction, which I have not always done” (Post)
Outcomes of Sensemaking: Reflection: “I think it encourages them to share their ideas and compare them to others” (Post); Reflection: “That Prediction tool becomes important once they do the evidence-based argument and their explaining” (Y1 follow-up)

Table 57
Ms.
Apol’s Occasion of Sensemaking About the Pre- and Post-Tests
Interactions Among Goals & Resources: Practical Knowledge: Belief that “when you listen to them talk and when you do it as a class, they have good conversations amongst groups; they seem, they do know the answers” (PL unit) + Social Communities: “Somebody said it in our Carbon TIME workshop in February that these tests are nothing like any other tests 7th graders take” (PL unit)
Critical Noticing: S: “I’m shocked at how poorly some of them did” (PL unit)
Outcomes of Sensemaking: Reflection: “I think they’re different tests than they’ve ever taken before, not just it being online, but the way they’re worded… I don’t think it’s reflective of what they know” (PL unit); Reflection: “I just still think they’re up there, different. We said they’re not the type of test these kids are used to taking” (Y1 follow-up)

Table 58
Ms. Wei’s Occasion of Sensemaking About the Instructional Model
Interactions Among Goals & Resources: Goal: “I definitely want to use the Instructional Model to show kids their progression in the unit” (Y1 follow-up) + Social Communities: “I was actually talking with [Ms. Nolan] yesterday about how we can incorporate that Instructional Model into other units that we do” (Y1 follow-up)
Critical Noticing: T and C: “The triangle was confusing to me… I need to make it linear in some way” (Y1 follow-up)
Outcomes of Sensemaking: Reflection: “I was feeling a little confused just visually by it because it has things like modeling, patterns, and something on the triangle” (Y1 follow-up)

Table 59
Ms.
Wei’s Occasion of Sensemaking About the Expressing Ideas Tool
Interactions Among Goals & Resources: Practical Knowledge: Belief that “for some reason they really wanted to get a correct answer on the Expressing Ideas tool instead of just expressing ideas” (Post)
Critical Noticing: S and C: “It didn’t really help the struggling learners” (Post)
Outcomes of Sensemaking: Decision: “So I ended up having to do some sentence frames for them to help them with that” (SS unit); Reflection: “I think that there was a lot of similarity between the Expressing Ideas tool and the Explanation tool” (Post)

Table 60
Ms. Wei’s Occasion of Sensemaking About the Evidence-based Arguments Tool
Interactions Among Goals & Resources: Goal: “I really need to scaffold with students the way that they develop an argument, like an evidence-based argument” (Post) + Social Communities: “And then we’ve talked a little bit about how to incorporate the Evidence-based Argument tool into the Explanations tool” (Post)
Critical Noticing: C: The columns of the EBA tool are similar to the CER framework
Outcomes of Sensemaking: Decision: “I ended up modifying the Evidence-based Argument tool to fit CER… I switched the order of the columns” (PL unit); Reflection: “So the Evidence-based Arguments tool I like… but I’ve reordered the columns so it’s not evidence first, it’s how can I answer this question first, like one of the Three Questions and then what’s my evidence that supports my answer to that” (Y1 follow-up)

Table 61
Ms.
Wei’s Occasion of Sensemaking About the Explanations Tool
Interactions Among Goals & Resources: Goal: “When I have them do that writing, I really want it to be like, legitimately, like, meaningful, like, they—I’m giving them feedback on this” (AN unit) + Practical Knowledge: Belief that “I think that the sheer repetition itself is not helping them as much as targeted one-on-one support” (PL unit)
Critical Noticing: S and C: “My students really struggle with that column” (SS unit)
Outcomes of Sensemaking: Decision: “I’ve always had to modify that for my regular students to, instead of one whole paragraph to sort of write out each individual question” (SS unit); Reflection: “I started to get really concerned about the repetitiveness of the Explanation tools, and that’s why we did the poster project instead” (AN unit); Reflection: “When it comes down to their actual explanation, I don’t know how much it helps them” (PL unit)

Table 62
Ms. Wei’s Occasion of Sensemaking About the Pre- and Post-Tests
Interactions Among Goals & Resources: Practical Knowledge: “I think that they’re still having a hard time because… we still don’t really know what energy is” (AN unit) + Social Communities: “Kind of like what [colleague] was saying with her formative assessments… So I think that that would be how I would want to use these” (Post)
Critical Noticing: S and C: “I would say the majority of students would now say that plants don’t get their mass from the soil” (PL unit)
Outcomes of Sensemaking: Decision: “We have done the post-assessment and graded it, but I ended up not using a bunch of the Carbon TIME questions” (PL unit); Reflection: “Thinking about pre-assessments and post-assessments, that’s something that I want to pay more attention to” (Post)

Table 63
Ms.
Nolan’s Occasion of Sensemaking About the Instructional Model
Interactions Among Goals & Resources: Goal: “I put up there just to show them where they’re at, which I think is really nice… it’s going to be comforting to them” (SS unit) + Social Communities: “So, it [knowing where they are on the Instructional Model] takes off some of that anxiety they might have” (SS unit)
Critical Noticing: T and C: Features of the IM, including where foundational knowledge appears on the IM
Outcomes of Sensemaking: Decision: “At the beginning of each day, I always have a slide… and the next slide shows the Instructional Model, with the You Are Here arrow” (SS unit); Reflection: “This year I am being way more thoughtful about thinking like, what skill were they looking at before… a lot of thinking about the… Instructional Model” (Y1 follow-up)

Table 64
Ms. Nolan’s Occasion of Sensemaking About the Expressing Ideas Tool
Interactions Among Goals & Resources: Social Communities: “I think it was cool for them to realize there’s a lot of holes in their understanding of something as basic as burning… it’s quite complex for their brains” (SS unit); “I think it was empowering to the kids to see how their understanding had changed” (SS unit)
Critical Noticing: S and C: Students’ responses to the scenarios in the Expressing Ideas tools
Outcomes of Sensemaking: Decision: “Then we came back to it at the end… then I said, Okay, here are your Top Ten Ideas, or your Top Ten questions that you had” (SS unit); Reflection: “I don’t think that the scenarios were interesting enough” (Post); Reflection: “I think it’s attempting to be sort of the puzzling phenomenon that you then hang the rest of your ideas on” (Post)

Table 65
Ms.
Nolan’s Occasion of Sensemaking About the Pre- and Post-Tests
Interactions Among Goals & Resources: Goal: “I want the big picture, like wow, most of my kids are missing this objective so when I go through and teach this unit, I want to make sure that I nail this objective” (PL unit) + Social Communities: “The kids, when they were walking out, were like, that was a bunch of trick questions; I’m like, I am not trying to trick you” (PL unit)
Critical Noticing: S and C: “They would be inconsistent within the same test” (SS unit)
Outcomes of Sensemaking: Reflection: “A part of me was a little bit crushed when I skimmed over the results from last Friday’s test” (AN unit); Reflection: “I realized that in explaining their reasoning, I didn’t always know where they were coming from. So, like, before using that assessment data to like, figure out where to go, I really need to know what they were thinking first” (Y1 follow-up)

Table 66
Ms. Eaton’s Occasion of Sensemaking About the Evidence-based Arguments Tool
Interactions Among Goals & Resources: Goal: “I think it’s really good for them to know that we’re coming up with a collective understanding, not just their own” (Y1 follow-up)
Critical Noticing: S and C: “They can tell you what’s atomic level, they can tell you what’s macro, they can tell you… not so much at large scale” (SS unit)
Outcomes of Sensemaking: Decision: “I put them with their group… they did such a great job of coming up with Unanswered Questions this time” (SS unit); Reflection: “I think the evidence-based argument is where it clicks” (Post); Reflection: “The evidence-based argument needs to be more tied to the data because the data’s more concrete” (Post)

Table 67
Ms.
Eaton’s Occasion of Sensemaking About the Explanations Tool
Interactions Among Goals & Resources: Goal: “When I think about all of it, I think about, how am I going to get them to the Explanation tool?” (Y1 follow-up) + Practical Knowledge: “The Explanation tool is where I know if they’ve internalized it, if they get it” (Y1 follow-up)
Critical Noticing: S and C: “One of the biggest things that they’re struggling with is, they think if they answered the movement question, they may have answered the carbon question” (SS unit)
Outcomes of Sensemaking: Decision: “I would let them go back and fix it and then help reinforce their ideas, the difference between the three processes” (AN unit); Decision: “They didn’t have to do the explanation on the back where they put it all together… They use those tools to create a poster” (PL unit)

Table 68
Ms. Eaton’s Occasion of Sensemaking About the Pre- and Post-Tests
Interactions Among Goals & Resources: Goal: “Just making sure that the kids are able to explain what they know and understand what’s being asked of them” (AN unit) + Practical Knowledge: “I think the posttest would have gone better if I had just a little bit more time talking to them using the same vocabulary” (SS unit)
Critical Noticing: S and C: “See, that right there is a perfect example of the vocabulary difference… They didn’t know what biomass was” (AN unit)
Outcomes of Sensemaking: Decision: “So we spent time looking over the test, talking about what was being asked of them, and getting them familiar with the wording and the vocabulary” (Post); Reflection: “I think a lot of it had to do with the wording of the questions, not their understanding of the concept” (SS unit)

APPENDIX G
EVIDENCE OF SENSEMAKING OVER TIME

Table 69
Mr.
Ross’s Goal of Fitting It All Together
Interviews: SS unit (Oct 2015); AN unit (Dec 2015); PL unit (May 2016); Post-teaching (June 2016); Y1 follow-up (Oct 2016)
Excerpt from Interview Transcription:
Most of my anxiety actually comes from district scheduling and district stuff that they tell me I have to do…. The material [Carbon TIME] is good. I’ve been recommending it to other teachers that I really like and respect and that have given me good things in the past or that I’ve done good things with. So nothing there that I’m worried about. Just making sure that I cross all my district t’s and dot all district i’s is the only thing that I worry about…. Mostly related to the common assessment. We’ve been given just mountains of, you know, IB [International Baccalaureate] propaganda or whatever. Propaganda’s not the right word—but the instructional manual, the Primary Years something manual, all these like, 106-page documents that kind of explain—and, it’s research-based and it’s, like it is good stuff, it’s just difficult to kind of wade through all of that at the same time as I’m doing everything else that I normally have to do. All we have ever been working on is scientific argumentation, basically. That’s like our district goal, and it’s a decent goal, right? I mean, I want my kids to be able to talk about complex issues and be persuasive about them, and they’ll also understand the evidence and the data. So what I like about the Plants thing is it really blends itself well to the framework that we’ve been using—the claim, evidence, and reasoning. And when they [administrators] gave me a full bio [schedule], they were like, “you will be the experiment guy. You’ll be setting up everyone’s experiments. You’ll be tearing down people’s labs. You’ll be developing all the curriculum for IB. You are the bio guy now. You understand that? We gave you what you wanted. Now you are bio.” And I was like, all right…. And I’m the bio guy. I just write the curriculum.
I’m thinking further about matter and energy in it. I’m thinking that Systems & Scale is the way that I’m going to hit it. I feel like it ties in very well with some of the IB stuff. All I have to do is take IB jargon and apply it to this. But this is the model that I’m planning on using, and a lot of the teachers are just going to follow my lead. Well, right now because we are becoming an IB school, and I have to have these, like, IB lessons. Right now I’m thinking a lot about how to shoehorn Carbon—and it’s not actually that difficult—like, Carbon TIME does fit really well with this. With the IB, inquiry is like a huge focus, and so I’m just trying to figure out how to use the Carbon TIME stuff to cover what I have to teach and how I have to assess it. So the hard part is that IB has a very, well, they keep saying rigorous, but they have a very set way that they assess…. So I’m just trying to figure out how to fit all of those things in together. You know, like they just recently unburdened our curriculum, but they haven’t unburdened the test, though…. So right now I keep trying to think how I can double dip, if you will. Like, how I can make sure that I’m doing—I like the stuff from Carbon TIME and continue to use it, but then also make it count for this other area…. It’s just all again trying to fit everything.

Table 70
Ms. Barton’s Persistent Belief Over Time About the Value of Students’ Written Work
Interviews: SS unit (Nov 2015); AN unit (May 2016); PL unit (May 2016); Post-teaching (June 2016); Y1 follow-up (Nov 2016)
Excerpt from Interview Transcription:
Or, I’ll do like where instead of kids doing something individually, I’d just do it as partners because they just copy anyways. I mean, really, what’s the point of that? You know? I’d rather have them just sit and talk with their friend about it and then write it together. Because they’re just going to copy. I mean, if you had them just sit by themselves, they’ll just copy to get it done.
Well, a lot of them [Process Tools] we did as groups because there was complex thinking, and they’re just going to copy off each other anyway. I mean, if I would’ve had them do it individually, maybe one person would’ve done it, and five people would’ve copied them. At least when they do it in groups, they’re at least talking about it. I mean, you can battle the copy battle, but that’s, to be quite honest, a pissing contest that I don’t need to engage in constantly… So I’m just pretty much like, I didn’t tell them that I know they’re going to copy, but I do know that. They’re just going to copy off each other anyways. I mean, there’s nothing that can be done about it. I mean, that would be like me saying I’m going to keep them from swearing in the hallway. That would take up all my energy. For me to say they’re not going to copy, that would be all my energy. So there’s no point in giving them all those things because, basically, there’s one person out of about eight people that actually do it, and then the other people just copy. Of course, when you see that, you can act on it. But I just don’t—they’re so used to automatically just look in there and write. I don’t know how much they get out of just writing things. I really don’t. I mean, I want to say they do, but I don’t know. I’d much rather have a little thing and then talk with them because then you can think more about, what does the person really know. The problem of teaching is really knowing what kids think because when you give them a written thing, they mainly just copy off of each other, and when they’re talking I think the best way to know what they know is when we did those post-it note activities because then they know their name is not going to be up there, and they can just—I think that it’s making them think about more complicated, you know, like, ideas. And I found with Evidence-based Arguments [Process Tool] that one key person would do it, and everybody else looked happy. 
So I guess I feel like this year I’ll probably just do more talking about the Evidence-based Arguments and trying to get student input. I mean, I’d like to have them all write their own, but I don’t know if the value equals the amount of time, because if they’re all just going to copy off one person anyway, then it’s really not worth the amount of time. I don’t know—I like it, and if you had all students that were going to take it seriously and do their own work, then I think it’s really great. On paper, it’s great. Then explanations of phenomena, I think that’s wonderful. I like the model, the coaching. I really like a lot about it. I mean, in principle, I think it’s all excellent. But the—what would you call that? The user part or something? The using—it’s in the using, the student using that, you know what I mean?

Table 71
Ms. Barton’s Belief about the Value of Talking
Interviews: SS unit; AN unit; PL unit; Post-teaching
Excerpt from Interview Transcription:
I think that the high point is that, like the focus on talking, and that the kids were actually—it seemed like the kids were caring about their learning, and they were taking responsibility for, like, wanting to know. And the fact that… we were doing things that seemed relevant to them, or at least exciting to them. And I’m adding that [calorimetry lab] to the worksheet because I, again, I just feel like you cannot just give them worksheets. Like, I know they’re called “thinking tools,” but the kid looks at it and sees “worksheet,” you know? And so like I’m also doing that calorimetry lab at the same time, so that they, like, also burn stuff, talk about their worksheet, talk with each other. I think it worked really well to do the kits with the molecules—to do a kit and then the Explanations Tool where they talked with each other, then go over it together. So maybe go over a concept together, then do a kit thing while you walk around and talk to them….
I don’t know how much they get out of just writing things…. I’d much rather have a little thing and then talk with them, because then you can think more about what does the person really know. With that, I had them stand up and explain it…. Then people can ask them questions and stuff…. So I would use the talking and the working in groups, and I would use the experiments. I definitely would not use the same amount of worksheets…. If they all have a role, if they’re all just sitting around and talking about one, then I think it forces them to participate more…. I don’t want to say disregard written work, but what is the point of written work? The point is that they’re somehow talking with each other and processing an idea. I mean, I don’t think about the point of written work as being, “this is exactly how you’re going to show my learning.” Because for one thing, a lot of kids like Nate [pseudonym], he won’t write anything because he has terrible handwriting…. They don’t want to write things. We have a couple kids in class that have on their IEPs that they don’t have to write anything. So maybe that’s why I don’t think about writing as being important. I don’t know. It’s not that it’s not important. It’s just that, “do I think that’s all they know?” No…. Well, the only thing I would say is, I made sure that kids like that [who needed support or scaffolding] were in a group where there is somebody that would be willing to write as much as possible. Then talking with the students. I mean, I don’t know what else you could really do. I mean, I struggled a lot. What are you going to do? I don’t know. I think next year I’m definitely going to use some of the parts of Carbon TIME, but I think that next year in the very beginning of the school, like I do a lot of procedural, you know, just like how to exist.
But I think also I’m going to add into that like, how to do science talk because that is definitely a skill, and I don’t think like I can just expect that they’re going to be able to discuss things. I might find somebody else over the summer that, you know, they can model, and then I might model with someone. I was looking at—there’s a technique called Fishbowling where—and so I might try to like introduce that and then do just a couple of samples or little non-threatening activities with that. And do the Carbon TIME after that because I do think that kids struggle, and kids want me to be the one that tells them the answer, you know. And I don’t think they have their—I don’t know if you would call it academic voice or what but they don’t have confidence in their academic voice.

Table 71 (cont’d)
Y1 follow-up:
For one thing, I feel that it’s [Expressing Ideas and Predictions Process Tools] a really safe way for kids to get their ideas out. And kids have a lot of hidden ideas that they’re holding onto from past experiences, especially with science…. And I think the Expressing ideas and Predicting do a really good job of helping the teacher get that out in a way that I don’t think I was doing before. I think you’re not just putting some information into an empty ball. You’re building information with the information they’re already carrying around. And especially if you have to get rid of something, or try to help them see that something that they are carrying around doesn’t make sense, using the Expressing Ideas and Predicting is really helpful with that…. I mean, because you have to think about, what is the purpose of writing? What is the purpose of it? If you’re going to write, if a student’s going to write, why are they writing? And I don’t always know the answer to that. Are they writing so that they can process out their own ideas? Are they writing just to show me what they know? Are they writing to, kind of like, argumentative writing?
Or, they’re presenting a point of view. Those are all different kinds of writing. But I guess I feel like if they’re going to be writing, I want them to somehow be writing to make meaning. And I guess I just have more thinking to do in that area…. I’ve tried to streamline it [the Instructional Model]. Like, using more talk modes. I’m using whiteboards more and doing whiteboards. I have enough whiteboards for everyone, so having students draw things, and then show their partner, then add to their drawings, then show the class or show me. So just trying to get more things out that way, where I’m hoping that it’s doing the same thing but more quickly but also more just talking…. I guess I just feel like I would rather have them talk more and think more, and there’s the finite amount that they’re going to do, and if I have to choose between talking and thinking or doing a worksheet, I am going to choose talking and thinking more. I mean, because by the time they get to doing a worksheet, I want it to be really good. I want it to be really saying something. I don’t want them to just be like, “Oh, I’m going to get this done because I just have one more thing to do.” I just don’t want that. Because what is it really saying? Is it saying they have the ability to copy? Is it saying that they really processed something? I don’t know what it’s saying. I mean, maybe it’s different for every kid. I don’t know…. I don’t think that writing on a worksheet necessarily lends itself to talking. Or at least I don’t know how it does. I don’t know.

Table 72. Ms. Nolan’s Goal of Putting Herself in Her Students’ Shoes

Interview: SS unit Oct 2015; AN unit Dec 2015; PL unit Jan 2016; Post June 2016

Excerpts from Interview Transcriptions:

I don’t know if they’re doing that so much yet, but at least it’s – I feel like I’m being as transparent as I can with the kids, to show them like this is where we’re at.
The problem is every instructional model, I think, (chuckle) has like, they’re all very similar, and I think this is okay too, but they all have variations on the theme…. And even the climate change thing, I feel like that’s still not quite biology. It is; there’s a biology connection for sure. There’s sort of this grand unveiling I felt like, on the day of the organic/inorganic, and it was like the last slide of that lecture. Where it’s like “Look, animals are made of organic molecules.” It’s like the apples and the I don’t know. But I kind of felt like, it was almost like I felt like I’d been keeping it a secret from them for some reason this whole way. And like “Look, I tricked you into learning biology and you didn’t even know you were doing it.” And I don’t know if I liked that; it felt in-genuine somehow. Well the way I did it actually was I did… I tried to do a true jigsaw. And so I handed out a third of the class all together, they all got digestion, and they all worked on that together. And then a third of the class all got cellular respiration, and a third got biosynthesis – not in that order necessarily. And then within those I numbered them off, and so then they had to go meet with two other people. So they got that support to help them fill it out by themselves – or the first ones – and then they had to be the representative. And I think seeing them dig into one process, they appreciated that. Where they didn’t have to think about everything at once. They could just think about one of those things. And it was really quick for me to see where their ideas were kind of messy…. And so I feel like there’s almost that piece missing where we really call that out. I feel like we’re trying to be all sneaky about it – or that’s how I felt in the end. It’s like I was trying to sneak it in to get them to realize it on their own, but it’s such a big leap away from their preconception that they didn’t see it…. 
I think I want to get better at this, but always seeing that step back again to: what are the objectives? I’m trying to get better at putting myself in the shoes of my students, and what’s their experience like? So it was clunky [order of topics]. The kids kind of recognized that it was clunky. The number one most favorite thing that they liked. There was one day where I could just tell that they were getting frustrated and getting confused and it was like okay. I am just going to give it to you. Here is the lecture. Here is my 15-minute lecture…. Oh they totally can and I think it was successful because they had already seen all the pieces they just had to see them together. They like breathe like a sigh of relief like “Oh why didn’t you just tell us that before. Like that’s not hard. Like oh that is so clear now. I get it.” I am like oh. Like yeah it is not that hard. To be a distinguished educator at the top level there, there’s got to be a lot of student voice in it. And I feel like Carbon TIME does a good job of trying to get the student voice into stuff…. We have, so in a nutshell though I guess – and I don’t know if this was our conversation before this or no – so I feel like I have graduated from Carbon TIME kindergarten and I’m ready now to layer in some new elements like the discourse and the assessment. So now that I feel like I’ve gotten my feet wet with what the tools are and where we’re trying to go with this, now I want to think more about the student experience in this process.

Table 72 (cont’d)

Y1 Oct 2016: I think about where the students are coming from. So, like what did they just finish and where are they going next. And, uh, and like this year, am I allowed to use my ideas from this year because that was more fresh in my head than what it was last year? (Mackenzie: Yeah. For sure.) Okay. This year I am being way more thoughtful about thinking like what skill were they looking at before.
If they are trying to explain something so there is a lot of lurching back and forth between the scales. So, I’m thinking where were they before? What evidence did they have and so where are we going to go next? How am I going to help them make that connection to make that leap? Yeah. A lot of thinking about the, what’s that, Instructional Model…. Like I think that I can still hold true to the Instructional Model and still use all the tools that the expressing ideas and the predictions, and EBA, and explanations but I feel like I can put my own spin on it…. I don’t feel restricted in making modifications to the tools. Hopefully the modifications I make don’t lessen the integrity of the tool. I see the value in them and I don’t want to decrease that and so I don’t think the modifications I make are huge.

Table 73. Ms. Wei’s Goal of Student Engagement and Personal Connection

Interview: SS unit Oct 2015; AN unit Dec 2015; PL unit Dec 2015; Post June 2016; Y1 Oct 2016

Excerpts from Interview Transcriptions:

And although I think that that [Soda Water fizzing] is valuable, and I’m still using it in that manner, I now sort of also feel like it is actually really valuable as an example of a chemical change. Now I would like something a little bit flashier, if possible, because (Mackenzie: It is from the woman who used the Whoosh Bottle.) Right. It’s just really hard to sell the kids. When you tell them what we’re doing for their very, very first lab experience in Biology, they say we’re watching soda go flat? There is just this let-down. I started to get really concerned about the repetitiveness of the Explanation Tools, and that’s why we did the poster project instead of the “Explain Again” tools, and that’s also why we modified the scenarios. And I still have that same concern of plants, too. I was also concerned because… well, you’ve heard me say this before. There’s not a real big wow factor in like… in these processes that much.
And so that was part of the reason why we did the dissection because there’s this like, “Oh, my gosh. This is amazing. Look how long the digestive tract is.” So that’s still a concern…. I don’t see much in the Carbon TIME curriculum to engage them in a personal way. So like I feel like from personal experience and then also from hearing this and from other people like that you really need to make that connection, that like very personal connection with students who are not engaged in school. So for example, the elicitation, “How does a boy grow?” I ended up changing the elicitation to focus on a particular… like an African-American woman climber. Well, I wish that there was just some way, I like dry mass; I just wish there was some way to make it somehow a little bit more real for students. Because I was doing a lot of things behind the scenes. I was doing all of the massing, and the drying and everything like that. So I think that’s part of the reason why that students maybe have felt some disconnect to it. Well, I do think that they have a better sense of their trajectory when they’re doing Carbon TIME. I don’t know… I think ideally Carbon TIME wants them to take a little bit more ownership of their learning and I don’t know if that happened this year. I think that they’re still… They still just very much wanted to… (Mackenzie: Check off the boxes?) Check off the boxes, yeah. Thinking about like how to bring student voice to some of the Carbon TIME curriculum. So it’s actually been great to have other people at other schools to talk to about this stuff.

Table 74. Ms. Callahan’s Focus on the Importance of Science Topics and Skills

Interview: SS unit Nov 2015; AN unit April 2016; PL unit April 2016; Post June 2016

Example Excerpts from the Transcriptions:

But I think for high school students they need to be accountable for that [percent change in mass] and understand the basic statistical analysis, if they’re going to work with data and really good science research.
They need to understand, I think it’s important. I really think it’s important that the students account for their differences in their materials. That’s a great skill for all students to have. I think it’s important for the students to be able to understand a little bit more about the citric acid cycle to be able to understand more about the electron transport chain and chemiosmosis and all [those] components. I think the more that academically—I don’t want to say challenging—but the more the layers that they have for cellular respiration, the stronger they’re going to be when they get to chemistry—the stronger they’re going to be when they become biologists in the future. So I really think that component on its own is critical…. And I’m jotting down ideas and they’re editing their thoughts. And I think that’s fantastic—the constant refining of their ideas and their explanations is really important, so. But I think just getting used to that long term experimental data, and that’s what’s going to be in their future, especially if they go into research, getting used to the idea that you’re accountable and it has to last. You have to take care of it every day. You have to make observations every day, and all that can lead to the point where it’s very important for them…. I think that really gives them that ownership of their education. It gives them that ownership of their learning. That is really important for them…. I think it’s [growing radishes] going to really stick with them and that’s - I think an important part of your science is that you make it so that it’s unforgettable. That the – that radish growth will be something, surprisingly, that they’re going to remember for life…. I think we just, as I said earlier, I don’t know if we necessarily give the students enough time to really be able to express what their understanding is, what they’re perceiving, their explanations and that would have been - these are good questions.
So we know that photosynthesis part of this process where CO2 is incorporated into the plants and into the glucose, why is that critical for life on earth? That’s a great question. Like that’s a - from my end one of the most important questions of Carbon TIME. The similarity is that there are a lot of student collaborations. That there’s a lot of interaction between the students, and I think that’s really important for science. In so many ways, Carbon TIME is really distinct and different from what I typically would do in a classroom. It’s more - I don’t want to say inquiry based but it’s certainly not as direct with information transfer. The students are the ones that are exploring and they’re coming up with ideas. They’re sort of using the evidence that they are collecting to make their claims. That’s very different from how I normally run a lot of the information. Mostly, especially in biology these like some facts. There’s four stages of mitosis. The students are get to develop those. Carbon TIME is a neat step away from sort of memorization, but just gaining and taking in information that you’re seeking and understanding. A lot of differences in that way…. They’re seeing how important it is that they understand that separation of matter and energy…. When we talk about Chemistry, we’re talking about the materials. Like this is important for me to understand how atoms are in relation to the big picture.

Table 74 (cont’d)

Y1 Oct 2016: Well, I think it’s really important that students get a chance to express their ideas. This is only the second time that they’ve expressed ideas so far in this unit, in this module. I want them to know that what they think matters. Each of the time, they sort of thoughtfully put it down and then maybe get some questions going. I also think that once they’ve spent some time with it quietly and working with the writing, that one they do share with a partner, they’re more able to say, “Oh, okay, yeah.
I was writing that same question,” or it’s sort of a little awesome to make more connections between the two people…. So, I think that’s probably why I wanted to use formative assessments to be more succinct in the curriculum that we used, in the class. I think in both cases, it’s really important that students are explaining their reasoning, that they’re understanding why they’re thinking certain ways, and that they’re absolutely using evidence to support those answers…. I mean, I think it’s important that we know that investigations never really end, that a good investigation leads to more questioning and more answers…. So, we spend a lot of time talking about, what information are you getting, where are you getting it from, why is that so important for you, and how can you use that?

APPENDIX H

TEACHER-MODIFIED ARTIFACTS

Figure 18. Ms. Nolan’s modification to the Expressing Ideas Tool in the Animals unit to substitute a panda growing for a boy growing as the phenomenon of interest

Figure 19. Ms. Nolan’s modification to embed the Evidence-based Arguments Tool into the Explanations Tool: The front side

Figure 20. Ms. Nolan’s creation of a new tool that combines all three processes

Figure 21. Ms. Callahan’s modification to the data spreadsheet for the ethanol burning investigation in the Systems & Scale unit to include percent change in mass in Year Two

Figure 22. Ms. Wei’s modification to the Evidence-based Arguments Tool in the Animals Unit to match the Claim-Evidence-Reasoning framework

APPENDIX I

CURRICULUM MATERIALS

Figure 23. Predictions Tool for the Systems and Scale Unit

Figure 24. Expressing Ideas Tool for the Systems and Scale Unit

Figure 25. Evidence-based Arguments Tool for the Systems and Scale Unit

Figure 26. Explanations Tool for the Systems and Scale Unit

Figure 27. The Carbon TIME Instructional Model

REFERENCES

Akkerman, S. F., & Bakker, A. (2011a). Boundary crossing and boundary objects.
Review of Educational Research, 81(2), 132-169.

Allen, C. D., & Penuel, W. R. (2015). Studying teachers’ sensemaking to investigate teachers’ responses to professional development focused on new standards. Journal of Teacher Education, 66, 136-149.

Alonzo, A. C. (2011). Learning progressions that support formative assessment practices. Measurement, 9, 124-129.

Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is—or might be—the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6-8, 14.

Ball, D. L., & Forzani, F. M. (2009). The work of teaching and the challenge for teacher education. Journal of Teacher Education, 60(5), 497-511.

Bertrand, M., & Marsh, J. A. (2015). Teachers’ sensemaking of data and implications for equity. American Educational Research Journal, 52(5), 861-893.

Borko, H., Jacobs, J., Eiteljorg, E., & Pittman, M. E. (2008). Video as a tool for fostering productive discussions in mathematics professional development. Teaching and Teacher Education, 24, 417-436.

Clark, C., & Lampert, M. (1986). The study of teacher thinking: Implications for teacher education. Journal of Teacher Education, 37, 27-31.

Cobb, P., & Jackson, K. (2011). Towards an empirically grounded theory of action for improving the quality of mathematics teaching at scale. Mathematics Teacher Education and Development, 13(1), 6-33.

Cobb, P., Zhao, Q., & Dean, C. (2009). Conducting design experiments to support teachers’ learning: A reflection from the field. Journal of the Learning Sciences, 18(2), 165-199.

Coburn, C. E. (2001). Collective sensemaking about reading: How teachers mediate reading policy in their professional communities. Educational Evaluation and Policy Analysis, 23(2), 145-170.

Coburn, C. E. (2005). Shaping teacher sensemaking: School leaders and the enactment of reading policy. Educational Policy, 19(3), 476-509.

Cohen, D. K., & Ball, D. L. (2001). Making change: Instruction and its improvement.
The Phi Delta Kappan, 83(1), 73-77.

Davis, E. A., Beyer, C., Forbes, C. T., & Stevens, S. (2011). Understanding pedagogical design capacity through teachers’ narratives. Teaching and Teacher Education, 27, 797-810.

DeBarger, A. H., Penuel, W. R., Moorthy, S., Beauvineau, Y., Kennedy, C. A., & Boscardin, C. K. (2017). Investigating purposeful science curriculum adaptation as a strategy to improve teaching and learning. Science Education, 101, 66-98.

Desimone, L. M. (2009). Improving impact studies of teachers’ professional development: Toward better conceptualizations and measures. Educational Researcher, 38(3), 181-199.

de Vries, H., Elliott, M. N., Kanouse, D. E., & Teleki, S. S. (2008). Using pooled kappa to summarize interrater agreement across many items. Field Methods, 20(3), 272-282.

Doherty, J. H., Draney, K., Shin, H. J., Kim, J. H., & Anderson, C. W. (in preparation). Validation of a learning progression-based monitoring assessment. To be submitted to Science Education.

Drake, C., & Sherin, M. G. (2006). Practicing change: Curriculum adaptation and teacher narrative in the context of mathematics education reform. Curriculum Inquiry, 36(2), 153-187.

Dyson, A. H., & Genishi, C. (2005). On the case: Approaches to language and literacy research. New York, NY: Teachers College Press.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119-161). New York, NY: Macmillan.

Fishman, B. J., Marx, R. W., Best, S., & Tal, R. T. (2003). Linking teacher and student learning to improve professional development in systemic reform. Teaching and Teacher Education, 19, 643-658.

Flyvbjerg, B. (2011). Case study. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research. Thousand Oaks, CA: SAGE Publications.

Frank, K. A., Kim, C., & Belman, D. (2010). Utility theory, social networks, and teacher decision making. In A. J.
Daly (Ed.), Social network theory and educational change (pp. 232-242). Cambridge, MA: Harvard University Press.

Frank, K. A., Zhao, Y., Penuel, W. R., Ellefson, N., & Porter, S. (2011). Focus, fiddle, and friends: Experiences that transform knowledge for the implementation of innovations. Sociology of Education, 84(2), 137-156.

Furtak, E. M. (2012). Linking a learning progression for natural selection to teachers’ enactment of formative assessment. Journal of Research in Science Teaching, 49(9), 1181-1210.

Furtak, E. M., Morrison, D., & Kroog, H. (2014). Investigating the link between learning progressions and classroom assessment. Science Education, 98, 640-673.

Gee, J. P. (2005). An introduction to discourse analysis: Theory and method (2nd ed.). New York, NY: Routledge.

Glesne, C. (2006). Becoming qualitative researchers: An introduction (3rd ed.). Boston, MA: Pearson.

Gotwals, A. W., & Alonzo, A. C. (2012). Introduction: Leaping into learning progressions in science. In A. C. Alonzo & A. W. Gotwals (Eds.), Learning progressions in science: Current challenges and future directions (pp. 3-12). Rotterdam, The Netherlands: Sense Publishers.

Greeno, J. G. (1997). On claims that answer the wrong questions. Educational Researcher, 26(1), 5-17.

Greeno, J. G., Collins, A. M., & Resnick, L. B. (1996). Cognition and learning. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 15-46). New York: Macmillan.

Gunckel, K. L. (2013). Fulfilling multiple obligations: Preservice elementary teachers’ use of an instructional model while learning to plan and teach science. Science Education, 97, 139-162.

Gunckel, K. L., Mohan, L., Covitt, B. A., & Anderson, C. W. (2012). Addressing challenges in developing learning progressions for environmental science literacy. In A. C. Alonzo & A. W. Gotwals (Eds.), Learning progressions in science: Current challenges and future directions (pp. 39-75). Rotterdam, The Netherlands: Sense Publishers.
Jin, H., & Anderson, C. W. (2012a). A learning progression for energy in socio-ecological systems. Journal of Research in Science Teaching, 49(9), 1149-1180.

Jin, H., & Anderson, C. W. (2012b). Developing assessments for a learning progression on carbon-transforming processes in socio-ecological systems. In A. C. Alonzo & A. W. Gotwals (Eds.), Learning progressions in science: Current challenges and future directions (pp. 151-181). Rotterdam, The Netherlands: Sense Publishers.

Johnstone, B. (2008). Discourse analysis (2nd ed.). Malden, MA: Blackwell Publishing.

Kang, H., Thompson, J., & Windschitl, M. (2014). Creating opportunities for students to show what they know: The role of scaffolding in assessment tasks. Science Education, 98, 674-704.

Kang, H., Windschitl, M., Stroupe, D., & Thompson, J. (2016). Designing, launching, and implementing high quality learning opportunities for students that advance scientific thinking. Journal of Research in Science Teaching, 53(9), 1316-1340.

Kang, H. (2017). Preservice teachers’ learning to plan intellectually challenging tasks. Journal of Teacher Education, 68(1), 55-68.

Kazemi, E., & Hubbard, A. (2008). New directions for the design and study of professional development: Attending to the coevolution of teachers’ participation across contexts. Journal of Teacher Education, 59(5), 428-441.

Korthagen, F., & Vasalos, A. (2005). Levels in reflection: Core reflection as a means to enhance professional growth. Teachers and Teaching: Theory and Practice, 11(1), 47-71.

Lee, O., Miller, E. C., & Januszyk, R. (2014). Next Generation Science Standards: All standards, all students. Journal of Science Teacher Education, 25, 223-233.

Lemke, J. L. (1990). Talking science: Language, learning, and values. Norwood, NJ: Ablex.

Loughran, J. J. (2002). Effective reflective practice: In search of meaning in learning about teaching. Journal of Teacher Education, 53(1), 33-43.

Marco-Bujosa, L. M., McNeill, K. L., González-Howard, M., & Loper, S.
(2017). An exploration of teacher learning from an educative reform-oriented science curriculum: Case studies of teacher curriculum use. Journal of Research in Science Teaching, 54(2), 141-168.

März, V., & Kelchtermans, G. (2013). Sense-making and structure in teachers’ reception of educational reform: A case study on statistics in the mathematics curriculum. Teaching and Teacher Education, 29, 13-24.

Michaels, S., & O’Connor, C. (2012). Talk science primer. Cambridge, MA: TERC.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage Publications, Inc.

Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). Thousand Oaks, CA: SAGE Publications, Inc.

Mohan, L., Chen, J., & Anderson, C. W. (2009). Developing a multi-year learning progression for carbon-cycling in socio-ecological systems. Journal of Research in Science Teaching, 46(6), 675-698.

Moje, E. B. (1995). Talking about science: An interpretation of the effects of teacher talk in a high school science classroom. Journal of Research in Science Teaching, 32(4), 349-371.

Mortimer, E. F., & Scott, P. H. (2003). Meaning making in secondary science classrooms. Maidenhead, PA: Open University Press.

Mulcahy, D. (2012). Thinking teacher professional learning performatively: A socio-material account. Journal of Education and Work, 25(1), 121-139.

National Academies of Sciences, Engineering, and Medicine. (2015). Science teachers’ learning: Enhancing opportunities, creating supportive contexts. Committee on Strengthening Science Education through a Teacher Learning Continuum. Board on Science Education and Teacher Advisory Council, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: The National Academies Press.
NGSS Lead States. (2013). Next Generation Science Standards: For states, by states. Washington, DC: The National Academies Press.

NGSX. (2017). NGSX: The Next Generation Science Exemplar Learning System for Science Educators. Retrieved from http://www.ngsx.org

Opfer, V. D., & Pedder, D. (2011). Conceptualizing teacher professional learning. Review of Educational Research, 81(3), 376-407.

Palmer, D., & Rangel, V. S. (2011). High stakes accountability and policy implementation: Teacher decision making in bilingual classrooms in Texas. Educational Policy, 25(4), 614-647.

Penuel, W. R., & Fishman, B. J. (2012). Large-scale science education intervention research we can use. Journal of Research in Science Teaching, 49(3), 281-304.

Penuel, W. R., Fishman, B. J., Cheng, B. H., & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher, 40(7), 331-337.

Penuel, W. R., Fishman, B. J., Yamaguchi, R., & Gallagher, L. P. (2007). What makes professional development effective? Strategies that foster curriculum implementation. American Educational Research Journal, 44(4), 921-958.

Penuel, W. R., Riel, M., Joshi, A., Pearlman, L., Min Kim, C., & Frank, K. A. (2010). The alignment of the informal and formal organizational supports for reform: Implications for improving teaching in schools. Educational Administration Quarterly, 46(1), 57-95.

Penuel, W., Riel, M., Krause, A., & Frank, K. (2009). Analyzing teachers’ professional interactions in a school as social capital: A social network approach. Teachers College Record, 111(1), 124-163.

Putnam, R. T., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4-15.

Reiser, B. J. (2013). What professional development strategies are needed for successful implementation of the Next Generation Science Standards?
Paper presented at the Invitational Research Symposium on Science Assessment. Retrieved from http://www.ets.org/research/policy_research_reports/publications/paper/2013/jvhf

Remillard, J. T. (2005). Examining key concepts in research on teachers’ use of mathematics curricula. Review of Educational Research, 75(2), 211-246.

Roehl, T. (2012). From witnessing to recording—material objects and the epistemic configuration of science classes. Pedagogy, Culture, & Society, 20(1), 49-70.

Sandberg, J., & Tsoukas, H. (2015). Making sense of the sensemaking perspective: Its constituents, limitations, and opportunities for further development. Journal of Organizational Behavior, 36, 6-32.

Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass Publishers.

Sherin, M. G., & Drake, C. (2009). Curriculum strategy framework: Investigating patterns in teachers’ use of a reform-based elementary mathematics curriculum. Journal of Curriculum Studies, 41(4), 467-500.

Shoffner, M. (2011). Considering the first year: Reflection as a means to address beginning teachers’ concerns. Teachers and Teaching: Theory and Practice, 17(4), 417-433.

Spillane, J. P., Kim, C. M., & Frank, K. A. (2012). Instructional advice and information providing and receiving behavior in elementary schools: Exploring tie formation as a building block in social capital development. American Educational Research Journal, 49(6), 1112-1145.

Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72(3), 387-431.

Star, S. L. (2010). This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values, 35(5), 601-617.

Star, S. L., & Griesemer, J. R. (1989).
Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s museum of vertebrate zoology. Social Studies of Science, 19(3), 387-420.

Stein, M. K., & Kaufman, J. H. (2010). Selecting and supporting the use of mathematics curricula at scale. American Educational Research Journal, 47(3), 663-693.

Stroupe, D. (2014). Examining classroom science practice communities: How teachers and students negotiate epistemic agency and learn science-as-practice. Science Education, 98, 487-516.

Talanquer, V., Bolger, M., & Tomanek, D. (2015). Exploring prospective teachers’ assessment practices: Noticing and interpreting student understanding in the assessment of written work. Journal of Research in Science Teaching, 52(5), 585-609.

Tannen, D., & Wallat, C. (1987). Interactive frames and knowledge schemas in interaction: Examples from a medical examination/interview. Social Psychology Quarterly, 50(2), 205-216.

Thompson, J., Hagenah, S., Kang, H., Stroupe, D., Braaten, M., Colley, C., & Windschitl, M. (2016). Rigor and responsiveness in classroom activity. Teachers College Record, (5), 1-58.

van Driel, J. H., Beijaard, D., & Verloop, N. (2001). Professional development and reform in science education: The role of teachers’ practical knowledge. Journal of Research in Science Teaching, 38(2), 137-158.

van Es, E. A., & Sherin, M. G. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher Education, 10(4), 571-596.

van Es, E. A., & Sherin, M. (2008). Mathematics teachers “learning to notice” in the context of a video club. Teaching and Teacher Education, 24, 244-276.

Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: SAGE Publications.

Weick, K. E. (2001). Making sense of the organization. Malden, MA: Blackwell.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16(4), 409-421.

Wertsch, J.
V. (1998). Mind as action. New York, NY: Oxford University Press.

Windschitl, M., Thompson, J., Braaten, M., & Stroupe, D. (2012). Proposing a core set of instructional practices and tools for teachers of science. Science Education, 96, 878-903.

Windschitl, M., & Stroupe, D. (2017). The three-story challenge: Implications of the Next Generation Science Standards for teacher preparation. Journal of Teacher Education, 68(3), 251-261.

Yin, R. K. (2014). Case study research: Design and methods. Thousand Oaks, CA: SAGE Publications.

Zangori, L., Forbes, C. T., & Biggers, M. (2013). Fostering student sense making in elementary science learning environments: Elementary teachers’ use of science curriculum materials to promote explanation construction. Journal of Research in Science Teaching, 50(8), 989-1017.

Zhao, Y., & Frank, K. A. (2003). Factors affecting technology uses in schools: An ecological perspective. American Educational Research Journal, 40(4), 807-840.