GROUNDING FORMATIVE ASSESSMENT IN HIGH-SCHOOL CHEMISTRY CLASSROOMS: CONNECTIONS BETWEEN PROFESSIONAL DEVELOPMENT AND TEACHER PRACTICE

By

Dante Igor Cisterna Alburquerque

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Curriculum, Instruction, and Teacher Education—Doctor of Philosophy

2014

ABSTRACT

GROUNDING FORMATIVE ASSESSMENT IN HIGH-SCHOOL CHEMISTRY CLASSROOMS: CONNECTIONS BETWEEN PROFESSIONAL DEVELOPMENT AND TEACHER PRACTICE

By Dante Igor Cisterna Alburquerque

This study describes and analyzes the experiences of two high-school chemistry teachers who participated in a team-based professional development program to learn about and enact formative assessment in their classrooms. The overall purpose of this study is to explain how participation in this professional development influenced both teachers' classroom enactment of formative assessment practices. This study focuses on 1) teachers' participation in the professional development program, 2) teachers' enactment of formative assessment, and 3) factors that enabled or hindered enactment of formative assessment. Drawing on cultural-historical activity theory (CHAT) and using evidence from teacher lessons, teacher interviews, and professional development meetings as data sources, this single embedded case study analyzes how these two teachers, who participated in the same learning team and have similar characteristics (i.e., teaching in the same school, teaching the same courses and population of students, and using the same materials), differentially used what they learned about formative assessment in the professional development as mediating tools to improve their classroom instruction. The learning team experience contributed to both teachers' development of a better understanding of formative assessment—especially in recognizing that their current grading and assessment practices were not appropriate for promoting student learning—and to the co-creation of artifacts to gather evidence of students' ideas. Although both teachers demonstrated understanding of how formative assessment may serve to promote student learning and had a set of tools available for formative assessment use, they did not enact these tools in the same way. One teacher appropriated formative assessment as a mediating tool to verify whether the students were following her explanations and to check whether the students were able to provide the correct response. The other teacher used the mediating tool to promote a better understanding of students' ideas; her mindset shifted to place more value on the diversity of students' thinking and to help students become more aware of their own ideas. This study illustrates the complexities of enacting formative assessment practices in particular classrooms because teachers may interpret and use these tools in different ways. Thus, when teachers enacted these mediating tools, their interaction with the activity system's components produced different instructional outcomes and tensions. Similarly, this study describes how the use of artifacts of practice can serve as a vehicle between professional development and classrooms, especially in early stages of professional development. This study presents implications for professional development and formative assessment research and practice.
Professional development needs to support teachers in reflecting on their practice in terms of activity systems, build on a solid, research-based understanding of formative assessment, and provide opportunities for teachers to create, enact, and reflect on formative assessment artifacts and tools.

Copyright by DANTE IGOR CISTERNA ALBURQUERQUE 2014

ACKNOWLEDGMENTS

During the years I dedicated to pursuing this research, I received contributions from admirable people and institutions that supported me in this endeavor. My dissertation director, Dr. Amelia Wenk Gotwals, provided me with outstanding guidance in conducting this study. She helped me frame and refine my ideas about formative assessment and science instruction. She also provided quality feedback on organizing my ideas and my writing. I am grateful to my Dissertation Committee members, Drs. Michelle Williams, Gail Richmond, and Ralph Putman, who provided me with meaningful feedback and pushed my thinking forward. Dr. Edward Roeber gave me substantive support and guidance. He introduced me to his research on formative assessment and helped me identify routes and sites for conducting my research. I greatly appreciate the support of the Michigan Department of Education, which funded this research study, and the Michigan State University Graduate School, which funded me with the Dissertation Completion Fellowship. I also appreciate the generosity of the teachers at the research site, especially the two chemistry teachers who opened their classrooms to me and were always open to my questions and requests. My gratitude goes to all those who provided intellectual support and feedback for this study. My colleagues from the research project on formative assessment, Tara Kintz, John Lane, and Steve Bennett, provided me with remarkable insights and feedback. The mentors and colleagues from the Sandra K. Abell Institute in Science Education gave me meaningful ideas for crafting my data analysis and interpretation. David Davenport and Ibrahim Delen provided me with valuable support and feedback in the data analysis and writing stages.

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

CHAPTER 1: INTRODUCTION TO THE STUDY
    Purpose of the Study
    Research Questions
    Study Context: A Snapshot
        Learning About Formative Assessment in a Statewide PD Program
        The School
    Organization of this Document
CHAPTER 2: REVIEW OF THE LITERATURE
    Formative Assessment Theory and Practice
        Formative Assessment Within Instructional Systems
    Guiding Principles to Organize Formative Assessment
        Establishing Where the Learners Are in their Learning (Where am I going?)
        Establishing Where the Learners Are Going (How am I getting there?)
        Establishing What Needs to Be Done to Get Students There (Where to next?)
    Research on Formative Assessment in Science
    Factors that Influence Formative Assessment Practice
        Time and Support in Schools
        Cultural Values and Assumptions About Assessment
        Teachers' Beliefs
        Teachers' Knowledge
    Teachers Learning About Formative Assessment
        Professional Development Focused on Formative Assessment
        Local and Community-Based Professional Development
    Notes About Teacher Practice and Professional Development
        Understanding Teacher Practice
        Changes in Teacher Practice

CHAPTER 3: RESEARCH DESIGN AND METHODS
    Analytical Framework: Sociocultural Theories of Learning
        Sociocultural Theories of Learning and Formative Assessment
        Sociocultural Theories of Learning and Science Education
    Cultural-Historical Activity Theory as a Lens to Understand Complex Systems
        CHAT and Educational Research
    Teacher Learning and Appropriation of Pedagogical Tools
        Levels of Appropriation
            Lack of appropriation
            Appropriating a label
            Appropriating surface features
            Appropriating conceptual understandings
            Achieving mastery
    Research Approach: Case Study
        Case Study: Single and Embedded Design
    Participants
    Data Sources
        Lesson Videotapes
        Teacher Interviews
        Learning Team Meeting Videotapes
        Secondary Data Sources
    Procedures for Coding and Analyzing the Data
        Data Reduction
            Classroom observation coding protocol and narrative
            Coding schema for teacher interviews
            Coding schema for learning team meetings
        Coding and Activity Systems Organization
    Rigor and Quality in Research
        Brief Notes About Reflexivity

CHAPTER 4: FORMATIVE ASSESSMENT IN THE LEARNING TEAM EXPERIENCE
    Overview of the FAME Learning Team
        Types of Activities
        Formative Assessment Content
        Depth of Formative Assessment Content
        Depth of Discussion
        Synthesis
    Progress During Learning Team Meetings
        Understanding Formative Assessment and its Relationship with Grading
        Crafting Formative Assessment Tools and Using Students' Evidence
        Evaluation and Projections of the Learning Team Experience
    Chemistry Teachers' Role in the Learning Team
        Different Approaches to Formative Assessment and Grading
            Diane's first approach to formative assessment
            Lisa's first approach to formative assessment
        Collaboration in the Creation of Formative Assessment Tools
        Evaluation of the Learning Team Experience
            Diane's foci
            Lisa's foci
    Outcomes of the Learning Team Experience
        Formative Assessment Tools as a Product of Shared Expectations
        Formative Assessment as Potential Boundary Objects
    Influence of the Learning Team in Teachers' Learning of Formative Assessment
        Diane's Outcomes
        Lisa's Outcomes

CHAPTER 5: ENACTMENT OF FORMATIVE ASSESSMENT IN CHEMISTRY CLASSROOMS
    A Starting Point to Organize Teachers' Activity Systems
    Mapping the Terrain for Formative Assessment: Teachers' Instructional Practices
        Spaces, Instructional Materials, and Curriculum
        Diane's Instructional Patterns
        Lisa's Instructional Patterns
        Synthesis and Connections with Activity Systems
    Enacting Formative Assessment Tools
        Diane's Enactment
            Connections with Diane's activity system
        Lisa's Enactment
            Connections with Lisa's activity system
        Synthesis of Exit Slips' Use and Connections with Learning About Practice
    Formative Assessment Moments: Zooming in on Classroom Practice
        Diane's Formative Assessment Moment
            Contribution of this moment to understand Diane's activity system
        Lisa's Formative Assessment Moment
            Contribution of this moment to understand Lisa's activity system
    Activity Systems for Diane and Lisa
        Diane's Activity System
        Lisa's Activity System
    Tensions and Challenges in Activity Systems

CHAPTER 6: DISCUSSION AND IMPLICATIONS
    Teachers' Engagement in FAME Professional Development
        Learning Teams and Classroom Practice
        Making Connections at Early Stages of Professional Development
    Activity Systems and the Enactment of Formative Assessment
    Tensions in the Enactment of Formative Assessment
    Implications of the Study
        Implications for FAME Professional Development
        Implications for Team-Based Professional Development Models
        Implications for Research on Professional Development
        Implications for Research on Formative Assessment
    Limitations of the Study
    Projections for Future Research

APPENDICES
    APPENDIX 1: Summaries of Lessons Videotaped
    APPENDIX 2: Teacher Interviews' Protocols
    APPENDIX 3: Preliminary Coding Schema for Lessons
    APPENDIX 4: Coding Schema for Learning Team Meetings
    APPENDIX 5: Code Trees Representations

REFERENCES

LIST OF TABLES

Table 1: Summary of Lessons Videotaped
Table 2: Guiding Questions Used for Selective Coding
Table 3: Learning Team Activities Across Meetings
Table 4: Time Spent in Formative Assessment Components by Meeting
Table 5: Meeting Time Distribution According to Depth of Formative Assessment
Table 6: Depth of Discussion Across Meetings
Table 7: Activity System Components
Table 8: Descriptions of Diane's Lessons
Table 9: Descriptions of Lisa's Lessons

LIST OF FIGURES

Figure 1: Activity system model (adapted from Engeström (1987))
Figure 2: Representation of the single embedded case study framed for this research, including the case under study, the embedded units of analysis, and the context (adapted from Yin (2009))
Figure 3: Students selecting the red-yellow-green cards of the formative assessment tool co-created in the learning team
Figure 4: Activity system model (adapted from Engeström (1987))
Figure 5: Common elements for Diane's and Lisa's activity systems
Figure 6: Views of Diane's and Lisa's chemistry classrooms
Figure 7: Comparison between Lisa's and Diane's notes to explain the Gibbs free energy equation. Lisa (top) used the whiteboard to write notes while she explained, while Diane (bottom) completed her notes in the unit package. Note the differences in the notes' organization and the level of detail.
Figure 8: Diane's students using the exit slips. Note that the student on the left selected the yellow card and the student on the right is holding the yellow and the green cards in order to make a decision.
Figure 9: Representation of Diane's activity system that occurred when she enacted formative assessment as a mediating tool to improve her chemistry instruction.
Figure 10: Representation of Lisa's activity system that occurred when she enacted formative assessment as a mediating tool to improve her chemistry instruction.
Figure 11: Characterization of the main tensions that arose in teachers' activity systems. For Diane (top), the enactment of formative assessment produced tensions between the tool and the object, the subject and the object, and the object and the division of labor. For Lisa (bottom), the main tension occurs between the object of the activity and the community.
Figure 12: Code trees generated in grounded theory analysis

CHAPTER 1: INTRODUCTION TO THE STUDY

Classroom assessment is a key practice for classroom teachers. On average, a teacher can spend as much as a third to a half of his or her professional time involved in assessment-related activities (Stiggins, 1999). This large amount of time spent on assessment implies that classroom assessment is tied to teachers' instructional practices and can serve the purpose of supporting instruction and influencing student learning (McMillan, 2013). Formative assessment is recognized for its contribution to student learning (e.g., Black & Wiliam, 1998a, 1998b; Ruiz-Primo, 2011) and engagement (e.g., Black, Harrison, Lee, Marshall, & Wiliam, 2004; Webb & Jones, 2009). In 2006, based on an extensive review of the formative assessment literature and consideration of the theories that are fundamental to this process, the Council of Chief State School Officers defined formative assessment as a "process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students' achievement of intended instructional outcomes" (CCSSO, 2008, p. 3). Teachers with knowledge and skills in formative assessment are better able to organize their instructional and assessment practices for promoting student learning (Buck, Trauth-Nare, & Kaftan, 2010). Teachers need information from students as a basis to make informed judgments, to provide feedback to students, and to make instructional decisions.
Moreover, students involved in formative assessment are able to recognize the nuances in their learning process and find strategies to regulate their learning (Allal, 2010). Students engaged in the formative assessment process are able to identify learning goals, receive feedback from teachers and peers, show their ideas and understanding to the class, and self-assess their learning and the learning of their peers. The interconnection of these strategies helps teachers and students to decide future courses of action based on the evidence of student learning (Black & Wiliam, 2009).

Even though formative assessment is recognized as an important practice about which teachers need to be knowledgeable and skillful, some researchers (e.g., Athanases & Achinstein, 2003; Black et al., 2004; Black & Wiliam, 2005; Brookhart, 2001; Daws & Singh, 1996; Schneider & Randel, 2009) have argued that many teachers have insufficient levels of assessment literacy and lack expertise in sound formative assessment practices. This situation applies to teachers regardless of the stage of their professional careers. Many have argued that teacher preparation programs do not provide adequate learning experiences for teachers (e.g., Brookhart, 2001; Buck et al., 2010; Maclellan, 2004; Otero & Nathan, 2008), although formative assessment knowledge and skills are frequently included in teacher professional standards. Moreover, there are scarce opportunities for teachers to learn about formative assessment, partly related to the meager opportunities that school and district administrators have to learn and reflect on the importance of this practice (Stiggins, 2006).

Despite the troubling state of the field, teacher learning about formative assessment can be promoted through professional development (Black et al., 2004; Popham, 2009; Schneider & Randel, 2009). Well-designed professional development opportunities may help teachers gain the knowledge and skills needed to do formative assessment in a way that is supportive of student learning and goes beyond the mere use of teacher-centered assessment procedures (Coffey, Hammer, Levin, & Grant, 2011; Shepard, 2009). Even though research has identified some characteristics associated with effective professional development in formative assessment (Coffey, Sato, & Thiebault, 2005; Sato, 2003; Schneider & Randel, 2009; Wylie, Lyon, & Goe, 2009), there is little research on how professional development influences teachers' practices. This is important because one of the key outcomes of all professional development models is to influence teachers' practices (Desimone, 2009).

The design of professional development models that are closer to the context of teaching and situated in schools has been promoted as a feature of effective professional development. One way to have professional development take contextual factors into account is through the use of professional learning communities (Wenger, 1998) that share knowledge and expertise about classroom practice through reflective inquiry. These models have been suggested as ways of impacting teacher practice by developing a sustainable driving force for change in schools (Stoll, Bolam, McMahon, Wallace, & Thomas, 2006) and an empowered community of learners (Thomas, Wineburg, Grossman, Myhre, & Woolworth, 1998). Research has documented some impact of these models on teacher practice, but the evidence beyond teacher perceptions is limited (Vescio, Ross, & Adams, 2008).
Research does not provide much evidence about how teachers translate learning from professional development experiences into enactment of new practices. Even though it is expected that teachers make use of the knowledge and skills that they learned in professional development (Kazemi & Hubbard, 2008), it is not clear how teachers enact what they learned in the classroom setting and connect it to their practice. We also do not know how teachers negotiate tensions related to making changes in their current instructional systems, for example, in terms of the curricula and students' characteristics. Research on teacher learning, furthermore, posits that, even though teachers may learn in cooperative professional development programs, the way they enact a practice in their particular settings (i.e., classrooms) is individual and mediated by numerous factors (Cobb, McClain, de Silva Lamberg, & Dean, 2003). When enacting formative assessment, we know little about how professional development and classroom settings feed each other (Kazemi & Hubbard, 2008).

Research on formative assessment for science teachers and in science classrooms reports that professional development in formative assessment may influence teacher practice and student learning (e.g., Falk, 2012; Furtak, 2012; Furtak et al., 2008; Ruiz-Primo & Furtak, 2007). While there have been some studies on science-specific formative assessment, Coffey et al. (2011) argued that research on formative assessment has overlooked the nature of science learning, especially in terms of scaffolding students' scientific ideas.

Purpose of the Study

The purpose of this dissertation study is to characterize the formative assessment practices of two experienced chemistry teachers (Diane and Lisa) who participated in a local team-based professional development program to learn about formative assessment theory and classroom practice. This embedded single case study (Yin, 2009) characterizes the learning experiences of Diane and Lisa in the professional development program, which is based on learning teams—a type of professional learning community (Wenger, 1998). Both teachers actively engaged in discussions about formative assessment in the context of one suburban high school situated in the state of Michigan. Based on what they learned in the team meetings and motivated by activities in the learning team setting, both teachers attempted formative assessment practices in their classrooms. Besides participating in the same learning team, Diane and Lisa have many elements in common. Both have worked for several years in the same school building, use the same instructional resources, and teach chemistry courses to students at the same grade level. This study examines both teachers' 10th grade chemistry classes to understand the nuances in the ways both teachers engaged with formative assessment. Drawing on sociocultural theories of learning and, in particular, cultural-historical activity theory (CHAT), this study describes the two teachers' enactment of formative assessment practices and connects their classroom enactment to their engagement in the professional development setting. Even though both teachers had the same professional development experience, they enacted formative assessment differently in their particular and respective instructional systems.
Equally important for this study is to understand the factors that influenced the enactment of formative assessment and the extent to which these factors enabled or hindered the development of this practice. This study also analyzes tensions that emerged in the professional development experience and in each teacher's classroom. In this study, I used evidence from learning team meetings (PD) and classroom videotapes to examine what teachers actually talked about and did, as well as teacher interviews to explore how teachers understood formative assessment, explained factors that influenced their practice, and made formative assessment-related decisions.

Research Questions

A classroom practice is formative to the extent that evidence about student understanding is elicited, interpreted, and used not just by teachers but also by students and their peers (Black & Wiliam, 2009). For this study I will focus on Diane's and Lisa's classroom enactment of formative assessment as well as the influence of the professional development model on their classroom enactment. In that sense, the emphasis is on teacher practices both in the professional development setting and in the chemistry classroom. For this embedded case study, I posed an overall research question as well as three specific research questions that refer to the different settings connected with the study. The overarching research question is: "How does participating in a team-based professional development influence two chemistry teachers' enactment of formative assessment classroom practices?"

The specific research questions focus on 1) teachers' participation in the professional development, 2) teachers' enactment of formative assessment, and 3) factors that enable or hinder enactment of formative assessment practices. Thus, the specific research questions are:

o How do these two chemistry teachers engage in a team-based professional development about formative assessment?
o How do these two chemistry teachers enact formative assessment in their classrooms?
o What tensions emerge when these two teachers learn about and enact formative assessment practices?

Study Context: A Snapshot

In this section I present contextual information that helps in understanding where and how the case study is embedded in a broader context. Previously, I explained that this study focused on two teachers who participated in a professional development model that aims to promote teachers' knowledge and skills in formative assessment. This section describes some characteristics of the formative assessment professional development model and outlines some features of the school and learning team.

Learning About Formative Assessment in a Statewide PD Program

Formative Assessment for Michigan Educators (FAME) is a statewide professional development program that started in 2008 to provide Michigan teachers with support in the implementation of effective formative assessment practices that promote student learning. The Michigan Department of Education developed FAME as part of a comprehensive and balanced assessment system created in response to the new high-school graduation requirements adopted in 2006, which included the provision of teacher professional development in formative assessment. FAME is designed to support teachers in learning about formative assessment theory, strategies, and techniques as well as providing an impetus to implement, reflect on, and refine new instructional and assessment practices.
To accomplish these goals, FAME is based largely within local contexts in a team-based setting. Membership in the FAME learning teams is decided at the local level (i.e., within a school or district). Each learning team is composed of one team facilitator (called a "coach") and five to eight learning team members (LTMs) who are interested in the study of formative assessment. Coaches may be teachers, school or district administrators, or curriculum specialists who are locally recruited to facilitate LTMs' learning about the concept of formative assessment and to promote the use of formative assessment strategies and tools. Through team meetings, LTMs are expected to develop knowledge about planning for and using formative assessment as an ongoing process of setting learning goals, gathering evidence of students' ideas, and using these ideas in order to provide feedback and alter instruction when pertinent. In addition, LTMs are expected to actively participate in learning team discussions about formative assessment, reflect on the successes and challenges in using formative assessment strategies, and support the work and ideas of other team members (Michigan Department of Education, 2011).

Using the definition of formative assessment stated by the Council of Chief State School Officers (CCSSO, 2008), the FAME model structures the process of formative assessment learning through eight components: (1) planning for formative assessment; (2) learning target use; (3) use of student evidence; (4) use of formative assessment strategies; (5) use of formative assessment tools; (6) student and teacher analysis; (7) formative feedback; and (8) making instructional decisions (Measured Progress, 2010). The model also emphasizes that these components are interrelated as a process and need to be coherently articulated. Before starting the team meetings, coaches and LTMs attend a single-day workshop named "Launching into Learning" to provide participants with a common understanding of these eight components. Teachers are asked to share their current instructional and assessment practices, so they can discuss and reflect on how the use of formative assessment would fit in their schools and classrooms. Moreover, all teams have access to print and online materials that cover the eight components of the formative assessment learning process and support teachers' implementation of classroom strategies and tools. Through this design, FAME aims to promote teacher learning about formative assessment and to increase the use of formative assessment strategies in the classroom.

As mentioned above, the FAME model is locally implemented, and learning teams have different makeups, learning foci, meeting agendas, and meeting frequencies. These are developed according to participants' characteristics and interests as well as with the collaboration of school and district administrators. Therefore, what the learning team meetings "look like" and how they impact teachers' knowledge and implementation of formative assessment is likely to vary. Since this learning process takes time, it is expected that learning team members will commit to working together for three or more years in order for this effort to be successful (Michigan Department of Education, 2011).

The School

As part of the FAME professional development program, a learning team was created in November of 2011 in a suburban high school located in southeastern Michigan.
In 2011-12 the high school had close to 1,800 students enrolled and more than 90 faculty members. In terms of student demographics, 83% of the students were classified as White, 8% as African-American, 3% as Hispanic, 2% as Asian, and 4% as mixed-race. Furthermore, 31% of the students were classified as economically disadvantaged, meaning students who are eligible for free or reduced-price meals (Michigan Department of Education, 2014). In relation to students' achievement, 33% of 11th grade students at this school were considered "proficient" in the science component of the 2012-13 Michigan Merit Examination (MME). As a reference, the statewide proportion of students who were considered proficient was 26%. Similarly, the proportion of students who met the ACT science benchmarks for college readiness was 27% at this high school, whereas the statewide average was 23% (Michigan Department of Education, 2014).

To be clear, the focus of this study is on the process of learning about formative assessment and enacting this practice in chemistry classrooms by the two teachers who are participants in this study. However, the professional development program and the school are elements that frame and influence the experiences of both teachers, and they are essential for setting the stage of this study.

Organization of this Document

This dissertation document is organized into six chapters. The second chapter reviews the research literature that supports the problem motivating this study. The review considers key components related to formative assessment theory and classroom implementation, professional development, and teacher learning and practice. The third chapter describes the analytical framework used in the study as well as its research design and methods.

The fourth chapter presents the results about the two chemistry teachers' participation in the team-based professional development. The fifth chapter examines both teachers' enactment of formative assessment and the activity system in which each teacher was immersed. Chapter five also describes their instructional and formative assessment practices, the classroom enactment of formative assessment artifacts created in the professional development, and different instructional situations that evidence the process of enactment. In addition, the fifth chapter examines teachers' perceptions of the learning experience, especially through the analysis of particular formative assessment-related instructional moments. The last chapter of this document connects and discusses the teachers' participation in the professional development, classroom practice, and the enactment of formative assessment. Chapter six also discusses the meaning of the experience in terms of teacher learning, formative assessment practice, and mediating factors in the process of enactment of formative assessment. The chapter presents and projects some implications of this study's findings for FAME as a professional development program and for research on formative assessment and professional development.

CHAPTER 2: REVIEW OF THE LITERATURE

In the introductory chapter I described elements that frame this case study, such as its context, significance, and research questions. In this chapter I will review conceptual and research-based literature that serves to frame this case study.
I will start by reviewing the state of the field of classroom formative assessment and highlighting the importance of the appropriation of formative assessment as knowledge and classroom practice. Then, I will describe factors that influence the enactment of formative assessment practices. Because teachers' learning about formative assessment can be promoted by professional development (Popham, 2009; Schneider & Randel, 2009), this chapter will review research on formative assessment professional development, especially school-based models related to professional learning communities. Finally, I will describe aspects related to the conceptualization of teacher practice and teacher learning, and emphasize the connections with professional development.

Formative Assessment Theory and Practice

Formative assessment is a classroom practice recognized for its contribution to student learning (e.g., Black et al., 2004; Black & Wiliam, 1998a, 1998b; Ruiz-Primo, 2011) and student involvement (e.g., Black et al., 2004; Webb & Jones, 2009). Teachers with knowledge and skills in formative assessment are able to better organize their instructional and assessment practices for promoting student learning (Buck et al., 2010). Students who are in classrooms where formative assessment is employed have the potential of being more engaged in their learning process because they have the agency to make decisions and adjustments to their own learning (Black & Wiliam, 2009; Brookhart, 2004, 2013a).

Although the positive contribution of formative assessment is generally recognized, there are slight differences in how formative assessment is defined both in research and practice (Dunn & Mulvenon, 2009). Bennett (2011) noted that the current definitions of formative assessment include a broad diversity of approaches and that there is little consistency from one implementation to another. Formative assessment, therefore, has been used in different contexts that go beyond its traditional conceptualization and development. For example, formative assessment has been promoted to counterbalance the effects of summative assessment and testing for accountability purposes (Shepard, 2005). Wiliam and Leahy (2007) noted that formative assessment is often used to describe assessments that provide information on the likely performance of students and include some sort of feedback, regardless of the purpose. Black (2013) warned of the importance of distinguishing among assessments with different purposes. For example, some test-development companies label their products, such as interim or benchmark tests, as formative assessments, but such products, in and of themselves, are not consistent with the research corpus that supports formative assessment as a practice that involves both teachers and students (Popham, 2009; Shepard, 2009).

Formative Assessment Within Instructional Systems

Formative assessment has the purpose of informing students' learning and teaching (Bell & Cowie, 2001). Based on the extensive review of formative assessment literature and consideration of the theories that underlie this process, the Council of Chief State School Officers defined formative assessment as a "process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students' achievement of intended instructional outcomes" (CCSSO, 2008, p. 3).
Formative assessment is not merely a collection of assessment procedures to be administered; it is a teaching practice embedded within instruction. Furthermore, formative assessment involves a continuum of multiple processes that are organized in different levels of length, formality, and planning (Shavelson et al., 2008). These include planned assessment-related activities embedded in the instructional curriculum (Heritage, 2007), informal formative assessment (Ruiz-Primo & Furtak, 2006; Ruiz-Primo, 2011), unplanned formative assessment (Bell & Cowie, 2001), and rapid/on-the-fly instructional moments that may have a formative character (Shavelson et al., 2008). All types of formative assessment practices are important and have the potential to improve students' learning (Black & Wiliam, 1998a) because they can be used by teachers and students to make instructional and learning decisions.

Cowie and Bell (2001) described a model that integrates planned and interactive formative assessment situations. The purpose of planned formative assessment is to get information from the whole class about the understanding of particular curricular content. It is characterized as cycles of eliciting-interpreting-acting. Interactive formative assessment takes place during the instructional activities and is unplanned. Its purpose is to mediate student learning in a more contextual and personal manner. It also involves shorter cycles of noticing-recognizing-responding and tends to be done individually or in small groups. Planned and interactive formative assessment serve different goals, but they are highly interconnected and feed into each other. Depending on the purpose, teachers may move from one type to another in any given class period.

The variety of types of formative assessment suggests that this practice is, rather than a single process, a complex system in which multiple layers are connected and depend on each other. In his critical review of research on formative assessment, Bennett (2011) suggested that formative assessment needs to be understood as part of a comprehensive system in which all components work together to promote student learning and help teachers develop assessment competency. Allal and Mottier-Lopez (2005) made a connection between the formative assessment processes and regulation of learning (Perrenoud, 1998). This includes adapting teaching and learning activities in different ways: retroactive, interactive, and proactive. At the core of these regulation processes, adjustments can be made in some "moments of contingency," in which "learning activities may change the course in the light of the pupils' responses" (Wiliam & Leahy, 2007, p. 35). Therefore, regulation of learning could be translated into four formative assessment elements in a systemic manner: goal setting, monitoring progress toward a learning goal, interpretation of feedback, and goal-directed adjustments in teaching and learning (Allal, 2010).

Guiding Principles to Organize Formative Assessment

The variety of types of formative assessment can be organized according to guiding principles related to setting goals, gathering evidence from students, and making adjustments in instruction in order to guide and promote student learning.
Black and Wiliam (2009)—based on Ramaprasad (1983) and Wiliam and Thompson (2007)—established three key processes in formative assessment in relation to learning and teaching: 1) establishing where the learners are in their learning; 2) establishing where the learners are going; and 3) establishing what needs to be done to get students there (the goal). In a similar way, drawing on Sadler's work (1989), Hattie and Timperley (2007) described 'feedback loops' that included three guiding questions to be asked by students and teachers: (1) where am I going?, (2) how am I getting there?, and (3) where to next? These three guiding principles of formative assessment are described below.

Establishing Where the Learners are in their Learning (Where am I Going?)

Setting instructional goals or learning targets is essential for formative assessment since doing so allows the completion of feedback loops (Black & Wiliam, 1998b) and establishes a direction for students' learning processes (Marzano, Pickering, & Pollock, 2001). Learning targets are usually related to curriculum standards and refer to different levels of specificity. Even though many teachers are somewhat familiar with setting learning targets as part of instructional planning, there are also learning goals that come up over the course of informal formative assessment practices. These targets tend to be short-term, discrete, and immediate (Ruiz-Primo, 2011). From the formative assessment perspective, it is important that students appropriate the learning goal, because doing so allows them to actively reflect on their current learning in relation to the learning goals as well as on the relationship between the learning goal and classroom activities, i.e., self-regulation (Allal, 2010). Therefore, learning targets should be shared with the students (Brookhart, Moss, & Long, 2007) and stated in student-friendly language (Huinker & Freckmann, 2009).

Establishing Where the Learners are Going (How am I Getting There?)

Classroom practice cannot be formative if there are no opportunities to verify (1) whether the student knows, and (2) what the student knows, understands, and can do (Torrance & Pryor, 1998). Thus, the process of formative assessment needs ongoing opportunities to elicit evidence of students' thinking. Teachers need information from students as a basis to make informed judgments, to provide feedback to students, and to make instructional decisions. Although many teachers are familiar with and successful at using different formative assessment strategies to elicit student ideas, the process of interpreting the information collected tends to confuse some teachers (Furtak, 2011).

In ensuring a learning-centered classroom, it is important for teachers to be able to 'notice' students' prior ideas (e.g., Hiebert, Morris, Berk, & Jansen, 2007; Van Es & Sherin, 2002). Noticing consists of three steps: (1) attending to significant student ideas, (2) reasoning about these ideas, and (3) making instructional decisions with regard to these ideas. In differentiating planned and interactive formative assessment, Cowie and Bell (2001) used "noticing" in a slightly different manner. In that model, noticing, recognizing, and responding are part of the interactive/unplanned side of formative assessment, while cycles of elicitation-interpretation-acting are related to planned formative assessment.

Regardless of the strategy used to gather information about students' ideas, three basic principles are essential to formative assessment practice.
The first relates to the scope of students' ideas. Some teachers elicit a restricted number of views about students' prior knowledge, and within this group, they tend to focus on academic concepts (Otero & Nathan, 2008; Otero, 2006). The second principle is connected with the use of students' ideas to design instruction. If teachers can anticipate students' problematic ideas that are recurrent (Heritage, Kim, Vendlinski, & Herman, 2009; van Zee & Minstrell, 1997), these can be included in planned formative assessment activities. The third principle implies that elicitation of students' ideas must go beyond the mere verification of the correct and acceptable responses (Ruiz-Primo & Furtak, 2007) and focus on promoting student thinking and reflection. Students need to be active participants in the formative assessment process, so they can self- and peer-assess in order to reflect on what they know and what they can do (Heritage, 2013). Students also need opportunities to reflect on how they are learning and to discuss possible ways of action with teachers and peers.

When teachers elicit information from students, they need to consider balanced opportunities to conduct convergent assessment (i.e., verifying whether the student knows) and divergent assessment—to discover what students know, understand, and can do (Pryor & Crossouard, 2008; Torrance & Pryor, 1998, 2001). Duschl and Gitomer (1997) referred to the use of "assessment conversations" as an instructional dialog that embeds assessment in the lesson structure to engage students in evaluating different representations of students' ideas that are supported by evidence. These conversations are dialogic and interactive and serve as tools for student participation, self-assessment, and feedback to students (Furtak, 2012; Ruiz-Primo & Furtak, 2006; Ruiz-Primo, 2011), because they make visible what and how students are thinking. The use of formative assessment may give students evidence that can be used for current and future learning, so students and teachers will be able to discern ways of action that guide learning (Heritage, 2013). Thus, teachers and students collecting evidence of learning to promote dialog and reflection aligns with the ideas that recognize formative assessment as a dialogic (Crossouard & Pryor, 2012) and interpretive (Van Es & Sherin, 2002) process.

Establishing What Needs to be Done to Get Students There (Where to Next?)

In a learner-centered formative assessment system, collecting information from students is pointless if it is not related to a particular use, especially to support students' learning and to orient the subsequent instructional moves. Feedback and instructional decisions are central to promoting student learning and are at the core of effective instruction (Wiliam, 2013). For Ruiz-Primo and Furtak (2006), using student evidence implies more than providing the right response or mere evaluative feedback. They stated:

    A teacher can provide students with specific information on actions they may take to reach learning goals: ask another question that challenges or redirects the students' thinking; model communication; promote the exploration and contrast of students' ideas; make connections between new ideas and familiar ones; recognize a student's contribution with respect to the topic under discussion; or increase the difficulty of the task at hand (p. 61).
Therefore, utilizing information about students' learning encompasses a decision-making process in which teachers balance their own beliefs and values with the demands of external factors to determine better courses of action (Hiebert et al., 2007; McMillan, 2003). Although teachers' instructional decisions are highly contextualized, a general guideline is that these decisions have to be made in a fashion such that the next instructional steps are likely to be better, or better-founded in empirical evidence, than the decisions that teachers and students would have made in the absence of such evidence (Black & Wiliam, 2009).

When involved in formative assessment, students also need to use assessment evidence to make adjustments in their learning (Sadler, 1998). Feedback and peer assessment constitute different ways to evaluate what to do next. Effective feedback is clear, descriptive, and related to learning goals (Hattie & Timperley, 2007). Peer assessment enables students to actively make judgments about peers' performance or responses and helps students experience different types of feedback (Topping, 2009).

Research on Formative Assessment in Science

Research on science education has documented models and experiences of implementing formative assessment with pre-service (Buck et al., 2010; Otero, 2006) and in-service teachers (Ash & Lewitt, 2003; Bell & Cowie, 2001; Black et al., 2004; Coffey et al., 2005; Falk, 2012; Furtak, 2012; Ruiz-Primo & Furtak, 2006; Sato, Coffey, & Moorthy, 2005). These studies have shown the effects of formative assessment in science lessons and classrooms on teacher practice and student learning (e.g., Coffey et al., 2005; Furtak, 2012). In addition, these studies recognized the centrality of formative assessment to addressing students' thinking about science content and showed that the use of formative assessment has been helpful for promoting classroom practices related to questioning strategies, feedback, and scientific inquiry (e.g., Black et al., 2004; Ruiz-Primo & Furtak, 2006). Formative assessment can be especially relevant when teachers and students are working with inquiry, argumentation, and socio-scientific issues (Driver, Newton, & Osborne, 2000; Duschl, 2003), because students need teacher support in topics that are particularly challenging to them, such as reviewing their claims and warrants.

The use of formative assessment has helped science teachers: be more reflective about students' understandings (Furtak, 2012; Ruiz-Primo & Furtak, 2006), develop more acute judgments about student thinking (Ash & Lewitt, 2003), better identify students' science conceptualizations (Furtak, 2012), and increase their pedagogical content knowledge (Falk, 2012). However, implementation of formative assessment in science classrooms is not straightforward. For example, Buck and Trauth-Nare (2009) found that merely implementing formative assessment tools does not necessarily imply that students are learning science in depth, because students were initially reluctant to communicate their ideas to the teacher and did not receive descriptive feedback that supported their learning.
(2011) pointed out that research on formative assessment in science has focused on strategies used by teachers but has mainly overlooked the disciplinary substance of what should be formatively assessed, especially in terms of understanding the particularities of the content area.

Factors that Influence Formative Assessment Practice

Teachers can grow professionally when they have the opportunity to study and change their practices in the interest of student learning (Ash & Lewitt, 2003). Teachers who are learning about formative assessment can detect and examine their own needs, beliefs, priorities, and assumptions (Black et al., 2004; Sato, 2003). Research has identified different factors that influence learning about and developing formative assessment practices in the classroom. These fall into two categories: the social context of teaching and characteristics of the learner (Grossman, Smagorinsky, & Valencia, 1999). The social context of teaching includes time and support in schools and cultural values and assumptions about assessment. Characteristics of the learner include teacher beliefs and teacher knowledge.

Time and Support in Schools

Changes in formative assessment practices are slow and need to be gradually implemented (Bennett, 2011; Black et al., 2004; Dunn & Mulvenon, 2009; Ofsted, 2008; Webb & Jones, 2009; Wylie et al., 2009). Changes are more likely to occur if they are a collective effort with support from school principals and administrators and with the dissemination of information and practices regarding formative assessment in the school building (Black et al., 2004; Ofsted, 2008; Webb & Jones, 2009; Wylie et al., 2009). Collaboration also helps teachers to share responsibilities for the implementation of classroom formative assessment practices. The implementation of successful formative assessment practices in schools is strongly related to school administrators' support, leadership, and effective communication (Stiggins, 2009; Wylie & Lyon, 2009) as well as administrators' trust and high expectations (Ofsted, 2008). In terms of designing professional development programs, getting school support and collaboration is essential, since the organizational conditions of schools tend to be overlooked (van Driel, Meirink, van Veen, & Zwart, 2012). For formative assessment to take hold in classrooms, time, a critical issue, needs to be better managed in schools through increased collaboration with and support from colleagues and administrators. The success of implementing formative assessment in schools is related to the ways schools perceive accountability-related demands, for example, when schools direct their efforts toward student preparation for high-stakes assessments (Birenbaum, Kimron, & Shilton, 2011) and college-admission exams (Thomas & McRobbie, 2012). For some teachers, there is a tension between covering the curriculum and implementing formative assessment practices, which take time and repeated efforts (Buck & Trauth-Nare, 2009).

Cultural Values and Assumptions About Assessment

Although there are calls for balanced assessment systems (e.g., Brookhart, 2013; Stiggins, 2006) and for setting a continuum between formative and summative assessment practices (Allal, 2010), there is a persistent over-emphasis on summative assessment in schools that undermines thoughtful instruction (Shepard, 2000).
For instance, teachers' instructional practices tend to be aligned with the expectations of high-stakes assessments, which usually consist of a single paper-based examination taken in the last week of school (Black, 2013). Several research studies (e.g., Black & Wiliam, 2005; Gioka, 2009; Harlen, 2005; Lamprianou & Christie, 2009; McClam & Sevier, 2010) described experiences where the implementation of formative assessment created tensions with teachers' traditional grading practices. The issue of grading also affects students' expectations about instruction. Thomas and McRobbie (2012) described the tensions that students experienced in the context of a pedagogical change related to the development of students' metacognition in a high-school chemistry class. Even though students recognized the benefits and importance of this new instructional approach, they also expressed concerns about maintaining their academic success regarding results in high-stakes tests, especially those related to college admission. International experiences show that the implementation of formative assessment in schools may be complicated, especially in cultures where parents, students, and teachers are oriented to tests and results (Berry, 2011). Although different countries have implemented policies to promote more balanced assessment systems in schools, making changes in a culture that is deeply rooted in a society is a hard endeavor. Other studies (e.g., Black & Wiliam, 2005b), however, suggested that local contexts, cultures, and educational policies about assessment make some difference in promoting formative assessment practices at the school level. For example, the differences in implementation of formative assessment between examination-oriented cultures such as Hong Kong (Berry, 2011; Brown, Kennedy, Fok, Chan, & Yu, 2009) and others such as New Zealand, where formative assessment is also emphasized (Brown & Hirschfeld, 2008; Brown, Irving, Peterson, & Hirschfeld, 2009; Gilmore, 2002), may imply different ways of enacting formative assessment in schools and classrooms.

Teachers' Beliefs

Cornett (1990), as cited in Sweeney, Bula, and Cornett (2001), described how teachers are influenced by personal practice theories: systems of beliefs that are based on previous experiences and that result from enacting classroom practice. Thus, many studies have shown that the implementation of formative assessment is strongly related to teachers' beliefs, attitudes, and conceptions about teaching, learning, and curriculum (Black & Wiliam, 2005a; Coffey et al., 2005; Sato, 2003; Shepard, 2000; Webb & Jones, 2009) and about assessment (Brookhart, 2007; Matese, 2005). Matese (2005) emphasized that creating assessment opportunities for teachers must consider the interaction of beliefs about the purpose of assessment, beliefs about making curricular decisions (i.e., what to teach and what to assess), and categories of teacher knowledge and skills. Implementation of formative assessment requires that teachers have favorable attitudes toward the role that formative assessment can play in enhancing teaching and learning (Heritage, 2007). By examining cases of teachers implementing formative assessment in Israel, Birenbaum et al. (2011) found that teachers with more constructivist beliefs about instruction, learning, and assessment tend to report more frequent use of formative assessment classroom practices.
Research presents different examples of how teachers' beliefs about formative assessment interact with the cultural milieu present in classrooms and schools (Coffey et al., 2005; Maclellan, 2004; Sato, 2003; Webb & Jones, 2009). For example, the implementation of formative assessment practices is affected by contradictions between teachers' beliefs about learning and the existing culture in the classroom community (Webb & Jones, 2009), such as a predominant culture of standardized and summative testing (Marshall & Drummond, 2006). Coffey et al. (2005) described an experience in which two teachers made decisions about formative assessment motivated by their sets of personal beliefs. The particular demands of administrators, parents, colleagues, and students influenced their decisions as well.

Teachers' Knowledge

Formative assessment is based on teachers' professional knowledge and experiences (Bell & Cowie, 2001). When used by teachers for identifying students' prior ideas in science (Otero & Nathan, 2008), formative assessment requires a complex set of integrated skills associated with different categories of teacher knowledge; each category has to be developed and integrated with the others. However, many teachers are not aware of the different purposes and features of assessments, especially because of the strong influence of traditional approaches to assessment, such as viewing all assessments as summative and for grading purposes (Song & Koh, 2009; Stiggins, 2006). Teachers' content knowledge is important for formative assessment because it influences both instructional quality and teachers' abilities to pose precise questions to elicit students' understanding (Matese, 2005). Athanases and Achinstein (2003) and Magnusson, Krajcik, and Borko (1999) considered assessment knowledge to be part of the domain of pedagogical content knowledge, or PCK (Borko & Putnam, 1996; Shulman, 1986). Thus, teachers' PCK plays a large role in determining how effective they are at implementing various formative assessment practices and in making instructional decisions related to student learning (Birenbaum et al., 2011; Cowie & Bell, 2001; Harlen, 2005a; Matese, 2005). For example, Alonzo, Kobarg, and Seidel (2012)—by observing videotapes of physics teachers using content knowledge in interactions with students—identified the importance of teachers' abilities to recognize the content that was difficult for their students as well as the piece(s) of content that students needed in order to make connections with other content. In science, teachers' learning of PCK can be supported by the implementation of content-specific formative assessment professional development (Falk, 2012).

Teachers Learning About Formative Assessment

Despite the abundant evidence about the effects of formative assessment on students, many teachers are not skilled in sustained assessment practices (Athanases & Achinstein, 2003; Brookhart, 2001; Schneider & Randel, 2009) and struggle with implementing formative assessment in the classroom (Black et al., 2004; Black & Wiliam, 2005a; Daws & Singh, 1996). For example, a number of teachers struggle with understanding the purposes of different types of assessments and using them meaningfully. In most schools, grading practices are overemphasized, and this may be related to teachers' lack of training and experience in innovative classroom assessment practices that can improve student learning (Athanases & Achinstein, 2003; Ofsted, 2008).
In science, Myhill and Brackley (2004) found that teachers rarely explored students' prior knowledge. Teachers' instructional decisions were essentially based on "fixed" curricular sequences rather than on adjustments based on students' responses (Minstrell, Anderson, & Li, 2011; Torrance & Pryor, 2001). In recognizing students' ideas, some teachers limited the spectrum of possible responses, and these tended to focus on academic knowledge. Moreover, teachers' expectations of student responses considered only fully developed ideas, in some sort of "get it or don't" manner (Otero, 2006). Many have argued that pre-service teacher education does not help teachers develop formative assessment competence; as a result, new teachers lack basic knowledge and skills in classroom assessment (Brookhart, 2001; Buck et al., 2010; Maclellan, 2004; Popham, 2009; Stiggins, 2009). That may be connected with the fact that pre-service teachers tend to learn about assessment disconnected from a particular theory of student learning (Otero, 2006). The lack of assessment literacy (Abell & Siegel, 2011; DeLuca & Klinger, 2010; Stiggins, 2007) continues into teachers' professional careers, because few teachers and school administrators have opportunities to learn about sound classroom assessment practices (Stiggins, 2006). Thus, in order for formative assessment to have a positive impact on students, it is important that teachers have opportunities to learn about formative assessment as well as the impetus to enact formative assessment practices.

Teachers need formative assessment knowledge to identify what formative assessment is and is not. They also need knowledge and skills to implement formative assessment practices in the classroom, considering the particular characteristics of their students, content area, level of teaching, and cultural settings. Enacting formative assessment practices can place new demands on teachers, including finding ways to support students in providing feedback to peers, identifying learning targets, assessing progress toward learning targets, and adapting future instruction based on evidence from students (Wylie & Lyon, 2009). For Ash and Lewitt (2003), learning about formative assessment is connected with the appropriation of students' thinking and actions and with making sense of particular meanings for students, for example, the ways in which students think about science content. This implies that teachers are able to see learners in more detail and perspective, and thus new problems related to instruction may arise. In doing so, teachers develop a fine-tuned understanding that helps them support and respond to students' thinking. In sum, learning about formative assessment implies developing a varied and complex set of specific knowledge and practices that are interwoven in teaching and learning. Professional development in formative assessment has been proposed as a strategy to help teachers learn and improve this practice (e.g., Black et al., 2004; Popham, 2008; Schneider & Randel, 2009; Stiggins, 2009).

Professional Development Focused on Formative Assessment

Schneider and Randel (2009) distinguished several characteristics of effective formative assessment professional development: administrative support, individualization of teachers' learning goals, content knowledge, time, collaboration, coherence, and active learning.
Sato (2003) noted that professional development in formative assessment should also address teachers' characteristics, such as content knowledge and beliefs, so teachers can connect their personal approaches to assessment with their overall instructional practices. Professional development is not solely about providing resources for teachers; teachers also have to want to use them and make them appropriate for their practice (Torrance & Pryor, 2001). When teachers have opportunities to guide their own learning and make more connections with classroom practices, they can leverage the focus of the professional development into their own practice (Schneider & Randel, 2009), especially when they perceive the experience as important and useful for their practice (Moscovici & Varrella, 2008). Webb and Jones (2009), however, emphasized that teachers may feel constrained in implementing formative assessment strategies, partially because of the contradiction between teachers' beliefs about learning and the existing culture in classrooms and schools. Professional development that aims to support teachers' use of formative assessment should focus not only on what teachers need to know but also on the larger system in which those teachers find themselves (Wylie & Lyon, 2009) in order to provide teachers with adequate support. Local policies, contexts, curricular guidelines, and local administrators need to be considered in the design of professional development.

Research on the effectiveness of various professional development models focused on formative assessment has reported positive findings in teachers' practices. Working with small groups of teachers supported by researchers, Black et al. (2004) and Sato (2003) reported improvements in the use of questioning, the use of feedback, peer- and self-assessment, and the formative use of summative tests. In a collaborative action research project, Torrance and Pryor (2001) reported changes in classroom practice, especially in communicating criteria for success to students. Working with two teachers, Ash and Lewitt (2003) suggested that opportunities for teachers to enact formative assessment in the classroom may complement the effect of professional development, especially to the extent that teachers can use cognitive tools developed from interaction with students. Working with communities of practice for elementary teachers, Webb and Jones (2009) reported that professional development helped teachers become more likely to support students' learning processes, while students became more likely to take responsibility for their own learning.

Local and Community-Based Professional Development

Some studies (e.g., Borko, 2004; Desimone, 2009; Grossman & Woolworth, 2001; Thomas et al., 1998; Wei et al., 2010) recommended professional development models that are situated in the school context, that focus on student learning, and that promote teachers' engagement in activities that allow them to share, discuss, and reflect on their classroom experiences. In doing so, an important factor related to successful professional development is the creation of a sustained community of learners that allows teachers to actively experience and apply what they learned in their own context (Moscovici & Varrella, 2008). The use of local professional learning communities (Lave & Wenger, 1991) composed of teachers (and administrators) is one way to meet the requirements of quality professional development.
Professional learning communities (PLCs) may provide opportunities for teacher learning and change by enabling teachers to work collaboratively toward a common goal (Thomas et al., 1998), and they constitute a vehicle for supporting collaborative inquiry (Nelson, Slavit, Perkins, & Hathorn, 2008). Two assumptions justify local PLCs: (1) knowledge is situated in daily experience and best understood through critical and collective reflection, and (2) active participation in this process is related to increased knowledge and student learning (Vescio et al., 2008). Because of their school-based nature, PLCs can provide support for teachers to make changes in classroom practice, become a space for reflection and insight, engage teachers in a community that can be sustained over time (Stoll et al., 2006), and attend to process and content by engaging teachers in authentic problems within their professional practice (Wilson & Berne, 1999). The impact of PLCs on teacher practice and student learning is not conclusive. In their review of research studies on the effectiveness of professional learning communities, Vescio et al. (2008) concluded that limited evidence of the impact of this model exists beyond teachers' perceptions—although it seems that participation in PLCs helps teachers orient their practice toward a more student-centered focus and increases collaboration among teachers. Moreover, research suggests that creating and sustaining a PLC is not an easy task, especially given the large number of factors that influence its success (Stoll et al., 2006).

Notes About Teacher Practice and Professional Development

For Desimone (2009), well-designed and effective professional development models should produce increased teacher knowledge and skills and/or changes in teachers' beliefs and attitudes so that teachers use these new learnings in their classrooms with a focus on enhanced student learning. This traditional approach understands the impact of teacher learning as unidirectional and mostly focused on the outcome. Kazemi and Hubbard (2008), however, called for a different approach to examining the contribution of a professional development model: considering the multidirectional influences between the professional development and the classroom, as well as the coevolution of participants in both settings. This is important because teachers act professionally in multiple activity settings (Wertsch, 1985). For example, in a PLC composed of teachers from different content areas, teachers' abilities to adapt and reshape what they learned to the particularities of their content areas are critical, especially when it is argued that formative assessment is a content-specific practice (Coffey et al., 2011). Kazemi and Hubbard (2008) used the distinction between "knowledge" and "knowing" (Cook & Brown, 1999) to illustrate how the same type of learning is used differently depending on the setting. Knowledge refers to something that teachers possess; knowing implies knowledge that is embedded in action and enacted. Therefore, the concepts of knowledge and knowing and their interactions are relevant to understanding what teachers learn in professional development and how that knowledge is transformed to be utilized in a variety of settings and for a variety of purposes. However, the manner in which this process of translation occurs and how new ideas and understandings are "reshaped" for classroom use is unclear (Kazemi & Hubbard, 2008; Hewson, 2007).
Understanding Teacher Practice

In order to understand the complexity involved when teachers learn about a practice and enact the practice in different settings, the distinction between possessing types of knowledge is not enough. Drawing on sociocultural theories of learning, I explore some approximations of the meaning of practice to illustrate possible ways of visualizing teacher learning. From the perspective of cultural-historical activity theory (CHAT), practice is indistinguishable from action. Individuals' minds form part of an activity system that is itself part of the material world and shaped by the social and material milieu (Roth & Lee, 2007). For example, Roth and Lee (2007) described a classroom situation in which students engaged in the study of ecosystems. The students participated in a field trip and designed projects that might help find solutions to the case of a polluted creek. In doing so, practice meant that students engaged in particular instructional activities guided by tools and resources existing in the community, participated in socially-promoted activities, and found motivation in the goals and values of their community. Similarly, Wenger (1998) posited that learning occurs by social participation within a community. Individuals are "participants in the practices of social communities and constructing identities in relation to these communities" (p. 4). From the perspective of teachers learning in a PLC, learning occurs when teachers are engaged in discussions, sharing and analyzing classroom artifacts, or creating tools, because they become active subjects in the community through participation. Wenger considered practice to be one component of learning and defined it as "shared, historical and social resources, frameworks, and perspectives that can sustain mutual engagement in action" (p. 5). The concept of habitus (Bourdieu, 1990) provides another lens to explain that practice is shaped by norms and tendencies that are created in social groups and guide human action and thinking. Habitus can also be associated with the notion of teachers' dispositions (Milne, Scantlebury, & Otieno, 2006). Working in schools and teaching chemistry can be conceived as habitus, which tends to be stable, organized in schemata, and shaped by the values and dispositions of a particular culture—for instance, the culture of a U.S. suburban high school. Even though the experiences of these teachers may differ and they may hold personal theories, a large part of their instructional practice is shaped by socially and culturally accepted expectations of teaching and learning chemistry. In the context of a professional development model that purports to provide impetus for progress and change in instructional practice, these theoretical approaches to practice may provide some elements to understand how teachers enact a practice that they learned in professional development as well as the mediating factors that influenced that enactment.

Changes in Teacher Practice

If teacher practice is complex, shaped by the values and dispositions of a culture, seen in action, and developed in a community, is it possible to observe changes in teacher practice? Change is also related to the need to balance the multiple constraints of different stakeholders. For example, Cobb et al.
(2003) posited that "teachers' instructional practices are profoundly influenced by the institutional constraints that they attempt to satisfy, the formal and informal sources of assistance on which they draw, and the materials and resources that they use in their classroom practice" (p. 13). Changes are also promoted and shaped by teachers' values and beliefs, especially in the areas they feel are important to them (Coffey et al., 2005). Therefore, change may not look the same across teachers. Some teachers change more than others, and some new practices are easier to implement (Borko, 2004). Amid this complexity, transformation and change are relevant for developing teacher practice. Enacting practices such as formative assessment helps teachers to be more focused on student thinking and learning (Athanases & Achinstein, 2003). That change in focus implies that teachers are more aware of new problems of practice that demand more accurate professional discernment. Thus, teacher learning occurs when teachers are able to generate an enlarged instructional scope to navigate and when new learning possibilities are created (Roth & Lee, 2007). The emergence of instructional challenges and dilemmas (Kazemi & Hubbard, 2008) implies that teachers can describe and analyze student work at a finer grain size, and this might constitute learning. Due to its dialogical and contextual nature, formative assessment implies that teachers face dilemmas of practice (Bell & Cowie, 2001), which are based on professional judgment. Webb and Jones (2009) described the process of expansive learning (Engeström, 2001) in the enactment of formative assessment. When enacting formative assessment, the force that enabled change in practice was the "contradiction between the teachers' beliefs about learning and the existing culture in the classroom" (p. 165). Ash and Lewitt (2003) described how teacher learning occurred at the moment of enacting formative assessment practices. They also explained that teachers and students appropriated each other's thinking and actions to the extent that both parties were able to make sense of particular meanings, such as students' different ways of thinking. In summary, learning and enacting formative assessment implies embedding this practice in teachers' current instructional systems of teaching and learning. These systems reflect a teacher's mindset and are situated within a culture that influences instruction. This poses the question of how teachers enact a new practice—such as formative assessment—learned in the context of professional development when different factors influence that enactment. In the next chapter, I will describe the research design and methods for this dissertation study, including the analytical framework that will be used to explain and organize the findings of this study.

CHAPTER 3: RESEARCH DESIGN AND METHODS

This dissertation study is situated in an interpretivist paradigm (Blaikie, 2007). The purpose of this paradigm is to understand and interpret a complex, socially-constructed, and dynamic aspect of reality (Glesne, 2010). A key assumption is that reality can be interpreted from individuals' perspectives and, similarly, through the different ways in which people interact and construct social meaning. The research methods related to the interpretivist paradigm are considered qualitative.
Qualitative research methods focus on the examination and interpretation of data to elicit meaning, increase understanding, and develop evidence-based knowledge (Corbin & Strauss, 2008). These methods emphasize long-term and detailed interactions with participants in their particular settings, so data sources come from individuals interacting in a social context. Examples of these data sources are observations of human actions, the analysis of artifacts, and the linguistic processes that refer to individuals' sense-making. As a result, qualitative research methods produce a rich and deep corpus of information about a limited number of people and cases (Patton, 2002), and they describe and explain how individuals interpret and make meaning of a particular object, event, process, or action (Glesne, 2010). Qualitative research, however, would be incomplete without locating experiences within the larger frame and context in which the study is embedded (Corbin & Strauss, 2008). The researcher's role in qualitative methods implies talking with individuals, observing the social setting, and integrating different sources of information that are interwoven. Since qualitative research is used to gain insights into the attitudes, behaviors, social contexts, and values of individuals (Glesne, 2010), the researcher searches for patterns in the data analyses that might result in new theory, using an inductive approach. As human instruments, qualitative researchers need to balance their own perspectives as insiders—by entering into the topic of study—and as outsiders, relying on prior knowledge about the topic and context (Fetterman, 2009).

This qualitative study undertakes an in-depth characterization of formative assessment learning and enactment through observations of two chemistry teachers working in a learning team setting (as professional development) and in their classrooms. The qualitative focus of this study purports to examine formative assessment learning and enactment from the participants' perspective and to discover new approaches and meanings that contribute to the development of empirical knowledge on these topics (Corbin & Strauss, 2008). This study was conducted by examining in-depth interactions with relevant individuals in their own contexts and sites (Glesne, 2010), such as classrooms and professional development sessions. In doing so, the study considers each teacher's beliefs, understandings, and perceptions. The collection of different sources of data helps in understanding the ways in which teachers linked the different settings (activity systems) to learn and enact formative assessment in their classrooms and the factors that mediated these processes of learning and enactment.

Analytical Framework: Sociocultural Theories of Learning

This section describes elements of sociocultural theories of learning that help to frame and conceptualize this qualitative study. Sociocultural theories of learning emphasize the interconnections of social and individual processes and the construction of meaning through social interaction that develops cognition and provides meaning (Vygotsky, 1978), such that knowledge results from social and cultural interactions. Accordingly, teaching, learning, and classroom assessment are human and social activities shaped by institutional and cultural contexts (Lemke, 2001). In sociocultural theories, therefore, observation of learning includes three levels of analysis: personal, interpersonal, and community (Rogoff, 1995).
These levels are interconnected, interdependent, and inseparable. Thus, individuals are part of (or participate in) particular activities that shape learning, which is mediated by language, cognitive tools, and symbolic tools. Subjects' participation in particular activities may imply the emergence of different types of learning and development, such as participation in new types of activities.

Sociocultural Theories of Learning and Formative Assessment

Some scholars have argued that formative assessment needs to be understood in the light of social and cultural models of teaching and learning (e.g., Pryor & Crossouard, 2008; Shepard, 2000). In their foundational review of research on formative assessment, Black and Wiliam (1998a) discussed this social approach by emphasizing: "…all the assessment processes are, at heart, social processes, taking place in social settings, conducted by, on, and for social actors" (p. 56). Moreover, participation in formative assessment configures the development of new identities for teachers—who are also learning about classroom practices—and students. Pryor and Crossouard (2008) explained:

Formative assessment interactions involve enabling learners first to engage with new ways of being and acting associated with new, aspirational identities; and second to have these recognized as legitimate, where what counts as legitimate is strongly framed by institutional discourses and assessment demands. (p. 3)

Formative assessment has been described as a dialogic (Crossouard & Pryor, 2012) and discursive (Bell & Cowie, 2001) practice. For Allal (2010), formative assessment is concretized in the social interactions that occur in the classroom and in the different assessment procedures that mediate this process and imply collective action in learning (Heritage, 2010). For example, the use of pedagogical tools such as 'assessment conversations' may serve to model norms and procedures that promote student engagement and classroom enculturation (Ruiz-Primo, 2011). Teachers' use of formative assessment is highly idiosyncratic, because its enactment depends on different purposes such as examining student understanding, eliciting particular types of information, making interpretations, and making decisions (Bell & Cowie, 2001). Meaning-making occurs in the classroom during assessments, so each assessment is "a head-on encounter with a culture's models of prowess" and an occasion for learning and self-reflection (Wolf, 1993, p. 213). Doing formative assessment implies individual and mutual appropriation of learning products as part of joint participatory appropriation (Rogoff, 1995). Therefore, formative assessment is inherent in Vygotsky's concept of the zone of proximal development (Shepard, 2000; Ash & Lewitt, 2003). Formative assessment is a process in which teachers and students learn to regulate their own learning in interaction with the social and contextual elements of regulation and with those from the learning environment (Allal, 2010). In summary, one can argue that sociocultural theories of learning provide the most productive way to understand the enactment of classroom formative incidents with the potential to promote students' intended learning (Pryor & Crossouard, 2008; Torrance & Pryor, 1998, 2001).
Sociocultural Theories and Science Education

Using sociocultural theories of learning to understand science teaching and learning implies understanding science, science education, and research on science education as activities conducted through human, institutional, and cultural lenses (Lemke, 2001). Sociocultural perspectives emphasize the role of human interaction in science teaching and learning activities, for example, those that are enculturated in communities (e.g., a science classroom). It can be argued that students are learning the social and cultural traditions that are traditionally emphasized (Lemke, 2001). For example, in the particular case of chemistry education—part of the scope of this study—the predominant way of instruction often consists of deriving and applying empirical laws (Gilbert, Justi, Van Driel, De Jong, & Treagust, 2004; Osborne, 2012). Lemke (2001) argued that science content that merely consists of a collection of facts is not enough to understand the different roles and implications of scientific knowledge outside classroom contexts. Students might have some basic understanding of science and certainly may be able to respond to tests successfully, but their understanding of science is unlikely to be used in real-life contexts. In chemistry education, research has also posited the importance of using multiple representations and levels to understand chemical phenomena (Gilbert & Treagust, 2009; Mahaffy, 2004). Students' thinking about chemical phenomena should integrate three levels—macroscopic, microscopic, and symbolic (Gabel, 1998; Thomas & McRobbie, 2001). Aligned with a sociocultural perspective, Mahaffy (2004) suggested adding the human component as a fourth level needed to examine a chemical phenomenon, in order to emphasize public understanding of the role of chemistry in society and promote meaningful student understanding. Thus, the use of sociocultural theories of learning contributes to a better comprehension of teachers' learning and enactment of formative assessment in the context of chemistry instruction. For the purposes of this study, teachers are engaging with students in particular activities that are situated in and influenced by the social context. Teacher and student learning are processes influenced by cultural and social factors that extend beyond classroom interactions.

Cultural-Historical Activity Theory as a Lens to Understand Complex Systems

In order to respond to the research questions and to give meaning to the learning experiences of both chemistry teachers, this study uses cultural-historical activity theory (CHAT). CHAT's origins are in the seminal work of Vygotsky, Leontiev, and other colleagues during the 1920s and 1930s. Later, Engeström (1987) developed a conceptual and analytic framework for representing the activity system—the unit of analysis that integrates complex sets of information. That framework is usually named CHAT or third-generation activity theory. Vygotsky (1987) defined the concept of mediation to explain the relationship between human development and the environment. This suggests that individuals do not interact directly with the environment, but through artifacts, tools, and other social elements. That interaction produces signs that assist individual development, language, and consciousness. Signs help individuals to make sense of the world, and when a sign is tangible for individuals, it can be used as a tool.
Yamagata-Lynch (2010) posited: "there is not a clear moment when an artifact transforms into a cultural tool, but a cultural tool is an artifact that has gained value within participants' activities rather than as a temporary tool for engaging in the immediate activity" (p. 17). The work of Leontiev and his colleagues in the Kharkov School of Developmental Psychology broadened the scope of Vygotsky's work and introduced human activity as a unit of analysis that is distributed among the subjects that participate in the activity and its other components (Zeek, Foote, & Walker, 2001). That unit of analysis is usually called object-oriented activity (Yamagata-Lynch, 2010). For Leontiev, individuals choose to become members of one particular activity and, motivated by the object, participate to accomplish a particular goal. In doing so, the object-oriented activity condenses the different mediational processes in which the subjects that participate in the activity engage. However, participation in an activity system is dynamic; the events that occur during and within the activity system can modify the subjects' motivations, goals, the environment, and the activity itself (Rogoff, 1995). In activity systems, humans transform social conditions, identify contradictions, make decisions, and create new cultural tools (Sannino, Daniels, & Gutierrez, 2009). CHAT is thus focused on understanding and transforming practice in context (Roth, Lee, & Hsu, 2009), because individuals are also modifying and using different tools and artifacts that may help them to create new activity systems in the process of sense-making and to trigger transformations. From an analytical perspective, the use of CHAT is helpful because it provides support to organize complex real-world data sets in graphic models that allow communication about extensive research (Yamagata-Lynch, 2010). The use of CHAT as an analytical framework is based on the work developed by Engeström (1987). He developed a triangular model that represents the activity system—the unit of analysis for CHAT embedded in a social and historical context (Engeström, 1987; Wertsch, 1991; Yamagata-Lynch, 2010). The activity system is a bounded system in which a series of processes and actions occur, focused on a particular object (Yamagata-Lynch, 2010). In Engeström's model, each component is located either at the vertices of the triangle or at the middle of its sides (see Figure 1).

Figure 1. Activity system model (adapted from Engeström, 1987).

According to Yamagata-Lynch (2010): the object is the goal or motive that organizes the activity; the subject refers to the individual or groups of individuals involved in the activity system; the tool includes artifacts, symbols, or concepts that can act as resources for the subjects; the rules are any type of regulations that frame the activity; the community corresponds to the social group to which the subject of the activity system belongs; the division of labor refers to how tasks are shared among the activity system's subjects and the community; and the outcome of an activity system is the final result of the activity. Therefore, relations among the components of the activity system model occur within a bounded system, but one that is also embedded in a particular culture and history. During an activity, contradictions and tensions can emerge from any component. Contradictions can be characterized as unknowns, barriers to achieving the object, or conflicts between components (Nardi, 1993).
CHAT is a meaningful approach for investigating transforming systems and for identifying the main contradictions and tensions within a system, in which contradictions may trigger driving forces for change (Engeström, 1999). It also allows one to understand the processes by which an individual adopts the pedagogical tools available in a particular activity setting (Grossman et al., 1999). To some extent, CHAT integrates and fuses the interaction between individual and environment (Yamagata-Lynch, 2010).

CHAT and Educational Research

Cultural-historical activity theory has been used for different purposes in educational research. For Engeström (1987), its main feature is to support researchers in participatory and interventionist approaches. Penuel (2014) concurred that CHAT can be used to illuminate formative-research interventions. Additionally, CHAT has been used to describe real-world learning situations, develop new research methods, explore theoretical concepts, and plan solutions to work-based problems, among other purposes (Yamagata-Lynch, 2010). Connected with this dissertation study, Grossman et al. (1999) proposed that activity theory—an umbrella term that includes CHAT—is a helpful framework for studying teachers' professional development, because it emphasizes the importance of social and cultural factors in particular contexts. Webb and Jones (2009) used CHAT to investigate the enactment of formative assessment practices by primary teachers. They explored changes and tensions of teachers and students in the classroom as well as the driving forces that enabled change. Windschitl and Thompson (2011) used activity theory to design developmental tools for teaching that can be used to support pre-service teachers' science practices. Thomas and McRobbie (2012) used CHAT to interpret changes and tensions related to the use of new pedagogical tools created to promote students' metacognition in high-school chemistry. Patchen and Smithenry (2014) used CHAT to analyze day-to-day participation structures in the enactment of a chemistry classroom. In summary, CHAT is an appropriate framework for this study and contributes to responding to the research questions. CHAT is particularly useful in understanding how experiences in the professional development were configured, as an activity system, to make possible science teachers' learning about formative assessment. It is also relevant in understanding how the enactment of classroom formative assessment was mediated in each teacher's activity system and how it evolved over time.

Teacher Learning and Appropriation of Pedagogical Tools

For this dissertation study, one relevant point is to examine the ways that the teachers leveraged their formative assessment learning in classroom practice. Thus, teacher learning can also be understood through the ways that teachers appropriate different activity system tools that act as mediators to guide goal-directed action. Appropriation refers to the process by which a person incorporates and embraces pedagogical tools (Windschitl & Thompson, 2011) to be used in a social environment (Grossman et al., 1999; Wertsch, 1991). Therefore, I am interested in the different ways that teachers adopt pedagogical tools and are able to transform the activity system in which they are embedded.

Levels of Appropriation

In order to characterize how teachers adopt pedagogical tools that mediate their activity systems, Grossman et al. (1999) defined different levels of appropriation.
I will use this framework to illustrate how teachers made progress in their formative assessment practices. These levels are described below:

Lack of appropriation. Teachers do not adopt particular pedagogical tools. This may occur because the tool's use is not understood, the tool is too far from the learner, or there is resistance to using the tool.

Appropriating a label. This occurs when a teacher uses the name of the tool but does not understand any of its characteristics. The teacher is not aware of the implications of the tool.

Appropriating surface features. A teacher at this level is learning some features of the pedagogical tool, but he or she is not able to understand how these features are integrated and function as a whole.

Appropriating conceptual understandings. A teacher at this level understands the underlying rationale for the pedagogical tool as well as its implications, identifies the nuances of the tool, and is able to understand the applications of the tool in different contexts. However, the tool is not yet translated into practice.

Achieving mastery. This level implies teachers' understanding of the conceptual underpinnings of the pedagogical tool as well as the skill to use the tool effectively in the classroom. This level is usually seen in experienced and expert teachers.

Research Approach: Case Study

Qualitative case studies help to understand the functioning of real-life cases in real contexts (Stake, 2006), so a case is studied in depth and intensively (Glesne, 2010). According to Yin (2009), a case study focuses on the investigation of a phenomenon in depth and in its particular context, especially in situations in which the limits between the phenomenon (the case) and the context are not clearly distinct. Stake (1995) mentioned that a case study is a system with some sort of boundaries, but it also includes the analysis of its integrated working parts. A "case of study" is located within a bigger system or context (Creswell, 2007) and is highly interconnected with the contextual conditions that surround it. Therefore, the researcher's role in a case study is to delimit the boundaries of the case from its contextual milieu. Stake (2006) considered that the main objective of case study research is to understand the case, which also implies the examination of its specificity, functioning, and activities. The importance of case study research derives from its focus on rich and deep understanding (Patton, 2002), especially regarding the research foci and their working parts. A case study has value when it highlights the phenomenon to be studied and becomes a focal point and a learning opportunity (Schram, 2006). This study is designed as an embedded single case study (Yin, 2009). Embedded case studies are a subtype of single case study that includes, within the single case, sub-units of analysis. These embedded units of analysis can broaden the scope of analyses and increase the complexity and richness of the case of study. Patton (2002) recommended nesting and layering case studies because, "You can always combine studies of individuals into studies of a program, but if you only have program-level data, you cannot disaggregate it to construct individual cases" (pp. 447-448). Furthermore, the design and analysis of embedded case studies need to balance the findings that emerge in each unit of analysis with the holistic aspects so that they can inform and illuminate each other.
Case Study: Single and Embedded Design

Case study research is helpful for describing and explaining real-life problems operating in real situations (Stake, 2006). This study encompasses the experiences of two chemistry teachers who learned about formative assessment in team-based professional development and are taking steps to enact some formative assessment practices in their own classrooms. This single and embedded case study is justifiable because it purports to illustrate the particular events of two experienced science teachers who are embedded in a similar context (school, content area, type of students, grade level, etc.) and learned about formative assessment in the same learning team as professional development. Moreover, the description of the enactment in two different activity settings (such as classrooms) is helpful for understanding in depth the diversity of mediating factors and conditions that affect the enactment of formative assessment. Thus, gaining and sharing understandings of complex human activities through particularization is one of the researcher's goals in using activity systems analysis (Yamagata-Lynch, 2010). Stake (2006) mentioned that the researcher's decision for a single case study is to explain a phenomenon for purposes of particularization (instead of generalization). In addition, this case study can be justified as longitudinal (Yin, 2009), to the extent that it pertains to describing the coevolution of teachers' involvement in classroom practice and professional development (Kazemi & Hubbard, 2008). Therefore, the engagement of two chemistry teachers with formative assessment in the learning team and in their classrooms constitutes this particular case study. As previously described, the purposes of FAME (Formative Assessment for Michigan Educators) are to promote teachers' learning of formative assessment knowledge and practice as well as to provide an impetus to implement, reflect on, and refine new instructional and assessment practices. The two chemistry teachers learned about formative assessment in the same activity system. They participated in the same activities of the FAME learning team, shared discussions about theory and practice, and created artifacts to be used with their students. Each teacher, however, enacted formative assessment in a different classroom (activity) setting. Therefore, the instructional processes of both teachers (i.e., in the classroom) are the units of analysis embedded in the case study, including the enactment of formative assessment. Each classroom setting is defined as a particular activity system and delineated as a unit of analysis. Thus, the embedded case study design allows gathering evidence from these teachers in different sub-settings and by using different data sources (Yin, 2009) that enrich the understanding of this case (see Figure 2).

Figure 2. Representation of the single embedded study framed for this research, including the case of study (two chemistry teachers learning about formative assessment knowledge and practice in a team-based professional development model), the embedded units of analysis (enactment in classroom 1 and enactment in classroom 2), and the nested layers of context (classroom level; school level; FAME PD and learning team; district and macro policy levels; and expectations for students in a particular culture) (adapted from Yin, 2009).
Along with the characterization of teachers in the two units of analysis and in their participation in the FAME professional development ("the case"), the connection between the case and the embedded units of analysis is paramount. For this study, understanding how participation in the learning team influences each teacher's formative assessment enactment is very important, especially in terms of the mediating factors that configure each teacher's activity system. A good case study is context-sensitive (Patton, 2002). In this sense, the contextual background may include different elements and components. Stake (2006) mentioned that historical, cultural, and physical contexts are of interest, although other areas may include social, economic, political, ethical, and aesthetic concepts. Case study research is compatible with activity systems analyses (Yamagata-Lynch, 2010), because activity systems analyses examine systems that are difficult to separate and distinguish from the context in natural settings. In addition, since this study characterizes formative assessment from a sociocultural perspective (e.g., Pryor & Crossouard, 2008) and recognizes the importance of sociocultural perspectives in science teacher learning (e.g., Lemke, 2001; Milne et al., 2006), the influence of the context on this case is considerable. In this particular case study, examples of contextual components are the school culture and policies, the national and local policies that affect curriculum and assessment enactment, the community of teachers in the school building (especially the teachers from the science department), and the non-science teachers who participated in the FAME learning team. Other contextual components are the state science standards, students' aspirations, students' socioeconomic status, etc. Moreover, classroom-related components include students' characteristics such as prior ideas in science, classroom participation, attitudes about assessment, etc. Next, I will characterize the research methods utilized in this case study. I will describe the characteristics of the two participating teachers. Then, I will explain the data sources employed in the study, followed by the data analysis procedures. Finally, I will discuss some aspects related to quality in qualitative research, such as triangulation and reflexivity, for this study.

Participants

This embedded case study is focused on the experiences of two female teachers, Diane and Lisa (pseudonyms), who teach chemistry in grades 9-12. I will describe each teacher based on the information that was collected through interviews, conversations, and other secondary data sources. The first teacher, Diane, completed 11 years of classroom teaching experience in 2013. She also serves as science department chairperson. Diane has been teaching in the school building for nine years. Prior to that, she worked in another high school in the same school district. Before working as a teacher, Diane worked for one year in a science museum. In her undergraduate studies, Diane majored in dietetics and minored in psychology. She holds an M.S. in Science Education and is studying for an Education Specialist Degree in a K-12 Leadership program. Diane is certified to teach chemistry, biology, general science, and psychology. She also holds administration certification. In the years of data collection for this study, she taught chemistry (the course I observed), psychology, and advanced chemistry. She taught students from Grades 10, 11, and 12.
Prior to the FAME professional development, she did not recall having any formal training in formative assessment. The second teacher, Lisa, has been teaching for 12 years in the building. This is the only school building where she has worked. She majored in chemistry and minored in physical education. She holds a Master's Degree in Education Science Leadership. In the years of data collection, she was teaching two chemistry courses, both for Grade 10 students: Chemistry (the class that was observed) and Chemistry B—an introductory chemistry class that focuses on the fundamental knowledge needed to become science literate. Besides teaching chemistry, she is also the National Honor Society director in the school. She said that she loves to teach chemistry because "it combines my two favorite things—science and math" [Interview #1, Lisa, March 2013]. Lisa mentioned that she was introduced to formative assessment in the last couple of years, before participating in the FAME learning team. By then, she realized that some of the things she had done in the past had been parts of formative assessment.

Data Sources

One of the features of qualitative research is that it relies on multiple sources of data (Corbin & Strauss, 2008) and frequently uses data obtained through observation, questioning of relevant individuals, and interaction with participants (Glesne, 2010). Accordingly, three primary data sources were considered in the study for each teacher: lesson videotapes, teacher interviews, and videotapes of learning team meetings.

Lesson Videotapes

Lesson videotapes are the most compelling data source considered for the study. Observations of classroom practices are useful for providing information to respond to the research questions in the place where the practices occur (Yin, 2009). Lesson observations are essential to capture assessment practices in the classroom, because these practices mostly reflect interactions between teachers and students (Randel & Clark, 2013) or among students. Multimedia representations, such as lesson videotapes, can capture different components of teachers' practice at once and allow lesson analysis from different perspectives (Hatch & Grossman, 2009). Conducting multiple and consecutive lesson observations of the same teacher allows gathering evidence of teachers' abilities to connect lessons, students' understandings, and events, as well as to display a fuller range of instructional and assessment strategies (Danielson & McGreal, 2000). Thus, lesson videotaping constitutes the primary method of gathering information about teachers' formative assessment practices. From the formative assessment perspective, this modality of consecutive lesson observation is especially important for collecting evidence of the ongoing process of making instructional adjustments based on evidence of students' understandings, although Hatch and Grossman (2009) pointed out that description and analysis of instructional practices may be overwhelming, especially across more than one lesson. For each teacher, I videotaped a 10th-grade chemistry class. According to the high-school science department website, the course is described as a "two-semester college preparatory course." Its focus is on helping students understand chemistry core concepts related to inquiry, reflection, and social implications; forms of energy; energy transfer and conservation; properties of matter; and changes in matter.
The course activities include a variety of teaching methods, laboratory investigations, group and individual activities, discussions, and cooperative learning. Classroom assessment is done through class participation, group projects, individual projects, labs, homework, quizzes, and tests. I videotaped both teachers’ lessons in two years, 2011-12 and 2012-13, at approximately the same time of the year. The number of students in each teacher’s class was close to 28 in both years. I collected videotapes for each teacher in four episodes. The first was in March 2012 and covered topics of thermochemistry, although one lesson introduced a new unit about solutions. The second episode of videotaping occurred in June 2012; it covered content about redox reactions, but some lessons were spent on final exam review. The third and fourth videotaping episodes were in March 2013 and the end of May 2013. These covered the same topics that were covered in 2012 (thermochemistry and redox reactions, respectively). Since I videotaped the teachers teaching the same course, the instructional units and the resources used by the teachers were the same. In summary, 23 lessons were collected in total: ten from Lisa’s classes and thirteen from Diane’s. Ten lessons were videotaped in 2011-12 and 13 in 2012-13. Each lesson lasted 60 minutes. The summary of lesson videotapes is presented in Table 1.

Table 1
Summary of Lessons Videotaped

Year        Teacher   Round 1   Round 2   Total
2011-2012   Diane        3         3        6
            Lisa         1         3        4
            Total        4         6       10
2012-2013   Diane        4         3        7
            Lisa         4         2        6
            Total        8         5       13

In seven of the eight videotaping episodes, I recorded two or more consecutive lessons in order to gather evidence of the teachers’ abilities to connect lessons, students’ understandings, and events, and to display a fuller range of instructional and assessment strategies (Danielson & McGreal, 2000). The consecutive videotaping was crucial for identifying how teachers made instructional decisions and adjustments based on evidence from students that cannot be collected in one single lesson. Furthermore, multiple formative assessment opportunities were captured as the target practice (Randel & Clark, 2013). The lessons videotaped in 2012 displayed a broad set of activities. For example, they included activities guided by teachers, classroom discussion, group work, science laboratory activities, and individual work with textbooks and handouts. Descriptions of each lesson videotaped are presented in Appendix 1.

Teacher Interviews

Interviews have the purpose of entering into the perspective of the other person, assuming that the perspective of the interviewee is meaningful, valuable, and can be made explicit (Patton, 2002). As an interaction between at least two individuals, interviews give researchers the role of disentangling what individuals say to make sense out of the words elicited by the interview questions (Glesne, 2010). For example, interviews can be used to gather information about classroom practice as well as teachers’ perceptions about their practice (Tierney & Dilley, 2002). I conducted two rounds of semi-structured interviews with each teacher to gather evidence of their understandings, beliefs, priorities, and assumptions about classroom enactment of formative assessment, as well as the rationale for their decision-making process. In semi-structured interviews, additional questions may emerge in the course of the interview and may replace or complement the pre-defined questions (Glesne, 2010; Johnson, 2002).
The first round of interviews was in March 2013 (the week after lesson videotaping). The interview focused on the following themes: (1) understanding of formative assessment, (2) previous experiences with formative assessment, (3) classroom implementation of formative assessment, and (4) factors that influenced enactment of formative assessment practices. The second round of interviews occurred in June 2013, in the week following the last episode of lesson videotaping. These interviews focused on the formative assessment-related decisions that both teachers made in their instructional practices and on their perceptions of formative assessment-related instructional practices and instructional decisions. I selected four video prompts for each teacher—three from the redox unit and one from the thermochemistry unit (all recorded in 2013). The videos were selected because they represented examples of the formative assessment components included in the FAME model (Measured Progress, 2010) and illustrated critical moments for the teachers’ formative assessment decisions (Myhill & Warren, 2005). The interview had teachers observe video prompts of their own formative assessment practices and used stimulated recall procedures (Meade & McMeniman, 1992; Peterson & Swing, 1982). From the perspective of CHAT, mental activity cannot be separated from observable action. Rather, it serves the purpose of a sign in mediation (Yamagata-Lynch, 2010). Therefore, the interviews helped me gather insights into the teachers’ thinking and their decision-making process based on evidence of their students’ understandings. As a follow-up, I asked about factors that influenced their enactment of these practices. All the interviews were videotaped and audio recorded. Lastly, the four interviews were transcribed. The protocols used in the two interviews are presented in Appendix 2.

Learning Team Meeting Videotapes

I videotaped five FAME learning team meetings during 2011-12 in which the chemistry teachers participated. In those meetings, learning team members: (1) discussed the concept of formative assessment (especially by comparing formative assessment with grading practices), (2) reflected on their classroom instructional practices, (3) discussed practical implications of the formative assessment process for their particular subjects and contexts, and (4) created formative assessment tools and engaged with their classroom use. After each meeting, I wrote a narrative describing the events that occurred, focused on the following topics: role of the team leader, link to teacher practice, and link to student learning.

Secondary Data Sources

I collected and utilized exemplars of classroom resources and instructional materials captured in the videotaped lessons. I also considered some artifacts created or used in the learning team, such as readings and formative assessment tools. These artifacts provide context to lesson observations and, more importantly, provide information that is not available from other data sources (Glesne, 2010). I also collected complementary information from other sources of data that helped me organize the case study: for example, information from the school and school district websites, classroom photographs, information from teacher surveys, and documents and instructional resources referred to me by the teachers.

Procedures for Coding and Analyzing the Data

Glesne (2010) noted that qualitative methods purport, in a general sense, to look for patterns through data analysis.
These methods, however, do not try to reduce the multiple interpretations to fixed categories or numbers, or to a general norm. Thick descriptions are generated in order to gain a perspective of the participants’ experiences, including contextual information. These descriptions result from a constant reexamination of the data collected in order to produce an accurate, fair, and trustworthy report (Yamagata-Lynch, 2010). I used a grounded-theory approach to search for patterns across the different sources of data. Grounded theory (Glaser & Strauss, 1967) is a methodology for developing theory that is “grounded” in the data collected, with the purpose of demonstrating relationships between conceptual categories and specifying the conditions under which theoretical relationships emerge, change, or are maintained. This approach uses an iterative search for themes and patterns to build theory. Therefore, grounded theory involves continuous data collection, sampling, coding, categorizing, and comparing in order to generate theory (Glesne, 2010). In particular, I used the constant comparative method (Glaser & Strauss, 1967)—in which the researcher engages in a process of iterative reexamination while comparing different data sources—as my analytical method. The constant comparative method starts by establishing coding procedures in order to break the data into analytical units (Strauss, 1987), and then uncovers the key points to be considered in the analysis. According to Yamagata-Lynch (2010), the constant comparative method is compatible with CHAT’s activity systems analysis method. Next, the procedures for coding the data are described.

Data Reduction

Data reduction is the use of analytical techniques to search through the data to identify patterns and themes (Glesne, 2010). For this study, data reduction includes a description and coding protocol for lessons, open coding for teacher interviews, and a coding schema for the learning team meetings. Classroom observation coding protocol and narrative. I used a preliminary coding schema to analyze the videotaped lessons. This schema considered the eight interrelated formative assessment components described in the FAME model (Measured Progress, 2010) and other relevant formative assessment-related elements mentioned in the research literature (see Appendix 3). The purpose of this coding schema was to identify a wide array of instructional and formative assessment elements that were helpful for identifying patterns across lessons and teachers. For each lesson, I created a memo in which I identified the respective codes, described the lesson, and transcribed some examples of teacher and student talk. These memos included notes about my interpretation of the particular situations, based on the formative assessment perspective. I also focused my description on critical moments (Myhill & Warren, 2005) in which teachers enacted formative assessment. In particular, these moments represented teachers collecting evidence of students’ ideas and making explicit decisions based on students’ questions, responses, or comments. For example, some episodes included teachers using new instructional strategies to provide a different explanation for a concept that was confusing for students, or teachers making public announcements to the students based on what they noticed in students’ ideas or work. I made a list of these episodes and selected three per teacher to be used in the second round of interviews.
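To make the structure of these lesson memos concrete, the sketch below models a coded lesson and its critical moments as simple data records (written in Python). This is a hypothetical illustration only: the study's memos were written documents, not software objects, and the class names, fields, and example values (codes, timestamps, transcript text) are invented placeholders rather than actual study data.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CriticalMoment:
    """A coded episode in which a teacher elicits and acts on student ideas."""
    timestamp: str       # position in the lesson video, e.g., "00:14:32"
    codes: List[str]     # schema codes applied to this episode
    excerpt: str         # verbatim teacher/student talk
    interpretation: str  # analyst's note from the formative assessment perspective

@dataclass
class LessonMemo:
    """One memo per videotaped lesson: codes, description, and critical moments."""
    teacher: str         # pseudonym, e.g., "Diane" or "Lisa"
    content: str         # chemistry content taught, e.g., "thermochemistry"
    codes: List[str]     # codes identified across the whole lesson
    description: str     # narrative description of the lesson
    critical_moments: List[CriticalMoment] = field(default_factory=list)

# Illustrative (invented) memo with one critical moment flagged as a
# candidate prompt for the second round of interviews.
memo = LessonMemo(
    teacher="Diane",
    content="thermochemistry",
    codes=["learning target use", "use of student evidence"],
    description="Calorimetry lab followed by whole-class discussion.")
memo.critical_moments.append(CriticalMoment(
    timestamp="00:14:32",
    codes=["formative feedback", "instructional decisions"],
    excerpt="T: What does the negative sign tell us here?",
    interpretation="Teacher re-explains the sign convention after noticing confusion."))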
Once the descriptions of the lessons were complete, I organized a matrix that included, for each lesson, the content taught and the main lesson activities. The matrices also included the following three questions that delineate the dimensions of formative assessment and guide it as a process: Where are we going? Where am I now? How do we close the gap? (Hattie & Timperley, 2007; Sadler, 1989). Moreover, when relevant, I included comments and notes about how a particular lesson was embedded in an instructional sequence. This matrix is presented in Appendix 1. Coding schema for teacher interviews. Interviews were analyzed using an open-coding procedure (Bohm, 2004). Teachers’ responses were read and broken down into preliminary categories. When necessary, the preliminary categories were broken down into sub-categories. In order to organize and facilitate the procedure, I used qualitative analysis software (NVivo) to code the interview corpus. I iteratively reorganized the emerging codes according to the study research questions, the context of teachers’ formative assessment practices, the learning team meetings, and the context of the implementation (e.g., content area, connections with the FAME learning team, and students’ characteristics). I used a hierarchical “tree” to represent the relationships among the codes. Moreover, since the interviews asked about teachers’ perceptions of a number of formative assessment critical moments, the emerging codes were used to provide insight into teachers’ instructional decisions and, in a broader sense, to make connections with the general enactment of the formative assessment tools observed in the lesson videotapes. Coding schema for learning team meetings. The FAME research team developed a coding schema to analyze interactions of the learning teams, especially in terms of how teachers discussed classroom formative assessment topics and connected them with student evidence (Gotwals, Cisterna, Kintz, & Lane, in preparation). The dimensions included in this coding are: (1) types of activities, (2) formative assessment content, (3) depth of formative assessment content, (4) depth of discussion, and (5) participation structure. The coding categories are presented in Appendix 4. In the case of the learning team, a narrative case was created to illustrate the participation of the teachers in the meetings, their discussions about classroom implementation of formative assessment, and their reflections on the professional development experience. The narrative included tables and graphs regarding the meeting time that the learning team spent in each dimension. This narrative serves to complement the data collected through lesson observations and teacher interviews in responding to the study’s research questions.

Coding and Activity Systems Organization

Once the data were broken into multiple categories, I organized the codes into groups of families of concepts. For this purpose I used axial coding. Axial coding involves an intensive analysis of the categories of codes identified in the previous phase of coding (Strauss, 1987), so that the researcher identifies overarching themes and categories among codes, families of codes, and subfamilies of codes (Yamagata-Lynch, 2010). Using mind-mapping software (i.e., CmapTools), I created and refined representations for each teacher that integrated the different codes from the data sources and served as a first approximation for responding to the research questions.
I started this process by making relationships among the interview codes to create a first draft. Then I used the results from the lesson observation summary matrix to refine the initial representation. For example, this process helped me to fuse codes, to re-categorize some codes from one particular family to another, or even to eliminate some codes. As a result, I created two representations that illustrated the hierarchies and relationships among the codes that I ultimately established (see Appendix 5). The final process was selective coding, in which the researcher codes data around a main family of codes that are essential to the message that the investigator learned in the study (Strauss, 1987; Yamagata-Lynch, 2010). Prior to starting this process, and guided by the research questions, I examined the code trees that I created in the axial coding stage. For creating the activity-system analysis model (Engeström, 1987), I used the guiding questions suggested in Mwanza’s eight-step model (Mwanza, 2002) and in Yamagata-Lynch’s model (2010) to characterize the activity systems that would be included to represent the different components (see Table 2). By using these guiding questions, I identified preliminary themes that matched the components of the activity systems and were common to both teachers. For example, I found that both teachers had a similar (but preliminary) interpretation of the activity system object, subject, and tool, according to the axial coding results. I continued my selective coding by mapping the components of each activity system, primarily describing the different interpretations of the activity system components. When pertinent, I used evidence from lesson episodes and teacher interview transcriptions to reinforce or strengthen the characterization of those activity system components. For instance, I found in my axial coding that the two teachers conceived of the division of labor component of the activity system in different ways, and that this was reflected across different sources of evidence and at different moments. The characterization of that component provided me with support to differentiate each teacher’s activity system.

Table 2
Guiding Questions Used for Selective Coding

Mwanza (2002)
Step 1. Activity. What sort of activity am I interested in?
Step 2. Objective. Why is this activity taking place?
Step 3. Subjects. Who is involved in carrying out this activity?
Step 4. Tools. By what means are the subjects carrying out this activity?
Step 5. Rules and regulations. Are there any cultural norms, rules, and regulations governing the performance of the activity?
Step 6. Division of labor. Who is responsible for what when carrying out this activity and how are the roles organized?
Step 7. Community. What is the environment in which the activity is carried out?
Step 8. Outcome. What is the desired outcome from this activity?

Yamagata-Lynch (2010)
1. What are the key activities related to this study that are in the data set?
2. What is the activity setting in which these activities are situated?
3. Who are the subjects of these activities?
4. What is the shared object of these activities?
5. Do different subjects participating in the same activity view the activity and the object differently? If yes, why?
6. What tools, rules, community, and division of labor are involved in these activities?
7. What systemic contradictions are bringing tensions into these activities?
8. What are the outcomes of these activities?
9. What historical relationship does one activity have with another?
10. How does one activity interact with another?

I presented my preliminary activity system models—one per teacher—to an external researcher who was familiar with the study characteristics and context. We discussed the results of the axial coding together, revised the preliminary descriptions of the activity system models (when pertinent), and identified tensions within each teacher’s activity system. My selective coding was complete when I wrote thick descriptions of Diane’s and Lisa’s classroom enactment of formative assessment in a narrative format. This served as the basis for organizing and structuring Chapter 5 of this dissertation study. The writing process helped me refine and polish the characterization of each teacher’s system and, more importantly, identify nuances in the formative assessment practices of both teachers that configured their particular activity systems. Simultaneously, since the writing process implied a new layer of analysis, I iteratively made changes in the characterization of the activity system components for each teacher.

Rigor and Quality in Research

In qualitative research, the quality of the analyses depends on the ability of the researcher to analyze, provide insights, recognize patterns, and weigh different sources of data. Thus, the credibility of a study depends on three interrelated elements: rigorous research methods, credibility of the researcher, and philosophical belief in the value of qualitative inquiry (Patton, 2002). In terms of ensuring rigorous research methods, triangulation is defined as the practice in qualitative research of relying on multiple methods to gather data (Glesne, 2010) and of making sense of the multiple perspectives that are available in the data. In particular, triangulation refers to the use of multiple data sources, methods, analyses, and theories (Patton, 2002). Glesne (2010) also indicates that triangulation may refer to using multiple investigators and multiple theoretical perspectives. As human instruments, qualitative researchers need to balance different perspectives (Fetterman, 2009). Qualitative researchers have to deal with perceptions and impressions, whether collected from working with participants or their own. Stake (2006) explained that triangulation provides the researcher more confidence about the accuracy of the research methods employed. In this dissertation study, I used several strategies for triangulation. First, I used various sources of data about the two chemistry teachers. These included videotapes of classroom lessons and team meetings (indicating what teachers actually did), interviews, and other complementary sources. Second, I used CHAT as the analytical framework for making sense of the problem for this study, but I also used other research approaches within sociocultural theories of learning that helped me enrich the findings and implications of the study. I also included different strategies for coding the different data sources in order to have an integrated perspective of the data collected. Third, regarding coding, the five FAME meetings that were videotaped were double coded, as was one-third of the lesson videotapes (four lessons per teacher). These videos were coded by me and by two members of the FAME research team. Fourth, I worked with other researchers to gain insights on the processes of data coding and analysis.
I shared with the FAME research team excerpts of teachers’ interview transcriptions, video clips from lessons that represented critical assessment moments, and video clips of teachers reflecting on those critical assessment moments. I also shared my preliminary data analyses to receive the team members’ insights and interpretations, which drew on their expertise from working on the project and elaborating observation protocols. Lastly, when I finished my grounded theory-based analysis and built preliminary representations of the activity systems, I shared them with the FAME research team in order to receive feedback and make additional refinements. Furthermore, I replicated these activities with two researchers external to the FAME research team. I recognize the importance of continually reflecting on the data collection process, data analysis, and writing. For this study, the collaboration with my dissertation director, who is also involved in the FAME research, and with my colleagues in the FAME research team helped me understand the context of the program and its particularities. Another important experience that helped me reflect on my dissertation work was participation in a summer institute for doctoral students in science education, where I received significant ideas and suggestions from mentor researchers. Based on that experience, I created a poster, presented at a national conference in science education, that included preliminary evidence from one of the results chapters. I received feedback from the audience that helped me clarify my thinking during the writing process. I also created a dissertation log in order to record the various personal ideas and others’ suggestions that might be helpful for improving this study. Although I was not as consistent as I wanted to be, the log helped me check previous information, notes, and evidence from my ideas and conversations with others.

Brief Notes About Reflexivity

In qualitative research, reflexivity is an important concept for understanding how the researcher influences, interacts with, and is influenced by the participants, research setting, and procedures (Glesne, 2010). Reflexivity forces researchers to think about their position in relation to research participants (Patton, 2002). Similarly, reflexivity contributes to understanding how personal characteristics, principles, and ideas interact with others in the research context and influence decisions about methodology, methods, and interpretations (Glesne, 2010), and it serves as inquiry into the researcher’s own biases (Patton, 2002). Thus, the examination of my personal motivations and strengths as a researcher helps me better understand the ways in which I understand and report this study—recognizing that the knowledge generated is only partial. My motivation for doing this study is related to my interest in classroom formative assessment as a process that is helpful for teachers and students. Reflecting on my previous experiences, I concluded that using formative assessment in the classroom benefits both. My background and previous work in educational assessment made me realize that, along with the focus on technical quality and issues of validity and fairness, assessment needed to be helpful for the primary users who participate in the classroom interactions.
Similarly, I believe that teachers can learn about and improve their classroom assessment practices (and formative assessment practices in particular), because learning about formative assessment is key to reaching more student-centered teaching. Teachers need numerous and rich learning opportunities during their professional careers, so well-designed professional development initiatives—and research on that topic—are crucial. Even though I interacted with the two chemistry teachers and had some conversations with them, I primarily assumed the stance of a non-participant observer. I spent my time in the school videotaping learning team meetings and chemistry lessons. This decision stemmed mainly from my role as a graduate research assistant for the project researching the impact of FAME as statewide professional development. I recognize that my role as a researcher who also worked for the FAME research project may have influenced both teachers’ approaches to the lessons that I videotaped, especially in the first year of videotaping. I appreciate that both teachers were very receptive and open to being videotaped and interviewed. Beyond formal videotaping, I had some brief conversations with the teachers before and after the videotaped lessons. The topics were basically their lessons, the school, some ideas about science teaching, and their impressions about how they were enacting formative assessment. In those moments, more than speaking from the FAME perspective, I tried to speak from my experiences as a former science teacher in order to establish some rapport and be empathetic toward the teachers’ process of formative assessment enactment. My experience as a science teacher helped me understand the context of their lessons. Although I never taught chemistry at the high-school level (I taught chemistry content in middle school), I am quite familiar with the content and know some elements of chemistry teaching. However, my perception of both teachers’ experiences could have been influenced by the fact that I am not a U.S. citizen or permanent resident and, therefore, was neither educated in nor have taught in the United States school system. This means that I have a different cultural lens through which I examined their lessons. I could possibly have overlooked certain cultural elements, but I probably also noticed other elements more easily. I believe that the ‘outsider’ perspective is valuable, because it helped me to reflect on my own educational experience, learn about cultural practices that were unfamiliar to me, and notice some elements that others would not have recognized. In the next two chapters, I will present the results of my study. After first describing and analyzing Diane’s and Lisa’s experiences in the learning team, I will focus on their chemistry lessons, especially in terms of their formative assessment enactment.

CHAPTER 4: FORMATIVE ASSESSMENT IN THE LEARNING TEAM EXPERIENCE

In this chapter, I will describe the learning team experience of Diane and Lisa, two high-school chemistry teachers who participated in the FAME professional development program. Working on their school learning team, these chemistry teachers were introduced to formative assessment content, created tools to gather evidence of student ideas, and discussed connections between formative assessment theory and practice (and some connections with science).
This chapter also includes contextual information that contributes to a better understanding of both chemistry teachers’ actions in the learning team, especially the ways the learning team experience helped Diane and Lisa reflect on and refine their understanding of formative assessment. Moreover, this contextual information is helpful for understanding the ways Diane and Lisa enacted formative assessment in the classroom to improve their chemistry instruction—a topic that will be described in detail in the next chapter. I will start with a general outline of what occurred during learning team meetings in order to characterize the team according to its activities, formative assessment topics, and types of discussions. This characterization provides the context for what happened over the course of the professional development year. Then, I will focus my description on how these two chemistry teachers engaged in the learning team meetings, used these experiences to frame their understanding of formative assessment, and made connections with science classroom practice. Finally, I will synthesize the main outcomes of the learning team experience for both teachers.

Overview of the FAME Learning Team

This FAME learning team was created in October 2011, in an effort to disseminate the FAME program in the school district. The team leader was a social studies teacher who also worked as curriculum coordinator. Among the learning team members, two taught chemistry (Diane and Lisa), one taught visual arts, one taught mathematics, and one taught English Language Arts. The visual arts teacher attended only the initial activities and did not continue for the whole year. The team leader recruited these teachers because they were working on the implementation of a new curriculum targeted at advanced-level Grade 11 and 12 students (all the teachers on the team but Lisa were working on this curriculum). The first activity of the learning team was a full-day FAME professional development workshop in October 2011. In the workshop, the eight components of the FAME formative assessment model were introduced to the audience. These are: (1) planning for formative assessment, (2) learning target use, (3) use of student evidence, (4) use of formative-assessment strategies, (5) use of formative-assessment tools, (6) student and teacher analysis, (7) formative feedback, and (8) making instructional decisions (Measured Progress, 2010). Additional activities included discussions of how each formative-assessment component could take place in the classroom. Teams were encouraged to set annual goals and make commitments for the year. To help team leaders organize their meetings, all the participants received printed and online resources that provided a general overview of the eight formative assessment components. In November 2011, the learning team had its first meeting in the school building. During the professional development year, the team met five times. The team met after school, and meetings lasted between 30 and 60 minutes. Based on the coding schema developed by the FAME research team to analyze interactions and use of meeting time within learning teams (see Appendix 4), I will describe the ways the learning team organized its meeting time according to the following dimensions: (1) types of activities, (2) formative assessment content, (3) depth of formative assessment content, and (4) depth of discussion.
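The meeting-time percentages reported in Tables 3 through 6 amount to tallying the durations of coded segments within each meeting. The following sketch (in Python) illustrates that arithmetic under the assumption that each meeting was segmented into timed intervals with one code per interval; the function and the sample segments are invented for illustration and do not reproduce actual study data.

from collections import defaultdict

def time_use_percentages(segments):
    """Convert coded meeting segments into percent-of-meeting-time per code.

    `segments` is a list of (start_min, end_min, code) tuples covering one
    meeting; returns {code: rounded percent of total coded time}.
    """
    totals = defaultdict(float)
    for start, end, code in segments:
        totals[code] += end - start
    meeting_length = sum(totals.values())
    return {code: round(100 * t / meeting_length) for code, t in totals.items()}

# Illustrative (invented) segments for a 40-minute meeting
segments = [
    (0, 5, "guiding discussion"),
    (5, 27, "reading/discussing information from a source"),
    (27, 35, "sharing examples or tools from practice"),
    (35, 40, "discussion of unrelated topics"),
]
print(time_use_percentages(segments))
# {'guiding discussion': 12, 'reading/discussing information from a source': 55,
#  'sharing examples or tools from practice': 20, 'discussion of unrelated topics': 12}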
Types of Activities

The learning team spent the first meetings reading and discussing information from printed materials about formative assessment—a trend that decreased in the subsequent meetings (see Table 3). Over the learning team meetings, the time spent in activities focused on sharing experiences of practice increased (4% to 70%, with the exception of the last meeting).

Table 3
Learning Team Activities Across Meetings

                                                          Meeting (% of time)
Codes                                                     1    2    3    4    5
Sharing examples or tools from practice                   4   25   26   70    0
Analyzing & discussing examples or samples of
  student work or video of classroom teaching             0    0    0    0    0
Reading, examining, discussing information from a
  book or other source                                   55   32   15    5    0
Lecture or presentation of information                    0    0    0    0    0
Discussion of external constraints or classroom-
  based obstacles                                        14   25    0   11    0
Discussion of potential uses of formative assessment      9   12   28    7   41
Discussion of unrelated topics                            6    0   19    0    4
Guiding discussion (e.g., setting the stage, giving
  directions, setting meeting goals)                     12    6   10    7   12
Other activities                                          0    0    2    0   43

Across meetings, time was also spent in discussions about potentialities or constraints related to the use of formative assessment in schools. However, the team did not engage in activities to analyze evidence of student work or classroom practice, and there was no lecture or presentation of content.

Formative Assessment Content

Table 4 presents the distribution of the meeting time spent in the dimension “formative assessment content.” This dimension is related to the eight interrelated components by which FAME structures the process of formative assessment learning. An important proportion of the meeting time was focused on getting a general understanding of formative assessment, especially in the first two meetings (71% and 58%, respectively). The learning team members spent most of the third meeting talking about learning target use (69% of the time). The fourth and fifth meetings were more focused on using formative assessment tools and formative feedback. In the meetings, however, there was a tendency to talk about other topics that were not about formative assessment (with the exception of meeting 4).

Table 4
Time Spent in Formative Assessment Components by Meeting

                                                          Meeting (% of time)
Codes                                                     1    2    3    4    5
Planning                                                  0    0    0    0    0
Learning target use                                       0    0   69   11    0
Student evidence and formative assessment tools           0    0    8   43   39
Formative assessment strategies and instructional
  decisions                                              10    0    0    0    0
Formative feedback                                        8    0    0   34   17
General about formative assessment                       71   58    0    6   27
Not directly related to formative assessment             11   42   23    7   16

Topics related to strategies and instructional decisions were rarely discussed. Similarly, the ‘planning’ component of formative assessment—which means mapping out when, why, and how formative assessment will occur in the instructional process—was not covered.

Depth of Formative Assessment Content

Table 5 shows the distribution of the meeting time based on the dimension “depth of formative assessment content,” which refers to the focus on and attention to authentic problems within teachers’ professional practice (Wilson & Berne, 1999).
Table 5
Meeting Time Distribution According to Depth of Formative Assessment

                                                          Meeting (% of time)
Codes                                                     1    2    3    4    5
Abstract discussion of formative assessment              53    5    5    0   27
Discussion of theory only (e.g., discussion of
  strategies without linking to specific tools)           9    0    3    7    0
Discussion of practice only (e.g., discussion of
  tools without linking to a specific strategy)           9   40   17   66   24
Linking theory and practice (e.g., how a tool is
  linked to a strategy)                                  18   11   39   21   32
No opportunities for formative-assessment discussion     11   44   37    7   16

Overall, meeting 1 was mainly focused on “abstract” discussion of formative assessment—for example, what formative assessment is and how it differs from summative assessment. Meetings 2 and 4 focused on discussing aspects of classroom practice without connecting them to the theory of formative assessment (40% and 66%, respectively). Meeting 3 spent an important proportion of its time (39%) discussing connections between formative assessment theory and classroom practice, and meeting 5 distributed its discussion time among different categories. It is important to note that little meeting time was spent discussing the theory of formative assessment, for example, by talking about the benefits and characteristics of particular formative assessment strategies.

Depth of Discussion

This dimension refers to the extent to which learning team members engaged with the discussions, supported each other’s ideas, and critically examined classroom practice (Nelson et al., 2008; Stoll et al., 2006). The role of the team leader in mediating and facilitating discussions was key in the FAME model, so she was trained in Cognitive Coaching strategies (Costa & Garmston, 2002). Table 6 presents the distribution of meeting time across the different categories for the dimension “depth of discussion.” When the meetings offered room for discussion, an important proportion of the meeting time was spent in parallel sharing (i.e., team members sharing examples or ideas without building off of others’ examples or ideas), especially in meeting 2 (69%). Moreover, meeting time was also spent in discussions in which learning team members were actually able to build their ideas based on others’ comments. This was particularly important in meeting 4 (76% of meeting time). A small proportion of the meeting discussion time was deeper, with learning team members building off others’ ideas and examining the reasons that underlie and justify specific formative-assessment strategies and tools. Those moments occurred especially in meetings 3 and 5.

Table 6
Depth of Discussion Across Meetings

                                                          Meeting (% of meeting time)
Codes                                                     1    2    3    4    5
No opportunities for conversation/discussion             39   21   14    0    0
One-way sharing (e.g., one person talking and
  giving information)                                    19    4   11    7   12
Parallel sharing (sharing examples or ideas without
  building off of others’ examples or ideas)             26   69   33   17   32
Linking ideas or examples through building off of
  others’ ideas, but does not push for reflection
  or in-depth analysis of the “whys”                     16    7   30   76   39
Linking ideas and examples through an examination
  of why ideas and examples are similar/different
  or why they happened                                    0    0   12    0   17

Synthesis

Based on the learning team meeting codes for the dimensions of meeting time use, I classified the meetings into three groups.
Meetings 1 and 2 were focused on general ideas about formative assessment; learning team members spent an important proportion of the meeting time examining informational pieces, while discussion allowed team members to reflect on the basics of formative assessment. Meetings 3 and 4 were more focused on topics such as learning target use, formative assessment tools, and use of evidence. In these two meetings, there were opportunities to talk about formative-assessment classroom practice, and some connections with theory were made, although not systematically. Discussions in the learning team tended to develop some sense of cooperation, and in some moments learning team members were able to discuss formative assessment in depth (some examples of these moments will be described in the next sections of this chapter). The fifth meeting, at the end of the year, was more focused on potential uses of formative assessment, and learning team members showed a tendency to integrate into their conversations some of the components they had learned during the professional development year.

Progress During Learning Team Meetings

In this section, I will describe the learning trajectory of the learning team during the PD year. My goal is to provide a rich description of the topics, discussions, and conversations that occurred in the learning team meetings in order to understand how the formative assessment content was addressed and developed by team members. Based on the overall description of the learning team meetings presented in the previous section, I will organize the description of the meetings into three groups, which represent stages in the teachers’ learning trajectory. The first refers to understanding formative assessment and its relationship with grading practices (meetings 1 and 2); the second refers to crafting formative-assessment tools to assess learning targets and using student evidence (meetings 3 and 4); and the third corresponds to the evaluation and projections of the learning team experience (meeting 5).

Understanding Formative Assessment and its Relationship with Grading

The first two meetings were focused on creating a general and common understanding of formative assessment. In the first meeting, the team leader planned an activity to discuss some essentials of formative assessment and set the stage for the professional development year. Team members read the article “The best value in formative assessment,” written by Chappuis and Chappuis (2007), which outlined some ideas about formative assessment. Based on this article, the team participated in a jigsaw activity that included five topics, namely: a) what is formative assessment?, b) summative (assessment of learning) methods, c) the role of feedback, d) formative (assessment for learning) methods, and e) what are your targets? / what scares you? / questions that we still have. After concluding the activity, the main discussion was about how to insert formative assessment into a grading-oriented school culture. Team members provided several examples from their content areas and classrooms to illustrate how crucial and decisive grading was for students, parents, and the school. They also pointed out that grades are a powerful force that guides and influences their instruction. In that context, team members mentioned that the implementation of formative assessment in their classrooms would be challenging, especially if they had to change their current grading practices.
To address these points, the team leader provided suggestions and clarifications to make the implementation of formative assessment more manageable. She emphasized that team members had to start slowly, focus on one class, and use formative assessment in particular instructional moments only. She also explained that participation in the FAME learning team did not imply a quick and radical shift in classroom practice. Instead, the team leader emphasized the importance of using formative assessment to support student learning so students can excel academically. She mentioned:

You are never going to have 100% of your classroom be formative assessment, you have to have summative [assessment]. You have to have grades and things, but you need to provide enough formative assessment so the kids get feedback to do well in summative assessments. If the kids are going great on homework but then failing in every test, or they doing great in classes but failing common exams, what does it say of what is happening in the classroom? [Learning team meeting 1, team leader, November 2011]

The team leader tried to direct the discussion to the potential impacts and benefits of implementing this practice (9% of the meeting time). Learning team members, however, insisted on their concerns regarding potential negative effects of enacting formative assessment (14% of the meeting time), especially in relation to the community and students’ attitudes about grading. For example, Diane, one of the chemistry teachers, responded to the comment of another team member who mentioned that formative assessment had to be “sold” to the students and the community. Diane said that involving the students in the benefits of formative assessment would be hard, since they were very grade-driven and motivated by external rewards. To illustrate her point, she commented that in the previous year her senior students from the advanced class were familiar with making their own guiding questions, but that situation did not occur this year with the new cohort of students. She explained her frustration:

I had to change this [activity] for that [this year] group. I had to grade it or they will not do it! And they know they were going to get the test…and they don’t care. I don’t know how to make them care! [Learning team meeting 1, Diane, November 2011]

In response to the concerns mentioned by the learning team members in the first meeting, the team leader planned the activities for the second meeting. The goal was to promote teachers’ reflections on the impact of their grading practices on instructional practices and student learning. In doing so, they read an article from a local newspaper that described the efforts of some schools in the area to rethink their grading practices in order to prioritize grading students’ learning achievement and to diminish the weight of behavioral components of grading. Thus, the team discussion focused on how much and why their assessment practices were influenced by the use of behavioral grades (e.g., completion points for working in class and doing the homework), a practice that was widely legitimated in that high school. Team members recognized that reducing the use of behavioral grades would help teachers and students be more centered on student learning instead of just giving completion points for students doing homework or completing the unit worksheets.
Even though team members acknowledged the positive implications of assessing student learning only, they recognized that changing this practice would be a struggle in a culture where grading behaviors was the norm shared by different teachers. That concern is illustrated by the mathematics teacher:

[Teachers] do give credit for cleaning, for washing the desks, such and such. And when they come to me, they don’t have the basic skills they need to be successful, and then… the parents are saying like ‘they don’t find that in other math classes’; and I research it, and I’m looking at all the test grades are failing…and even, I’m guilty of [what] the district says… homework is 30%. So 30% of the grade for putting down anything in a piece of paper, anything at all, it is not mastery in the subject. I would love to try this [assessing student learning only], it will be a big shift in thinking, [but] I can see a lot of parents getting very upset about it, because we are rewarding them. [Learning team meeting 2, mathematics teacher, December 2011]

Guided by the team leader during the meeting discussion, team members gradually started to recognize the importance of separating grades that measure student learning from grades that check behaviors, in the sense that their current approaches to grading did not actually reflect mastery of the content and, according to them, implied grade inflation. Team members also discussed whether the end-of-year district common assessments actually reflected students’ learning in the content area, especially because the results of those assessments counted for 20% of the final grade. Team members posited that the district common assessments—although they purported to assess higher-order skills—focused more on factual knowledge. These assessments actually had different emphases from what was expected, and that fact was important for team members. They recognized that the common district assessments were important for planning and guiding their instruction and might affect the implementation of formative assessment practices. The team members expressed awareness and concern that their current assessment and grading practices were not completely focused on student learning and, in some cases, were merely rewarding students for completing tasks. Team members also mentioned that implementing more formative assessment implied making changes in the ways they understood their teaching, as pointed out by the team leader:

[Doing formative assessment] is also a huge shift to say: ‘I do not care when you turn it in, I don’t care if you come to class late, I do not care if you got a pen.’ It does not mean that all has to be implied, but if you are really not going to grade behavior, we really need to look at what we grade. [Learning team meeting 2, team leader, December 2011]

A large focus in these discussions was that, due to the strong influence of students’, parents’, and teachers’ expectations about grades, the enactment of formative assessment would be problematic. In that sense, team members found the discussion helpful for recognizing that their current grading and assessment practices were not fully aligned with the promotion of student thinking. This is an important step in making sense of formative assessment. However, they may have over-extended the impact of formative assessment on grading. Having a culture that supports formative assessment in the classroom does not imply that teachers are not concerned with behavioral aspects of learning (e.g.,
coming to class on time and prepared). Rather, a formative assessment culture might shift the focus toward student ideas and learning and away from assigning grades along the way.

Crafting Formative Assessment Tools and Using Student Evidence

The third learning team meeting occurred in January 2012. The goals for the meeting were to discuss the use of learning targets and to motivate teachers to use formative assessment in their classrooms. At the beginning, the team leader recalled the importance of learning targets as a starting point for implementing formative assessment. She also showed some examples of learning targets that were created, crafted, and shared by her previous FAME learning team. Although teachers said they had some previous knowledge of and familiarity with the use of learning targets, they recognized that they did not use them systematically. Teachers noted that they often would remind students of the learning targets at the beginning of an instructional unit, but did not connect them with the classroom activities on a regular basis. Thus, the team members said that they wanted to create formative assessment tools that would help them gather evidence of students’ ideas connected with learning targets and content standards during a unit, especially before the administration of summative assessments. An emphasis of this meeting’s discussion was the use of student-friendly learning targets, especially in terms of making content standards more accessible to students. Team members shared some examples of learning targets that could be implemented in different contexts. For example, the ELA teacher talked about how to use learning targets in her AP classes, and the mathematics teacher mentioned her challenge of using learning targets that operationalize the Common Core standards. The chemistry teachers described how they struggled with the high number of content standards to be covered, which implied working with a large number of learning targets in short units of time. They also pointed out that this fact made the use of learning targets in high school very different from its use by elementary teachers. Based on those inputs, the two chemistry teachers led the creation of a formative assessment tool designed to assess students’ accomplishment of learning targets, either for an instructional unit or for a single lesson. The main purpose of that formative assessment tool (called exit slips) was for teachers to gather information about students’ progress so that feedback could be provided to support students before summative tests. The team agreed to use the tool during the second semester in at least one class. In the next section of this chapter, I will describe in detail the creation of this formative assessment tool, focusing on the role of the chemistry teachers in this process. The fourth meeting occurred in February 2012. In the meeting, the teachers discussed the use of formative-assessment tools that are helpful for reflecting on students’ evidence. Team members analyzed examples of rubrics to discuss what makes a quality rubric from the formative assessment perspective. They also discussed how to use exemplars of student work with the goal of promoting better understanding of students’ ideas and supporting students’ learning. Team members also examined examples of indicators used in teacher assessment rubrics that were related to formative assessment and instructional practices.
They reviewed indicators such as whether the teacher used tools to gather information about students’ prior ideas or whether the teacher posted the learning targets in a public place in the classroom. They discussed whether those indicators matched up with their own classroom practice. The discussion helped team members reflect on their current assessment practice (i.e., whether they met these indicators and the reasons that justified their judgment).

Evaluation and Projections of the Learning Team Experience

The last meeting occurred at the beginning of June, the week before final exams (i.e., district common assessments). The meeting focused on evaluating the progress of the learning team and suggesting possible courses of action for continuing as a second-year team. Team members shared what they wanted to accomplish the next year and brought ideas and suggestions that might be implemented in the classroom. Although teachers commented on their efforts to do formative assessment and to enact the formative-assessment tool they created (exit slips/stop-light), they recognized the importance of being more systematic with this process in the next year. Because they had started the classroom enactment of formative assessment in January, they felt that they were not as consistent as they had expected. Finally, the discussion moved to collecting suggestions about how to gather better evidence of students’ ideas. Team members mentioned the importance of using formative assessment tools for that purpose and also addressed the idea of using and enacting formative assessment tools similar to those used and proven by other FAME learning teams. The team members discussed possible learning resources, such as books, that could be used in their second year. Similarly, the team leader shared her plans for having professional development days in which the team could meet for FAME only. Despite the positive outcomes mentioned in the last meeting, the team did not meet as a second-year team. Even though the team leader tried to regroup the teachers, the team did not continue. In October 2012, the team leader explained that the main reason for not continuing as a second-year team was the lack of district support (in terms of paying for teacher substitutes and resources) that would have allowed professional development days for FAME instead of volunteering after school. The team leader said in an email communication: “We were expecting the district to support the work more. We want to continue discussing formative assessment and working together, but I don't know how organized the team can be with everyone taking on more work this year.”

Chemistry Teachers’ Role in the Learning Team

This section describes the participation of the two chemistry teachers, Diane and Lisa, in the learning team. The purpose of this section is to characterize the learning experience of these two teachers regarding their gradual involvement in the team and their learning about formative assessment. The section also makes connections with science content and practice. For example, Diane and Lisa brought elements of their classrooms to the discussion table. The section is organized according to the three stages of learning team meetings described in the previous section of this chapter.

Different Approaches to Formative Assessment and Grading (Meetings 1 and 2)

In the first two meetings, Diane and Lisa actively participated in the discussion about the compatibility of formative assessment and grading.
Although the team leader emphasized the idea that formative assessment has the purpose of “facilitating learning,” she also made the comment that formative assessment “should not be graded.” That issue was controversial for the teachers, and Diane and Lisa had different opinions about it. The conversation below illustrates the teachers’ opinions regarding their understanding of formative assessment.

Team leader: “How do we feel about that, that we do not grade formative assessment?”

Diane: “That is one of my frustrations, I don’t like it. I mean, you can still give an assessment, I feel, that it is formative in nature, but there is bullet points… Why not? You can grade homework.”

Lisa: “What about formative assessment along the way, at the end of the grade? You know [that] at the end, summative assessment. To do formative to help them learn, how they…get the mistakes over with, and totally understand what they are understanding, and then say, OK, now we are doing the summative assessment to make sure.” [Learning team meeting 1]

Diane’s first approach to formative assessment. Diane described grading as a frequent component of her instruction. She administered weekly quizzes and used completion points for grading student work. Thus, grading served as a component to organize her instruction as well as to regulate students’ completion of tasks. In the first meeting, Diane described her assessment practices as “formative in nature,” because she was continuously providing feedback to students while they were working on their assignments, such as laboratory reports or worksheets. In doing so, her students had enough time and opportunities to review their assignments based on her guidance and feedback before grading. Although the use of completion points and grading was part of Diane’s approach to instruction, these were related more to the purpose of regulating student work than to supporting students’ thinking. For her, grading was justified because it helped and forced students to master the content. Diane also said that her grading strategy was responsive to her students (and parents) because most students had the goal of applying to college. Her chemistry courses, therefore, prepared them for and were aligned with college admission tests (Diane mainly taught chemistry courses for advanced students). She believed that, in order to be successful, her students needed to be prepared in high school for the summative assessment culture of college. For example, in the second learning team meeting she posited that in college everything “is about summative [assessment], they are not given any feedback, and this is real life” [Learning team meeting 2, Diane, December 2011], so Diane argued that grading needed to be part of her instruction. Another component that influenced Diane’s grading strategies was her beliefs about students’ motivation. For her, a proportion of her students lacked intrinsic motivation for learning chemistry as well as curiosity about science. Students were taking chemistry because they “have to take it” as a requirement for college applications. In that scenario, she mentioned her disappointment that her students just wanted the grade but were not interested in learning chemistry. She even noted that some students were prone to cheat on quizzes and exams.
Nevertheless, during the discussions in the first two meetings, Diane recognized that her practice of frequently grading students' completion of tasks and other behaviors could be pernicious for assessing students' learning. For example, she mentioned that the overemphasis on grading meant that science teachers tended not to plan many laboratory and experimental activities in their courses, because those activities demanded time and opportunities for trial and error and were difficult to monitor with grades. Moreover, Diane commented that behavioral grading practices promoted the assessment of vocabulary and factual knowledge and discouraged learning more complex science content. This situation was exacerbated by the large number of content standards to be covered in chemistry, which made teaching "in depth" difficult.

In summary, based on the learning team discussions, Diane recognized that the use of behavioral grades could promote grade inflation, such that students earned the highest grades without necessarily mastering chemistry content. She also posited that doing formative assessment would require more formal and systematic ways of collecting evidence of students' thinking, so that students could be supported before the summative exams.

Lisa's first approach to formative assessment. Lisa understood formative assessment in a different manner. She agreed with the idea that formative assessment should not be graded. For her, formative assessment was helpful for teachers, since students provided information about what they had learned, and it was helpful for students in improving their work. She explained that students needed more time to try out and revise their work before being assigned a grade:

What if you did a lab using, OK the way you handled your data; turn it in, and I give you an assessment on that, and OK and you turn in again your conclusions, if you did it, just broken up and give it feedback on each part. I know there's a lot of work, but I think that, overall, I will take all the feedback that you've gotten and you get your perfect lab report, almost like a rough draft versus final draft. [Learning Team 1, Lisa, November 2011]

In the first meeting, Lisa suggested some ideas for formative assessment tools to gather information on students' progress. She sketched out ideas for exit slips; for example, these slips would include a "target for the day" and have students explain to the teacher "what you got on the lesson [and] what still you don't understand." Lisa also mentioned that she was motivated to use formative assessment even though she recognized that it would mean more work. She expressed her belief that formative assessment would help students progress in their understanding of chemistry. Indeed, Lisa said that she wanted to know "how much they [students] actually retained and how much they can apply things…I want to see them do both."

Lisa's main concerns were related to engaging parents and the school community in the process of adopting formative assessment practices, since she perceived that changes to the traditional assessment and grading practices might cause some resistance. She emphasized in the discussion that implementing this process would be slow and might mean more work for the teacher, but that it would be beneficial for students.
Lisa also suggested that teachers and the school community had to be consistent in this collective effort so that parents would buy into the idea of formative assessment and recognize that "the whole formative is helping them in progressing as a learner and to understand." [Learning Team Meeting 1, Lisa, November 2011]

In summary, although the two teachers expressed different approaches to and attitudes toward the implementation of formative assessment in the first two meetings, both voiced their motivation to pursue the endeavor of learning about formative assessment.

Collaboration in the Creation of Formative Assessment Tools

In the third meeting, team members were introduced to learning targets as a basis for enacting formative assessment. When team members shared their experiences with using learning targets, Diane, who also served as the school science department chair, said that the science teachers already included the content standards at the beginning of the unit packages (the sets of handouts and written materials that students use in each unit). Nevertheless, in response to the team leader's question about using "I can" statements in the science department—a tool typically used to state learning targets in student-friendly language—Diane said that high-school students did not like that tool, since they felt treated like little kids. For that reason, she suggested creating student-friendly learning targets that referred to the content standards in a different way.

Both chemistry teachers noted that the mere use of content standards was not enough for implementing formative assessment. They wanted a formative assessment tool that would help students be more aware of the learning target as well as help teachers gather information about students' ideas in relation to the learning target during instruction. Diane explained:

We give them [the list of standards] to the kids. That is the first page, that is what the book contains, this is what they need to know…, and [then] I will say, "by the end of this you're going to know about this"; but then, I don't really go back. That's my problem, like "here is the test review, let's go". But I never go and say, "how do you feel about that?" [Learning Team 3, Diane, January 2012]

In this meeting, the team leader brought and introduced some exemplars of formative assessment tools that had been shared in her previous FAME learning team. This stimulated both chemistry teachers to start sketching out a formative assessment tool that would be helpful for students and teachers. While the rest of the team was engaged in a conversation about using learning targets to check for understanding, Diane and Lisa started a sidebar conversation about that formative assessment tool.

Diane: "I would like to, and I am thinking of how will we do it with formative assessment with that idea…, maybe not every day, but at the end, before we get them start getting ready to start assessing them. Say: 'where I'm at?', 'How do I feel?'…"

Lisa: "That is what I start doing, referencing back, saying that we have gone over this. Start. How do you guys feel about it? [Using] thumbs up, thumbs down, whatever I can do it." [Learning Team 3, January 2012]

Based on the previous conversations and the ideas that had been brought to the discussions, Diane and Lisa engaged in the co-creation of a formative assessment tool that reflected the main ideas mentioned during the meetings. The following conversation shows how both teachers created the tool.
Diane: "Maybe we can do a red, yellow, green, and they [students] pick the color and they write."

Diane: "Should we do that after a test review? Before the test? But they need enough time to look at it."

Lisa: "I think so, but after, they need time for practice."

Diane: "…and for the homework? So in each homework [we will use the tool], then go over… Do we need it? Yes or no?"

Diane: "We can do a red piece of paper, a green, and the yellow; and after we lecture, after the practice, after we go with a homework; then we can say: 'pick a color', and the color is going to tell me, because visually the color can tell."

Lisa: "They could give them…"

Diane: "They can be a ticket out the door too, so we can make that right, and we can say: 'if you get those, then you need to give me an example of the problem solved...If you pick a stack, tell me why'…"

Lisa: "The yellow may be: 'I understand this part, but not this part'."

Diane: "I think for the goal, I get it, because I want an example, like 'give me, show me what you know'. But step two, I want questions?...Two solid questions that you have… [Different from] 'I don't understand anything'."

Lisa: "I think, maybe, even put the example"

Diane: "[some sort of] 'Show me what you know?'…"

Lisa: "Showing an example may be hard, they can draw it." [Learning Team 3, January 2012]

Thus, the formative assessment tool that they created consisted of three cards of different colors: red, yellow, and green (see Figure 3). Students picked one of the three cards according to their perception of their own learning in relation to the learning target or goal for the day. They then had to write on the card a justification of their choice. For example, if students chose the yellow card, they had to explain what they had learned and what they still had doubts about; if they chose the green card, they had to solve a problem to demonstrate their understanding of the learning target. Diane and Lisa shared and explained the formative assessment tool to the rest of the learning team, received suggestions for improvement from other team members, and persuaded them to try the tool out in their respective content areas and classrooms.

Figure 3. Students selecting the red-yellow-green cards of the formative assessment tool co-created in the learning team.

Along with the co-creation of the formative assessment tool, another key point of the third meeting was translating the chemistry standards into student-friendly learning targets. In the previous meetings, Diane had mentioned the difficulties of doing formative assessment, especially given the large number of content standards for chemistry. For Diane, teachers were pressured and did not have time to slow down and provide feedback on students' work. For example, Diane commented that she had to teach close to 133 content standards in high school chemistry; therefore, providing detailed feedback on every single response of each student in a systematic manner was difficult for her to do. Diane and Lisa therefore talked about how to adapt the content standards into more student-friendly learning targets to be used in connection with the formative assessment tool they had co-created in the meeting. The following conversation details this process:

Diane: "When we talk about chemistry, the decisions about our curriculum are pretty much made. It's just about how we do it, and about the learning targets.
My only question [is]: how broad are they and how often?"

Lisa: "It sounds very broad for me, we can…"

Diane: "So I like the idea of, before quiz…" [she makes the gesture of passing out cards]

Lisa: "Even if we took, like pretty broad expectations, and we would say, 'Ok, our learning targets for these expectations are to do this, this, and this', you know what I mean?"

Diane: "I think that you can probe it too."

Lisa: "Yes, because this one is the relationship between…, [for example] we are learning the gases' laws: Charles, Boyle…, you know: 'I understand the relationship visually as well as in a model', and I think, we just can take some of the chunk on."

Diane: "…and I think, because we are already breaking down into quizzes, then we assess the quiz, and then those quizzes build up…, like at the end we go through to the content standards and check them off. When they quiz, maybe they can go through the content standards, fill the yellow [card], and then the quick through the final summative, to check them off. So I think this it, that's what we need to do!" [Learning Team 3, January 2012]

In this conversation, Diane and Lisa explored a way to translate the content standards into learning targets that could be more student-friendly and could serve as a guide for formative assessment. The exit-slip tool served as a vehicle for connecting students' ideas with the learning targets. Thus, the teachers planned to reorganize the content standards into a few learning targets that would represent the content expectations in a more concrete way and point to possible indicators of student learning.

In this process, both teachers mentioned the importance of using the exit slips to monitor student progress before summative assessments. At the same time, they suggested dividing the unit content into chunks to be assessed through quizzes. Although Diane and Lisa did not say in the meeting whether they wanted to grade the quizzes, the quizzes would be helpful for tracking student progress over the course of an instructional unit, helping teachers and students make decisions about the instructional process. In the next chapter, I will describe in detail the process of enacting the exit slips in both teachers' classrooms.

Evaluation of the learning team experience

As part of their reflection at the end of the year, Diane and Lisa made some points about the implementation of formative assessment as well as the enactment of the exit slips. Even though both teachers had a similar experience, they expressed different foci to guide their work with formative assessment.

Diane's foci. Diane mentioned the importance of enhancing her use of learning targets to help students and teachers determine whether the learning targets had been accomplished. She pointed to her interest in using formative assessment in a way that allowed her to know whether students were meeting the content standards. She also commented that the group of science teachers was currently working on translating the content standards into student-friendly learning targets for the classroom books and unit packages. She provided an example that would also serve students as a self-assessment: "I strongly believe that I can determine the number of electrons, by looking at the periodic table" [Learning Team 5, Diane, May 2012]. Diane also mentioned her intention to use quick formative assessment tools for checking understanding. She wanted ideas for tools that would help her verify students' progress as a group instead of talking to and questioning each student individually.
However, after trying out the exit slips, Diane said she was not sure whether these tools were adequate for her instruction. She said that she was almost thinking of abandoning the exit slips because they were very time consuming. She explained:

I am almost wondering about eliminating them and I do not like that either [using the exit slips], because…; and it's a lot [of work], if you have to give, and I thought, I you have to give every kid, or three really [cards], because you don't want a single mouth saying: "what color do you need? red?" [Learning Team 5, Diane, May 2012]

Lisa's foci. Lisa mentioned the importance of helping students be more aware of their thinking—as a reflection tool—in addition to the assessment of learning targets. She explained to the team:

I wonder if we can take their tests and…going over the test, they do it by their own, just take a day and say, "here is your test, here is what you said you could learn, and go through and say OK, I said I can do this, but seriously, I didn't". [Learning Team 5, Lisa, May 2012]

Lisa also mentioned that she would like to use the formative assessment process in a way that allowed students to explain what they did or did not understand in a sort of journal log. Students would then be able to relate their learning to the unit standards. Lisa was particularly interested in supporting the chemistry topics that were most of a struggle for students.

Regarding the use of the exit slips, Lisa said that she was using the tool in some of her other courses. She needed more time for implementation, though, because she had started to enact the tools in the middle of the second semester. Similarly, Lisa mentioned that the use of formative assessment had been helpful for her students in identifying possible sources of mistakes in their responses or ideas. She valued the fact that, by using formative assessment, her students could see the progress in their own learning.

Outcomes of the Learning Team Experience

In this section, I will explain some implications of the learning team experience for both chemistry teachers. First, I will illustrate how the formative assessment tools were created based on collaboration and the learning team members' shared experiences. Second, I will explain how some of these tools may connect different settings. Third, I will explain how the learning team experience helped Diane and Lisa develop knowledge about formative assessment practice. I will also explain how the teachers used the learning team experience to make connections with classroom practice.

Formative Assessment Tools as a Product of Shared Expectations

The design and creation of artifacts to be used in instructional practice is a valuable task to be considered in professional development (Kazemi & Hubbard, 2008). Such tasks help teachers understand the functions and implications of these artifacts in different settings. Windschitl and Thompson (2011) explained that some artifacts can be conceived as tools—in the sense of activity theory's mediating tools (Engeström, 1987; Yamagata-Lynch, 2010)—that allow teachers to improve classroom practice. These artifacts are helpful because teachers can visualize, analyze, and critique their own practice in light of its classroom enactment. Therefore, these artifacts also constitute a mediating tool between the teacher and the formative assessment theory that is enacted.
According to Vygotsky (1987), mediation implies a relationship between human development and the environment, because individuals interact with the environment through artifacts, tools, and other social elements. In this learning team, therefore, the activities and discussions that occurred in the first meetings served to prepare the context: they helped the learning team members connect their current classroom practice with ideas about formative assessment and promoted their engagement in formative assessment practice. As previously described in this chapter, the team leader planned the first two meetings around a topic that was controversial for the teachers—the use of formative assessment in a high school culture that validated and emphasized grading practices. The description of the learning team meetings shows a clear difference between Diane's and Lisa's opinions about embedding formative assessment in their classrooms, despite their working in the same school and teaching the same subject, chemistry. The discussions, however, helped the teachers reach some agreement about the importance of promoting formative assessment in their current classrooms in order to better support student learning. Hence, the exit slips co-created in the third meeting of the learning team year gathered and condensed the teachers' perspectives about what constituted an adequate formative assessment tool for that high-school context. Moreover, creating that artifact served to catalyze the processes of reflecting on and analyzing classroom practice.

Formative Assessment Tools as Potential Boundary Objects

The artifacts created in the learning team meetings, such as the exit slips, have the potential to be used as boundary objects (Star & Griesemer, 1989) that connect teachers in the professional development who have different backgrounds and classroom experiences. Cultural artifacts, tools, or processes can act as boundary objects (Kazemi & Hubbard, 2008). Boundary objects (Star & Griesemer, 1989) help coordinate participants' perspectives toward a common purpose and may facilitate teacher learning and practice. Wenger (1998) noted that boundary objects connect a wide range of communities of practice and their members, despite the fact that a boundary object may have different purposes in each setting. He explained:

These connections are reificative, not in the sense that they do not involve participation, but that they use forms of reification to bridge disjoint forms of participation. They enable coordination, but they can do so without actually creating a bridge between the perspectives and the meanings of various constituencies. (p. 107)

Previously in this chapter, I described how the two chemistry teachers participated in and co-led the creation of an artifact (the exit slips). In this particular case, the creation of the exit slips drew on different sources and experiences, related to the participation of the various team members. For Wenger (1998), the design of artifacts in communities of practice goes beyond the creation of the artifact itself; it is actually about the participation of community members and how they engage. Developing the exit slips involved several steps. First, teachers brought to the discussion their prior ideas and beliefs about the scope of formative assessment.
Second, some of the discussions in the initial meetings helped teachers share ideas, classroom experiences, and their expectations about how formative assessment might fit into the culture of the school and their current practice. Third, the team facilitator guided the team activities and discussions to generate the conditions that helped teachers notice the importance of getting involved with formative assessment. Fourth, team members had developed some common understanding of formative assessment—as well as some working familiarity—and they used their new ideas and understanding to co-create artifacts for their classroom use. Thus, developing an artifact implied the conjunction of a set of conditions in pursuit of an objective.

Team members committed to enacting this tool in the classroom with the purpose of gathering information about students' ideas. When they did so, the tools crossed the boundaries of one setting (the professional development) to become part of the classroom, but with different functions. Teachers commented on the enactment of those tools in the last two meetings, although they did not have opportunities to refine the tools and reflect on the classroom implementation. Had the learning team continued as a second-year team, with multiple opportunities to connect the exit slips across both settings (professional development and classrooms), the slips would have better exemplified a boundary object that promotes teacher learning. To be considered a boundary object, an artifact needs to fulfill certain conditions: 1) modularity, or the ability to gather and integrate different perspectives; 2) capacity for abstraction; 3) accommodation to different purposes; and 4) standardization, meaning it can be understood by anyone who interacts with the object (Star & Griesemer, 1989). Due to the limited number of meetings during the professional development year, the aforementioned conditions were not fully met. Therefore, the exit slips created in the team represented an initial stage in the development of boundary objects connecting both settings.

Influence of the Learning Team on Teachers' Learning of Formative Assessment

To be part of FAME, team members engaged in learning about formative assessment as a goal. In pursuing this effort, the analysis of meeting time shows that the learning team spent the majority of its activities on the topic of formative assessment, although some components were emphasized more than others. Determining the success of the learning team is complex in terms of linking professional development opportunities to teacher learning (e.g., Wilson & Berne, 1999). In their review of research on the impact of professional learning communities on teacher practice and student learning, Vescio et al. (2008) outlined limitations of the research on teacher learning and practices. For example, although several studies recognize the impact of professional learning communities in making teachers' practices more student-centered, most of the evidence comes from teachers' perceptions of impact.
Nevertheless, in the case of this FAME learning team, the evidence from meeting videotapes and teacher interviews shows that the activities in the learning team helped the teachers to 1) recognize the importance of "balancing" their assessment practices, so that the use of formative assessment would help them verify student progress before summative assessments and provide some form of instructional adjustment or feedback; 2) recognize that their current grading and assessment practices were not adequate to promote student learning; and 3) co-create formative assessment tools to gather evidence of students' ideas in relation to learning targets and gain impetus for their classroom enactment. To illustrate these points, I will focus on the particular experiences of Diane and Lisa, the two chemistry teachers in the learning team.

Diane's Outcomes. From the beginning of the learning team meetings, Diane actively participated in the discussions and expressed points of view based on her experiences teaching chemistry. She evidenced some change in her understanding of formative assessment. At the beginning of the professional development year, Diane completed a survey administered by the FAME research team in which she described formative assessment as a "feedback system between teachers and students. The function of the system is to assist students in mastery of content and develop a deeper understanding of material" [Fall Survey, October 2011]. For Diane, the emphasis was on communicating information in order to accomplish learning goals. Over time, the discussions that occurred in the learning team—previously described in this chapter—may have contributed to refining Diane's ideas about formative assessment. Diane gave an expanded definition one year after the professional development experience. At that point, she emphasized a more active role for the teacher in monitoring and supporting student work:

Developing the classroom environment that allows me to check to see if everybody's where I need them to be, and these kids move at all different levels, so when I think formative assessment for me, that's actually me getting up, interacting with kids, coming around, checking for understanding to see if they're where I need them to be. [Interview #1, Diane, March 2013]

Of particular interest in this definition is that, while she considered a more active role for herself as the teacher, she was still focused on making sure that students were where she needed them to be (i.e., getting the "correct" answer). Other ideas that Diane mentioned in the follow-up interviews were that 1) formative assessment does not need to carry a formal grade—although she supported grading the completion of student work; 2) learning targets are key in formative assessment; and 3) formative assessment implies using feedback to make decisions, such as whether to reteach or revisit areas where students understood the content poorly. Diane also considered that the learning team helped her reflect on her classroom assessment practices in the sense that formative assessment was not an additional task to do; rather, it implied reorganizing some of her current practices. Participating in the learning team was helpful for Diane because she found herself more insightful about what she was doing in the classroom. She also believed that her assessment practices had improved and had impacted student learning. This is aligned with what Vescio et al.
(2008) concluded in their review of research on PLCs: this professional development model may help teachers focus more on students' thinking. In the two interviews conducted in 2013, Diane said that before participating in the learning team she was more focused on covering the material in order to get students ready for the test. She recognized that the discussions with her colleagues helped her be more aware of students instead of just covering the content. Diane considered the learning team experience important for becoming more aware of students' work as well as for finding ways to support students.

Lisa's Outcomes. In identifying her sources of learning about formative assessment, Lisa was much more illustrative and detailed about the learning team's contribution. In the follow-up interviews conducted in 2013, she recognized the importance of her colleagues in learning about new instructional and classroom assessment practices. She appreciated the space for getting new ideas from her colleagues and for feeling encouraged to try out others' practices used in content areas other than chemistry. Lisa explained that this process was facilitated because the learning team worked together and taught similar groups of students, allowing her to mold others' experiences in a way that was adequate and familiar for her students. She explained:

It was nice to have that team to even reflect where we are using it and what went wrong; what went good, the good things; the bad things about formative assessment, or even our understanding level. Sometimes just having a conversation about it with others, you realize you understood it a little bit more or maybe you did not understand as much as you did with discussing. It was a very knowledgeable group for me because they are all great teachers too so I just felt like I was picking up tips and hints and things that I could use in my own classroom. [Interview #2, Lisa, May 2013]

The opportunities for collective inquiry in learning teams combine the practical knowledge from teachers' experiences with the knowledge and theory generated by researchers (Vescio et al., 2008). In that sense, Lisa recognized that working with colleagues allowed her to get new ideas and insights about classroom practice and knowledge of formative assessment. The team discussions helped reshape Lisa's understanding of classroom assessment. Discussions were particularly important for reflecting on her grading practices and their implications and for making strides toward balancing the formative and summative purposes of assessment. One year after the learning team meetings concluded, Lisa described her efforts to delay grading in the instructional process as well as the reasons supporting that decision:

A lot of times I try not to do the grading or the assessment of the test until the very end. I do not even like to do a quiz because I think just doing the one number grade at the end is good because during the whole unit, I can assess by asking questions; by walking around; by making sure they are doing it correctly; checking homework but not really giving them grade for their homework. I do not give them grades for their homework because I want to make sure they understand what they are doing and practice makes perfect. I do not want to mark them down for something that they did not understand and so that we can work on it.
[Interview #1, Lisa, March 2013]

Thus, the learning team experience helped Diane and Lisa develop a better understanding of formative assessment, but what this meant was slightly different for each teacher. In the next chapter, I will describe and analyze how Diane and Lisa enacted the set of formative assessment tools in their chemistry classrooms, as well as the implications of this enactment for their respective activity systems.

CHAPTER 5: ENACTMENT OF FORMATIVE ASSESSMENT IN CHEMISTRY CLASSROOMS

In the previous chapter, I described and analyzed the participation of Diane and Lisa in the FAME learning team (the professional development). Both teachers learned about formative assessment theory and co-led the creation of a formative assessment tool to be used in the classroom. In this chapter, I will explain how Lisa and Diane enacted formative assessment in their classrooms and examine the factors that influenced their enactment process. From the research design perspective, this chapter addresses the sub-units of analysis (i.e., the teachers in their classrooms) considered in this single embedded case study (Yin, 2009). Additionally, this chapter will make connections between the classroom enactment of formative assessment and the professional development (i.e., the FAME learning team) in order to understand the teachers' appropriation of formative assessment for improving chemistry instruction.

I will use cultural-historical activity theory (CHAT) (Engeström, 1987, 1999) as an analytical framework to make sense of both teachers' enactment of formative assessment. I will present evidence of Diane's and Lisa's instructional practices to characterize their activity systems (Engeström, 1990), focusing specifically on each teacher's use of formative assessment as a mediating tool to enhance chemistry practice. I will also explore the particular organization of the components of each teacher's activity system. The triangular model presented in Figure 4 represents the activity system—the unit of analysis for CHAT—which results from the interaction of mediated and participative human action (Pryor & Crossouard, 2008; Roth & Lee, 2007) and is embedded in a social and historical context (Engeström, 1987; Wertsch, 1991; Yamagata-Lynch, 2010). In activity systems, tensions, innovations, and agency become the driving force for change (Pryor & Crossouard, 2008). The activity system is a bounded, single, and articulated system in which a series of processes and actions occur in order to accomplish a particular object (Roth & Lee, 2007; Yamagata-Lynch, 2010). Individuals participate in the activity in order to accomplish a particular object, making the object the key component of the activity system. The object configures and arranges all the components of the system. Therefore, the object-oriented actions that occur within the activity system have capacity for interpretation, sensemaking, and transformation (Engeström, 2001).

Figure 4. Activity system model (adapted from Engeström (1987))

The use of CHAT is helpful for representing and organizing complex sets of information in simple models (Yamagata-Lynch, 2010). To represent how activity systems work, it is important to consider the activity system's components as well as their features. Table 7 describes each component of the activity system.
Table 7
Activity System Components

Component          Description
Object             Goal or motive that organizes the activity
Subject            Individual or groups of individuals involved in the activity system
Tool               Artifacts, symbols, or concepts that can act as resources for the subjects
Rules              Any type of regulations that frame the activity
Community          Social group to which the subject(s) of the activity system belong
Division of labor  Modes by which the tasks are shared among the activity system subjects and the community
Outcome            Final result of the activity, in terms of learning

Adapted from Yamagata-Lynch (2010)

In this chapter, I will present evidence from Diane's and Lisa's chemistry instruction, specifically their enactment of formative assessment, in order to illustrate the different components of the activity system for each teacher and to characterize their chemistry instruction. In two years of data collection, I gathered 23 hours of videotaped lessons, so evidence of both teachers' instructional and assessment practices is abundant. Rather than documenting the teachers' change in practice between the first and second year of enactment (and attempting to make a claim about the effectiveness of the professional development), my interest is in looking into Diane's and Lisa's classroom practices and the surrounding milieu that were influential and key in the enactment of formative assessment. I will provide in-depth evidence of the characteristics, episodes, and teacher reflections that help make sense of the classroom enactment of formative assessment as a mediating tool for enhancing the teachers' chemistry practice, as well as the nuances that occurred in that endeavor.

A Starting Point for Organizing Teachers' Activity Systems

Diane and Lisa participated in the same learning team in 2011-12 to learn about formative assessment knowledge and practice. This initiation is fundamental to understanding the teachers' chemistry instructional activity systems, because the learning team was their main opportunity to formally learn about formative assessment. As described in the previous chapter, both teachers actively participated in the learning team. Diane and Lisa discussed the challenges of enacting formative assessment in a strongly grading-oriented school culture. They also became more aware of focusing assessment on student learning instead of merely grading the completion of tasks or behaviors. Diane and Lisa created a formative assessment tool (exit slips) to be shared and implemented in learning team members' classrooms. As evidenced by the learning team conversations during the creation of the exit slips, Diane and Lisa demonstrated understanding of the characteristics of tools that have the potential to be used for formative assessment purposes. For example, when discussing the creation of a formative assessment tool, Diane mentioned some features of an ideal tool: "I think for the goal [learning target]... I get it, because I want an example, like 'give me, show me what you know'. But step two, I want questions? Two solid questions that you have [different from] 'I don't understand anything'…" [Learning Team #3, Diane, January 2012]

The learning team experience added to the many other things that Diane and Lisa had in common: they taught in the same school; they taught the same chemistry course with the same curriculum; and they shared the same resources (their classrooms shared a laboratory space).
By participating in the learning team, Diane and Lisa both committed to enacting formative assessment in their chemistry instruction to promote student learning in chemistry. Thus, in terms of activity systems, the goal of improving chemistry instruction corresponds to the object of the activity. Both teachers taught similar students in terms of grade level and background characteristics (because they taught in the same school). The chemistry courses that I observed were targeted at advanced students, whose levels of learning tended to be relatively homogeneous (e.g., all students had previously taken algebra). Moreover, both teachers described the videotaped classes as quite representative of the groups that they had taught historically. Diane explained: "most of them are going into the AP program, but there're a lot of them that are not at that accelerated level so I would say they're probably a good sample of an average high school chemistry class" [Interview #2, Diane, May 2013]. Therefore, in the activity systems, the subject is composed of the respective teacher and her group of (similar) students, and the main difference between the two activity systems' subjects is the teachers themselves.

The third component of the activity system that I consider equivalent for the two teachers is the tool. The tool refers to the formative assessment "content" that Diane and Lisa learned in the learning team. This included artifacts such as the exit slips co-created in the learning team, ideas about formative assessment that the teachers learned from their colleagues, theoretical concepts of and approaches to formative assessment, and so forth. In some sense, both teachers had the same set of tools to serve as mediation in the enactment of classroom practice.

Hence, some of the components of the activity systems for Diane and Lisa are common and established. These include the activity system's object, tool, and subjects (see Figure 5). However, this is only a starting point in understanding each teacher's activity system. In the next sections of this chapter, I will provide evidence that further describes the features of each teacher's activity system as well as the nuances in each teacher's process of enactment. For example, although Diane and Lisa participated in the same learning team and had a set of tools available to utilize for formative assessment purposes, they did not enact these tools in the same way. Thus, this chapter will illustrate how the teachers understood the tools and used them in practice. In particular, I will focus on a description of Diane's and Lisa's instructional routines, the classroom enactment of the exit slips as a tool, and specific formative assessment moments in which the teachers used formative assessment to make instructional decisions.

Figure 5. Common elements of Diane's and Lisa's activity systems: the tool (formative assessment knowledge and tools), the object (improving chemistry instruction), and the subject (each teacher and her respective students).

Mapping the Terrain for Formative Assessment: Teachers' Instructional Practices

This section is based on the analysis of video of the 10th-grade chemistry classes, complemented by perspectives obtained from teacher interviews. I videotaped Diane's and Lisa's chemistry classes on four occasions (two in 2012 and two in 2013) at the same point in the school year, so the instructional units that I observed were the same in both years: energy in chemical reactions (thermochemistry) and redox reactions.
First, I will describe each teacher's space, instructional materials, and curriculum. Second, I will describe Diane's and Lisa's instructional patterns for classroom activities. Third, I will analyze classroom instruction according to the activity system components.

Spaces, Instructional Materials, and Curriculum

Diane's and Lisa's classrooms were similar in size, each with a capacity of close to 30 students. Both classrooms had a direct connection to the chemistry laboratory room that the two teachers shared. The classroom walls were covered with posters and banners with information about chemistry-related topics and motivational messages about the importance of chemistry (see Figure 6). Furthermore, in both classrooms, the ceiling tiles were painted with depictions of chemistry content. These were not there for decoration only; I saw Diane refer to one of the paintings when she explained to a student the difference in color between copper and copper chloride. The arrangement of the student desks, however, differed in each class. In Diane's classroom, desks were arranged so that students sat in groups, while in Lisa's classroom, the desks were arranged in rows, with students facing the whiteboard.

For the 10th-grade chemistry course, Diane and Lisa used the same instructional materials. The teachers did not use a textbook; the department teachers had created their own instructional learning guide. Each instructional unit had a "unit package" containing information and questions about the content, problems, and exercises. Diane and Lisa used the package as a main resource for their instruction. They usually organized the content sequence and the instructional activities according to the package organization and assigned exercises for classwork and homework. Once the instructional unit finished, students had to submit the package to the teacher to be reviewed and graded. However, after the professional development year, Lisa stopped grading homework, whereas Diane continued.

Figure 6. Views of Diane's (left) and Lisa's (right) chemistry classrooms.

Both teachers tended to have a similar approach to lab activities. Before the lab, Diane and Lisa explained the procedures to the students. The emphasis was on procedures, safety issues, and possible sources of error to which students had to pay attention during the lab. Then, students worked in groups of four to six. Based on the videotapes, I observed that students were clearly familiar with lab work; they knew how to use the instruments and were confident in completing the procedures. After the lab activity, the teachers explained the lab problems and emphasized the accurate use of the data collected in the lab, so that students would be able to discuss possible results, identify sources of error, and explain a chemical phenomenon. Lisa explained:

I usually have them kind of with the objective – do they understand what the objective is? and re-write that in their conclusion because I know that they understood what the lab was all about. It's not just to write a data table; answer the questions, but they have to conclude, using their objective, that they did what they were supposed to do. Maybe the results weren't great, but do you understand what the lab was all about, the general idea so that I can kind of base this wasn't just a lab to do a lab.
[Interview #2, Lisa, May 2013]

Diane's Instructional Patterns

Before introducing the activities for each lesson, Diane tended to spend some time informing students about the particular lesson and how it was embedded in a particular instructional sequence. Strictly speaking, Diane did not use learning targets. In some cases, she used statements that posited tasks and activities for a lesson or for consecutive lessons. For example, in her unit on thermochemistry, Diane said to the students: "we are going tomorrow to the lab…make sure that you know what Q is [energy transferred in a chemical reaction]. The goal for tomorrow, we are doing Q…, make sure you can do your Q problems, make sure you understand those thermochemical equations…that's important" [Diane, Lesson #1, March 2013]. In these moments, Diane also had conversations with her students about topics such as particular chemistry problems that had been difficult for the students in previous lessons or the sequence of content and activities for the following lessons or for the entire week, or she made connections between the chemistry content and what was going to be covered in quizzes and tests (a very recurrent topic). The classroom excerpt below illustrates the types of conversations in which she informed the students about the class activities and in which students might ask about grading.

Diane: "I have [for you the] redox review now, or can be either [the final exam review]… So you will have something to do!"

Student: "You will include these [questions] in the test?"

Diane: "You will have some questions on redox, a little quiz, there are some questions in the final exam of redox."

Diane: "So oxidation numbers will go on. As I'm looking through…I don't see really many mistakes. It looks like you can get through it!" [Diane, Lesson #5, May 2013]

In a typical lesson for Diane, after discussing upcoming tasks, the next activity tended to be a lecture in which she explained concepts to the students or described procedures for solving problems. She tended to use the unit package to take notes for the students while she was explaining, so they could have the same notes in their packages (see Figure 7). During the lecture, she asked students questions in order to check whether they were following her. The questions she tended to ask were single-response. For example, in the thermochemistry unit she said, "that is a spontaneous reaction, you are going to lose energy, so your ∆H [enthalpy] is going to be positive or negative?" [Lesson #3, Diane, March 2013]. In general, she expected students to give the right response. When students gave wrong responses to her questions, she tended to ignore them until one student said the right one; then she repeated the right response and continued with the lecture. In some cases, Diane provided evaluative feedback (e.g., "I like that! Beautiful!") when students gave the right response. In the middle of the lecture, she tended to ask questions to make sure that the students were following her (e.g., "How do you feel about that?"). On a few occasions, she asked probing questions in which students had to elaborate on their responses (e.g., "every year, this is the easiest problem on the test, when it [water] freezes…why this is the easiest problem?"). For Diane, it was essential that her students were following her lecture; otherwise, they would be lost. She explained:

If I do a lecture that's longer than 20 minutes, that's a long lecture for me. First of all, their attention spans can't handle it.
They need to do something, or I lose them; and if I lose them, I can't get them back for the hour. So that's why I'm trying to keep what I say pretty short, let them work a little bit, and then if I need to come back and turn on the overhead, I can do that or turn on the projector, I can do that to kind of work with them some more, but me being out and helping kind of get those questions going, I think that allows them to continue a little bit more. [Interview #2, Diane, May 2013]

Next, students worked individually to practice solving problems in the unit package. Diane continually walked around to see whether the students were working and had questions, and to help students when they were struggling. When students asked questions, Diane's explanations were very similar to those she had previously given in the lecture. While Diane observed students working, she often made comments or public announcements about something that she noticed or wanted to emphasize. To notice students' work, she relied on her knowledge of generic or prior students' non-normative ideas and on information about her students' academic performance—she identified students' levels based on their performance at the beginning of the year. She commented that she was very knowledgeable about the ideas that students might bring up or where they might have problems, so she could anticipate who would be struggling (and when).

While students worked, Diane continuously interacted with them to check whether they were getting the right responses. She said, "I feel like I'm trying to like pull it out of them" [Interview #2, Diane, May 2013], and if she noticed that no student was able to give the response that she was looking for, she started probing students. Diane explained, "I want them to kind of think. 'why did you say that?' 'what were you thinking that made you say that?' 'is that what you really meant?'…" [Interview #2, Diane, May 2013]. She also commented that she used the information collected from the high-performing students to make instructional decisions such as a public announcement because, "if they don't get it, I know the rest of them are probably lost... so I know if those kids are struggling, there's more kids struggling" [Interview #2, Diane, May 2013]. Nevertheless, Diane recognized that she was not so familiar with the work of students who did not talk with her during the lessons.

Lisa's Instructional Patterns

Lisa's lessons tended to be structured into two parts. At the beginning, Lisa tended to note the topics to be covered in the lesson, and then she introduced the main concepts through lecture. She rarely mentioned learning targets or goals for the day, but she tended to name the activities (sometimes making connections with future lessons) and emphasized certain things in order to focus students' attention. For example, when she started introducing the redox reactions unit to the students, she said:

Today it is the first day of oxidation reduction. This is your last package. Everything in this will be learned in three or four days this week and then, three days next week. So everybody has the package. Ready, redox is a short term describing oxidation and reduction of elements of a chemical reaction. [Lisa, Lesson #5, May 2013]

Lisa used the whiteboard to explain and write the main points of the content that were important for the students. This practice was different from Diane's; Diane used the unit package for writing explanations (see Figure 7).
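As background for readers unfamiliar with this content, the equation both teachers were annotating in Figure 7 is the Gibbs free energy relation, which in the standard form taught in introductory chemistry is

\[ \Delta G = \Delta H - T\Delta S \]

where \( \Delta G \) is the change in Gibbs free energy, \( \Delta H \) the enthalpy change, \( T \) the absolute temperature, and \( \Delta S \) the entropy change; a reaction is spontaneous when \( \Delta G < 0 \). This summary is provided only for context; the exact notation in the teachers' notes may have differed.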
Lisa also tended to lecture to explain and review—step by step—the exercises that students had completed in the previous lesson. Sometimes Lisa had students provide experimental data or explain a procedure to support her explanations. Lisa's focus was on providing clear explanations of the concepts as well as giving the general rules for solving the problems, so that students would be confident in their learning before practicing. During her lecture, Lisa asked questions to see whether the students were following her, especially to check how the students understood her explanations. The questions tended to be single-response and, similarly to Diane, Lisa also tended to ignore students' wrong responses. Nevertheless, there was a slight difference: Lisa tended to use students' right responses to support her explanations. She clarified the concept or procedure mentioned by the student or connected it with prior knowledge. For instance, when reviewing a problem that the students had solved in the previous lesson, she asked:

Lisa: "What's missing from the information of [question] number seven?"

Student: "The ∆H" (enthalpy)

Lisa: "The ∆H. Probably, find ∆H. Use any of your past prior knowledge, the heat of formation, products minus reactants. That means you have to pull out, from that fancy chart and go through products minus reactants" [Lisa, Lesson #4, March 2013]

Figure 7. Comparison between Lisa's and Diane's notes explaining the Gibbs free energy equation. Lisa (top) used the whiteboard to write notes while she explained, whereas Diane (bottom) completed her notes in the unit package. Note the differences in the organization and level of detail of the notes.

After the lecture, students worked on the exercises in the unit package. Lisa tended to give students the numbers of the exercises that she expected them to complete. Similarly to Diane, once the students were working, she spent most of the time responding to students' questions and walking around the students' desks to monitor their work. She posited: "Usually, it's like, 'Okay, would you like me to go over anything?'…and if I get a yes: okay we'll do some more practice. No? Okay, you're ready" [Interview #2, Lisa, May 2013].

Similar to Diane, Lisa used her knowledge of students' non-normative ideas and levels of performance as tools to notice students' ideas, but her notion of these concepts was broader and more complex. She understood that students' non-normative ideas could be generic, but she was more interested in gathering the particular ideas of her own students. She had a system of "key students" for monitoring students' understanding and noticing their ideas. Lisa explained how she identified these students:

Getting to know the class for a while, I can see if they understand it or they don't by the looks on their faces of confusion. I try to look for those and I have a couple key students that I look at to make sure that if they understand what's going on, if they understood my lecture, then I think I'm good to go. I look at all levels of kids. I'll look at my lower level and my upper level and if they understood the lecture and paid attention, then I know that I'm, average, I'm good to move on. If say that the kids kind of are confused, I'll say, "Okay, let's stop; let's rewind. What part is confusing?" And I don't like the words, "I don't understand anything." Well 'cause there has to be something that they do understand and then I like to go from there.
"Okay, what parts," and I like to make them think about their understanding. I like them to think about their learning and their level. "Where am I? What parts of that did I now understand?"…So that's kind of how I'll make the decision to go on. [Interview #1, Lisa, March 2013]

Lisa used the key students to regulate her pace and gather evidence of students' understanding. The use of key students helped Lisa make instructional decisions such as whether to reteach, give more time for practice, make public announcements, or enhance students' public participation in providing explanations or solving problems. In the next sections of the chapter, I will provide more detailed descriptions of Lisa's use of key students and how this system contributed to her enactment of formative assessment.

Synthesis and Connections with Activity Systems

The descriptions of Diane's and Lisa's lessons above are somewhat generic. They represent some trends in their instruction, but they may not apply to all lessons. Even though there are differences between Diane's and Lisa's instructional patterns, both are constrained by time demands, the curriculum, and the instructional culture. From the perspective of the community (in the activity system), both teachers recognized that they were teaching a course with a large number of standards to cover. Although Diane and Lisa were able to cover all the material during the school year and prepare their students for the district exams and college admission tests, they did not feel as though they had enough flexibility to implement other types of chemistry activities that might promote a deeper understanding of chemistry. Lisa explained that tension: "When I first started teaching until now, it's like, 'I don't teach that subject.' I feel like I don't teach a lot of chemistry anymore. I feel like I teach a lot of math, and I teach a lot of the surface kind of chemistry" [Interview #2, Lisa, May 2013]. Similarly, Diane was emphatic in expressing her frustration about a high-school curriculum that saturates students with content but does not provide room to teach the basics. She expanded on that idea:

Really get kids thinking along those lines [core chemistry ideas] instead of trying to jam down their throats Hess's Law, and this is how it applies because we don't really have enough time to do it service; and I think that's the point of university. That's what a university is for because they break down chemistry into organic and inorganic and thermal and physical chem so that the kids have a much more in depth, thorough understanding. We're just giving them pieces. We're just throwing stuff at them and hoping some of it will stick, but I don't think we give it enough depth for it to stick. I'm not saying they don't learn it, but they would learn it better if we had more time to elaborate, and we don't. We have 133 content standards. This is what we need to cover, and we do the best we can. [Interview #1, Diane, March 2013]

In terms of the activity system, the main point is that the influence of the community on Diane's and Lisa's activity systems is very pervasive, and it significantly shapes their instruction, especially with respect to the chemistry content enacted in the curriculum. Even though both teachers are experienced, their chemistry teaching tends to be fact-oriented and traditional (Gilbert et al., 2004; Osborne, 2012).
The lessons that I observed were essentially focused on content and on solving exercises and problems with an important algebraic component. Lab activities were more related to the experimental verification of chemical phenomena than to facilitating inquiry. Grading was a key factor that served to orient and regulate student work (especially for Diane). In that context, however, there were slight differences between Diane and Lisa in terms of how they asked questions, provided feedback, and utilized student understanding to guide their chemistry instruction. Diane was largely focused on teaching, questioning, and providing feedback based on completion of tasks and procedures that targeted the right response she was expecting. Lisa was somewhat less focused on getting a right response from students and a little more concerned with knowing students' ideas. She showed some concern for helping students become more aware of their ideas, for example, whether they were struggling with chemistry concepts or with algebraic procedures. These characteristics, depicted in the activity systems for Diane and Lisa, are important for understanding the role of tools (i.e., formative assessment) that can be used for enhancing chemistry instruction, as well as the ways in which both teachers established the rules and organized the division of labor. What is important to note is that any formative assessment practice that both teachers enacted was inserted and embedded in this general lesson structure. Thus, in the next section I will describe and analyze how Diane and Lisa used formative assessment as a mediating tool for promoting better chemistry instruction (the object) and how this enactment influenced their classroom activity systems. Enacting Formative Assessment Tools In this section, I will focus on two classroom episodes—one for Diane and one for Lisa—that account for the teachers' practice of formative assessment. In terms of activity systems, this piece corresponds to the enactment of pedagogical tools to enhance chemistry instruction (the object). Among the collection of formative assessment tools that the teachers learned in the learning team (artifacts, concepts, etc.), I will focus on the exit slips that Diane and Lisa co-created in the learning team. The moment when this creation happened was described in the previous chapter. In this section, I will describe both teachers' experiences using that formative assessment tool in their classrooms. I describe an episode in which Diane used the exit slips with her students and her reflections on the enactment. Unfortunately, I did not have an opportunity to capture Lisa's enactment of the exit slips, but I include evidence from her interviews about her enactment experience. Diane's Enactment In March of 2012 Diane used the exit slips in her chemistry class. She used the tool in a consecutive two-day sequence, just before the summative assessment for the unit on energy in chemical reactions. On the first day (Monday), she planned activities to review the content as preparation for a quiz. First, students individually completed a "pre-assessment practice test" activity that included the main topics covered in the unit. Diane told the students that the practice test was not going to be graded, so they should respond only to what they knew. Second, after students completed the practice test, Diane did a public review of the material and focused her efforts on the water heating curve, one of the topics included in the practice test.
She asked numerous questions of the students to verify if they were able to identify the changes in the states of matter as well as the meaning of the different stages on the graph. Diane completed the equations for the different states of matter, based on experimental data that students had obtained in a previous lab activity. After finishing the review and before ending the lesson, Diane passed the exit slips out to the students so they could assess their understanding of the lesson (see Figure 8). Figure 8. Diane's students using the exit slips. Note that the student on the left selected the yellow card and the student on the right is holding the yellow and the green card in order to make a decision. In Tuesday's lesson, Diane started by explaining which activities of the unit package should be completed, because students had to submit the package to be graded. In preparation for the summative assessment, Diane re-explained the equations related to the water heating curve and connected those equations to potential questions on the quiz. Before administering the quiz, she provided feedback on the exit slip results. She said to the students: As I'm looking through your feedback sheets from yesterday, there are some people got totally lost, hopefully going over. I will say, as I'm going through the yellow, if you went through and you look at the pretest we did yesterday and you do it by yourself, you are going to be fine, because most questions that I have go all the way back to that cue, [because] you are getting this equation. [Diane, Lesson # 2, March 2012] This situation poses interesting points of analysis from the perspective of the enactment of the exit slip tool. A few months before, in the January 2012 learning team meetings, Diane had demonstrated understanding of the purpose of the tool. She said when discussing with her colleagues what to include in the exit slips, "I think for the goal [or learning target]… I want an example, like 'give me…show me what you know'. But step two, [do] I want questions?...two solid questions that you have" [Learning Team Meeting #3, January 2012]. This statement makes clear that Diane knew the purpose of the tool. However, her use of the tool in her classroom was not consistent with her apparent understanding. Diane used the exit slips with her students after reviewing the material multiple times. Students completed a practice test, and then the main topics were thoroughly reviewed and explained, so the actual use of the exit slip in that lesson was largely redundant. In addition, the feedback that Diane provided to her students based on the exit slips was generic, non-descriptive, and non-actionable. It would not have served to move students' thinking forward. When Diane told the students who picked up the yellow card that they were going to be 'fine' on the quiz if they had already reviewed the practice test, she was neither supporting students' learning nor providing specific feedback on the particular ideas with which they struggled. Diane referred neither to the ideas that students wrote on the cards nor to strategies to help students perform better on the quiz. Diane also said she found that some of her students were "totally lost," but she did not provide any support for those students. In addition, the use of the exit slips was problematic in terms of timing, because the tool was administered the day before the summative assessment and the feedback on the slips was provided just before students completed the quiz.
This feedback was non-actionable for making changes in learning or instruction, especially because the teacher and the students did not have time to make instructional decisions or adjustments. Connections with Diane's activity system. Did Diane know that the exit slips were not going to be effective if administered just before the summative assessment? The answer is uncertain, but there is evidence that in one learning team meeting she posed this question when co-creating the exit slip. In this particular case, the formative assessment tool was designed to help the teacher gather information about students' ideas to guide instructional decisions and provide feedback to students. It was also designed to help students be more reflective about their learning. However, Diane's use of the tool was disconnected from this purpose. After several attempts, Diane gave up using the exit slips. In an interview conducted one year later, she explained that the tool did not match her instruction and was ineffective for her. I tried for a little bit those cards, of I don't understand what I'm doing, and that's fine. It just doesn't fit what I do. I mean, I go back over, and I'm like, "Okay, I got some grades." But it's the same kids I already know because I watched. So I got the yellow card, and I'm like: "Yeah, I know." I know this kid always gives me the yellow card because they're always on the fence, or I know this kid always gets it. So having that piece of paper, I really don't think helped as much as just interacting with the kids. [Interview #1, Diane, May 2013] What does this mean in terms of the activity system? In terms of the division of labor, Diane does not seem convinced that her students can provide her with information about their learning and be actively engaged. She "already knew" the levels of performance of the students, based on previous summative assessments, and she seems sure that this is not going to change. Similarly, Diane did not seem very interested in providing room for students to identify and communicate their ideas or in helping them be more engaged in the process. Regarding the activity system's rules, Diane was much more focused on questioning and providing feedback based on completion of tasks and procedures that targeted the right response. This also coincides with Diane's instructional routines, in which she intensely monitored and questioned students in order to verify that they had the right response and were completing the right procedures. Diane's understanding of an adequate formative assessment tool looks like a quick check of students' ideas rather than something that makes students' ideas visible, which implies a limited and misdirected use of formative assessment. In terms of the activity system, the tool was used as a second check to see if students knew, not what students knew (Torrance & Pryor, 2001). Lisa's Enactment Unlike with Diane, I did not have the opportunity to see Lisa enacting the exit slips in her classroom. However, in the interviews she provided rich descriptions of her use of the exit slips. Lisa mentioned that she had partial success in using the exit slips in her first year of enactment. She attributed this partial success to the timing of her enactment: she started using the exit slips in the second semester, when teachers were pressured to cover the curriculum before the final exams. After the professional development year, Lisa had a positive view of enacting the exit slips.
She said in the interviews that using the exit slips was helpful for students to assess their progress during an instructional sequence. She also considered that using the exit slips helped her base instructional decisions on student evidence. She explained: With the exit slips if I get a positive overall slip where most of them say they understand it, then I go the next day and I say, "Okay, since you said you understand this, let's move on, maybe make it a little bit more challenging." Because then if they do understand the concept, then they should be able to apply their knowledge to the next level. If they say they were not understanding it, then I will go back and kind of hit the basic key points again and try to re-introduce the topic maybe in a different way, maybe using different examples, real-world sometimes examples. [Interview #1, Lisa, March 2013] Lisa perceived that her use of the exit slips positively affected students' engagement. She mentioned that it was helpful for students to communicate their ideas, especially because they could do it anonymously and safely. Lisa noted that she used a mailbox so students could put their cards inside. She explained that many high-school students did not like to participate publicly in the lessons, to avoid being 'academically labeled' by their peers—a fact probably related to the values around grading in high-school culture. Additionally, she commented that some students felt more confident asking questions when they saw other students in the class doing the same. Lisa also perceived that the use of the exit slips may have triggered student participation. She provided examples of how she used the exit slips: By choosing the green, they're not just saying: "I understand it" and they're turning it in, but they actually have to show me how they understand it; tell me what you understood or even do an example problem so that they really think about how did they understand it. Or, 'I struggled in this part so I'm going to use yellow.' Or, 'I didn't get it at all, but this is what I didn't understand.' So they're not just giving up with the red. They have to tell me, 'I didn't understand this because…' Or, 'I didn't understand this particular part,' and why or what was difficult about it? Was it just the math, algebra or was it the actual wording? So I think it made them think about what they were writing down. [Interview #1, Lisa, March 2013] Initially Lisa implemented the exit slips in the 10th grade advanced chemistry classes, but she discovered that the exit slips were more effective in her lower-achievement chemistry course. The tool provided more room for eliciting different types of students' ideas in that setting, because the students in the lower-level classes were more academically diverse than the students in the advanced class. In those courses, she used the information from exit slips to make instructional decisions based on students' responses, especially when teaching big scientific ideas. Lisa said that she was able to find a niche in those courses because she teaches them at a "slower pace, and we don't have as much content to cover as the other ones [courses], and you don't use the math" [Interview #2, Lisa, May 2013]. Connections with Lisa's activity system. Although I did not observe Lisa enacting the exit slips in her classrooms, she expressed a more favorable stance toward the use of this tool, especially for particular types of classes and students.
It seems that Lisa was able to match the exit slips to her practice. What does this mean in terms of Lisa's activity system? Lisa understood the exit slips—an example of an activity system tool—in a way that is more consistent with their intended purpose. Lisa put the emphasis on the exit slips as a tool to better understand what students knew—in particular, to identify students' ideas and to promote participation. Furthermore, Lisa had to make some adjustments and changes in her practice to make the enactment of exit slips possible. In terms of rules, she developed strategies to ensure students' participation in a safe environment that allowed students to communicate their ideas (e.g., the use of the mailbox). Moreover, Lisa's description of her exit slip use was somewhat less focused on getting a right response from students and a little more concerned with knowing and understanding students' ideas. In terms of division of labor, Lisa reported organizing her class in a way that gave students some opportunities to ask about their conceptual doubts. Lisa believed that students needed a space to communicate their ideas and be more reflective in their thinking. Her description pointed to helping students identify and differentiate the ideas with which they struggled (e.g., differentiating the chemistry concepts from the procedures that involve mathematical and algebraic content). With her actions, Lisa was giving room for the students to be more cognitively engaged in their learning. She was also providing opportunities for the students to regulate their own learning. Synthesis of the Exit Slips' Use and Connections with Learning About Practice The experiences of Diane and Lisa in using the exit slips illustrate the complexity of enacting, in the classroom, artifacts that were created in the context of professional development. In the previous chapter, I suggested that the creation of the exit slips in the learning team can be interpreted as an initial stage in the development of a boundary object (Star & Griesemer, 1989) that represented the interests, motivations, and different purposes of the learning team, especially in terms of the members' participation in their learning journey about formative assessment. In these first attempts in the classroom, the exit slips were designed to elicit students' ideas and promote students' (and teachers') awareness of their learning. However, the exit slips were enacted in different ways. The evidence suggests that the exit slips were more helpful for Lisa than for Diane. Diane was emphatic that the exit slips did not fit with her teaching and her approach to formative assessment, even though she understood their purpose. For Lisa, the exit slips were not only helpful for her formative assessment intentions; they also triggered her reflection about her instructional and assessment practices, leading her to focus more on student thinking. Therefore, her use of exit slips scaffolded her thinking about how to transform her practices to make the chemistry content more accessible to students (Hammerness et al., 2005; Windschitl & Thompson, 2011). Although the use of the exit slips corresponds to a small component of Diane's and Lisa's instructional practice—and only a small piece of formative assessment—this tool represents a clear connection with the professional development experience. What can be said in terms of Diane's and Lisa's learning about formative assessment? Grossman et al.
(1999) defined five levels of appropriation of pedagogical tools to be used in a social environment: (1) lack of appropriation, (2) appropriating a label, (3) appropriating surface structures, (4) appropriating conceptual understandings, and (5) achieving mastery. Based on the evidence collected in the learning team meetings, lessons (for Diane), and teacher interviews, Diane understood the characteristics of the tool and the "formative" purpose of gathering information about students' understanding before the administration of summative assessments. However, she was not successful in using the tool in a timely manner to make instructional decisions or to promote student engagement. Lisa, for her part, understood the purpose of the exit slips and how they were connected to formative assessment. Although I did not record evidence of her use of the tool in the classroom, her reflections showed a deeper understanding of the implications of enacting the tool. Based on her description, it seems that using the tool influenced her practice and made it more student-focused, especially in enacting formative assessment by using information from students' responses to help them move forward (Ruiz-Primo & Furtak, 2007). It is not clear, however, how systematic Lisa was in using the tool in her lessons and, more importantly, how the exit slips connected with other pieces of formative assessment (e.g., learning targets, descriptive feedback) to enact this practice in an articulated manner. Hence, regarding the use of the exit slips as a tool that enabled the learning experiences of the teachers who participated in the professional development, Lisa's level of appropriation can be characterized as appropriating conceptual understandings, while Diane's can be approximated to appropriating surface structures. Lisa understood the rationale and purpose of the exit slips and was able to characterize their features, although she did not fully articulate them with the other components of formative assessment. Diane identified some features of the exit slips, but she was not able to identify their conceptual underpinnings in the context of formative assessment and enact them accordingly; her classroom enactment was disconnected from the tool's purpose and lacked connections with the process of formative assessment. Formative Assessment Moments: Zooming in on Classroom Practice In this section, my aim is to illustrate and problematize how and why Diane and Lisa made decisions in particular formative assessment moments and to explore the diversity of factors that may have influenced their instructional decisions. I will describe in detail two moments related to formative assessment practice that I observed and that I asked the teachers to reflect on in the interviews. My purpose in this section is neither to describe a typical episode of both teachers' formative assessment practice nor to characterize an exemplary formative assessment moment. Similarly, I am not making claims about teachers' learning of this practice. I am more interested in explaining in detail how both teachers made sense of these formative assessment moments and how these moments connect with each teacher's activity system. Both moments refer to Diane and Lisa teaching the same content—determining oxidation states in redox chemical equations. Diane's Formative Assessment Moment In May 2013, Diane taught her redox (oxidation and reduction) reactions unit, the last unit before the final exam.
On my first day of videotaping (Monday), students were working with a worksheet that contained a list of redox reactions. Students had to identify the oxidation states of atoms that were transferring electrons. They also had to write the equations for the half-reactions (the oxidation and the reduction) and to identify the oxidizing and reducing agents for the redox reactions. Students worked individually while Diane walked around the students' desks to observe how they worked, to respond to their questions, and to make comments when she saw students struggling or "making mistakes." On Tuesday, Diane started the lesson by explaining that she wanted to review a couple of exercises "that were causing some irks." Diane explained the two exercises using ideas from the beginning of the redox unit and some prior chemistry knowledge (from the beginning of the course). Diane asked students questions to check if they were following her explanation. For her, the main issues that students struggled with were related to determining the oxidation states, as she explained when she saw the video. It's the oxidation. They don't understand the oxidation states. They don't understand that, if they're losing electrons, that the electrons should be… you should show them in the products because they're being knocked off. Or when they're being reduced, that they're taking them on so it should be on the reactant side. They don't understand. I mean, they're getting it, but they struggle with that. [Interview #2, Diane, May 2013] Her comment was consistent with what she had explained to me the day before (Monday) when I asked her how she noticed students' doubts and mistakes when they asked her for guidance. She told me that she looked at whether the students had the right oxidation states on the worksheet and then paid attention to the half-reactions to see "if the states are correct to make sure if they know that the charges are coming down or the charges going up. It usually when they look at [pay attention to] the oxidation states they are good" [Lesson # 5, Diane, May 2013]. In the end, Diane reviewed not just two problems but ten in total. She used these problems to re-explain the rules for determining the oxidation states and to show the students what to pay attention to. For example, she reminded them of the rule for determining the reducing and oxidizing agents, saying that both would be "on the left side of the equation…they will both be reactants. So when you guys start to pick up what's oxidized and what's reduced you're looking at the right side of the equation" [Lesson # 5, Diane, May 2013]. Later, Diane had students complete the following problems for the rest of the lesson, saying that she would show the responses at the end of the hour. The formative assessment moment I will focus on illustrates how Diane provided support to one student in the class named Hannah (a pseudonym). Hannah was working individually on her unit package and asked Diane a question because she was not able to find the oxidation state for manganese in a redox equation (the reaction was potassium permanganate combined with hydrochloric acid). The following excerpt shows the dialogue between Hannah and Diane, in which Diane responded to the student's question. Hannah: "I don't know if the oxidation number is right…" Diane: "Here you do! So you're saying this is a +5? [The state of oxidation for Mn in MnO4-]" Hannah: "Yes…" Diane: "No, that's seven!" Hannah: "I don't understand, how do I know this?" Diane:
"They all together [the oxygen atoms] are going to be -8 that is going to be +7." Hannah: "Oh, OK! I kind of get it!" Diane: "Does it make sense? So it's +7 to +2 [the change in the half reaction], what's going on there?" Hannah: "It goes...?" Diane: "…down." Hannah: "Down" Diane: "So it is…?" Hannah: "Oxidation? Reduction?" Diane: "Reduction! Charge is reduced, so the charge goes from +7 to +2." Hannah: "So, it makes sense, so these two [chlorine atoms], the one, and the others…" Diane: "Zero here, -1 here. [oxidation states for chlorine]…makes sense?" [Lesson # 5, Diane, May 2013]. A few minutes after this interaction, Diane drew two diagrams on the whiteboard that represented two atoms with 14 protons in their nuclei (she used the + symbol to represent the protons), in order to illustrate a change in the oxidation state from +7 to +2. In the first drawing, she drew seven lines that represented electrons on the model "orbits." Diane then called Hannah over and said: Diane: "Hannah, ready? This is an atom. I never ever, ever lose my protons. They are in the middle, so if I have 14 protons here, there are 14 protons here [Diane points to each atom]. If I have a +7 charge, that means…[Diane counts from 1 to 7], I have 14 protons but I have 7 electrons, because my overall charge is +7. Does it make sense?" Hannah: "Yes!" Diane: "Because seven of these electrons have a positive buddy, and seven don't. So that my overall charge is +7 [writes this number on the board]. If I'm going to +2, that means what…?" Hannah: "Hmm, there's…, to 5?" Diane: "So, how many more electrons do I have to reduce my charge?" Hannah: "Five" Diane: "This one has more electrons now. It gained, so the charge is reduced. Does that help? [in the model she counts the electrons]" [Lesson # 5, Diane, May 2013] This formative assessment moment is relevant because it shows how Diane made an instructional decision based on what she noticed from her interaction with Hannah. In the interview where I asked her about this moment, Diane noted that she decided to change her explanation because she knew that the previous explanation did not make sense to Hannah. Diane expanded on her interaction with Hannah: "she's like, 'mm-hmm, yeah', and I knew she had no idea...that was just her response. 'Oh, yeah, I kind of get it.' And she was going to figure it out, but I knew she didn't fundamentally understand it." [Interview #2, Diane, May 2013]. I asked Diane why she decided to explain this concept to Hannah by drawing the diagrams. She stated that Hannah is a very hands-on student who needs visual representations to understand the content and that she struggled with concepts that she could not manipulate. Diane reflected on the situation and explained that some kinds of representations are necessary for some students: "if she can see… all the positives in the middle, and the negative…you have to a negative for a positive to cancel it out…She's just one of those kids that needs a visual to kind of kick it in" [Interview #2, Diane, May 2013]. Thus, Diane's comment suggests that her expectation for students solving redox problems is that they work at the symbolic level, without making connections with the microscopic level (Gabel, 1998; Thomas & McRobbie, 2001). In this particular case, the microscopic idea is that a change in oxidation state implies a transfer of electrons between atoms.
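For reference, the arithmetic underlying this exchange can be written out explicitly (my reconstruction from the dialogue; this notation was not part of the lesson materials). The oxidation state of manganese in the permanganate ion follows from assigning each oxygen a state of -2, and the charge model that Diane drew follows from counting protons and electrons:

\[
\mathrm{MnO_4^{-}}:\quad x + 4(-2) = -1 \;\Rightarrow\; x = +7
\]
\[
\text{charge} = \text{protons} - \text{electrons}:\quad 14 - 7 = +7, \qquad 14 - (7 + 5) = +2
\]

That is, gaining five electrons reduces the charge from +7 to +2, which is the change Hannah needed to interpret in the half-reaction.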
Diane considered that working with this kind of representation, such as the atom diagrams, was important for some students. Although Diane recognized that this visual representation might help some students gain a deeper understanding of the content, she was reluctant to use these types of models in her explanations for the entire class. Her rationale for this decision was that she did not want to promote misconceptions in students' ideas that might affect their performance in college chemistry courses. She explained: So I'm almost reluctant to put a Bohr model down because I think, you know, they're going to go to college. Professors are going to say, "That's not true. I don't know why your teacher taught you that." But high school kids have to be able to see something. I mean, this is so difficult for them to understand what that even means that I have to give them something that's more concrete, and I just always kind of preface that, you know, this isn't the real model, but let's just say if we had to draw it. [Interview #2, Diane, May 2013] Contribution of this moment to understanding Diane's activity system. This formative assessment moment illustrates the main characteristics of Diane's instruction in terms of helping students get the right response. In this particular case, Diane recognized that Hannah did not understand the meaning of reducing an atom (gaining electrons). When Diane explained to Hannah how to solve the problem, she focused on mechanical procedures and on working at the symbolic level of the chemical phenomenon (Gabel, 1998). Once Diane noticed that Hannah did not understand the problem and the underlying concept, she decided to change her teaching. Diane re-taught in a different way, and the new explanation pointed to what she considered essential for Hannah in that moment: (1) understanding the charge of an atom in relation to its numbers of protons and electrons and (2) understanding that a change in an atom's charge implies a change in its number of electrons. Diane used a simple representation to explain this and to help Hannah learn. What is paradoxical about this situation is that Diane was able to notice that a student was struggling and that she needed a particular type of representation to understand the content better. However, this type of instruction, which included these representations, was not part of her usual instructional practices. As noted above, Diane's instruction was so focused on solving mathematical procedures and ensuring that students got the correct answer that she omitted teaching the core ideas about redox reactions. She was aware that representations are important to support high-school students' learning, and she knew that her students had different learning styles, but she continued using the same rote-learning strategies for chemistry. What is most relevant in this situation is her justification for not using representations that target main chemistry ideas: to avoid interfering with students' chemistry experience in college. In terms of the activity system, this moment illustrates that, for Diane, the object of enacting formative assessment to improve instruction was aimed at providing students with correct responses and giving them tools to solve chemistry-mathematical problems. However, the core ideas behind these mathematical problems were not addressed.
Diane said in an interview that she wanted her students to "understand chemistry", but her ideas about chemistry instruction are more focused on learning facts and procedures. She also thought that students learn by making connections between pieces of chemistry content, moving from chemistry facts to application (e.g., solving problems, preparing for college exams). If the object of the activity system was understood as such (i.e., ensuring students could solve mathematical problems on summative assessments and could be prepared for college), the reason that Diane provided little room for the elicitation of students' ideas is clear. The structure of her lessons was designed mainly to ensure students' correct responses to (mathematical) problems. Lisa's Formative Assessment Moment In May 2013, Lisa was teaching the same instructional unit about redox reactions. Lisa explained that the redox reactions unit is the last one of the year, and many of her instructional decisions were influenced by the final exam and the expectations for students learning chemistry. She posited that there are "five questions on the final exam on redox, where there's 85 questions total. [So] I hit the main points that will be covered and that they need to know if they go into the advanced class, the AP class." [Interview #2, Lisa, May 2013]. The lesson in which this formative assessment moment occurred was on the first day of the redox sequence (Monday). Lisa noted that the content in this unit tends to be difficult for her students because they have to integrate the new concepts with their previous knowledge, such as "how to name a compound, how to write a compound, and…refer to polyatomic ions and their charges and their formulas" [Interview #2, Lisa, May 2013]. Lisa began by introducing the unit package to the students and then explained what a redox reaction was—that oxidation and reduction reactions are paired because one element is oxidized and the other is reduced. She lectured and explained to the students that oxidation and reduction refer to losing and gaining electrons, respectively (different from Diane, who focused her lecture only on solving equations). Then Lisa provided examples of atoms gaining or losing electrons. She explained in her lecture: Lisa: "An oxidation example, I'll put it over here. If I have the elemental form of magnesium and it goes to the ion form of magnesium. So what it happens here? I'm going from having no charges to +2. So it's oxidized because I lost two electrons, you guys see that, no charge, that is a zero, neutral, going to +2, that means I lost two electrons, so I'm going to show that lost as a product, because I lost them, it's at the product phase. Where in reduction I'm adding, and when you add things, it's going to be part of the reactants. So, let's use Fe with the +3 charge, so how many electrons I may going to have to gain to get elemental Fe?" Student: "Three" Lisa: "Three, absolutely, we are going to add three electrons to the reactant side, good job! Because this is reducing, I'm adding three electrons to reduce, to make it more negative, OK?, it does make a little bit of jumble in your brain, because when you are adding electrons, you have to remember, how I'm adding and reducing at the same time, but just remember what we're adding, you are adding a negativity, a negatively charged particle." [Lesson # 5, Lisa, May 2013] Lisa's explanation was sequenced and organized.
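In symbolic form, the two examples from Lisa's lecture correspond to the following half-reactions (my rendering of her verbal explanation; the transcript does not show how she wrote them on the board):

\[
\text{Oxidation:}\quad \mathrm{Mg} \;\rightarrow\; \mathrm{Mg^{2+}} + 2e^{-} \qquad \text{(the lost electrons appear as products)}
\]
\[
\text{Reduction:}\quad \mathrm{Fe^{3+}} + 3e^{-} \;\rightarrow\; \mathrm{Fe} \qquad \text{(the gained electrons appear as reactants)}
\]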
In that excerpt, she explained the meaning of an oxidized and a reduced atom (i.e., that electrons are removed or added). While her explanation was organized, she focused only on the change in the number of electrons and not on the balance of protons and electrons in the atom. Then, she explained how to assign the oxidation states for different atoms. She set expectations for the students about learning this topic: "we are going over the rules and have some examples with you, so you can be a little bit comfortable, a little bit, probably, you're not going to be super catching on until we practice a little bit" [Lesson # 5, Lisa, May 2013]. Lisa taught five rules, in detail, for determining the oxidation states of different elements. She asked questions to verify whether the students understood the main concepts in the lecture as well as to connect with previously taught chemistry content. For example, she reminded the students of the location and properties of some periodic table groups for determining the oxidation states of atoms in groups 1, 2, and 17 of the periodic table. She then modeled how to determine the oxidation states of two similar compounds to which the rule applied. After reviewing the rules for determining oxidation states, Lisa modeled the procedure to solve four problems from the unit package. These four problems involved different rules, and while she went through them, she asked several questions to make sure that the students used the right rules. Lisa then let students complete the rest of the exercises individually. She started walking around to see students' work and respond to their questions. While Lisa was walking along the students' desks, 12 minutes after explaining the five rules to the class, two students (Kelly and Emma) asked questions about determining the oxidation states in compounds for which they had to use the different rules that Lisa had explained at the beginning of the lesson. Kelly: "[Mrs. Lisa]. For the first one, is it -3 or -6?" Lisa: "Well, …So what's the one thing you look at if you can't determine? … You look at if it is a group 1, 2, or 17. So this is in group 1, so it's going to have a +1, then you can determine the next one." [90 seconds later] Emma: "Why?…isn't this a -3? Only…" Lisa: "Only if it [the element] is diatomic by itself, not attached. So it was just sulfite or nitrate." Emma: "OK" [3 seconds later] Kelly: "In order to those two H3…, how is it…?" Lisa: "So it is attached to what? and non-metal, so it's acting as a metal, so it's a +1." Kelly: "So there it go, so…" [Lesson # 5, Lisa, May 2013] In explaining this situation, Lisa referred to the problematic ideas that underlay Kelly's and Emma's questions. She noticed that Kelly and Emma were struggling with the same ideas, and she noticed this trend in several other students as well. I don't really have a problem with the kids' identifying oxidation versus reduction…but assigning the numbers and not relaying it back to charge, sometimes that's difficult if they can do… like if it's a three-element compound, they can do the outside, and they have to figure out the inside. It's multiplying and adding and realizing that that has to balance to a zero just like when they formed a compound back in October when we were writing and naming compounds, and that was hard for them to not look at the charges and try to balance that out. I think that whole misconception of things can be zero in a compound, that was huge for them.
[Interview #2, Lisa, May 2013] Unlike Diane, who worked to get Hannah to the right response, Lisa tried to remind Kelly and Emma of the rules for determining the oxidation states that she had taught before, so that the students could learn to use them. Lisa believes that continuous practice makes a difference. She emphasizes for students the things that "are very important, and I'll repeat them three, four, five times so that they… like pure elements are zero. I will make them say over and over and over again, so that… 'Okay, this must be important'…" [Interview #1, Lisa, March 2013]. Lisa mentioned that Kelly is one of the key students that she uses to monitor how the class is working and to provide her with information to regulate the pace of instruction and to make instructional decisions (when pertinent). She noted that Kelly was particularly engaged in her learning and was good at providing information to the teacher. Lisa can often look to Kelly to see if she understood the content and take this as indicative of the class as a whole. Therefore, Lisa was not only using information from Kelly to provide feedback to an individual student; she was also using Kelly to get feedback on the effectiveness of her instruction. Lisa expanded on her ideas about Kelly: So it's like, "Okay, everybody's kind of not getting it. Kelly, do you get it? Yes or no? Okay, then I need to…" She's one, and she may not be the smartest in my class, but she's one that pays attention and wants to learn. So with her as my little feeder into the rest of the class, it's like, "Okay, am I getting this across? Do you understand it? Was that clear?" She does the little verbal or nonverbal signs where it's like, "Okay, that was clear to everybody." Because she understands how she's been around several of the other students a lot where she's like, "Okay, they get it, or they're just not paying attention. You were fine, you know. You got it across." And it took me a while to figure out who that was in my class, you know. First semester, it was someone different in the same class. Then second semester, she's the one that really stood out as, you know, "Yeah, I get it." A lot of head shaking or she'd be like, "Oh, okay." [Interview #2, Lisa, May 2013] In this context, the individual key students provided feedback to Lisa that helped her be more aware of students' particular ideas and the extent to which they understood the content. Similarly, Lisa had a type of 'rule of thumb' to gauge when to make a public announcement to the class or to make changes in her instruction. She explained: "Two of my 30 kids,…one of them is probably a B student. One is an A student…if two of my students are having the same problem, there's more out there that have the same problem" [Interview #2, Lisa, May 2013]. Contribution of this moment to understanding Lisa's activity system. This moment illustrates how Lisa collected information from her students. She developed a system to determine the pace of her lessons and some procedures to regulate her instruction, based on the cues that her key students provided. Differently from Diane, who assumed that if her advanced students did not understand she had to make an instructional decision, Lisa integrated the information that she noticed from the key students, who spanned a range of levels of learning. Along with Lisa's emphasis on recognizing and identifying students' ideas, some connections with the object of the activity system can be made.
Lisa believes that a main goal of her chemistry teaching is to make students passionate about chemistry so that they can see its connections with real life. She explained, "their grade doesn't reflect that but they get it and they ask questions and they do research on their own and they come in with… 'look at this article' and, 'look at this – it's chemistry. It's what we were talking about'" [Interview #1, Lisa, May 2013]. Lisa wants to explore the diversity of students' responses to help students make connections with daily life. Hence, in the activity system, Lisa's object is to improve chemistry instruction by making student thinking visible, working with students' ideas, and making chemistry applicable to real life. Similarly, Lisa was making changes to accommodate the rules of the activity system in pursuit of this object. In the second year that I observed her, she started to delay grading during her units in order to provide different types of support to the students before the administration of summative assessments. I also observed a few episodes in which Lisa promoted students' participation. For example, in one lesson she had students explain the solution to a problem. She said: "so someone raise the hand and walk me through this one" [Lesson # 5, Lisa, May 2013], and one student volunteered. Lisa was making some attempts to accommodate aspects of her activity system's division of labor by asking for more participation and allowing students more space to take ownership of their own learning. Activity Systems for Diane and Lisa The enactment of formative assessment as a tool in Diane's and Lisa's activity systems certainly influenced their chemistry instruction. Both teachers recognized a positive impact from the enactment of formative assessment, although they pointed to different outcomes. In this chapter, I presented evidence of each teacher's instructional and formative assessment practices and made connections with the different components of the activity systems. I will present two models that account for the ways these teachers organized their instruction when they enacted formative assessment as a tool. I will also discuss the outcomes that emerge from each activity system. Diane's Activity System Figure 9 represents the activity system that describes Diane's chemistry instruction. Diane used formative assessment tools with the object of improving her instruction. However, her main motivation for using formative assessment as a tool was to determine whether the students knew the knowledge and procedures related to her instruction. Figure 9. Representation of Diane's activity system that occurred when she enacted formative assessment as a mediating tool to improve her chemistry instruction.
In other words, the process of using a tool (i.e., formative assessment) to mediate her chemistry instruction occurred in terms of verifying students' work. Formative assessment, as a tool, would be successful to the extent that it made her more aware of her own instruction. In describing her learning about and enactment of formative assessment, Diane commented on her perception that students were performing better in terms of grades. She attributed this to the fact that enacting formative assessment helped her be more aware of her teaching and more alert to students' misconceptions during her instruction, so she could support her students more immediately. She explained: So I do think that you get the kids a little bit more invested because they know you're more invested. They can tell if you're connected to their failure or their success, and they can tell that when I'm walking around, when I'm over their shoulder, you know, they don't particularly care for it, or you know what I'm saying, can I make them do better? I expect more. So I think it's just more of a connection with the kids. That formative assessment kind of… I hate to say forces you to make… because you should make that anyway, but when you're very comfortable in your content, and you move from behind the desk, and you're actually out with the kids, to see where they are, to assess them individually. [Interview #1, Diane, May 2013] So for Diane, learning about formative assessment practice mainly helped her recognize that she needed to be more involved in her teaching and in monitoring students' work. The outcome of her activity system can be understood as the intensification of her (traditional) instructional practices. This may explain the fact that, although Diane recognized an influence of formative assessment on her instruction, she did not make significant changes in the rules, the division of labor, or her perception of her students (subject). This may also explain why Diane continued her extensive grading practices. Moreover, it is paradoxical that the enactment of formative assessment—in the case of Diane—did not produce changes in the ways her students were involved in her instruction. Therefore, rather than producing instructional change, Diane's enactment of formative assessment allowed her to keep the status quo, with the ultimate outcome of intensifying practices that maintained control and enhanced monitoring. Lisa's Activity System Figure 10 represents the components of the activity system for Lisa. Enacting formative assessment produced some slight modifications in the ways she organized her chemistry instruction. As a means of improving her practice, formative assessment—the tool—matched Lisa's expectations of becoming more insightful about the variety of her students' ideas and of helping students better regulate their learning (Allal, 2010). Lisa's enactment of formative assessment as a tool allowed her to develop a new mindset for her instruction. She prioritized discovering students' ideas as input for instructional decisions. This use of formative assessment meant that she started making adjustments to her activity system. Even though Lisa's instruction continued to be traditional, she attempted slight changes in her activity system in order to be more aware of students' thinking and to include students' ideas in her instructional practice.
She has also developed a mindset more favorable to the use of formative assessment in her classroom, which may be considered a first step toward a change in practice. When enacting the exit slips, a piece of formative assessment, Lisa had to give more responsibility to her students. As a result, they were able to communicate their ideas, provide feedback to her, identify their problematic ideas, and contribute to instructional decisions as part of their involvement. This is consistent with Lisa's perception of her enactment of formative assessment. I think I've slowed down a lot more and I've really tried to get the kids interested in the subject matter, not just to get them through the class. It's almost like I want them to be an active part of my classroom instead of just a warm body with a grade attached to them. …. I think formative assessment is going to help strengthen my teaching with chemistry because I'm going to understand student learning a lot more – how they learn; how deep of an understanding they get out of the material – just your different levels of your kids. And they're all not going to learn the same and they're all not going to be A-students but I want them all to come out of my classroom with some knowledge of chemistry, and I think with formative assessment it should be easier to do that instead of looking at my computer screen and looking at their grades all the time. [Interview #1, Lisa, May 2013] Considering the organization of the activity system in Lisa's classroom and how she used formative assessment as a tool that mediated the improvement of her chemistry instruction, the outcome of Lisa's activity system was the development of a different mindset about the importance of formative assessment for learning about students' ideas, along with attempts to include some student-centered practices that promoted students' ideas and gave students greater responsibility for their own learning. Therefore, Lisa plans to reflect more on her practice and make decisions that promote better student understanding. She said that she needs to reflect on better ways to "come up with different ways to teach the concepts that are not math based," because she is thinking of not grading "students as much on performance but on understanding" [Interview #2, Lisa, May 2013]. Figure 10. Representation of Lisa's activity system that occurred when she enacted formative assessment as a mediating tool to improve her chemistry instruction. Tensions and Challenges in Activity Systems One of the main features of CHAT is the opportunity to visualize tensions and contradictions within the activity system.
Human activity can provoke tensions that reflect systemic contradictions (Engeström, 1987), especially when the configuration of the activity system puts the subject(s) in situations that interfere with or hinder the accomplishment of the activity system's object or the participation of the subject in the activity (Yamagata-Lynch, 2010). Contradictions can be characterized as unknowns, barriers to achieving the object, or conflicts between components (Nardi, 1993). I will explore some tensions that emerged when the two teachers attempted to embed formative assessment in classroom instruction. For Diane, the main tensions emerged when she started enacting formative assessment tools that were not aligned with the object of her activity system. The exit slips created in the professional development had the purpose of enabling student participation in order to elicit their ideas and understandings. However, that contrasted with Diane's expectations of student participation, because she was more interested in getting the right responses from students—factual knowledge or procedures for solving problems. Two related tensions came up in Diane's activity system: one between the subjects and the object, and a second between the object and the division of labor. These tensions resulted from the fact that Diane did not provide agency to her students to participate in the activity. Figure 11 presents the main tensions in Diane's and Lisa's activity systems based on the enactment of formative assessment as a mediating tool to improve chemistry instruction. Figure 11. Characterization of the main tensions that arose in teachers' activity systems. For Diane (top), the enactment of formative assessment produced tensions between the tool and the object, the subject and the object, and the object and the division of labor. For Lisa (bottom), the main tension occurs between the object of the activity and the community.
Regarding the subject of the activity (the students in this case), Diane believed that many of them were not interested in learning science and that they were in her courses only because they needed them for college (i.e., for college admission tests or for college courses). Similarly, she also explained that some of these students were prone to cheat or do minimal work in order to get the highest grades. This is one of the reasons why the enactment of formative assessment contradicts the role that Diane expects of her students in her classes. Students are expected to be supported by Diane's continuous monitoring and control, and she does not expect that students will (or can) analyze their ideas and monitor their own work. Thus, tension grew between the implementation of formative assessment and the division of labor. In simpler words, why would students take more responsibility for their own learning when their teacher does it for them? These tensions were exacerbated by the type of goals that Diane holds for her students. She is interested in her students knowing the procedures to solve problems and making connections between chemistry and factual content. Strictly speaking, these contradictions not only pose the question of the meaning of learning chemistry in depth in Diane's classrooms, but they also pose the question of what chemistry learning is socially valued (i.e., the community component of the activity system), because much of the current curriculum and high-stakes assessments are focused on the most basic types of knowledge. Those components, along with Diane's extensive use of grades for controlling and managing student work, configure a system in which the enactment of formative assessment to promote students' agency and self-regulation does not fit and, according to Diane, is not going to work for her in that particular high school. For Lisa, the tensions that emerged were different. Lisa made efforts to enact formative assessment in her classroom to promote the elicitation of students' ideas and students' self-assessment. In this process, she made adjustments in some components of the activity system, such as the rules and the division of labor, to enable students' engagement. However, the main contradiction that emerged in Lisa's classroom was between the object of her instruction and the values of the community. Lisa's object was the improvement of instruction to help students make connections between the chemistry content and their current lives so that students could feel engaged and motivated to learn chemistry. However, this contrasted with the expectations of parents and students, who are members of a grade-driven culture and tend to associate learning in the subject with the grades they get. Lisa even recognized this as an internal tension, because her own chemistry education emphasized the value of grading and numbers, even as she was trying to adopt a different disposition in her chemistry teaching. She also mentioned on several occasions the importance of involving the parents and the rest of the school in recognizing the benefits of formative assessment for students. For example, Lisa noted that the efforts to implement formative assessment in the school required coordination at the school level, and she recognized the need to comply with rules about grading even though she would like to reduce grading.
Thus, the next chapter has the purpose of synthesizing the main conclusions of the study, focusing on the nature of the formative assessment professional development, analyzing teachers' enactment of formative assessment, and discussing the connections between these two systems. Finally, I will discuss some implications of this research study for research on professional development, science instructional practices, and formative assessment.

CHAPTER 6: DISCUSSION AND IMPLICATIONS

In this chapter I will discuss the findings of this case study in order to respond to the research questions. I will also examine implications for research and practice related to professional development, formative assessment, and science instruction.

Chapters 4 and 5 presented the main findings of this study. In Chapter 4, I described Lisa's and Diane's participation in the FAME learning team. I presented evidence of how participation in the learning team facilitated both teachers' understanding of formative assessment, especially in relationship to their grading practices. The teachers also engaged in the creation of a formative assessment tool to be implemented in the classroom with the purpose of gathering information about students' ideas and helping students be more reflective about their learning. I discussed how this formative assessment tool was the initial stage in the development of a boundary object (Star & Griesemer, 1989) that encouraged participation in the learning team and facilitated teachers' attempts to try something new in their classrooms. In Chapter 5, I analyzed the enactment of formative assessment in each teacher's classroom in order to develop activity systems (Engeström, 1999). Drawing on CHAT (cultural-historical activity theory), I explained how the implementation of formative assessment as a mediating tool influenced the configuration of each teacher's activity system in order to accomplish a particular object (enhancing chemistry instruction). In that chapter, I presented evidence to illustrate how the tool developed in the learning team influenced Diane's and Lisa's different perspectives in the enactment of formative assessment. In particular, the teachers had different ways of conceiving the object of the activity and, based on the interplay of activity system components, a different outcome was produced, defined in terms of each teacher's chemistry practice.

In this chapter, my purpose is to link teachers' participation in the professional development to their classroom activity systems. As explained in Chapter 3, this study was designed as a single embedded case study with two sub-units of analysis (Yin, 2009). This case study encompasses the experiences of two chemistry teachers who learned about formative assessment in a team-based professional development model and who worked to embed formative assessment in their classrooms. The two embedded units of analysis correspond, respectively, to the classroom enactment of formative assessment by Diane and by Lisa. Accordingly, my purpose is to integrate and connect the findings from each unit of analysis with the context in which the teachers worked and learned about formative assessment. This broader configuration of the case study illustrates the extent to which the professional development informed classroom practice and the factors that mediated classroom implementation (Patton, 2002).
This study has one general and overarching research question and three sub-questions that serve to organize this chapter. I will focus this chapter on responding to the overarching research question posed for this study: "How does participating in a team-based professional development influence two chemistry teachers' enactment of formative assessment classroom practices?" as well as the three sub-questions: 1) How do these two chemistry teachers engage in a team-based professional development about formative assessment? 2) How do these two chemistry teachers enact formative assessment in their classrooms? 3) What tensions emerge when these two teachers learn about and enact formative assessment practices? Following this, I will describe implications of the study for the FAME professional development program, for research and practice on professional development, and for research on formative assessment.

Teachers' Engagement in FAME Professional Development

Throughout the professional development year, the learning team met five times in order to discuss formative assessment and talk about how this process looked in the classroom. Team members also spent time sharing their classroom experiences with assessment. They created artifacts (such as exit slips) to gather information from students in relation to learning targets. To be clear, the evidence from the learning team meetings showed that the teachers committed to a common endeavor, spent most of their meeting time working on formative-assessment-related topics, and made efforts to use their new learning about formative assessment in classroom practice.

The learning team was composed of teachers from the same school who volunteered and committed to meet regularly. The team leader organized meaningful discussions and was responsive to the questions and concerns that the learning team members raised. Moreover, the team members taught the same group of students, knew each other (because they had worked together in the past), and were motivated to work together, grow professionally, and learn about formative assessment. In sum, the team had favorable conditions for conducting its work. Although the team made some progress during the first year, they did not continue meeting as a second-year team to further their learning about formative assessment. Sustainability is a key factor for functional professional learning communities (Richmond & Manokore, 2011), and its absence limited opportunities for teachers to move forward with their team-based learning about formative assessment. In order to have an impact on practice, school-based professional development models need to be sustained so as to provide time for meeting and attempting new practices (e.g., Grossman & Woolworth, 2001; Thomas et al., 1998; Wylie & Lyon, 2009), and professional development efforts need to be supported by school and district administrators (e.g., Richmond & Manokore, 2011; Wylie et al., 2009). Prior studies have found that professional development programs with between 5 and 14 contact hours per year had no statistically significant impact on student learning, while the largest impact came from professional development ranging from 30 to 100 hours over six to twelve months (Darling-Hammond et al., 2009).
The FAME model recommended a minimum three-year commitment by the learning teams to allow for an increased and deeper understanding of the complexities of enacting formative assessment, as well as to provide impetus for making changes in practice (Michigan Department of Education, 2011). However, this learning team did not get the support from their school and district that would have allowed them to continue working together and become a sustainable team. Thus, the evidence of these teachers' learning has to be considered with caution. In this context, making claims about the impact of the FAME learning team on teacher learning of formative assessment may be too ambitious.

The content covered during the first year was insufficient to ensure a deeper understanding of the formative assessment process, especially for supporting the enactment of practice. As mentioned in the Introduction chapter, the FAME professional development model presented formative assessment as structured through eight interrelated components that, as a whole, contribute to a deeper understanding of this process. However, the analysis of the content covered in the meetings showed that the learning team addressed only some components of formative assessment. For instance, the team did not talk about planning formative assessment (i.e., embedding formative assessment in instructional planning), and little time was spent on formative assessment strategies that support the use of tools to gather students' ideas. It is important to note that this scattered approach to covering the formative assessment components in FAME does not rest solely on the learning team's shoulders. Rather, it is related to the fact that FAME leaves learning teams autonomous in organizing their meeting agendas and priorities. These decisions were usually based on the team leader's priorities, team members' expectations, and school and district needs.

Despite this imperfect scenario, the analysis showed that for the two teachers of this case study, Diane and Lisa, the learning team experience appeared to contribute to their learning of formative assessment and even influenced some of their beliefs and practices. Both teachers mentioned that participation in the learning team provided a space to have discussions about formative assessment and to get ideas and suggestions from their colleagues (this was especially true for Lisa). Discussions also helped the teachers reflect on the importance of using formative assessment in instruction and on how to embed formative assessment in their practice, despite the strong grading-oriented culture of the school. Both teachers perceived that the learning team enriched their mindset about instruction and classroom assessment. This contribution, however, was more important for Lisa. She recognized the importance of the learning team in refining her ideas about formative assessment in order to support students, especially in identifying their scientific ideas. The evidence presented in the previous chapter showed that Lisa also felt more impetus to enact formative assessment in her classroom.

Learning Teams and Classroom Practice

Several studies (e.g., Desimone, 2009; Wei et al., 2010; Schneider & Randel, 2009) have found that effective professional development models should structure activities to be close to the context of teaching and should promote teachers' engagement in activities that allow them to share, discuss, and reflect on their classroom experiences.
Kazemi and Hubbard (2008) also called for better integration of what teachers do in professional development and what they enact in the classroom. These studies suggest that the closer the professional development is to the classroom, the easier it is to make instructional changes. However, the enactment of classroom practices is complex even when professional development conditions seem favorable. The experience of these two chemistry teachers shows that participating in a school- and team-based professional development model was not enough to produce substantive changes in formative assessment practice.

The case of Diane is illustrative. In the learning team meetings she demonstrated an understanding of the implications of using grades to extensively monitor and regulate student work, and she actively participated in the creation of a formative assessment tool to be enacted in the classroom. Diane participated in a community of learners that created artifacts reflecting their goals and motivations (Wenger, 1998). However, in the classroom setting Diane persisted in her traditional grading practices, did not provide opportunities for the students to analyze and communicate their ideas, and actually obstructed the exit slip's intended use in her class. The evidence from her instruction and interviews showed that her appropriation of the formative assessment tools created and discussed in the learning team was superficial. For Diane, the connection between professional development and classroom practice was more oblique, especially in the ways she understood and enacted formative assessment. Diane identified some features of what formative assessment looks like, but her classroom enactment did not reflect its purpose.

Lisa's experience illustrates that the learning team was more helpful in supporting her classroom enactment of formative assessment. She was able to use some formative assessment practices in connection with what she learned in the learning team. Lisa found a niche in her instruction for the formative assessment tool and was successful in including these exit slips in her classroom practice. In using this tool, she attempted to help students communicate and share their chemistry ideas. She also began trying small instructional changes, such as moving from grading every day to delaying grading until the end of an instructional unit. She also tried to be more student-centered by eliciting and using students' ideas. Even though this progress occurred in small increments, Lisa demonstrated alignment among the ideas about formative assessment discussed in the learning team, her mindset about instruction, and the practices she was trying to enact in her classroom. She appropriated conceptual understandings of the formative assessment tools, although she was not able to fully utilize the conceptual underpinnings of formative assessment to enact them accordingly.

In sum, I cannot argue that participating in FAME as professional development substantively impacted teacher learning of formative assessment and classroom practices, partially because formative assessment was enacted within the teachers' traditional chemistry instructional patterns and curriculum (e.g., Thomas & McRobbie, 2001) and because the learning team experience was not sustained. Research has posited (e.g., Black et al., 2004; Bennett, 2011; Borko, 2004) that changing practice is a slow and intricate process.
In this study, both of these teachers, even though they participated in the same professional development and taught in a similar context, were influenced in different ways. Some pieces of Diane's and Lisa's classroom practice can be connected to the learning team activities, and some learning can be inferred, but these attempts do not necessarily imply an overall shift in classroom assessment practices. In fact, the teachers' use of formative assessment tools was incipient and did not imply a complete understanding of their potential or purpose (especially for Diane).

Making Connections at Early Stages of Professional Development

Despite the limited contribution of this professional development to substantive changes in classroom practice, this study shows how the creation and development of artifacts of practice at the early stages of professional development can serve to start making connections between teachers' learning experiences in the professional development setting and their current practices in the classroom. In the early stages of the professional development, the use of these tools became the first attempt in the development of boundary objects (Star & Griesemer, 1989) that can navigate between different settings. In this study, the teachers took the exit slips they created in the learning team to their respective classrooms, where they were used with a formative assessment purpose (particularly by Lisa) and served as an opportunity to experiment and reflect on classroom practice. Due to the limited duration of the professional development, the connections were incipient, and teachers had few opportunities to discuss the enactment of these tools in team meetings, discussions that could have provided room for improvement in practice.

Teacher interviews and classroom observations evidenced that the enactment of the exit slips was, for both teachers, the instructional moment that provided the most explicit connections with the professional development. Diane used the exit slips for several months and then gave up, because she did not find a match with her traditional practices and was not willing (or able) to make adjustments in her activity system to provide room for this tool. Lisa reported that she was able to embed the exit slips in her practice and made some changes in her classroom activity system to facilitate their use.

Kazemi and Hubbard (2008) suggested that the use of depictions and artifacts of practice is helpful for the design of professional development activities as well as for studying the effects of professional development models. These artifacts allowed me to trace the learning trajectories of teachers in both settings (i.e., the learning team meetings and their classrooms). Hence, had the learning team in which Diane and Lisa participated lasted more than one year, we might hypothesize that there would have been more (and perhaps better) opportunities to analyze and reflect on the enactment of artifacts (e.g., the exit slips). This, in turn, might have helped teachers enact these tools more successfully, especially teachers whose classroom activity systems were strongly influenced by external factors that created tension for the implementation of formative assessment (e.g., Diane and her grading practices).
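Because the exit slips were the clearest bridge between the learning team and the classrooms, it is worth noting how little machinery their formative use requires. The sketch below is hypothetical rather than drawn from the study's data: it imagines the kind of simple tally a teacher like Lisa might make of color-coded exit slips (students chose red, yellow, or green to self-assess, as described later in this chapter) in order to decide which ideas to revisit the next day.

from collections import Counter

# Hypothetical exit-slip records: each student picks a color to self-assess
# (green = "I've got it", yellow = "almost", red = "I'm stuck") and names
# the idea they struggled with. The data are invented for illustration.
exit_slips = [
    {"color": "green", "struggled_with": None},
    {"color": "yellow", "struggled_with": "Hess's law sign conventions"},
    {"color": "red", "struggled_with": "endothermic vs. exothermic"},
    {"color": "yellow", "struggled_with": "endothermic vs. exothermic"},
    {"color": "red", "struggled_with": "Hess's law sign conventions"},
]

def plan_next_lesson(slips, reteach_threshold=0.3):
    """Summarize the self-assessments and flag ideas to revisit tomorrow."""
    colors = Counter(slip["color"] for slip in slips)
    struggles = Counter(
        slip["struggled_with"] for slip in slips if slip["struggled_with"]
    )
    share_needing_help = (colors["red"] + colors["yellow"]) / len(slips)
    to_revisit = [idea for idea, n in struggles.most_common()
                  if n / len(slips) >= reteach_threshold]
    return colors, share_needing_help, to_revisit

colors, share_needing_help, to_revisit = plan_next_lesson(exit_slips)
print(f"Self-assessment tally: {dict(colors)}")
print(f"Share of students signaling difficulty: {share_needing_help:.0%}")
print(f"Ideas to revisit: {to_revisit or 'none'}")

The point is not the code but the routine it encodes: the formative value of the artifact lies in the decision rule that turns students' self-assessments into the next day's instruction, which is precisely the step Diane's enactment omitted.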
Activity Systems and the Enactment of Formative Assessment

Despite teaching in the same context and participating in the same professional development program, the analysis of the activity systems in which Diane and Lisa enacted formative assessment as a mediating tool to enhance chemistry instruction evidenced different configurations that can be used to explain how they organized their instruction. A central concern of activity systems analysis is to understand the forces and motivations of individuals and the types of tools that individuals use to mediate the accomplishment of the object of the activity (Grossman et al., 1999). In this case, both teachers used formative assessment as a mediating tool to enhance their chemistry teaching. However, the activity system's tool and object were interpreted differently based on the ways in which Lisa's and Diane's activity systems were established prior to participation in the FAME professional development.

Diane appropriated the tool of formative assessment (writ large, with the exit slip as an example of this tool) for the purpose of verifying whether the students were following her explanations, and especially to check whether the students were able to provide the correct response. Diane's approach to the tool was consistent with the concept of convergent assessment (Torrance & Pryor, 2001), which aims at determining "…if the students know, understand, and can do a predetermined thing. It is characterized by detailed planning and generally accomplished by closed or pseudo-open questioning and tasks" (pp. 616-617). In the previous chapter, I presented evidence that showed how Diane focused her instruction and assessment on checking whether her students were completing procedures to solve problems (which basically implied using mathematical knowledge) or whether her students were able to provide the right response, mainly related to factual knowledge. According to Diane, her instructional goals were that her students learn chemistry and make connections among the main pieces of (factual) knowledge, especially to prepare themselves for college admission tests and college.

For Lisa, formative assessment served as a tool that mediated her instruction to promote better understanding of students' ideas. In her mindset, Lisa showed an interest in using formative assessment to capture the diversity of ideas from her students, to find out what her particular students knew, and to use these ideas to guide her instruction. On some occasions (e.g., when using the exit slips), Lisa reported using formative assessment in alignment with the purpose of having students' ideas guide her instruction. Lisa made attempts to use formative assessment to pay more attention to students' ideas and to help students be more aware of their own ideas. She focused on students self-assessing their learning, particularly on ideas with which they struggled. Even though her instructional and assessment practices were very traditional and her use of formative assessment contained numerous episodes of convergent assessment, Lisa attempted to incorporate into her practice elements of divergent assessment, which aims at discovering "what the learner knows, understands and can do" (Torrance & Pryor, 2001, p. 617).
From the perspective of divergent assessment, Lisa initiated some efforts to attempt new practices, for instance, involving students in evaluating their own learning (i.e., choosing red, yellow, or green exit slips) or focusing her instruction on aspects of the learner's work.

These differences in the way Diane and Lisa used the set of formative assessment tools implied making adjustments in their activity systems. Lisa, especially, was able to modify her activity system in terms of the rules and division of labor to accommodate formative assessment as a new tool. For example, Lisa changed her rules about grading practices to provide more opportunities for students' reflection and gave more room for eliciting and sharing students' different ideas. In contrast, the introduction of a new tool (i.e., formative assessment) did not provide impetus for Diane to modify her activity system. These differences also implied that the outcome of each activity system was different. Diane's instruction was focused on getting students' right responses and, accordingly, all the components of her activity system were consistently articulated with this purpose. As a result, Diane's outcome implied the intensification of her traditional practices in order to monitor student work; in other words, she modified her practice to make it more intense and frequent. By contrast, Lisa's outcome consisted of baby steps toward promoting the elicitation of students' ideas and a more student-centered mindset consistent with the theory behind formative assessment. Therefore, the analysis of the activity systems shows that, despite being in the same professional development experience and school, the teachers had different outcomes in the enactment of formative assessment.

The characterization of Diane's and Lisa's activity systems suggests that the teachers appropriated the same set of formative assessment resources and artifacts differently (for similar examples, see Grossman et al., 1999). The evidence from the learning team meetings showed that the level of agreement between Diane and Lisa in their understanding of formative assessment was higher than the level of agreement in the ways both teachers appropriated this practice in their respective classrooms. As Torrance and Pryor (1998, 2001) concluded, implementing formative assessment in the classroom is a complex process. The simple use of formative assessment in the classroom does not necessarily mean that it will have an impact on student learning and engagement (Buck & Trauth-Nare, 2009). It might, as in Diane's case, be used at a superficial level that merely serves to intensify traditional instructional and assessment practices, or "deliberately" be misused to reinforce existing stances. Or it might, as in Lisa's case, fuel efforts to enhance student engagement in an environment that sends contradictory messages about which aspects of chemistry learning are important. Change in practice is related to the need to balance the multiple constraints of different stakeholders that teachers intend to satisfy (Cobb et al., 2003); for example, multiple demands create tensions as teachers balance practices connected with formative assessment against grading and summative assessment.

Tensions in the Enactment of Formative Assessment

The analyses of the activity systems for Diane and Lisa show that classroom enactment of formative assessment is a process mediated by different factors that interact and produce particular types of outcomes.
Within activity systems, the interplay of components causes tensions that may influence individuals' actions toward particular outcomes. From an activity theory standpoint, these tensions emerge because they are the result of systemic contradictions (Engeström, 1993), which are inherent in the components of the activity system but also manifest themselves at a contextual and societal level (Engeström, 1987, 1993). This study shows that the outcomes of both teachers' activity systems reflected different tensions in the process of formative assessment enactment.

For Diane, using formative assessment to improve her classroom instruction implied a tension between the purpose of the formative assessment tools that were created in the learning team and her expectations of students' learning chemistry. To be clear, the evidence from Diane's lessons and interviews showed a focus on verification of students' ideas and convergent assessment (Torrance & Pryor, 2001), mainly related to factual knowledge of chemistry as well as procedures to solve exercises and problems using mathematical knowledge. Diane's instructional focus was related to her understanding that students learn chemistry in order to perform successfully in college chemistry courses and on high-stakes chemistry assessments, but not necessarily to understand core chemistry ideas. In that context, the enactment of formative assessment as a process that aims to collect evidence of students' scientific ideas and to help students be more reflective about their learning is not aligned with her expectations of student learning. In terms of activity systems, this tension is evident in different components of the model. First, a tension emerged between the activity system's tool and object, because the set of formative assessment practices did not fit Diane's understanding of good instruction. Second, there was a tension between the activity system's subjects (the students) and the object, because student engagement was not substantively encouraged and student work was promoted through the use of grading. Third, a tension between the activity system's object and division of labor emerged because formative assessment was not used to promote student self-reflection and cognitive participation; Diane understood formative assessment as a way to intensify her teaching, not as a way to promote students' active participation. As a result, the enactment of formative assessment as a set of instructional tools made these initial tensions grow.

Lisa's experiences in enacting formative assessment showed a different understanding of formative assessment in relation to her expectations of students learning chemistry. Lisa's mindset reflects a clear focus on student-centered instruction (although this is only partially evidenced in her lessons) that promotes students showing their ideas and being reflective about what they are learning. In that sense, the enactment of formative assessment practices implied that Lisa started to make adjustments in some components of the activity system, such as the rules and division of labor, in order to promote student participation and involvement. Moreover, this is related to Lisa's understanding of chemistry instruction to the extent that she expects that students, besides being successful in college and on high-stakes tests, can make connections between chemistry content and their lives.
That implies a different conception of the expectations for students' chemistry learning and may explain why the enactment of formative assessment practices was more favorable for Lisa than for Diane. Thus, the main tension that emerged in Lisa's activity system was between the object and the community. The adjustments Lisa started to make in her instruction when enacting formative assessment implied delaying grading and creating an environment to promote students' participation. She mentioned that these changes would cause tension with the expectations of students, parents, and other members of the school community about grades. Lisa also recognized that the majority of her students were influenced by cultural expectations of learning science that conceive of learning from a grade- and number-driven perspective. Lisa's tension illustrates that classrooms are embedded within a particular context that influences how an activity system will be configured. In this example, the context was a suburban high school where a series of dispositions and values configure the habitus (Bourdieu, 1990) in which teachers work. Formative assessment is subject not only to the sociocultural aspects of the classroom (e.g., Black & Wiliam, 1998b; Pryor & Crossouard, 2008; Shepard, 2000; Torrance & Pryor, 1998) but also to the broader sociocultural systems that promote student learning (Black & Wiliam, 2005b; Harris & Brown, 2009).

Implications of the Study

The findings of this case study helped identify some challenges and implications related to the main areas of interest for this study. First, I will describe implications for the FAME professional development model. Second, I will expand on the implications for professional development. Third, I will outline some implications for research on formative assessment, with an emphasis on science.

Implications for FAME Professional Development. This case study was conducted with only one learning team, which worked together for one year but did not persist as expected. Therefore, the findings of the study are not necessarily applicable to the context of other teachers (learning team members), other learning teams, or the design and implementation of the program. Similarly, research on the impact of FAME as professional development on teacher practice shows that learning teams exhibit different patterns in their meeting activities, the formative assessment topics they cover, and the depth of their discussions (Gotwals et al., in preparation). However, the particular experiences of Diane and Lisa in this professional development program may illuminate some issues that can inform FAME design and implementation.

The evidence from the learning team meetings showed that in the first year of the team, Diane and Lisa discussed some components of formative assessment. However, they did not discuss others that were equally important, for example, that the use of formative assessment tools needs to be planned and embedded within instruction and linked to a meaningful use of learning targets. In other words, the formative assessment curriculum for this team was scattered and not clearly linked to current research on the topic. Even though the team leader planned activities that responded to the learning team members' concerns, she did not use a conceptual model of formative assessment grounded in research. The formative assessment content that she used in the meetings was based on her own knowledge about formative assessment.
Grossman et al. (1999) warned about using tools for promoting teacher learning without grounding them in theory. They posited that if a tool is presented without its conceptual foundations, teachers "may appropriate only what is available, that is, the label and surface features" (p. 19). Learning about formative assessment in this type of team requires guidelines about the research-based components of formative assessment that need to be addressed in team meetings. That does not mean every FAME learning team has to do the same types of activities and organize their discussions in the same way, but learning teams (and especially team leaders) require a deeper understanding of the formative assessment process as well as suggestions for organizing the sequence of formative assessment topics, guiding team discussions, and finding research-based materials and resources.

Diane and Lisa's learning team worked together for only one year. For various reasons, some FAME learning teams are not able to persist over time. However, these teams require opportunities to learn the basic underpinnings of formative assessment and need a common knowledge base. The promotion of formative assessment is not trivial. From a research-based perspective, well-implemented formative assessment efforts may impact student learning (e.g., Black & Wiliam, 1998; Ruiz-Primo & Furtak, 2006), and well-sustained professional development can make strides in teachers' learning about formative assessment knowledge and practice (e.g., Black et al., 2004; Sato et al., 2005; Torrance & Pryor, 1998; Webb & Jones, 2009). However, when learning about formative assessment is scattered or not valued, formative assessment may be seen as unnecessary for promoting student learning and teacher practice or, still worse, become a label, just "the new buzzword," as Diane mentioned in one interview.

Along with a research-based curriculum in formative assessment, the findings of this study showed the importance of learning team meetings for making connections with classroom practice. In order to promote these connections, team facilitators and team members can be supported with guidelines to depict, create, and enact artifacts in the classroom. Such artifacts may give team members opportunities to analyze and discuss their enactment; they may also provide opportunities for gathering evidence of students' understanding and promote reflection on students' ideas and learning. In this study, the use of the exit slips was, for both teachers, the piece of classroom practice that was most connected with the learning team activities. Thus, teachers learning in a professional development program such as FAME need opportunities to reflect on the effects of creating formative assessment artifacts grounded in research-based theory, especially when teachers are inserted in a culture where the influence of grades and high-stakes assessment is predominant. Moreover, teachers need opportunities to try out formative assessment tools in their classrooms and reflect on how these tools function in real classroom settings.

Implications for Team-Based Professional Development Models

Diane's and Lisa's experiences suggest that the enactment of new instructional practices is complex and influenced by different factors. This may occur partially because teachers are professionals acting in multiple activity settings (Wertsch, 1985), such as classrooms and professional teams.
Teachers have multiple factors mediating their instruction, and not all factors can be attended to at any given time. Even when teachers are participating in a professional development model that is school-based and close to practice, their classrooms are different spaces (activity systems that are organized in particular ways). Therefore, learning about practice requires opportunities for teachers to enact the set of tools that they are bringing, creating, or analyzing in the professional development, as well as opportunities to discuss the multiple factors that mediate their use in the classroom.

In terms of CHAT, professional development settings and classrooms are different activity systems configured to accomplish particular goals. Therefore, resources for teachers in the enactment of practice cannot be generic; they need to be related to a particular activity system. That means the design of professional development should include opportunities to analyze the enactment of formative assessment in teachers' own classroom activity systems. It is key for professional developers to design activities that help teachers reflect on the implications of enacting new tools in terms of activity systems. For example, teachers can anticipate the implications of enacting a tool in terms of the classroom rules and division of labor. Professional development models require opportunities to visualize possible scenarios that include the enactment of new tools. Reflecting on the consequences of enacting new mediating tools in the activity system would imply that teachers may discuss topics such as 1) defining the object of their instruction, 2) modifying the rules that structure the activity, or 3) understanding how the values of the community shape classroom instruction. Moreover, if teachers enact a set of mediating tools, they need to appropriate the conceptual foundations of the tools as well as the skills to use them effectively in the classroom (Grossman et al., 1999).

Implications for Research on Professional Development

This case study showed that the process of enacting practice is slow and requires time and effort. Diane and Lisa reported that they felt more confident with their instruction and recognized some contribution of the professional development one year after the learning team experience. In terms of research on the effectiveness of professional development, this suggests that determining the impact of professional development is a long-term endeavor. Kazemi and Hubbard (2008) noted that research on professional development needs to pay attention to the co-evolution of teachers' work in the professional development and in the classroom in order to trace a learning trajectory. That also means that researchers need to consider long-term outcomes when determining the contribution of professional development. In those efforts, the call for long-term and longitudinal studies is paramount. Similarly, an important component of research on the effectiveness of professional development is identifying the key aspects of practice that are fundamental for making changes. In that sense, the use of activity system analysis might be a generative framework for identifying how teachers use what they learned in professional development, in a concrete setting and in action. That would help researchers and professional developers identify how teachers appropriate pedagogical tools.
Implications for Research on Formative Assessment

This case study took place in chemistry classes in a suburban high school. In this particular context, the study showed that Diane's and Lisa's classrooms were highly influenced by the cultural context related to expectations about high school chemistry. Both teachers mentioned that the students, school staff, and parents had high expectations for the students, manifested in getting high grades and going to college. In addition, the influence of district assessments was an important factor guiding teachers' instructional practices. Summative assessment may thus be considered an exacerbated component of the community and the culture that, from the CHAT perspective, influences classroom instruction. Research has documented numerous experiences in which tensions between formative assessment and the predominant culture of grading emerge (e.g., Brown, Lake, & Matters, 2011; Gioka, 2009; Harlen, 2005; Remesal, 2011). From an activity theory standpoint, these tensions emerge because they are the result of systemic contradictions (Engeström, 1993) that are inherent in the components of the activity system but also manifest themselves at a contextual and societal level (Engeström, 1987, 1993). Therefore, one implication for formative assessment research and practice is to consider the tensions and systemic contradictions between the enactment of formative assessment and grading (e.g., Brookhart, 2004; Looney, 2011; Wing, So, Tai, & Lee, 2011) and the types of support that professional development in formative assessment may provide to teachers in dealing with those tensions.

The case of Diane illustrates that the simple use of formative assessment tools, without giving students the agency to be responsible for their own learning, does little to promote students' examination of their ideas. In her case, the outcome of Diane's activity system was a higher investment in monitoring students' work, although she continued with the same type of traditional instructional practices. This is consistent with the argument posed by Coffey et al. (2011), who noted that research on formative assessment in science has focused on tools to be enacted by the teacher but has tended to overlook the disciplinary substance of students' scientific ideas. What is key in formative assessment is not solely what teachers do, but what teachers elicit, see, and do about students' scientific ideas. In this study, the majority of the lessons that I observed consisted of very traditional approaches to science, oriented toward teaching facts and solving problems in which the use of mathematics was the goal. However, a possible hypothesis is that the enactment of sound formative assessment practices might serve as a mediating tool to promote science instruction and student learning focused on deeper scientific ideas, to the extent that students' ideas can be elicited and discussed. If teachers can learn and have room to enact sound formative assessment practices, developing a chemistry classroom more centered on core scientific ideas may be transformative. Amid those perspectives, this case study shows the complexity of enacting formative assessment as a grounded practice in science classrooms.
Well-designed professional development models can promote teachers' formative assessment knowledge and practice, but they also need to consider better connections between the professional development and the classroom settings, as well as the use of tools that help teachers navigate across them. The use of activity systems in the design of professional development may help teachers enact formative assessment in their classroom practice.

Limitations of the Study

The limitations of the study findings are mainly related to the level of applicability to other contexts, some characteristics of the data collected, and some elements of CHAT as an analytical framework.

First, the findings of this embedded case study represent only the experiences of Diane and Lisa, who teach high-school chemistry in a particular suburban school and participated in a specific learning team. In other words, the results of this case study are not necessarily applicable to the other teachers of the learning team and school in which I conducted this study. The findings are not necessarily applicable to other participants in the FAME program who are in different learning teams. I cannot assume that Diane's and Lisa's experiences enacting a new instructional practice in their classrooms are similar to those of other chemistry teachers. However, since this study purports to characterize in depth the experiences of both teachers in order to understand the enactment of formative assessment, it may provide particular insights into this process that can be helpful in understanding other contexts.

The second limitation of this study regards the data collection. Even though I collected evidence from teacher videos of the same instructional units over two years, the data were insufficient to show changes in teacher practice from year 1 to year 2, in part because I did not have evidence from Diane and Lisa from before they participated in the learning team that could serve as a baseline. The evidence collected in the professional development and the classrooms was compelling, although not necessarily enough to document all the crucial moments that connected the professional development and the classroom enactment of formative assessment. For example, I was not able to capture Lisa's enactment of the formative assessment tool that was created in the learning team. To document this episode, I relied only on evidence from her interviews, which is self-reported information. An additional limitation of my study is that the evidence of students' engagement in formative assessment is scarce. My evidence of students' involvement in formative assessment was restricted to students' public participation and short episodes in which I was able to capture the audio of some students' conversations.

The third limitation of this study refers to my analytical framework and data analysis. I chose CHAT because it was appropriate to the study's research questions: it focuses on understanding practice in context (Roth et al., 2009), investigates complex systems and identifies the main tensions within a system (Engeström, 1999), and provides support for organizing complex real data sets into graphic models (Yamagata-Lynch, 2010). I was able to integrate different sources of evidence to represent the complexity of classroom instruction. My analysis provided important elements about how both teachers' chemistry instruction reflects the systemic demands and forces at play in a particular culture.
However, my analysis focused less on the historical component of CHAT. That component is important because the activity itself occurs in historical contexts (Roth, Radford, & LaCroix, 2012) that add a different layer to the analysis. My data collection focused on the experiences of these two teachers in the context of a grade-driven culture, but I would have liked to expand my data collection in order to understand how their practices, and the enactment of these practices, are connected with their experiences over time. For instance, I would have liked to explore how the enactment of formative assessment was related to the development of Diane's and Lisa's professional careers, to the historical elements present in both teachers' high school, and to the educational policies that promoted the use of formative assessment practices in schools.

Projections for Future Research

This study is the first step in my research agenda. My next goal is to work on professional development for science teachers on topics related to instructional practices such as formative assessment. I plan to develop small-scale professional development in order to help teachers improve their practices in science topics, which can also serve research purposes. However, I will conduct this research in a context that is clearly different from this study. In addition, the findings of this case study will help me design professional development models and research. I plan to include in the professional development different opportunities for teachers to try out formative assessment tools and reflect on how these tools function in activity systems. For example, I will focus part of the professional development agenda and curriculum on helping teachers define activity system objects, as well as on designing activities to discuss the enactment of formative assessment tools in terms of activity systems. I also plan to establish better connections between professional development and classroom practice through the iterative design of instructional tools that can be enacted in the classroom and serve as tools for reflection and teacher learning.

From a more general stance, I want to continue part of my research by using sociocultural theories of learning such as CHAT. I want to improve my knowledge and practice with these models as well as have more opportunities to conduct this type of research, especially in contexts where sociocultural studies have been scarce (i.e., educational research in South America).

APPENDICES

APPENDIX 1: Summaries of Lessons Videotaped

Table 8
Descriptions of Diane's Lessons

Date: 19/3/12
Content: Hess's law. Lesson: students determine and use the equations for Hess's law at different states of matter (they analyze the water heating curve graph).
Description: Teacher reviewed the previous lab on Hess's law. She provided feedback on students' reports. Then students completed a pre-assessment practice test (not graded). Students worked individually and the teacher monitored the activity. After finishing, the teacher reviewed some problems related to the heating curve of water. The class reviewed the test responses. The teacher asked a lot of questions and students responded. She expected single responses; when students gave many different responses, she repeated the one that was correct and continued. The teacher reviewed gas, liquid, and solid states and focused on the changes of state. Using experimental data, the teacher completed the various equations. Then students reviewed related concepts, for example, entropy and exothermic/endothermic reactions. The teacher delivered the exit slips.
Learning targets: Mentioned at the beginning, a sort of target: "Task for today, you're going to write the equations of Hess's law, rearrange those, write the conclusions based on your aim."
Eliciting students' ideas: Basically during the practice test and through questions in the review activity. Most questions were made to verify, and a few to see students' understanding of ideas. Lots of clarifying questions from the teacher. She tried to make connections with previous ideas ("who remembers…?").
Closing the gap: Provided descriptive feedback on lab reports and responses to students' questions. Explained the rationale of the activity (32:05). She provided a lot of statements about expectations for the tasks and reminded students what they previously learned. She mentioned the idea of students making connections among content.
Notes: Feedback was evaluative. Questions were to assess knowledge. Few questions were related to probing students' ideas.

Date: 20/3/12
Content: Unit review. Review of the main concepts of the unit. Test (summative).
Description: She started by explaining what students should submit as part of the package and asked questions about it. She explained the lab equations and Hess's law again. She also explained the procedure to determine the total delta H. She emphasized the conceptual part of the lab instead of the calculations. Then she provided feedback on the exit slips, but it was very general, with no mention of students' specific ideas. Then she explained the lab and the changes in temperature in the water. She reminded the students of the main ideas about Hess's law and what to remember. Students submitted their unit packages and took the test.
Eliciting students' ideas: She only asked the students the purpose of the lab, and some responded. She made questions to verify that students understood the main ideas before the test, most of them to verify knowledge. A few questions were to see if students understood the purposes and the big ideas of the unit. (Note: not all the lesson was related to that.)
Closing the gap: She provided feedback on the exit slips, though her feedback was not specific.
Notes: She made sure to review the main ideas before the test by asking students about them.
The teacher reviewed gas, liquid, and solid states and focused on the changes of state By using experimental data, the teacher completed the various equations. Then students reviewed related concepts. For example, entropy, exothermic/endothermic reactions. The teacher delivered the exit slips Mention at the beginning, a sort of target. “Task for today, you’re going to write the equations of Hess’ law, rearrange those, write the conclusions based on your aim” She started by explaining what students should submit, as part of the package, and makes questions about. She explained the lab equations and Hess’ law again. She also explained the procedure to determine the total delta H. She emphasized the conceptual part of the lab, instead of the calculations. Then she provided feedback on the exit slips, but I it was very general. Not mention to students’ specific ideas. Then she explained the lab and changes in Temp in the water. She reminds to the students the main ideas about Hess Law and what to remember Students submit their unit packages and give the test. She only asked to the students the purpose of the lab and some responded. Basically, during the practice test and in the by questions in the review activity. Most questions were made to verify, and a few for seeing understanding ideas. Lots of clarifying questions from the teacher. She tried to make connections with previous ideas (who remember…?) She made questions to verify that students understood the main ideas before the test. Most of them to verify knowledge. A few questions were to see if students understood the purposes and the big ideas of the unit. Note: Not all the lesson was related to that. 179 Closing the gap Notes Provided descriptive feedback on lab reports and responses to students’ questions. Explained the rationale of the activity (32:05) She provided a lot of statements about expectations for the tasks. And reminded students what they previously learned. She mentioned the idea of students making connections among content. She provided feedback on the exit slips. Her feedback was not specific though. Feedback was evaluative. Questions were to assess knowledge. Few questions were related to question students’ ideas. She made sure to review the main ideas before the test and asking those to students. Table 8 (cont’d) 21/3/12 Introduction to solution unit and ‘bubble activity’ At the beginning, she reviewed the test results, making comments about some students not working in the unit package and cheating. She reviewed the test, and briefly explains the questions that didn’t get the highest achievement. She basically explained the right choice and why the distractors were not. She made some questions, quickly, to make sure that students were giving the right responses. She made some connections with the unit activities and the final exam. Then the new unit is introduced: solutions. The introductory activity for the unit had students make bubbles with different substances (corn syrup and glycerol) and have the biggest bubble possible. She gave the materials and directions to the students. They had to register the time and the size of the bubble. Students worked in groups and were engaged. While students worked, there were a couple of inquiry questions (for example, the measure to use; determining the best solution). At the end of the activity teacher has students think about the activity as well as the solutions’ components. She also made connections with prior knowledge related to intermolecular forces. 
She presented a challenge at the end of the lesson. She mentioned some sort of goals for the students. “You have to make a bubble solution, a good one.” “You goal is to make the biggest bubble and the longest lasting bubble.” But she also has goals for herself. See example when she explains to the external person who asks about the lesson (she said that students need to get some basic terminology) 180 She basically observed the students working but she didn’t make many questions. Only to verify that students were working adequately. She made a couple of public announcements when she realized that students had to record the composition of solution and they hadn’t. In the test review, at the beginning of the lesson, she made connections with the final exam, so some ideas that students learned in the unit will be included. Other interesting issues were: * Issue about students cheating and not “playing the game”. * Evaluative questions such as: was the test easy or hard? * Connections with previous knowledge (intermolecular forces). Note: The idea of identifying the type of reaction based only on the enthalpy sign is an idea that continues being problematic for students. Table 8 (cont’d) 4/6/12 5/6/12 6/6/12 Working with the redox review Lab on cells voltaic Redox reactions students reviewed the lab handout about voltaic cells and also they complete some related questions (they start with the final exam review) The teacher starts by talking about the final exam review and how she will develop and explain the main topics of the year. The lesson is a review of the redox package. The main topics are reviewing of the concept of redox and oxidation, and understanding a voltaic cell. Some students still struggled with identifying the oxidation numbers (for example in group compounds). The teacher made comments in order to remind students the rules for determining the oxidation states and write the redox half equations. She only announced the activity. That is, students were going to work on the redox review. The teacher started by explaining the materials and the instructions for the lab, related to correctly set the galvanic cells. She reviewed the handout with the students, step by step. She explained table A (what to combine for the reaction: solution and metal) and table B (half reactions). Then students started to work in groups. She monitored students work. While observing she made questions and comments (mostly procedural). Only in one occasion the question was more conceptual. Then, students finished the lab and worked on completing the handout. The teacher started by explaining how to fill the formula sheet for the final exam. She says she will let students decide what to write, but she will check it later. Then she explained how a voltaic cell works and how to figure out what is reduced and oxidized. She explained the process with the help of students’ responses to her questions. They mentioned how to recognize the reactivity theories and identify the oxidation. Then they were asked about some physical phenomenon in the voltaic cell. Students were also explained how the circuit works (and the cell bridge) and identified different pairs of redox, in order to identify the ‘best cell’. Students finalize the lab report and will start working on the final exam review (very incipient) She operationalized the goal of the lab “is to find the best salt” In some moments she expressed to the students some expectations on the task (if you are able to do this is fine) She mentioned the goal of the lab. 
It is about: “understanding how reduction and oxidation work in a voltaic cell.” 181 In general, the teacher was monitoring student work and making questions and comments. She made a few questions about the concepts implied in this lab (min 17) Mostly evaluative feedback (e.g., I like it!). In a few cases, it is followed by an explanation. She explained some of her decisions about the final exam review. She gave suggestions to the students about what to include and what to focus. There were a couple of general announcement during the lab, basically referred to procedures (e.g., about the salt bridge, not allow that the leads touch the salt solution). Teacher made a lot of questions to identify what students learned in the lab. She made questions to verify that the lab procedures were correct and the students responded. She also made some questions to clarify what students said. Feedback was mostly evaluative and focused on the correct response. Individual explanations tended to be very similar to the lecture (no many alternatives) However, there was a good case of supporting students while working! She made questions to the students, but they were only referring to the content (facts). They don’t make connections with other content (prior or future). In this lesson, the teacher made a lot of explanations to support students solving the problems. However, it seems that some students continued struggling. There was a lot of procedural monitoring, that the students were doing the right stuff. Connections with prior knowledge: When students ask about the color of the copper solution. Connections with first semester content such as the periodic table. She suggested some kind of students’ selfregulation but finally, she said that she will check. She made questions to check if students understood, but sometimes she didn´t give time to respond. Table 8 (cont’d) 18/3/13 Students completed the thermochemistry review and the prelab worksheet Hess’ Law lab 19/3/13 Students worked on the review. At the beginning she explained how to adjust the values of delta H, because students struggled with understanding the reasons for changing the values of the equation. She reviewed questions about enthalpy, types of reactions (endothermic-exothermic) and problems related with Hess’ law. She reviewed different problems that were complicated for students(basically, math issues related to Hess law equations). Then the teacher started the prelab review. She made questions and explanations related to the activity. Then students started working on the review. She monitored students’ work by walking around and explaining them. She used a lot the word “goal” to refer to baby steps in the lesson. (not a lesson learning target in a problem, for example). The teacher started by reviewing if the students read the lab instructions. Then she asked procedural questions, related to the lab procedures and the security issues. She remarked the importance of doing good revisions. During the lab she monitored, observed and questioned students especially for checking procedures. She also provided some guidance when students were asking for calculations. In general students worked adequately. She mentioned the “goal of the lab”. Understand that the energy in reaction 1 and 2 together equals reaction 3. Note: She included this lesson in a sequence for the week and she gave the conditions. 182 Teacher asking for procedural questions. For example: what do you have to do to…? Her questions looked for right response. 
Table 8 (cont'd)

20/3/13 – Hess's lab review; concept of entropy
Description: She started by commenting on the lab results. In some cases she noticed that the calculations and numbers did not fit. She asked students about sources of error, and there was an interesting discussion in which students identified some sources of error in the data. Then the concept of entropy was introduced. Instead of giving a formal definition, she simply started with questions; she was looking for the right ideas and ignored the wrong responses. She wrote explanations in the package as notes for the students. Then she made a connection with enthalpy and temperature, and asked questions to probe students' previous knowledge. She introduced the idea of spontaneous reactions and went step by step to complete a table. Then she explained the factors that influence entropy (temperature, pressure, number of moles), showed an example of one equation for determining entropy, and explained the results. Students started working on the review; some asked the teacher questions, and she explained using the same ideas.

21/3/13 – Redox review and entropy before the test
Description: The review started by checking some concepts of the unit, such as specific heat. Then they reviewed the calorimetry problems related to heat transfer in chemical reactions. The teacher then reviewed the water heating graph, starting with fusion (she also added the equations) and explaining the reasons for these changes. She solved one problem related to the graph equations and reminded students of the essentials for responding to the test questions. Then she reviewed Hess's law problems and explained the essential concepts, and did the same with endothermic and exothermic reactions.

Observations: She started right in with her talk about the lab; no learning targets were mentioned. She gave a nice overview of the sources of error in the data and asked students about the concepts involved in the labs. When introducing entropy, she asked questions to help students connect the concepts with the ones previously studied. She explained to the students what she had noticed in the lab about how they measured and provided feedback, but it was evaluative. When talking about entropy, she addressed students' previous ideas and expanded on what students said; some of her questions on entropy were more probing. This is the first time I saw this! The teacher was good at explaining to students what their expected level was and at making connections with previous academic knowledge. The entropy part is rich for understanding her practice. There was no mention of learning targets; it was clear that the lesson was about reviewing the unit, and she repeatedly mentioned what students should understand. Teacher questions were about following procedures: she checked that students were following the procedures and understood the related concepts, and she provided brief explanations. In this lesson some students asked questions, basically about doubts in solving the problems. There was evaluative feedback when students responded correctly. She tended to explain and then provide some general feedback to the group about how it performed. She emphasized that she wanted students to understand the problems.
Note: She did the same activity last year. She was very clear in stating what would be graded in the test and what would not (for example, problems from the lab guide).
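For reference, the kind of equation for determining entropy that the lesson appears to gesture at is the standard tabulated-entropy sum (a general textbook formulation; the lesson's specific example is not recorded here):

\[
\Delta S^{\circ}_{\mathrm{rxn}} \;=\; \sum n\, S^{\circ}(\text{products}) \;-\; \sum m\, S^{\circ}(\text{reactants}),
\]

where \(n\) and \(m\) are the stoichiometric coefficients. The qualitative factors the teacher listed point the same way: entropy generally rises with temperature and with the number of moles of gas, and falls when a gas is compressed to higher pressure.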
Table 8 (cont'd)

28/5/13 – Redox review: determining half reactions (oxidation and reduction)
Description: The teacher introduced the lesson by explaining that she would review the oxidation states and provide feedback. Students worked on the redox review and the final exam review. She asked some questions to check what students knew about reduction and oxidation, and explained how to solve a problem with students' collaboration, reminding them of the rules previously taught. The group together solved a set of problems for determining half reactions (oxidation and reduction). Then she explained the assignment but continued solving problems; for example, she explained how to write the half reactions and place the electrons correctly, depending on the gain or loss of electrons (see the worked sketch following this entry). She also introduced the concepts of the oxidizing and reducing agent. Students then started working by themselves.
Learning targets: She explained that the lesson concentrated on half reactions (redox) and contextualized the learning targets with previous knowledge.
Observations: The teacher checked prior knowledge at the beginning of the lesson. Then she asked students questions, very quickly, based on what they had learned before (oxidation states); however, she seemed to give more opportunities for several students to respond (even though the nature of the questions was the same). She monitored student work when students were in groups, checking what they were doing and asking questions. She gave general feedback on previous work, though it was not very descriptive. When she solved the problem and students gave correct responses, she provided evaluative feedback. In her explanation of redox, at some moments she gave deeper explanations or responded to the student, for example, referring to the atom or to first-semester content. She made a public announcement while students were working and explained the problem to the whole class (a formative assessment moment). I think the teacher was more explicit here in making the rationale for the class clear to the students, so they could figure out what they were doing. She made reference to previous knowledge and explained what she had noticed when monitoring student work.
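As a reference for what "writing the half reactions and placing the electrons correctly" involves, here is a minimal worked example (a standard illustration; not a problem from the class's review packet):

\begin{align*}
\mathrm{Cu(s)} + 2\,\mathrm{Ag^{+}(aq)} &\rightarrow \mathrm{Cu^{2+}(aq)} + 2\,\mathrm{Ag(s)}\\
\text{oxidation:}\quad \mathrm{Cu} &\rightarrow \mathrm{Cu^{2+}} + 2e^{-} && (\mathrm{Cu}:\ 0 \to +2;\ \text{Cu is the reducing agent})\\
\text{reduction:}\quad \mathrm{Ag^{+}} + e^{-} &\rightarrow \mathrm{Ag} \quad (\times 2) && (\mathrm{Ag}:\ +1 \to 0;\ \mathrm{Ag^{+}}\ \text{is the oxidizing agent})
\end{align*}

Electrons appear on the right of the oxidation half reaction (loss) and on the left of the reduction half reaction (gain), and the reduction half is doubled so the electrons cancel.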
Table 8 (cont'd)

29/5/13 – Review of redox materials
Description: The teacher started by explaining the two problems that students had been struggling with, working through both in detail by determining oxidation states and half reactions and reminding students of the rules and procedures involved. Then she reviewed one problem that students mentioned struggling with, explaining the entire process and emphasizing the ideas and procedures that were problematic for students. Students worked individually on (1) completing a chart with redox reactions and (2) identifying the atoms that are oxidized and reduced, the half reactions, and the oxidizing and reducing agents. She explained the exercises to the students, sometimes in different ways; in one case she drew an atom model on the board. In general, students' questions were very similar, about determining oxidation states; other students asked different questions about the final exam review, and she referred to prior knowledge.
Learning targets: None; she continued working with the guide and just said "redox stuff."
Observations: At the beginning, the questions were about a task that students had already done. When she explained the exercises, the students asked questions, or she asked questions of them; in general, most responses were straightforward and feedback was evaluative. She explained why she would start by redoing the problems that had been most problematic for the students, having identified the areas and topics they struggled with most. She drew on previous knowledge and the atom model to explain one of the problematic ideas: the change of charge in ions. While students worked individually, she supported their work in different ways, including models.

30/5/13 – Redox reactivity theories; voltaic cells with redox reactions
Description: The teacher started the lesson by explaining the sequence for future lessons. Students checked redox pairs based on reactivity theories. She defined what a redox couple was, then explained a voltaic cell, including in detail how redox reactions occur at the cathode and the anode. Students continued working on the redox review.
Learning targets: None; just an explanation of the sequence.
Observations: Questions to verify, some of them more open (the copper color). Students asked some questions, and she checked whether students remembered the basics. During the review, students worked individually and she asked whether they were understanding or how they felt. She made connections with the previous lab to explain the reduction/oxidation of copper, providing context for the unit. She responded to student questions, again making connections with prior knowledge, for example atom structure and models. There was an interesting discussion with the students about grading, and several moments about the final exam. A quiz for each chunk was announced. Students made a lot of similar comments.
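Since both teachers' late-May lessons center on how a voltaic cell couples two half reactions, a compact reference sketch may help (the classic zinc-copper cell with standard values; the class's own metal/solution pairs are not recorded here):

\begin{align*}
\text{anode (oxidation):}\quad \mathrm{Zn(s)} &\rightarrow \mathrm{Zn^{2+}(aq)} + 2e^{-}\\
\text{cathode (reduction):}\quad \mathrm{Cu^{2+}(aq)} + 2e^{-} &\rightarrow \mathrm{Cu(s)}\\
\text{overall:}\quad \mathrm{Zn(s)} + \mathrm{Cu^{2+}(aq)} &\rightarrow \mathrm{Zn^{2+}(aq)} + \mathrm{Cu(s)},\\
E^{\circ}_{\text{cell}} &= E^{\circ}_{\text{cathode}} - E^{\circ}_{\text{anode}} = 0.34 - (-0.76) = 1.10~\mathrm{V}
\end{align*}

Electrons flow from anode to cathode through the external circuit while the salt bridge keeps each half-cell electrically neutral; a larger positive cell potential is what distinguishes a "better" cell in the comparisons students were making.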
Table 9: Descriptions of Lisa's Lessons (columns: Date, Content, Description, Learning Targets, Eliciting Students' Ideas, Closing the Gap, Notes)

21/3/12 – Hess's law lab
Description: The teacher started by explaining the lab, which consisted of three experiments. She explained the procedures, what to measure and how, and said that in the next lesson students would deal with calculating the Hess's law equation. She insisted on following the procedures and directions: (1) measure temperature accurately and (2) prepare the right solutions; she said that she did not want skewed data in the measurements. In the lab, students worked in groups while the teacher monitored how they worked. In some cases she clarified, responded to the students, or asked questions. She usually had students repeat and read the instructions (then she confirmed the right response). Her main concern was helping students complete the right procedures.
Learning targets: She asked the students for the learning target (the purpose of the lab…). Students had read the pre-lab beforehand, and she used it to introduce the main ideas for the lab.
Notes: It seems that the explanations, calculations, and debriefing will occur in the next lesson. She will give the results of the lab handout in a few more days, and she made connections with the final exam.

4/6/12 – Review of redox lessons
Description: Students worked on the redox review, with half of the lesson to go through it. Students started working and asked the teacher questions individually; some students were working, others were not. Then she gave a general explanation of the problems to the group and started explaining the review, sometimes asking the students for the responses. She tried to provide fuller explanations rather than just the answer, and she modeled the procedures for completing some exercises, explaining in more detail. She asked the students questions and, at the end, did more checking for understanding. She briefly reviewed the rest of the questions; students asked a few questions at the end, and the teacher explained the characteristics of the quiz.
Learning targets: No mention.
Observations: A few students asked questions to clarify procedures; she responded briefly or asked the students further questions. The teacher made a couple of announcements about following certain procedures. She monitored student work and responded to questions; often she explained, though at times she said that a point would be explained later. Basically the teacher responded to students' questions and provided some good explanations as feedback. In the review, some students asked questions.
Note: It is likely the teacher tried to pose more conceptual questions and checking points. When explanations were needed, she basically explained again, sometimes in different ways. She made some comments to re-explain the main ideas of redox and explained the sequence of future lessons.

Table 9 (cont'd)

5/6/12 – Beginning the final exam review
Description: Students completed a quiz on redox reactions and then started working on the final exam review. Later, students continued working on the review; she gave out the formula sheet.
Learning targets: No goal stated.

6/6/12 – Final exam review
Description: She started by explaining the materials and the formula sheet for the final exam. Students worked on the thermochemistry questions of the review.
Learning targets: No goal; just general indications.

18/3/13 – Unit review before the quiz
Description: She started by talking about the review before the test and the sequence of activities in the review, and referred to the test characteristics (multiple choice, etc.). She reviewed some problems from the review: she started with the first one and explained how to solve the problem and what to focus on in the statement and the other information given. She explained the data, gave an example with actual numbers, and emphasized getting the right procedures and data, going step by step. Then she explained enthalpy diagrams and Hess's law, working one problem in detail, and did the same for heat of formation (see the reference relation following this entry). She gave students the last half hour to complete the review material that would be on the test.
Learning targets: She did not have a clear learning target, but specified this at the beginning of the lesson: "What I'd like to concentrate on today is make sure we understand Hess's law, how to do these problems, the heat of formation problems, and specific heat problems."
Observations: Few questions; these were individual and focused on details. The questions were not very audible, but it seems she was supporting students and giving explanations. Her questions were mainly to check that students were following her. At the end of each explanation she asked questions to check that students were understanding. She made announcements highlighting the questions that needed review; the revision would take place in the next lesson. Teacher explanations tended to be very straightforward and directed; basically there was explanation, but much more lecture. She introduced the sequence of activities students would work on during the week. Her focus was on helping students notice the clues and details in the problem, for example, by looking at the delta H problems.
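The "heat of formation problems" mentioned here typically rest on one standard relation, stated below for reference (a textbook formulation, not drawn from the review packet):

\[
\Delta H^{\circ}_{\mathrm{rxn}} \;=\; \sum n\,\Delta H^{\circ}_{f}(\text{products}) \;-\; \sum m\,\Delta H^{\circ}_{f}(\text{reactants}),
\]

with \(n\) and \(m\) the stoichiometric coefficients and \(\Delta H^{\circ}_{f} = 0\) for elements in their standard states.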
Table 9 (cont'd)

20/3/13 – Hess's law lab
Description: The teacher started by explaining the pre-lab. She described the procedures students had to follow and the safety procedures for dealing with the reagents and materials; it was basically a lecture. Students started working on the lab. They worked very well compared with last year's lab, following the procedures and completing the handout, with few related questions. At the end, she asked questions to review in the classroom: she asked for the three reactions, wrote the equations, and summed the values to show how the Hess's law equations are connected. Students did the calculations for determining the enthalpy values. Before the review, which will happen in the next lesson, students need to complete some exercises.
Learning targets: No mention.
Observations: She asked some questions to verify that students were following her lecture. During the lab she responded to students' questions; in some cases she did not give the answer. There were questions at the end of the lab regarding the equations. She made announcements about lab procedures that students had to follow correctly.

Table 9 (cont'd)

21/3/13 – Review of lab results; entropy
Description: The teacher started by reviewing the information from the lab exercises. She used data from one student to make the calculations and reminded students of all the procedures to compute Q, going reaction by reaction. She explained how to do the math, gave examples, and finally calculated the values and applied Hess's law using the experimental data, finding a small margin of error. She encouraged students to review their own data and self-correct. She then explained in more detail what entropy is, using real-life examples, and re-did her explanations of the factors that affect entropy with other examples; it was the same explanation, but more detailed. Later she explained the Gibbs free energy equation for determining whether a reaction is spontaneous and how the formula operates in different situations (the standard relations are sketched after this entry). She explained the assignment, in which students needed to determine whether a reaction was spontaneous.

22/3/13 – Entropy review
Description: She explained the factors that influence entropy again, but in more detail, and explained one math problem. She then went over the entropy problems in the guide, saying what each problem was about. Students worked by themselves in the last 10 minutes. The teacher reviewed the entropy problems, giving the responses to the students; in some cases students responded or made comments, and in a few cases she solved the problems. The assignment for students was to complete the rest of the package; she explained what each problem was about.
Learning targets: She referred twice to the instructional sequence students were experiencing during the week, mentioning what they had done and learned, what they were doing in the current lesson, and what they would do in the future. She debriefed learning targets, for example, in the entropy part. Otherwise no mention, but she explained the sequence of activities for the next week so that students would know when they had to submit the package.
Observations: She basically asked students about what they did in the lab and how they made sense of their work. In the entropy section she asked some good probing questions; with the Gibbs equation, her questions helped her verify that students were understanding. She explained the rationale for her instructional decisions when she had students complete the lab results, and she provided feedback on the results of the calculations, noting that they were pretty good. Some questions were just to verify whether students had the right response. She made explanations "on the fly." She showed students the rationale for her decisions and explained how the activity was connected to an instructional sequence; there were nice connections with previous and future knowledge. She mixed many things together, including instructions, explanations, and comments. She gave directions for the rest of the problems and provided a very detailed explanation of Hess's law to the students.
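Two standard relations underlie these lessons: the calorimetric heat used to compute Q, and the Gibbs free energy test for spontaneity. They are sketched here for reference (general textbook forms; the class's specific numbers are not recorded):

\begin{align*}
q &= m\,c\,\Delta T && \text{(heat absorbed or released by the solution in the calorimeter)}\\
\Delta G &= \Delta H - T\,\Delta S && \text{(spontaneous if } \Delta G < 0 \text{ at constant } T, P\text{)}
\end{align*}

For example, an endothermic reaction (\(\Delta H > 0\)) can still be spontaneous if \(T\Delta S\) is large enough, which is the kind of case-by-case reasoning the teacher's "how the formula operates in different situations" explanation points to.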
Table 9 (cont'd)

28/5/13 – Redox reactions: determining oxidation states
Description: The teacher introduced redox reactions as the last package to be covered in the year, using "OIL RIG" (oxidation is loss, reduction is gain) as a rule. She lectured on the concepts, then explained what oxidation numbers are and the different rules for different types of substances. Next, the teacher solved four problems to help students figure out the rules and use them. Students worked individually solving some redox problems.
Learning targets: She said, "Just in a normal chemical reaction, your job is to figure out, by assigning oxidation numbers to the entire equation, what element was being reduced and what element was being oxidized, and then we will pull them out and create these half reactions."
Observations: During the lecture she paid attention to the key students to see whether they were following her, as well as asking questions. She commented on one student response along the lines of "don't guess" and reminded them of the periodic table. She asked one student to solve a problem publicly, like a demonstration. The explanations tended to be very simple, with verification questions at the end. She drew on students' prior knowledge to explain the rules for oxidation states, and she reminded students of the important rules and emphases as she lectured.

Table 9 (cont'd)

29/5/13 – Redox reactions
Description: She started by summarizing the main ideas from the previous lesson and used this to explain the concept of half reactions. The teacher worked with a chemical equation and completed the oxidation states with student help. Then she taught students to cross out the reactions that are not redox, and then to write the half reactions. She also explained the tables for the reactivity theories and how their values relate to the periodic table. Then she explained the voltaic cell, describing in detail how the cathode and anode work and the processes that occur (she also taught the mnemonic "RED CAT": reduction at the cathode). Students started working on the assignment: they had to complete six questions by following these steps. Before starting, the class solved one problem together; she completed all the steps and asked the students questions to get the right values.
Learning targets: No mention of the learning target, but she debriefed the task for the lesson: "We are taking the full redox reactions, assigning oxidation numbers, determining what is the oxidation half reaction and what is the reduction half reaction, and filling in the chart." She also worked through several different problems.
Observations: She asked students questions while she explained, to integrate students' ideas and check whether they were writing. Students noticed an error in the teacher's explanation. Not many other questions, only when she made sure that students were working and responding correctly. There were connections with general ideas of chemistry and science.
APPENDIX 2: Teacher Interview Protocols

Interview Protocol #1

Teacher demographics: years of experience (classroom and professional), certification, previous PD with formative assessment, grade levels taught, number of students per class.

Note: These questions refer to the process of implementing formative assessment in your classes.

1. What are the current ways you are using formative assessment with your students? Can you provide specific examples in science?
Follow-ups:
- How do you know if students understand what you are teaching?
- How do you know if you need to re-teach?
- How do you re-teach? Can you give an example?
- Do you find that you use different types of formative assessment with different science activities or different content?
2. What components of formative assessment (e.g., learning targets, feedback, questioning, instructional decisions, self/peer assessment) have been effective in your classroom? Why? Please give an example.
Follow-up:
- Have these been easy or difficult to implement? Why?
3. What factors have influenced (positively or negatively) your implementation of formative assessment? (If they mention only the learning team or positive influences:)
Follow-ups:
- Has anything hindered your implementation of formative assessment?
- Are any of these factors specific to science? Why or why not?
4. How do you currently understand formative assessment?
5. Regarding your current understanding of formative assessment, can you identify some experiences that helped you learn about this process?
6. To what extent has your learning about formative assessment improved your teaching of science?
Follow-up:
- How?
7. Have your students been affected by your implementation of formative assessment?
- If yes, how?

Interview Protocol #2

Introduction: In this interview you will see video segments related to your formative-assessment practice. The questions focus on the formative-assessment-related decisions that you made as part of your instructional practice, to gain insight into your ideas about teaching and your decision-making process based on evidence of students' understandings.

Two video prompts:
- Formative-assessment moments over the course of a week
- Formative-assessment moments over two years, related to the same topic

After seeing each prompt:
1) What are students' ideas or misconceptions about this topic (energy/thermodynamics or redox reactions)?
Follow-ups:
- Did this year's students have problems with the topic? How did you notice that?
- How do students develop these "problematic" ideas? How do you plan to teach this topic and address these ideas?
2) What did you notice about students' ideas that led you to make a decision, ask a question, or provide feedback?
- Why did you make that decision?
- After seeing the clips, would you have done something different? Why?
Follow-ups:
- How do you know that all students are making progress toward the learning targets?
- How do you make students aware of their ideas, help them understand the content, and help them move forward?
Evaluative questions:
1) After seeing these video clips, what do you think about your formative assessment practice?
2) What factors affected these practices? What about the FAME learning team?

APPENDIX 3: Preliminary Coding Schema for Lessons

I. Assessment conversations (delimit length)
Context
1. Part of a planned activity
2. Part of an explanation/response
3. On the fly
Involvement & participation
1. Whole class
2. Individual or small group of students
Type of activity
1. Lecture/explanation
2. Group work
3. Lab/practical
Science topic
1. Redox
2. Energy in chemical reactions
3. Other
Nature of the topic
1. Conceptual
2. Procedural
3. Inquiry*

II. Formative assessment practices
Learning targets and process
1. Use of learning targets & connections with learning targets
2. Connections within the same lesson
3. Connections with the instructional sequence (*prior & next lessons)
Feedback to students
1. Evaluative
2. Descriptive
3. Sort of metacognitive (pushes students' thinking forward)
Gathering and using students' ideas
1. Teacher elicits students' ideas
2. Students are asked for their ideas
3. Teacher recurs to "generic" student ideas
4. Teacher refers to previous student work
5. Teacher makes students' ideas explicit
Teacher actions & regulation
1. Makes instructional decisions (explicit)
2. Has the potential of making instructional decisions (*this is tricky, but I'd like to see if it works)
3. Teacher doing interactive regulation
Student learning/actions (*this is hard to see, but maybe it is possible)
1. Students asking questions
2. Students showing their understanding (*for example, responding to questions correctly or giving explanations)
3. Students solving a problem & showing their work
4. Students making connections with prior knowledge & ideas
5. Students understanding learning targets or big ideas
Student involvement & participation
1. All students providing evidence of their learning
2. Teacher collects evidence from some students
3. Use of particular "key" students
Use of students' ideas
1. Checking for understanding
2. Convergent assessment
3. Divergent assessment
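Purely as an illustration of how this codebook could be applied systematically, here is a minimal Python sketch of the kind of lookup one might use when tallying coded assessment conversations. The structure and names (LESSON_CODES, label) are hypothetical; the dissertation's analysis did not necessarily use any software of this form, and only three of the dimensions above are shown.

# Hypothetical machine-readable rendering of part of the preliminary
# lesson-coding schema above: dimension -> {code number: label}.
LESSON_CODES = {
    "context": {1: "part of a planned activity",
                2: "part of an explanation/response",
                3: "on the fly"},
    "feedback": {1: "evaluative",
                 2: "descriptive",
                 3: "sort of metacognitive"},
    "use_of_ideas": {1: "checking for understanding",
                     2: "convergent assessment",
                     3: "divergent assessment"},
}

def label(dimension: str, code: int) -> str:
    """Return the human-readable label for a coded observation."""
    return LESSON_CODES[dimension][code]

# Example: an on-the-fly assessment conversation with evaluative feedback.
print(label("context", 3), "|", label("feedback", 1))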
APPENDIX 4: Coding Schema for Learning Team Meetings

Meeting Activity Codes
1 – Sharing an example or tool from practice (stories of personal experiences, observations, or student work)
2 – Analyzing & discussing samples of student work or video of classroom teaching
3 – Reading, writing, examining, or discussing information from a book or other source (video, website)
4 – Presentation of information (lecturing…)
5 – Discussion of external constraints or classroom-based obstacles
6 – Discussion of potential uses of formative assessment for student learning, teacher collaboration, school-wide reform…
7 – Discussion of unrelated topics
8 – Guiding discussion (e.g., setting the stage, giving directions, reviewing the agenda or goals for the meeting or for future meetings)
9 – Other
10 – Planning for classroom implementation (e.g., writing learning targets, creating tools (exit slips, questions, models…), unpacking content…)

Formative Assessment Content Codes
1 – Planning (mapping out when, why, and how formative assessment will occur)
2 – Learning target use (student outcomes or goals that are defined in student-friendly language)
3 – Student evidence & formative assessment tools (products, observations, conferences)
4 – Formative assessment strategies (i.e., activating prior knowledge, goal setting, feedback use, self-assessment, peer-assessment) and/or instructional decisions (what teachers do with student evidence)
5 – Formative feedback (feedback to students to let them know how close they are to the learning targets and what they can do to reach them)
6 – Other (i.e., not specifically about formative assessment)
7 – General formative assessment or overview of formative assessment (no discussion of a specific strategy or tool)

Depth of Content/Focus Codes
0 – No depth (e.g., if the content code is 6, "other"; the discussion is not specifically about formative assessment)
1 – Abstract coverage of the formative assessment content
2 – Coverage of theory only (e.g., strategies without linking to specific tools or mention of how the theory will be implemented in classrooms), and/or no mention of how content (math, science, ELA, social studies) plays a role
3 – Coverage of practice only (e.g., tools without a link to a specific strategy or to a theory of how the tools will further student learning)
4 – Linking theory and practice (e.g., how a tool is linked to a strategy or to a specific teaching/learning practice)

Depth of Discussion
0 – No discussion (e.g., reading material, completing a worksheet, watching a video…) or off topic
1 – One-way sharing (e.g., presentation of information, no discussion)
2 – Parallel sharing (sharing examples or ideas without building off of others' examples or ideas)
3 – Discussion links ideas or examples together by building off of others' ideas, but does not push for reflection or in-depth analysis of the "whys"
4 – Discussion links ideas and examples through an examination of WHY things are similar/different or WHY they happened; challenging each other's ideas

Participation Structure
1 – Whole group (e.g., presentation to the whole group, whole-group sharing, whole group watching a video…)
2 – Small group
3 – Pair-sharing
4 – Individual (e.g., individuals reading a book)

APPENDIX 5: Code Tree Representations

Figure 12: Tree codes generated in grounded theory analysis
Assessment Literacy: What Science Teachers Need to Know and Be Able to Do. In D. Corrigan, J. Dillon, & R. Gunstone (Eds.), The professional knowledge base of science teaching (pp. 205–221). Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3927-9 Allal, L. (2010). Assessment and the regulation of learning. In P. L. Peterson, E. Baker, & B. McGraw (Eds.), International encyclopedia of education (Vol. 3, pp. 348–352). Oxford, UK: Elsevier. Allal, L., & Mottier-Lopez, L. (2005). Formative assessment of learning: A review of publications in French. In Formative assessment – improving learning in secondary classrooms (pp. 241–264). Centre for Educational Research and Innovation (Ed.), Paris: OECD Publishing. Alonzo, A. C., Kobarg, M., & Seidel, T. (2012). Pedagogical content knowledge as reflected in teacher-student interactions: Analysis of two video cases. Journal of Research in Science Teaching, 49(10), 1211–1239. doi:10.1002/tea.21055 Ash, D., & Lewitt, K. (2003). Working within the zone of proximal development: Formative assessment as professional development. Journal of Science Teacher Education, 14(1), 23– 48. Athanases, S. Z., & Achinstein, B. (2003). Focusing new teachers on individual and low performing students : The centrality of formative assessment in the mentor’ s repertoire of practice. Teachers College Record, 105(8), 1486–1520. Bell, B., & Cowie, B. (2001). The characteristics of formative assessment in science education. Science Education, 85, 536–553. Bennett, R. E. (2011). Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. doi:10.1080/0969594X.2010.513678 Berry, R. (2011). Assessment trends in Hong Kong: Seeking to establish formative assessment in an examination culture. Assessment in Education: Principles, Policy & Practice, 18(2), 199–211. Birenbaum, M., Kimron, H., & Shilton, H. (2011). Nested contexts that shape assessment for learning: School-based professional learning community and classroom culture. Studies in Educational Evaluation, 37(1), 35–48. doi:10.1016/j.stueduc.2011.04.001 Black, P. (2013). Formative and summative aspects of assessment: Theoretical and research foundations in the context of pedagogy. In J. H. McMillan (Ed.), SAGE handbook of 204 research on classroom assessment (pp. 167–179). Thousand Oaks, CA: SAGE Publications, Inc. doi:http://dx.doi.org/10.4135/9781452218649.n10 Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box : Assessment for learning in the classroom. The Phi Delta Kappan, 86(1), 8–21. Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. doi:0969-594X/98/010007-68 Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148. doi:10.1002/hrm Black, P., & Wiliam, D. (2005a). Changing teaching through formative assessment research and practice. The King’s-Medway-Oxfordshire formative assessment project. In Formative assessment – improving learning in secondary classrooms (pp. 223–240). Paris: OECD Publishing. Black, P., & Wiliam, D. (2005b). Lessons from around the world: how policies, politics and cultures constrain and afford assessment practices. Curriculum Journal, 16(2), 249–261. doi:10.1080/09585170500136218 Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment Evaluation and Accountability, 21, 5–31. 
doi:10.1007/s11092-008-9068-5 Blaikie, E. (2007). Approaches to social enquiry (2nd ed.). Cambridge, UK: Polity Press. Bohm, A. (2004). Theoretical coding: Text analysis in grounded theory. In U. Flick, E. v. Kardorff, & I. Steinke (Eds.), A companion to qualitative research (pp. 270–275). London: Sage. Bordieu, P. (1990). The logic of practice. San Francisco: Stanford University Press. Borko, H. (2004). Professional development and teacher learning : Mapping the terrain. Educational Researcher, 33(8), 3–15. Borko, H., & Putnam, R. (1996). Learning to teach. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 673–708). New York: McMillian. Brookhart, S. M. (2001, March). The standards and classroom assessment research. Paper presented at the annual meeting of the American Association of Colleges of Teacher Education. Dallas, TX. Brookhart, S. M. (2004). Classroom assessment : Tensions and intersections in theory and practice. Teachers College Record, 106(3), 429–458. 205 Brookhart, S. M. (2007). A cross-case analysis of teacher inquiry into formative assessment practices in six Title I reading classrooms. Retrieved from http://www.duq.edu/castl/_pdf/CASTL_Technical_Report_1_07.pdf Brookhart, S. M. (2013a). Classroom assessment in the context of motivation theory and research. In J. H. McMillan (Ed.), SAGE handbook of research on classroom Assessment (pp. 35–55). Thousand Oaks, CA: SAGE Publications, Inc. doi:http://dx.doi.org/10.4135/9781452218649.n3 Brookhart, S. M. (2013b). Grading. In J. H. Mcmillan (Ed.), SAGE handbook of research on classroom assessment (pp. 256–272). Thousand Oaks, CA: SAGE Publications, Inc. Brookhart, S. M., Moss, C. M., & Long, B. A. (2007). A cross case analysis of teacher inquiry Into formative assessment practices in six Title I reading classrooms. CASTL Technical Report No . 1-07 Center for Advancing the Study of Teaching and Learning, Department of Foundations and Leadership School of Education. Brown, G. T. L., & Hirschfeld, G. H. F. (2008). Students’ conceptions of assessment: Links to outcomes. Assessment in Education: Principles, Policy & Practice, 15(1), 3–17. doi:10.1080/09695940701876003 Brown, G. T. L., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. F. (2009). Use of interactive– informal assessment practices: New Zealand secondary students’ conceptions of assessment. Learning and Instruction, 19(2), 97–111. doi:10.1016/j.learninstruc.2008.02.003 Brown, G. T. L., Kennedy, K. J., Fok, P. K., Chan, J. K. S., & Yu, W. M. (2009). Assessment for student improvement: understanding Hong Kong teachers’ conceptions and practices of assessment. Assessment in Education: Principles, Policy & Practice, 16(3), 347–363. doi:10.1080/09695940903319737 Brown, G. T. L., Lake, R., & Matters, G. (2011). Queensland teachers’ conceptions of assessment: The impact of policy priorities on teacher attitudes. Teaching and Teacher Education, 27(1), 210–220. doi:10.1016/j.tate.2010.08.003 Buck, G. A., & Trauth-Nare, A. E. (2009). Preparing teachers to make the formative assessment process integral to science teaching and learning. Journal of Science Teacher Education, 20(5), 475–494. doi:10.1007/s10972-009-9142-y Buck, G. A., Trauth-Nare, A., & Kaftan, J. (2010). Making formative assessment discernable to pre-service teachers of science. Journal of Research in Science Teaching, 47(4), 402–421. doi:10.1002/tea.20344 CCSSO, (2008). Attributes of effective formative assessment. Council of Chief State School Officers: Washington, DC. 
206 Chappuis, S., & Chappuis, J. (2007). The best value in formative assessment. Educational Leadership, 65(4), 14–19. Cobb, P., McClain, K., de Silva Lamberg, T., & Dean, C. (2003). Situating teachers’ instructional practices in the institutional setting of the school and district. Educational Researcher, 32(6), 13–24. doi:10.3102/0013189X032006013 Coffey, J. E., Hammer, D., Levin, D. M., & Grant, T. (2011). The missing disciplinary substance of formative assessment. Journal of Research in Science Teaching, 48(10), 1109–1136. doi:10.1002/tea.20440 Coffey, J. E., Sato, M., & Thiebault, M. (2005). Classroom assessment: Up close – and personal. Teacher Development, 9(2), 169–184. Cook, S. D. N., & Brown, J. S. (1999). Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science, 10, 381–400. Corbin, J., & Strauss, A. (2008). Basics of qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications. Cornett, J. W. (1990). Utilizing action research in graduate curriculum courses. Theory into Practice, 29, 185-195. Costa, A. L., & Garmstom, R. J. (2002). Cognitive coaching: A foundation for renaissance schools (2nd ed.). Norwood, MA: Christopher-Gordan Publishers. Cowie, B., & Bell, B. (2001). A model of formative assessment in science education. Assessment in Education, 6(1), 101–116. Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications. Crossouard, B., & Pryor, J. (2012). How theory matters: Formative assessment theory and practices and their different relations to education. Studies in Philosophy and Education, 31(3), 251–263. doi:10.1007/s11217-012-9296-5 Danielson, C., & McGreal, T. (2000). Teacher evaluation to enhance professional practice. Alexandria, VA: ASCD. Daws, N., & Singh, B. (1996). Formative assessment: to what extent is its potential to enhance pupils’ science being realized? School Science Review, 77(281), 93–100. DeLuca, C., & Klinger, D. a. (2010). Assessment literacy development: identifying gaps in teacher candidates’ learning. Assessment in Education: Principles, Policy & Practice, 17(4), 419–438. doi:10.1080/0969594X.2010.516643 207 Desimone, L. M. (2009). Improving Impact Studies of Teachers’ Professional Development: Toward Better Conceptualizations and Measures. Educational Researcher, 38(3), 181–199. doi:10.3102/0013189X08331140 Driver, R. A., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education, 84(3), 287–312. Dunn, K. E., & Mulvenon, S. W. (2009). A critical review of research on formative assessment: The limited scientific evidence of the impact of formative assessment in education. Practical Assessment, Research & Evaluation, 14(7), 1–11. Duschl, R. A. (2003). The assessment of argumentation and explanation: Creating and supporting teachers’ feedback strategies. In D. Zeidle (Ed.), The role of moral reasoning on socioscientific issues and discourse in science education (pp. 139–159). Dordrecht, The Netherlands: Kluwer Acdemic Publishers. Duschl, R. A., & Gitomer, D. H. (1997). Strategies and challenges to changing the focus of assessment and instruction in science classrooms. Educational Assessment, 4(1), 37–73. doi:10.1207/s15326977ea0401_2 Engeström, Y. (1990). Learning, working and imagining: Twelve studies in activity theory. Helsinki, Finland: Orienta-Konsulti. Engeström, Y. (1987). 
Learning by expanding: An activity-theoretical approach to developmental research. Helsinki, Finland: Orienta-Konsultit. Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R. Punamaki (Eds.), Perspectives on activity theory (pp. 19– 38). New York: Cambridge University Press. Engeström, Y. (2001). Making expansive decisions: An activity-theoretical study of practitioners building collaborative medical care for children. In C. M. Allwood & M. Selart (Eds.), Decision making: Social and creative dimensions (pp. 281–301). Dordrecht, The Netherlands: Kluwer Acdemic Publishers. Falk, A. (2012). Teachers learning from professional development in elementary science: Reciprocal relations between formative assessment and pedagogical content knowledge. Science Education, 96(2), 265–290. doi:10.1002/sce.20473 Fetterman, D. M. (2009). Ethnography: Step by step (3rd ed.). Beverly Hills, CA: Sage Publications. Furtak, E. M. (2011, April). “Flyng blind”: An exploration of beginning science teachers’ enactment of formative assessment practices. In Annual meeting of the American Educational Research Association, New Orleans, LA (pp. 1–23). 208 Furtak, E. M. (2012). Linking a learning progression for natural selection to teachers’ enactment of formative assessment. Journal of Research in Science Teaching, 49(9), 1181–1210. doi:10.1002/tea.21054 Furtak, E. M., Ruiz-Primo, M. A., Shemwell, J. T., Ayala, C. C., Brandon, P. R., Shavelson, R. J., & Yin, Y. (2008). On the fidelity of implementing embedded formative assessments and its relation to student learning. Applied Measurement in Education, 21(4), 360–389. doi:10.1080/08957340802347852 Gabel, D. (1998). The complexity of chemistry and its implications for teaching. In B. G. Fraser & K. Tobin (Eds.), International handbook of science education (pp. 233–248). Dordrecht, The Netherlands: Kluwer Acdemic Publishers. Gilbert, J. K., Justi, R., Van Driel, J. H., De Jong, O., & Treagust, D. F. (2004). Securing a future for chemical education. Chemistry Education Research and Practice, 5(1), 5. doi:10.1039/b3rp90027d Gilbert J. K., & Treagust D. F. (2006) Introduction: macro, submicro and symbolic representations and the relationship between them: key models in chemical education. In: Gilbert J. K., Treagust D. (eds). Multiple representations in chemical education. Dordrecht, The Netherlands: Springer. pp 1–8 Gilmore, A. (2002). Large-scale assessment and teachers’ assessment capacity: Learning opportunities for teachers in the national education monitoring project in New Zealand. Assessment in Education: Principles, Policy & Practice, 9(3), 343–361. doi:10.1080/0969594022000027663 Gioka, O. (2009). Teacher or Examiner? The Tensions between Formative and Summative Assessment in the Case of Science Coursework. Research in Science Education, 39(4), 411–428. doi:10.1007/s11165-008-9086-9 Glaser, B. G., & Strauss, A. L. (1967). Discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine. Glesne, C. (2010). Becoming qualitative researchers: An introduction (4th ed.). Boston, MA: Pearson. Gotwals, A. W., Cisterna, D., Kintz, T., & Lane, J. (2014). Formative assessment professional development at the local level: Necessary and sufficient conditions for critical colleagueship. Manuscript in preparation. Grossman, P. L., Smagorinsky, P., & Valencia, S. (1999). Appropriating tools for teaching English : A theoretical framework for research on learning to teach. 
American Journal of Education, 108(1), 1–29. doi:0195-6744/2000/10801-0001 209 Grossman, P. L., & Woolworth, S. (2001). Toward a theory of teacher community. Teachers College Record, 103(6), 942–1012. Hammerness, K., Darling-Hammond, L., Bransford, J., Berliner, D., Cochran-Smith, M., McDonald, M., & Zeichner, K. (2005). How teachers learn and develop. In Preparing teachers for a changing world: What teachers should learn and be able to do (L. Darling., pp. 358–389). San Francisco, CA: Jossey-Bass. Harlen, W. (2005). Teachers’ summative practices and assessment for learning— tensions and synergies. The Curriculum Journal, 16(2), 207–223. Harris, L. R., & Brown, G. T. L. (2009). The complexity of teachers’ conceptions of assessment: tensions between the needs of schools and students. Assessment in Education: Principles, Policy & Practice, 16(3), 365–381. doi:10.1080/09695940903319745 Hatch, T., & Grossman, P. L. (2009). Learning to look beyond the boundaries of representation: Using technology to examine teaching (overview for a digital exhibition: learning from the practice of teaching). Journal of Teacher Education, 60(1), 70–85. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. doi:10.3102/003465430298487 Heritage, M. (2007). Formative assessment: What do teachers need to know and do? Phi Delta Kappan, 89(2), 140–145. Heritage, M. (2010). Formative assessment and next-generation assessment systems : are we losing an opportunity? Washington DC: Council of Chief Schools Officers. Heritage, M. (2013). Gathering Evidence of Student Understanding. In J. H. Mcmillan (Ed.), SAGE handbook of research on classroom assessment (pp. 1–37). Thousand Oaks: SAGE Publications, Inc. Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative assessment? Educational Measurement: Issues and Practice, 28(3), 24–31. doi:10.1111/j.1745-3992.2009.00151.x Hewson, P. (2007). Teacher professional development in science . In S.K. Abell and N. G. Lederman (Eds.), Handbook of research on science education. (pp 1179-1203). Mahwah, NJ:Lawrence Erlbaum Associates. Hiebert, J., Morris, a. K., Berk, D., & Jansen, a. (2007). Preparing teachers to learn from teaching. Journal of Teacher Education, 58(1), 47–61. doi:10.1177/0022487106295726 Huinker, D., & Freckmann, J. (2009). Linking principles of formative assessment to classroom practice. Wisconsin Teacher of Mathematics, 60(2), 6–11. 210 Johnson, J. M. (2002). In-depth interviewing. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook of interview research: Context and methods. Thousand Oaks, CA: Sage Publications. Jones, A., & Moreland, J. (2005). The importance of pedagogical content knowledge in assessment form learning practices: A case study of a whole school approach. The Curriculum Journal, 193–206. Kazemi, E., & Hubbard, A. (2008). New directions for the design and study of professional development: Attending to the coevolution of teachers’ participation across contexts. Journal of Teacher Education, 59(5), 428–441. doi:10.1177/0022487108324330 Lamprianou, I., & Christie, T. (2009). Why school based assessment is not a universal feature of high stakes assessment systems? Educational Assessment, Evaluation and Accountability, 21(4), 329–345. doi:10.1007/s11092-009-9083-1 Lave, J. & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press. Lemke, J. L. (2001). 
Articulating communities: Sociocultural perspectives on science education. Journal of Research in Science Teaching, 38(3), 296–316. Looney, J. W. (2011). Integrating Formative and Summative Assessment: Progress Toward a Seamless System? (OECD Education Working Paper No. 58). Retrieved from http://www.oecd-ilibrary.org/education/integrating-formative-and-summativeassessment_5kghx3kbl734-en Maclellan, E. (2004). Initial knowledge states about assessment: novice teachers’ conceptualisations. Teaching and Teacher Education, 20(5), 523–535. doi:10.1016/j.tate.2004.04.008 Magnusson, S., Krajcik, J. S., & Borko, H. (1999). Nature, sources, and development of pedagogical content knowledge for science teaching. In J. Gess-Newsome & N. G. Lederman (Eds.), Examining pedagogical content knowledge (pp. 95–132). Washington, DC: National Academy Press. Mahaffy, P. (2004). The future shape of chemistry education. Chemistry Education Research and Practice, 5(3), 229. doi:10.1039/b4rp90026j Marshall, B., & Drummond, M. J. (2006). How teachers engage with assessment for learning: Lessons from the classroom. Research Papers in Education, 21, 133–149. Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD. 211 Matese, G. (2005, April). Cognitive Factors Affecting Teachers ’ Formative Assessment Practices. Paper presented at the annual meeting of the American Eductional Research Association. Montreal, Canada. McClam, S., & Sevier, B. (2010). Troubles with grades, grading, and change: Learning from adventures in alternative assessment practices in teacher education. Teaching and Teacher Education, 26(7), 1460–1470. doi:10.1016/j.tate.2010.06.002 Mcmillan, J. H. (2003). Understanding and improving teachers’ classroom assessment decision making : Implications for theory and practice. Educational Measurement: Issues and Practice, 22(4), 34–43. McMillan, J. H. (2013). Why we need research on classroom assessment. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 2–17). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781452218649.n1 Meade, P. & McMeniman, M. (1992) Stimulated recall—An effective methodology for examining successful teaching in science. The Australian Educational Researcher, 19(3), 1-18. Measured Progress. (2010). The formative assessment process: A guide for classroom and student success. Dover, NH: Measured Progress. Michigan Department of Education. (2011). FAME Initiative Expectations. Retrieved from http://www.michigan.gov/documents/mde/FAME+Team+ExpectationsAugustFINAL_384611_7.pdf Michigan Department of Education. (2014). Michigan School Data. Retrieved from http://www.mischooldata.org Milne, C., Scantlebury, K., & Otieno, T. (2006). Using sociocultural theory to understand the relationship between teacher change and a science-based professional education program. Cultural Studies of Science Education, 1(2), 325–352. doi:10.1007/s11422-006-9013-1 Minstrell, J., Anderson, R., & Li, M. (2011, May). Building on Learner Thinking: A Framework for Assessment in Instruction. Commissioned paper for the Committee on Highly Successful STEM Schools or Programs for K-12 STEM Education: Workshop. Moscovici, H., & Varrella, G. F. (2008). International professional development as a form of globalisation. In B. Atweh, A. Calabrese Barton, M. C. Borba, N. Gough, C. Keitel, C. Vistro-Yu, & R. 
Vithal (Eds.), Internationalisation and globalisation in mathematics and science education. Dordrecht, The Netherlands: Springer. Mwanza, D. (2002). Conceptualising work activity for CAL systems design. Journal of Computer Assisted Learning, 18(1), 84–92. doi:10.1046/j.0266-4909.2001.00214.x. 212 Myhill, D., & Brackley, M. (2004). Making connections: Teachers’ use of children’s prior knowledge in whole class discussion. British Journal of Educational Studies, 52, 263–275. Myhill, D., & Warren, P. (2005). Scaffolds or straitjackets? Critical moments in classroom discourse. Educational Review, 57(1), 55–69. doi:10.1080/0013191042000274187 Nardi, B. A. (1993). Studying context: A comparison of activity theory, situated action models, and distributed cognition. In Proceedings East-West Conference on Human-Computer Interaction. (pp. 352–359). St. Petersburg, Russia. Nelson, T., Slavit, D., Perkins, M., & Hathorn, T. (2008). A culture of collaborative inquiry: Learning to develop and support professional learning communities. Teachers College Record, 110(6), 1269–1303. Retrieved from http://www.tcrecord.org/Content.asp?ContentID=14745 Ofsted. (2008). Assessment for learning: the impact of national strategy support (pp. 4–16). Retrieved from http://dera.ioe.ac.uk/9309/ Osborne, J. (2012). The role of argument: learning how to learn in school science. In B. J. Fraser, K. Tobin, & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 833–949). Dordrecht, The Netherlands: Springer. Otero, V. K. (2006). Moving beyond the “get it or don’t” conception of formative assessment. Journal of tTeacher eEducation, 57(3), 247–255. doi:10.1177/0022487105285963 Otero, V. K., & Nathan, M. J. (2008). Preservice elementary teachers’ views of their students’ prior knowledge of science. Journal of Research in Science Teaching, 45(4), 497–523. doi:10.1002/tea.20229 Patchen, T., & Smithenry, D. W. (2014). Diversifying instruction and shifting authority: A cultural historical activity theory (CHAT) analysis of classroom participant structures. Journal of Research in Science Teaching, 51(5), 606–634. doi:10.1002/tea.21140 Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications. Penuel, W. R. (2014). Emerging forms of formative intervention research in education. Mind, Culture, and Activity.21(2), 97-117. Perrenoud, P. (1998). From formative evaluation to a controlled regulation of learning: Towards a wider conceptual field. Assessment in Education: Principles Policy and Practice, 5(1), 85–102. Peterson, P. L., & Swing, S. R. (1982). Beyond time on task : Students’ reports of their thought processes during classroom instruction. The Elementary School Journal, 82(5), 481–491. 213 Popham, W. J. (2008). Transformative assessment. Alexandria, VA: ASCD. Popham, W. J. (2009). Assessment literacy for teachers : Faddish or fundamental? Theory Into Practice, 48(1), 4–11. doi:10.1080/00405840802577536 Pryor, J., & Crossouard, B. (2008). A socio-cultural theorisation of formative assessment. Oxford Review of Education, 34(1), 1–20. doi:10.1080/03054980701476386 Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4–13. Randel, B., & Clark, T. (2013). Measuring classroom assessment practices. In J. H. Mcmillan (Ed.), SAGE Handbook of research on classroom assessment (pp. 145–164). Thousand Oaks, CA: SAGE Publications, Inc. Remesal, A. (2011). Primary and secondary teachers’ conceptions of assessment: A qualitative study. 
Teaching and Teacher Education, 27(2), 472–482. doi:10.1016/j.tate.2010.09.017 Rogoff, B. (1995). Observing sociocultural activity in three places: Participatory appropiation, guided participation, and apprendiceship. In J. V. Wertsch, P. del Rio, & A. Alvarez. (Eds.), Sociocultural studies of mind (pp. 139–164). Cambridge, UK: Cambridge University Press. Roth, W. M., Lee, Y., & Hsu, P. (2009). A tool for changing the world: possibilities of cultural‐historical activity theory to reinvigorate science education. Studies in Science Education, 45(2), 131–167. doi:10.1080/03057260903142269 Roth, W. M., & Lee, Y.-J. (2007). “Vygotsky’s neglected legacy”: Cultural-historical activity theory. Review of Educational Research, 77(2), 186–232. doi:10.3102/0034654306298273 Roth, W. M., Radford, L., & LaCroix, L. (2012) Working with cultural-historical activity theory. 13(2). Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 13(2), Art. 23. http://nbn-resolving.de/urn:nbn:de:0114-fqs1202232. Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues in assessing students’ learning. Studies in Educational Evaluation, 37(1), 15–24. doi:10.1016/j.stueduc.2011.04.003 Ruiz-Primo, M. A., & Furtak, E. M. (2006). Informal formative assessment and scientific inquiry: Exploring teachers’ practices and student learning. Educational Assessment, 11(3 & 4), 237–263. doi:10.1207/s15326977ea1103&4_4 Ruiz-Primo, M. A., & Furtak, E. M. (2007). Exploring teachers’ informal formative assessment practices and students' understanding in the context of scientific inquiry. Journal of Research in Science Teaching, 44(1), 57–84. doi:10.1002/tea.20163 Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. doi:10.1007/BF00117714 214 Sadler, D. R. (1998). Formative assessment : Revisiting the territory. Assessment in Education : Principles , Policy & Practice, 5(1), 77–84. Sannino, A., Daniels, H., & Gutierrez, K. D. (2009). Activity theory between historical engagement and future-making practice. In In A. Sannino, H. Daniels, & K. D. Gutierrez (Eds.), Learning and expanding with activity theory (pp. 1–18). New York, NY: Cambridge University Press. Sato, M. (2003). Working with teachers in assessment-related professional development. In J. M. Atkin & J. E. Coffey (Eds.), Everyday assessment in the science Classroom. Arlington, VA: NSTA press. Sato, M., Coffey, J., & Moorthy, S. (2005). Two teachers making assessment for learning their own. Curriculum Journal, 16(2), 177–191. Schneider, M. C., & Randel, B. (2009). Research on characteristics of effective professional development programs for enhancing educators’ skills in formative assessment. In H. L. Andrade & G. Cizek (Eds.), Handbook of formative assessment (pp. 251–276). New York, NY: Routledge. Schram, T. H. (2006). Conceptualizing and proposing qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson. Shavelson, R. J., Young, D. B., Ayala, C. C., Brandon, P. R., Furtak, E. M., Ruiz-Primo, M. A., … Yin, Y. (2008). On the impact of curriculum-embedded formative assessment on learning: A collaboration between curriculum and assessment developers. Applied Measurement in Education, 21(4), 295–314. doi:10.1080/08957340802347647 Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. Shepard, L. A. (2005, October). Formative assessment: Caveat emptor. 
Shepard, L. A. (2009). Commentary: Evaluating the validity of formative and interim assessment. Educational Measurement: Issues and Practice, 28(3), 32–37.
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
So, W. W. M., & Lee, T. T. H. (2011). Influence of teachers’ perceptions of teaching and learning on the implementation of assessment for learning in inquiry study. Assessment in Education: Principles, Policy & Practice, 18(4), 417–432.
Song, E., & Koh, K. (2010, August). Assessment for learning: Understanding teachers’ beliefs and practices. Paper presented at the 36th Annual Conference of the International Association for Educational Assessment (IAEA), Bangkok, Thailand. Retrieved from http://www.iaea.info/documents/paper_2fb234cf.pdf
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage Publications.
Stake, R. E. (2006). Multiple case study analysis. New York, NY: The Guilford Press.
Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, “translations” and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19, 387–420.
Stiggins, R. J. (1999). Evaluating classroom assessment training in teacher education. Educational Measurement: Issues and Practice, 18(1), 23–27.
Stiggins, R. J. (2006). Balanced assessment systems: Redefining excellence in assessment (pp. 1–9). Princeton, NJ: Educational Testing Service.
Stiggins, R. J. (2007). Conquering the formative assessment frontier. In J. H. McMillan (Ed.), Formative classroom assessment: Theory into practice (pp. 8–28). New York, NY: Teachers College Press.
Stiggins, R. J. (2009). Essential formative assessment competencies for teachers and school leaders. In H. L. Andrade & G. Cizek (Eds.), Handbook of formative assessment (pp. 233–250). New York, NY: Routledge.
Stoll, L., Bolam, R., McMahon, A., Wallace, M., & Thomas, S. (2006). Professional learning communities: A review of the literature. Journal of Educational Change, 7(4), 221–258. doi:10.1007/s10833-006-0001-8
Strauss, A. (1987). Qualitative analysis for social scientists. New York, NY: Cambridge University Press.
Sweeney, A. E., Bula, O. A., & Cornett, J. W. (2001). The role of personal practice theories in the professional development of a beginning high school chemistry teacher. Journal of Research in Science Teaching, 38(4), 408–441. doi:10.1002/tea.1012
Thomas, G. P., & McRobbie, C. J. (2001). Using a metaphor for learning to improve students’ metacognition in the chemistry classroom. Journal of Research in Science Teaching, 38(2), 222–259. doi:10.1002/1098-2736(200102)
Thomas, G. P., & McRobbie, C. J. (2012). Eliciting metacognitive experiences and reflection in a Year 11 chemistry classroom: An activity theory perspective. Journal of Science Education and Technology, 22(3), 300–313. doi:10.1007/s10956-012-9394-8
Thomas, G., Wineburg, S., Grossman, P. L., Myhre, O., & Woolworth, S. (1998). In the company of colleagues: An interim report on the development of a community of teacher learners. Teaching and Teacher Education, 14(1), 21–32.
Tierney, W. G., & Dilley, P. (2002). Interviewing in education. In J. Holstein & J. Gubrium (Eds.), Handbook of interviewing (pp. 453–471). Thousand Oaks, CA: Sage Publications.
Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20–27. doi:10.1080/00405840802577569
Torrance, H., & Pryor, J. (1998). Investigating formative assessment: Teaching, learning and assessment in the classroom. Buckingham, UK: Open University Press.
Torrance, H., & Pryor, J. (2001). Developing formative assessment in the classroom: Using action research to explore and modify theory. British Educational Research Journal, 27(5), 615–631. doi:10.1080/01411920120095780
Van Driel, J. H., Meirink, J. A., van Veen, K., & Zwart, R. C. (2012). Current trends and missing links in studies on teacher professional development in science education: A review of design features and quality of research. Studies in Science Education, 48(2), 129–160. doi:10.1080/03057267.2012.738020
Van Es, E., & Sherin, M. G. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher Education, 10(4), 571–596.
Van Zee, E. H., & Minstrell, J. (1997). Reflective discourse: Developing shared understandings in a physics classroom. International Journal of Science Education, 19(2), 209–228. doi:10.1080/0950069970190206
Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional learning communities on teaching practice and student learning. Teaching and Teacher Education, 24(1), 80–91. doi:10.1016/j.tate.2007.01.004
Vygotsky, L. S. (1978). Interaction between learning and development. In M. Cole, V. John-Steiner, S. Scribner, & E. Souberman (Eds.), Mind in society (pp. 79–91). Cambridge, MA: Harvard University Press.
Webb, M., & Jones, J. (2009). Exploring tensions in developing assessment for learning. Assessment in Education: Principles, Policy & Practice, 16(2), 165–184. doi:10.1080/09695940903075925
Wei, R. C., Darling-Hammond, L., & Adamson, F. (2010). Professional development in the United States: Trends and challenges (p. 129). Dallas, TX: National Staff Development Council.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
Wertsch, J. V. (1985). Vygotsky and the social formation of mind. Cambridge, MA: Harvard University Press.
Wertsch, J. V. (1991). Voices of the mind: A sociocultural approach to mediated action. Cambridge, MA: Harvard University Press.
Wiliam, D. (2013). Feedback and instructional correctives. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 196–215). Thousand Oaks, CA: SAGE Publications.
Wiliam, D., & Leahy, S. (2007). A theoretical foundation for formative assessment. In J. H. McMillan (Ed.), Formative classroom assessment: Theory into practice (pp. 29–42). New York, NY: Teachers College Press.
Wiliam, D., & Thompson, M. (2007). Integrating assessment with instruction: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 53–82). Mahwah, NJ: Lawrence Erlbaum Associates.
Wilson, S. M., & Berne, J. (1999). Teacher learning and the acquisition of professional knowledge: An examination of research on contemporary professional development. Review of Research in Education, 24(1), 173–209. doi:10.3102/0091732X024001173
Windschitl, M., & Thompson, J. (2011). Ambitious pedagogy by novice teachers: Who benefits from tool-supported collaborative inquiry into practice and why? Teachers College Record, 113(7), 1311–1360.
Wylie, E. C., & Lyon, C. J. (2009). Commentary: What schools and districts need to know to support teachers’ use of formative assessment. Teachers College Record. Retrieved from http://www.tcrecord.org (ID Number: 15734)
Wylie, E. C., Lyon, C. J., & Goe, L. (2009). Teacher professional development focused on formative assessment: Changing teachers, changing schools. Princeton, NJ: Educational Testing Service.
Yamagata-Lynch, L. C. (2010). Activity systems analysis methods. Boston, MA: Springer. doi:10.1007/978-1-4419-6321-5
Yin, R. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage Publications.
Zeek, C., Foote, M., & Walker, C. (2001). Teacher stories and transactional inquiry: Hearing the voices of mentor teachers. Journal of Teacher Education, 52(5), 377–385. doi:10.1177/0022487101052005004