A STUDY OF CURRICULUM AND WRITING INSTRUCTION IN K-3 CLASSROOMS By JoAnne M. West A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Curriculum, Instruction, and Teacher Education – Doctor of Philosophy 2023 ABSTRACT This mixed-methods dissertation study analyzed curriculum and teachers’ enacted instruction to determine whether there was overlap between the evidence-based practices in teachers’ enacted practice and those contained in the curricular materials. Participants in this study were kindergarten through third-grade teachers (n=25) from different school districts in Michigan (n=4), who participated in a larger study on literacy coaching. Data included video of teachers’ enacted instruction, collected twice during the 2020-2021 school year, and teacher’s manuals from the curriculum used in all four districts. A content analysis was conducted on units of writing curricula (n=18) from the four participating districts as well as 2300 minutes of teachers’ enacted instruction. Using descriptive statistics and qualitative methods for comparison, these analyses showed that teacher practice varied both within and across districts. Most curricula included most of the evidence-based instructional practices, while two of the four curricula had consistently higher quality markers than the other two. Additional analysis found no clear correlation between the presence and quality of the curriculum and teachers’ enacted practice in most cases. The district that had the most alignment between curriculum presence and quality scores and teachers’ enacted instruction had some unique circumstances that suggest further research is warranted to determine why teachers take up, adapt, or drop instructional practices from their curriculum resources when teaching writing. Copyright by JOANNE M. WEST 2023 This dissertation study is dedicated to my parents, my husband Matt, and my four children. Thank you for your support and constant inspiration. ACKNOWLEDGEMENTS To begin, I would like to thank the Michigan State University faculty and staff who have mentored, supported, and taught me through all three of my degree programs and for providing hard cost funding for this dissertation. There are too many outstanding faculty and staff to name all of them here, but two in particular have served as incredible mentors. Dr. Jan Alleman first taught me the joys of design-based research, teaching powerful social studies, partnering with local teachers and schools, and the power of high-quality teacher education. Jan’s ability to see the potential in each person and her palpable passion for her work have served as an anchor for my own work. Thank you to Dr. Kate Roberts, who started as my instructor when I was an undergraduate at MSU. Working with you has brought more perspective, learning, and fun than I can appropriately pay homage to. I would like to thank my committee members: Dr. Gary Troia, Dr. Anne-Lise Halvorsen, and Dr. Patricia Edwards. Thank you for sharing your time and expertise with me. Each of you does incredible work and I am honored that you served as members of my committee. Dr. Edwards, outside of this committee, throughout my reading specialist and doctoral coursework, your expertise and wisdom have been a source of encouragement and insight. A special thank you to Dr. Tanya Wright, my doctoral advisor and committee chair.
I appreciate all you have done to support me throughout this program. I regularly find small ways in teaching and researching that working with you has made me better. To this day I still believe you taught the most relevant literacy methods courses I have ever taken. To my fellow literacy doc students and my cohort members, Lisa, Blythe, Lori, and Brent, thank you for your support throughout this journey. This was not for the faint of heart, but you always helped me to stay centered and focus on the endless possibilities and impact of the work. To all my past and current P-12 and university colleagues, thank you for your belief in me and your support on my journey. Working with each of you has made me better. Most importantly, I need to thank my family for supporting me on this long and sometimes arduous journey. To my chosen family, our friends and our neighbors especially, your cheerleading and tangible support have helped me push through in more moments than I could count. Thank you to my parents, both lifelong educators themselves. You instilled in me the belief that I could do hard things and that teaching was an intelligent and worthwhile profession. Your dedication to your students, continued formal and informal education, and support of my dreams inspired me and led me down this path. To my children, Eleanor, James, Gwyneth, and Katherine, each of you is the reason why I do what I do each day. Every child deserves an amazing teacher. I feel so honored to have been your first one and to get to work with teachers who reach children like you. To my husband Matt, your steadfast and unwavering commitment to helping me achieve my dreams is something I will always be grateful for. It has been a long journey, and I am better for having you at my side. Finally, I would like to thank the teachers and coaches in Michigan who participated in this study. Without their willingness to allow us to observe their classroom instruction and collect data on their professional learning and coaching experiences, this study would not have been possible. During a difficult school year, they sacrificed time and put their energy into helping our research team better understand literacy coaching in the state. Thank you! I would also like to acknowledge the funders who made data collection for this study possible. This material is based upon work supported by the W.K. Kellogg Foundation under Grant No. P01319.83 and the Institute of Education Sciences under Grant No. R305H190004. TABLE OF CONTENTS CHAPTER 1: INTRODUCTION ................................................................................................... 1 CHAPTER 2: LITERATURE REVIEW ...................................................................................... 11 CHAPTER 3: METHODS ............................................................................................................ 44 CHAPTER 4: RESULTS ............................................................................................................ 105 CHAPTER 5: DISCUSSION ...................................................................................................... 194 REFERENCES ........................................................................................................................... 219 APPENDIX A K-3 ESSENTIALS ............................................................................................. 232 APPENDIX B CODING PROTOCOL ......................................................................................
233 APPENDIX C PRACTICE GUIDE ALIGNMENT................................................................... 240 APPENDIX D TABLE D.1 ........................................................................................................ 242 APPENDIX E TABLE E.1 ......................................................................................................... 243 vii CHAPTER 1: INTRODUCTION The past two decades of education policy have sought to make writing an integrated and central component of instruction. Work from twenty years ago, by the National Commission on Writing (2003, p.3), boldly opens with “Writing today is not a frill for the few but an essential skill for the many.” Researchers have also noted that writing demands are necessary for students to be successful in their lives - both personal and professional (Graham, 2019). Under a decade later, the Common Core State Standards, which promote strong early writing instruction, were unveiled and adopted by the majority of the United States. Currently, thirty-eight states have adopted a revised or full version of these English Language Arts Standards (EdGate, 2021). These standards called for children to compose across genres, use appropriate conventions, legible handwriting, and correct spelling for grade-appropriate spelling patterns (National Governor's Association Council for Chief State School Officers, 2010). Researchers have proposed that teachers may need to shift their instruction to account for the focus of the new standards (Graham & Harris, 2015). Similar to these standards, research has supported recommendations for daily time spent on writing, including component skills and time spent composing using the writing process, beginning in early childhood classrooms and extending into adolescence (Graham & Perin, 2007; Graham et. al, 2012a; Rowe, 2018). While these recommendations for robust writing instruction from research and standards are clear, what is less clear is if early elementary teachers have acted on these recommendations and are providing writing instruction aligned to these widely adopted standards and recommendations. In fact, despite increased clarity around writing instructional recommendations for teachers, writing scores on the National Assessment of Educational Progress (NAEP) standardized assessments have remained at similarly low rates for the last two 1 decades across the grade levels assessed, leading researchers to wonder what, if any, impact these new standards have had. Roughly a quarter of students scored proficient on NAEP writing measures (NCES, 2019). In the state of Michigan, fewer than half of students scored proficient on the third grade Michigan Student Test of Educational Progress (MSTEP) in English Language Arts (ELA) (Higgins, 2018) before COVID-19. After the COVID-19 pandemic, these scores have dropped across all demographic groups and the state’s current third graders have proficiency levels that are almost four percentage points lower than pre-COVID-19 scores (Lohman et al., 2023). While students’ ELA scores are likely impacted by a constellation of factors, one empirical factor on students’ writing is teachers’ instruction (e.g., Troia et al., 2011; Wang & Troia, 2023). Some believe that adoption of more robust and standards-aligned curriculum will lead to an increase in teachers’ evidence-based practices. 
To that end, some districts in the state of Michigan have adopted curricula that are aligned to the new, more robust standards and evidence-based practices (Wright et al., 2022). This attempt to improve student achievement via improving teachers’ instruction has likely been grounded in the idea that teachers’ knowledge, skills, and/or practices may improve with better curricular resources, as some empirical studies have demonstrated. For instance, Kaufman and colleagues (2018) found that curriculum with stronger standards alignment led to increases in teacher knowledge of the standards. Puranik and Lonigan (2011) found significant and moderate effects on instructional practices as a result of a literacy-focused curriculum intervention. Since Michigan is a local control state, what districts adopt and what teachers use to teach literacy varies. In a recent policy brief based on a larger study in the state of Michigan around the early literacy initiatives (Wright et al., 2022), teachers indicated using multiple curricula to teach ELA subjects, including writing. Some of the curricula they noted using were quite outdated, a problem similarly found in national studies (Tepe & Mooney, 2018). Robust, standards-aligned curricula are arguably one way to improve teacher instruction and, in turn, student outcomes, though researchers disagree on how to most effectively impact this dynamic given the mediating role teachers play in curriculum implementation (Pak et al., 2020; Polikoff & Dean, 2019). In fact, teachers’ enacted instruction varies in its utilization of evidence-based practices in writing (e.g., Coker et al., 2016; Guo et al., 2023; Puranik et al., 2014; Wang & Troia, 2023), and commercial and open access curricular materials are not all rated for alignment and text quality or even up to date with current standards in all districts (Kaufman et al., 2018; Wright et al., 2022). How a teacher’s curriculum impacts their instruction is also quite complex (Remillard, 2005; Stein et al., 2007; Troia et al., 2011), and to consider whether this is in fact a lever that will lead to improved student ELA achievement, the implementation of curriculum by teachers across the state of Michigan warrants further study. Given that early writing instruction is of critical importance and curriculum materials can influence instruction in varied ways, in this study, I observe teachers’ enacted writing instruction and examine the curriculum materials the districts they work in have adopted. In this first chapter, I give an overview of the current study, including important context for the data collection work. In the second chapter, I provide a review of related research on early writing, classroom writing instruction, the role of curricula, and how researchers have documented that curricula have influenced teacher practice in studies to date. The third chapter fully explains the analytic methods used in this study. In the fourth chapter, I present an overview of the results of the data analysis as it relates to each question, along with examples from both the curriculum materials and teachers’ enacted instruction to help explain the scores. In the final chapter, I discuss how this study builds on and extends the work of previous studies. Overview of the Present Study Strong early writing instruction is considered an important pathway to improved literacy outcomes for elementary students in the United States (e.g., Graham & Perin, 2007; Rowe, 2018).
Practice guides, based in quantitative research, and national and state ELA standards provide guidance for what skills and content early elementary writing instruction should cover. These recommendations include that teachers should: provide students daily time to write; explicitly teach students to use the writing process iteratively and for authentic purposes and audiences; and teach students the requisite component skills of writing they need to be successful in transcription, including handwriting, spelling, and conventions (e.g., Graham et al., 2012b; MAISA-GELN Early Literacy Task Force, 2016). Some researchers, reformers, and policy makers posit that curriculum materials are one way to inform and possibly shift teachers’ instructional practices (Chingos & Whitehurst, 2012; Piasta, 2016), although there are numerous implementation challenges (Polikoff, 2018) and how these materials shape instruction varies from teacher to teacher (e.g., McCarthey & Woodard, 2018; Valencia et al., 2006; Waldron, 2014). Curriculum materials also change often to reflect new trends in research and practice communities. Current curricula for English Language Arts are no exception. New curricular materials are intentionally incorporating content knowledge, and academic researchers and research organizations are studying their impact (e.g., Cabell & Hwang, 2020; Nichols-Barrer & Haimson, 2013). Implementation of these knowledge-building curricula, specifically for early elementary writing, has yet to be thoroughly explored. Given the complexity of the relationship between curriculum and enacted instruction (Remillard, 2005; Remillard & Heck, 2014; Stein et al., 2007), this descriptive study explores the relationship between early elementary writing instruction and early elementary writing curricula, in the hopes of uncovering patterns and themes, confirming earlier research findings, and providing pathways for future research. To do so, I examined the writing curricular materials K-3 teachers in four separate districts used; writing instruction provided by K-3 teachers in the same districts; and what, if any, relationship existed between the two. To that end, the following questions were addressed in this study:
1. How can recommended writing instruction be described across four writing curricula?
a. What effective writing practices were contained in these curricula?
b. How did recommended writing instructional practices compare across the curricula?
2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic?
a. What effective writing practices were K-3 teachers using?
b. How did enacted writing instructional practices compare across districts?
3. How did teachers’ enacted writing instruction compare to recommended instruction in district-provided curriculum materials?
To investigate district curriculum materials for writing and teachers’ enacted writing instruction, and to look for interaction between the two, I conducted a mixed methods study.
I used a content analysis approach similar to previous empirical work in early literacy studies (e.g., Gerde et al., 2019; Wright, 2011) to identify evidence-based instructional practices in the curriculum as well as in teachers’ enacted practices (see Table 1 below for practices), along with other analytic approaches such as compiling analytic matrices and writing analytic memos (Miles et al., 2020).
Table 1. Instructional Practices in Early Writing (Instructional Practice: Definition Used for this Study)
Estimated Spelling: Children use the sounds in words to spell them.
Interactive Writing: Children and their teacher work together to compose a shared text.
Daily Time for Writing: Children have time to write each day.
Writing Process & Strategy Instruction: Children have strategy instruction on how to complete a part of the writing process.
Use of Mentor Text: Children have text models of writing.
Explicit Instruction on Component Skills: Children have instruction on components of writing (e.g., handwriting, sentence construction).
The coding scheme used was created as part of a larger, statewide study, which is explained in the next section of this chapter. After coding all the curriculum and observation data, I calculated descriptive statistics to produce summary data for both teachers’ enacted instruction and the curriculum materials. I also used qualitative methods throughout the coding and analysis to determine if there were any patterns or nuance necessary to better understand the quantitative data. Next, I used the descriptive data from each of these two content analyses and compared them to one another to identify patterns among the materials and enacted practices. Finally, I employed further analysis using analytic memoing and the creation of analytic matrices (Miles et al., 2020). These methods are explained more fully in the third chapter of this study. Findings from this analysis include that the curricula varied in their uptake of evidence-based instructional practices, that teachers’ enacted instruction varied within and across districts, and that there was some evidence of influence of curriculum on teachers' enacted instruction but that the evidence of influence was not consistent across all observations or practices. This supports and extends the findings of other research in the field to date. Larger Context for the Current Study The observational data and the coding protocol for this dissertation study were drawn from a larger study, Evaluating Michigan's Early Literacy Law: Impacts, Implementation and Improving State Capacity (Institute of Education Sciences (IES) Award Number: R305H190004). This study was conducted as part of a five-year grant from IES to examine multiple implementation components of the state’s third grade reading law, called “Read by Grade Three” (RBG3). This study, led by PIs Katharine Strunk and Tanya Wright, included a large-scale survey focused on the implementation of the “Read by Grade Three” law in Michigan. Further context for this study is that the data for this dissertation study were collected during a school year heavily impacted by the COVID-19 pandemic, fall of 2020 through spring of 2021, the first year of data collection for the larger project. As part of the larger study, there was a multi-year survey and interview study conducted with K-8 educators including teachers, building principals, superintendents, ISD early literacy coaches, and ISD superintendents.
The study also included interviewing stakeholders (educators, policymakers, and state-level administrators) to gain insight into their perceptions of the law, the early literacy initiative, and the implementation. Another arm of the larger study focused on the implementation of literacy coaching in Michigan. Additional funding from the Kellogg Foundation (for PIs Katharine Strunk and Tanya Wright) was secured to support the coaching study. During the 2020-2021 year, the research team followed the work of four ISD Early Literacy Coaches and 25 teachers in four different districts in Michigan. Our team, led by Dr. Wright and comprising two research assistants and a project manager, sought to understand how ISD early literacy coaches were doing their coaching work as well as to study any differences between coached and non-coached teachers. Observational data of teachers’ instruction and the coding protocol used for the content analysis portion of this dissertation study come from this larger study. Contribution of the Study The work of other researchers cited in this paper provides a substantive grounding for this research, as outlined in the second chapter. However, this study extends the scope of the available empirical literature in four distinct ways. This study observes enacted writing instruction across all of grades K-3 within a single study, observes classroom instruction during COVID-19, examines four different K-3 writing curricula, and compares enacted instruction to the instructional emphasis in the curricula. Below and in the discussion, I explain each of these extensions more thoroughly. To begin, this study examined observational data from a range of classroom teachers using different instructional materials and approaches, including all grades K-3. This builds on the existing literature base of survey and observational data by providing a view of the writing instruction taking place in early elementary classrooms across multiple, early elementary grade levels. The analysis of writing practices in this study speaks directly to the observational studies conducted in kindergarten and first grade (Coker et al., 2016; Guo et al., 2023; Puranik et al., 2014), which reported on alignment of teacher instruction to the evidence-based instructional practices (among other things). Second, the observational data used for this study were collected during the height of the COVID-19 pandemic. That means that this study saw teachers’ classroom instruction in varied modalities (e.g., virtual, hybrid, in-person) during an unprecedented time in the American education system. While there has been much attention focused on the fact that children’s test scores have decreased since the start of the pandemic (Lohman et al., 2023), most of that conversation has focused on children’s testing outcomes on large-scale tests (e.g., MSTEP). What has not been examined or discussed on a larger scale is what teachers’ instruction looked like during that time and how it compares to pre-pandemic instruction. This study provides a unique lens into what writing instruction in classrooms looked like during the first year of the COVID-19 pandemic and adds to a small but growing literature base on the topic (e.g., Wright & Bruner, in press). Third, this study examined the writing opportunities present in elementary ELA curriculum materials.
When considering writing instructional materials for kindergarten through third grade, there has been one systematic study of kindergarten instructional materials (Gabas et al., 2022). There was also one study that systematically examined writing instruction in preschool curriculum materials (Gerde et al., 2019). When considering implementation of knowledge-building curricula, a handful of case studies have been conducted (e.g., Ikpeze, 2013), as has an examination of the reading components of one knowledge-building curriculum (see Cabell & Hwang, 2020); however, since knowledge-building ELA curricula are newer and mark an expansion of the content that basal series attempt to cover, they have not been studied extensively, and no studies to date have focused on the writing component of these curricula. Although these types of curricula are not yet in use in a large percentage of Michigan schools and classrooms (Wright et al., 2022), their use is on the rise as evidenced by the adoption of the knowledge-building curriculum Expeditionary Learning by the largest district in the state (Chambers, 2018). This study also examined two standalone writing curricula, one of which is one of the most used writing-specific curricula in the state (Wright et al., 2022). While neither type of writing curriculum is brand new, the systematic study of these instructional materials for early elementary grades is. Finally, this study expands upon theoretical and empirical work around the complex interaction between literacy curricular materials and teachers’ enacted instruction (e.g., McCarthey & Woodard, 2018; Yoon, 2013; Waldron, 2014). This study was conducted in the hopes that researchers, curriculum writers, teachers, and teacher educators can better understand the writing instructional practices different curriculum materials prioritize, the enacted instructional practices different teachers enact in K-3 classrooms, and the potential influence of the curricular materials on teachers’ enacted instruction. In the next four chapters, I explain the unique contribution of this study to the field by first grounding it in the related research as it pertains to the research questions. Next, I describe the methods used for data analysis. Then I share the results of that analysis. Finally, I discuss how those results relate to the growing literature base on elementary writing instruction. CHAPTER 2: LITERATURE REVIEW The following literature review is divided into five sections. First, I highlight the research on the development of children’s writing in early childhood and early elementary school. Next, I highlight the importance of early writing instruction and component skills for writing from a growing body of research. Specifically, I highlight the features of research-based writing instruction for children in the early elementary school grades and explain subsequent practice guides. Next, I share findings from previous surveys and observational studies on what writing instruction teachers are providing in early elementary school. Then, I provide an overview of curriculum materials, followed by the role of curricular materials in teaching and, in turn, in teachers’ enacted instruction in K-3 classrooms. To close, I explain what is left to be learned about K-3 teachers’ enacted writing instruction as well as the curricular materials used to teach writing and the relationship between the two.
The Importance of Early Writing Development Early writing, as defined by Gerde et al. (2012), is the writing young children use to communicate or compose. Composition or message creation, the main goal of writing, requires that children coordinate multiple component skills, including letter formation, oral language and vocabulary knowledge, letter-sound correspondence, and message generation. In alignment with Coker et al. (2018), in this study, composing is considered any generative writing task children engage in. Conversely, skills instruction is used to mean writing tasks that support message composition such as handwriting, capitalization and punctuation support, and spelling. So, how do these early writers develop these skills and processes? Earlier research held a readiness perspective, meaning children had to be ready to write before they could write meaningfully; however, more recent research indicates that young children are composing meaningful messages by drawing and using letter strings well before they can use conventional spelling to spell words (Whitehurst & Lonigan, 1998; Rowe & Neitzel, 2010). Initially, these markings (early drawings and letter strings) were not viewed with the same intentionality as they are now. Current perspectives now embrace these early literate acts as young children’s authentic engagements with writing (Rowe, 2008; Teale & Sulzby, 1986; Whitehurst & Lonigan, 1998). As a result of a shift from earlier readiness perspectives, there has been an increase in research on young children's engagements with literacy practices, including early writing. Some scholars pursued research emphasizing the process model of writing (Hayes & Flower, 1980) and children's writing development (Harste et al., 1984), while others examined children’s development of the component skills required for message composition at various stages in early childhood and elementary school and mapped those skills onto later reading and writing achievement (e.g., Graham & Hebert, 2011; Kent & Wanzek, 2016; Puranik & Lonigan, 2014). To summarize, children’s earlier writing compositions and skills predict their later reading and writing achievement. Children engage in writing and progress through various stages of development, and seminal scholars from cognitive and sociocultural disciplines have documented the ways they do so. This work has provided clear conceptual and theoretical grounding (e.g., Dyson, 2003; Hayes & Flower, 1980; Harste et al., 1984; Puranik & Lonigan, 2014; Rowe, 1994) for how writers develop and the cognitive processes that they engage in while writing. Writing theories were originally heavily grounded in psycholinguistics and cognitive psychology (Rowe, 1994), but they have evolved considerably since then. Currently, there is a shared understanding in the field that very young children who engage in the writing process often do so as a result of an authentic engagement with an adult or peer, and early childhood and elementary writing researchers now take a more sociocultural perspective (Dyson, 2003; Graham, 2018; Rowe, 2008). This sociocultural perspective on writing, that writers are operating within a broader context, means that writing development and achievement are the result of a variety of factors.
Since scholars have recognized that early writing, including invented spelling and composition, does not take place in isolation, there has been a renewed focus on children’s writing acts and how they stem from and call upon a variety of linguistic and cognitive resources, alongside social experiences in a larger sociocultural context (Dyson, 2003; Ouellette & Sénéchal, 2017; Graham, 2018; Rowe, 2018; Sulzby, 1986). In this larger sociocultural context, children explore and expand their composition repertoire, including their use and understanding of audience, genre, and purposes for writing through the use of their own and others' language (Dyson, 2003). They also develop better writing skills through direct instructional approaches in genre, the writing process, and text structures (Donovan, 2001; Graham et al., 2012a; Traga Philippakos et al., 2023; Shen & Troia, 2018). Further, children’s writing development benefits when they are engaged in meaningful and authentic literacy tasks (Duke & Kays, 1998; Duke, 2000; Purcell-Gates, Duke, & Martineau, 2007). Young writers are also constantly sampling, or borrowing and manipulating, communicative materials from their world to create new pieces and new understandings when they compose (Dyson, 2003). For instance, they might try on language or write storylines in their own compositions that they have heard in a repeated read aloud. Early writing skills, such as letter writing, sentence recall, and picture labeling, when measured, have been found to progress in some systematic ways, from scribbling to conventional written language (Puranik & Lonigan, 2011). In addition to general early writing skills such as letter writing, picture labeling, and others examined in the study by Puranik and Lonigan (2011), researchers have found that children as young as preschool age, as well as children in the primary grades, can write in genre-specific ways (Donovan, 2001; Duke, 2000). Moreover, elementary-aged children, some even as young as kindergarten, use genre-specific text structures and features with explicit instruction (Clark et al., 2013; Donovan, 2001; Duke & Kays, 1998; Englert et al., 1998; Hall et al., 2017; Traga Philippakos, 2023). For example, children might write an informational sentence about a butterfly and label the picture they drew with the parts of the butterfly as they would a diagram. To summarize, children begin writing meaningful messages well before they can put conventionally spelled words and grammatically sound sentences on paper. They are writing from the time they first mark a page (Rowe & Neitzel, 2010), and this writing includes many different types of skills. Since early writing development is multifaceted and complex, and since it can be tied to later reading and writing achievement, this study seeks to examine the types of opportunities children have to compose and work on component skills in their early elementary classrooms.
In fact, according to research and policy recommendations, writing instruction for young children should include opportunities to compose authentically across genres (Bingham et al., 2018; Graham & Hebert, 2011; National Governor's Association Council of Chief State School Officers, 2010; NELP, 2008; Rowe, 2018), instruction in letter-sound correspondence along with authentic opportunities to apply that knowledge (MAISA GELN Early Literacy Task Force, 2016; National Governor's Association Council of Chief State School Officers, 2010; NELP, 2008), and instruction in letter formation 14 (Berninger, 1999; Graham et al., 2002; MAISA GELN Early Literacy Task Force, 2016; National Governor's Association Council of Chief State School Officers, 2010). Further, there should be some genre-specific instruction since children write in genre- specific ways at a very young age. This instruction should include instruction using text models that children can model their writing after and borrow from (Graham, et al., 2012a). Instruction using text models is a practice that is shown to be valuable in some studies conducted on text model use in writing in elementary through adolescence (Graham & Perin, 2007; Traga Philippakos et al., 2023). In addition to providing text models, researchers have found that teachers should provide children with genre-specific, authentic opportunities to compose as well as explicit instruction around the form and purpose of the writing (Purcell-Gates et al., 2007; Shen & Troia, 2018; Traga Philippakos et. al, 2023b). This type of genre-specific instruction helps children build genre knowledge and support better writing (Olinghouse & Graham, 2009). In recognition of the fact that research shows genre-specific writing instruction is impactful, the Common Core State Standards also have recommendations for genre-specific writing instruction starting in kindergarten and continuing through 12th grade (National Governors Association and Council of Chief State School Officers, 2010). According to research and standards, this means that every elementary classroom teacher should have time during the year when they are teaching children to compose narrative, informational and persuasive texts. Looking more broadly at writing instruction, recent meta-analyses have made the case for the impact of a variety of types of writing instruction in the youngest classrooms. These studies have highlighted that elementary writing instruction makes a positive impact on writing and reading outcomes, sometimes even years later, and is useful for content area learning (Coker et al., 2018; Graham & Hebert, 2011; Graham et al., 2020; Hall et al., 2015; Kent & Wanzek, 15 2016). Additionally, beyond composing and genre-instruction, a large and growing body of component skills studies conducted in preschool and elementary schools have found correlations between children’s spelling, handwriting, word fluency, and their resulting composition outcomes (e.g., Graham & Harris, 2000; Kent & Wanzek, 2016; Kim et al., 2021; Puranik & Al Otaiba, 2012; Shanahan, 2006). These findings indicate that explicit instruction in component skills influences writing outcomes through elementary school and beyond (Graham, et al., 2012; Hall et al., 2015; Puranik & Lonigan, 2014). Therefore, the writing instruction teachers provide children can have long-lasting impacts on their writing achievement across a variety of measures. 
Policy recommendations and the recommendations of prominent writing researchers have made clear that early writing instruction, including genre-specific instruction, can lead to better writing achievement for children (e.g., Bazerman et al., 2017; Traga Philippakos et al., 2023b). Not only does strong early writing instruction lead to better writing outcomes, which matter because of the lifelong need for writing coupled with the low writing achievement of children across the US on standardized assessments like the NAEP, but strong writing instruction also leads to better long-term reading outcomes (Graham & Hebert, 2011). Additionally, early reading skills are indicative of the quality of children’s early writing (Puranik & Lonigan, 2014). As Traga Philippakos and colleagues (2023a) explain, reading and writing are equally valuable, and bridging instruction between the two can improve composition and reading comprehension outcomes. These empirical findings confirm that the relationship between early literacy skills is complex, not one-dimensional. They also support the notion that early and responsive writing instruction matters for short and long-term outcomes across a variety of measures (Graham & Hebert, 2011; NELP, 2008; Puranik & Lonigan, 2014; Rowe, 2018; Wang & Troia, 2023). Since teachers’ instruction can directly impact a student’s writing achievement, this study investigates what instructional practices teachers are using to teach writing in K-3 classrooms and the quality of those instructional practices. Practice Guides for Teachers In order to support teachers’ uptake of evidence-based instructional techniques, researchers put together comprehensive practice guides to help clarify research findings for practitioners. From an exhaustive analysis of the studies conducted on elementary writing instruction, Graham and colleagues created the IES Practice Guide: Teaching Elementary Students to be Effective Writers (2012a) for the Institute of Education Sciences (IES) repository. The repository, known as the What Works Clearinghouse (WWC), is a collection of evidence-based practice (EBP) guidance for schools, including a collection of practice guides created by researchers for the IES. To create the guide, Graham and colleagues systematically reviewed and analyzed the existing body of research around elementary writing instruction (Graham et al., 2012b). From this meta-analysis, they provided four recommendations for elementary writing instruction. The recommendations were 1) provide daily time for students to write; 2) teach students to use the writing process for a variety of purposes; 3) teach students to become fluent with handwriting, spelling, sentence construction, typing, and word processing; and 4) create an engaged community of writers (Graham et al., 2012a). Each of these recommendations was rated by the level of evidence supporting it. The second and third recommendations had strong or moderate evidence, while the first and fourth recommendations had minimal evidence. Nevertheless, the practice guide’s authors noted that minimal evidence did not mean a practice was less important but rather that there were fewer studies that met the rigorous criteria for review. It was a promising practice for other reasons (i.e., it logically supported uptake of the other practices) and therefore they included it in the document. The four recommendations, and the evidence supporting those recommendations, were each explained within the practice guide.
The recommendation for providing daily writing time for students, a practice particularly important to this study, had minimal evidence. However, the authors explained that they chose to include this as a recommendation because the other recommendations could not be carried out unless a reasonable amount of time was set aside for students to write during the school day (Graham et al., 2012a). Grounded in the same theory that distilling research into tangible practices could positively impact classroom instruction, some states have made their own practice guides similar to the IES Practice Guide. Michigan is one of those states. A group of education leaders in the state formed the Early Literacy Task Force to help support statewide early literacy improvement initiatives. These initiatives occurred in tandem with reform-focused policies from state legislators around early identification and interventions for students struggling with literacy learning in early elementary school. The early identification and intervention were a complement to an accountability measure, appearing in the form of student retention based on third grade ELA test scores. This group of statewide literacy leaders created a document to guide teacher practice called the Essential Instructional Practices in Early Literacy: Grades K to 3 (referred to throughout this document as the K-3 Essentials; MAISA GELN Early Literacy Task Force, 2016). Within this document, there were ten “essential” instructional practices for teachers to implement. Each essential contained multiple bullet points, which were further practice recommendations for that general category (see Appendix A for an example of essential practices and bullets). These practices are called essentials and/or bullets from this point onward. Like the IES Practice Guide, the K-3 Essentials were based on experimental and quasi-experimental studies or meta-analyses, and each essential and bullet had footnotes that directed the reader to the appropriate research literature or the IES WWC Practice Guide. Thus, the ten documented essential practices in the K-3 Essentials were practices that have been shown to improve student learning compared to another type of instruction or business-as-usual instruction. Of note, these practices were updated in September of 2023, but this study uses the original 2016 version, which is very similar for the writing practices. This K-3 Essentials document, as well as associated professional development modules and classroom videos, provided specific guidance around each of the ten essentials in early literacy, ranging from family involvement to phonological awareness instruction. In an effort to support the early literacy initiative, the state had offered continuing education credit toward licensure renewal for teachers who used and completed these online modules. To further support teachers in improving early literacy instruction, the state had also provided additional funding to hire Early Literacy Coaches at the Intermediate School District (ISD) level. This was done with the hope that these coaches could support teachers to adopt the practices outlined in the K-3 Essentials. The Early Literacy Task Force, with the help of the local ISD coaches, provided training and professional development grounded in these K-3 Essentials to K-3 teachers across the state.
Because this has been such a large initiative in the state since 2016, this framework was the one the larger study on literacy coaching used to analyze classroom instruction (see the Chapter 1: Larger Context for the Current Study section for more information on the larger study). Since this dissertation study used data from the larger literacy coaching study, the K-3 Essentials also served as the instructional framework for this study. The K-3 Essentials were grounded in best practices as they relate to literacy learning holistically; therefore, each essential did not necessarily focus on writing, although some did. For instance, one of the essential practices that was central to this study was essential six. Essential six stated that teachers should provide “research and standards-aligned writing instruction” and was broken into five separate bullets that outlined instructional practices teachers should employ to meet the essential. As noted in the K-3 Essentials endnotes, these bullets included recommendations that come directly from the IES Practice Guide written by Graham and colleagues (2012a), and as a result, there was strong alignment between the K-3 Essentials and the practices recommended in the IES Practice Guide. For this study's purposes, not all ten practices were analyzed, just those that directly correlated to the IES Practice Guide. For more information on which essentials and bullets are included in this study, please see the Methods chapter. To see the direct alignment of the K-3 Essentials used in this study with the IES Practice Guide, see Appendix C. Since both practice guides provide evidence-based practices for teachers to use, and since this study focuses on teachers’ enacted writing instructional practices, the evidence-based practices contained in these guides are the ones used to analyze teachers’ enacted instruction and the curriculum materials. What Does Classroom Writing Instruction Look Like? Although in the last ten years research-based guidance documents have outlined what instructional practices early elementary teachers should adopt, there has been less clarity on what practices teachers are already using in early elementary classrooms. There has also been less research on the impact of teachers’ enacted instruction on children’s writing achievement. In this section, I share findings from previous survey studies and observational studies on elementary classroom writing instruction. This section shares insights into teachers’ self-reports on their instruction as well as what writing instruction researchers have observed in early elementary classrooms. Many survey studies cited here were conducted in the years preceding the rollout of the Common Core State Standards (CCSS) for ELA. However, the observational studies on writing cited here mostly occurred after the widespread adoption of the CCSS for ELA. Survey Studies: Teachers’ Reports on Writing Instruction In addition to the empirical evidence that writing instruction can improve writing outcomes in a child’s earliest school years (e.g., Traga Philippakos et al., 2023), there is some evidence about what instruction in primary-grade classrooms looks like, what teachers believe about writing, and what materials they are using.
Some evidence obtained from a collection of national survey studies has examined teachers’ spelling instruction, beliefs about writing instruction, self-efficacy when teaching writing, handwriting instruction, adaptations for struggling writers, and approaches to writing instruction more broadly (Cutler & Graham, 2008; Graham et al., 2002; 2003; 2008a; 2008b; Guo et al., 2023). In addition, another set of more recent survey studies have examined teachers' beliefs and attitudes about the new Common Core State Standards for writing and language (Troia & Graham, 2016; Hsiang et al., 2020) and teachers’ writing instructional practices. These surveys all used a representative sample of teachers at the grade levels studied (e.g., Kindergarten or grades 1-3) pulled from a national market research database. Although these surveys present many interesting findings about teachers’ self-reported instructional practices, some are more relevant to this study than others. For instance, Cutler and Graham (2008) found that teachers were varied in their use of instructional materials, time spent on writing, and instructional approaches used. However, teachers reported that they spent an average of 30 minutes a day teaching writing and taught a mix of both process- and skills-based 21 writing lessons. They also found that teachers reported needing more training and support to teach writing, a finding echoed by the National Commission on Writing (2003) and many of the other survey studies on teachers’ writing instruction (e.g., Graham et al., 2008a, 2008b; Troia & Graham, 2016). Additionally, teachers’ approaches to teaching writing reportedly incorporated both process and component skills instruction. Further, as a result of their survey study, they recommended many practices including increasing the time students spent writing, increasing their expository text writing, providing better teacher professional development around writing and better balancing the instruction around writing processes, strategies, and skills (Cutler & Graham, 2008). The main finding from Graham and colleagues (2008a) was that responding teachers indicated that they taught handwriting to students an average of 70 minutes per week; however, their use of instructional strategies for teaching handwriting varied widely. Most teachers also reported teaching lowercase letters first, and around one in five teachers reported teaching upper and lowercase letters in tandem. Additionally, four out of five respondents indicated that they taught children to use the correct pencil grip and how to position their paper through the use of one or more strategies. Finally, all respondents who taught handwriting, as well as five who did not, thought handwriting should be taught as a separate entity in the school day. In another study conducted by Graham and colleagues (2003), teachers were asked about their beliefs about teaching writing through a variety of questions targeting three constructs (correctness, natural development, and explicit instruction specifically). As a result, the authors provided an instrument validation for a construct measurement tool for primary grade teachers regarding writing instruction. The authors found that teachers believed that explicit instruction in 22 writing was important, with 99% of participants expressing that belief. Another 73% believed that natural learning in writing mattered, creating a significant amount of overlap. 
Graham and colleagues (2008b) studied teachers’ spelling instruction and found that most teachers did teach spelling, for at least 25 minutes per week, with many reporting anywhere from 60 to 90 minutes per week. This instruction was sometimes research-based but other times it was not. They also found that teachers made few or no adaptations for students who were unsuccessful. Interestingly, though teachers made few adaptations, Troia and Graham (2016) found that teachers believed they were capable of doing so. Troia and Graham (2016) also found that only a third of teachers believed that their writing instruction had a direct impact on students’ writing improvement and only ten percent more believed that their knowledge of how to teach specific writing concepts or skills led directly to students’ mastery of said skills. In a more recent survey of kindergarten teachers, teachers self-reported using varied instructional strategies, with practices like conferencing occurring once a week on average and other practices, such as using a graphic organizer, occurring least frequently. Kindergarten teachers in this survey reported that they always encouraged estimated spelling. Component skills instruction, including spelling, handwriting, and instruction on capitalization and punctuation, was included at least once a week or more often; however, the authors shared concern that meaningful opportunities to compose were less present in teachers’ self-reports, along with less focus on teaching the writing process and strategies for engaging in it (Guo et al., 2023). A more recent survey of teachers in grades 3-4 (Brindle et al., 2016) found that teachers were not meeting the recommended daily amount of time for writing instruction. Teachers reported spending 15 minutes a day teaching writing and reported that students wrote for 25 minutes a day. They reported teaching writing across all three major genres: narrative, informational, and persuasive. The researchers reported that teachers’ self-efficacy and philosophy about writing made unique contributions to teachers’ use of evidence-based practices. Three-quarters of teachers in this survey also reported that they were least prepared to teach writing out of the subjects reading, writing, science, math, and social studies, and that their teacher preparation program did not prepare them to teach writing. Teacher preparation responses directly correlated to the teachers’ time spent teaching writing and the time respondents reported students wrote inside and outside of school. The variation in writing instructional approaches and time spent teaching writing is a finding echoed in international studies. In an international survey, Hsiang and colleagues (2020) surveyed teachers in grades 1-3 in Taiwan. Over half of respondents reported teaching writing only once a week or less, with only 37% saying they taught writing daily. Teachers also reported using varied instructional practices and slightly positive self-efficacy and writing philosophies (Hsiang et al., 2020). Although survey studies have provided important insights such as the ones shared here, they also have limitations, including that teachers must estimate the frequency of a practice and that self-reports may be influenced by what teachers know or believe they should do. Additionally, teachers may interpret survey questions differently than the researchers intended, or teachers may interpret questions differently from one another.
These survey studies provide valuable insight for this study because they highlight the things early elementary teachers say they do or do not do in their classroom writing instruction. Researchers’ Reports: Observational Studies Another path researchers have explored to gain insight into early elementary writing instruction has been to conduct observational studies. While there have been many observational studies of ELA instructional blocks, most of the attention has been paid to reading and not to 24 writing. For instance, in an elementary study of observed instruction in grades 1-5, conducted by Taylor and colleagues (2003) the focus was on reading instruction, but two writing components were measured including writing about texts. An additional group of observational studies investigated instruction in the literacy block in early elementary school and included codes about writing (Connor et al., 2004; Foorman et al., 2006) but reported no findings related to writing, aside from one finding from Kim et al. (2013). Specifically, Kim and colleagues (2013) found that a teacher’s responsiveness was related to student writing quality. Thus, findings from these studies did not shed much light on writing instruction overall. However, it should be noted that the finding from Kim et al. (2013) on teacher responsiveness directly impacting student writing quality has recently been confirmed by Wang and Troia (2023) through their use of hierarchical linear modeling to analyze student-level and teacher instruction factors on student’s writing outcomes, though this was with fourth and fifth grade. In an effort to shed more light on writing instructional practices in early elementary classrooms, some recent large-scale observational studies were conducted in Kindergarten and First Grade that focused solely on writing instruction. Studies by Puranik and colleagues (2014) and Coker and colleagues (2016) were large-scale observational studies of instruction across many classrooms and schools. Findings from these studies broadly confirm that many teachers are providing writing process instruction and spelling, but also observed handwriting instruction more than was self-reported. Specifically, Puranik et al. (2014) found that teachers’ instruction varied across classrooms and buildings when it came to handwriting, students’ independent writing time, and teacher instructional time. For example, 15 out of 21 teachers were observed to teach handwriting but the amount of time spent on handwriting varied widely, with 4.2 minutes being the largest amount of time observed averaged across two observations. In 3 of the 21 25 classrooms, students were not provided independent writing time at all. Further, teachers seemed to assign writing for children to complete independently more than they modeled or instructed children in strategies or approaches for how to complete that work, in contradiction to the recommendations. This observational study provides another lens with which to view survey study results and teacher self-reports. Coker and colleagues (2016) conducted another observational study of writing instruction in first grade across 50 first grade classrooms with 57 participating teachers. Coker and colleagues (2016) found that skills instruction was the most representative type of instruction, followed by process instruction and then sharing. Teachers also spent most of their time teaching whole group writing lessons and little time modeling (3.7% of observations). 
Further, the largest variation in the amount of writing instruction and the type of writing instruction teachers provided was found at the observation level, not at the classroom level. This means that from observation to observation, it was more likely that a teacher varied their practice than that one teacher consistently used certain practices more than another. This finding provides insight that could not necessarily be gleaned from a survey study in which self-reports are recorded for practices overall. Coker and colleagues (2016) and Puranik and colleagues (2014) found variation across classrooms within the same school as well as across schools. This means that there was variability from teacher to teacher but also that, sometimes, some schools demonstrated differences in practices overall. For instance, some schools taught more capitalization and punctuation than others (Coker et al., 2016). Additionally, there was statistically significant variation at the school level for writing connected text (both more and less than the average), open writing tasks (more than average), and correction and copy tasks (more than average). Each of these instances indicates that, based on hierarchical linear modeling, some schools showed statistically significant differences (one in each category) in the types of writing tasks students were given and sometimes in the types of instruction. However, the authors’ overarching finding was that variation was mostly at the level of the observation, not the school (Coker et al., 2016). This echoes the findings from Puranik and colleagues (2014) that teacher variability exists within and across schools. Essentially, this means that children at the same grade level, in the same school, are receiving different writing instruction. In Coker and colleagues’ (2016) study, spelling was the most common writing instructional topic observed, and keyboarding was not observed a single time. Given that both are practices called for in the IES practice guide, with the recommendation for keyboarding instruction beginning in first grade, this finding is particularly salient for the current study. Additionally, both studies (Coker et al., 2016; Puranik et al., 2014) found that writing instruction was, on average, occurring less than the recommended daily amount. However, nuanced data behind that average reveal that some observations had zero minutes of writing instruction and others had roughly 75 minutes of writing instruction, meaning the average did not provide the full picture. This confirms the findings from previous survey studies that time spent on writing instruction varies from one individual teacher to the next. However, according to Coker and colleagues, it also varies from day to day. Of note, Puranik and colleagues (2014) did not observe for the entire day and so may have missed writing instruction that occurred outside of the ELA block in those kindergarten classrooms. These observational studies provide another lens with which we can examine classroom instruction and, to some extent, generalize the time spent on writing and the types of writing instruction young children are receiving in elementary school. What is clear from these studies overall is that the writing instruction teachers provide varies from school to school and from classroom to classroom.
Sometimes, a school will have a score that is higher than the mean score across classrooms, but more often the variation occurs at the level of the observation. This matches the variation found in survey studies from one respondent to another. These studies are of particular relevance to this study because teachers in this study were clustered in the same schools or the same districts, which allowed for examination of individual observations within and across school districts.

The Role of Curricular Materials

For the purposes of this study, I have used curriculum materials to refer to the instructional materials teachers are provided by their district and expected to use to guide their teaching. This is a departure from some recent literature (Remillard & Heck, 2014) where curriculum encompasses all components of the enacted instruction and instructional materials are defined separately. Further, I have used teachers’ enacted instruction to refer to the enacted curriculum, which is explained further in the conceptual framework subsection of this chapter. Since No Child Left Behind, many curricular materials for ELA were designed to meet the demands of standardized testing born out of those reforms (Darling-Hammond, 2007). As a consequence, many curricula still focus heavily on reading, leaving the writing components of the curriculum less regulated (McCarthey, 2008) and less studied (Gabas et al., 2023). However, in order to be adopted, curricula must also meet the learning standards, currently the Common Core State Standards, which put an increased emphasis on writing. These standards include writing across three genres, research writing, grammar, spelling, handwriting, keyboarding, and word processing skills. Districts and states adopt new curriculum on a regular basis and often update their curriculum choices when new standards are adopted by the state. Thus, curriculum materials are a lever that policymakers and state and local education leaders use to improve instruction (e.g., Chiefs for Change, 2021; Chingos & Whitehurst, 2012). Thinking of curricula as a lever for improved instruction is not isolated to the realms of educational reform or policy. In curriculum and teacher education research, the topic is also of interest. In fact, Ball and Cohen (1996) have called for curriculum materials designed to support educator learning. These curricula, called educative curricula (Davis & Krajcik, 2005), intentionally include components that may be helpful to teachers as they implement them. In writing, one example of this is Lucy Calkins’ Units of Study program, which includes teaching scripts. When teachers read these scripts, they see the language they could use to explain difficult concepts, like revision, to children as young as five and six years old. Another feature that some curricula include is callout boxes with tips related to specific content or learners, such as sidebars that help teachers by providing supports for English Language Learners. When looking to adopt new curricula, school leaders have been encouraged to seek out evidence-based resources. However, the research literature does not often contain rigorous research on curricular efficacy for entire programs, even though entire programs are what districts typically adopt. Most often, researchers study the efficacy or contents of a subset of a curricular resource. For instance, Cabell & Hwang (2020) studied the efficacy of a knowledge-building ELA curriculum for kindergarten.
Many others have studied curricula via content analysis methods for particular components (e.g., Wright & Neuman, 2013, studied vocabulary; Gerde et al., 2019, studied preschool writing curricula). Currently, multiple avenues exist for a district or state leadership team to find evidence on the effectiveness of various curricula. But until the last few years, many curricula were not evaluated even on the sites that were created to bridge the gap between research on curricula and those who would be adopting them, such as the What Works Clearinghouse, Evidence for ESSA, or EdReports. Since Michigan is a local control state, district leaders are allowed to adopt their own curricula and do not have to look to resources vetted or adopted by the state. Even then, districts vary in how they expect curricula to be used, with around 60% of superintendents reporting that they mandate teachers use the ELA curricular materials the district has adopted and around 36% saying they provide and/or recommend curricula (Wright et al., 2022). Additionally, there is evidence that teachers often supplement curricular materials with open-access, online resources, which are of varied and often low quality (Polikoff & Dean, 2019). Although Michigan’s school districts have varied expectations of curriculum use and have adopted varied ELA curricula (Wright et al., 2022), there is evidence that curriculum materials can influence teachers’ knowledge (Kaufman et al., 2018).

How Do Curricular Materials Influence Teachers’ Instruction?

Researchers have long argued that curricular materials can and do influence teacher decision-making, including both content and pedagogical approach (Ball & Cohen, 1996). To what extent, and how specifically, was an area on which seminal research did not fully agree. Some researchers articulated that heavily prescriptive and scripted curricula strip teachers of professional knowledge and autonomy (e.g., Apple & Junck, 1990), and others wrote curricula for teachers that were heavily scripted and, later, educative (e.g., Carnine et al., 2004). Further studies of curriculum enactment for new teachers found that curricular materials can provide support for new teachers (e.g., Kauffman et al., 2002; Valencia et al., 2006). While materials vary, so too does a teacher’s interaction with those materials as a result of various factors (Remillard, 2005; Stein et al., 2007), and studies of that phenomenon are not new. As policies, practices, and curricular materials change, so too do the ways that teachers interact with and enact these curricula (Pak et al., 2020). This is especially true in this era of education reform where teachers are increasingly expected to follow district-provided curricular materials. These materials are often heavily scripted and do not provide for much instructional adaptation for individual needs (e.g., Gerde et al., 2019; Handsfield, Crumpler, & Dean, 2010; Stillman & Anderson, 2011). As interest in and use of curriculum materials to teach ELA continues, researchers have focused on the complex interactions between the teacher and the curricular materials. For instance, McCarthey & Woodard (2018), Waldron (2014), and Valencia and colleagues (2006) have all examined teacher implementation of curricular materials. Valencia and colleagues (2006) used a case study approach and followed four beginning teachers to observe their use of curricular materials, finding that the materials interacted with teachers’ knowledge and instructional practice.
In some ways these materials hindered them, and in others they supported their on-the-job learning. Specifically, the two teachers with less autonomy, more directive materials, and fewer materials, or “shackles” as the title of their piece so aptly states, exhibited less on-the-job learning and were least able to adapt to meet students’ needs. The two teachers with stronger professional knowledge to begin with, less restrictive materials, and more access to a variety of materials demonstrated stronger on-the-job learning and instructional practices that better adapted to and met students’ individual needs. McCarthey and Woodard (2018), in a study of writing curriculum enactment that included 20 teachers, found that teachers responded to the curriculum in different ways. Of the 20 teachers, 12 adapted the curricular materials they had and, because of professional development, felt confident in their adaptations. They shared sentiments that curriculum could not contain everything they would need to teach their students. Of the remaining eight teachers, four rejected the curriculum and four followed the curriculum almost exactly as it was written. Those who followed it perceived their curriculum to be adequate and had less access to professional learning than those who adapted the curriculum. Those who rejected their curriculum did not like the curriculum provided and instead drew from other resources. Unfortunately, the resulting instruction from the teachers who did not use the curriculum lacked clarity and coherence. Through a yearlong study of teachers’ enactment of the writing workshop instructional approach combined with professional development, Troia and colleagues (2011) followed seven teachers and examined each teacher’s practice as a case through analysis of multiple data sources (e.g., observations, surveys). They found that the majority of the 27 core components of the instructional model were present in teachers’ enacted writing instruction during observations (70-89% of the time), which shows uptake of instructional practices from a writing curriculum. However, since teachers were receiving professional development during the study, this finding cannot be attributed to the curriculum alone. Variation was more apparent in the instructional practices teachers used when interacting with children. Teachers’ enacted instruction varied, as did their management techniques. For instance, teachers’ self-reports of support for estimated or invented spelling ranged from 1 (never) to 7 (always), and their use of engagement tactics varied widely as well. Interestingly, all teachers in the study provided modeling for students on how to use supportive materials (e.g., notebooks, graphic organizers), but varied in how they approached children’s authorship and autonomy. In addition to the implementation studies described above, some studies have used the lens of policies in combination with teachers’ beliefs and their implementation of top-down mandated ELA curricula. Many of these studies took place during the Reading First era of instruction and found complex interactions between teachers and the ELA curriculum.
Some of these studies conducted during this large curricular reform movement in literacy found that teachers felt less agency in their roles as instructors (e.g., Pease-Alvarez & Samway, 2008), while others found that teachers were able to adapt and contextualize instruction from their mandated curriculum (e.g., Kersten & Pardo, 2007). While these studies do not necessarily indicate that curricular materials ensure instruction that follows those materials exactly, they do provide evidence that, for many teachers, the curricular materials they have are influential in some ways. Researchers have noted that curriculum can serve as a support for beginning teachers striving to meet new and more rigorous standards (e.g., Kauffman et al., 2002; Chingos & Whitehurst, 2012; Gerde et al., 2019) and that it can help teachers provide cohesive instruction, but also presents limitations and requires adaptation to meet student needs (e.g., McCarthey & Woodard, 2018). In fact, when considering the scope of influence of curriculum on teacher uptake, it is important to also consider the type of curriculum and the impact of said curricular uptake. For instance, in a synthesis of a multi-year, multi-design study conducted by Davis and colleagues (2017), teachers were found to have significant uptake of the material contained in educative features. The impact on student learning, however, was less clear. Other studies of educative features in curricula, many conducted in mathematics education and some with pre-service teachers, have indicated that how teachers read the curriculum materials, including the epistemologies they bring to that reading (e.g., Drake et al., 2014; Land et al., 2015; Stein et al., 2007), can influence their uptake of the educative contents in those materials. Adding additional insight to the studies around curriculum uptake, content analyses of writing standards and curricula have examined the types of writing opportunities provided in commonly used kindergarten and preschool curricula (Gabas et al., 2023; Gerde et al., 2019) and the Common Core State Standards for writing (Troia & Olinghouse, 2013). Both studies of kindergarten and preschool writing curricula found that the instructional strategies in the curricula varied depending on the practice. For instance, Gabas et al. (2023) found that all curricula had daily opportunities for genre writing and grammar instruction, as well as opportunities to write within reading instruction. They also found that writing process and strategy instruction were commonly present, but what parts of the process the instruction focused on varied. Additionally, they found that component skills instruction was often removed from the time of day when children were engaged in writing their own texts and was instead relegated to other instructional times. While there is evidence that curricular uptake happens in varied ways across ELA subjects, there is some disagreement on how much instructional alignment to standards or standards-aligned curricula influences students’ achievement. Gamoran and colleagues (1997) found that students who were exposed to more rigorous mathematics course content than the remedial math courses they were tracked into experienced an increase in their achievement as compared to similar peers who did not receive the more rigorous mathematics course.
However, a more recent study by Polikoff and Porter (2014) found only weak evidence that teachers’ alignment to curriculum (as defined by standards) has even a modest influence on students’ achievement and outcomes. In turn, they posit that it is possible that standards-aligned assessments and standards-aligned instruction may not have the value-added outcomes for student achievement that policymakers are hoping for. However, the authors explain that it is possible the measures they used are too removed to measure day-to-day instructional alignment meaningfully, and they propose ways to improve the accuracy of one of the measures (the surveys of enacted curriculum) in a follow-up paper (Polikoff et al., 2019). Although this study does not examine students’ achievement on standardized assessments or any assessments directly, one key theory underlying this study is that if teachers’ instructional quality during writing instruction is higher and better aligned to evidence-based practices, students’ writing achievement will improve. These studies, and others investigating alignment, have again reinforced that there is a complex relationship among the teacher, the curriculum, students, and student achievement, and that these topics warrant further study.

Writing Curricula: Contents and Types

How teachers use curricular materials varies, but so too does the content within those curricula. Since the advent of the No Child Left Behind Act (NCLB, 2002), prescriptive and scripted curricula have dominated the market (Vaughn et al., 2019). Additionally, curriculum purchasing is a large part of a district’s per pupil expenditures (Cavanagh, 2017); therefore, there is a large market for curriculum and instructional materials, resulting in a wide variety of materials produced and consumed by individual teachers, schools, and districts. In fact, in a recent survey study in the state of Michigan, the curricular materials teachers reported using for ELA instruction varied from core ELA curricula to lessons from Teachers Pay Teachers, used to teach literacy subjects such as reading, writing, spelling, and handwriting. Core ELA curricula often come in the form of basal series, which claim to provide coverage for all of the literacy teaching that should happen in a classroom. This same survey study revealed that other teachers reported using multiple curricular materials to teach across ELA contents, with some using separate curricular materials for reading, writing, phonics, and handwriting (Wright et al., 2022). Some of the core ELA curricula on the market that Michigan teachers reported using in the statewide survey and in the larger literacy coaching project were considered knowledge-building ELA curricula. These knowledge-building ELA curricula attempt to bridge a gap between the reality of the school day and evidence-based reading comprehension instruction (Cabell & Hwang, 2020). Often, reading and other tested subjects such as mathematics dominate instructional time, leaving little time for content instruction in subjects such as science (e.g., Blank, 2012). But since research repeatedly finds that language comprehension, specifically background knowledge and vocabulary, is important for reading comprehension (e.g., Cervetti & Wright, 2020; Hwang & Duke, 2020), these curricula aim to build background knowledge and content-specific vocabulary.
It is from this tension between building content knowledge and not having enough time in the school day that knowledge-building ELA curricula were born. Part of the appeal of knowledge-building ELA curricula is that they respond to the lack of time to cover all the content standards and subjects teachers are supposed to cover. This is of particular interest to practitioner and research communities alike. Knowledge-building ELA curricula are those that have taken the approach of content integration in literacy instruction. What this looks like in these knowledge-building texts is using science and social studies topics to create cohesive literacy experiences. For instance, the curricula generally include read alouds aligned with intentional teaching of academic vocabulary (Cabell & Hwang, 2020). Examples of these curricula include Core Knowledge Language Arts (CKLA; Core Knowledge Foundation & Amplify Education, 2017) and Expeditionary Learning (EL; EL Education, 2017). These curricula are similar to more traditional basal reading programs in that teachers are provided with daily lesson plans as well as many supporting and supplementary materials for the various components of the lessons and for a variety of learners. Within these programs, the writing instructional components contained in the manuals and lesson plans vary in frequency and in content. Although writing curricula are sometimes a part of core ELA curricula, including the knowledge-building curricula explained above, sometimes they are standalone curricula that focus only on writing, as in the writing workshop curricula studied by Troia and colleagues (2011). In the statewide survey of Michigan educators, roughly thirty-five percent of teachers reported using a separate writing curriculum, not including spelling or handwriting curricula (Wright et al., 2022). Examples of these curricula included Lucy Calkins: Units of Study for Writing: Small Moments, Personal Narrative Writing (Calkins, 2003) and Write from the Beginning and Beyond: Thinking Maps (Buckner, 2012). As with the knowledge-building ELA curricula above, some of these writing curricula had daily lessons, which were scripted, while others provided a general framework for teaching writing. Further, some of the curricula teachers reported using claim to be educative, meaning they aim to build teacher professional knowledge while also prescribing specific steps and instructional practices (e.g., MAISA Units of Writing, 2014). In a recent study of writing curriculum materials for kindergarten students, Gabas and colleagues (2023) examined some core ELA curricula for their writing instructional content. The curriculum materials examined were similarly comprehensive ELA curricula, and the study reports results on lessons including genre-based writing lessons, grammar lessons, and reading lessons. Since Gabas and colleagues’ (2023) study is the most directly applicable to the first question of this dissertation study, I spend a bit more time on their findings here. When examining genre-based writing instruction, the authors found that approximately three-quarters of lessons addressed drafting, and just over half of the lessons addressed planning and publishing. Around one-fifth of lessons also contained revising and editing as well as evaluation, and just about a tenth of lessons contained sharing. The authors note that one curriculum in the study had a systematic approach to moving a piece of writing from drafting through each of the steps of the writing process, while the other two did not.
Sentence construction was the most common component skill contained in the genre-based writing lessons, though there was significant variation. One of the curricula contained sentence construction in 97% of its genre-based writing lessons, while the other two contained sentence construction in 58% and 28% of those lessons. Spelling was present in an average of 15% of the lessons but varied from 33% to 2% across the three curricula. The curricula focused on conventional spelling, with the exception of one curriculum, which provided cues for both conventional and estimated spelling. Handwriting was similarly taught infrequently and appeared in all three curricula in amounts ranging from 67% of lessons to 15% of lessons. These handwriting instructional moments were often general and more like reminders or cues to children, with the exception of one curriculum, which addressed handwriting via specific letter-formation cueing but did so outside the context of the writing lesson’s overall focus. When examining the other types of lessons, including grammar and reading instruction lessons, the authors found that almost all grammar lessons contained sentence construction (98%). However, those lessons rarely focused on sentence expansion and combining, which are more complex skills. Also, the lessons on sentence construction often took place in writing or work that was not the child’s own writing. Spelling instruction often took place during phonics lessons, and support for students to use estimated spelling was not common, particularly during genre-based writing lessons. Teachers also used shared writing to model a task, but sometimes this shared writing included writing things for children to copy. While students did have exposure to all parts of the writing process, the opportunity to write their own pieces was noticeably less frequent. Overall, the authors found that while a curriculum may contain some components of effective instruction, some pieces were missing. The findings from these curriculum studies inform this study’s examination of curriculum materials used to teach K-3 writing by providing a base for what curriculum materials have, and have not, contained in past studies.

Conceptual Framework

Since curriculum enactment is a topic that has been studied across a variety of contents and contexts, there are many different theoretical models that have guided and emerged from empirical and descriptive work. Early landmark scholarship on curriculum helped define the orientation of the field. In this scholarship, there was differentiation between the official curriculum, communicated through standards and curricular materials, and the operational curriculum, or the curriculum teachers enact in their classrooms (Goodlad et al., 1979). Other early empirical work, grounded in post-war education reforms, viewed curriculum as “teacher-proof.” The language around these reforms has continued even today. However, the scholarship on curricular enactment that has continued to evolve alongside these reforms has challenged the assumption that teachers are only the vehicles by which curricular materials are transmitted, including the studies cited above (e.g., Waldron, 2014; McCarthey & Woodard, 2018). Instead, these studies have challenged the reform narrative by highlighting ways that teachers had agency via their adaptations of, or resistance to, new and unfamiliar curricula (Remillard, 2005, p. 215).
As curricular and education reforms continued throughout the twentieth century, so, too, did the study of curricular enactment. Scholars in the field have also proposed that curriculum may have a larger role to play than the one it has been allotted, positing that curricular materials might even be made educative and, in their uptake, further develop teacher pedagogical content knowledge (Ball & Cohen, 1996). Empirical studies across disciplines have highlighted the ways that teachers’ pedagogical content knowledge, and in turn practice, is influenced by curricular materials and/or influences the enacted curriculum (see Valencia et al., 2006). Other studies have followed the work of Ball & Cohen (1996) to propose ways that curriculum materials could contain educative components that support teacher learning (Davis & Krajcik, 2005; Davis et al., 2017; Drake et al., 2014). Two of the curricula in this study are marketed as educative, and another is marketed as building teachers’ pedagogical content knowledge around effective writing instructional practices. The conceptual framework for this study is grounded in the theoretical work of Stein and colleagues (2007), shown in Figure 1. This theory was informed by Remillard’s (2005) theory of curriculum enactment. Remillard (2005) outlined four common perspectives in curriculum enactment studies. These perspectives were that teachers are active agents who follow, subvert, draw upon, interpret, or participate with curricular materials. That is, there is a participatory relationship between teachers and the curriculum materials, leading to an enacted curriculum that may or may not closely match the curriculum as written. Stein and colleagues (2007) have built upon this work by showing the phases of curriculum use across time. To begin, there is the written curriculum, which leads to an intended curriculum and then an enacted curriculum. Since teachers’ instruction, or the enacted curriculum, is what arguably leads to student achievement (e.g., Graham et al., 2012b; Kim et al., 2013; Wang & Troia, 2023), it is the focus of this dissertation study. Stein and colleagues’ (2007) temporal framework for curriculum enactment also highlights how the written curriculum is transformed through a variety of factors, which can include teacher identity, self-efficacy, and other factors. This mechanism for transformation of the written curriculum to the enacted curriculum has been empirically supported in studies of writing curriculum enactment (e.g., Troia et al., 2011) and survey studies on teachers’ writing practices (e.g., Hsiang et al., 2020).

Figure 1
Temporal Phases of Curriculum Use
Note: Image from Stein et al., 2007

Although these theories were originally conceived in response to work done in mathematics, likely because of the nature and long history of teacher use of textbooks in mathematics instruction as opposed to other disciplines (Remillard, 2005), the mechanisms outlined in this framework have been explored in studies conducted in writing instruction (e.g., McCarthey & Woodard, 2018; Troia et al., 2011). Literacy studies, such as those by McCarthey and Woodard (2018), McCarthey and Kang (2017), and Valencia and colleagues (2006), have found that teachers have interacted with curriculum materials in varied ways and have varied in their enactment of the materials. For instance, some teachers reject the curricula, others follow the curricula, and still others adapt the curricula; teacher-level factors and contextual factors both seem to be at play.
These findings support Remillard’s (2005) seminal theoretical framework that curriculum enactment is participatory and not a one-directional phenomenon. Studies on enacted curriculum in elementary literacy instruction have also found that teacher knowledge and experience (teacher factors) and the type of curriculum materials teachers are provided or have access to (material factors) both influence the enacted instruction (e.g., McCarthey & Woodard, 2018; Valencia et al., 2006; Yoon, 2013). In line with Stein and colleagues’ (2007) theory, I posit that curricular materials and teachers have a relationship that leads to an enacted curriculum that, for various reasons, may be transformed from the written curriculum. That is, teachers and materials are both actors in the instructional opportunities provided to students (i.e., the enacted curriculum), and those opportunities may or may not align to evidence-based instructional practices. This dissertation study investigates districts as cases in order to explain some of the transformations that occurred from the written curriculum to the enacted curriculum. To describe how the curriculum is transformed from the written to the enacted, I investigated both the curriculum materials and teachers’ enacted instruction to see whether teachers reject, follow, or adapt the curriculum and whether this can be connected to district-level or other factors.

The Present Study

Since theory and research studies have supported the idea that there is a dynamic and bidirectional relationship between the teacher and curricular materials, I have studied both individually and then examined them together for this dissertation study. For this mixed-methods descriptive study, I have examined the curriculum materials and teacher enactment separately using the lens of evidence-based instructional practices. I then used the data from both sources to see if any quantitative or qualitative evidence pointed to a relationship between the two by examining teachers’ enacted practice collectively within districts and examining teachers’ enacted practices across districts. Therefore, this study seeks to answer the following research questions:
1. How can recommended writing instruction be described across four writing curricula?
   a. What effective writing practices were contained in these curricula?
   b. How did recommended writing instructional practices compare across the curricula?
2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic?
   a. What effective writing practices were K-3 teachers using?
   b. How did enacted writing instructional practices compare across districts?
3. How did teachers’ enacted writing instruction compare to recommended instruction in district-provided curriculum materials?
While studies have certainly shown there can be a relationship between teachers and the enacted curricula, to date there have been very few such studies focused on how this plays out during writing instruction in early elementary classrooms (see McCarthey & Woodard, 2018; Troia et al., 2011; Yoon, 2013).
Additionally, there have been few studies to date that I know of that have done the following three things: 1) used a content analysis of the writing instructional opportunities contained in ELA curricular materials (see Gabas et al., 2023), 2) compared classroom instruction across classrooms that used knowledge-building curricula and those that did not, and 3) observed writing instruction in early elementary classrooms during the COVID-19 pandemic-impacted school year. In order to answer the research questions in this study, I have done all three.

CHAPTER 3: METHODS

The purpose of this study was to better understand evidence-based writing instructional practices contained in commonly used elementary writing curricula, teachers’ writing instruction as it was enacted, and finally whether there was overlap between the instructional practices found in the curriculum districts use and teachers’ enacted instruction. Grounded in the theoretical and empirical findings that there is a difference between the written curriculum and the enacted curriculum (Stein et al., 2007), this study investigates the following questions:
1. How can recommended writing instruction be described across four writing curricula?
   a. What effective writing practices were contained in these curricula?
   b. How did recommended writing instructional practices compare across the curricula?
2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic?
   a. What effective writing practices were K-3 teachers using?
   b. How did enacted writing instructional practices compare across districts?
3. How did teachers’ enacted writing instruction compare to recommended instruction in district-provided curriculum materials?
This chapter provides an overview of the mixed methods employed to answer the study’s research questions. To begin, this chapter explains the design and logic for this study. Next, I describe who the study participants were and some information about the districts they worked in. Afterwards, I explain the data collection procedures used for the study. Then, I describe the data sources and explain the sampling plan and the coding and analytic processes for each research question in the study. Finally, I discuss the limitations of this study and the limitations of the methodologies employed more broadly.

Design and Logic

To answer the research questions in this dissertation study and the larger Read by Grade Three (RBG3) study, we employed a content analysis methodology (DeJulio et al., 2020; Krippendorff, 2004; Neuendorf, 2002). The primary data source was teachers’ video-recorded instruction. Because of the COVID-19 pandemic, we were unable to observe in person and had to pivot to use technology to gather the data. Therefore, video observations were paired with post-observation surveys to be sure we gathered contextual information that might be helpful for data interpretation (e.g., why did the class look small, how did children get materials for virtual small group instruction). To conduct the content analysis, we used evidence-based instructional practices based in the Essential Instructional Practices in Early Literacy: Grades K to 3 (referred to throughout this document as the K-3 Essentials; MAISA GELN Early Literacy Task Force, 2016) to create codes that allowed us to quantify how many recommended instructional practices were present in enacted instruction.
These practices were selected because the larger study was focusing on coaching work, which centers on this teacher practice guide (see Literature Review for more information on the development of this guide). After the codes were created and piloted over time with sample instructional videos, we engaged in an in-depth analysis of each RBG3 project observation using the a priori coding scheme. Coding in this way allowed our team to systematically evaluate the enacted curriculum (Stein et al., 2007). To evaluate the written curriculum, the same coding scheme was used. Using the same coding scheme was necessary in order to compare enacted instruction to the written curriculum. During coding, qualitative notes were taken in the form of recorded quality descriptors and notes describing practices as they were enacted or as they were written in the curriculum. Additionally, analytic and researchers’ memos were recorded during coding (added to throughout and at the end of each session) to capture data that would otherwise be lost when numeric data was assigned to a score. These memos focused on capturing evidence for quality scoring and capturing any initial noticings, for instance, noting that one curriculum had consistent callout boxes with suggested language supports for students while another had callout boxes and guidance for supporting students’ writing work throughout the lesson. Since there was a large volume of data, approximately 2300 minutes from 49 separate observations of teachers’ classroom instruction during the COVID-19 pandemic-impacted school year along with 18 units of curriculum, having a way to reduce the data to numerical scores helped guide the analysis, and the qualitative notes and memos helped with interpretation of the quantitative data. More information about how these methods were employed to answer the research questions in this study is contained in this chapter, broken out by the question they answer.

Developing the Coding Scheme

For the larger study, we created measures to examine the content and quality of literacy instruction in the classroom videos to better understand coaching and its impact on classroom instruction. Since coaches and teachers for the study were in Michigan and engaged in a larger early literacy coaching initiative underway as part of the RBG3 law implementation, we knew that coaches grounded their work in the K-3 Essentials (MAISA-GELN ELTF, 2016). We also understood that the goal was for teachers across the state to enact these practices with support and professional development from coaches (for more on the K-3 Essentials, see the Literature Review chapter). Therefore, after engaging in each of the provided educational modules for all ten essentials, we created a coding scheme to measure the presence and quality of each essential instructional practice. This coding scheme measured the presence of each instructional practice at the bullet level, as well as the quality of implementation, thereby capturing and scoring teachers’ instructional practices broadly in literacy. When discussion or debate came up while creating and piloting the protocol (for instance, about what a particular essential and bullet emphasized most), the project PI, Dr. Wright, one of the authors of the Essentials document, helped resolve the debate and approved refinements and nuanced interpretations.
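To make the structure of these scores concrete, the sketch below shows one way a single Essential bullet's scores could be recorded so that presence is tracked separately from quality and absence is distinguishable from a low rating. The class and field names here are hypothetical illustrations rather than the actual coding database described in Appendix B, and the sketch assumes the five-point quality scale discussed in the next paragraph.

```python
# Minimal sketch (hypothetical names) of recording presence separately from quality.
from dataclasses import dataclass, field
from statistics import mean
from typing import List, Optional

QUALITY_LABELS = {1: "Beginning", 2: "Developing", 3: "Proficient",
                  4: "Strong", 5: "Exemplary"}

@dataclass
class BulletRecord:
    """Scores for one Essential bullet within a single observation or lesson."""
    quality_ratings: List[int] = field(default_factory=list)  # one rating per observed presence

    def record(self, rating: int) -> None:
        """Log one instance of the practice with its 1-5 quality rating."""
        if rating not in QUALITY_LABELS:
            raise ValueError("Rating must be on the 1-5 quality scale")
        self.quality_ratings.append(rating)

    @property
    def presence_count(self) -> int:
        """How many times the practice appeared in this observation or lesson."""
        return len(self.quality_ratings)

    @property
    def average_quality(self) -> Optional[float]:
        """Average quality across presences; None marks 'not present' (vs. a low score)."""
        return mean(self.quality_ratings) if self.quality_ratings else None

# Example: estimated spelling supported twice in one lesson, rated 3 and 4.
estimated_spelling = BulletRecord()
estimated_spelling.record(3)
estimated_spelling.record(4)
print(estimated_spelling.presence_count)   # 2
print(estimated_spelling.average_quality)  # 3.5
```

A record like this corresponds loosely to the per-bullet presence counts and averaged quality scores entered in the coding spreadsheet described later in this chapter.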
This series of protocol development and pilot coding discussions ensured the alignment of this coding scheme to the K-3 Essentials (MAISA-GELN ELTF, 2016). After developing the coding scheme and before enrolling participants, we scored practice videos of literacy instruction from early elementary classrooms. To adapt and refine the coding scheme, we double coded all videos and met twice weekly to discuss areas that were difficult to score as well as how we needed to refine the coding scheme to be more usable. One example of a way the scheme was refined early on, before we received participants’ videos, was that initially the scoring protocol provided no way to tell the difference between a practice that was not present and a practice that received a low score. To better represent the quality of a teacher’s instruction, we changed the coding scheme to account for presence separately from the quality rating. The quality ratings were created on a five-point scale with scores of 1 (Beginning), 2 (Developing), 3 (Proficient), 4 (Strong), and 5 (Exemplary). Video of lessons from all of the observations was coded using this coding scheme to check provided instruction against the K-3 Essentials.

Codes Used for This Study

For this study, I used the six coding categories that were most directly related to writing instructional practices. As a result, I was able to quantify enactment of recommended writing instructional practices across the recorded instruction of twenty-five teachers, as well as the quality of that enactment. By using these same six coding categories, I was also able to quantify the presence and quality of recommended instructional practices from the K-3 Essentials across four separate writing curricula adopted and provided by the four school districts where those twenty-five teachers worked. Using the same codes for the curriculum coding as for the video data coding was necessary to compare teachers’ enacted instructional practices to those practices contained in the recommended teaching sequences within the curricular materials. Neuendorf (2002) states that there is a typical set of steps to follow in a content analysis, from theorizing to tabulation and reporting (pp. 49-51). These steps include beginning with a sound rationale, theoretical grounding, and a strong conceptual framework (see Literature Review chapter). Additional steps include conceptualizing decisions, operationalizing measures, selecting/identifying a coding scheme, sampling, training and reliability, and tabulation and reporting. These additional steps required for a sound content analysis guided this descriptive study and are detailed in the sections that follow.

Participants

Remillard (2005) and Stein and colleagues (2007) both theorized that the relationship between curricular materials and the teachers who use them is complex and influenced by multiple teacher factors (e.g., identity, teacher pedagogical content knowledge). Since teacher enactment of curriculum is mediated by various factors, in this section I have provided some baseline demographic data on the teachers in this study, as well as some general demographics about the districts they teach in. It should be noted that school-level data are not available because, within districts, the participants for the larger study were sometimes working in different schools.

Teachers

In this study, the participants were twenty-five teachers recruited for the larger RBG3 project on literacy coaching.
To recruit these teachers, we first began by interviewing and recruiting ISD literacy coaches. Once coaches were identified, teachers were recruited from one district within an ISD by their early literacy coaches. The teachers were recruited based on their willingness to be coached by the ISD Early Literacy Coach and based on having a matched pair at the same grade level and in the same building. While this resulted in all grades K-3 being represented, not all grades were represented in each district, and the total number of teachers in each district varied as some dropped out before the study was complete. Additionally, participants were not racially or ethnically diverse, as all participants identified as white and female. The participants had varied experience levels, ranging from early-career teachers in the first five years of classroom teaching to those who had over thirty years of teaching experience. For a full breakdown of teacher demographics, please see Table 2 below.

Table 2
Participant Demographic Data

Characteristic                              Total Sample (N=25)
Gender
  Female                                    100.0%
Ethnicity (self-identified)
  Caucasian                                 100.0%
Teaching Experience (average years)         16.9
Teaching Experience (range in years)
  0-3 years                                 0%
  4-9 years                                 26.1%
  10-20 years                               45.2%
  21-30 years                               19.2%
  Over 30 years                             9.5%
Highest Degree Earned
  Bachelor's Degree                         52.0%
  Master's Degree                           52.0%
  Education Specialist Degree (Ed.S.)       4.0%
Teaching Endorsements*
  Early Childhood                           48.0%
  English as a Second Language              16.0%
  Language Arts                             12.0%
  Reading                                   4.0%
  None                                      32.0%
  Other                                     16.0%
Current Grade Level
  Kindergarten                              8.0%
  First Grade                               48.0%
  Second Grade                              24.0%
  Third Grade                               20.0%
Participants' District
  District A                                36.0%
  District B                                24.0%
  District C                                24.0%
  District D                                16.0%
Writing Curriculum
  Curriculum 1                              36.0%
  Curriculum 2                              24.0%
  Curriculum 3                              24.0%
  Curriculum 4                              16.0%
Note. *Total will not equal 100% because some teachers held more than one endorsement.

As shown in Table 2 above, the participants in this study all self-identified as female and Caucasian. Teachers worked in four different public school districts, geographically spread across the Lower Peninsula of Michigan, that served diverse populations of children. Districts varied in the makeup of their student populations.

School Contexts

As a result of COVID-19 in-person school closures, instructional formats also varied between districts. One district had one set of videos that used fully virtual instruction, as it was providing virtual instruction only until February of 2021. The other three districts were using hybrid instruction for the first round of video data collection. Hybrid instruction in these districts comprised either teaching half of the children at a time in person or teaching some children in person while others attended online. One teacher moved from in-person hybrid instruction to a fully virtual format between the first and second rounds of video data collection, and one teacher dropped out of the study between the first and second rounds of data collection. Since all four districts used a hybrid instructional model at one point or another in the school year, at one point many of the study’s teachers were teaching about half as many children in person as they would usually be instructing. This was particularly true for the first set of video observations collected. However, some also taught children online simultaneously or recorded videos of their live classroom instruction for other children to watch at a later time, asynchronously.
By the second round of data collection, in May of 2021, three out of four districts had returned to a fully in-person instructional format with full class sizes.

Table 3
Demographic Data by School District, Represented in Percentages

                                 District A     District B     District C     District D
Total Students                   3763           769            20873          2517
Ethnicity*
  White                          87             72             94             86
  African American               5              2              3              3
  Hispanic                       4              20             2              6
  Asian                          3              <1             1              1
  Native Hawaiian or Other
  Pacific Islander               <1             0              <1             <1
  2 or more races                0              6              <1             3
Special Education Status         15             12             9              12
English Language Learners        0              12             47             1
Free and Reduced Lunch           50             65             76             57
Geographic Description           Suburban       Rural          Urban          Rural
Writing Curriculum               Curriculum 1   Curriculum 2   Curriculum 3   Curriculum 4
*Percentages may not add up to 100 due to rounding to the nearest whole percent

Data Collection Procedures

For this study, some data sources came from the larger teacher study. Those were the teacher background surveys, teachers’ videos of instruction, and post-video surveys from the larger study. Based on writing curriculum usage reported by teachers in their background surveys, I gathered curricular materials from the four different curricula teachers reported using in their districts. Those materials are the second main source of data for this study. To explain the differences between the coding and methods employed for each research question, I describe data collection, sampling, coding, and analysis for each research question separately.

Part One: The Curriculum Materials

The first research question of this study, focused on curriculum materials, was born naturally from a noticeable difference between what was contained in teachers’ district-provided curriculum materials and, to some extent, the instruction observed when coding the videos. To answer the first set of research questions, I employed a mixed-methods approach, beginning with a content analysis. The research questions for the first part of this study were:
1. How can recommended writing instruction be described across four writing curricula?
   a. What effective writing practices were contained in these curricula?
   b. How did recommended writing instructional practices compare across the curricula?
As explained in the literature review and the context for the current study sections above, I used the K-3 Essentials (MAISA-GELN ELTF, 2016) to define research-based recommendations for instructional practices for teaching writing in K-3 classrooms. As a reminder, this document uses Essential to refer to ten overarching practices in literacy that teachers should employ, based in the research literature. Each essential practice then includes sub-practices in bullet form. For this study, I used the instructional practices from Essential 6, which covered writing instruction overall and had five bullets breaking down various parts of the writing instruction teachers should provide. I also included one instructional practice from Essential 4, which focused on phonological awareness and on using phonological awareness to support estimated spelling. These practices were: estimated spelling, interactive writing, daily time to write aligned to motivation and engagement, writing process and strategy instruction, use of mentor texts, and, finally, explicit instruction in component skills (e.g., handwriting, spelling). I have not included the five instructional practices in Essential 5, which focuses on teaching letter-sound relationships and could arguably be included.
I left these practices out because the bullets for that Essential seemed to focus more on decoding than encoding. The encoding process, where children use letter-sound knowledge to inform spelling and their writing, is captured in other bullets in Essential 4 and Essential 6, which were included in this study. I also did not use Essential 1 on its own, except as it related to Essential 6, Bullet 2, which was the instructional practice of providing daily time for writing aligned to Essential 1. This Essential was captured as it pertained to writing in the coding scheme for Essential 6, Bullet 2. To see the full coding scheme, please see Appendix B. To see the full text of each Essential used for coding instructional practices in this study, see Appendix C, where alignment to the IES Practice Guide is shown, or find out more at the link provided in the References. Using the same protocol that the larger Read by Grade Three (RBG3) study used for video coding (see below for a detailed explanation of how the coding scheme was developed), I analyzed the content of the curricular materials teachers were provided by their district. While this question employed a similar design and logic to the next question about teacher practices, the data sources, sampling plan, and coding and analysis are somewhat unique and thus are explained in detail below.

Data Sources

This question drew upon multiple data sources but primarily focused on coding of one source: the curriculum materials. Codes were created based on the K-3 Essentials (MAISA GELN Early Literacy Task Force, 2016). The four curricula examined were blinded and are referred to in this chapter as Curriculum 1, Curriculum 2, Curriculum 3, and Curriculum 4.

Background Surveys. For the larger RBG3 study, teachers were asked to complete a background and a post-video survey. In these two surveys, teachers identified the curricular materials they used for all of the subject areas they teach. For instance, teachers were asked to list math, science, reading, and writing curricula all separately. For some teachers, the responses were the same across ELA subjects (i.e., they used their basal to teach all ELA subjects); for others, they were not (i.e., they used separate writing and reading programs). The curricular materials listed by teachers across each district were consistent, implying that these were district-provided materials. To be sure, I checked each district’s website to see what curricula were listed. If I was unable to find it there, I checked with the ISD Early Literacy Coach to confirm teacher responses. The four curricula teachers reported using were blinded for the purposes of this study, as information about these curricula would make the participating districts and literacy coaches identifiable. They have been matched to their district in Table 4 below. Teachers also reported which unit of the curriculum they were using. I recorded these data to inform my sampling (see the Sampling Plan section below).

Table 4
Curriculum Materials by District

District     Writing Curriculum    Percentage Using Curriculum    Curriculum Rated at Time of Study
District A   Curriculum 1          36.0%                          Yes
District B   Curriculum 2          24.0%                          No
District C   Curriculum 3          24.0%                          Yes
District D   Curriculum 4          16.0%                          No

Coded Curricular Materials. The second and main data source used to answer this set of research questions was the coded data from the writing lessons sampled from the four different curricula.
These lessons were coded at the same unit of analysis, the Essential and bullet level, as the video observation data. As shown in Figure 2 below, the database spreadsheet was laid out to capture the curriculum name, coder ID, district, grade level, type of curricula, whether the curricula were old or new, the unit number and/or genre, total lessons, whether the teacher uploaded content-area video, and then quality scores and presence scores for each of the instructional practices. Columns L-X capture the main data source for the quantitative analysis used for this question: the count of the total presence of that bullet for that lesson and the quality score averaged over each presence. Below, I provide an explanation of how the coding scheme was developed, how coding was performed, and an overview of each of the four curricula.

Figure 2
Spreadsheet of Curriculum Scores with Example Scores

Curriculum Information. In addition to understanding what the district-provided curriculum materials were, I needed to better understand what units of instruction teachers were using from the curriculum to determine what lessons to analyze out of the entire year’s lessons provided by each curriculum. To that end, teachers also had the opportunity to share what curriculum materials they were using in their lessons after each observation. Some teachers replied by sharing a specific lesson and unit (e.g., Unit 5, Day 4), while others provided a more general response (e.g., Curriculum 4, Nonfiction Unit). This variation in response made sense given how the materials were set up, their overarching curricular goals, and how the materials were intended to be used. To better understand this variation, it helps to understand how the materials differ. All four curricula were gathered for kindergarten through third grade for initial analysis before coding. Materials for Curriculum 1 and Curriculum 4 were gathered through free and open access online in March of 2021. Curriculum 3 was accessed through an online version of the teacher and student materials, and training was provided by the ISD early literacy coach in the district in March of 2021. Curriculum 2 materials were purchased on Amazon in March of 2021. Purchase of those materials was made possible through generous hard-cost dissertation funding provided by the Michigan State University College of Education.

Curriculum 1. Curriculum 1 (most recent publication date of 2017) was a comprehensive curriculum covering the entire English Language Arts block. This curriculum included a recommended one-hour block of instruction on content-based literacy and one hour of structured phonics and foundational skills for grades K-2. All grades K-3 had content-rich literacy lessons for four different thematic units on topics such as pollination (grade 2) spread across the year. These four larger units were broken into three sub-units that covered approximately two and a half to three weeks each and integrated content-area learning. They were centered on guiding questions and big ideas, and the final sub-unit within a unit promoted synthesis of unit goals. It also required students to use writing to create a final end-of-unit product. Curriculum 1 also employed trade books and had a social-emotional component.
In each sub-unit, students learned character lessons, including things such as “working to contribute to a better world.” In the pollination unit referenced above, the lessons set students up to work to make their community better for pollinators (the content area focus of the lesson). Finally, Curriculum 1 prescribed an additional one hour of time for small group work daily. This time allowed children to explore, engineer, create, imagine and research. According to EdReports, Curriculum 1 met all expectations for meeting CCSS for 57 ELA, text quality and building content knowledge (the three ways that EdReports ranks all curriculum). Lessons in this curriculum were not fully scripted and it was open access, meaning anyone can use the entire curriculum for free. This curriculum was newer to the district using it. Curriculum 2. Curriculum 2 (publication date of 2012) was a system for teaching writing composition to students but was not a comprehensive curriculum with daily lessons. It prioritized building teacher knowledge and lessons were not scripted. In fact, the opening pages of the first teacher manual emphasized that teaching children to effectively compose is 1/13 each teacher’s responsibility (one teacher for each grade across grades K-12). The manual’s opening chapter also states that student writing will improve if teachers have knowledge and skills related to writing, along with five other core principles. The authors of this program argued if one teacher does not do their part, there will be more for the next teacher to carry. At the time of this study's completion, this curriculum was not found or ranked on any of the sites used for this study to measure curriculum against standards or for impact on student achievement (EdReports, ESSA, or the What Works Clearinghouse). Additionally, it was not knowledge-building through content integration. This curriculum only covered composition focused writing. In all, there were five separate teacher’s manuals to use over the year, beginning with one to orient the teacher and students to writing in their classroom. The other four manuals targeted four main areas: narrative, opinion/argumentative, informational writing, and response to texts. The introductory manual said the four other manuals could be used flexibly across the year. These curricular materials were educative and began with an overview that aimed to build teacher pedagogical content knowledge. This curriculum was designed to be used for people who have received training in a specific model of instruction meant for use beyond writing. To support teacher uptake of the curriculum approach beyond just providing the teacher manuals, 58 this program used a train the trainer/site-based model of delivering curriculum pedagogical content knowledge. It is not clear if teachers in this district have received training in this model recently or at all. According to the program, the curricular materials could be used as a core-writing program or as a supplement to other curricula. Each manual contained all the grade levels K-8 and each of the four instructional modules was designed with five components: focused modeled writing, mini-lessons, analytic rubrics, unassisted writing, and self-assessment of implementation (by teachers at a grade level, or across grade levels in a school). The teacher’s manual encouraged teachers to provide models of effective writing five days per week in kindergarten and three days per week in grades beyond that through eighth grade. 
The lesson plans contained teachers modeling how to write to a specific prompt during mini lessons. During mini lessons, there were activities to practice for students. Those activities ranged from 5-20 minutes according to the curriculum’s self-reported timing and mostly contained independent writing time for students. According to the manual, the time should increase over the course of the school year, with teachers modeling for five minutes and children doing “focused journal writing” for five minutes to start the year but eventually, it suggests, they might work their way up to forty minutes of “focused journal writing.” For this curriculum, “focused journal writing” means writing to a prompt which is the only type of writing contained in any of the K-3 sample units. Of note, upon analysis it appeared that 5-20 minutes may not be enough time for teachers and children to accomplish all of the steps listed in a day (i.e., sometimes more than eight or nine), so it is unclear if the materials could be implemented as laid out. Finally, there were analytic rubrics, rubrics for self-assessment, targets for children, and citations from noted early literacy researchers (i.e., Donald Graves, Marie Clay) to cite what 59 early elementary children should be doing and how they should be developing. Notably, it was recommended that children use journals and graphic organizer journals as an extension of their writing time and at another time of day. It was noted that this journaling time should not replace the focused modeled writing instruction called for in the teacher’s manual. Curriculum 3. Curriculum 3 (edition publication date of 2019) is a comprehensive ELA curriculum that served to address ELA instruction broadly across grades K-6. The company marketed the curriculum as knowledge building. EdReports ranked Curriculum 3 as partially meeting expectations for building content knowledge and fully meeting expectations for standards alignment. Similar to Curriculum 1, Curriculum 3 contained components for foundational skills (phonics beginning with simple spelling and moving into multisyllabic word work and morphology). The curricular materials also had alignment within and across grade levels. Each unit had a three-week focus on content or a topic and that was similar across grade levels. For instance, the first unit of the year might focus on the same topic, technology, in different ways across grades K-6. The curricular materials also included trade books and books for engaging in small group lessons written by a group of authors the company says were diverse. These curricular materials were available in both English and Spanish. The curricula had scripted prompts and statements to say to students for the teacher and served as a step-by-step lesson plan, not a general guide or system (like Curriculum 2). Additional teacher professional development books were available as part of the curriculum set but those were separate from the text that contained the daily lesson plans. The publishers considered varied instructional blocks. Sample plans were provided for 90-, 120-, and 150-minute blocks. These blocks included small group reading and writing composition instruction, grammar/writing, whole group reading instruction, and foundational 60 skills times (e.g., phonics). Of note, the daily small group writing instruction did not have a set time in any of the lessons analyzed for this study. 
The task children were supposed to complete in that independent writing time varied from day-to-day and may have required larger amounts of time than would have existed in the sample block schedules provided. Discussion was also a focus of the curriculum with text-talk prompts provided and text sets geared toward social- emotional learning (SEL) concepts so teachers could integrate SEL. The curricular materials provided over 120 leveled texts, readers theater texts, trade books for whole group reading, big books for repeated reading, big books of poetry, and decodable readers for students. Notably, the curriculum boasted a famous reading researcher as one of the consultants/authors. This was the first year the district was using this curriculum with all teachers. They had piloted this curriculum in a few buildings the year before. Teachers were supported with training to learn how to use it from their building literacy coach (the same person who was their ISD early literacy coach). Curriculum 4. Curriculum 4 (edition updated in 2014) consisted of process-based writing units that were also open access and freely available online. From the Literature Review chapter, process-based writing units are characterized by a few different practices including but not limited to, providing daily and ample time for students to write, writing for an authentic audience and purpose, moving flexibly through the iterative writing process throughout a unit, and individualized support (Troia, 2014). Each grade level K-3 had between 6-8 units. Some of the Kindergarten units began with oral language instruction and there were more units. The third- grade units had fewer and ended with informational research writing. Curriculum 4 was scripted, and publishers considered it educative, meaning that they provided a supportive script for teachers to employ if they need it, with the stated goal of 61 teachers needing the script less over time. This aligns to the stated goal of Curriculum 2 – building teacher pedagogical knowledge around writing instruction. The units averaged twenty lessons for each unit across the year for each grade, K to 3. The units begin with kickoff unit to orient students and set up a writing workshop model. The teacher followed a similar format in each lesson. Lessons began with a connection to a previous lesson, followed by a minilesson that included brief guided practice for under ten minutes, and then children wrote on their own. While children wrote on their own, the teacher met with children individually or in small groups to support their movement through the writing process. The units began with a unifying purpose and ended with children revising a piece to publish for a specific audience. Often, the units tried to connect to children’s motivation by providing a purpose for writing beyond the assignment and invitations to outside audiences as a culminating activity. Sometimes the culminating activity included an option to publish the work outside of their classroom as well (e.g., mail a letter, write a book for the school library). The lesson plans provided tips for the teaching content for the small group and individual work, as well as teaching tip sidebars and teaching tips to reinforce the day’s lesson objective by stopping children’s independent writing to draw attention to them. The lesson included a structure that allowed children to flexibly work through the writing process. There was also a component of self-reflection where children were asked to evaluate themselves. 
The lessons were structured to support students to engage in a full cycle of the writing process in multiple genres across the school year. These units were not ranked on any of the sites that check curriculum for alignment or impact (EdReports, ESSA, or the What Works Clearinghouse).
Sampling Plan
Now that I have described some of the characteristics of the curricula used in the four districts in this study, I will describe how I sampled from the abundance of resources the curricula offered. In order to answer the third research question, comparing teachers' enacted practices with the written curriculum (see Stein et al., 2007, for the conceptual framework for this question), I sampled the teacher's manuals for the grade levels that each teacher in the study taught. Since no teacher administered any writing assessments, assessments in the units were not analyzed, though there is evidence that assessments influence teacher instruction, including content and, most notably, time spent on assessed subject areas (e.g., McMillan et al., 1999; Taylor et al., 2002). Arguably, the teachers' manuals provided the best understanding of what evidence-based instructional practices for writing the curriculum recommended. Sampling of the teacher's manuals included materials referenced in them within the units analyzed (e.g., student worksheets, graphic organizers) from each curriculum. This sampling is consistent with other content analyses of literacy curricula (e.g., Gabas et al., 2023; Wright & Neuman, 2013).
Within the curricula, I used the following sampling plan and rationale. In order to answer my research questions, I sampled units of lessons that best allowed me to draw comparisons between the curriculum materials and the instruction. To do so, I needed the teacher's instruction to overlap with the materials I analyzed, when possible. This deviates slightly from other content analyses of writing curricula, where initial and final units were excluded from analysis (e.g., Gabas et al., 2023). Using post-video survey data, when possible, I determined the lesson the teacher was teaching and then selected that entire unit of instruction. I repeated this for all twenty-five teachers (see Appendix E) for the first and second rounds of observation data. For example, in the district using Curriculum 3, all of the teachers responded in their post-observation survey that they were teaching Unit 5. Therefore, Unit 5 was included in my coding. However, there were still far too many materials and components of the comprehensive curricula and the writing curricula to sample every material in an entire unit. While sampling, I included the units that teachers said they were teaching in their entirety, but if a curriculum referenced an additional material (leveled readers, for instance, to be used in small groups), I did not analyze those beyond looking at them to see if they incorporated writing instruction or had other unique attributes that were included in the coding for curriculum (i.e., meaningfully incorporated varied cultures or linguistically diverse texts). The same type of brief analysis was used for trade books used in the lessons. To summarize, I sampled and analyzed entire units of lessons in the district-provided curricular materials at each of the participants' grade levels and for all observations recorded, meaning some curricula did not have units sampled for all grade levels K-3. This sampling plan worked well for three of the four curricula (Curriculum 1, 3, and 4).
However, Curriculum 2 provided a unique analytical challenge, since the four genre-based curriculum teacher guides were meant to be used more flexibly and as a guide to inform teacher practice, not prescribe daily lessons via scripted plans. Like Curriculum 4, teachers could use information and rubrics from the genre-based curricular materials to plan their own lessons that met students’ needs during writing instructional time. Like Curriculum 1 and Curriculum 3, the writing opportunities could have been supportive of content area learning since they did not have specific content knowledge the units were covering or frankly much content beyond the prompt and parts of the writing process. However, unlike these other curricular materials, Curriculum 2’s genre-based teacher guides do not have a set scope and sequence of lessons but have suggestions, 64 graphic organizers, examples of student work, sample lessons that include set steps and suggested teacher language, and other teacher educational materials. Since there were sample lessons, for the purpose of this study, I used these lessons for each grade level where teachers were observed in this district to provide the best comparison to the other curricular materials. Perhaps the most important justification for this sampling decision was that the included sample plans should arguably provide emphasis on instructional practices they would like teachers to employ or that are integral to implementation of their instructional program. While this was not a perfect solution, it was the best way to compare teachers’ enacted instruction and the instructional practices contained in the curriculum materials. While Curriculum 2 proved challenging to sample, the other three curricula were fairly straightforward. All lessons in each unit were included in the sample because each one had the possibility to contain writing instruction when considering explicit instruction in component skills and interactive writing. Another factor I had to take into account when sampling was that not all units had equal numbers of lessons. In order to not give more weight to one curriculum over another based on the unit length, I chose to account for separate instances of presence but not report it and consider it in the quality rating instead. For example, in Curriculum 1 for one second grade sub- unit (a sub-unit is three weeks of instruction), there were twelve lessons. In Curriculum 3 for Unit 5, the total was more than double those of the two other curricula with fifteen lessons in all. Curriculum 4 units had over twenty lessons in one unit. However, because of the varied number of units analyzed the number of total lessons analyzed ended up being similar for each curriculum since sampling within each curriculum was determined based on participants. 65 If, as was the case in some districts, there was not a teacher at every grade level Kindergarten to third grade participating in the larger study, the writing curriculum for that grade level was not analyzed for that district. Additionally, I did not include some supplemental writing lessons, such as spelling, handwriting, and word study in this sampling plan unless teachers reported using them as part of their writing curriculum. For instance, teachers using Curriculum 3 reported using all components as their writing curriculum in post-observation surveys. Therefore, all components of a unit of lessons were analyzed because there were parts of lessons that contained writing. 
However, teachers using Curriculum 1, which also had a supplemental foundational skills curriculum, did not report using the supplemental curriculum. As a result, the supplemental components of this curriculum were not analyzed. This decision was made because, when looking for the possible influence of curriculum on teachers' practices, if a teacher did not list a supplemental curriculum resource, I assumed it was not influential on their practice. Curriculum 2 and Curriculum 4 units had no handwriting or spelling components, and teachers in the districts using these curricula did not list additional writing curricula for these component skills in their post-observation surveys. This difference between curriculum materials is accounted for in the results and is considered in the discussion as well.
Coding and Data Analysis
To analyze the curriculum, I completed multiple analytic steps that are explained in this subsection. Some of these steps allowed for quantitative comparisons between different curricular materials and the recommended instructional practices. Others allowed for more qualitative comparisons between curricula. These methods were used iteratively, with each round of analysis building on the last, as explained below.
Curriculum Reports. For the first layer of analysis, I looked up the curricula and began learning about them, including reading the publisher's description of the materials when applicable. I also looked up each curriculum on EdReports and in the What Works Clearinghouse (WWC) to see if they were considered high-quality curricula. Some of the curricula were evaluated and others did not have data available. None of the curricula were evaluated on Evidence for ESSA or the WWC. However, two of the curricula were evaluated on EdReports, which is an organization that analyzes curriculum for text quality, CCSS alignment, and whether the curriculum is knowledge building. Table 5, summarizing the evaluation data, is below.
Table 5
Curriculum Evaluations
Curriculum      WWC    Evidence for ESSA    EdReports
Curriculum 1    NA     NA                   Orange: partial alignment to CCSS; strong text quality; partially met knowledge-building expectations
Curriculum 2    NA     NA                   NA
Curriculum 3    NA     NA                   Orange: strong alignment to CCSS; strong text quality; partially met knowledge-building expectations
Curriculum 4    NA     NA                   NA
As shown in Table 5 above, Curriculum 1 and Curriculum 3 were both rated on EdReports at the time of publication of this study. According to EdReports, Curriculum 1 was ranked for the edition analyzed for this study (2021 edition). Curriculum 1 was rated as strong for text quality, partially met expectations for alignment to the CCSS, and partially met the requirements for knowledge building. Curriculum 3's text quality and alignment to the CCSS for ELA were strong; however, it only partially met expectations for knowledge building. More specifically, Curriculum 3 ranked highly on indicators of quality for writing, including that it provided evidence-based writing instruction across multiple genres. At the time of this study, the other two curricula, Curriculum 2 and Curriculum 4, were not ranked in EdReports, Evidence for ESSA, or the WWC, so I have not reported ratings for either of them.
Selecting Specific Curriculum Materials. The next analytic step was to look across all of the teacher materials, including the teacher manuals and supplemental materials, for each curriculum.
To best understand the instructional practices used within district-provided curricula, as explained in the sampling plan, I first needed to understand what the curriculum materials included. This meant reviewing the publishers' websites to look at what each curriculum package came with when purchased. Once I understood what the materials included, I was able to determine that coding the lesson plans in the teachers' manuals would be the most effective way to determine what instructional practices the curriculum materials contained and emphasized.
Coding Curriculum Materials. After determining what specific materials from each curriculum to analyze (see the Sampling Plan subsection above), I coded each entire unit using the coding scheme the larger project developed and used to code videos (explained thoroughly in the Data Sources subsection of Part Two: The Instruction, below). The materials were analyzed thoroughly, including all parts of a lesson (e.g., steps, scripts, callout boxes, links to student workbook pages). The materials for each lesson were coded for presence and quality, meaning a quality score of 1-5 was assigned using the guidance in the coding protocol (see Appendix B). A summary coding score was given for the entire unit, which I explain further with examples below. This coding process was the same process our larger team followed for teacher video submissions, which is explained in the subsection on teacher video later in this chapter.
While the materials were coded by lesson, they were given a summary quality score for the entire unit. Presence was marked for each lesson a practice appeared in, to provide a sense of saturation of the practice within the materials, but it was not marked for multiple appearances within the same lesson. For instance, if a teacher's manual included directions for the teacher to use a mentor text in four separate lessons, it was marked as a four for instances but a one overall for presence. Marking each lesson in which a practice appeared was one way that coding curricular materials differed slightly from coding the video data, because instances in the materials were not time coded but rather coded as one instance per lesson. Video data also marked instances to consider saturation, but these were not totaled the same way they were for lessons. After coding each instance of a practice, to arrive at a quality score using the protocol found in Appendix B, all of the instances of the instructional practice across the unit were considered using the information contained in the notes column, and then a summary quality score was assigned for the practice overall. For example, in one Curriculum 1 unit, a mentor text was used four separate times across 12 lessons. Once, the instructional practice was used in connection with a reading lesson; twice, a mentor text was used in two separate lessons later in the unit while students were writing their own texts; and finally, in one lesson the teacher produced a model text for students to reference when writing their own, which served as a mentor text. These four instances were the only times a mentor text was used across the unit. In this case, the instructional practice of using a mentor text was marked present, four instances were marked, and the quality was scored overall as a 3, taking all four instances and their quality into account using the coding scheme found in Appendix B.
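To make the shape of these unit-level records concrete, the sketch below expresses the mentor-text example above as a simple data structure. It is offered only as an illustration: the actual coding was recorded in Excel spreadsheets, and every field name and label here is invented for the example rather than taken from the study's codebook.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PracticeUnitCode:
    """Hypothetical record for one bullet-level practice within one curriculum unit."""
    curriculum: str          # blinded curriculum label
    unit: str                # unit identifier
    practice: str            # K-3 Essential practice at the bullet level
    present: int             # 0/1: was the practice found anywhere in the unit?
    instances: int           # number of lessons in which the practice appeared
    quality: Optional[int]   # holistic 1-5 quality score for the unit (None if absent)
    notes: str               # coder notes used to justify the quality score

# The mentor-text example described above: present in the unit, four instances
# across twelve lessons, and a holistic unit-level quality score of 3.
mentor_text = PracticeUnitCode(
    curriculum="Curriculum 1",
    unit="Grade 2 sub-unit (12 lessons)",
    practice="Use of mentor texts",
    present=1,
    instances=4,
    quality=3,
    notes="Once tied to a reading lesson; twice during independent writing; "
          "once as a teacher-produced model text.",
)
```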
In another example, there was a unit from Curriculum 2 where students engaged in an independent writing task one day and were prompted at the beginning of the lesson to remember to use appropriate conventions. As they were revising on a different day, they were again reminded to check their conventions (e.g., capitals, punctuation) while working independently. Since there were two separate instances of presence when considering explicit instruction in component skills, the practice was marked present, two instances were marked down, both instances of instruction were considered using the scoring notes, and a quality score of 1-5 was assigned using the guidance in the coding protocol (see Appendix B). Since both instances were lower-quality implementations according to the coding protocol, and they were the only two instances in the entire unit of eight lessons, the practice scored a one. An example of coding notes from one of the writing units is shown in Figure 3 below.
Figure 3
Sample Codes from Curriculum 3 Second Grade, Unit 5
The lessons within the unit were coded using a series of tabs on Excel spreadsheets. There was one tab for scoring, as shown in Figure 2. This tab housed the scores for presence, number of instances, and overall quality score for each essential at the bullet level for each curricular unit. One bullet's quality scores were averaged across instances for each lesson and unit, using the notes found in Column F. The notes in Column F included coder notes related to the evidence in the lesson of indicators from the coding protocol. For instance, when examining writing process and strategy instruction, the notes above indicate that the teacher provided clarity on what stage of the writing process students were in for some lessons but not all. The notes also indicate that the unit lessons did provide a strategy for each writing task children were asked to do in the daily writing, but not in other parts of the day that included writing (e.g., spelling lessons, grammar). Sometimes there were specific examples from the curriculum related to the notes in Column F that were important to capture for later analysis or as evidence for a particular quality score. When there were examples from the curriculum that helped justify quality scores or provided other deductive or inductive insights that might support later analyses, I saved these in the second tab of the same Excel sheet. There I saved screenshots and photos and added text explanations to each example to explain why it was saved. For an example of what this looked like, see Figure 4 below.
Figure 4
Examples for Score Justification from Curriculum 2, Kindergarten, Narrative Unit
Another example, from a second-grade unit in Curriculum 3, is a picture of a section of the day's writing lesson where students learned to capture notes to support their response to a shared text. The teacher is directed to conduct a think-aloud about how to fill in the response planning chart through the first body paragraph analysis; however, the chart is supposed to be displayed already filled in. I saved this because I wanted to note that the curriculum calls this modeling, but as it is modeled, the teacher does the work before meeting with students and then explains her thought process. This was different from the ways that other curricula included modeling, so I noted it.
In juxtaposition, a note in the examples tab for a Curriculum 4 fall unit for first grade includes an excerpt from the teacher’s modeling section of the lesson where the teacher thinks aloud in front of children and does the writing in front of the children during and after her think aloud. The third tab was for analytic memos (Saldaña, 2011). These memos were recorded whenever I took a break in the coding. I did this for two reasons. Most basically, it was so when I returned, I remembered where I stopped coding. Second, it captured a memory of what I was thinking and noticing. Overall, I used the memos if I noticed something that connected to video observations that I remembered. I also used it when looking for examples to use in the qualitative components of the results in combination with the notes in Column F of the first tab as well as notes housed in examples on the second tab. An example memo is shared in Figure 5. 72 Figure 5 Sample Researcher Memo from Curriculum Coding After the unit was fully coded and quality scores were assigned, the unit scores were entered into a separate excel database. This database was labeled with who the coder was, the grade level, the curriculum name, the district who used the curriculum, and then the presence and quality scores for each practice. Of note, zeros were often used in the curriculum database to indicate a practice was not present. This was done to indicate that the practice was coded for but was not in fact present and therefore did not receive a quality score. Scores of zero were used when completing counts to calculate presence scores for a practice but not when calculating quality score averages, so as not to influence the average scores. For instance, if there were 12 scores of zero, then presence was calculated by subtracting 12 from 49 (total observations), and then that number was divided by 49 to figure out the rate of presence. In this instance, the rate of presence would be represented by the equation 37/49 = 0.755, meaning 75% of teacher observations included that practice. In the findings chapter this rate is reported in decimal form in tables but is also discussed as a percentage, to indicate the number of teacher observations where the instructional practice was enacted. To ensure the reliability of the data, I trained a second coder to complete at least a 20% overlap, as is the standard for content analysis methodology when looking to ensure interrater reliability. To that end, the second coder evaluated 22% of the curriculum data and coded four units of instruction, one from each of the four different curricula. This coder was another 73 graduate assistant on the larger project and, as a result, was well-versed in the coding protocol used. The curriculum coding included three additional items not included in the larger study to address equity, diversity, and inclusion. Those items were qualitative notes and presence scoring for curricular resources or inclusion of language diversity, cultural diversity, or family diversity. We met twice initially so I could train this second coder on how the coding protocol would be different for the curriculum, and then we coded separately, coming together weekly for three months to compare our scores for presence and quality and resolve scoring conflicts as we worked our way through the units. After coding one unit from each of the curricula together, I checked for the coders’ percent agreement. 
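Before reporting the outcome of that agreement check, the two simple calculations referenced in this subsection (the presence-rate convention built on zero scores and the coders' percent agreement) can be made concrete. The short Python sketch below is illustrative only: the study's calculations were carried out in Excel and SPSS, and every value other than the 12-of-49 worked example above is invented.

```python
# Illustrative arithmetic only; not the study's actual toolchain (Excel/SPSS).

# Presence rate: zeros mark "coded for but not present." They count toward the
# denominator for presence but are excluded before averaging quality scores.
total_observations = 49
zero_scores = 12                      # observations in which the practice was absent
presence_rate = (total_observations - zero_scores) / total_observations
print(f"Presence rate: {presence_rate:.3f}")     # 0.755, i.e., roughly 75%

# Quality average: drop the zero placeholders so absences do not pull scores down.
quality_scores = [3, 0, 4, 0, 2]                 # hypothetical scores; 0 = absent
rated = [s for s in quality_scores if s > 0]
print(f"Mean quality (absences excluded): {sum(rated) / len(rated):.2f}")

# Percent agreement for double-coded units: share of coding decisions on which
# the two coders matched (hypothetical totals shown here).
total_decisions = 100
disagreements = 5
print(f"Percent agreement: {1 - disagreements / total_decisions:.1%}")
```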
Since we were using the same protocol used for the video data, on which Cohen's kappa had already been computed to assess our agreement and the reliability of the protocol, I checked for percent agreement. Across all units and measures, the two coders disagreed on instances of a practice, presence of a practice, or quality scoring only 5 times in total. That meant that agreement between the two coders across all points of the curriculum coded for this question was 95.4%; thus, I finished coding the remaining sixteen curriculum units myself. After double and solo coding the data, I continued analysis for this question. Since there was insight to be gained both through quantifying instructional practices and through providing qualitative descriptions, my analysis used a mixed methods approach, explained below.
Initial Analysis. I began my analysis by examining the presence and quality of each instructional practice within each curriculum overall. To do this, I first calculated the rates of presence for each instructional practice for each curriculum overall. This meant that across all analyzed units of one curriculum, I summed the number of units that had a practice present and divided by the total number of units examined to get the percentage of units with presence for the instructional practice at hand. For example, if Curriculum 1 had presence for estimated spelling in three units but not in the fourth, I calculated a presence percentage of 75 percent for that practice (i.e., 3/4 = 75%). I then ran descriptive statistics in SPSS to report on the frequency of instructional practices, organized by the K-3 Essentials at the bullet level, for each set of district curricular materials. From these overall presence scores, I was able to compare whether there was consistency or variation in the instructional practices present across specific curricula. Next, I analyzed the individual scores for instructional quality found in the different curricular materials. To do so, I took each of the individual quality scores from one type of curricular material (e.g., all Curriculum 1 units) and averaged them to get an overall quality score for each bullet. This analysis used the quality scores from each unit to find the mean quality score for each bullet for each of the four different curricula, again using descriptive statistics in SPSS. After doing this, I was able to compare the mean quality scores of one bullet across all four curricula. These mean quality scores were then compared across curricula in order to describe curriculum-recommended writing instructional practices and how they were similar or different across the curricula.
Second Round of Analysis. Further analysis required looking at each curriculum, and then each unit of instruction, as a separate case and using within- and across-case analysis to draw further conclusions. These within- and across-case analyses included qualitative comparisons between the writing instructional materials and plans found in the different curricula. As explained in Miles, Huberman, and Saldaña (2020), there are multiple paths one can follow to generate meaning from data. To begin, I engaged in noticing patterns and themes from the quantitative data. To do so, I employed counting and making contrasts and/or comparisons. After identifying the descriptive data for all the curriculum materials, I created tables with just the descriptive curriculum data for each essential.
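Before describing how those tables were organized, the aggregation described in the initial analysis above (the share of units in which each practice was present and the mean unit-level quality score per bullet for each curriculum) can be illustrated briefly. The sketch below uses pandas on a small invented dataset; the actual calculations were performed in Excel and SPSS, and the column names, unit labels, and scores here are assumptions made only for the example.

```python
import pandas as pd

# Invented unit-level codes: one row per (curriculum, unit) for a single bullet.
codes = pd.DataFrame({
    "curriculum": ["Curriculum 1"] * 4 + ["Curriculum 3"] * 3,
    "unit":       ["U1", "U2", "U3", "U4", "U5-K", "U5-1", "U5-2"],
    "practice":   ["estimated spelling"] * 7,
    "present":    [1, 1, 1, 0, 1, 1, 1],   # 0 = coded for but not present
    "quality":    [3, 2, 4, 0, 4, 5, 4],   # zero rows carry no real quality score
})

# Presence: share of analyzed units in which the practice appeared at all.
presence = codes.groupby("curriculum")["present"].mean()
# Curriculum 1 -> 3/4 = 0.75, mirroring the worked example in the text.

# Quality: mean of unit-level quality scores, excluding the zero placeholders.
quality = (codes[codes["quality"] > 0]
           .groupby("curriculum")["quality"]
           .mean())

summary = pd.DataFrame({"presence_rate": presence, "mean_quality": quality})
print(summary)
```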
I separated the tables by presence and quality because they told two different stories about instruction. Then, I separated the quality scores into tables for mean, median, mode, and range of scores from 1-5. To help draw out some of these patterns, I used highlighting to identify which were the high and low scores for each bullet for presence and for quality (see Figure 6 below). The table included overarching data about each curriculum and was organized so that curricula with similar characteristics were next to one another visually (e.g., both knowledge-building curricula were next to one another). Figure 6 Curriculum Data Table Example These tables allowed me to note patterns and themes before returning to analytic memos, coding notes, and the actual curriculum materials themselves. For instance, Curriculum 1 had some of the lowest scores across all measured areas for presence and quality (see yellow highlights in Figure 6). This was not a surprising outcome given that when I was analyzing the 76 curriculum, I was continually noticing and comparing one to the others I had analyzed. However, since Curriculum 1 varied in its implementation of writing instructional practices across sub- units, it was important to look holistically across all curricula via these tables to ensure I was as objective as possible when drawing conclusions. From these summary scores, I drafted thematic findings into memo form for research question 1. I then continued analysis to confirm or refine my thematic findings. For example, since Curriculum 1 had the lowest presence of instructional practices overall and also the lowest quality, I listed this as a finding but returned to the data to find an explanation for why this was the case. After comparing the scores for each bullet for each curriculum to one another and drafting initial findings, I also used the original excel database to identify high and low score cases for each curriculum. Once I identified high- and low-quality scores for each bullet from the curriculum database, I revisited the notes sheet for that curriculum unit. I reread the notes for that bullet and then compared it to the notes of other curricula. In the case that there were identical scores for a practice, I used two cases with the same scores. When I returned to the data, I asked myself a series of questions. First, did the numerical data in the descriptive tables represent the full picture? Next, did the numerical data point me to an explanation for why the curriculum scored what it did that might provide additional insights? Asking these questions often pointed me first to the coding notes, then to the examples and finally to the analytic memos. This process often helped to provide a rationale for why scores were lower or higher for one curriculum versus another. Once cases were identified via the scoring database, and I revisited the coding notes, examples, and analytic memos, I used these examples to confirm, clarify, or shift the findings I had written in my initial analytic memo for the research question. For instance, when examining 77 Curriculum 1’s overall score for presence and quality, the presence scores were lower than other curricular units and the quality scores for the instructional practice of daily time to write with attention to children’s motivation and engagement was lower. 
When revisiting the scoring notes, including the analytic memos in each unit for the curriculum, it was clear that the daily time for writing score was lower because of the lower number of instances provided for children to spend time writing, but when writing was present it also included some attention to motivation within each writing opportunity for children. Within the memo, I added examples and explanations from the coding notes to help explain some of the descriptive data. I also then shared examples of how the instructional practice looked at that specific quality score (e.g., a score of 4 for use of a mentor text meant...). Using the memos and examples from the instructional materials contained within each teacher’s manual allowed me to highlight differences and similarities to compare in a more nuanced way across curricular materials than analysis via only the descriptive scores would have allowed. For instance, two different curricula engaged students in the planning stage of the writing process but to better understand how they differed, I needed to re-examine the materials. Once I looked back at the coding notes for each case, it was clear that one of the curricula used a multifaceted approach (e.g., explicit modeling, a planning graphic organizer, and supportive conferring with students during their independent work), while the other had an approach that was single-faceted (e.g., the teacher showed the planning page to students and told them what to do for the first step, students filled the page in, the teacher then explained the next step, students did it, etc.). These types of qualitative comparisons were one way that analytic memoing was quite helpful because when a particular instructional sequence or resource stood out, it was often already noted in the memo. 78 To summarize, curriculum materials for four different writing curricula were examined for the presence of evidence-based instructional practices using codes for presence and quality, alongside other qualitative components (e.g., analytic memos). The data from those codes were used to describe instruction across six practices for all four curricula. Additional data, qualitative notes, memos, examples, were used to describe and analyze the similarities and differences between how curricula expected teachers to enact certain instructional practices. While the curriculum data tells one story about the intended writing instruction children in K-3 should receive, teachers’ enacted instruction tells another story. The methods for analyzing teachers’ observations are explained in the next section of this chapter. Part Two: The Instruction The second set of research questions was born out of a curiosity around how teachers’ writing instruction might have differed from classroom to classroom and district to district. To be able to answer the research questions in this study about teacher instruction, I employed mixed methods, including a content analysis of videos of teachers’ classroom instruction. The questions for this portion of the proposed study were: 2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic? a. What effective writing instructional practices were K-3 teachers using? b. How did enacted writing instructional practices compare across districts? The videos of instruction were provided by teachers and were from two separate time points during the school year. 
While this question and the others employed the same design and logic and used the same participants, the data sources, sampling plan, and coding and analysis were somewhat unique and explained below. 79 Data Sources To answer the second set of research questions, multiple data sources were required. Although answering this question relied on utilizing multiple data sources, the analysis primarily focused on coding one source: the video data of teacher instruction. Background Surveys. For the larger study, teachers were asked to complete a background survey in Fall 2020. This survey provided insight into who they were and what experiences they were bringing to their teaching and the study. This survey was created in Qualtrics and included questions that asked what grades teachers had previously taught, how long they had been teaching, demographic data about their identity, specialized endorsements they held, and the level of education they had attained. This survey also asked what curricular materials they used for each subject area they taught (including reading, writing, math, science, and social studies). Video Data. Teachers were mailed a Swivl, which included three microphones, the Swivl machine itself, and an iPad (7th generation) to put into the Swivl to capture the video. A Swivl is a robot that holds a recording device, like an iPad. The machine then turns and tracks the teacher to capture their instruction. The Swivl came with multiple microphones to place around the room and one microphone that enabled teacher tracking which cued the Swivl robot to turn in order to capture the teacher’s instruction. This microphone was called a marker. As a result of using the marker, the Swivl also captured teacher conversations with individual students and conversations between students from a set of microphones placed around the room. Teachers were sent directions and a manual explaining how to use the Swivl and what lessons to record, what artifacts to upload. If teachers were teaching virtually, as some were at each data collection point because of the circumstances of the COVID-19 pandemic, they were 80 sent additional guidance on how and what to capture on Zoom as well as where to upload the video. In the manual, teachers were directed to record themselves teaching any part of their day that constituted ELA instruction (e.g., reading block, writing instruction, morning meeting if it included ELA content) and one content area lesson (e.g., science or social studies). When teachers recorded their instruction, the Swivl automatically uploaded all of the videos they captured to a private storage cloud. Once recording was complete and the Swivl machine was stopped, the Swivl software combined all microphones into one audio file as well as left each individual microphone track as a separate audio file. Teachers in one district, teaching only virtually, collected video by recording their lessons on Zoom and uploading them to a secure platform where they were removed and moved to a secure storage space used for the larger research project. Teachers collected videos in January and May of 2021. In total, this dissertation study utilized 49 separate days of ELA and content area instruction. Each day is counted as a single observation. A total of 25 participating teachers uploaded anywhere from 3-8 videos each for each round of data collection. 
One teacher dropped out before the spring round of video submission because her instructional modality and class of students changed as a result of COVID-19 instructional programming shifts in her district. All videos were coded for the larger study by a team of graduate student assistants, totaling over 2300 minutes of coded video.
Post-Observation Surveys. After teaching and sharing videos of their instruction, each teacher was sent a post-video survey to complete. The goal of the post-observation survey was to provide a full picture of their instruction as it aligned to the Essentials, particularly those that could not be reliably measured from videos of instruction (e.g., family engagement practices). For the purposes of this study, the only relevant information on the survey was what lesson and/or unit of the curriculum they were using for the lesson they taught. Some teachers listed a generic curriculum name, while others wrote in the lesson number and unit they were teaching.
Sampling Plan
To fully capture writing instruction, it was important to code all of the videos a teacher provided, capturing writing whenever it occurred across the day. For example, multiple teachers in the study used science instruction as their content area lesson, but their science instruction included writing components. This was one example of how more writing instruction took place across the day than would have been captured if we had only sampled instruction within the ELA block. In fact, Coker et al. (2016) posited that this lack of observation outside of the ELA block was a plausible reason for the discrepancy between the time spent on writing instruction they observed in first-grade classrooms and the total time spent on writing instruction observed by Puranik and colleagues (2014) in kindergarten classrooms. Additionally, prominent writing researchers call for writing to occur across the day (e.g., Graham & Perin, 2007). Therefore, for this study, I have included coded data from all observation videos.
Coding and Analysis
For the larger study, we created measures to examine the content and quality of literacy instruction in the classroom videos to better understand coaching and its impact on classroom instruction (see the subsection called Codes Used for this Study in Part One: The Curriculum above for more information). Creating this protocol was an iterative process that used a priori codes to account for evidence-based instructional practices. Once we refined the coding scheme for the video data, we created a scoring sheet in Excel with a series of columns to fully capture the instructional practices contained in the video data. We created a column for each teacher video where we could record the start and stop times for each instructional practice, and then rows for each of the K-3 Essentials (MAISA-GELN ELTF, 2016) to mark presence (0, 1) and quality (1-5) at the bullet level. There was an additional column after the video columns that allowed for qualitative notetaking, which informed the quality rating for the bullet overall (see Figure 7 below).
Figure 7
Blank Teacher Video Scoring Template
To share an example of what these scoring sheets looked like for the evidence-based writing instructional practices this study was interested in examining, I have shared a blank notes sheet (Figure 7) and a completed notes sheet (Figure 8) of coded data from a set of videos from one teacher, Mrs. Smith (pseudonym).
Mrs. Smith submitted four videos in May of 2021, with no content area video. She included videos for phonics, a whole group reading lesson, a small group lesson, and writing (see video labels in Row 6, Columns D through G). Below, I provide an example of the individual teacher scoring sheet from Mrs. Smith's lessons to highlight how the data were recorded. Timestamps for each practice can be found in Columns D through G, quality scores for each practice in Column H, and notes for scoring those practices in Column I. A completed example of what these notes and the scoring template looked like when coding video is provided in Figure 8 below.
Figure 8
Teacher Video Scoring from Mrs. Smith's Spring Sample Lesson
The Essentials coded for this study included all of the K-3 Essentials that aligned to the IES Practice Guide Teaching Elementary School Students to Be Effective Writers (Graham et al., 2012a). For an overview of their alignment, see Appendix C. Specifically, the video codes used for this project encompassed writing instruction (Essential Instructional Practice 6, bullets 1-5, and Essential Instructional Practice 4, bullet 5). Video coding work was done as part of the larger research project. Coder reliability was measured across both coding constructs (presence and quality), and coders worked together to refine the quality ratings in the codebook, as well as the marking of presence, until 90% reliability was achieved when coding alone. To ensure the reliability of the protocol and the usability of the codes, we used Cohen's kappa to assess agreement between two raters. We double-coded video with 30% overlap and achieved a high inter-rater reliability score of 0.924 (95% CI, 0.84 to 0.95), p < .001. After a teacher's set of videos was scored, final scores were entered into a master database by the graduate assistant who coded that teacher (shown in Figure 9 below). Each teacher had their own row for scoring, which included the teacher identification number (TEAID), ISD (Intermediate School District), number of students, coder identification number (with 99 representing double-coded videos with resolved scores), ELA total video time, content area total video time, content area taught, presence scores (0, 1), and quality scores (1-5).
Figure 9
Database of Teacher Video Scores
It is important to note that there were things that occurred that the Essentials coding scheme did not account for. In these instances, codes or rules were refined or created using a grounded approach in those categories. One example from the videos of instruction is a teacher providing cueing that reminded students of the conventions they should be using to ensure their text is readable. This instance was coded as present for the instructional practice of explicit instruction in component skills (for capitalization and punctuation), and a quality rating was assigned. See Appendix B for the full set of codes and the rules that were created for this study. When initial coding and subsequent recoding were complete for all videos collected in the 2021 school year, I began data analysis. To make sense of the data, I engaged in quantitative and qualitative analysis in multiple steps. The coded video data were housed in an Excel sheet for overall scores and in individual Excel notes sheets for descriptive scores and scoring notes (Figures 6 and 7).
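As an aside, the inter-rater statistic reported above, Cohen's kappa, can be computed with standard statistical libraries. The sketch below is purely illustrative: the coder labels are invented, and the study's reported kappa of 0.924 came from the project's double-coded video data rather than from anything shown here.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical presence codes (0/1) assigned independently by two coders to the
# same set of double-coded video segments.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
coder_b = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.3f}")   # agreement corrected for chance
```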
Below, I explain how I used the coded data and notes to conduct analysis of teacher instructional practices in alignment with the K-3 Essentials (MAISA-GELN Early Literacy Task Force, 2016).
First Round of Analysis. I began this mixed-methods analysis with content analysis. I first calculated descriptive data points for each of the instructional practices coded for, using Excel to perform basic analyses of overall presence and quality scores for each instructional practice. Then, I separated the data in the master database by district and repeated the descriptive calculations (mean, median, mode, and range). This allowed me to see if there were any meaningful patterns in the data overall. I also ran the numbers through SPSS to compute descriptive statistics and ensure the accuracy of my calculations in Excel. To record the data in a way that supported my analysis, I created a series of data tables similar to those described in the subsection on the second round of analysis in Part One: The Curriculum Materials. The first page of tables is shown below in Figure 10.
Figure 10
Descriptive Spreadsheet of Teacher Video Scores
Using separate tables, I recorded a variety of presence and quality measures. For quality scores, I recorded the mean, median, mode, and range. For some quality scores, when there were an even number of scored observations and there were two modes, I listed both. Additionally, for some of the range calculations for quality, I included descriptive notes on the number of data points within the range (e.g., 1-4 with no scores of 2 or 3). These data points were important context when summarizing enacted instructional practices, particularly when examining similarities and differences within and across districts. Since 0s were in the spreadsheet (see the explanation in Part One: The Curriculum Materials) but were indicative of lack of presence, I omitted those scores from the quality calculations. Once these scores were calculated, I was able to compare the district averages for presence and quality for each instructional practice to one another and to the overall average for all districts. Comparing the data in this way allowed me to describe which instructional practices were absent within and across districts, how practice quality varied from district to district, and more general trends in writing instruction across these four separate districts.
Second Round of Analysis. After conducting the quantitative analysis, I conducted qualitative analyses to find examples of teacher practice that would help explain and add nuance to the quantitative findings. Similar to the process followed for the curriculum analysis, I used the data spreadsheet to identify themes; this was done by looking across individual observation scores, one by one. By identifying instances of high and low scores, I was able to use within- and across-case analyses, which included qualitative comparisons between teachers' enacted instruction. Again, I used qualitative analytic strategies including noticing patterns, counting, and making comparisons and contrasts. In Figure 10 above, the first table visible is an explanatory matrix, which, as explained by Miles, Huberman, and Saldaña (2020), allows the researcher to "get a feel for the causal mechanisms that may be involved" (p. 230).
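Before returning to the explanatory matrix, the per-district descriptive calculations described in the first round of analysis above (mean, median, mode, and range, with zeros omitted and multiple modes retained) can be sketched briefly. The example below uses pandas on invented scores; the study performed these calculations in Excel and verified them in SPSS, so the code, column names, and values are illustrative assumptions only.

```python
import pandas as pd

# Invented quality scores for one instructional practice, one row per observation.
scores = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B", "B", "B"],
    "quality":  [0, 3, 4, 2, 2, 4, 4],   # 0 = practice not present in that observation
})

# Omit zeros: they indicate absence, not low quality.
rated = scores[scores["quality"] > 0]

summary = rated.groupby("district")["quality"].agg(
    mean="mean",
    median="median",
    modes=lambda s: ", ".join(str(m) for m in s.mode()),  # keep every mode
    score_range=lambda s: f"{int(s.min())}-{int(s.max())}",
)
print(summary)
```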
The explanatory matrix included overarching data about each district and was organized so that districts that used similar curricula were next to one another visually to support later analyses for this study’s third research question (e.g., both districts using knowledge building curricula were next to one another). The explanatory matrix for teachers’ enacted instruction helped support early interpretation of the variation between teacher’s enacted instruction. For instance, the number of minutes of video was noted overall and by district in case total video time might help explain findings. Similar to the process used for the analysis of the curriculum materials, I then created tables with just the descriptive data for each bullet of the K-3 Essentials included in this study. I separated the data into tables for presence and various measures of quality because that provided a more complete picture of teachers’ enacted instruction. Again, as with the curriculum data, I separated the quality scores into tables for mean, median, mode and range. To support interpretation and analysis, as well as identification of cases I used highlighting to identify which were high and low scores for each bullet (see Figure 11 below). Highlighting allowed me to note at a glance if there were trends across essentials, notable differences within districts, or to compare the average score for one essential’s instructional practice compared to the average for all other essentials. 88 Figure 11 Teachers' Enacted Instruction Data Table Example These tables allowed me to note patterns and themes before returning to coding notes and the actual videos themselves. In order to provide insight into my analytic process, I explain below this process for one of the patterns that the analytic matrix in Figure 11 brought to light. District 4 had some of the highest and lowest scores across all measured areas for quality (see green and yellow highlights in Figure 9). This warranted further investigation to determine whether or not teachers in District 4 were doing something significantly different than their peers in other districts to achieve higher scores in the first three essential bullets compared to the last three where they achieved some of the lowest scores. First, I referred to the explanatory matrix and noticed that teachers in District 4 provided the second highest number of minutes for instruction on average across both data collection windows. With this knowledge in mind, I theorized that perhaps teachers in that district simply had more video time than teachers in District 3, which allowed us to capture more interactive 89 writing and daily time for writing. However, when looking a bit further, District 3 and District 4 had similar scores for daily time for writing (a difference of 0.01) and had over fifteen minutes difference in terms of total video time. Thus, I needed to conduct another analysis to see if there was a possible explanation. Since the first explanatory matrix was not sufficient to explain some of the findings, I then used the original Excel database to identify high- and low-score cases for each instructional practice within districts. If a high and low score were not available for a district, I would use two of the same scores. Once I identified high- and low-quality scores for each bullet from the video database, I revisited the notes sheet for that teacher’s video observation. 
I reread the notes for that bullet and then compared it to the notes of other teacher notes, in this instance teachers in District 3 and District 4 for the instructional practice of Daily Time for Writing. Again, I asked myself the same series of questions I asked in the curriculum analysis. First, did the numerical data in the descriptive tables represent the full picture of teachers’ enacted instruction? Next, did the numerical data point me to an explanation that might provide additional insights? However, the descriptive data matrix and these examples were insufficient for an explanation of the difference in the data. Since there was still not a clear explanation, I went back to the coding notes for all teachers in District 3 and 4 but I did not have a great way to view all of the descriptive notes at one time. To that end, I created another matrix for all the qualitative coding notes from all 49 of the teacher observation notes sheets. While creating these notes, I indicated which notes were good examples of high and low scoring practices. I looked across all the practices coded for in videos (including read aloud, small group work, etc.) to ensure I captured all instances of writing 90 instruction even if the notes were not recorded in the writing section of the video notes excel sheet. When possible, I also totaled time for all writing instruction captured (see Figure 12). Figure 12 Notes Matrix for Teachers' Enacted Instruction To further analyze the data after highlighting the qualitative examples of high and low scores, I referred back to the data tables I had created to then conduct searches within patterns or themes. For example, when District 4 presented the curious case of having both the highest and lowest average scores for instructional practices, I sorted the new matrix to only see District 4 notes all at once. After sorting by district, I reread the descriptive coding notes for all writing instruction across that individual day of instruction to search for patterns across teachers’ enacted instruction. I engaged in notetaking while reading the descriptive notes to see if any of the practices seemed similar or dissimilar across teachers within the same district. In the instance of District 4, patterns were difficult to discern because what practices teachers enacted varied. Some videos included spelling, others did not. Some videos included children composing 91 meaningful texts with time to write independently with some choice in their writing, others did not. What was evident from this closer examination of the descriptive notes alongside the database scores and the quantitative data matrices is that an observation or two in this district seemed to have skewed scores to be higher, except for interactive writing which also was coded as times teachers engaged in shared writing. Interactive writing had consistent scores of 2-3 alongside 100% presence of the instructional practice. This type of step-by-step analysis was used to determine what if any patterns existed and what plausible explanations might be for why those patterns existed. To summarize, forty-nine videos of teachers’ instruction were examined for the presence of evidence-based instructional practices using codes for presence and quality, alongside other qualitative components (e.g., analytic memos). The data from those codes were used to describe instruction across six practices for all four districts during the COVID-19 pandemic-impacted school year. 
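To make the quantitative side of this process concrete, the sketch below shows how presence rates and quality descriptives (mean, median, mode, and range) could be computed overall and district by district while omitting 0s, which marked absence rather than low quality. This is a minimal illustration in Python rather than the Excel and SPSS procedures actually used, and the data rows and labels are hypothetical.

from statistics import mean, median, multimode

# Hypothetical coded scores: (district, essential bullet, quality score).
# A 0 marks a practice that was not observed; 0s count toward the presence rate but are
# excluded from the quality descriptives, mirroring the procedure described above.
rows = [
    ("District A", "Bullet 5: Estimated spelling", 0),
    ("District A", "Bullet 5: Estimated spelling", 3),
    ("District B", "Bullet 5: Estimated spelling", 1),
    ("District B", "Bullet 1: Interactive writing", 2),
]

def describe(scores):
    present = [s for s in scores if s > 0]
    summary = {"presence": round(len(present) / len(scores), 2)}
    if present:
        summary.update({
            "mean": round(mean(present), 2),
            "median": median(present),
            "mode": multimode(present),  # lists both values when there are two modes
            "range": (min(present), max(present)),
        })
    return summary

print("All districts:", describe([score for _, _, score in rows]))
for district in sorted({d for d, _, _ in rows}):
    print(district, describe([s for d, _, s in rows if d == district]))

Repeating the same calculation for each essential bullet separately would reproduce the per-practice tables described above.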
Additional data, qualitative notes, memos, examples, were used to describe and analyze the similarities and differences between how teachers' enacted instruction compared within and across districts. While the curriculum data tells one story about the intended writing instruction children in K-3 should receive, and teachers’ enacted instruction tells another story, the relationship between the two still had not been clear. The methods for comparing the written curriculum to teachers’ enacted instruction are explained in the next section of this chapter. Part Three: The Curriculum and the Instruction To be able to fully answer the third research question for this study, I used the data and findings from the previous two questions and engaged in mixed methods analyses – employing both quantitative and qualitative data sources. Since scholars who study curriculum enactment recognize, both theoretically and empirically, that curriculum does not equal instruction 92 (Remillard, 2005; Stein et al., 2007), this part of the study attempted to draw parallels between the two data sets by thinking of districts as a case. This part of the study focused on the question: 3. How did teachers’ enacted writing instruction compare to recommended instruction in district-provided curriculum materials during the COVID-19 pandemic? To answer this question, I used the coded results from the content analysis conducted on the curricular materials and the content analysis conducted on videos of teacher instruction from research questions one and two. Again, since this question employed the same design and logic, but the analysis was slightly different, I have explained each component of the methods used to analyze this data. Data Sources As with the previous two research questions, this final research question drew upon multiple data sources but primarily focused on comparing the scores of the video and curricular materials. That is, this question used the raw data and descriptive statistics from the previous two questions along with qualitative examples from scoring notes and memos. There was one additional source of data for this question: the ISD Early Literacy Coaches’ interviews. Through analysis of these bodies of data side-by-side, I created a comparison between the instructional practices present in the video and curriculum materials. Coded Video Data. The first data source used to answer this set of research questions was the coded data from the videos collected from all 49 classroom observations. These lessons were coded at the same unit of analysis, the essential and bullet level, as the curriculum data to allow for comparison between the two. The data used in this question included the qualitative examples and scoring notes along with the final descriptive data from the quality and presence scores. 93 Coded Curricular Material Data. The second data source used to answer this set of research questions was the coded data from the writing lessons sampled from the four different curricula. These lessons were coded at the same unit of analysis, the essential and bullet level, as the video data to allow for comparison between the two data sources. The data used in this question included the qualitative examples and scoring notes along with the final descriptive data from the quality and presence scores. Analytic Memos. This question also required the use of the researcher memos that were recorded while coding video and curriculum data. 
These notes were recorded to serve as a record of observations in the moment, used to fully capture what was happening in the curriculum and in videos that might have been outside of the scope of the coding notes required for scoring. Coaches’ Interviews. Coaches' interviews were conducted at the beginning and at the conclusion of the study to give insight into the context of the coaches’ work with teachers and their role in the ISD they were working in. These interviews provided insight into larger project questions such as how they thought about responses for weekly coaching log data, but also gave insight into some specific things about the district that might have influenced the curriculum uptake or teachers’ use of evidence-based practices such as the coaching location and priorities of the coaches. For instance, if a coach was prioritizing writing instruction in their coaching, this may have influenced instructional practices. However, since no coaches reported writing as a main focal point of their coaching, this was not considered an influential factor. Sampling Plan For this portion of the study, the sample was the final descriptive data and the qualitative examples along with the curriculum and video memos from the first two research questions. When considering what to include, I first had to determine what warranted paying attention to. 94 To do this, I determined that all quantitative data should be included in answering this question and qualitative data should be included that either a) supported the quantitative relationships, b) called into question the quantitative relationships, or c) were not captured by the quantitative relationships. Coding and Analytic Strategies To compare the curriculum material’s emphasis on instructional practices and the teachers’ enacted instructional practices, there were multiple steps, some ongoing through initial coding of the data, and some completed after the fact. While coding the curriculum and the enacted practices, I engaged in analytic memoing and the creation of analytic matrices. Both these approaches helped to note patterns and themes, which then directed me back to other data sources to come up with possible explanations. Analytic Memoing. Before the data sets for this question were completely coded and descriptive statistics could be run, I found myself comparing and analyzing across data sources and noticing preliminary trends, such as a higher presence of writing specific videos being submitted from one district compared to another. To capture this, I used analytic memoing (Miles et al., 2020), which often took place in narrative form in a separate word document. For instance, as I watched a video of a teacher in District B using the main graphic organizer in the curriculum, I made note of the teacher using the graphic organizer in the read aloud lesson. Since this example of teacher practice showed evidence of curricular uptake in teacher practice, this is an example of my use of within case analysis to support the results reported on in the next chapter (Miles et al., 2020). Analytic Matrices. After identifying all of the descriptive scores for each of the first two study questions and separating them into the analytic matrices created for question one and 95 question two, I created a third series of analytic matrices laying the video data over top of the curriculum data for a clear side-by-side comparison. Matrices for Video Data. 
Another example of the use of case analysis was when I looked across teacher video from the fall data collection and realized teachers had uploaded a variety of different content area instruction, but some had submitted none. When I realized this, I conducted a cross-case analysis (Miles et al., 2020; Yin, 2018). While this could have been due to teacher factors or other contextual factors noted in the theoretical framework for this study, such as student factors or teacher pedagogical knowledge, I decided to analyze it more methodically to see if it could be linked at all to the curriculum. To do this analysis, I recorded what types of videos the teacher I was coding had recorded and shared. Then, I made a table based on those uploads and sorted what they uploaded by content (e.g., ELA, writing, content area). I then added all of the other teachers that I had been tracking video submissions for, to this matrix, and organized the teachers by district, and then recorded what they uploaded and anything of note. Once the data was in a table, suddenly, I was able to find a small pattern (see Appendix D). Teachers in District A did not record a writing lesson that included meaningful writing beyond spelling practice as a separate lesson as many of the other teachers had done. According to their survey, they did not teach a separate writing lesson. However, five out of seven of them did upload and record a lesson which contained work on spelling as a component skill, either within the context of a lesson on phonics focused mostly on decoding or, in the case of one teacher, as a standalone lesson. Noting this while doing the initial video and curriculum coding was important to the later work of analysis for this third research question. When the descriptive data was complete, and I was examining the practices of providing daily time to write and writing process and strategy 96 instruction, for instance, this memo served as an important piece of evidence and context when drawing conclusions about whether there was a relationship between what instructional practices were called for in the curriculum and what instructional practices were enacted by teachers. For instance, this teaching of spelling but not larger writing pieces could be a result of the curriculum, which did not have children composing a larger writing piece in each sub-unit but rather has them engaging in a full writing process from drafting to revision one out of every three sub-units. Matrices of Data. Once all of the descriptive data was run, I also conducted comparisons using matrices and tables. To do so, I first used the tables of descriptive data to compare the curriculum and enacted practice. For instance, when doing the initial data analysis for each question as outlined for research question one and research question two, I created tables of the presence, modes, medians, and ranges for quality of instructional practices by district and by curriculum type (see Coding and Data Analysis subsections of question one and two above). As explained earlier in this chapter, to create these tables I compared the scores overall to get a sense of which curricula had the highest overall quality scores and presence for practices. Then, I highlighted the high scores across all practices for quality and presence in green. Then I identified which had the lowest overall practices and quality and highlighted those in red. 
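As a concrete illustration of this highlighting step, the short sketch below flags, for each instructional practice, which curriculum holds the highest and the lowest mean quality score (the analog of the green and red highlights). The values are the curriculum-level mean quality scores reported in the results chapter for estimated spelling and interactive writing, used here only to make the flagging concrete; the code itself is illustrative and not part of the original analysis.

# Flag the analog of the green (highest) and red (lowest) highlights for each practice.
mean_quality = {
    "Estimated spelling": {
        "Curriculum 1": 1.00, "Curriculum 2": 1.00, "Curriculum 3": 2.00, "Curriculum 4": 3.00,
    },
    "Interactive writing": {
        "Curriculum 1": 2.33, "Curriculum 2": 1.83, "Curriculum 3": 2.50, "Curriculum 4": 2.50,
    },
}

for practice, scores in mean_quality.items():
    high = max(scores, key=scores.get)  # ties resolve to the first curriculum listed
    low = min(scores, key=scores.get)
    print(f"{practice}: highest = {high} ({scores[high]:.2f}), lowest = {low} ({scores[low]:.2f})")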
After determining which had the highest and lowest presence and quality, I looked at tables of the video observation data which were laid out in the same way (separated by mode, median, and quality scores). I repeated the same process of identifying high and low scores, highlighting them, and then looking across to see if there were patterns around which curricula were highest scoring and which were not. Afterwards, I used the analytic memos I had created to check if there was any other qualitative information or other factors that should be considered or shared 97 in the results. Through this within case analysis, I was able to see which trends were evident and write a summary of results. Next, I separated the overview tables into six tables for each of the individual instructional practices for both the video and curriculum data. This resulted in twelve tables overall. I then paired those tables side-by-side in a document to conduct within-case analysis using districts as the case. To do this, I again used highlighting to pull out trends. I highlighted in green to indicate where there was agreement between the two scores and blue to indicate where there was a discrepancy. Using the overall scores, I also paid attention to anything that stood out from the other data as a whole (cross-case analysis). For an example of what this looked like for a single K-3 Essential see Figure 13. Figure 13 Analytic Matrix for Curriculum and Video Data Combined 98 After highlighting each subset of scores, I looked at the trends and then identified districts or teacher cases of interest based on these trends. I then conducted a review of the analytic memos and scoring notes in the curriculum notes. For instance, in Figure 13 above there is agreement between the video and curriculum scores for District C for the practice of estimated spelling. To further explore if these scores were providing a full picture of the emphasis of the instructional practice, I consulted the video scoring notes for teachers in District C and my analytic memos to search for a possible explanation or a contradiction. I also consulted the curriculum scoring notes and analytic memos for the four units analyzed for District C. After this final step, I reported on the results using both the quantitative data and the qualitative data contained in the notes and the memos in the results section of this dissertation. In the results chapter, I also share some possible reasons for the scoring alignment or the lack thereof. Limitations to the Method There were limitations to this study as a result of both the design, data collection, and the methodology. First of all, the teacher video that was used for this study comes from a larger study that had teachers collect their videos using a Swivl robot and iPads. Due to COVID-19 research restrictions, we were not allowed to collect video in person as we had planned to do, so the results of this study hinge on the teacher-collected video accurately reflecting the teacher’s instructional practices as well as that the videos captured students’ opportunities to write across the day. The directions sent to teachers with the Swivl and iPad were that they were supposed to collect video that encompassed their entire day of ELA instruction and one content area. However, teachers uploaded a variety of videos, some uploaded more instructional videos in their day while others only uploaded one or two. 
The directions told them to record all of the parts of their day that were related to ELA, so for this study, if part of the ELA block did not appear in a teacher’s video, I assumed they did not teach it that day. Additionally, the hope was that teachers could record a typical day of instruction. However, since teachers had to share the Swivl equipment, there were some constraints on when they could record. Second, some teachers also submitted very little video, particularly in the first round of data collection. It seems unlikely that 25 minutes of instruction was a teacher’s total ELA instruction for the day; unfortunately, without being there to collect the video myself, there was no way to know whether the teachers who submitted little video had missing data or did not teach ELA beyond the twenty-five minutes of video they submitted. Other observational studies (e.g., Coker et al., 2016) also had observations where no writing instruction occurred, and that was with four observations per teacher across the year instead of the two that took place in this project. Therefore, while not ideal for the coding scheme and comparisons across curricula, it is not unreasonable to think that, of twenty-five teachers, there may be instances where no writing instruction was recorded because none actually took place that day.

Another limitation of this study is that some teachers were receiving coaching at the time of the study, which could arguably improve their writing instruction. This would be a limitation because it could impact constructs in the study (evidence-based writing instructional practices) as well as constructs that are not contained in this study (e.g., motivation, teacher pedagogical content knowledge) that would also impact the presence and quality of instructional practices teachers used in their classrooms. This seems unlikely, however, because coaching log data entries from the larger study indicate that the main K-3 Essential focused on in this study (Essential 6) was not one that coaches spent much time on, if they spent any time on it at all. Interviews after the study took place confirmed that writing instruction was not a coaching focus.

Additionally, one ELA program, Curriculum 3, was new for some teachers in the study. However, it was not new for all of them, as some had participated in a pilot study the previous year. As teachers navigate a new curriculum, they may not feel comfortable making the adaptations and adjustments that are necessary for their students to be successful. Additionally, they may feel increased pressure to follow the curriculum with fidelity (meaning they follow it exactly as it is written, if possible). This potential pressure could impact presence and quality scores for the teacher observations, but it would also significantly impact the analysis of question three about the comparison between teachers’ instructional practices and the instructional practices present in the curricular materials.

Because of COVID-19, many teachers were engaged in hybrid teaching or virtual teaching for one of the time points. This meant that classroom instruction looked different from typical instruction in some ways. For instance, teachers were often distant from children who were writing, and they were not circulating very often to provide student support at their seats. Also, we did not observe some practices, such as providing hand-over-hand writing support to students struggling with letter formation, likely because of social distancing expectations.
Additionally, writing instruction in small groups may not have taken place very often, also because of social distancing expectations. Further, one district began this study with students in a completely virtual format and finished their instruction with students in a hybrid format. This meant that teachers in that district were teaching students via Zoom.

There are also some limitations to the methodology more broadly that should be mentioned. Some limitations that are not specific to this study but are based in the methodology and design are that twenty-five teachers is still a relatively small sample from which to draw conclusions, especially given that the teachers span grades K-3 and are not uniformly spread across grade levels. In the study conducted by Taylor et al. (2003), there were 88 teachers spanning grades 1-5, roughly triple the sample size of this study but spanning only one additional grade level. With a smaller sample size, one or two teachers who are outliers (either with very high or very low quality and presence scores) will make comparisons across districts less reliable; their scores would skew the totals and averages. To remedy this, I have used more than just mean scores to determine what results warrant mentioning.

Another limitation of this study is that a content analysis of the video and the curriculum provided insight into what the teachers did but not necessarily why they did it. Why teachers made the instructional decisions they did (e.g., their epistemologies and pedagogical content knowledge), how they report using their curriculum, to what degree they in fact use the district-provided curriculum to plan, and their perceptions of their students’ abilities and interests were all factors that likely influenced the instructional opportunities they provided, as highlighted in the work of Cohen & Ball (1999) and Remillard (2005). Nonetheless, these factors are beyond the scope of this study. To find out more about literacy teachers’ beliefs about curriculum, or for more nuanced examinations of how teachers report using their ELA curriculum, one could examine work by other researchers, such as Waldron (2014) or Rowan et al. (2004). To glean insight into why teachers made instructional choices, I would have to conduct teacher interviews, which is beyond the scope of this dissertation study. Nevertheless, interviewing teachers, especially with a video-cued recall protocol, would be insightful and is something that seems worth exploring in the future.

Finally, with only two days of observation and teachers using a variety of curricular materials to varying degrees, causal relationships could not be determined. That is why this is a descriptive study. So, while I can make comparisons between emphasis in curricular materials and emphasis in the writing instruction children were provided on two days during their school year, I cannot say that these were causally linked.

Despite these limitations, I believe that this study will still be able to help us understand more about three separate things. First, I believe this study will provide the field with more insight into teacher instructional practices in early elementary writing. Second, I believe this study will provide the field with some insight into the quality of writing instructional practices in four different writing curricula. Third, this study is of interest precisely because it took place during the COVID-19 pandemic-impacted school year,
particularly when considering that student achievement has fallen on standardized measures at both the state and national levels. Finally, I believe this study will help add to the discussion of how teachers and curricular materials interact to provide the instruction children receive (Remillard, 2005; Stein et al., 2007). I will discuss this further in the significance section below.

Conclusion

This study examined curricular materials for teaching writing and teachers’ enacted writing instruction to determine what, if any, influence the emphasis on certain instructional practices might have on teachers’ enacted instruction. Curriculum units (n=18) and teacher videos of instruction (n=49) were coded using a protocol that aligned to writing research on evidence-based instructional practices. Coders were trained, and over 20 percent of both the curriculum materials and the teacher observations were double coded, with coders achieving near-perfect interrater reliability. Curriculum materials and teacher observations were coded to account for the presence of evidence-based instructional practices as well as the quality of the enacted practice.

To make sense of the coded data, I used multiple analytic strategies including content analyses, analytic memoing, and the creation of analytic matrices. District- and curriculum-level scores (percent present, quality mean score, range, mode) were used to determine differences among districts and between curriculum materials. These scores were then explained further using qualitative descriptions of the instructional practice as it was intended (in the curriculum materials) or enacted (in the videos of instruction). Using these methods, I was also able to answer a third question about whether there is evidence of the emphasis curriculum materials place on instructional practices in teachers’ enacted instruction. While it is generally understood in the research literature on enacted instruction that teachers adapt, adopt, and mediate curriculum materials in varied ways, this study, a descriptive study of teachers’ practice compared to the district-provided curriculum, stands to shed light on how teachers’ practices may or may not align with the materials they are provided.

CHAPTER 4: RESULTS

The first goal of this study was to see what research-aligned instructional practices were recommended by various curricula for teaching writing to children in early elementary grades. The second goal of the study was to examine teachers’ use of the same recommended writing instructional practices. The third goal of the study was to compare the recommendations for research-based instructional practices in the curriculum to the instructional practices that teachers implemented (i.e., the enacted curriculum), with recognition that the written curriculum is often not the same as the enacted curriculum (Stein et al., 2007). Thus, the following questions guided this research study:

1. How can recommended writing instruction be described across four writing curricula?
   a. What effective writing instructional practices were contained in these curricula?
   b. How did recommended writing instructional practices compare across the curricula?
2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic?
   a. What effective writing practices were K-3 teachers using?
   b. How did enacted writing instructional practices compare across districts?
3. How did teachers’ enacted writing instruction compare to recommended instruction in district-provided curriculum materials during the COVID-19 pandemic?

Findings for the study’s three overarching research questions are broken down into descriptive findings related to curriculum, descriptive findings related to videos of teacher instructional practices during COVID-19, and a comparison between curriculum findings and observed teacher instructional practices. These findings are broken out into eight sections. The first six sections each focus on one evidence-based practice (EBP) at a time. As explained in the previous chapter, these six practices are: estimated spelling, interactive writing, daily time to write aligned to motivation and engagement, writing process and strategy instruction, use of mentor texts, and, finally, explicit instruction in component skills (e.g., handwriting, spelling). The discussion of findings for each instructional practice begins with quantitative curriculum data, followed by qualitative examples of curriculum materials that highlight those findings. Then quantitative observational data of teachers’ enacted practice are presented, followed by qualitative examples of observed instruction that highlight those findings. Finally, findings for the third research question, addressing the comparison between curriculum materials as they are written and teachers’ enacted writing instruction, are presented. The seventh section provides a recap of the presence and quality of all six instructional practices accounted for in the units of curriculum analyzed and the video observations of classroom instruction for teachers’ enacted practice. The eighth section highlights other findings that are not necessarily direct answers to the research questions but are tangentially related and of interest based on the current educational policy landscape and discourse around our state’s diverse K-12 learners.

Defining Research-Based Writing Instructional Practices

To best understand the findings in this chapter, it is important to remember the instructional practices that were accounted for in the study, as explained in previous chapters. The six research-based practices used in this study are: estimated spelling, interactive writing, daily time for writing, writing process and writing strategy instruction, use of mentor texts, and explicit instruction in component skills including handwriting, spelling, capitalization and punctuation, sentence construction, keyboarding, and word processing. This list of instructional practices comes from a document widely used in Michigan: The Essential Instructional Practices in Early Literacy: Grades K to 3 (referred to throughout this document as the K-3 Essentials; MAISA GELN Early Literacy Task Force, 2016). A definition of each practice, with an example of what it looked like, can be found in Table 6 below.

Table 6
Instructional Practices in Early Writing

1. Estimated Spelling. Definition: Children use the sounds in words to spell them. Example: The teacher has a child say the word shopping slowly, enunciating each sound and writing the letters that make that sound.
2. Interactive Writing. Definition: Children and their teacher work together to compose a shared text. Example: Children create a whole thank you message on chart paper for their chaperones. They decide on the message and take turns writing the words with coaching and support from their teacher on letter formation, spacing, estimated spelling, etc.
3. Daily Time for Writing. Definition: Children have time to write each day. Example: Children have time to work independently on writing an informational text about an animal of their choice.
4. Writing Process & Strategy Instruction. Definition: Children have strategy instruction on how to complete a part of the writing process. Example: Children participate in a lesson on how to brainstorm ideas for different chapters in a text about their life, past and present.
5. Use of Mentor Text. Definition: Children have text models of writing. Example: Children look through informational texts that use captions and callout boxes and discuss the features they might use as a class.
6. Instruction on Component Skills. Definition: Children have instruction on components of writing (e.g., handwriting, sentence construction). Example: Children learn to make the lowercase letter m through cues for each part of the letter, such as “small down, roll down, roll down.”

Instructional Practice One: Writing Meaningful Texts Using Estimated Spelling

Research indicates that using estimated spelling is an effective way to support children’s writing (Pulido & Morin, 2018). Both curriculum materials and teachers’ enacted instruction were examined for evidence of opportunities for children to use estimated spelling in their writing. Then, the results from both were compared to see how teachers’ enacted instruction may or may not have aligned to the written curriculum materials.

Explanation of Presence and Quality Scoring for Estimated Spelling

The first writing instructional practice focuses on opportunities for meaningful writing and listening for sounds in individual words to approximate their spelling. If estimated spelling was present, it was also scored for quality. Presence scores indicate the rate of the instructional practice’s implementation (e.g., 0.57 = 57% of classrooms provided estimated spelling instruction). The quality of the instruction was then broken down into a 5-point scale to account for varied levels of quality of implementation, from beginning to exemplary (see Methods Chapter, sub-section Part One for more information on the development of the coding protocol and scoring). The coding protocol for estimated spelling can be found in Figure 14 below. The full protocol can be found in Appendix B.

Figure 14
Estimated Spelling Scoring Guidelines

Bullet #5: The teacher promotes phonological awareness development, particularly phonemic awareness development, by engaging children in daily opportunities to write meaningful texts in which they listen for the sounds in words to estimate their spellings. Quality was scored from Beginning (1) to Exemplary (5), with Developing (2) and Strong (4) as intermediate levels; the anchor descriptors were as follows.

Exemplary (5): While writing meaningful text or participating in shared writing experiences, the teacher regularly coaches children to listen for the sounds in words and connect these sounds to letters. The children spend most of the time actively participating in these writing experiences themselves, with the teacher offering support.

Proficient (3): While writing meaningful text or participating in shared writing experiences, the teacher occasionally coaches children to listen for the sounds in words and connect these sounds to letters. However, the teacher does most of the work for the children (i.e., breaking words into sounds and then writing the letters for the children or telling them which letters to write).

Beginning (1): The teacher offers children opportunities to write meaningful texts but does not scaffold children to listen for the sounds in words to estimate their spellings.
For instance, the teacher might spell words for children when they ask instead of prompting them to sound out the words and connect letters to the sounds in developmentally appropriate ways.

Estimated Spelling Instruction Within Curricula (Research Question One)

This section discusses the presence and quality of estimated spelling instruction found in curriculum materials. Table 7 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials.

Table 7
Estimated Spelling Scores by Curriculum

                       Rate of Presence   Mean Quality        Mode         Range
Curriculum 1 (n=6)     1.00               1.00 (SD = 0.00)    1.00         1.00
Curriculum 2 (n=6)     1.00               1.00 (SD = 0.00)    1.00         1.00
Curriculum 3 (n=4)     1.00               2.00 (SD = 1.16)    1.00, 3.00   1.00-3.00
Curriculum 4 (n=2)     1.00               3.00 (SD = 0.00)    3.00         3.00
All Curricula          1.00               1.44 (SD = 0.86)    1.00         1.00-3.00

Variation Across Curricula. As indicated by the first column in Table 7, all the curricular materials examined for this study had opportunities where children could use estimated spelling. This was true for all four curricula, as indicated by the rate of presence of 1.00, or 100 percent presence. While the presence of opportunities for children to use estimated spelling was consistent throughout the curricular units, there was some divergence in the quality of the materials, including the guidance for teachers around estimated spelling. As shown in the second through fourth columns, which provide summary data of quality scores, the materials only rarely prompted teachers to help children use estimated spelling by listening for the sounds they heard in words they were writing. Curriculum 1 and Curriculum 2 both had quality scores of 1.00 (SD = 0.00), indicating that all materials scored a 1.00. A score of 1.00 indicates that children had opportunities to write meaningful text where they could have used estimated spelling, but that the teacher did not coach children to listen for the individual sounds in words. Meaningful text was classified as writing that is not spelling words (in isolation from other writing), copying a worksheet, handwriting, or Daily Oral Language (DOL).

Curriculum 3 had slightly higher quality scores, with a mean quality score of 2.00 (SD = 1.16) and modes of 1.00 and 3.00, indicating that two of the four units analyzed scored a 1.00 and two scored a 3.00. A score of 3.00 on this bullet indicates that children had occasional coaching to listen for the individual sounds in words but that the teacher did most of the work for the child (i.e., stretched the sounds out) and/or provided the correct spelling for children. Curriculum 4 had the highest scores of all curricula analyzed for estimated spelling. Curriculum 4 scored a 3.00 (SD = 0.00) for mean quality. Curriculum 4 also had a mode of 3.00 and a range of 3.00 (both units scored 3.00), indicating that both units analyzed had teachers occasionally coaching children to listen for the individual sounds in words, though the teacher still did most of the work for children. However, with only two first-grade units analyzed (see Methods chapter for sampling rationale), it also had the fewest units analyzed across the fewest grade levels. Although the total lessons analyzed were similar to Curriculum 1, the lack of variation in grade levels analyzed was unique.
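As a quick consistency check on these summary values, two units scoring 1.00 and two scoring 3.00 reproduce the Curriculum 3 mean of 2.00, and weighting each curriculum’s mean quality by its number of units reproduces the overall mean of 1.44. A small illustrative computation, shown in Python for concreteness:

from statistics import mean

# Curriculum 3: two of the four units scored 1.00 and two scored 3.00 (Table 7).
print(mean([1.00, 1.00, 3.00, 3.00]))  # 2.0

# Overall mean quality as the unit-weighted average of the per-curriculum means.
means_and_units = [(1.00, 6), (1.00, 6), (2.00, 4), (3.00, 2)]
overall = sum(m * n for m, n in means_and_units) / sum(n for _, n in means_and_units)
print(round(overall, 2))  # 1.44, the value reported for all curricula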
Curricula 1 and 2 did not prompt teachers to intentionally draw children’s attention to the sounds in words while they were writing meaningful texts (i.e., not spelling words) to approximate their spellings; this was true for both curricula throughout the units analyzed. The other two curricula did, at one or more points, draw attention to the sounds in words and encourage teachers to coach children to listen for them. Those curricula were Curriculum 4, which had an average quality score of 3.00 (SD = 0.00), and Curriculum 3, which had an equal distribution of quality scores of 1.00 and 3.00, with a mean quality of 2.00 (SD = 1.16).

When this instructional practice was present, the quality of how it looked varied. For example, in one second-grade unit of lessons from Curriculum 3, children were writing about a good or service of their choice. The teacher’s model text discussed a good (apples) and how this good got from the farm to grocery stores. Children were provided opportunities in this task to write something meaningful about a good or service; later in the day, they were provided opportunities to write less meaningful text during spelling, grammar, and response-to-reading lessons. Teachers were not prompted to draw children’s attention to the sounds in words during the writing of the text about the good or service. This example scored a one because children were not prompted to think about the sounds in the words while writing them and there was no coaching from the teacher to do so. However, children were coached to listen for sounds in words during spelling and phonics lessons in the curriculum.

The only curriculum unit of those analyzed that provided strategies for children to use estimated spelling was a first-grade unit in Curriculum 4. In one lesson, children co-created an anchor chart with steps on how to stretch out words to hear the individual sounds in them. The teacher also gave guidance and prompted children to return to the chart to remember how to stretch out sounds in later lessons and throughout the unit. However, explicit coaching where the child did the work to figure out the sounds in their words was not consistently present throughout the daily lessons, so this instructional practice scored a three for quality overall.

To summarize, although the overall presence of the instructional practice of estimated spelling was high, with 100 percent of curriculum materials offering opportunities for children to write meaningful texts in some or all lessons, the mean quality across curriculum materials was lower, at 1.44 (SD = 0.86). The overall findings indicate that few instructional materials cued teachers to support and coach children to listen for the sounds in individual words, but some of the instructional materials in Curriculum 4, such as the example above, did have high-quality instances of teachers guiding and explaining to children how to listen for sounds in words while they spell, as well as prompting children to use estimated spelling while writing.

Teachers’ Enacted Estimated Spelling Instruction (Research Question Two)

In this section, the presence and quality of estimated spelling instruction found in teachers’ enacted instruction are presented to answer research question two. Table 8 presents summary information about the rate of presence and the quality of estimated spelling in teachers’ enacted instruction.
Table 8
Teachers’ Enactment of Estimated Spelling Instruction

                      Presence   Quality Mean        Mode    Range
District A (n=18)     0.56       2.10 (SD = 0.99)    3.00    1.00-3.00
District B (n=12)     0.75       1.89 (SD = 1.30)    1.00    1.00-4.00
District C (n=12)     1.00       2.00 (SD = 1.21)    1.00    1.00-5.00
District D (n=7)      0.86       2.67 (SD = 1.86)    1.00    1.00-5.00
All Districts         0.76       2.11 (SD = 1.27)    1.00    1.00-5.00

Note: The maximum quality score for this instructional practice was 5. Instructional practices not observed were scored as 0. 0s were excluded from this analysis.

Variation Across Districts. As seen in the first column in Table 8, the presence of estimated spelling varied by district, with teachers enacting the practice 56 percent of the time in District A at the low end and teachers in District C enacting the practice 100 percent of the time at the high end. Teachers in District D enacted this practice in 86 percent of the observations, and teachers in District B enacted estimated spelling instruction in 75 percent of the observations. Teacher observations across all four districts often showed teachers providing opportunities for children to produce meaningful written texts, as evidenced by the mean presence of 76 percent for all districts, though it should be noted that texts were defined as meaningful by the exclusion of less meaningful tasks in the coding protocol (e.g., spelling lists, grammar work, DOL).

In the second column, showing mean quality ratings, teachers’ enacted instruction had an average quality mean score of 2.11 (SD = 1.27). Teachers’ enacted instruction in District D had the highest mean score of 2.67 (SD = 1.86). Teachers in District A had the next highest mean score of 2.10 (SD = 0.99), followed by teachers in District C with a mean score of 2.00 (SD = 1.21). The teachers in District B had the lowest mean score of 1.89 (SD = 1.30). The overall mean score of 2.11 (SD = 1.27) indicates that although in most classrooms children had the opportunity to engage in writing meaningful texts, there was not always coaching for children to listen for the individual sounds in words they were spelling, which would have been indicated by a mean quality score of 3.00 or higher.

Looking at the third and fourth columns, for the mode and range of quality scores, it is evident that this practice had the widest range of quality of implementation, with enacted instruction scoring at each of the quality scores from 1.00 to 5.00 and with variance in teacher practice reflected in standard deviations of 0.99 to 1.86. The narrowest and lowest range of quality scores was found in District A, whose range was 1.00-3.00. Teachers in District B had a range of 1.00-4.00, and teachers in District D and District C had a range of 1.00-5.00. A score of 5.00 on this bullet indicates that children had regular coaching to listen for the individual sounds in words and that they were actively engaged in doing the work of listening for and identifying the individual sounds in words on their own while writing meaningful texts. In both District C and District D, this score appeared only once.

As shown in column three in Table 8, the quality mode for all districts was 1.00, except District A, which had the highest mode score of 3.00. Thus, it was most common for teachers’ enacted instruction in District A to include occasional coaching to listen for the individual sounds in words, but with the teacher doing most of the work for the child (i.e., stretching the sounds out) and/or providing the correct spelling for children.
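As a brief consistency check, the overall presence of 0.76 in Table 8 is simply the observation-weighted average of the four district presence rates. A small illustrative computation, shown in Python for concreteness:

# Overall presence in Table 8 as the observation-weighted average of the district rates.
districts = [(0.56, 18), (0.75, 12), (1.00, 12), (0.86, 7)]  # (presence rate, observations)
overall_presence = sum(rate * n for rate, n in districts) / sum(n for _, n in districts)
print(round(overall_presence, 2))  # 0.76, matching the All Districts row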
To contextualize what this practice looked like when a teacher provided consistent coaching for children to attend to the sounds in words, I highlight an observation of a first-grade teacher in District D. She helped children brainstorm lists of topics they could write about that they were experts at. The teacher then encouraged each individual child to sound out the words for their list of what they were experts at and rotated all children through a small group to work with her on this list at her U-shaped table. She provided consistent coaching to listen for the sounds in words as these first graders wrote their brainstorming lists and did not do the work of sounding out the words for the children. As a result, this observation scored a 5.00 for overall quality.

In contrast, a complete lack of coaching resulted in a low score on this practice for an observation of a third-grade teacher in District B. In this observation, the teacher had children complete a worksheet to go with a science activity in which children were invited to write down which puppy goes with which set of dog parents. During this lesson, children were required multiple times to write one or more words, but there was no coaching by the teacher to listen for the sounds in the words, which is why this observation scored a 1.00.

Variation Within Districts. Taking a closer look at the highest-scoring district, District D, the mean quality score for this essential was the highest of all the districts at 2.67 (SD = 1.86), and the practice of writing meaningful texts while listening for the sounds in words was present 86 percent of the time. Upon further examination, the range of scores for District D was the widest of all districts, ranging from 1.00-5.00, with no scores of 2.00 or 3.00. Upon individual examination of the seven observations in District D, half of the teachers engaging in the practice were providing opportunities for children to write meaningful text (a 1.00) without coaching them to listen for the individual sounds in words. The other half of the observations showed teachers regularly coaching children to listen for the individual sounds in words (a score of 4.00 or 5.00) while writing meaningful texts. Only one teacher observation in District D showed no presence of estimated spelling. While each observation was treated as its own unique data point, there was an interesting finding when examining individual practices: one teacher had consistently high scores (4, 5), one teacher had mixed scores (4, 1), and one teacher had consistently low scores (1, 1). This variability is consistent with the findings of other researchers, including Puranik and colleagues (2014) and Troia and colleagues (2011), who found that sometimes the variability was teacher specific and other times it was not.

In addition to the overall findings for this practice, teachers in District C all provided opportunities for children to write meaningful texts in both of their teaching observations. The predominant score of teachers’ enacted practice was a 1.00 or a 2.00, with three instances of a score of 3.00 and one instance of a score of 5.00 out of twelve separate observations.
These findings indicate that the quality of implementation provided by teachers for estimated spelling varied widely, but most implementations of this practice were of low quality, with only 15 of the 49 total observations including teachers coaching children to listen for the individual sounds in words (as indicated by scores of 3 or higher). However, they also highlight the fact that children in approximately three-quarters of the classrooms observed for this study had the opportunity to write something more meaningful than spelling words or completing a worksheet.

Comparison of Presence and Quality Between the Curriculum and Teachers’ Enacted Practice for Estimated Spelling (Research Question Three)

In this section, a comparison of the presence and quality of estimated spelling instruction found in curriculum materials and teachers’ enacted instruction is presented to answer research question three. Table 9 presents summary information about the presence and quality of the overall practice for curriculum materials and enacted instruction, along with scores for individual curricula and teachers in each of the four districts in the study.

Table 9
Estimated Spelling

                                   Presence   Quality Mean        Mode         Range
District A
  Curriculum 1 Materials (n=6)     1.00       1.00 (SD = 0.00)    1.00         1.00
  Enacted Instruction (n=18)       0.56       2.10 (SD = 0.99)    3.00         1.00-3.00
District B
  Curriculum 2 Materials (n=6)     1.00       1.00 (SD = 0.00)    1.00         1.00
  Enacted Instruction (n=8)        0.75       1.89 (SD = 1.30)    1.00         1.00-4.00
District C
  Curriculum 3 Materials (n=4)     1.00       2.00 (SD = 1.16)    1.00, 3.00   1.00-3.00
  Enacted Instruction (n=12)       1.00       2.00 (SD = 1.21)    1.00         1.00-5.00
District D
  Curriculum 4 Materials (n=2)     1.00       3.00 (SD = 0.00)    3.00         3.00
  Enacted Instruction (n=7)        0.86       2.67 (SD = 1.86)    1.00         1.00-5.00
All Districts
  All Curriculum Materials         1.00       1.44 (SD = 0.86)    1.00         1.00-3.00
  All Enacted Instruction          0.76       2.11 (SD = 1.27)    1.00         1.00-5.00

Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5; the minimum quality score was 1.

Curriculum Materials Compared to Enactment Overall. When examined together, the curriculum materials and the teacher instruction provide an interesting understanding of instruction in estimated spelling. Looking at the last two rows of Table 9, where data for all curricula and all districts are presented, the presence of this practice was high in both the curriculum materials (100 percent presence) and teachers’ enactment of the curriculum (76 percent presence), although teachers’ enacted presence was lower than that of the curriculum. The inverse was true for quality: teachers’ enacted quality of this practice (M = 2.11, SD = 1.27) was higher than that of the curriculum-recommended practice (M = 1.44, SD = 0.86). Similarities in the overall data were found in the mode scores, where the most common score for each teacher enactment and each curriculum examined was a 1.00, as indicated by the third column in Table 9. This indicates that the practice of coaching children to listen for sounds in words while they write something meaningful (not just practice spelling words) was not consistently executed at a proficient level in any teachers’ enacted instruction or curricula.

Curriculum Compared to Enactment: District-by-District.
Moving on to the data district by district, in District A the curriculum materials had almost double the presence of teachers’ enacted instruction, which included estimated spelling opportunities only about half the time. Additionally, the mean quality of instruction differed between the planned instruction and the observations of teachers’ enacted practice: teachers enacted the instructional practice of encouraging estimated spelling at a quality that was 1.10 points higher than the instructional materials. Additionally, the most common quality score for teachers’ enacted practice was a 3.00 (see Table 9 above), versus a mode quality score of 1.00 in the materials.

In District B, the curriculum materials had a high presence rate of 100 percent, while teachers’ enacted practice provided opportunities for estimated spelling only 75 percent of the time. District B had similar mode scores of 1.00 for both the curriculum and the enacted practice. However, the range of scores differed, with teachers’ enacted practice having more variation than the curriculum materials did: teachers’ enacted practice ranged from 1.00-4.00, while the curriculum materials only had scores of 1.00. Thus, District B followed a trend similar to District A, with lower presence of the practice in teacher observations but a broader range and a higher mean quality score for teachers’ implementation of estimated spelling than the curriculum provided. In fact, in the curricula for District A and District B there was no cueing for teachers to draw children’s attention to the individual sounds in words while they wrote, but in both districts there were times when teachers did this cueing, albeit with varied levels of quality.

In District C, there were more similarities between the curriculum and the enacted practice. District C, where presence was identical and some quality scores matched between the recommended and the enacted instruction, was an anomaly. In both Curriculum 3 and teachers’ enacted practice in District C, there was 100 percent presence and an average quality score of 2.00. Similarly, Curriculum 3 had mode scores of 1.00 and 3.00, while in practice, teachers in District C had a mode of 1.00. However, as in the other districts already discussed, teachers’ enacted practice had more variation, with a range of 1.00-3.00 for the curriculum and a range of 1.00-5.00 for teachers’ enacted practice. Overall, when examining District C, there was some consistency between the presence and quality of teachers’ implementation of estimated spelling and the instructional materials used. Of note, District C had a new curriculum, an in-building ISD Early Literacy Coach, and weekly coaching support.

In District D, the curriculum and teachers’ enacted practice had similar presence, with 100 percent presence in the curriculum and 86 percent presence in teachers’ enacted practice. Additionally, there was similarity in the mean quality scores, with Curriculum 4 scoring 3.00 (SD = 0.00) and teachers’ enacted practice scoring 2.67 (SD = 1.86). Observations in District D had a mode score of 1.00, while in Curriculum 4 the mode score was a 3.00, indicating that the curriculum provided consistently higher-quality instructional plans than teachers’ enactment did.
However, teachers’ enactment had a wider range of quality scores, with scores ranging from 1.00-5.00 across observations, while the curriculum only had scores of 3.00 in the two units analyzed.

Taken together, the curriculum findings and the findings around teachers’ enacted instruction indicate that, while there were limited curricular supports for this practice across most of the units of curriculum materials analyzed, teachers’ instruction on average was of higher quality than the guidance found in the curriculum materials. However, given that the mode score for the practice was a 1.00 across all teacher practices, it seems that teacher practice was not consistently higher than the curriculum units, but rather that teachers’ practice had more variation in quality than that of the curricular materials examined, resulting in higher mean scores for enacted instruction.

Instructional Practice Two: Interactive and Shared Writing

Interactive writing is another practice that has shown promise in empirical work (e.g., Craig, 2003). Both curriculum materials and teachers’ enacted instruction were examined for evidence of opportunities for interactive writing. Then, the results from both were compared to see how teachers’ enacted instruction may or may not have aligned to the written curriculum materials.

Presence and Quality Scoring for Interactive and Shared Writing

The second writing instructional practice focuses on teachers engaging children in interactive and shared writing, in which children and their teacher work together to compose a shared text. If interactive or shared writing was present, it was also scored for quality. Presence scores indicate the rate of implementation of the instructional practice (e.g., 0.57 = 57 percent of classrooms provided interactive writing instruction). The quality of the instruction was then broken down into a 5-point scale to account for varied levels of quality of implementation, from beginning to exemplary (see Methods Chapter for more information on the development of the coding protocol and scoring procedures). The coding protocol for interactive and shared writing can be found in Figure 15 below. Of note, interactive writing is the K-3 Essential instructional practice, but shared writing was also captured by the coding protocol, which is why it is included here. From this point forward, the practice is referred to simply as interactive writing, but this is important to distinguish because the two practices are not the same (see Button et al., 1996 for an explanation of interactive writing and Mather & Lachowicz, 1992 for an explanation of shared writing). The full coding protocol can be found in Appendix B.

Figure 15
Interactive Writing Scoring Guidelines

Bullet #1: The teacher provides interactive writing experiences in grades K and 1. Interactive writing involves children in contributing to a piece of writing led by the teacher. With the teacher’s support, children determine the message, count the words, stretch words, listen for sounds within words, think about letters that represent those sounds, and write some of the letters. Quality was scored from Beginning (1) to Exemplary (5), with Developing (2) and Strong (4) as intermediate levels; the anchor descriptors were as follows.

Proficient (3): The teacher designs an interactive writing experience in which children are involved in SOME of the components of the process OR where children are highly involved but in only one component of the process.

Beginning (1): The teacher models writing for children but does not share ANY of the composition or transcription process with them.
Exemplary (5): The teacher designs an interactive writing experience in which children are highly involved in ALL components of the process. These components include the following: selecting an audience or purpose for the writing; actively participating in the composition (sharing ideas for the message of the writing); actively participating in the mechanics (transcribing letters, adding words, etc.); revising the writing; and reading the writing (and/or rereading the writing) together.
Strong (4)
Proficient (3): The teacher designs an interactive writing experience in which children are involved in SOME of the components of the process OR where children are highly involved but in only one component of the process.
Developing (2)
Beginning (1): The teacher models writing for children but does not share ANY of the composition or transcription process with them. The teacher may explain choices in purpose or audience, composition, mechanics, or revision, but does not invite children to participate in this discussion.

Interactive Writing Instruction Within and Across Curricula (Research Question One)
In this section, the presence and quality of interactive writing instruction found in curriculum materials are presented. Table 10 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials.

Table 10
Interactive Writing Scores by Curriculum
                      Presence   Quality Mean        Mode         Range
Curriculum 1 (n=6)    1.00       2.33 (SD = 0.52)    2.00         2.00-3.00
Curriculum 2 (n=6)    1.00       1.83 (SD = 0.41)    2.00         1.00-2.00
Curriculum 3 (n=4)    1.00       2.50 (SD = 1.00)    3.00         1.00-3.00
Curriculum 4 (n=2)    1.00       2.50 (SD = 0.71)    2.00, 3.00   2.00-3.00
All Curricula         1.00       2.22 (SD = 0.65)    2.00         1.00-3.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Interactive writing was present at one or more points in all of the curriculum units (see Table 10 above), as indicated by the overall presence scores of 1.00, or 100 percent. However, looking at the second column containing mean quality scores, most of the interactive writing instruction did not provide guidance that would lead to teachers enacting this practice at an elevated level of proficiency, with the average score being 2.22 (SD = 0.65). Curriculum 2 had the lowest mean quality score at 1.83 (SD = 0.41), Curriculum 1 had the next lowest mean quality score at 2.33 (SD = 0.52), and Curricula 3 and 4 had the highest mean quality scores at 2.50 (SD = 1.00) for Curriculum 3 and 2.50 (SD = 0.71) for Curriculum 4. Looking at the mode scores, Curriculum 1 and 2 both had a mode of 2.00, Curriculum 3 had the highest mode of 3.00, and Curriculum 4 had a mode of 2.00 or 3.00 because, between the two units analyzed, there was one unit with a score of 2.00 and one with a score of 3.00. The range of scores was similarly clustered around the beginning and developing levels of proficiency, with Curriculum 1 ranging from 2.00-3.00, Curriculum 2 ranging from 1.00-2.00, and Curriculum 3 and 4 having the highest quality score instances, with score ranges of 1.00-3.00 for Curriculum 3 and 2.00-3.00 for Curriculum 4. Most curricular examples had children involved in one component of the writing process (usually idea generation or message composition) but not multiple components. Sometimes, this practice in the curriculum had children giving ideas for a whole class writing task (e.g., sharing their own examples while the class brainstormed a list of specific goods and services) and then copying from the modeled brainstorming list onto their own paper. Rarely, in any of the curricula, were children doing the actual transcription of the writing on the shared writing piece for the entire class.
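To make the presence and quality summaries in these tables concrete, the short sketch below shows one way such descriptive statistics could be computed from unit- or observation-level codes. This is an illustrative sketch only, written in Python; the function name and the scores it uses are hypothetical placeholders and are not data or analysis code from this study.

# Minimal sketch (not the study's analysis code) of how presence and quality
# summaries like those in Table 10 could be derived from coded units.
from statistics import mean, stdev, multimode

def summarize(codes):
    """codes: list of (presence, quality) tuples, where presence is 0 or 1
    and quality is an integer 1-5 (or None when the practice was absent)."""
    presence_rate = mean(p for p, _ in codes)          # e.g., 0.57 = 57 percent
    quality = [q for p, q in codes if p == 1 and q is not None]
    return {
        "presence": round(presence_rate, 2),
        "mean": round(mean(quality), 2),
        "sd": round(stdev(quality), 2) if len(quality) > 1 else 0.0,
        "mode": multimode(quality),                     # more than one value if tied
        "range": (min(quality), max(quality)),
    }

# Hypothetical codes for a six-unit curriculum in which the practice was
# present in every unit and scored 2 or 3 for quality (placeholder values).
example_units = [(1, 2), (1, 2), (1, 2), (1, 2), (1, 3), (1, 3)]
print(summarize(example_units))
# {'presence': 1.0, 'mean': 2.33, 'sd': 0.52, 'mode': [2], 'range': (2, 3)}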
The following examples illustrate how curricula were scored and provide context for the descriptive data above. In Curriculum 4, in a first-grade unit intended for the beginning of the year, children practiced writing a sentence the teacher dictated from a shared class story that children had composed with the teacher. Children were instructed to write a sentence from the story (“The sun is shining.”) on their individual whiteboards to practice stretching out words. In this example, the practice scored a 3.00 because children were involved in more than one component (idea generation from the shared text, listening for the sounds in the words, and the transcription of those sounds on their own whiteboards) but were not highly involved in multiple components. A 3.00 was the highest score of any example in all of the curriculum materials. In Curriculum 1, in another first-grade unit also intended for the first half of the school year, children co-author a Noticing and Wondering anchor chart. The teacher does the spelling and transcribing; children, however, contribute their ideas of things they notice or wonder and are therefore composing the statements that go on the anchor chart. This practice occurs in many lessons, and the chart is available to children throughout the unit. Since children are only involved in contributing ideas for the chart, and no guidance was provided in the lessons to have children help transcribe their ideas on the chart, this example scored a 2.00. When examining the overall trend in the curriculum materials examined in this study, the results suggest that, while interactive or shared writing instruction is consistently present within units, the generally lower quality scores indicate that there is not sufficient guidance for teachers to enact the practice with proficient quality (as would be indicated by average scores of 3.00 or more); as written, the practice is often just teachers writing in front of children with minimal participation from the children.

Teachers’ Enacted Interactive Writing Instruction Within and Across Districts (Research Question Two)
In this section, the presence and quality of interactive writing instruction found in teachers’ enacted instruction are presented. Table 11 presents summary information about the rate of presence and the quality of the overall practice for teachers’ enacted instruction.

Table 11
Teachers’ Enacted Interactive Writing Instruction by District
                     Presence   Quality Mean        Mode         Range
District A (n=18)    0.22       2.50 (SD = 0.58)    2.00, 3.00   2.00-3.00
District B (n=12)    0.17       2.50 (SD = 0.71)    2.00, 3.00   2.00-3.00
District C (n=12)    0.58       2.14 (SD = 0.90)    3.00         1.00-3.00
District D (n=7)     0.58       3.25 (SD = 0.50)    3.00         3.00-4.00
All Districts        0.35       2.53 (SD = 0.80)    3.00         1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Overall presence across districts for the use of interactive writing was low, with a presence score of only 35 percent across districts (see Table 11 above). This means that only 35 percent of classroom observations included interactive writing. Interactive writing was present across all grade levels from kindergarten to third grade, with 10 of 17 total observations coming from kindergarten and/or first grade.
District C and D had the highest presence scores, with 58 percent of teacher observations including interactive writing. District A had the next highest rate of presence, with 22 percent of observations including interactive writing instruction. District B had the lowest rate of presence, with 17 percent of teacher observations including some interactive writing. Thus, there was variation in presence: two districts had more than half of their observations of instruction include interactive writing (Districts C and D), and two had under a quarter of their observations include the instructional practice of interactive writing. When examining the quality of the instruction, most examples of this practice were similar, and therefore quality scores did not deviate significantly from one district to another, as is evident in the mode score column in Table 11 above. Most scores for this practice were 2.00 or 3.00. The quality of the instruction varied, with District C having the lowest mean quality score of 2.14 (SD = 0.90) and District D having the highest mean quality score of 3.25 (SD = 0.50). When examining the range of scores, there were examples of this practice ranging from scores of 1.00 to 4.00. The highest score, a 4.00, was a single instance from a teacher in District D out of all 17 observations that contained interactive writing instruction. Children were usually involved in one component of the process, but rarely, across classrooms where the practice was enacted, did children share in the transcription as well as the message composition of a whole class, shared text. Limiting the interaction and the ways children participated in the writing is why most scores for this practice were a 3.00 or lower. Most often, if children were transcribing anything during the interactive writing experience, the experience followed a sequence of children sharing their idea, the teacher transcribing on the model text, and children copying down what had been written onto their own paper. In District C, a high scoring practice of interactive writing was observed during virtual instruction. During this lesson, the teacher engaged children in co-creating a circle map about things that illuminate. Children were invited to create their own circle map on their paper, while the teacher provided and wrote examples and scribed and discussed children’s ideas. This example scored a 3.00 for quality because children were highly involved in one component and somewhat involved in the other components through writing on their own papers. A low scoring example of interactive writing also comes from a virtual observation of a teacher in District C. This teacher modeled writing for the children using sentence stems. Then children used her model as a reference for their own writing, but they were not involved in the process of sharing the composition of the writing on the board or any of the transcription. The teacher wrote in front of the children. Since children were not involved in any of the process or production, this example scored a 1.00. Of note, scores and frequency of this practice did not increase once all classrooms returned to in-person instruction. In fact, frequency decreased from the first observation period to the second, both overall and in the district with predominantly virtual instruction for the first series of observations.
To summarize, District D and District C demonstrated the most consistent use of interactive writing, with the practice present in 58 percent of observations. District D had the highest quality of this practice, with a mean quality score of 3.25, a median quality score of 3.00, and a mode quality score of 3.00, with a range of 3.00-4.00, indicating that children actively participated in more than one part of the interactive writing (i.e., composing or transcribing) or highly participated in one part of the process each time the practice took place across the 12 observations total from the district. Conversely, teachers in District A and B used this practice in around one out of five observations. Taken together, these curriculum and video results suggest that curriculum materials generally had more opportunities for interactive writing than teachers’ instruction, but that when teachers instructed children using interactive writing it was of a higher average quality than the curricular materials their district provided.

Comparison of Presence and Quality Between the Curriculum and Teachers’ Enacted Practice for Interactive Writing (Research Question 3)
In this section, a comparison of the presence and quality of interactive writing instruction found in curriculum materials and teachers’ enacted instruction is presented. Table 12 presents summary information about the presence and the quality of the overall practice for curriculum materials and enacted instruction as well as scores for individual curricula and teachers in each of the four districts in the study.

Table 12
Interactive Writing
                                                 Presence   Quality Mean        Mode         Range
District A
  Curriculum 1 (n=6)                             1.00       2.33 (SD = 0.52)    2.00         2.00-3.00
  Enacted Instruction with Curriculum 1 (n=18)   0.22       2.50 (SD = 0.58)    2.00, 3.00   2.00-3.00
District B
  Curriculum 2 (n=6)                             1.00       1.83 (SD = 0.41)    2.00         1.00-2.00
  Enacted Instruction with Curriculum 2 (n=8)    0.17       2.50 (SD = 0.71)    2.00, 3.00   2.00-3.00
District C
  Curriculum 3 (n=4)                             1.00       2.50 (SD = 1.00)    3.00         1.00-3.00
  Enacted Instruction with Curriculum 3 (n=12)   0.58       2.14 (SD = 0.90)    3.00         1.00-3.00
District D
  Curriculum 4 (n=2)                             1.00       2.50 (SD = 0.71)    2.00, 3.00   2.00-3.00
  Enacted Instruction with Curriculum 4 (n=12)   0.58       3.25 (SD = 0.50)    3.00         3.00-4.00
All Districts
  All Curricula                                  1.00       2.22 (SD = 0.65)    2.00         1.00-3.00
  Districts A – D                                0.35       2.53 (SD = 0.80)    3.00         1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Overall, when comparing practice and curriculum materials, there was some discrepancy between the presence and quality of the curricular materials and teachers’ enactment of interactive writing. The difference in presence scores between curricula and observational data was the most divergent: interactive writing was present in 100 percent of the curricular units analyzed but contained in only 35 percent of observations. When enacted, the average quality of the interactive writing instruction was 2.53 (SD = 0.80) compared to the average quality in curriculum materials, which was 2.22 (SD = 0.65). This means that teachers in practice were more likely to have children involved in multiple components of the writing, or highly involved in one part of the interactive writing experience, than the curricular materials had planned for. Additionally, the mode score for the practice was consistently lower in curriculum materials than in teachers’ enacted practice (see Table 12 above).
District D stands out as having some consistency in the use of interactive writing between the teacher practices and the curricular materials. In observations of teacher practice, this district tied for the most consistent use of this practice, with 58 percent of observations having the practice present, and it had the highest quality of the practice, as evidenced by the mean quality score of 3.25 when enacted (see Table 12 above). Additionally, the lowest observed score was a 3.00. Teachers using Curriculum 4 (District D) and Curriculum 3 (District C) also had the highest quality of enactment overall. Possibly as a result of the curricular materials, District D had the most consistently high scores in enacted practice, as indicated by its lower standard deviation of 0.50 compared to District C, which had a standard deviation of 0.90. One explanation for the difference in the quality of teacher observations might be that the curriculum units used by District D and District C both emphasize an active engagement time via the gradual release of instruction model, which means that teachers model a practice, guide children through the practice together, and then children practice on their own. During this time children could be found composing for shared class texts, sharing ideas for anchor charts that highlight a skill or process they were engaging in, or recording ideas for brainstorming. Conversely, Curriculum 2, used by District B, used teacher models of writing and thinking aloud to scribe writing, but rarely included shared generation of a class text, which resulted in the lowest quality score of all the curricula (1.83, SD = 0.41). This discrepancy could provide an explanation for District B having the lowest presence score of all the observations, with only 17 percent of observations including interactive writing instruction. In Curriculum 1 and Curriculum 2, interactive writing, if present, was most often a teacher scribing what children said for later reference or use, while in Curriculum 3 and Curriculum 4 there was more focus on the time spent doing the interactive writing as part of the writing strategy or process instruction. Teachers in District C and District D had the highest presence of this practice and the highest average quality scores. These results suggest that the curriculum may influence teacher uptake of interactive writing when materials include a component for active engagement of children through the writing task.

Instructional Practice Three: Daily Time for Writing Aligned to Motivation and Engagement
Daily time for writing with specific attention to children’s motivation and engagement is another practice that has shown promise in empirical work (e.g., Graham et al., 2012a). Both curriculum materials and teachers’ enacted instruction were examined for evidence of opportunities for children to write daily and for motivation and engagement factors such as choice in their work and opportunities to see themselves as successful readers and writers (to name a few). Then, the results from both were compared to see how teachers’ enacted instruction may or may not have aligned to the written curriculum materials.

Presence and Quality Scoring for Daily Time to Write Aligned to Motivation and Engagement
The third writing instructional practice focuses on children having time to write each day. If daily time for writing was present, it was also scored for quality.
Presence scores indicate the rate of the instructional practice’s implementation (e.g., 0.57 = 57 percent of classrooms provided daily time for writing). The quality of the instruction was then scored on a 5-point scale to account for varied levels of quality of implementation, from beginning to exemplary (see Methods Chapter for more information on the development of the coding protocol and scoring). The coding protocol for daily time for writing can be found in Figure 16 below. The full protocol can be found in Appendix B.

Figure 16
Daily Time for Writing Scoring Guidelines
Bullet #2: The teacher provides daily time for children to write, aligned with Essential #1.
Exemplary (5): The teacher provides daily time for children to write AND the teacher provides opportunities for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation, as aligned to Essential 1.
Strong (4)
Proficient (3): The teacher provides daily time for children to write OR the teacher provides opportunities for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation, as aligned to Essential 1.
Developing (2)
Beginning (1): The teacher provides few opportunities for children to write each day AND offers little opportunity for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation, as aligned to Essential 1.

Daily Time for Writing Aligned to Motivation and Engagement Within and Across Curricula (Research Question 1)
In this section, the presence and quality of daily time for writing found in curriculum materials are presented to answer research question 1. Table 13 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials.

Table 13
Daily Time for Writing Aligned to Motivation & Engagement Scores by Curriculum
                      Presence   Quality Mean        Mode         Range
Curriculum 1 (n=6)    1.00       2.00 (SD = 0.00)    2.00         2.00
Curriculum 2 (n=6)    1.00       2.67 (SD = 0.52)    3.00         2.00-3.00
Curriculum 3 (n=4)    1.00       3.50 (SD = 0.58)    3.00, 4.00   3.00-4.00
Curriculum 4 (n=2)    1.00       4.50 (SD = 0.71)    4.00, 5.00   4.00-5.00
All Curricula         1.00       2.83 (SD = 0.92)    2.00         2.00-5.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Overall, the presence of this instructional practice was strong, as indicated by the presence scores of 1.00 for all curricula. All curricula included writing time within multiple lessons in a unit, as indicated by the 100 percent presence. Some, like Curriculum 4, included this daily (see discussion in the next subsection on the number of writing lessons within a unit); others contained writing time for children multiple times within a unit but not daily. While the presence scores were similarly high, the quality and frequency of the practice contained in curriculum units varied. The mean quality scores ranged from 2.00 (SD = 0.00) for Curriculum 1 to 4.50 (SD = 0.71) for Curriculum 4. The average quality score for all curricula was 2.83 (SD = 0.92).
This range of mean quality scores indicates that although children may have had time to write, they were not always writing with attention to motivation and engagement via choice, collaboration, a chance to see themselves as successful literacy learners, or engaging in writing for an authentic purpose. Alternately, it is possible that there was some attention to their motivation and engagement via choice, collaboration, and/or a chance to see themselves as successful literacy learners and engage in writing for an authentic purpose, but there was not always daily time for writing (see Figure 16 above for quality scoring criteria). The mode scores for the curricula also varied. Curriculum 1 had the lowest mode score with a score of 2.00. Curriculum 2 had a mode score of 3.00. Curriculum 3 had a mode score of 3.00 or 4.00. And Curriculum 4 had a mode score of 4.00 or 5.00. The ranges of scores were the same as the mode scores, with only Curriculum 2 including a score in its range that differed from its mode score (2.00-3.00 for the range compared to a mode of 3.00). Overall, the curricula attended to children’s motivation, collaborative and purposeful engagement with peers, having authentic purposes for writing beyond the assignment, and having choice to varying degrees. Some lessons, units, and curricula did this better than others, which is explained more in the subsections below, which further break out the data in Table 13 above. Variation in Quality Between Curricula. All curricula had presence for the practice of daily time to write with attention to motivation and engagement, as indicated by the presence scores of 1.00 for all curricula. However, no curriculum had an average quality score of 1.00, which would have indicated both limited time to engage in sustained writing and little attention to children’s motivation and engagement. There was a wide range of scores within the curriculum units analyzed, as is evident in the range of mean quality scores in the third column of Table 13, that warrants further explanation of the nuances that may account for the variation in quality. Number of Writing Lessons in Units Varied Substantially. One factor to consider when examining the scores for presence and quality in these curriculum materials is that both Curriculum 3 and Curriculum 4 had children produce pieces that required a full writing process from drafting to publication and had them engage in writing daily. Both curricula also had a number of lessons that more closely matched the number of days in a school year. To further expand, Curriculum 3 and Curriculum 4 had the most writing lessons overall. Curriculum 3 had 10 units with 3 weeks of daily lessons in each unit, which totaled 150 lessons, or 30 weeks (about 7 months) of daily lessons. Each daily lesson contained a writing lesson that focused on meaningful writing (defined as writing that was beyond a worksheet or spelling lesson) as well as a grammar or spelling lesson. Curriculum 4 had 8 units of study ranging from 3-6 weeks long and, according to the curriculum overview documents, covered 37 weeks (about 8 and a half months) of the school year. Curriculum 1 and Curriculum 2 had the fewest writing focused lessons for the course of the school year. The four overarching units in Curriculum 1 each contained three sub-units that were two to three and a half weeks long.
One of those, the third sub-unit in each larger unit, focused on a written product, which accounted for 8-13 weeks (about 2-3 months) of the school year with a writing focus. Curriculum 2 was a writing approach or system that had a total of 20-30 sample lessons (the total number of example lessons varied based on grade level), which would only cover anywhere from 4-6 weeks of the school year. Additionally, Curriculum 2 highlighted and taught the structure, with sample lessons, for only four main genres: narrative, expository, informative, and opinion. With the context that the total lessons varied widely from one curriculum to another, it is also important to highlight that only Curriculum 4 allowed children to write daily for sustained periods of time up until the publication of the written piece. This curriculum consistently had the highest quality for daily time for writing instruction and practice that aligned to Essential 1 (see Table 13 above) because the lessons included extended writing time and children had many opportunities for choice, including whether to work on multiple pieces of writing within the same period of sustained writing time, what their topic would be, and whether to work with a partner at multiple points throughout the unit. Additionally, the total daily writing time in each lesson for Curriculum 4 was consistently within the recommended daily amount per the IES (Institute of Education Sciences) Practice Guide (30-60 minutes). In the other three curricula the writing time varied by day and by task, and sometimes fell within the recommended daily amount but not always. Curriculum 3 fell within the recommended daily amount when time was totaled across a spelling lesson, grammar work, writing in response to text, and the daily writing task that was more meaningful (i.e., composing their own piece within the unit’s focal genre). Quality of Units Varied. As shown in Table 13, Curriculum 1 had the lowest quality scores for this practice, with a mean quality score of 2.00 (SD = 0.00). One assumption could have been that quality scores were this low because not all sub-units within a larger unit had a writing piece that children were working on consistently, so sustained daily writing time would not be expected to be present (see Methods Chapter for an explanation of Curriculum 1). However, even when considering this for Curriculum 1, quality and presence scores for daily writing did not vary between the units analyzed. This lack of variation was true even if the unit focused on producing a written product by the end of the unit. Of the units analyzed for this study, one of them was the final sub-unit in a larger unit for first graders to be taught in the winter or spring. This was the sub-unit that focused on producing a written product as the focus of the unit, and the quality score for this practice was still 2.00 because the writing throughout the unit was short and more worksheet, graphic organizer, or notes focused. The written product work, to write a letter to an ornithologist, was only worked on for one day in the unit. Children’s writing on other days consisted of writing a response to reading using a word or phrase, writing adjectives, adding to a whole class anchor chart where children shared in composition but not transcription, and other worksheet focused work until the letter was drafted and finished in one lesson at the end of the unit. As a result, the unit received a quality score of 2.00.
Similarly, in a second-grade sub-unit from Curriculum 1 that was not the focal writing sub-unit, taught in the fall, there were not daily opportunities to write, nor sustained writing times in most lessons, but there was time for sustained writing in three lessons towards the end of the unit (lessons 10-12). There was also often choice or collaboration in the short writing tasks children did. Some of these other writing opportunities, occurring most days before lessons 10-12, included writing questions children had at the start of the unit about the driving question, writing a retell of the beginning, middle, and end of a shared read aloud, and writing a response to reading by answering questions about the story. These examples of daily writing time were short periods of time and often did not attend to motivation and engagement factors outside of constrained choice or working with a partner, and as a result the unit received a quality score of 2.00. Curriculum 3 had units with a mean quality score of 3.50 (SD = 0.58) and each unit scoring 3.00 or higher, meaning that each unit had some opportunity for writing that was aligned with Essential 1 and considered authentic motivation and engagement. It is important to note that this was often across an entire two-and-a-half-hour English Language Arts block of instruction and broken up into a variety of writing times. For instance, in one day, children’s writing consisted of five separate writing tasks. First, students engaged in dictated spelling of words and sentences, then students wrote a few sentences about how images in a text they were analyzing helped their understanding. Next, students annotated short paragraphs of text used during reading instruction for facts and signal words and then wrote a short paragraph about the text. Afterwards, students worked with partners to write sentences with possessive nouns, and finally there was a suggestion for independent writing time for children on a teacher chosen topic for an unspecified amount of time. As a result of daily instruction like this, the quality score for this unit was 4.00 overall. Even in this single day of instruction, children had many opportunities to write, and sometimes within those daily opportunities the curricula attended to children’s motivation and engagement via things such as partner work time or having choice, but there was not much sustained writing time for children to focus on a piece of writing. The lack of sustained time on a piece of the child’s own writing was common in this curriculum, as children generally worked on small parts of one single written piece across all fifteen days in a unit, but often for short periods of time. Curriculum 2 also had a mode score of 3.00, as seen in Table 13 above. What should be noted for both Curriculum 2 and Curriculum 3 is that in some of the daily lessons, the alignment to Essential 1 was found only in constrained choice or the ability to collaborate with a partner, but not in authentic purposes for writing beyond the assignment, as the example assignments were writing to prompts. Partner work usually consisted of checking with a partner or sharing the work they had completed on the most recent step before coming back together for the teacher to model the next step. Curriculum 4 units had more attention to authentic reasons for writing and broader opportunities for choice, including choosing a topic, choosing a partner, choosing what type of text feature to incorporate, choosing which piece of writing to work on that day, and more.
As a result, the highest scoring curriculum unit, with a score of 5.00, was in Curriculum 4. It received this score for its sustained daily time for writing and attention to motivation and engagement. In the unit scoring a 5.00, a spring unit for first graders, children wrote full informational texts. Children were allowed to choose their own topic, work with partners to expand and rehearse their text, and then work with partners again to revise and publish their text. Throughout the unit, they were given opportunities to collaborate, and the authentic purpose for writing the text was to teach other people something they were already experts at. The teacher’s sample text and lesson plans found in the unit came back to this point in multiple lessons, including anchoring their drafting and revising in this goal. This focus on others’ reading of their text grounded children in an authentic reason for being clear in their writing beyond the assignment. Relatedly, the culminating activity in the unit was a presentation where children presented their text to a family member or other invited guest. The curriculum findings suggest that there was variation in the quality of the daily time for children’s writing, as well as in the ways curriculum materials accounted for their motivation and engagement. Additionally, while some curricula and curriculum units had children engaged in daily writing, very few curriculum units had children writing for sustained periods of time daily. The exception was Curriculum 4.

Teachers’ Enacted Instruction for Daily Time to Write with Attention to Motivation and Engagement Within and Across Districts (Research Question 2)
In this section, the presence and quality of daily time for writing found in teachers’ enacted instruction are presented. Table 14 presents summary information about the rate of presence and the quality of the overall practice for teachers’ enacted instruction.

Table 14
Teachers’ Enacted Instruction for Daily Time for Writing Aligned to Motivation and Engagement by District
                     Presence   Quality Mean        Mode         Range
District A (n=18)    0.94       2.00 (SD = 1.00)    1.00         1.00-4.00
District B (n=12)    1.00       1.83 (SD = 0.83)    1.00         1.00-3.00
District C (n=12)    1.00       2.42 (SD = 1.08)    2.00         1.00-4.00
District D (n=7)     1.00       2.43 (SD = 0.79)    3.00         1.00-3.00
All Districts        0.98       2.13 (SD = 0.96)    1.00, 2.00   1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Almost every classroom observation had daily opportunities to write or provided opportunities for children to see themselves as successful writers, have choice in writing, collaborate on writing, or have a purpose for writing beyond their classroom. The overall presence score across all districts for this bullet was 98 percent (see Table 14 above). This meant that children in most K-3 classrooms observed had some opportunity to write in their class each day. District A was the only district where an observation of instruction did not include daily time for writing aligned to motivation and engagement. The quality of the daily time for writing varied from district to district but was overall lower compared to other instructional practices, with an average score of 2.13 (SD = 0.96).
The highest performing district, District D, had a mean quality score of 2.43 (SD = 0.79), and the lowest scoring district, District B, had a mean quality score of 1.83 (SD = 0.83). District A and B had a mode score of 1.00, while District C and District D had slightly higher mode scores of 2.00 and 3.00, respectively. The range of scores was broad, with District A and District C scoring 1.00-4.00 and District B and District D scoring 1.00-3.00. However, the fact that all districts had one or more quality scores of 1.00 indicates that while children did have time to write daily, it was not always aligned to authentic motivation and engagement practices (choice, collaboration, seeing themselves as successful writers, etc.) and/or was not for a sustained period. The variation in enacted practice is explained in more detail in the subsections below as it relates to variation between districts and within districts. Variation Between Districts. While the overall presence and mean quality scores were clustered, and the ranges of scores were within one point of one another across districts, there was some variation between districts that is worth highlighting. District A was the only district to have an average presence of the practice below 100 percent. This meant that 1 out of 18 classroom observations did not have daily time for writing observed. The quality scores in this district averaged 2.00 (SD = 1.00), with a most common quality score of 1.00. District B had the lowest mean quality score of 1.83 (SD = 0.83), with a range of scores from 1.00-3.00. District C and District D had the highest mean quality scores for this instructional practice, with scores of 2.42 (SD = 1.08) for District C and 2.43 (SD = 0.79) for District D. While their mean quality scores were similar, they diverged when examining the other scores. District C had a broader range of scores and a lower mode score of 2.00, and thus a higher standard deviation, than District D, which had a smaller range of scores and a higher mode quality score of 3.00 and thus a smaller standard deviation. Interestingly, the modality of instruction for these districts was also different for some of the observations. District C was virtual for the first observation and then in a hybrid format because of the COVID-19 pandemic and related district-specific precautions. Hybrid instruction for this district was a format where some children came to school while others engaged in online synchronous and asynchronous work. This resulted in fewer in-person school days for children, small and socially distant classes, and some virtual instructional time each day for the second observation. District D, on the other hand, was in person for both observations. District D had a quality median and mode score of 3.00, indicating students were more likely to have sustained writing time and that writing time was more likely to have a purpose, but the district had no quality scores higher than 3.00. Variation Within Districts. As indicated by the wide range of quality scores in all districts, instantiation of daily time to write with alignment to motivation and engagement varied widely from classroom to classroom. To contextualize these findings, it is important to understand what high and low scoring examples looked like and to see variation within districts, not just across districts as has been shared previously. To highlight these scores, examples from two teachers in District A are shared below.
First, in a third-grade classroom, the teacher asked children to stop and jot new vocabulary from their texts on sticky notes and to complete a worksheet later in the lesson. However, this was not sustained writing time, nor was it aligned to the motivation and engagement factors outlined in Essential Practice 1 (e.g., having a reason for writing beyond the purpose of the class, collaborating with peers for meaningful reading and writing tasks, having choice in their reading and writing, etc.). Therefore, this example scored a 1.00. On the other end of the range of scores for District A, a first-grade teacher’s practice scored a 4.00 for quality. In the sub-unit she taught, children researched birds’ feathers and beaks. Within the lesson observed, the teacher supported children through their recall of information using anchor charts of the research they had already conducted on feathers and beaks. Then children chose whether they would like to write their informative paragraph (the writing focus for the day) on feathers or beaks. They completed a graphic organizer to begin drafting their informative paragraph by filling in the sentence “How does a bird’s ________ (beak/feathers) help it survive?” Then, the teacher supported children through the graphic organizer page to create their informative paragraph. Children wrote down a word to describe what they chose (a specific beak and/or specific feather) and then wrote a way that their specific beak or feather helped the bird survive. This observation received a score of 4.00 because children had sustained writing time of approximately 10-15 minutes. The task children had to do after they completed their writing task was not writing related, which meant some children wrote briefly while others had more time dedicated to writing within this fifteen-minute period. Children also had consistent choice throughout the creation of this planning sheet. This choice included selecting beak vs. feather, what type of beak or feather, what words to use to describe the beak or feather, and finally the purpose it served. Therefore, this example scored a 4.00 because there was some sustained writing time on a meaningful piece of writing (not a spelling list or worksheet) with attention to children’s motivation and engagement via choice. These two examples highlight the variation between classrooms within a district. Taken together with the findings shown in Table 14 and explained above, particularly the similar mean quality scores and ranges of scores, they indicate that variation between classrooms within the same district was almost as likely as variation across districts. While some districts had overall higher scores and higher mode scores, the range of scores consistently included scores of 1.00 and 3.00 or 4.00, meaning children across all districts were receiving instruction of varied quality for the enacted practice of daily time to write with attention to motivation and engagement.

Comparison of Presence and Quality Between the Curriculum and Teachers’ Enacted Practice for Daily Time to Write Aligned to Motivation and Engagement (Research Question Three)
In this section, a comparison of the presence and quality of daily time for writing instruction found in curriculum materials and teachers’ enacted instruction is presented.
Table 15 presents summary information about the presence and the quality of the overall practice for curriculum materials and enacted instruction as well as scores for individual curricula and teachers in each of the four districts in the study.

Table 15
Daily Time for Writing Aligned to Motivation and Engagement
                                                 Presence   Quality Mean        Mode         Range
District A
  Curriculum 1 (n=6)                             1.00       2.00 (SD = 0.00)    2.00         2.00
  Enacted Instruction with Curriculum 1 (n=18)   0.94       2.00 (SD = 1.00)    1.00         1.00-4.00
District B
  Curriculum 2 (n=6)                             1.00       2.67 (SD = 0.52)    3.00         2.00-3.00
  Enacted Instruction with Curriculum 2 (n=8)    1.00       1.83 (SD = 0.83)    1.00         1.00-3.00
District C
  Curriculum 3 (n=4)                             1.00       3.50 (SD = 0.58)    3.00, 4.00   3.00-4.00
  Enacted Instruction with Curriculum 3 (n=12)   1.00       2.42 (SD = 1.08)    2.00         1.00-4.00
District D
  Curriculum 4 (n=2)                             1.00       4.50 (SD = 0.71)    4.00, 5.00   4.00-5.00
  Enacted Instruction with Curriculum 4 (n=12)   1.00       2.43 (SD = 0.79)    3.00         1.00-3.00
All Districts
  All Curricula                                  1.00       2.83 (SD = 0.92)    2.00         2.00-5.00
  Districts A – D                                0.98       2.13 (SD = 0.96)    1.00, 2.00   1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Looking at Table 15, there is variation between the curriculum and the enacted practice in every column except the presence column. Of note, District A was the only district to have a classroom observation that did not contain this instructional practice at all. Additionally, this was the only district to have a comprehensive ELA (English Language Arts) curriculum that had units where there was not always daily writing instruction within an ELA block (see the explanation in the Daily Time for Writing Aligned to Motivation and Engagement Within and Across Curricula section above). There was some alignment between the instructional practices contained in the curricula and the enacted practice for District A as well. For instructional observations, this was evident in the rate of presence of 94 percent, versus other districts, which had mean presence scores of 100 percent. In observations, the range of scores was 1.00-4.00, with 1.00 being the most common score. The curriculum, similarly, had an identical mean quality score of 2.00 and 100 percent presence, but a much tighter range, with all scores of 2.00, meaning the curriculum consistently provided opportunities for writing but little or no attention to motivation or engagement, or provided some attention to motivation and engagement but little actual sustained time for writing. This finding, that teachers’ scores had wider and higher ranges for quality than their curriculum materials, suggests that teachers may implement or make changes to the practices contained within curricula that allow them to enact higher quality practices when it comes to daily time for writing and considering children’s interest and motivation. However, this was not true for all districts. District C and District D had the highest quality of this practice overall for both the curriculum and the enacted instruction (see the Quality Mean column in Table 15 above for both District C and D). In both districts, the adopted curriculum had children produce written pieces regularly and engage in writing daily. It is also important to note that, unlike Curriculum 2, both curricula also had a number of lessons that closely matched the number of days in a school year.
Curriculum 3 had 10 units with 3 weeks of lessons in each unit, totaling 150 lessons, or approximately 30 weeks of daily lessons. Each daily lesson contained a writing lesson that focused on meaningful writing as well as a grammar or spelling lesson. Curriculum 4 had 8 units of study ranging from 3-6 weeks long and covered 37 weeks of the school year. In juxtaposition, the four main units in Curriculum 1 each contained three sub-units that were between eight and fifteen lessons long, with only one sub-unit per unit focused on a written product. The total of these writing focused sub-units accounted for 9-12 weeks of the school year. And Curriculum 2 was a writing approach or system that had a total of 20-30 sample lessons (the total number of example lessons varied widely based on grade level), which only covered anywhere from 4-6 weeks of the school year. District D had the largest discrepancy between teachers’ enacted practice and the curriculum (Curriculum 4), with a difference in average quality of over 2.00 points. With the curriculum material scores ranging from 4.00-5.00 and teachers’ enacted practice ranging from 1.00-3.00, there was no overlap in the ranges, indicating that the curriculum materials provided higher quality plans for daily time for writing than teachers’ enacted practice demonstrated. District D, while the district with the most significant discrepancy, is a good example of the trend for this instructional practice, which is that, on average, the curriculum materials had higher quality scores for daily time for writing aligned to children’s motivation and engagement than teachers’ enacted instruction. However, the presence of writing in curriculum materials also seemed to influence whether teachers’ practice included writing, as indicated by the almost identical presence scores between the adopted curriculum materials and teachers’ enacted practice within each district.

Instructional Practice Four: Writing Process and Strategy Instruction
Research indicates that teaching children the writing process and strategies within that process supports better writing outcomes (e.g., Graham et al., 2012b). Both curriculum materials and teachers’ enacted instruction were examined for evidence of writing process and strategy instruction. Then, the results from both were compared to see how teachers’ enacted instruction may or may not have aligned to the written curriculum materials.

Overview of the Presence and Quality Scoring for Writing Process and Strategy Instruction
The fourth writing instructional practice focuses on teachers providing strategy instruction for children on how to complete a part of the writing process. If writing process and strategy instruction was present, the practice was also scored for quality. Presence scores indicate the rate of the instructional practice’s implementation (e.g., 0.57 = 57 percent of classrooms provided writing process and strategy instruction). The quality of the instruction was then scored on a 5-point scale to account for varied levels of quality of implementation, from beginning to exemplary (see Methods Chapter for more information on the development of the coding protocol and scoring). The coding protocol for writing process and strategy instruction can be found in Figure 17 below. As a reminder, the full protocol can be found in Appendix B.
Figure 17
Writing Process and Strategy Instruction Scoring Guidelines
Bullet #3: The teacher provides instruction in writing processes and strategies, particularly those involving researching, planning, revising, and editing writing.
Exemplary (5): The teacher provides children with explicit instruction in the writing process AND explicitly teaches a strategy or strategies within that process. This includes instruction in strategies used for researching, planning, revising, or editing their writing.
Strong (4)
Proficient (3): The teacher provides children with explicit instruction in the writing process OR explicitly teaches strategies within that process. This includes instruction in strategies for researching, planning, revising, or editing their writing.
Developing (2)
Beginning (1): The teacher briefly refers to writing processes and/or strategies (e.g., “remember to look at your graphic organizer”) but explicit instruction in writing processes and strategies is not evident.

Writing Process and Strategy Instruction Within and Across Curricula (Research Question One)
In this section, the presence and quality of writing process and strategy instruction found in curriculum materials are presented. Table 16 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials.

Table 16
Writing Process & Strategy Scores by Curriculum
                      Presence   Quality Mean        Quality Mode   Quality Range
Curriculum 1 (n=6)    0.67       2.50 (SD = 0.57)    2.00, 3.00     2.00-3.00
Curriculum 2 (n=6)    1.00       3.00 (SD = 0.00)    3.00           3.00
Curriculum 3 (n=4)    1.00       4.00 (SD = 0.82)    4.00           3.00-5.00
Curriculum 4 (n=2)    1.00       5.00 (SD = 0.00)    5.00           5.00
All Curricula         0.89       3.38 (SD = 0.96)    3.00           2.00-5.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the instructional practice was 5. The minimum quality score for the instructional practice was 1.

Teaching children the writing process is one of the four main recommendations made by writing researchers in the IES practice guide (Graham et al., 2012a) and is one of the instructional practices that many educators would associate with teaching writing. With that said, it may be unsurprising that three out of four curricula had a high presence score for this practice at 100 percent each and that the average presence score was 89 percent overall (see Table 16 above). The only curriculum that did not have 100 percent presence for this practice across the units analyzed was Curriculum 1, which had only 67 percent presence. Mean quality varied, with individual curriculum scores ranging from 2.50 (SD = 0.57) for Curriculum 1 to 5.00 (SD = 0.00) for Curriculum 4. Curriculum 2 scored a 3.00 (SD = 0.00) and Curriculum 3 scored a 4.00 (SD = 0.82). No curriculum units analyzed scored a 1.00 for quality overall, as indicated by the ranges of 2.00 or higher, and the mode scores were consistently 3.00 or higher, except for Curriculum 1, which had modes of 2.00 and 3.00. Taken together, these findings indicate that when the practice was present in the curriculum materials, it most often entailed explicitly teaching a component of the writing process (e.g., planning, revising) or a strategy (e.g., creating a web map for planning a main idea and supporting details). Similarities and Variation Across the Curricula. In addition to the findings about presence and quality above, there are qualitative considerations that may factor into this and other findings.
Three out of the four curricula analyzed had children focus on one piece of writing throughout the entire unit, which required them to move through an entire writing process from drafting to revision and sometimes publication. These curricula used short bursts of independent work time with high levels of explicit instruction and structure for what children were writing. The sequence of this instruction in these curricula generally looked like the following. First, the teacher provided a model and guided children through her thinking. Then, children had the opportunity to complete that step, usually in a shorter amount of time (which varied based on age and the length of the task the teacher modeled). In one second grade sample lesson in Curriculum 2, the lesson format followed a similar sequence to the one outlined above. Children watched their teacher brainstorm reasons to use in an expository text written in response to a prompt. Then children brainstormed their reasons, after which they came back together as a class, without debriefing or sharing, and the teacher modeled the next step of selecting two distinct kinds of reasons for their argument. Then children decided which of their brainstormed reasons to use on their graphic organizer and wrote them down. During many of these moments, it was not clear whether or how children knew where they were in the writing process based on the instructional sequence and example dialogues provided for teachers. Curriculum 1 was the only curriculum that did not have writing process or strategy instruction present during all the instructional times analyzed where writing took place. This is likely a result of the fact that it did not consistently have children writing one piece or genre of writing within a unit (see the discussion of this in the section on Daily Time for Writing above). For example, in one Curriculum 1 sub-unit for second graders, children were engaged in a series of close reads of a longer text and filled out a graphic organizer each day to retell the story. The teacher modeled orally retelling what happened first in the story in a summary statement based on the pages they read aloud that day. The teacher then wrote the retelling down on the graphic organizer children would also use. Then, the teacher handed out children’s graphic organizers and told them to complete the retelling of the beginning of the story orally and then to record it. While the other knowledge-building, comprehensive ELA curriculum, Curriculum 3, also had children writing in ways similar to the Curriculum 1 sequence described above, the difference was that in Curriculum 3 there was also a separate time each day when children were writing pieces that focused on a genre and moving through the entire writing process from the beginning stages of planning and drafting to revising and editing. Clarity around where children were in the writing process, coupled with strategy instruction on the step they were in, was found mostly in Curriculum 3 (mean quality of 4.00) and Curriculum 4 (mean quality of 5.00) but not regularly in the other two curricula analyzed. The following examples elaborate on what this looked like across curricula. First, in a first-grade unit in Curriculum 3, scoring at a high-quality level, children began by brainstorming in the early days of the three-week unit. This brainstorming was done with clarity around where they were in the writing process (the planning phase). Then they moved on to drafting from the brainstorming chart they completed.
Children orally rehearsed by speaking their draft aloud. Then they engaged in shared writing with their teacher to produce a strong title for their teacher’s piece and then, in turn, their own. Throughout this process, children were aware of where they were in the writing process, and the teacher modeled and guided children through how to do what they did that day. Some lessons, such as this one, contained prompts for conferring with children as they worked independently, such as “your title makes me curious to read more...” on the day they worked on producing strong titles. Children closed out the unit by sharing their writing with their class, using teacher-provided sentence stems to respond to their peers. Thus, this unit had an overall score of 4.00 for this practice. Scoring in the middle range of quality scores was a third-grade unit in Curriculum 2 on expository writing. In this unit, there was process and strategy instruction at multiple points, beginning with brainstorming ideas on a topic and stance and moving to explaining and expanding on reasoning towards the middle and end of the unit. During this unit, the teacher showed children how to choose transition words that sounded like the teacher’s voice, and then children orally rehearsed their transition words. The teacher then coached the children through the drafting by explaining that they could change their words to ones they heard from a partner or in the whole class share if they liked something else better than the words they chose. Arguably, if the teacher followed this instructional sequence, children would have explicit strategies to complete the steps for where they were in the writing process but would not necessarily know where they were in the writing process, as it was not discussed explicitly. Therefore, this practice scored a 3.00 for quality. On the lowest scoring end of this practice, a first-grade unit in Curriculum 1, to be taught in the spring, contained a few lessons with explicit writing process and strategy instruction. In the first opportunity, a few lessons in, the materials had the teacher engage in a think aloud about how to use what is in the text to inform writing. In a subsequent lesson, the teacher text modeled how to take notes and make sentences into smaller phrases when students drafted their paragraphs. The teacher text also modeled completing a paragraph planning page in a unit notebook. Finally, in one of the last lessons in the unit, the teacher text reminded children to publish in their best handwriting and differentiated between revising and editing via teacher modeling. Overall, this unit scored a 2.00 because the materials did not have the teacher providing explicit instruction in the writing process or teaching a strategy for the process. However, children did engage in the process, and there was some explicit instruction in how to complete the task children were working on that day. Overall, these findings suggest that the curricula mostly included writing process and strategy instruction and that this instruction was generally written at a proficient level. Only one curriculum (Curriculum 1) did not score at a 3.00 or above for its quality mean, and this curriculum also had the only presence score below 100 percent, which indicated that not all units analyzed had presence of writing process and strategy instruction.
Teachers' Enacted Writing Process & Strategy Instruction Within and Across Districts (Research Question Two)

In this section, the presence and quality of writing process and strategy instruction found in teachers' enacted instruction are presented. Table 17 presents summary information about the rate of presence and the quality of the overall practice for teachers' enacted instruction.

Table 17
Teacher Instruction on Writing Process & Strategies Scores by District
District A (n=18): Presence 0.38; Quality Mean 2.43 (SD = 1.34); Quality Mode 1.00; Quality Range 1.00-4.00
District B (n=12): Presence 0.50; Quality Mean 1.83 (SD = 0.41); Quality Mode 2.00; Quality Range 1.00-2.00
District C (n=12): Presence 0.92; Quality Mean 2.09 (SD = 1.04); Quality Mode 1.00; Quality Range 1.00-4.00
District D (n=7): Presence 0.57; Quality Mean 1.75 (SD = 0.96); Quality Mode 1.00; Quality Range 1.00-3.00
All Districts: Presence 0.57; Quality Mean 2.07 (SD = 1.02); Quality Mode 1.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

Overall presence across districts for this bullet was 57 percent, meaning just over half of teachers provided writing process instruction during our observations (see Table 17 above). The wide variation in the presence of this instructional practice across districts was noticeable while coding the videos and in the subsequent analyses. In District C, on the high end, 92 percent of observations included writing process and strategy instruction, while in District A, at the lowest end of the spectrum, just 38 percent of observations did. The quality of this instruction also varied, averaging 2.07 (SD = 1.02). The highest scoring district, District A, had a mean quality score of 2.43 (SD = 1.34). The lowest scoring district, District D, had a mean quality score of 1.75 (SD = 0.96). The range of scores was narrowest in District B, at 1.00-2.00. However, District B was the only district with a mode above 1.00, with a mode of 2.00. While there were some similarities between districts, such as the generally lower quality of enacted instruction, there were also some differences across districts, which are shared below.
Variation Across Districts. When looking beyond overall scores and examining scores across districts, there was variation between districts. The highest and lowest mean quality ratings for this practice were 2.43 (SD = 1.34) in District A and 1.75 (SD = 0.96) in District D. This is the inverse of teachers' enacted practice of daily time to write with attention to motivation and engagement, for which teachers' enacted instruction in District D had the highest mean quality ratings. Teacher observations in District D also consistently had lower scores, with the second lowest range of 1.00-3.00 and a mode score of 1.00. As far as presence, the teachers in District A were the least likely to use this instructional practice, with teachers in District B following close behind. While teachers in District A had a high quality mean score, they also had the lowest presence score across all districts for this bullet, with just 38 percent of classroom observations including writing process and strategy instruction. District A also had the widest possible range of scores (1.00-4.00). Teachers in District B had the smallest range, 1.00-2.00, and a mean quality score of 1.83 (SD = 0.41).
Teachers in District C were the most likely to provide instruction in writing processes and strategies, with a mean presence score of 0.92, meaning that in almost every observation teachers in this district provided some writing process and strategy instruction. District C also had a wide quality range, with quality scores between 1.00 and 4.00. The following examples illustrate higher and lower scoring enactments of writing process and strategy instruction in practice.
In a high scoring example from District A, a teacher supported third-grade children through the drafting portion of the writing process. The teacher specifically taught children the language for what they would be doing ("creating a sense of closure") and then had children produce closing statements for the endings of their own individual narratives. They worked on a drafting planning page, talked with partners about their ideas, and then shared them with the whole class. The teacher used a model text of her own writing based on a chapter book they had read aloud; she guided them through completing the lesson objective of creating a sense of closure in her writing. The teacher also discussed how children could use temporal words to indicate the end and supported children to include temporal words in their own closing statements on their individual planning pages. This example scored a four, and not a five, per the scoring guide in Figure 4 above, because there was not a discussion of when and why children should use this strategy in their writing.
In contrast, a third-grade teacher in District B had children edit sentences in their own writing during individual writing conferences, an instructional practice which scored much lower. During the three conferences observed, the teacher said, "we are gonna go through each sentence..." and had children reread the text aloud. However, as they read the text, the teacher corrected the mistakes and told children what to do to fix them. Sometimes she asked questions to elicit their ideas. However, there was no attention to the writing process and no strategy instruction for completing the editing process. As a result, this example scored a 1.00.
Variation Within Districts. Interestingly, District C had a different instructional modality in the first set of observations. As noted, the district used online instruction due to the COVID-19 pandemic and had lower scores for the six observations conducted during this time period. There were three scores of 1.00, one score of 2.00, one score of 3.00, and one teacher for whom the practice was not present at all and thus did not receive a quality score. The second set of observations, done in person, had all teachers using this instructional practice and higher average scores. There was one score of 1.00, two scores of 2.00, two scores of 3.00, and one score of 4.00. It is unclear whether this was due to gradual shifts in instructional focus across the year or whether modality also had an influence.
Taken together, the findings on teachers' enacted instruction for writing strategy and process instruction indicate that teachers' enacted instruction had a wider range and more instances of lower quality instruction than the curricular materials. Additionally, teachers' enacted practice had lower presence for writing process and strategy instruction than the curriculum in all districts.
Presence and Quality Between the Curriculum and Teachers' Enacted Practice for Writing Process and Strategy Instruction (Research Question Three)

In this section, a comparison of the presence and quality of writing process and strategy instruction found in curriculum materials and teachers' enacted instruction is presented. Table 18 presents summary information about the presence and the quality of the overall practice for curriculum materials and enacted instruction as well as scores for individual curricula and teachers in each of the four districts in the study.

Table 18
Writing Process & Strategy Instruction
District A
  Curriculum 1 (n=6): Presence 0.67; Quality Mean 2.50 (SD = 0.57); Quality Mode 2.00, 3.00; Quality Range 2.00-3.00
  Enacted Instruction with Curriculum 1 (n=18): Presence 0.38; Quality Mean 2.43 (SD = 1.34); Quality Mode 1.00; Quality Range 1.00-4.00
District B
  Curriculum 2 (n=6): Presence 1.00; Quality Mean 3.00 (SD = 0.00); Quality Mode 3.00; Quality Range 3.00
  Enacted Instruction with Curriculum 2 (n=8): Presence 0.50; Quality Mean 1.83 (SD = 0.41); Quality Mode 2.00; Quality Range 1.00-2.00
District C
  Curriculum 3 (n=4): Presence 1.00; Quality Mean 4.00 (SD = 0.82); Quality Mode 4.00; Quality Range 3.00-5.00
  Enacted Instruction with Curriculum 3 (n=12): Presence 0.92; Quality Mean 2.09 (SD = 1.04); Quality Mode 1.00; Quality Range 1.00-4.00
District D
  Curriculum 4 (n=2): Presence 1.00; Quality Mean 5.00 (SD = 0.00); Quality Mode 5.00; Quality Range 5.00
  Enacted Instruction with Curriculum 4 (n=12): Presence 0.57; Quality Mean 1.75 (SD = 0.96); Quality Mode 1.00; Quality Range 1.00-3.00
All Curricula: Presence 0.89; Quality Mean 3.38 (SD = 0.96); Quality Mode 3.00; Quality Range 2.00-5.00
Districts A-D: Presence 0.57; Quality Mean 2.07 (SD = 1.02); Quality Mode 1.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

While most curriculum units analyzed had writing strategy and/or process instruction each day, as indicated by the high presence scores by district seen in Table 18 above, teachers' enacted instruction had much lower presence, with just over half of teachers using writing strategy and process instruction. There were other quality and qualitative differences beyond presence between the curriculum and the enactment that are worth noting. First, the quality of this practice was much lower in implementation than in the curriculum materials. The mode quality scores were lower by at least one point in each district and overall, and the mean quality scores were lower in every district as well. It warrants reminding that teachers' enacted practice was captured at two separate times during the year, while curriculum scores covered more than one lesson (see Methods section). Since three out of four curricula had children engaging in strategy lessons that moved them through the writing process daily, and two of these curricula had daily lessons for writing, the lower presence scores mean that teachers' enacted practice in these districts did not mirror the presence found in the curricular materials.
District A teachers provided less writing process and strategy instruction, with only 38 percent presence in observations, and yet had the highest mean quality rating of 2.43 (SD = 1.34). Interestingly, this district also had the closest quality score to its curriculum quality score, with only a 0.07 difference between the two; the curriculum quality scored an average of 2.50 (SD = 0.57). Similarly, Districts B, C, and D all had variations from the quality scores of their curriculum to the implementation. All three had higher scoring curricular materials than teachers' enacted practice. District D had the most dramatic difference.
District D had mean quality scores that varied by over three points: the curriculum was over three points higher on average, and District D had the lowest quality of all enacted practice but the highest quality of the curriculum materials. This suggests that if teachers in District D had followed the teaching guidance in their curricular materials, children would have received higher quality writing process and strategy instruction. District C had almost a two-point difference and District B had just over a one-point difference between curriculum quality and teachers' enacted practice. Again, these findings suggest that teachers have materials that, if followed, offer higher quality writing process and writing strategy instruction than the instruction that was provided. When taken in combination with the lower presence in teachers' enacted practice, there is something interesting happening in teachers' writing process and strategy instruction that cannot necessarily be explained by the curriculum materials the district has adopted.

Instructional Practice Five: Use of Mentor Texts

Research indicates that using mentor texts to help children understand text structure and genre features supports better writing outcomes (e.g., Traga Philippakos et al., 2023). Both curriculum materials and teachers' enacted instruction were examined for evidence of the use of mentor texts. Then, the results from both were compared to see how teachers' enacted instruction may or may not have aligned to the written curriculum materials.

Presence and Quality Scoring for Use of Mentor Texts

The fifth writing instructional practice focuses on teachers providing children with text models of writing. If the use of a mentor text was present, it was also scored for quality. Presence scores indicate the rate of the instructional practice's implementation. The quality of the instruction was then broken down into a 5-point scale to account for varied levels of quality of implementation, from beginning to exemplary (see Methods Chapter for more information on the development of the coding protocol and scoring). The coding protocol for mentor text use can be found in Figure 18 below. The full protocol can be found in Appendix B.

Figure 18
Mentor Text Use Scoring Guidelines
Bullet #4: The teacher provides opportunities to study models of and write a variety of texts for a variety of purposes and audiences, particularly opinion, informative/explanatory, and narrative texts (real and imagined).
Exemplary (5): The teacher reads/shows a mentor text during writing instruction AND discusses the model with the children, including supporting children to discuss the purpose, audience, structure, and features of this type of text that children could include in their writing.
Strong (4)
Proficient (3): The teacher reads/shows a mentor text that is the same type of text that children are writing AND tells children the purpose, audience, structure, and features that should be included in their writing.
Developing (2): The teacher reads a mentor text but does not explain or discuss how this relates to children's own writing OR the mentor text is not aligned with the type of text that children are writing.
Beginning (1)

Overview of Use of Mentor Texts Within and Across Curricula (Research Question One)

In this section, the presence and quality of mentor text use found in curriculum materials are presented. Table 19 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials.
Table 19
Scores by Curriculum for Teacher Use of Mentor Texts
Curriculum 1 (n=6): Presence 0.83; Quality Mean 2.80 (SD = 0.45); Quality Mode 3.00; Quality Range 2.00-3.00
Curriculum 2 (n=6): Presence 1.00; Quality Mean 3.00 (SD = 0.00); Quality Mode 3.00; Quality Range 3.00
Curriculum 3 (n=4): Presence 1.00; Quality Mean 4.00 (SD = 0.82); Quality Mode 4.00; Quality Range 3.00-5.00
Curriculum 4 (n=2): Presence 1.00; Quality Mean 3.00 (SD = 0.00); Quality Mode 3.00; Quality Range 3.00
All Curricula: Presence 0.94; Quality Mean 3.18 (SD = 0.64); Quality Mode 3.00; Quality Range 2.00-5.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

Ninety-four percent of curriculum units analyzed called for teachers to engage children in using a mentor text during writing instruction. Only Curriculum 1 had less than 100 percent presence in the curriculum units analyzed. Quality for this instructional practice was above average overall (see Table 19 above), with an average quality score of 3.18 (SD = 0.64). The curricula varied in quality. Curriculum 3 had the highest average quality score, with a mean of 4.00 (SD = 0.82). Curriculum 1 had the lowest quality and presence scores: 83 percent presence and a mean quality score of 2.80 (SD = 0.45). Across all curriculum units analyzed, the mode was consistently 3.00, with only Curriculum 3 having a higher mode at 4.00. Additionally, the ranges of scores were also mostly at 3.00 or above, with only Curriculum 1 having a 2.00 in its range of scores.
Variation Across the Curricula. Most curricula included at least one of the following types of mentor texts in the same genre as children's writing: the teacher's writing, grade level or peer writing, and trade books. Not all curriculum materials used all three. In Curriculum 2, children did not use any widely available, published trade books, and teachers were not directed to use exemplars of other children's writing, though children did occasionally share their work with a partner. To give a more holistic sense of the type of mentor text use recommended via teacher text in the different curriculum materials, an example from each of the curricula follows.
In a Curriculum 4 unit for first graders, there were multiple mentor texts used throughout the unit. There was a teacher-created mentor text used in Lesson 3 and Lesson 4. In these lessons, the teacher's script suggested modeling adding a caption to the teacher's book and then asking children to do the same in their own books. In Lesson 5, the teacher script modeled sorting information from student work samples into a main topic and categories and then sent children back to do the same. In Lesson 8, there was a table of contents that the curriculum's teacher script had the teacher and children co-create. This table of contents was then used as a model for children to write their own tables of contents. In Lesson 9, the lesson had children search trade books aligned to the genre they were writing and use sticky notes to flag mentor text headings. In Lesson 13, the class revisited an informational text, decided if the author did something they would like to do in their own writing, and then went back to their own writing to add it in. In Lesson 15, children listened to the teacher read and model using her own mentor text and then stopped to recognize where they exclaimed or wondered something more.
Finally, in Lesson 16, the curriculum's teacher script used a trade book mentor text to teach children how to use comparisons in their own informational texts and explicitly asked children to name whether they wanted to add certain features to their texts based on what they found in the annotated mentor text. These examples all included ways that the children and teacher co-created mentor texts to highlight the various texts they would be required to write later in the lesson. This unit was one in which children consistently had mentor text exposure, and the teacher used mentor texts of her own writing, trade books, and children's writing. However, the curriculum did not have the teacher explicitly point out how and why the mentor texts were like the texts children would be writing; therefore, the unit scored a 4.00 overall for quality of mentor text use.
In a unit where children write an informational text in Curriculum 3, there are multiple examples of the teacher using mentor texts to support children. For instance, on the first day of the unit, the teacher script used a mentor text of an informative research report to help children see "key features and qualities of a strong research report." The teacher script then has the teacher model how to identify key features. The curriculum then guided the teacher to have children work together to identify other key features; children then worked independently to identify the main idea and supporting details and connect them to each key idea. There are similar examples across the unit, including a point at which the teacher text has the teacher explain to children that they will be writing an informative research report, so they need to brainstorm goods and services, after which children revisit the features of a good research report to close the lesson. After this lesson, the teacher text has students look through trade book mentor texts the teacher brought in on goods and services. To close out that week of lessons, the teacher text has children analyze an informational text on dairy farming and look at the use of transition/sequencing words (next, last, etc.) and then has the teacher model filling in her own planning chart for her draft. She then moves into using her planning chart to begin drafting. Throughout the examples provided above from just this first week of the unit, the teacher text consistently provided references to other texts, including trade texts and the teacher's own text that she created in front of children. The consistent reference to the features, type of text, and rationale for the use of specific features is why this example scored a 4.00 for quality.
In an expository unit from Curriculum 2, the teacher script has the teacher create examples of her own writing for each of the 14 steps of the writing process. This includes the teacher creating example reasons in response to a prompt to back her argument. The teacher text says that as she moves her reasons from her planning map to her draft, the teacher should explain to children that "the best reasons are those that differ from each other as well as those which are easier to elaborate." On each day, the teacher works with students to model creating a text that is the same text type and form that children will create at that step in the lesson. There is no use of children's texts as models or a trade mentor text. The teacher-created text model is the only mentor text in this unit, but it matches the text children are creating.
As is highlighted in the scoring schema in Figure 18, this example received a score of 3.00 because the teacher read a mentor text that was the same genre children were writing and told children the purpose, audience, text structure, and features they should include in their own writing.
In a fall third-grade unit from Curriculum 1, the teacher text had two instances of children using a mentor text. The first instance was when the teacher text had the teacher read aloud an informational paragraph on why polliwogs wiggle. The second instance was when children were using a note-catcher on their reading about glass frogs that the teacher modeled filling out partially and then children completed on their own. This example of the teacher text highlights that the children did have access to a mentor text on the topic they were going to be writing about, but the teacher did not highlight the text features, structure, or ways it would be like what children would be completing. This example unit scored a 2.00 overall and had only two instances of mentor text use.

Teachers' Enacted Mentor Text Use Within and Across Districts (Research Question Two)

In this section, the presence and quality of mentor text use found in teachers' enacted instruction are presented. Table 20 presents summary information about the rate of presence and the quality of the overall practice for teachers' enacted instruction by district.

Table 20
Scores by District for Teachers' Instruction for Use of Mentor Texts
District A (n=18): Presence 0.17; Quality Mean 2.00 (SD = 1.73); Quality Mode 1.00; Quality Range 1.00-4.00
District B (n=12): Presence 0.17; Quality Mean 2.50 (SD = 0.71); Quality Mode 2.00, 3.00; Quality Range 2.00-3.00
District C (n=12): Presence 0.25; Quality Mean 3.00 (SD = 1.00); Quality Mode 2.00, 3.00, 4.00; Quality Range 2.00-4.00
District D (n=7): Presence 0.00; Quality Mean NA; Quality Mode NA; Quality Range NA
All Districts: Presence 0.16; Quality Mean 2.50 (SD = 1.20); Quality Mode 2.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

As indicated by the low presence score of just 0.16, or 16 percent, teachers' enacted practice very rarely incorporated mentor texts (see Table 20 above). In total, our team observed only eight instances of this instructional practice out of forty-nine possible separate days of instruction. The range of quality scores was from 1.00 to 4.00 overall. The mean quality score for this practice was 2.50 (SD = 1.20). With such a small sample of teachers' enacted practice to analyze, trends across districts are a bit less meaningful. Nonetheless, variation in observations across and within districts is analyzed below.
Variation Across Districts. All districts except District D had some presence of the use of mentor texts in instruction. Mean quality was highest in District C, with a score of 3.00 (SD = 1.00), and lowest in District A, with a mean quality score of 2.00 (SD = 1.73). The most consistent quality scores came from District B, where there was a mean quality score of 2.50 (SD = 0.71). To demonstrate what the use of a mentor text looked like in action, below are a high and a low scoring example of this practice.
On the lower scoring end of the range of scores for use of a mentor text is an observation of a kindergarten teacher in District B. During small group writing instruction, the teacher had children write a text about plants.
The teacher told children that if they wanted to use two trade books they had already read to help them think about what they might like to write in their small group writing, they could. However, that was the extent of the teacher's enacted instruction. This was a low-quality implementation of the practice and scored a 2.00.
A higher scoring example of this practice was observed when a first-grade teacher in District C provided a mentor text for children during their persuasive writing lesson. The teacher created a model text of a brainstorming map with children and then connected how the model was related to what students should do. The trade book text she read was a series of persuasive letters back and forth between a boy and his mom. While reading aloud, the teacher stopped to point out the letters' greeting and taught children how to address a letter, to and from, when writing this type of text. After children listened to the mentor text and watched their teacher fill out a graphic organizer (circle map) highlighting the reasons included in the persuasive letters from the boy and his mother, they used a circle map to brainstorm reasons for their own letters. Since this example has the teacher using multiple models of text with children that have the types of features they should incorporate in their own writing and that are aligned to the genre children are working on, this example scored a 4.00 for quality.
Variation Within Districts. While the presence of the practice was consistent across three out of four districts, there was no consistency in the quality of teachers' use of mentor texts. For example, in one district all three observations had different quality scores of 1.00, 2.00, and 3.00, while another had three observations with scores of 2.00, 3.00, and 4.00. Additional analyses indicate there was not an increase in use of this practice by grade level (see Table 21 below). Taken together, these findings indicate that the likelihood teachers used this practice was low, that use was not correlated with the grade level they taught, and that when teachers did use the practice there was variability in the quality of implementation, with some implementing the practice at a low level of quality and others implementing it at a higher quality level.

Table 21
Analysis by Grade Level for Teachers' Use of Mentor Texts
Kindergarten: Total Observations at Grade Level 4; Observed Use of Mentor Text 1; Rate of Presence 0.25; All Quality Scores 2.00
First Grade: Total Observations at Grade Level 23; Observed Use of Mentor Text 3; Rate of Presence 0.13; All Quality Scores 1.00, 4.00, 4.00
Second Grade: Total Observations at Grade Level 12; Observed Use of Mentor Text 3; Rate of Presence 0.25; All Quality Scores 1.00, 2.00, 3.00
Third Grade: Total Observations at Grade Level 10; Observed Use of Mentor Text 1; Rate of Presence 0.10; All Quality Scores 3.00
Note: The maximum quality score for this instructional practice was 5. Instructional practices not observed were scored as 0. 0s were excluded from this analysis.

Comparison of Presence and Quality Between the Curriculum and Teachers' Enacted Practice for Mentor Text Use (Research Question Three)

In this section, a comparison of the presence and quality of mentor text use found in curriculum materials and teachers' enacted instruction is presented. Table 22 presents summary information about the presence and the quality of the overall practice for curriculum materials and enacted instruction as well as scores for individual curricula and teachers in each of the four districts in the study.
Table 22
Mentor Text Use
District A
  Curriculum 1 (n=6): Presence 0.83; Quality Mean 2.80 (SD = 0.45); Quality Mode 3.00; Quality Range 2.00-3.00
  Enacted Instruction with Curriculum 1 (n=18): Presence 0.17; Quality Mean 2.00 (SD = 1.73); Quality Mode 1.00; Quality Range 1.00-4.00
District B
  Curriculum 2 (n=6): Presence 1.00; Quality Mean 3.00 (SD = 0.00); Quality Mode 3.00; Quality Range 3.00
  Enacted Instruction with Curriculum 2 (n=8): Presence 0.17; Quality Mean 2.50 (SD = 0.71); Quality Mode 2.00, 3.00; Quality Range 2.00-3.00
District C
  Curriculum 3 (n=4): Presence 1.00; Quality Mean 4.00 (SD = 0.82); Quality Mode 4.00; Quality Range 3.00-5.00
  Enacted Instruction with Curriculum 3 (n=12): Presence 0.25; Quality Mean 3.00 (SD = 1.00); Quality Mode 2.00, 3.00, 4.00; Quality Range 2.00-4.00
District D
  Curriculum 4 (n=2): Presence 1.00; Quality Mean 3.00 (SD = 0.00); Quality Mode 3.00; Quality Range 3.00
  Enacted Instruction with Curriculum 4 (n=12): Presence 0.00; Quality Mean NA; Quality Mode NA; Quality Range NA
All Curricula: Presence 0.94; Quality Mean 3.18 (SD = 0.64); Quality Mode 3.00; Quality Range 2.00-5.00
Districts A-D: Presence 0.16; Quality Mean 2.50 (SD = 1.20); Quality Mode 2.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

When examining the difference between the curriculum recommendations and teacher practice, the use of mentor texts stands out as an area of clear difference between what is recommended and what teachers did. In most of the curriculum units examined, many lessons contained the use of a mentor text. Whether it was a modeled teacher text, student exemplars, or a trade book, mentor text use was commonly present. The rate of presence of mentor text use across a unit was 0.94, while the rate for teachers' enacted practice of mentor text use within a day of instruction was only 0.16 (see Table 22 above). While some lessons within units did not include mentor text use, it seems likely that mentor text use should have been higher in teachers' enacted practice than the roughly 16 percent of observations in which it occurred, especially considering that in some curricula a mentor text was present via teacher modeling in every lesson. Some curricula, such as Curriculum 2 and Curriculum 3, had mentor text use present in each step of each day's lesson via the teacher-modeled text, and yet teachers in District B, where Curriculum 2 was used, had a mean presence score of 0.17. Similarly, Curriculum 4 units had almost daily use of mentor texts, and teachers using that curriculum in District D had zero observed instances of mentor text use. This indicates that something other than curriculum support and materials and grade level influences teachers' use of mentor texts during writing instruction.

Instructional Practice Six: Explicit Instruction in Component Skills

Research indicates that explicitly teaching children component skills such as handwriting and spelling supports better writing outcomes (e.g., Graham et al., 2012b; Troia, 2014). Both curriculum materials and teachers' enacted instruction were examined for evidence of explicit instruction in component skills. Then, the results from both were compared to see how teachers' enacted instruction may or may not have aligned to the written curriculum materials.

Presence and Quality Scoring for Explicit Instruction in Component Skills (Research Question One)

The sixth writing instructional practice focuses on teachers providing children with instruction on components of writing (e.g., handwriting, sentence construction). Explicit instruction in component skills contained six separate instructional areas of focus: letter formation (A), spelling strategies (B), capitalization/punctuation (C), sentence construction (D), keyboarding (E), and word processing (F).
See Table 23 for a definition and description of each type of component skills instruction.

Table 23
Component Skills
Letter Formation. Definition: Instruction in what strokes form upper and lowercase letters. Example: The teacher explains that children will use a ball and stick to form the lowercase letter a.
Spelling Strategies. Definition: How to correctly spell words using letter patterns, word associations, etc. Example: The teacher explains that the letters c and h make the sound /ch/, like in chicken, when they are combined.
Capitalization and Punctuation. Definition: Instruction in when to capitalize words and when to use different types of punctuation. Example: The teacher explains that when students use a question word, like who, they should end the sentence with a question mark.
Sentence Construction. Definition: Instruction on how to create, combine, and expand sentences. Example: The teacher explains that when students are writing a sentence that lists more than two items, they can use commas to separate the items and put the word "and" before the last one.
Keyboarding. Definition: Instruction on appropriate hand and finger placement on a keyboard and typing practice. Example: The teacher provides time in a keyboarding program that has children practice typing combinations of the letters g and r to practice using an extension of their left index finger.
Word Processing. Definition: Instruction on how to type a piece of writing up in a word processing program (e.g., Microsoft Word, Google Docs). Example: The teacher provides time for children to type up their final writing product before the publication day.

Instruction in this practice was broken down into a 5-point scale to account for varied levels of proficiency, from beginning to exemplary (see Methods Chapter for more information on the development of the coding protocol and scoring). The coding protocol for explicit instruction in component skills can be found in Figure 19 below. The full protocol can be found in Appendix B.

Figure 19
Explicit Instruction in Component Skills Scoring Guidelines
Bullet #5: The teacher provides explicit instruction in letter formation, spelling strategies, capitalization, punctuation, sentence construction, keyboarding, and word processing.
Exemplary (5): The teacher explains AND models AND guides children to practice one or more age-appropriate writing strategy. The teacher engages children in discussion of how and why to use the strategy.
Strong (4)
Proficient (3): The teacher does two of the following three things: models, explains, or guides children to practice one or more age-appropriate instructional strategy in the context of a writing lesson. The teacher may not complete a full gradual release cycle of explaining, modeling, guided and independent practice.
Developing (2)
Beginning (1): The teacher may briefly use or reference a writing strategy while children observe and listen, but these are mentioned briefly or in passing.

Explicit Instruction in Component Skills (Research Question One)

In this section, the presence and quality of explicit instruction in component skills found in curriculum materials are presented. Table 24 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials analyzed for this study.
Table 24
Scores by Curriculum for Explicit Instruction on Component Skills
Curriculum 1 (n=6): Presence 0.83; Quality Mean 2.60 (SD = 0.89); Quality Mode 3.00; Quality Range 1.00-3.00
Curriculum 2 (n=6): Presence 1.00; Quality Mean 2.00 (SD = 1.01); Quality Mode 1.00, 3.00; Quality Range 1.00-3.00
Curriculum 3 (n=4): Presence 1.00; Quality Mean 3.00 (SD = 0.82); Quality Mode 3.00; Quality Range 2.00-4.00
Curriculum 4 (n=2): Presence 1.00; Quality Mean 3.50 (SD = 0.71); Quality Mode 3.00, 4.00; Quality Range 3.00-4.00
All Curricula: Presence 0.94; Quality Mean 2.59 (SD = 1.00); Quality Mode 3.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

There was large variation in the rates of presence of explicit instruction in component skills found in curriculum materials. Overall, when averaged across component skills, most of the curricula provided some instruction in one or more of the component skills, with a presence score of 94 percent for the instructional practice of explicit instruction in component skills overall (Table 24). The mean quality for instruction in component skills was 2.59 (SD = 1.00) across all curriculum units analyzed. The highest average quality score was 3.50 (SD = 0.71) for Curriculum 4 and the lowest was 2.00 (SD = 1.01) for Curriculum 2. The overall mode quality score of 3.00 indicates that most curriculum materials analyzed provided explicit instruction in one or more component skills and that this instruction generally contained two of the following three approaches: explanation, modeling, and/or guidance of children while they completed the practice. Curricula 1 and 2 were the only curriculum materials to have units that scored a 1.00, which indicated that the teacher used or referenced one of these categories but did not explicitly teach children the writing strategy. The range of quality scores indicates that most curricula had some proficient scores (3.00 or higher) and some scores that were less than proficient (lower than 3.00) for component skills.
Importantly, these results represent an average of the quality of implementation across all instances of explicit component skills instruction found in the units analyzed. For example, suppose a curriculum had scripting for the teacher to provide sentence construction instruction that scored a 5.00 on average and was found daily in lessons, but the same unit also had two instances of teacher text referencing different types of punctuation marks without explicit instruction about how or when to use each mark, which scored a 1.00 on average. The instances of explicit instruction in component skills would be averaged; in this instance, the composite score averaged out to a 4.00 to account for the low quality but limited presence and relatively little time spent on the punctuation instruction. If an instructional practice was not observed, the score of zero was not factored in, but it was accounted for in rates of presence.
Variation in Presence of Explicit Instruction in Component Skills Across Curricula. In the section above, the presence and quality of explicit instruction in component skills found in curriculum materials were presented to answer research question one. However, all component skills were analyzed together. To further differentiate between the curricula, Table 25 below presents summary information about the rate of presence of each of the specific component skills found in each of the four curricula analyzed and overall.
Table 25
Overall Presence of Explicit Instruction on Component Skills by Curricula
Handwriting: Curriculum 1 0.17; Curriculum 2 0.00; Curriculum 3 1.00; Curriculum 4 0.00; Overall 0.27
Spelling: Curriculum 1 0.33; Curriculum 2 0.00; Curriculum 3 1.00; Curriculum 4 1.00; Overall 0.44
Capitalization & Punctuation: Curriculum 1 0.17; Curriculum 2 0.50; Curriculum 3 1.00; Curriculum 4 1.00; Overall 0.56
Sentence Construction: Curriculum 1 0.83; Curriculum 2 0.83; Curriculum 3 1.00; Curriculum 4 0.50; Overall 0.83
Keyboarding: Curriculum 1 0.00; Curriculum 2 0.00; Curriculum 3 0.25; Curriculum 4 0.00; Overall 0.06
Word Processing: Curriculum 1 0.17; Curriculum 2 0.00; Curriculum 3 0.00; Curriculum 4 0.00; Overall 0.06
Note: Presence for practices was coded as 0 or 1. Instructional practices not observed were scored as 0.

With overall rates of presence of 83 percent and 44 percent respectively, sentence construction and spelling instruction were two of the most common types of explicit instruction in component skills found across all the curriculum units analyzed (see the last column in Table 25 above). Sentence construction was present in 83 percent of curriculum units and found across all four curricula. Notably, Curriculum 1 had a separate, spelling- and phonics-focused curriculum that was not analyzed because, on the study surveys used to identify curriculum materials for this study, teachers did not report using it in addition to the core curriculum to teach writing. Thus, Curriculum 1 could have higher presence for the spelling component skills measured here if that instructional resource contained consistent spelling instruction. Looking beyond the numbers and examining the type of instruction contained, Curriculum 3 included explicit and sequential instruction in spelling patterns. Instruction in Curriculum 4 was focused on helping children listen for sounds in words and use high frequency word walls, but not on teaching specific spelling skills beyond that.
The second most widespread practice was explicit instruction around capitalization and punctuation, represented in 56 percent of curriculum materials, followed by handwriting, which was represented in 27 percent of curriculum materials. It is worth noting that in Curriculum 2, which included Kindergarten units analyzed for this study, there was no presence of handwriting instruction. This was also true for Curriculum 4, which had only first-grade units analyzed. Curriculum materials generally did not include the practices of teaching keyboarding or word processing, as evidenced by the presence scores of 6 percent overall for each. In fact, across all curriculum units analyzed, Curriculum 3 and Curriculum 1 were the only two curricula that incorporated any opportunities for keyboarding or use of a word processor, with the practice occurring one time each. For context, in Curriculum 3 there was a reference to a supplemental resource that contained a lesson on keyboarding that the teacher could use. That reference was noted in one day of one three-week unit. Similarly, in Curriculum 1, children were invited to use a word processor in one lesson to type up their written product, but there was no instruction for the teacher to provide on how to do that.
Variation in Explicit Instruction in Individual Component Skills Within Curricula. Just as there was variation across curricula in terms of which component skills were addressed, there was also variation within each curriculum. For instance, in Curriculum 1, there was infrequent instruction in handwriting, spelling, capitalization & punctuation, and word processing, and none in keyboarding.
However, there was presence of explicit instruction in sentence construction in all except one of the six curriculum units analyzed. This was true across all grade levels analyzed. In Curriculum 2, there were similar trends: there was low frequency of instruction, or no presence of instruction at all, for many component skills such as handwriting, spelling, keyboarding, and word processing. This was true across all grade levels analyzed. However, there was presence in half or more of the units for capitalization and punctuation instruction, and in five out of six units there was intentional instruction in sentence construction. The two Curriculum 4 units analyzed had frequent presence in daily lessons of instruction in multiple component skills, but none for keyboarding, word processing, or handwriting. Additionally, while spelling instruction was present in this curriculum, it was not explicit and sequential instruction in spelling patterns, as noted in the previous section.
Curriculum 3 had the most balanced representation of the component skills. All the components except word processing were included, and four of the six components (a, b, c, and d) were included in 100 percent of units reviewed. Curriculum 3 even included a mention of keyboarding. Many of these components, specifically spelling, sentence construction, and capitalization and punctuation, took place almost daily. Handwriting work was usually present once or twice a week and was part of an additional resource provided by the curriculum rather than built into the overall structure of the standard lesson or day. In the written materials, this resource was referenced and recommended in a sidebar. Additionally, capitalization, punctuation, and sentence construction instruction were provided at a separate time from when children were working on their own writing. Therefore, they were not part of the larger writing process, but rather took place during a separate skills-focused instructional block.

Teachers' Enacted Explicit Instruction in Component Skills Within and Across Districts (Research Question Two)

In this section, the presence and quality of explicit instruction in component skills found in teachers' enacted instruction are presented. Table 26 presents summary information about the rate of presence and the quality of the overall practice for teachers' enacted instruction by district.

Table 26
Scores by District for Teachers' Explicit Instruction in Component Skills
District A (n=18): Presence 0.67; Quality Mean 2.17 (SD = 1.27); Quality Mode 1.00; Quality Range 1.00-4.00
District B (n=12): Presence 0.75; Quality Mean 1.89 (SD = 1.05); Quality Mode 1.00; Quality Range 1.00-3.00
District C (n=12): Presence 0.92; Quality Mean 2.55 (SD = 0.82); Quality Mode 3.00; Quality Range 1.00-4.00
District D (n=7): Presence 0.71; Quality Mean 2.40 (SD = 1.34); Quality Mode 1.00; Quality Range 1.00-4.00
All Districts: Presence 0.76; Quality Mean 2.24 (SD = 1.09); Quality Mode 1.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

The overall presence of explicit instruction in component skills was 76 percent, as seen in the All Districts row of Table 26. In observations, teachers often taught one or more of the component skills of writing but may not have done so at a very high level of quality. While the presence of explicit instruction in component skills was high, the overall quality scoring on this practice was lower, with an average score of 2.24 (SD = 1.09).
The mean quality scores for enacted instruction all fell within a 0.75-point range. The mode score was 1.00 for all districts except District C, which had a mode score of 3.00. The quality range included scores from 1.00 to 4.00 for all districts except District B, which had a tighter range of 1.00-3.00. This means that enacted instruction in all districts ranged from higher to lower quality levels of implementation, depending on the observation.
While there were some similarities across districts, there were also variations. Teachers in District C were most likely to provide explicit instruction in the focal component skills of writing. In District C, teachers' enacted instruction had a rate of presence of 92 percent, meaning that teachers' enacted instruction contained explicit instruction in one or more of the six component skills accounted for in this study in almost every observation done in that district. This was a much higher percentage than in the other districts in the study. In terms of quality, there was also variation across the districts. District B had the lowest mean quality score, 1.89 (SD = 1.05). District A had the next lowest mean quality score, with an average of 2.17 (SD = 1.27). District D had the next highest mean score, with a score of 2.40 (SD = 1.34). Finally, District C scored an average of 2.55 (SD = 0.82).
Variation in Presence of Explicit Instruction in Component Skills Across Districts. Table 27 presents summary information about the presence of each component skill in teachers' enacted instruction.

Table 27
Rate of Presence of Explicit Instruction on Component Skills by District
Handwriting: District A 0.11; District B 0.33; District C 0.33; District D 0.33; Overall 0.24
Spelling: District A 0.50; District B 0.50; District C 0.75; District D 0.42; Overall 0.59
Capitalization & Punctuation: District A 0.22; District B 0.50; District C 0.17; District D 0.67; Overall 0.35
Sentence Construction: District A 0.17; District B 0.42; District C 0.67; District D 0.57; Overall 0.41
Keyboarding: District A 0.00; District B 0.00; District C 0.00; District D 0.00; Overall 0.00
Word Processing: District A 0.00; District B 0.00; District C 0.00; District D 0.00; Overall 0.00
Note: Presence was scored as either present (1) or not present (0).

Overall, the most frequently observed component skill practice was spelling instruction, with a rate of presence of 0.59, or 59 percent. Three of the four districts had a rate of presence of 0.50 or higher for spelling instruction. The second most common practice observed was sentence construction, with a presence score of 0.41, followed by capitalization and punctuation instruction, with a presence score of 0.35, and finally handwriting, with a presence score of 0.24. In Districts A, B, and C, explicit spelling instruction was the most common practice. Interestingly, in District B, spelling and capitalization and punctuation instruction were equally likely to occur, with 50 percent presence scores for both component skills. In District C, 75 percent of teacher observations included explicit spelling instruction and 67 percent included sentence construction instruction. In District D, 42 percent of teacher observations contained spelling instruction and 57 percent of teachers provided sentence construction instruction, but 67 percent of observations included capitalization and punctuation instruction.
To contextualize what this practice looked like in teachers' observations, below are specific examples of component skill instruction. In District C, in a first-grade class, a teacher received a composite score of 4.00 for quality of instruction for component skills in handwriting, spelling, and capitalization and punctuation.
This teacher had children engaged in explicit spelling instruction in which she modeled blending and segmenting words to spell them. Children then wrote their own spelling words during a dictation time after the teacher provided explicit instruction in the spelling pattern and modeled words using that pattern on a projected screen. As children wrote the words that were dictated, the teacher reinforced the procedure she had taught children to use, saying, "I see some people tapping it out, nice job." Additionally, the teacher prompted students to write the lowercase letter r facing the right direction, had children double check for lowercase letters, prompted children when she saw a letter on an individual child's whiteboard that should have been lowercase but was not, and then explained when children should use uppercase letters ("in a name"). Finally, the teacher coached children through adding punctuation to a sentence provided by a child in the class. After repeating the sentence aloud, the teacher asked the children if it should have a period or a question mark. When they responded, she affirmed their selection with a rationale.
For a low scoring example of explicit instruction in sentence construction, in a first-grade class in District D, the teacher prompted children to add 'a' or 'the' to their writing prompt response without an explanation, guidance, or support to do so. There was also no modeling of this in the teacher's writing or another child's writing. Therefore, this example scored a 1.00 for providing explicit instruction in sentence construction.
In a Kindergarten classroom in District B, the teacher asked children, "why do I do a capital M and capital B?" (while printing her name). She then asked children to check their name to see if they had a capital at the beginning and told them to clap one time if they did. In the same lesson, she continued on to teach about the exclamation point. She showed it to children and taught them what it does. Later in the lesson she said, "see here? That's a period so we just have to stop… we only have to be excited at exclamation points." Finally, she also taught about the use of a question mark and period, incidentally, when children started a worksheet about the color red. This practice overall scored a 3.00 for quality because the teacher did two of three things: modeled, guided, and/or supported students as they worked.
In a different Kindergarten classroom in District B, the teacher engaged in explicit instruction in handwriting. Children were at their seats with a worksheet for practice, and the teacher provided a cue for a child to start the letter H. The teacher said, "We are going to start at the top line. Start at the top line and then we're gonna pull straight down." This enactment scored a 3.00 for providing a model on the document camera and circulating the room to support. She was observed guiding children through the practice with similar cues, both proactively and as corrective feedback.
Variation of Explicit Instruction in Component Skills by Grade Level. One might theorize that variations in component skills instruction simply reflect the variance in grade level instruction. Some of these skills, such as handwriting, could be more likely to be present and of higher quality in earlier grade levels, given their foundational nature.
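One way to check such a grade-level hypothesis is the one-way ANOVA reported in the next paragraph, which compares mean quality across grade-level groups. The sketch below shows the general form of that comparison; the grade-level score lists are hypothetical placeholders, not the study's observation-level data.

```python
from scipy import stats

# Hypothetical quality scores (1-5) for explicit instruction in component
# skills, grouped by the grade level taught. Illustrative values only.
kindergarten = [3, 3, 2, 3]
first_grade = [1, 2, 3, 3, 4, 2, 3]
second_grade = [1, 2, 2, 3, 1, 2]
third_grade = [1, 1, 2, 1]

# One-way ANOVA: does mean quality differ across grade-level groups?
f_stat, p_value = stats.f_oneway(kindergarten, first_grade, second_grade, third_grade)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```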
There were grade-level differences in the rate and quality of instructional practices; however, based on a one-way ANOVA, none of the differences were statistically significant (see Table 28). It should be noted that, because of the small sample sizes at each grade level, these results should be taken as part of the bigger picture of the data presented.

Table 28
Presence of Explicit Instruction in Component Skills by Teachers' Grade Level
Kindergarten (n=4): Mean Presence 0.75; Mean Quality 3.00 (SD = 0.00); Quality Range 3.00
First Grade (n=23): Mean Presence 0.83; Mean Quality 2.63 (SD = 1.12); Quality Range 1.00-4.00
Second Grade (n=12): Mean Presence 0.83; Mean Quality 1.90 (SD = 0.88); Quality Range 1.00-3.00
Third Grade (n=8): Mean Presence 0.50; Mean Quality 1.00 (SD = 0.00); Quality Range 1.00
All Grades: Mean Presence 0.76; Mean Quality 2.24 (SD = 1.09); Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

Even though these additional analyses indicated no statistically significant difference in means between grade levels, the descriptive results indicated some differences in rate of presence and quality by grade level. Kindergarten teachers had the highest mean quality for the practice, but with a small total number of observations (n=4). The highest mean presence of 83 percent was found in first and second grade, and the highest mean quality outside of kindergarten was found in first grade, with an average quality score of 2.63 (SD = 1.12). The lowest presence and quality for this instructional practice were found in third grade, with only half of the third-grade observations containing explicit instruction on component skills and a mean quality score of 1.00, meaning that teachers did not support, guide, or model but rather briefly mentioned the component skill in passing (e.g., "don't forget to capitalize the proper nouns in your writing"). Overall, the findings related to explicit instruction on component skills in teacher observations indicate that teachers were likely to provide instruction in component skills, and that this instruction was likely to be of higher quality overall if those teachers taught earlier elementary grades, such as K or 1, than if they taught second or third grade.

Comparison of Presence and Quality Between the Curriculum and Teachers' Enacted Practice for Explicit Instruction in Component Skills (Research Question Three)

In this section, a comparison of the presence and quality of explicit instruction in component skills found in curriculum materials and teachers' enacted instruction is presented. Table 29 presents summary information about the rate of presence and the quality of the overall practice for curriculum materials and teachers' enacted instruction as well as scores for individual curricula and teachers in each of the four districts in the study.
Table 29
Explicit Instruction in Component Skills
District A
  Curriculum 1 (n=6): Presence 0.83; Quality Mean 2.60 (SD = 0.89); Quality Mode 3.00; Quality Range 1.00-3.00
  Enacted Instruction with Curriculum 1 (n=18): Presence 0.67; Quality Mean 2.17 (SD = 1.27); Quality Mode 1.00; Quality Range 1.00-4.00
District B
  Curriculum 2 (n=6): Presence 1.00; Quality Mean 2.00 (SD = 1.01); Quality Mode 1.00, 3.00; Quality Range 1.00-3.00
  Enacted Instruction with Curriculum 2 (n=8): Presence 0.75; Quality Mean 1.89 (SD = 1.05); Quality Mode 1.00; Quality Range 1.00-3.00
District C
  Curriculum 3 (n=4): Presence 1.00; Quality Mean 3.00 (SD = 0.82); Quality Mode 3.00; Quality Range 2.00-4.00
  Enacted Instruction with Curriculum 3 (n=12): Presence 0.92; Quality Mean 2.55 (SD = 0.82); Quality Mode 3.00; Quality Range 1.00-4.00
District D
  Curriculum 4 (n=2): Presence 1.00; Quality Mean 3.50 (SD = 0.71); Quality Mode 3.00, 4.00; Quality Range 3.00-4.00
  Enacted Instruction with Curriculum 4 (n=12): Presence 0.71; Quality Mean 2.40 (SD = 1.34); Quality Mode 1.00; Quality Range 1.00-4.00
All Curricula: Presence 0.94; Quality Mean 2.59 (SD = 1.00); Quality Mode 3.00; Quality Range 1.00-4.00
Districts A-D: Presence 0.76; Quality Mean 2.24 (SD = 1.09); Quality Mode 1.00; Quality Range 1.00-4.00
Note: Presence was scored as either present (1) or not present (0). The maximum quality score for the quality of the instructional practice was 5. The minimum quality score for the quality of the instructional practice was 1.

In both the teachers' enacted practice and the curricula, spelling and sentence construction were the most commonly addressed component skills. Another notable parallel is that neither the curricula nor any observed enacted practice contained explicit instruction in word processing or keyboarding. What is particularly interesting when examining the data in Table 29 above, in light of this study's third research question, is where the similarities between districts and their curricular materials occur. For instance, in District B there was lower quality scoring for component skills instruction more broadly. The curriculum used by this district had the lowest presence overall of explicit instruction in component skills and the lowest number of component skills covered, and when explicit instruction in component skills was present, it had low quality scores. Similarly, teachers' enacted instruction in District B had the lowest quality scores of any of the four districts, with not a single score of 4.00 for any observed practice, even before averaging across multiple component skills. While District B did not have the lowest presence, the lower quality of the instructional practice matches the curriculum findings.
In another example of alignment between curriculum and teachers' enacted practice, there was higher presence and quality of practices found in District C. There was alignment between the higher quality found in the Curriculum 3 materials, which had the second highest quality score for instruction in component skills, and the quality of teachers' practice. Teacher observations in District C had the highest quality scores among districts, and the mode score of 3.00 was the same for the teacher observations and the curriculum. Finally, even when curricula did not include instruction on a particular component skill (e.g., handwriting), teachers sometimes engaged students in explicit instruction in that component skill. For instance, in Districts B and D, the curriculum units analyzed did not include any handwriting or spelling instruction, but observations of teachers' instruction did include handwriting and spelling instruction. Moreover, the nature of that component skills instruction was often responsive to what children were doing in a way that the curriculum materials did not account or plan for.
Comparisons of the rates of presence and quality scores for the curricula and teachers' enacted instruction suggest that curricula may influence teachers' inclusion of component skills instruction and that the quality of the curriculum materials may align with teachers' instructional practices. However, these findings also indicate that teachers may provide explicit instruction in component skills of writing without it being included in curriculum materials, sometimes using their own curriculum materials (as in the instructional sequence around the worksheet about the color red described in the preceding section) and/or in response to children's demonstrations of proficiency within these skills.

Summary of Presence and Quality of Instructional Practices

Overall, curricular materials and teachers' enacted practices included many, if not all, instructional practices at least once across the observations of teachers' practice and the lessons analyzed across a unit. The quality of the practices varied across units of curriculum materials within the same curricula, across the curricula compared to one another, and across video observations of the instruction, both within and across districts.

What Did Curricula Recommend Teachers Do Overall?

Each of the writing curriculum units analyzed for this study contained all or most of the effective writing practices, as determined by the K-3 Essentials document. In fact, all curricular units analyzed contained lessons addressing the first three instructional practices: estimated spelling, interactive writing, and daily writing aligned to motivation and engagement (Table 30). Most curricular units included the remaining writing essential instructional practices, including writing process instruction, use of mentor texts, and instruction in six component skills: handwriting, spelling, punctuation and capitalization, sentence construction, keyboarding, and/or word processing.

Table 30
Rates of Presence of Writing Instructional Practices by Curriculum and Overall

              Essential Instructional Practices in Writing
Curriculum    Estimated   Interactive   Daily Time    Writing Process &      Use of        Explicit Instruction
              Spelling    Writing       for Writing   Strategy Instruction   Mentor Text   in Component Skills
Curriculum 1  1.00        1.00          1.00          0.67                   0.83          0.83
Curriculum 2  1.00        1.00          1.00          1.00                   1.00          1.00
Curriculum 3  1.00        1.00          1.00          1.00                   1.00          1.00
Curriculum 4  1.00        1.00          1.00          1.00                   1.00          1.00
Overall       1.00        1.00          1.00          0.89                   0.94          0.94

Note: Presence for practices was coded as 0 or 1. Instructional practices not observed were scored as 0.

The average quality of instructional practices in the curricula can be described generally, across all curriculum units analyzed, as being low (score of 1.00) to average quality (score of 3.00), with the lowest scores in the practices of estimated spelling (average quality score of 1.44, SD = 0.86), interactive writing (average quality score of 2.22, SD = 0.65), and explicit instruction in component skills (average quality score of 2.59, SD = 1.00). While the overall mean scores for all curricula combined highlight that there is room for improvement in how curricula recommend teachers carry out writing instruction, there is some nuance in these scores and variability between curricula. The highest overall mean quality scores were consistently found in Curriculum 4, with a few of the highest individual quality scores found in Curriculum 3.
Table 31
Quality Scores of Writing Instructional Practices by Curriculum and Overall

                     Estimated          Interactive        Daily Time         Writing Process &      Use of             Explicit Instruction
                     Spelling           Writing            for Writing        Strategy Instruction   Mentor Text        in Component Skills
Curriculum 1 (n=6)   1.00 (SD = 0.00)   2.33 (SD = 0.52)   2.00 (SD = 0.00)   2.50 (SD = 0.57)       2.80 (SD = 0.45)   2.60 (SD = 0.89)
Curriculum 2 (n=6)   1.00 (SD = 0.00)   1.83 (SD = 0.41)   2.67 (SD = 0.52)   3.00 (SD = 0.00)       3.00 (SD = 0.00)   2.00 (SD = 1.01)
Curriculum 3 (n=4)   2.00 (SD = 1.16)   2.50 (SD = 1.00)   3.50 (SD = 0.58)   4.00 (SD = 0.82)       4.00 (SD = 0.82)   3.00 (SD = 0.82)
Curriculum 4 (n=2)   3.00 (SD = 0.00)   2.50 (SD = 0.71)   4.50 (SD = 0.71)   5.00 (SD = 0.00)       3.00 (SD = 0.00)   3.50 (SD = 0.71)
All Curricula        1.44 (SD = 0.86)   2.22 (SD = 0.65)   2.83 (SD = 0.92)   3.38 (SD = 0.96)       3.18 (SD = 0.64)   2.59 (SD = 1.00)

Note: The maximum quality score for these instructional practices was 5. Instructional practices not observed were scored as 0. 0s were excluded from this analysis.

What Did Teachers Do?

Overall, teachers' enacted instruction included some, but not most, recommended instructional practices (Table 32). Teachers in District C consistently had the highest presence scores for instructional practices observed. Teachers in District D had the second most frequent presence of instructional practices across all instructional practices measured.

Table 32
Presence Rates of All Enacted Writing Instructional Practices by District and Overall

            Essential Instructional Practices in Writing
District    Estimated   Interactive   Daily Time    Writing Process &      Use of        Explicit Instruction
            Spelling    Writing       for Writing   Strategy Instruction   Mentor Text   in Component Skills
District A  0.56        0.22          0.94          0.38                   0.17          0.67
District B  0.75        0.17          1.00          0.50                   0.17          0.75
District C  1.00        0.58          1.00          0.92                   0.25          0.92
District D  0.86        0.57          1.00          0.57                   0.00          0.71
Overall     0.76        0.35          0.98          0.57                   0.15          0.76

Note: Presence for practices was coded as 0 or 1. Instructional practices not observed were scored as 0.

While practices may have been more present in teachers' enacted instruction in some districts than others, the quality of the practice was not necessarily higher in districts with higher presence. The mean quality of the practices is shown in Table 33.

Table 33
Mean Quality Scores of Observed Writing Instructional Practices by District and Overall

                    Essential Instructional Practice for Essential 6 (Writing)
District            Estimated          Interactive        Daily Time         Writing Process &      Use of             Explicit Instruction
                    Spelling           Writing            for Writing        Strategy Instruction   Mentor Text        in Component Skills
District A (n=18)   2.10 (SD = 0.99)   2.50 (SD = 0.58)   2.00 (SD = 1.00)   2.43 (SD = 1.34)       2.00 (SD = 1.73)   2.17 (SD = 1.27)
District B (n=12)   1.89 (SD = 1.30)   2.50 (SD = 0.71)   1.83 (SD = 0.83)   1.83 (SD = 0.41)       2.50 (SD = 0.71)   1.89 (SD = 1.05)
District C (n=12)   2.00 (SD = 1.21)   2.14 (SD = 0.90)   2.42 (SD = 1.08)   2.09 (SD = 1.04)       3.00 (SD = 1.00)   2.55 (SD = 0.82)
District D (n=7)    2.67 (SD = 1.86)   3.25 (SD = 0.50)   2.43 (SD = 0.79)   1.75 (SD = 0.96)       NA*                2.40 (SD = 1.34)
All Districts       2.11 (SD = 1.27)   2.53 (SD = 0.80)   2.13 (SD = 0.96)   2.07 (SD = 1.02)       2.50 (SD = 1.20)   2.24 (SD = 1.09)

Note: The maximum quality score for these instructional practices was 5. Instructional practices not observed were scored as 0. 0s were excluded from this analysis.

The mean quality for teachers' enacted instruction fell between 2.00 and 2.50 for most practices.
However, there were districts with higher-than-average quality scores for enacted instruction across all practices. Generally, District D had the highest scores, with three of the highest ratings overall for the instructional practices of meaningful estimated spelling, interactive writing, and daily time spent on writing connected to children's motivation and engagement. District A had the highest quality score for writing process and strategy instruction, but the lowest presence. And District C had the highest quality scores for mentor text use and explicit instruction on component writing skills. Given District C's high rate of presence in both areas, it could be argued that District C had the strongest instruction in both areas.

How Did Curriculum and Enactment Compare?

When comparing overall presence and quality, the curriculum materials had higher presence scores and, on average, higher quality scores for most practices than the teachers' enacted practice. This was true for interactive writing, writing process and strategy instruction, use of a mentor text, and explicit instruction in component skills. Teachers' enacted instruction on average had higher quality scores than the curriculum units analyzed for estimated spelling and daily time for writing, though presence for both practices was higher in curriculum materials.

Other Findings

While the findings above answer the original research questions of this dissertation study, other findings of interest were uncovered during the study's duration. These findings related to the support found for linguistically diverse learners in the curricular materials analyzed and to how teachers used, or did not use, the curricular materials.

Presence of Supports for Linguistically Diverse Learners

Unrelated to the presence and quality findings accounted for with the coding protocol, but important when thinking about engagement of Michigan's increasingly diverse learners, Curriculum 1 and Curriculum 3 had embedded explicit language support for ELLs (English Language Learners) in each lesson, while Curricula 2 and 4 did not. This information was presented as a sidebar that ran parallel to the recommended teacher text for the whole-class instruction. Curriculum 1 also had explicit support for diverse learners via Universal Design for Learning (UDL) components in the front matter of each unit and in each lesson. These supports were written alongside the recommended teacher text for the whole-class instruction within the lesson plan (the right-hand column of a two-column plan). Curriculum 4 units also included UDL components in the unit overview materials, but this was less apparent in the daily lesson plans than in the previously mentioned curriculum materials analyzed for this study. Curriculum 2 did not have accommodations for language or culture at the lesson level or in the front matter of the sample lessons for the genres examined.

Following the Curriculum

Teachers in District A and District C were both using a new curriculum that the district had started using within the last two years, which was also comprehensive and knowledge building. Notably, the two districts with the new, knowledge-building curricula had the most fidelity in terms of teaching from the curriculum.
This was not something this study specifically examined, since the focus of this study was how teachers take up instructional practices found in the curriculum rather than fidelity to specific lessons, but it was evident in videos that teachers in District A and District C were following set lessons from Curriculum 1 and Curriculum 3. In these two districts, after coding the curriculum units, it was most clear out of all the videos that teachers were following the lesson plans provided by the curricula. Sometimes they were even holding the plan or text in their hands while teaching. In District B and District D, teachers were using an older, process-based curriculum. In both districts, after coding the video observations and the curricula, it was evident that many teachers in these districts were not following a set lesson plan from the curriculum materials, but rather had created their own plans or pulled plans from another resource (e.g., Mystery Science, Teachers Pay Teachers).

Summary of Findings

This study's purpose was to identify what writing instructional practices were present in the district-provided curriculum materials and teachers' enacted practice and to examine what parallels might be drawn between the two. To do so, each piece was analyzed separately, both quantitatively and qualitatively. Then those quantitative and qualitative results were compared for each evidence-based writing instructional practice. This comparison was done because, as seminal works in curriculum and instruction have established, the written curriculum and the enacted curriculum are not always the same (e.g., Stein et al., 2007; Troia et al., 2011). This study also provided a glimpse into classroom instruction during the pandemic-impacted school year and analyzed instruction across multiple modalities. In line with other recent work examining early literacy instruction during the COVID-19 pandemic (e.g., Wright et al., accepted), this study finds that teachers' enacted practices varied but aligned with previous research findings.

When examining individual differences in classroom instruction, the quality varied. Further, there were not district-based patterns in the differences found for the rate of presence or quality of writing instruction children were receiving within teachers' enacted practice at the classroom level. Teachers' enacted practices were variable within districts and sometimes within their own classrooms from one observation to another. When examining the curriculum, the quality varied between the curricula and the curriculum materials. Notably, though, the scores for the curricula were more stable and had less range, including for the presence of instructional practices. The presence of instructional practices in the curriculum was somewhat consistent within curricula and across curricula, with the exception of Curriculum 1, which had variation based on the sub-unit analyzed.

Perhaps the most interesting part of this study is that, when comparing the curriculum quality scores with the quality of the enacted practice, there were sometimes similarities in the quality of the practice between the enactment and the curriculum or there were similarities in the presence of the enacted practice and the curriculum. For instance, when looking at explicit instruction in component skills, the curriculum that had the highest presence for spelling instruction (daily) also had higher levels of teacher enactment.
Conversely, there were also instructional practices for which there was a strong presence in the curriculum materials but a very weak presence in enacted practice. For example, when examining use of mentor texts, all curriculum materials contained the use of mentor texts, but only eight observations contained this practice. This matters because when educators and policymakers look to curriculum as a vehicle for improving teacher practice, they may assume the influence of materials is significant or that they can adopt curricula that are teacher-proof. What is clear from this study is that the relationship between the curriculum and teachers' enacted practice is complex and multidimensional, and that, as Stein and colleagues (2007) have explained, the written curriculum is transformed by a variety of circumstances into the enacted curriculum. That is, sometimes teachers follow curriculum materials and sometimes they do not. Since there is not a clear pattern, we cannot say curriculum and teachers' instruction are the same thing.

CHAPTER 5: DISCUSSION

The purpose of this dissertation study was to identify what writing instructional practices were present in district-provided curriculum materials and teachers' enacted practices and to describe what parallels might be drawn between the two. To that end, the following questions grounded this study's design and methods:
1. How can recommended writing instruction be described across four writing curricula?
   a. What effective writing instructional practices were contained in these curricula?
   b. How did recommended writing instructional practices compare across the curricula?
2. How can writing instruction be described across K-3 classrooms in Michigan during the COVID-19 pandemic?
   a. What effective writing practices were K-3 teachers using?
   b. How did enacted writing instructional practices compare across districts?
3. How did teachers' enacted writing instruction compare to recommended instruction in district-provided curriculum materials during the COVID-19 pandemic?

To draw conclusions from the data gathered for this mixed methods study, multiple analytic strategies were used. The bulk of the analysis in this study is derived from a content analysis of 18 units of curricula and 2300 minutes from 49 separate observations of teachers' classroom instruction during the COVID-19 pandemic-impacted school year. Additional data sources include teachers' background information and post-observation surveys, coaching logs, and coaches' interviews. The units of curricula and observations of teachers' enacted practice were both coded using the same protocol (DeJulio et al., 2020; Krippendorf, 2004; Neuendorf, 2002). The coding protocol captured evidence-based practices in writing instruction, was derived from practice guides (one national and one local to Michigan), and assigned quality scores to each practice present. Simultaneously, qualitative notes on the instructional practices were also recorded. The qualitative notes were particularly important because of the volume of the data and the desire to examine the relationship between the curriculum and the enacted practice. These notes included items such as descriptions of the writing task children were doing, language the teacher used that would influence the quality scoring, or analysis memos to capture thinking at the time.
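As a concrete illustration of how records produced by a coding protocol like the one just described might be organized for district-level comparison, the following is a minimal, hypothetical sketch in Python using pandas. The districts, practices, and scores are invented for illustration; this is not the study's analysis code.

```python
# Minimal, hypothetical sketch of turning coded observation records into
# district-by-practice matrices for cross-case comparison; invented data only.
import pandas as pd

records = pd.DataFrame([
    # one row per (observation, practice); quality 0 means "not present"
    {"district": "A", "practice": "Estimated Spelling", "quality": 3},
    {"district": "A", "practice": "Interactive Writing", "quality": 0},
    {"district": "B", "practice": "Estimated Spelling", "quality": 1},
    {"district": "B", "practice": "Interactive Writing", "quality": 2},
    {"district": "C", "practice": "Estimated Spelling", "quality": 4},
    {"district": "C", "practice": "Interactive Writing", "quality": 3},
])

# Presence rate per district and practice (share of observations scored > 0).
records["present"] = (records["quality"] > 0).astype(int)
presence_matrix = records.pivot_table(
    index="district", columns="practice", values="present", aggfunc="mean"
)

# Mean quality per district and practice, excluding 0s (practice not observed).
quality_matrix = records[records["quality"] > 0].pivot_table(
    index="district", columns="practice", values="quality", aggfunc="mean"
)

print(presence_matrix)
print(quality_matrix)
```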
After the initial coding work was done, descriptive statistics were calculated to examine cases within and across districts, and qualitative analysis methods were used to examine the analytic memos created during the coding process and to create analytic matrices after the coding was complete. Once these matrices were created, some themes and patterns became noticeable, particularly when thinking about each of the four districts as unique cases (Miles et al., 2020; Yin, 2018). The results from these analyses were compared and findings were reported in the previous chapter.

To begin, most practices were present in the curriculum units analyzed at one or more points throughout a unit. There was variation in quality from one curriculum to another, but how curriculum quality varied was different depending on the writing skill being examined (see Tables 30 and 31 above). For instance, Curriculum 3 and Curriculum 4 had the most consistent high-quality scores, but which curricula had better quality instantiations of instructional practices as written varied depending on the skill. Curricula 1 and 2 had the most consistently low-scoring instructional practices as written. Some practices had lower averages across all curricula than others, and the tighter range of scores for these practices, such as interactive writing, indicates that curricula write some instructional practices at a higher quality than others. The range of scores for practices within curricula held fairly constant within each curriculum across units, and there was some consistency in scoring when considering curricula as individual cases.

Conversely, there was variation in the rates of presence for a practice in teachers' enacted instruction and a wider range in quality between teachers' enacted instructional practices, as indicated by the larger standard deviations in quality scores and the varied rates of presence (see Tables 32 and 33 in the Results chapter above). For instance, some practices, such as daily time for writing aligned to children's motivation and engagement, were present in almost every classroom. Opportunities for estimated spelling and explicit instruction in component skills were similarly present in three-quarters of classroom observations. The least observed instructional practices were use of a mentor text and interactive writing.

When considering teachers within a district as cases, there were some similarities in presence of practices, but not necessarily consistency in quality of the instructional practice. For example, teachers in District A were the least likely to use interactive writing or writing process and strategy instruction, and when they did, scores were not consistent. Similarly, teachers in District D were very likely to teach estimated spelling (rate of presence = 86%), but the quality of the practice varied, as indicated by the mean score of 2.67 with a standard deviation of 1.86.

When comparing the written curriculum quality scores with the quality of the enacted practice to answer the third question of this study, sometimes there were similarities in the presence and/or the quality of the instructional practice between teachers' enactment and the district's curriculum. For instance, Districts A and B used the curricula with the lowest mean quality scores and also consistently had the lowest scores in teachers' enacted instruction.
However, there were also times when there was a discrepancy between the presence and/or the quality of the instructional practice in teachers' enactment and the district's curriculum, as is evident in the low use of mentor texts to teach writing in teachers' enacted practice. To better understand what the results of this study mean for research, policy, and practice, below I discuss how findings for each research question add to or extend the existing literature base. To close, I discuss what implications these findings have for the larger research, policy, and practitioner communities.

Curriculum Materials

Teachers, districts, and policymakers need to be informed consumers, particularly when considering the need for improvement in writing achievement for elementary-aged children nationally and locally (NCES, 2019). While studies of curriculum materials have found variability in the quality of curriculum resources, there have been fewer systematic studies of the content in elementary ELA curricula, particularly ones that focus specifically on writing. This study adds to the small literature base on early writing curricula in ELA and confirms and extends some of the findings from those studies, including some that may be of interest to curriculum developers, researchers, and practitioners.

First, curriculum units used a mixture of process- and skills-based writing instruction, matching teacher approaches found in survey studies (Cutler & Graham, 2008; Graham et al., 2008b) and curriculum studies (Gabas et al., 2023), and a practice that is shown to have positive effects on students' writing achievement (Graham et al., 2012a). Curriculum 2 skewed heavily toward process instruction. The lessons had zero references to children's handwriting or spelling aside from mentioning spelling and grammatical components in a unit rubric at the end of the unit, but did provide some guidance on sentence construction. Curriculum 4 was heavily process focused but regularly provided strategies for spelling, sentence construction, capitalization, and punctuation. This curriculum also contained lessons that encouraged both conventional and estimated spelling, as Gabas and colleagues found in one of the three curricula they studied. Curriculum 3 taught process- and skills-based writing and was the only curriculum teachers indicated using that included grammar and/or spelling lessons as separate times in the lessons. Aligned with the findings of Gabas and colleagues (2023) on kindergarten writing curricula, this work was not done in connection with a child's own writing. Rather, children were doing spelling work and grammar work in isolation from genre-based writing or composing. Since the goal of component skills work is for children to translate those skills to their own writing, knowing that these skills are taught in isolation from children's authentic work is important.

Second, curricula also approached moving children through the writing process in varied ways. Curricula 2 and 3 appeared to systematically move children through their writing across a unit, from drafting and planning to completion of a single piece. This was also the case for Curriculum 1 in the sub-unit analyzed that focused on writing. Curriculum 4 was the only curriculum that moved children through the writing process in a freer way, likely because of the writing workshop model, which allowed for more autonomy and freedom for children to continue working on pieces or begin new ones.
Since research confirms that teaching children to use the writing process is an important instructional practice (e.g., Graham et al., 2012a), noting whether and how curricula do this is important. Third, all units analyzed, except for four of the six Curriculum 1 sub-units, included an opportunity for children to write daily and complete a single piece of writing. However, how they did so was slightly different. This finding is similar to what Gabas and colleagues (2023) found in their study of curriculum materials, which was that curricula approached moving children through a piece of writing somewhat differently. In this study, two out of four curricula moved children through their own piece of writing regularly and one did so in the sub-units focused on writing, meaning that more curricula in this study approached moving children through a piece of writing intentionally than in their study. This diverges from the findings in Gabas and colleagues (2023) in that there were more opportunities in the curricula examined here for children to write their own compositions. Part of this variation might be a result of the fact that in this study only two of the sixteen units analyzed were for kindergarten students. In the two units of Curriculum 1 analyzed for kindergarten, there was less intentionality around moving children through specific stages of the writing process (the units focused more on journal entries) than in the other grades of Curriculum 1 analyzed (2nd and 3rd). However, since Curricula 1-3 had a tighter composing sequence, where the teacher had small bursts of instruction in composition and then children completed each step, it seems possible that when children in classrooms with these curricula move to writing on their own, without the tight scaffolding and highly directive instruction, they may have some difficulty. This extends findings in previous work around the writing instructional sequences curricula provide.

In addition to providing writing process and strategy instruction, all four curricular materials contained meaningful writing tasks, which in this study means children composed texts rather than just completing worksheets or copying down what a teacher wrote. Three out of four curricula included daily opportunities for children to write independently. Given that this is a recommended instructional practice in the practice guides (Graham et al., 2012b; MAISA-GELN ELTF, 2016), it seems important to note its presence in the curricula.

When considering the instructional practices of using interactive writing and estimated spelling, the curriculum materials generally included high rates of presence but low quality. To compose meaningful messages, children need instruction in letter-sound correspondence (not reported on here) and opportunities to apply that knowledge to their own writing using estimated spelling (Cabell et al., 2013; MAISA GELN Early Literacy Task Force, 2016). Similarly, using interactive and shared writing is one way that teachers can support children's concepts of print, phonemic awareness, and uptake of genre and text features (Craig, 2003; MAISA GELN Early Literacy Task Force, 2016). While these opportunities were present in the curricula, because the quality of the written curriculum was generally low, it seems likely teachers would need to supplement this with outside sources (knowledge from teacher preparation coursework, professional development, etc.), or curriculum developers need to include more explicit instruction and teacher language around the uptake.
Further, when considering explicit instruction in component skills, some of the curricula had lessons that provided explicit instruction in component skills in a systematic way, via spelling and grammar lessons, while others did not, and teachers would need to use supplemental resources to teach this content. This is especially important to note because previous studies show that teachers often do supplement their curriculum (Wright et al., 2022) and that what teachers supplement their curricula with is not always high quality (Polikoff & Dean, 2019).

Finally, curriculum materials did not include much support for diverse learners when it came to writing instruction. There were some generalized supports for children learning an additional language and UDL supports for diverse learners noted in two curricula (1 and 3), but explicit supports for writing tasks for these students were not present. Given that children with disabilities have more difficulty with writing tasks (Troia, 2006), it seems important to know what supports, if any, are provided by these whole-class curricula. While this study examined K-3 curriculum and not preschool curriculum, these findings are in line with the lack of support for learners with exceptional needs or nonnative speakers found by Gerde and colleagues (2019) in five Head Start curricula. Since teachers make few adaptations in their own instruction (Troia & Graham, 2016), educative materials within their curricula might be one way to help improve this practice.

These findings suggest three things. First, there may be variation in how curricula teach the writing process depending on the age of the children and depending on the specific curriculum used. As Gabas and colleagues (2023) note, there are not specific recommendations for the emphasis teachers should place on parts of the writing process depending on children's ages. Understanding what emphasis the curricula have and how they teach the writing process seems important for practitioners to ensure they are delivering instruction on all parts of the writing process to children. Second, since not all curricula included opportunities for children to compose independent of their teachers' tight instructional sequence or to practice using component skills in their own written pieces with intentional and explicit instruction, teachers may need to provide more time for children to practice the application of the skills taught in longer sequences and in their own meaningful written pieces. Last, but maybe most importantly, since curricula are not always covering specific writing instructional skills at all, and when they do, it is at varied quality levels, it seems important for teachers and teacher educators to help practitioners build their professional judgement and knowledge around evidence-based writing instructional practices.

Teachers' Enacted Writing Instruction

Since this study is grounded in the conceptual framework of Stein and colleagues (2007) around the difference between enacted instruction and the written curriculum, what teachers actually did in observations is important to discuss.
Additionally, since this is one of the first studies that shares results on observations of early literacy instruction during the pandemic-impacted school year of 2020-2021, connecting the instructional practices of teachers in this study to the findings from other observational and survey studies can give insight into any notable differences in enacted practices when teaching writing.

Teachers' enacted writing instruction showed wide variability both within school districts and across districts. This is evident across practices, with only one practice taking place in almost all classrooms: daily time for writing instruction with attention to motivation and engagement. As noted in the literature review, both survey and observational studies of early elementary teachers and classrooms have noted wide variability in the amount of time dedicated to writing instruction, from almost no time spent on writing each week to upwards of four or five hours (e.g., Brindle et al., 2016; Cutler & Graham, 2008; Hsiang et al., 2020). The same was true in this study. While students had time to write in some way in 48 of the 49 classroom observations, there was not always daily time dedicated to writing instruction; sometimes children were just completing assigned writing. Some writing in classrooms was structured and clearly part of writing to compose larger pieces while working through parts of the writing process (e.g., writing and revising a table of contents). Some writing occurred as a part of spelling and phonics lessons, and that was the only writing present for that observation (e.g., working on short vowel patterns). Some writing time was in response to texts. Finally, some of the daily time for writing occurred outside of an ELA block of instruction and was part of a content area lesson (e.g., writing about plants in a kindergarten classroom). This variation in approaches to and implementation of writing instruction across teachers is in line with elementary and secondary teachers' self-reports of writing instruction (Cutler & Graham, 2008; Graham et al., 2023; Guo et al., 2022) and observational studies in early elementary classrooms showing variation in the average amount of time teachers spent teaching writing and their instructional approaches (Coker et al., 2016; Guo et al., 2023; Puranik et al., 2014).

In this study, the amount of time students spent on writing tasks also varied. This time ranged from no time at all, to a few minutes spent filling in a sticky note with vocabulary words they were flagging in their reading lesson, to spending a half hour writing an informational text about an animal or topic they were interested in. This aligns with the varied daily time for children to write observed by Puranik et al. (2014) and Coker et al. (2016), as well as the self-reports from Guo et al. (2021), who surveyed teachers in the US, and Malpique et al. (2017), who surveyed teachers in Australia. Although instructional time spent exclusively on writing could not be totaled for all observations in this study because of how coders marked time for practices, approximately half could be. Of those that could be totaled, there was wide variation in total time, ranging from around two and a half minutes to almost seventy minutes (counting teacher and student writing time).
In fact, when instructional time was totaled across a day, few of the 49 classroom observations would meet the daily time recommendation from the IES Practice Guide (Graham et al., 2012a) of 30 minutes per day in kindergarten and 60 minutes or more in grades 1 and above. In this study, only two classrooms in grades 1-3 had a total of 60 minutes or more, and no kindergarten class had thirty minutes a day to write. That means that only two classes in this study would meet the IES recommendations (Graham et al., 2012b). One reason for this may align with the findings from the five-teacher case study conducted by Korth et al. (2016): teachers may not have the time in their day to teach writing the way that they would like or the way they know they should.

In addition to aligning with previous research findings about time spent on writing across a school day, this study also found that teachers' use of evidence-based practices varied in both its presence and the quality of its implementation when teachers did use a practice. This is evident across all instructional practices measured in this study, with some practices having higher implementation rates than others, including daily time for writing and writing process and strategy instruction. Teachers' self-reports indicate they use evidence-based practices, but do so with variable frequency (Brindle et al., 2016; Cutler & Graham, 2008). This study similarly found that some teachers used instructional practices in their observations of instruction while others did not use them at all. Although these observations only occurred twice, the fact that in some classrooms certain instructional practices were present and in others they were not aligns with the variability in these survey self-reports and with sociocultural perspectives on writing instruction (Graham, 2018). Further validating that these were normal, not unique, days of instruction, and therefore can be compared to earlier studies, teachers self-selected what day to record the videos they sent for our study. So, they knew that we were looking for a day that was representative of their general teaching practice.

While time spent on writing instruction and the overall use of evidence-based practices varied, so did teachers' component skills instruction for things such as spelling, handwriting, and sentence construction. Overall, teachers spent less time on handwriting instruction than surveys of elementary teachers' self-reported practices have indicated (Graham et al., 2008a). In fact, handwriting was the least taught component skill, with only 24% of classroom observations containing handwriting instruction. Puranik and colleagues (2014) found that handwriting instruction, while present, also occurred for only small amounts of time. Spelling instruction, something teachers' self-reports indicated was also important and part of their practice (Graham et al., 2008b), was the most common component skill taught, with 59% of observations including spelling instruction and no observations including word processing or keyboarding. This aligns with the findings from Coker and colleagues (2016), who observed that spelling was the most common writing component skill teachers taught and that keyboarding was not observed a single time.
Teachers in this study were also more likely to teach component skills than they were to teach the writing process and strategies, as indicated by the presence scores of 76% for component skills instruction compared to 57% for process and strategy instruction. This aligns with findings from previous survey and observational studies (Coker et al., 2016; Cutler & Graham, 2008; Guo et al., 2022; Malpique, 2017), which found that teachers' practice varied in what they spent time teaching children at the individual teacher level (Puranik et al., 2014) and that there was a heavier emphasis on teaching component skills than on providing instruction and time for extended writing (Guo et al., 2022). In fact, the frequency of practice use is so varied that some teachers self-reported never teaching spelling, handwriting, grammar, planning, and revision, while others reported teaching these topics for extensive amounts of time each week (e.g., 400 or 600 minutes; Guo et al., 2022). Both findings were confirmed by this study, where some teachers taught component skills such as spelling and others did not teach any component skills. These findings around component skills teaching could be a result of teachers, who have limited time in their day, prioritizing teaching what students appear to need in the moment or overall, and therefore making instructional decisions around what component skills to teach and when, rather than teaching each skill daily or even weekly as survey studies have asked. While empirical studies have supported the idea that teachers' writing instruction is responsible for stronger writing outcomes (e.g., Hall et al., 2015), a recent study using hierarchical linear modeling did find an effect for teacher instruction on some groups of students and posited that responsive instruction may be more supportive of better student writing outcomes than writing instruction that is generalized for all students (Kim et al., 2013; Wang & Troia, 2023). Perhaps teachers' attempts to be responsive are a mitigating factor in why teachers' enacted instruction contained certain practices and not others, despite self-reports of using practices in survey studies.

Since teachers' enacted practice in this study varies from teacher to teacher and from school to school, and the range of quality of the practice is so varied, it seems prudent to spend time building teacher knowledge and skills around effective writing instructional practices. Since we know that teacher instruction can influence better student writing outcomes across measures (e.g., Cabell et al., 2013; Kim et al., 2013; Wang & Troia, 2023), and teachers' self-efficacy and knowledge about writing instruction are correlated with the types and amount of writing instruction they provide (Hsiang et al., 2020), perhaps by building teacher knowledge about evidence-based instructional practices (e.g., Desimone & Pak, 2017), we could improve student writing achievement. Based on the conceptual framework of Stein et al. (2007) and the empirical work of Spear-Swerling and Zibulsky (2014) and Taylor and colleagues (2003), teachers' improved knowledge of instructional practices could be one way to improve writing instruction.
The Relationship Between Curriculum Materials and Teachers' Enacted Instruction

This study hypothesized that the transformation of the written curriculum into the enacted curriculum may have occurred because of the varied contexts of curriculum implementation within school districts and the varied quality of the curriculum materials themselves (Stein et al., 2007). There are some interesting findings here about the potential influence of the curriculum materials on teachers' instruction.

Perhaps the most interesting finding in this study, which confirms findings from other studies conducted on curriculum implementation (McCarthey & Woodard, 2018; Puranik et al., 2014), is that teachers' enacted instruction within the same district, using the same curriculum materials, often had as much or more variation in quality as across districts for some instructional practices. For instance, Troia and colleagues (2011) also found variation in teachers' uptake of a writing workshop curriculum, with some teachers reporting use of a practice consistently (e.g., estimated spelling) and others reporting they never used that practice. McCarthey and Woodard (2018) also found variation among teachers in the same district and used a case study to explore the reasons for the uptake of, or lack of adherence to, the curriculum.

While there was some evidence of the influence of writing curriculum on teachers' enacted practice, there was not always evidence of alignment between the curriculum and teachers' enacted practices. In fact, sometimes an instructional practice that was abundant in a curriculum never appeared in a teacher's practice. For instance, in Curriculum 4, there was consistent use of mentor texts via trade books. However, teachers using that curriculum never actually used a mentor text, not even a model of their own writing. In fact, most curricula included mentor texts via text models written by the teacher or other children, and yet this was the least enacted instructional practice. As highlighted in case studies examining teacher uptake of curricular resources (e.g., McCarthey & Woodard, 2018; Waldron, 2014; Yoon, 2013), teachers adapt resources for various reasons because writing instruction takes place within a specific sociocultural context (Graham, 2019).

Additionally, while some teachers' observation scores indicated that they enacted lower quality instruction than the curriculum materials, other observations of enacted instruction had higher quality scores than the curriculum materials alone would account for. For example, estimated spelling instruction was very low quality when it appeared in Curricula 1 and 2, but teachers in these districts enacted this practice with quality scores that were almost one point higher. This difference in enacted instruction scores, sometimes for the better, is again indicative of the complex relationship between curricular materials and teachers (McCarthey & Woodard, 2018; Remillard, 2005; Valencia et al., 2006). It is not enough to just have materials or opportunities; teachers need to have coaching and interaction with the materials (Traga Philippakos et al., 2023), as teachers' orientations toward the curriculum, professional communities, and classroom structures and norms may be influenced by that coaching (Stein et al., 2007; Vanderburg & Stephens, 2010). While there was often variation within a district, there are some trends in enactment that make the case that there is some influence of the curriculum materials on teachers' instruction.
For example, District C, using Curriculum 3, had the highest percentage of observations with component skills instruction. Curricula 2-4 all had 100% presence for component skills instruction in the units analyzed; however, District C was the only district to have similarly high rates of implementation. District C was also the only district that had an in-building coach and a brand-new curriculum, and part of the coaching work done with teachers included co-teaching or model teaching once a week. Thus, when considering the case of a district like District C, it seems possible that some transformations (Stein et al., 2007) are the result of factors including coaching as a professional development support (Desimone & Pak, 2017).

Implications for Teacher Preparation and Practice

While the work of other researchers cited in this paper certainly provided a substantive grounding for this research, as outlined in the second chapter, this study extends the scope of the available empirical literature in three ways. First, this study examined observational data from a range of classroom teachers across all grades K-3. This builds on the existing literature base of survey and observational data by providing a view of the writing instruction taking place in early elementary classrooms across multiple early elementary grade levels. The analysis of writing practices in this study speaks directly to the observational studies conducted in kindergarten and first grade (Coker et al., 2016; Guo et al., 2023; Puranik et al., 2014), which reported on alignment of teacher instruction with evidence-based instructional practices (among other things). This study's findings, which show that overall teachers' implementation of instructional practices and the quality of that implementation varied within schools, grade levels, and districts, confirm findings from these studies and extend the grade levels examined. This study can help teacher educators who prepare teachers for the preschool to grade three certification in Michigan to better understand the practices teachers implement and those they do not, to inform the work they do with pre- and in-service teachers.

Second, this study examined the writing opportunities present in elementary ELA curriculum materials, two of which are marketed as knowledge building (see the literature review for an explanation of these curricula), which has only been done in a couple of studies to date. There has been one study of kindergarten instructional materials (Gabas et al., 2022) and one of writing in preschool materials (Gerde et al., 2019). A handful of case studies of implementation of knowledge-building curricula have been done (e.g., Ikpeze, 2013), and an examination of the reading components of one of these curricula has also been recently conducted (see Cabell & Hwang, 2020). However, since knowledge-building ELA curricula are newer and mark an expansion of the content that basal series attempt to cover, they have not been studied extensively, and no studies to date have focused on the writing component of these curricula. Additionally, although these types of curricula are not yet in use in a large percentage of Michigan schools and classrooms (Wright et al., 2022), their use is on the rise, as evidenced by the adoption of the knowledge-building curriculum Expeditionary Learning by the largest district in the state (Chambers, 2018).
This study also examined two standalone writing curricula, one of which is among the most used writing-specific curricula in the state (Wright et al., 2022). While this type of curriculum is not new, the systematic study of these materials for early elementary grades is. Findings around what practices are contained and, more importantly, the often low quality of the practices as they are written in the curriculum, provide insight for policymakers who believe curriculum materials are a powerful reform lever. They also provide insight for districts adopting the curricula who may believe that if a practice is present, that is enough to ensure effective teaching. Per the findings of this study, that is simply not the case.

Third, this study provides insight into classrooms during the pandemic-impacted school year. Since teachers sent videos of their instruction to our larger project team, this study has unique insight into the evidence-based instructional practices teachers used to teach writing and the quality of their implementation of these practices. This study found that in early elementary classrooms, writing instruction is variable between classrooms and that there is not enough time spent on writing instruction. These findings match the findings of pre-pandemic surveys and observational studies (e.g., Cutler & Graham, 2008; Coker et al., 2016; Puranik et al., 2014). Additionally, this study found high-quality and low-quality examples of enacted instructional practices across teachers within different grade levels, districts, and, unique and most important to the COVID-19 instructional context, modalities of teaching (Wright & Bruner, accepted).

Finally, this study expands upon the theoretical and empirical work on the complex interaction between curricular materials and teachers' enacted instruction (e.g., McCarthey & Woodard, 2018; Troia et al., 2011; Yoon, 2013; Waldron, 2014). This study was conducted in the hope that researchers, curriculum writers, teachers, and teacher educators can better understand the writing opportunities different curriculum materials and different teachers afford children in K-3 classrooms, as well as any similarities and differences between the curricular materials and teacher instruction. Additionally, because teachers have to enact curricula, but the comprehensive curricula often contained so much to do within a day and little to no guidance for special populations of students, it seems important to include supports for teacher pedagogical and content knowledge (e.g., Davis & Krajcik, 2005), in particular for students with exceptional needs (Gerde et al., 2019; Wang & Troia, 2023).

Limitations

This study has limitations that are a result of the study design and limitations that are a result of the analyses.

Design Limitations

To begin, while it was an affordance, COVID-19 was also a limitation. COVID-19 disruptions may have limited the amount and/or types of instruction we viewed, such as limiting small group instruction. One district was engaged only in virtual instruction for a portion of this study and may have taught writing differently as a result of not being in person, and some classes were half their usual size and had children alternate their days of attendance.
Some districts were encouraging teachers to keep their distance from children, which may have influenced the presence of writing support for children who had special needs (e.g., hand-over-hand support for handwriting, small group writing lessons, etc.). Of note, the modality of instruction did not seem to be a mitigating factor for the quality of instruction and presence of instructional practices for the study overall (Wright & Bruner, accepted), and the findings of this study around the practices do not show lower presence or quality of implementation related to a district's modality.

Another potential limitation is that two of the districts (A and C) had new curricula they were using for part, or all, of their literacy instruction, which may have impacted teacher comfort with the content and/or their ability to implement certain instructional practices. This also may have contributed to teacher uptake of the curriculum (either more or less uptake). District curricular materials did not contain the same number of lessons for analysis, nor were there equal numbers of units analyzed. For instance, Curriculum 4 only had two units analyzed because only first grade teachers participated in the larger Read by Grade Three study for that district; however, each unit had 20 or more lessons. On the other end of the spectrum, Curriculum 2 had six units analyzed, but only 6-7 total lessons for each. However, since the curricula ultimately totaled roughly the same number of lessons analyzed, and since frequency of practices was taken into account when marking quality, this seems less concerning than if there had been a significant discrepancy in the total number of lessons analyzed.

Finally, there were also some limitations to consider regarding the time of data collection and the participants. Participants were drawn from the larger Read by Grade Three study, and as a result of the method of participant selection for that study, there is less diversity than would be preferred, which is representative of the teaching force writ large. Another limitation related to this is that teachers were selected by literacy coaches who worked in their district. A consequence of this selection is that not every district had the same number of teachers at each grade level K-3, or even all four grades represented. Since this study also focused on how teachers enacted the writing curriculum provided to them, not all grade levels of a specific curriculum were examined, because there were no teachers present at some grade levels within each district. This limited the ability to report findings across grade levels for some of the curriculum materials. Additionally, since teachers were filming at the height of the COVID-19 pandemic, their instruction may not reflect the instruction they would normally enact. They also may not have fully captured all of the writing that occurred across their day, as they were responsible for including the video they thought was relevant. However, most teachers included lengthy ELA videos and an additional content area video, leading me to believe this may not be the case.

Analysis Limitations

While the analyses for this study were done in as methodical a way as possible, they relied on coded scores of teacher practice. To capture all instances of evidence-based instructional practices, each time a practice occurred it was noted.
That means even if a teacher was not providing instruction in something intentionally, as the focal point of the lesson, it may have been captured. For example, if a teacher was instructing a student in a small group on drafting an informational text and mentioned in passing that the child needed a lowercase letter or a period in their writing, the instructional practice was noted as present. This may have been too generous, resulting in higher presence scores than might otherwise have been true. However, since the quality score was based on an average across instances for each instructional practice, it did not skew the quality data, which is the more heavily considered data for this practice. Additionally, since the teachers' single observations were compared against an entire unit of curriculum, and both were coded the same way, it seems less likely that this skewed the comparison in favor of teachers' enacted practice over the instructional practices contained in the curriculum materials.

Directions for Future Research

While this study has certainly illuminated spaces where teachers' enacted instructional practices and curriculum materials overlap in the early elementary grades, it has not answered why the overlap may or may not be present. For instance, one of the districts that participated in the study had a new curriculum and a building instructional coach who observed or co-taught with each teacher in the building weekly. This district had stronger evidence of use of the curriculum materials, anecdotally, from videos, and some closer alignment in presence scores for practices than other districts. Nonetheless, without further analysis and more information, potentially from surveys of teachers about their beliefs and use of curriculum and more observations, it is not possible to determine why. Surveys of teachers paired with observations, like those Troia and colleagues (2011) used, a case study of teachers' writing instruction where much more data is collected, like the work done by Korth and colleagues (2016), or an in-depth ethnographic study that takes in the entire context of the instruction, such as the work done by Yoon (2013), are options for how to systematically investigate why curricular transformations occurred.

Previously, standards mandates and accountability-based reforms were ways U.S. policymakers tried to impact classroom teaching. Currently, there are reforms centered around the adoption of standards-based curriculum materials and professional development around those materials (Coburn et al., 2016). This presents two issues relevant to this dissertation study. First, there is some variation in how curricula addressed various evidence-based writing instructional practices. This means that when a district adopted a curriculum to address writing, they may not have been aware of the individual strengths and weaknesses of that curriculum as it pertained to writing instructional practices. One solution could be to have researchers engage in the creation of a rubric, such as those used in Louisiana and freely available through their department of education (Louisiana Department of Education). A rubric with clear criteria for curricular evaluation could help teachers evaluate the presence and quality of instructional practices in the writing components of ELA curricula or standalone writing curricula.
The creation and use of a rubric could help district officials, committee members, and even classroom teachers note the strengths and weaknesses of particular curricula and consider which of those best meet the needs of their learners. A second issue is that there is still a lack of professional development on writing instruction (Troia & Graham, 2016), which may have influenced the teachers’ practice. Writing was not a main focus of much of the coaching work in this study, so it stands to reason that writing professional development did not significantly influence teachers’ practice in this study as it may have in other studies of writing curriculum implementation (Troia et al., 2011). Also, teacher practices varied considerably within districts as well as across districts, meaning that the influence of shared curriculum materials is not as straightforward as policymakers and district leaders might hope (Chingos & Whitehurst, 2012; Polikoff, 2018). However, given the evidence of the influence of professional development on writing instruction (Traga Philippakos et al., 2023) and the work to provide teachers with curricula that support evidence-based instruction (Coburn et al., 2016), it seems prudent to consider a study that pairs analysis of standards-aligned curricula with a professional development model and connects both to students’ writing achievement. One such study, the design-based research on genre-based lessons and strategic genre instruction in early elementary classrooms conducted by Traga Philippakos and colleagues (2023), might provide a road map for work done with commercially available and widely adopted curricula. While this study does include knowledge-building curricula, the way that instructional practices were coded did not necessarily capture ways that the knowledge-building curricula might differ in terms of the types of texts children produce or ways that the knowledge building might influence children’s written products. To capture how they differed, a coding protocol more like that of Gabas and colleagues (2023), in which writing instruction was analyzed separately as reading instruction that included writing, genre-based writing instruction, and grammar and spelling instruction, would be needed. Some researchers have measured how knowledge impacts (or does not impact) writing achievement (Wen & Coker, 2020), but the connection of writing achievement to knowledge-building ELA curricula in early elementary classrooms has yet to be explored. Additionally, as educative curricula are of interest to the research community when thinking about how to support teacher learning beyond time spent in teacher education, coding for educative features (e.g., Davis & Krajcik, 2005) and then observing teachers’ enacted instructional practices might be a worthwhile future study. Recent research, such as that done by Wang and Troia (2023) and Coker and colleagues (2016), has allowed us to see relationships between instructional and child-level factors. While this study did not connect student achievement outcomes to teachers’ instructional practice, it seems important to do more of that type of work. This seems particularly important because of the stagnant writing achievement among students in the United States and because of the difficulty teachers report they have teaching writing. Given the increasing diversity in our classrooms, it seems important and timely to help connect teachers’ instructional moves to student-level variables.
This could provide teacher educators and teachers themselves with invaluable insight into what works and for whom. Finally, given the few observational studies that have been conducted in early elementary school, including those in kindergarten (Guo et al., 2023; Puranik et al., 2014) and first grade (Coker et al., 2016), it seems important to continue conducting observational research in early elementary classrooms to capture classroom writing instruction across varied instructional contexts. Quantifying what teachers are doing in their classrooms provides valuable insight for researchers and teacher educators, and this research can inform teacher education on a larger scale.
Importance
While there are multiple results in this study that deserve attention, one is that curricular materials are not all providing equal opportunities for high-quality writing engagement for students. This confirms what has been found in studies of preschool and kindergarten writing curricula. Another result of this study, and perhaps the most important one, is that teachers’ practice varies within and across districts, even when teachers are provided the same resources to teach from. This suggests that teacher sensemaking and teacher professional knowledge may play a large role in the type of instruction early elementary students receive. In order to provide high-quality writing instruction to our nation’s youngest students, as called for by prominent researchers (e.g., Graham, 2019; Rowe, 2018), we need to continue to tease apart the relationship between teachers’ resources and their instruction. We also need to provide teachers with professional development and learning related to the curriculum they are using. However, just as giving a teacher a new curriculum is not enough, providing professional development on a curriculum is not enough to change all teachers’ instructional practices in meaningful ways (Troia et al., 2011). As is evident in the conceptual framework for this study, it is not enough to consider one transformational factor (as was explored here by using districts as cases); rather, multiple, complex factors are at play. This is echoed in work in other fields (e.g., mathematics; Jacob et al., 2017), which found that extensive, targeted professional development around a specific curriculum did not lead to improvements in teachers’ instructional practices or their students’ achievement. Teachers need responsive professional learning that is ongoing and timely to account for the complex interaction between the transformation factors (Stein et al., 2007) and the curriculum they are implementing. To close, children need to be good writers for success in the world outside of K-12 education. To do that, they need good writing instruction. Good writing instruction requires that we aim for more than just following a set lesson plan; it requires that teachers use their professional knowledge to adapt those materials to meet the needs of the learners in their classrooms. Early elementary teachers need more support to build their professional knowledge and self-efficacy around teaching writing, and instructional coaching and teacher professional development around curriculum materials seem a promising path to improved instruction for K-3 students.
REFERENCES
Apple, M. W., & Jungck, S. (1990). You don’t have to be a teacher to teach this unit: Teaching, technology, and gender in the classroom. American Educational Research Journal, 27, 227–251. Ball, D. L., & Cohen, D. K. (1996).
Reform by the book: What is—or might be—the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25, 6–8, 14. Bazerman, C., Applebee, A. N., Berninger, V. W., Brandt, D., Graham, S., Matsuda, P. K., Schleppegrell, M. (2017). Taking the long view on writing development. Research in the Teaching of English, 51(3), 351-360. Berninger, V. (1999). Coordinating transcription and text generation in working memory during composing: Automatic and constructive processes. Learning Disability Quarterly, 22, 99- 112. Bingham, G.E., Quinn, M.F., McRoy, K., Zhang, X., Gerde, H.K., (2018). Integrating Writing into the Early Childhood Curriculum: A Frame for Intentional and Meaningful Writing Experiences. Early Childhood Education Journal, online. Blank, R. K. (2013). Science instructional time is declining in elementary schools: What are the implications for student achievement and closing the gap? Science Education, 97(6), 830–847. Brindle, M., Graham, S., Harris, K. R., & Hebert, M. (2016). Third and fourth grade teacher’s classroom practices in writing: A national survey. Reading and Writing, 29, 929-954. Buckner, J. S. (2012). Write from the Beginning and Beyond: Thinking Maps. Thinking Maps Learning Company. Cary, NC. Button, K., Johnson, M. J., & Furgerson, P. (1996). Interactive writing in a primary classroom. The reading teacher, 49(6), 446-454. Cabell, S.Q., & Hwang, H. (2020). Building content knowledge to boost comprehension in the primary grades. Reading Research Quarterly, 55. https://doi.org/10.1002/rrq.338 Cabell, S. Q., Tortorelli, L. S., & Gerde, H. K. (2013). How do I write…? Scaffolding preschoolers' early writing skills. The reading teacher, 66(8), 650-659. Carnine, D. W., Silbert, J., Kame’enui, E. J., & Tarver, S. G. (2004). Direct instruction reading. Upper Saddle River, NJ: Pearson. Cavanagh, S. (2017, June 1). K-12 Spending: Where the Money Goes. Edweek. Retrieved on May 1, 2021 from: 219 https://marketbrief.edweek.org/marketplace-k-12/k-12-spending-where-the-money-goes/. Calkins, L., & Oxenhorn, A. (2003). Small moments, personal narrative writing. Portsmouth, NH: Heinemann. Cervetti, G.N., & Wright, T.S. (2020). The role of knowledge in understanding and learning from text. In E.B. Moje, P. Afflerbach, P. Enciso, & N.K. Lesaux (Eds.), Handbook of reading research (Vol. 5). New York, NY: Routledge. Chambers, J. (2018, July 22). Teachers in school for new curriculum in Detroit. EL Education, retrieved on May 1, 2021, from: https://eleducation.org/news/el-education-partners-with-detroit-public-school-to- implement-k-5-literacy-curriculum. Chiefs for Change. (2021, April). Incentivizing smart choices: How state procurement policies can promote the use of high-quality instructional materials. Chiefs for Change. Retrieved October 1, 2023 from: https://www.chiefsforchange.org/download-media/incentivizing- smart-choices-how-state-procurement-policies-can-promote-the-use-of-high-quality- instructional-materials. Chingos, M. M., & Whitehurst, G. J. (2012). Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core. Brookings Institution. Clark, S. K., Jones, C. D., & Reutzel, R. D. (2013). Using the text structures of information books to teach writing in the primary grades. Early Childhood Education Journal, 41(4), 265-271. Coburn, C. E., Hill, H. C., & Spillane, J. P. (2016). Alignment and accountability in policy design and implementation: The Common Core State Standards and implementation research. 
Educational Researcher, 45(4), 243-251. Cohen, D. K., & Ball, D. L., (1999). Instruction, capacity, and improvement (CPRE Research Report RR-043). Philadelphia: Consortium for Policy Research in Education. Coker, D. L., Farley-Ripple, E., Jackson, A. F., Wen, H., Macarthur, C. A., & Jennings, A.S. (2016). Writing instruction in first grade: An observational study. Reading and Writing, 29(5), 793-832. Coker D. L., Jr., Jennings A. S., Farley-Ripple E., MacArthur C. A. (2018). When the type of practice matters: the relationship between typical writing instruction, student practice, and writing achievement in first grade. Contemp. Educ. Psychol. 54, 235–246. doi: 10.1016/j.cedpsych.2018.06.013 Craig, S. A. (2003). The effects of an adapted interactive writing intervention on kindergarten children’s phonological awareness, spelling, and early reading development. Reading Research Quarterly, 38(4), 438–440. 220 Connor, C. M., Morrison, F. J., & Katch, E. L. (2004). Beyond the reading wars: The effect of classroom instruction by child interactions on early reading. Scientific Studies of Reading, 8(4), 305–336 Core Knowledge Foundation & Amplify Education. (2017). Core Knowledge Language Arts: Knowledge strand [Curriculum]. Charlottesville, VA: Core Knowledge Foundation; Brooklyn, NY: Amplify Education. Cutler, L. & Graham, S. (2008). Primary grade writing instruction: A national survey. Journal of Educational Psychology. 100 (4), 907–919. Darling-Hammond, L. (2007). Race, inequality and educational accountability: The irony of ‘No Child Left Behind’. Race Ethnicity and Education, 10(3), 245–260. doi:10.1080/13613320701503207 Davis, E. A., & Krajcik, J. S. (2005). Designing Educative Curriculum Materials to Promote Teacher Learning. Educational Researcher, 34(3), 3–14. https://doi.org/10.3102/0013189X034003003. Davis, E. A., Palincsar, A. S., Smith, P. S., Arias, A. M., & Kademian, S. M. (2017). Educative Curriculum Materials: Uptake, Impact, and Implications for Research and Design. Educational Researcher, 46(6), 293–304. https://doi.org/10.3102/0013189X17727502. DeJulio, S., Hoffman, J.V., Sailors, M., Martinez, R.A., & Wilson, M.B. (2021). Content Analysis: The Past, Present, and Future. In M.H. Mallette & N.K. Duke (Eds.), Literacy Research Methodologies (pp.27-61). Guilford Press. Donovan, C. A. (2001). Children’s development and control of written story and informational genres: Insights from one elementary school. Research in the Teaching of English, 35, 394-447. Drake, C., Land, T. J., & Tyminski, A. M. (2014). Using Educative Curriculum Materials to Support the Development of Prospective Teachers’ Knowledge. Educational Researcher, 43(3), 154–162. https://doi.org/10.3102/0013189X14528039 Duke, N. K., Kays, J. (1998). “Can I say, ‘Once Upon a Time’?”: Kindergarten children developing knowledge of information book language. Early Childhood Research Quarterly, 13, (2), 295-318. Duke, N. K. (2000). 3.6 minutes per day: The scarcity of informational text in first grade. Reading Research Quarterly, 35(2), 202. Dyson, A.H. (2003) The Brothers and Sisters Learn to Write: Popular Literacies in Childhood and School Cultures. New York: Teachers College Press: New York, NY. 221 EdGate (2021). Common Core State by State. Retrieved on September 23, 2023, from https://edgate.com/standards/us-state-map. EdReports. (2021). Reports Center. September 23, 2023, from https://www.edreports.org/reports?s=ela. EL Education (2017). EL education. Retrieved May 2, 2021, from: https://eleducation.org/. 
Englert, C.S., Stewart, S.R., Hiebert, E.H., (1998). Young Writer’s Use of Text Structure in Expository Text Generation. Journal of Educational Psychology, 80 (2), 143-151. Foorman, B., Schatschneider, C., Eakin, M. N., Fletcher, J. M., Moats, L. C., & Francis, D. J. (2006). The impact of instructional practices in grades 1 and 2 on reading and spelling achievement in high poverty schools. Contemporary Educational Psychology, 31, 1–29. Gabas, C., Cabell, S. Q., Copp, S. B., & Campbell, M. (2023). Evidence-based features of writing instruction in widely used kindergarten English language arts curricula. Literacy Research and Instruction, 62(1), 74-99. Gamoran, A., Porter, A. C., Smithson, J., & White, P. A. (1997). Upgrading high school mathematics instruction: Improving learning opportunities for low-achieving, low- income youth. Educational Evaluation and Policy Analysis, 19(4), 325-338. Gerde, H.K., Bingham, G.E., Wasik, B.A., (2012). Writing in early Childhood Classrooms: Guidance for Best Practices. Early Childhood Education Journal, 40 (6), 351-359. Gerde, H. K., Skibbe, L. E., Wright, T. S., & Douglas, S. N. (2019). Evaluation of Head Start curricula for standards-based writing instruction. Early Childhood Education Journal, 47, 97-105. Goodlad, J., I. (1979). Curriculum Inquiry: The Study of Curriculum Practice, New York: McGraw Hill. Graham, S. (2018). A revised writer (s)-within-community model of writing. Educational Psychologist, 53(4), 258-279. Graham, S. (2019). Changing how writing is taught. Review of Research in Education, 43(1), 277-303. Graham, S., Bollinger, A., Booth Olson, C., D’Aoust, C., MacArthur, C., McCutchen, D., & Olinghouse, N. (2012a). Teaching elementary school students to be effective writers: A practice guide (NCEE 2012- 4058). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications_reviews.aspx#pubsearch. 222 Graham, S., Harris, K. R., & Fink, B. (2000). Is handwriting causally related to learning to write? Treatment of handwriting problems in beginning writers. Journal of Educational Psychology, 92, 620–633. Graham, S. Harris, K.R. (2002). Prevention and intervention for struggling writers. In M. Shinn, H. Walker, & G. Stoner (Eds). Interventions for academic and behavior problems: II. Preventative and remedial techniques (pp. 589-610). Washington, D.C.: National Association of School Psychologists. Graham, S., & Harris, K. R. (2015). Common Core State Standards and writing: Introduction to the special issue. The Elementary School Journal, 115(4), 457-463. Graham, S., Harris, K.R., & Fink-Chorzempa, B. (2000). Is handwriting causally related to learning to write? Treatment of handwriting problems in beginning writers. Journal of Educational Psychology, 92, 620-633. Graham, S., Harris, K.R., MacArthur, Co., & Fink-Chorzempa, B. (2003). Primary grade teachers’ instructional adaptations for weaker writers: A national survey. Journal of Educational Psychology, 95, 279-293. Graham, S., Harris, K. R. Mason, L., Fink-Chorzempa, B., Moran, S., & Saddler, B. (2008a). How do primary grade teachers teach handwriting: A national survey. Reading and Writing: An Interdisciplinary Journal, 21, 49-69. Graham, S. & Hebert, M., (2011). Writing to Read: A Meta-Analysis of the Impact of Writing and Writing Instruction on Reading. Harvard Educational Review, 81 (4), 710-744. Graham, S., Huebner, A., Skar, G. 
B., Azani, J., & Weinberg, P. (2023). Teaching writing during the COVID-19 pandemic in the 2021–2022 school year. Reading and Writing, 1-30. Graham, S., Kiuhara, S. A., & MacKay, M. (2020). The Effects of Writing on Learning in Science, Social Studies, and Mathematics: A Meta-Analysis. Review of Educational Research, 90(2), 179–226. https://doi.org/10.3102/0034654320914744. Graham, S. McKeown, D., Kiuhara, S., & Harris, K.R. (2012b). A meta-analysis of writing and writing instruction on reading. Harvard Educational Review, 81 (4), 710-744. Graham, S., Morphy, P., Harris, K. R., Fink-Chorzempa, B., Saddler, B., Moran, S., & Mason, L. (2008b). Teaching Spelling in the Primary Grades: A National Survey of Instructional Practices and Adaptations. American Educational Research Journal, 45(3), 796–825. Graham, S., & Perin, D. (2007). Writing next: Effective strategies to improve writing of adolescents in middle and high schools―A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education. Guo, Y., Puranik, C., Dinnesen, M. S., & Hall, A. H. (2022). Exploring kindergarten teachers’ 223 classroom practices and beliefs in writing. Reading and Writing: An Interdisciplinary Journal, 35, 457–478. https://doi.org/10.1007/s11145-021-10193-y. Guo, Y., Puranik, C., Xie, Y., Dinnesen, M. S., & Breit, A. (2023). An Observational Study of Writing Instruction and Practice in Kindergarten Classrooms. Literacy Research and Instruction, 62(2), 180-202. Hall, A. H., Simpson, A., Guo, Y., & Wang, S. (2015). Examining the effects of preschool writing instruction on emergent literacy skills: A systematic review of the literature. Literacy Research and Instruction, 54(2), 115-134. Hall, A.H., Boyer, D.M., Beschorner, E.A., (2017). Examining Kindergarten Students’ Use of and Interest in Informational Text. Early Childhood Education Journal. 45(5), 703-711. Hall, A. H., Gao, Q., Guo, Y., & Xie, Y. (2023). Examining the effects of kindergarten writing instruction on emergent literacy skills: a systematic review of the literature. Early Child Development and Care, 193(3), 334-346. Handsfield, L.J., Crumpler, T.P. and Dean, T.R. (2010). Tactical Negotiations and Creative Adaptations: The Discursive Production of Literacy Curriculum and Teacher Identities Across Space‐Times. Reading Research Quarterly, 45, 405-431. https://doi.org/10.1598/RRQ.45.4.3. Harste, J. C., Woodward, V. A., & Burke, C. L. (1984). Language stories and literacy lessons. Portsmouth., NH; Heinemann. Hayes, J. R., Flower, L. S. (1980). Identifying the organization of writing processes. In Gregg, L.W., Steinberg, E. R. (Eds.), Cognitive processes in writing (pp. 3-30). Hillsdale, NJ: Erlbaum. Higgins, L. (2018, August 29). More than half of Michigan’s Students failed M-STEP literacy exam. The Detroit Free Press. Retrieved from: https://www.freep.com/story/news/education/2018/08/29/mstep-literacy-test- michigan/1115750002/. Hsiang, T. P., Graham, S., & Yang, Y. M. (2020). Teachers’ practices and beliefs about teaching writing: A comprehensive survey of grades 1 to 3 teachers. Reading and Writing, 33, 2511-2548. Hwang, H., & Duke, N. K. (2020). Content Counts and Motivation Matters: Reading Comprehension in Third-Grade Students Who Are English Learners. AERA Open. https://doi.org/10.1177/2332858419899075. Ikpeze, C. (2013). Increasing urban students’ engagement with school: Toward the Expeditionary learning model. Journal of Urban Learning, Teaching, and Research, 9, pp. 55-64. 224 Kauffman, D., Johnson, S. M., Kardos, S. 
M., Liu, E., & Peske, H. G. (2002). “Lost at sea:” New teachers’ experiences with curriculum and assessment. Teachers College Record, 104, 273– 300. Kaufman, J. H., Opfer, V. D., Bongard, M., & Pane, J. D. (2018). Changes in what teachers know and do in the Common Core era. Santa Monica, CA: RAND. https://www.rand.org/pubs/research_reports/RR2658.html. Kent, S. C., & Wanzek, J. (2016). The Relationship Between Component Skills and Writing Quality and Production Across Developmental Levels: A Meta-Analysis of the Last 25 Years. Review of Educational Research, 86(2), 570–601. Kersten, J., & Pardo, L. (2007). Finessing and hybridizing: Innovative literacy practices in Reading First classrooms. The Reading Teacher, 61(2), 146–154. Kim, Y. S. G., Yang, D., Reyes, M., & Connor, C. (2021). Writing instruction improves students’ writing skills differentially depending on focal instruction and children: A meta-analysis for primary grade students. Educational Research Review, 34, 100408. Kim, Y.S., Al Otaiba, S., Sidler, J. F., & Gruelich, L. (2013). Language, literacy, attentional behaviors, and instructional quality predictors of written composition for first graders. Early Childhood Research Quarterly, 28(3), 461–469. https://doi.org/10.1016/j.ecresq.2013.01.001. Korth, B. B., Wimmer, J. J., Wilcox, B., Morrison, T. G., Harward, S., Peterson, N., ... & Pierce, L. (2017). Practices and challenges of writing instruction in K-2 classrooms: A case study of five primary grade teachers. Early Childhood Education Journal, 45, 237-249. Krippendorff, K. (2013) Content Analysis: An Introduction to Its Methodology (3rd ed). California, CA: Sage Publications. Land, T. J., Tyminski, A. M., & Drake, C. (2015). Examining pre-service elementary mathematics teachers' reading of educative curriculum materials. Teaching and Teacher Education, 51, 16-26. Lohman, I., Dellinger, H. & Wilkinson, M. (2023, August 31). Michigan students gain on M- STEP, but the results are still down from pre-pandemic levels. Chalkbeat Detroit. Retrieved from: https://detroit.chalkbeat.org/2023/8/31/23853714/michigan-mstep- scores-results. Louisiana Department of Education (n.d.). Instructional Materials Review Guidance and Overview Documents, retrieved on November 3, 2023, from https://www.louisianabelieves.com/resources/library/curricular-resources. MAISA Units of Writing. (2014) Retrieved on March 3, 2021, from 225 https://www.oaklandschoolsliteracy.org/resources/common-core-resources/ccss- curriculum/. Malpique, A. A., Pino-Pasternak, D., & Valcan, D. (2017). Handwriting automaticity and writing instruction in Australian kindergarten: An exploratory study. Reading and Writing: An Interdisciplinary Journal, 30, 1789–1812. https://doi.org/10.1007/s11145- 017-9753-1. Mather, N., & Lachowicz, B. L. (1992). Shared writing: An instructional approach for reluctant writers. Teaching Exceptional Children, 25(1), 26-30. McCarthey, S. J. (2008). The Impact of No Child Left Behind on Teachers’ Writing Instruction Instruction. Written Communication, 25(4), 462- 505. https://doi.org/10.1177/0741088308322554. McCarthey, S. J. & Kang, G. (2017). Understanding influences on writing instruction: cases of two kindergarten teachers. Early Child Development and Care, 187(3-4), 398- 417, https://doi.org/10.1080/03004430.2016.1211126. McCarthey, S.J., & Woodard, R. (2018). 
Faithfully following, adapting, or rejecting mandated curriculum: teachers’ curricular enactments in elementary writing instruction, Pedagogies: An International Journal, 13:1, 56 (80), DOI: 10.1080/1554480X.2017.1376672 Michigan Association of Intermediate School Administrators General Education Leadership Network (MAISA GELN) Early Literacy Task Force (ELTF) (2016). Essential instructional practices in early literacy: K to 3. Lansing, MI: Authors Miles M.B., Huberman, A.M., Saldana J. (2020). Qualitative Data Analysis: A Methods Sourcebook, 4th ed. Thousand Oaks, CA: SAGE Publications. National Center for Education Statistics. (2019). The nation’s report card: Writing 2011 (NCES2012–470). Washington, D.C.: Institute of Education Sciences, U.S. Department of Education. Retrieved from https://nces.ed.gov/. National Commission on Writing (2003). The Neglected “R”: The Need for a Writing Revolution. New York, NY: College Entrance Examination Board. National Early Literacy Panel (NELP). (2008). Developing early literacy: Report of the National Early Literacy Panel. Washington, D.C.: National Institute for Literacy National Governors Association Center for Best Practices and the Council of Chief State School Officers. (2010). Common core state standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, D.C.; Author. Retrieved from http://www.corestandards.org/ela-literacy. 226 Neuendorf, K.A. (2002). The Content Analysis Guidebook. Sage: Thousand Oaks, CA. No Child Left Behind (NCLB) Act of 2001, Pub. L. No. 107-110, § 115, Stat. 1425. (2002). Accessed 10/9/18 from http://www.ed.gov/policy/elsec/leg/esea02/index.html. Olinghouse, N. G., & Graham, S. (2009). The relationship between the discourse knowledge and the writing performance of elementary-grade students. Journal of educational psychology, 101(1), 37. Oulette, G., Senechal, M., (2017). Invented Spelling in Kindergarten as a predictor of Reading and Spelling in Grade 1: A New Pathway to Literacy, or Just the Same Road Less Known? Developmental Psychology, 53(1), 77-88. https://doi.org/10.1037/dev0000179. Pak, K., Polikoff, M. S., Desimone, L. M., & Saldívar García, E. (2020). The Adaptive Challenges of Curriculum Implementation: Insights for Educational Leaders Driving Standards-Based Reform. AERA Open. https://doi.org/10.1177/2332858420932828. Pease-Alvarez, L. & Samway, K.D. (2008). Negotiating a top-down reading program mandate: The experiences of one school. Language Arts, 86(1), 32-41. Polikoff, M. S., & Porter, A. C. (2014). Instructional alignment as a measure of teaching quality. Educational Evaluation and Policy Analysis, 36(4), 399-416. Polikoff, M. (2018). The challenges of curriculum materials as a reform lever. Economic Studies at Brookings: Evidence Speaks Reports, 2, 58. Polikoff, M., & Dean, J. (2019). The supplemental curriculum bazaar: Is what’s online any good? Washington, DC: Thomas B. Fordham Institute. Retrieved December 20, 2021, from https://fordhaminstitute.org/national/research/supplemental-curriculum-bazaar. Piasta, S. B. (2016). Current understandings of what works to support the development of emergent literacy in early childhood classrooms. Child development perspectives, 10(4), 234-239. Pulido, L., & Morin, M-F. (2018). Invented spelling: What is the best way to improve literacy skills in kindergarten? Educational Psychology, 38(8), 980–996. Puranik, C.S., Al Otaiba, S., Sidler, J.F. Greulich, L. (2014). 
Exploring the amount and type of writing instruction during language arts instruction in kindergarten classrooms. Reading & Writing, 27, 213–236, https://doi.org/10.1007/s11145-013-9441-8. Puranik, C. S., & Lonigan, C. J. (2011). From scribbles to scrabble: Preschool children's developing knowledge of written language. Reading and Writing, 24(5), 567-589. Puranik, C. S., & Al Otaiba, S. (2012). Examining the contribution of handwriting and spelling to written expression in kindergarten children. Reading and Writing: An Interdisciplinary 227 Journal, 25, 1523–1546. Puranik, C.S., & Lonigan, C.J. (2014). Emergent Writing in Preschoolers: Preliminary Evidence for a Theoretical Framework. Reading Research Quarterly, 49(4), 453-467. Purcell-Gates, V., Duke, N.K., Martineau, J.A., (2007). Learning to read and write genre- specific text: Roles of authentic experience and explicit teaching. Reading Research Quarterly, 42(1), 8-45. Remillard, J., & Heck, D. J. (2014). Conceptualizing the curriculum enactment process in mathematics education. Mathematics Education, 406, 705–718. Remillard, J. T. (2005). Examining Key Concepts in Research on Teachers’ Use of Mathematics Curricula. Review of Educational Research, 75(2), 211–246. https://doi.org/10.3102/00346543075002211. Rowan, B., Camburn, E., & Correnti, R. (2004). Using teacher logs to measure the enacted curriculum: A study of literacy teaching in third-grade classrooms. Elementary School Journal, 105(1), 75–101. doi:10.1086/428803 Rowe, D.W. (1994) Preschoolers as Authors. Hampton Press, Inc.: Cresskill, NJ. Rowe, D. W. (2008). The social construction of intentionality: Two-year-olds' and adults' participation at a preschool writing center. Research in the Teaching of English, 42(4), 387-434. Rowe, D. W., & Neitzel, C. (2010). Interest and agency in 2‐and 3‐year‐olds' participation in emergent writing. Reading Research Quarterly, 45(2), 169-195. Rowe, D. W. (2018). The Unrealized Promise of Emergent Writing: Reimagining the Way Forward for Early Writing Instruction. Language Arts, 95 (4) 229-241. Shanahan, T. (2006). Relations among oral language, reading, and writing development. In C.A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp.83–95). New York, NY: The Guilford Press. Shen, M., & Troia, G. A. (2018). Teaching Children With Language-Learning Disabilities to Plan and Revise Compare–Contrast Texts. Learning Disability Quarterly, 41(1), 44–61. https://doi.org/10.1177/0731948717701260. Spear-Swerling, L., Zibulsky, J. (2014). Making time for literacy: teacher knowledge and time allocation in instructional planning. Reading and Writing: An Interdisciplinary Journal, 27, 1353–1378. https://doi.org/10.1007/s11145-013-9491-y. Stein, M. K., Remillard, J., & Smith, M. S. (2007). How curriculum influences student learning. Second handbook of research on mathematics teaching and learning, 1(1), 319-370. 228 Stillman, J., & Anderson, L. (2011). To Follow, Reject, or Flip the Script: Managing Instructional Tension in an Era of High-Stakes Accountability. Language Arts, 89 (1), 22-37. Retrieved April 17, 2021, from http://www.jstor.org/stable/41804312. Sulzby, E. (1986) Kindergarteners as writers and readers. In M. Farr (Ed.), Advances in writing research: Children's early writing (Vol. 1, pp. 127-200). Norwood, NJ: Ablex. Taylor, G., Shepard, L., Kinner, F., & Rosenthal, J. (2002). A survey of teachers’ perspectives on high-stakes testing in Colorado: What gets taught, what gets lost. CSE technical report 588. 
Los Angeles, CA: Evaluation, Standards Student Testing. (ERIC Document Reproduction Service No. ED475139). Taylor, B. M., Pearson, P. D., Peterson, D. S., & Rodriguez, M. C. (2003). Reading growth in high poverty classrooms: The influence of teacher practices that encourage cognitive engagement in literacy learning. The Elementary School Journal, 104(4), 3–28. Teale, W.H. & Sulzby, E., (1986). Emergent Literacy: Writing and Reading. Writing Research: Multidisciplinary Inquiries into the Nature of Writing Series. Ablex; Norwood, NJ. Tepe, L., & Mooney, T. (2018, May). Navigating the new curriculum landscape: How states are using and sharing open educational resources. New America. Retrieved September 13, 2023 from https://d1y8sb8igg2f8e.cloudfront.net/documents/FINAL_Navigating_the_New_Curricul um_Landscape_v4.pdf. Traga Philippakos, Z. A., Wiese, P., and A. Davis, (2023a). "Writing and Reading Connections: Giving Value to Both Sides of the Same Literacy Coin," The Language and Literacy Spectrum: Vol. 33: Iss. 1, Article 6. Available at: https://digitalcommons.buffalostate.edu/lls/vol33/iss1/6. Traga Philippakos, Z. A., MacArthur, C. A., & Rocconi, L. M. (2023b). Effects of genre-based writing professional development on K to 2 teachers' confidence and students’ writing quality. Teaching and Teacher Education, 135, 104316. Troia, G. (2006). Writing instruction for students with learning disabilities. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp.324-336). New York: Guilford Press. Troia, G. (2014). Evidence-based practices for writing instruction (Document No. IC-5). Retrieved from University of Florida, Collaboration for Effective Educator, Development, Accountability, and Reform Center website: http://ceedar.education.ufl.edu/tools/innovation-configuration. Troia, G. & Graham, S. (2016). Common core writing and language standards and aligned state assessments: a national survey of teacher beliefs and attitudes. Reading and Writing, 29 (9), 1719-1743. 229 Troia, G. A., Lin, S. J. C., Cohen, S., & Monroe, B. W. (2011). A year in the writing workshop: Linking writing instruction practices and teachers’ epistemologies and beliefs about writing instruction. The Elementary School Journal, 112(1), 155-182. Troia, G. A. & Olinghouse, N. (2013) The Common Core State Standards and Evidence Based Educational Practices: The Case of Writing, School Psychology Review, 42:3, 343- 357, DOI: 10.1080/02796015.2013.12087478 Valencia, S., Place, N., Martin, S., & Grossman, P. (2006). Curriculum materials for elementary reading: Shackles and scaffolds for four beginning teachers. The Elementary School Journal, 107(1), 93–120. Vanderburg, M., & Stephens, D. (2010). The impact of literacy coaches: What teachers value and how teachers change. The elementary school journal, 111(1), 141-163. Vaughn, M., Scales, R.Q., Stevens, E.Y., Kline, S., Barrett-Tatum, J., Van Wig, A., Yoder, K. K., & Wellman, D. (2019). Understanding literacy adoption policies across contexts: a multi-state examination of literacy curriculum decision-making., Journal of Curriculum Studies, https://doi.org/10.1080/00220272.2019.1683233. Wang, H., & Troia, G. A. (2023). How students' writing motivation, teachers' personal and professional attributes, and their writing instruction impact student writing achievement: A two-level hierarchical linear modeling study. Frontiers in Psychology, 14, 1213929. Whitehurst, G., & Lonigan, C. (1998). Child Development and Emergent Literacy. 
Child Development, 69(3), 848-872. Waldron, C.H. (2014). Reading, reforms, and resources: How elementary teachers teach literacy in contexts of complex educational policies and required curriculum. (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3618075). Wright, T. S. (2011). What classroom observations reveal about oral vocabulary instruction in kindergarten (Doctoral dissertation, University of Michigan). Wright, T.S., & Neuman, S.B. (2013). Vocabulary instruction in commonly used kindergarten core reading curricula. The Elementary School Journal, 113(3), 386–408. https://doi.org/10.1086/668766. Wright, T. S., Cummings, A., West, J., & Anderson, J. (2022). What Resources do Elementary Teachers Use for English Language Arts Instruction? The K-5 ELA Curriculum Landscape in Michigan. https://epicedpolicy.org/wp- content/uploads/2022/09/RBG3_Curriculum_PolBrief_Sept2022.pdf. 230 Wright, T.S., Bruner, L., Cummings, A., & Strunk, K. (accepted). Understanding Teachers’ Literacy Instructional Practices in K-3 Classrooms during COVID-19. Reading Research Quarterly. Yin, R.K. (2018). Case Study Research. Los Angeles: Sage. Yoon, H. (2013). Rewriting the Curricular Script: Teachers and Children Translating Writing Practices in a Kindergarten Classroom. Research in the Teaching of English, 48(2), 148- 174. Retrieved April 30, 2021, from http://www.jstor.org/stable/24398653. 231 Example Page from the Essential Instructional Practices in Early Literacy: Grades K-3 (MAISA-GELN Early Literacy Task Force, 2016) APPENDIX A K-3 ESSENTIALS *For the full essentials document, go to: https://literacyessentials.org/downloads/gelndocs/k- 3_literacy_essentials.pdf 232 APPENDIX B CODING PROTOCOL Coding Protocol for Video Data and Curriculum Coding Procedures for Essential Four: All evidence of activities that build phonological awareness (as outlined below) will be coded in the videos of classroom instruction. Phonological awareness activities are defined as those activities that ask children to pay conscious attention to the sounds that make up language. We will assign one score for each bullet in this essential practice, based on all observed instances of these activities in classroom instruction. Bullet #5: The teacher promotes phonological awareness development, particularly phonemic awareness development, by engaging children in daily opportunities to write meaningful texts in which they listen for the sounds in words to estimate their spellings. Exemplary (5) While writing meaningful text or participating in shared writing experiences, the teacher regularly coaches children to listen for the sounds in words and connect these sounds to letters. The children spend most of the time actively participating in these writing experiences themselves, with the teacher offering support. Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed While writing meaningful text or participating in shared writing experiences, the teacher occasionally coaches children to listen for the sounds in words and connect these sounds to letters. However, the teacher does most of the work for the children (i.e., breaking words into sounds and then writing the letters for the children or telling them which letters to write). The teacher offers children opportunities to write meaningful texts but does not scaffold children to listen for the sounds in words to estimate their spellings. 
For instance, the teacher might spell words for children when they ask instead of prompting them sound out the words and connect letters to the sounds in developmentally appropriate ways. Rules for E4B5: This should be scored any time children are writing “meaningful text” independently (not copying words or sentences from the overhead/chart paper). Do not score spelling lessons in this bullet (can score spelling in E6B5). 233 Coding Procedures for Essential Six: All evidence of writing instruction (as outlined below) will be coded in the videos and curriculum artifacts. Writing instruction is defined as any time teachers are modeling writing practices, dictating children’s spoken language into written form, discussing writing strategies, providing time for children to write independently and/or with others, or examining a text for elements of writing craft. Writing instruction also includes explicit instruction in letter formation, spelling, capitalization, punctuation, sentence construction, keyboarding, and word processing. Coders will assign one score for each bullet in this essential practice, based on all observed instances of writing instruction. Bullet #1: The teacher provides interactive writing experiences in grades K and 1. Exemplary (5) Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed (0) The teacher models writing for children but does not share ANY of the composition or transcription process with them. The teacher may explain choices in purpose or audience, composition, mechanics, or revision, but does not invite children to participate in this discussion. The teacher designs an interactive writing experience in which children are highly involved in ALL components of the process. These components include selecting an audience or purpose; actively participating in the composition (sharing ideas for the message of the writing); actively participating in the mechanics (transcribing letters, adding words, etc.); revising the writing; and reading the writing (and/or rereading the writing) together. The teacher designs an interactive writing experience in which children are involved in SOME of the components of the process OR where children are highly involved but in only one component of the process. 234 Rule for E6B1 - Interactive writing involves children contributing to a piece of writing led by the teacher. With the teacher’s support, children determine the message, count the words, stretch words, listen for sounds within words, think about letters that represent those sounds, and/or write some of the letters. Rules for E6B1 – Can count interactive writing outside of K and 1; do not count copying work from the board; grammar where children are working on editing or revising pre-written sentences does not count as interactive writing. This should be scored in E6B5C and D. Bullet #2: The teacher provides daily time for children to write, aligned with Essential 1 (Literacy Engagement & Motivation). Exemplary (5) Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed (0) The teacher provides daily time for children to write AND the teacher provides opportunities for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation, as aligned to Essential 1. 
The teacher provides daily time for children to write OR the teacher may provide opportunities for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation, as aligned to Essential 1. 235 The teacher provides limited opportunities for children to write each day AND offers no opportunities for children to see themselves as successful writers, have choice in their writing, have opportunities to collaborate on writing, and have purposes for writing beyond an assignment or expectation as aligned to Essential 1. Rule for E6B2: To score a 3 or higher, daily time = sustained writing (e.g., in a writing block), an actual piece of writing, not a “worksheet” (students copying teacher generated ideas, only one correct answer, etc.). Rule for E6B2: A “worksheet” can score beyond a 1 if the worksheet is being used as a graphic organizer for students to generate their own ideas that could be used in their own writing. Rule for E6B2: Definition of “opportunities for children to see themselves as successful writers” includes ONE of the following: • Not a “worksheet” (students copying teacher generated ideas, only one correct answer, etc.) • Developmentally appropriate for students • Students can explain what it means to be successful by referring to: o I can statement o Class discussions o Anchor charts o Checklists o Rubrics • Student is provided with feedback specific to the criteria outlined above Bullet #3: The teacher provides instruction in writing processes and strategies, particularly those involving researching, planning, revising, and editing writing. Exemplary (5) Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed (0) The teacher provides students with explicit instruction in the writing process AND explicitly teaches a strategy or strategies within that process. This includes instruction in strategies used The teacher provides students explicit instruction in the writing process OR explicitly teaches strategies within that process. This includes instruction in strategies for researching, 236 The teacher briefly refers to writing processes and/or strategies (e.g., “remember to look at your graphic organizer”) but explicit instruction in writing processes and strategies is not evident. for researching, planning, revising, or editing their writing. planning, revising, or editing their writing. Rule for E6B3: Grammar activities in which children are working on editing or revising pre-written sentences should not be scored here. Instead, score in E6B5C and D. Rule for E6B3: Instruction in drafting should also be included as part of the writing process. Bullet #4: The teacher provides opportunities to study models of and write a variety of texts for a variety of purposes and audiences, particularly opinion, informative/explanatory, and narrative texts (real and imagined). Exemplary (5) Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed (0) The teacher reads a mentor text but does not explain or discuss how this relates to children’s own writing OR the mentor text is not aligned with the type of text that children are writing. The teacher reads/shows a mentor text during writing instruction AND discusses the model with the children including supporting children to discuss the purpose, audience, structure, and features of this type of text that children could include in their writing. 
The teacher reads/shows a mentor text that is the same type of text that children are writing AND tells children the purpose, audience, structure, and features that should be included in their writing. 237 E6B4 – Can count a text as a “mentor text” if a text appears in writing instruction. Cannot score higher than 1 if it is not used explicitly to teach/connect to a lesson objective. Mentor Text = text that students model their writing after. Bullet #5: The teacher provides explicit instruction in letter formation, spelling, capitalization, punctuation, sentence construction, keyboarding, and word processing. Exemplary (5) The teacher explains AND models AND guides children to practice one or more age- appropriate writing skills. The teacher engages children in discussion of how and why to use the skill. Strong (4) Proficient (3) Developing (2) Beginning (1) Not Observed (0) The teacher may briefly use or reference writing skills while children observe and listen, but these are mentioned briefly or in passing. The teacher does two of the following three things: models, explains or guides children to practice one or more age- appropriate writing skill. The teacher may not complete a full gradual release cycle of explaining, modeling, guided and independent practice. Rule for E6B5: Grammar activities in which children are working on editing or revising pre-written sentences should be scored here, in the appropriate, corresponding sub-bullet(s). Rules for E6B5: Cannot score higher than a 1 if there is no explicit instruction 238 Rules for E6B5: Writing instruction addresses the following age-appropriate skills (as defined by Common Core State Standards for ELA for each grade level): ____ A. Letter formation • Handwriting, spacing between words, attention to grip and body position, explicit instruction in letter formation, short periods of supported practice, opportunities to write letters from memory, teacher feedback, opportunities to self-evaluate one’s writing, opportunities to write letters not only in isolation but also in context. ____ B. Spelling • Listening to and representing individual sounds in words, greater use of spelling patterns to spell words, learn to use analogy and knowledge of morphological relationships to spell words, breaking words into chunks ____ C. Capitalization/Punctuation • Mechanics instruction in the context of student’s own writing with a specified purpose, editing checklists, common set of editing marks ____ D. Sentence construction • Practice these skills while drafting, revising, and editing student’s own writing (sentence framing, sentence expanding, sentence combining) ____ E. Keyboarding (Third grade and above) • Utilizing keyboard software, monitoring children’s use of software, short/focused lessons ____ F. 
Word processing 239 APPENDIX C PRACTICE GUIDE ALIGNMENT Alignment of the IES Practice Guide: Teaching Elementary Students to be Effective Writers (Graham et al., 2012a) and Essential Instructional Practices in Early Literacy: Grades K-3 (Essentials K-3 (MAISA-GELN ELTF, 2016)) IES Recommendation Literacy Essential Time 1.1 Provide daily time for students to write 6.2 daily time for children to write, aligned with instructional practice #1 above 6.3 instruction in writing processes and strategies, particularly those involving researching, planning, revising, and editing writing Writing Process 2.0 Teach students to use the writing process for a variety of purposes 2a.1 Teach students strategies for the various components of the writing process 2a.2 Gradually release writing responsibility from the teacher to the student 2a.3 Guide students to select and use appropriate writing strategies 2a.4 Encourage students to be flexible in their use of the components of the writing process 6.1 interactive writing experiences in grades K and 1 Purposes for Writing 2b.1 Help students understand the different purposes of writing 2b.2 Expand students' concept of audience 2b. 3 Teach students to emulate the features of good writing 6.4 opportunities to study models of and write a variety of texts for a variety of purposes and audiences, particularly opinion, informative/explanatory, and narrative texts (real and imagined) 240 2b. 4 Teach students techniques for writing effectively for different purposes Component Skills (spelling, handwriting, etc.) 3.1 Teach very young writers how to hold a pencil correctly and form letters fluently and efficiently 3.2 Teach students to spell words correctly 3.3 Teach students to construct sentences for fluency, meaning, and style 3.4 Teach students to type fluently and to use a word processor to compose 4.1 Teachers should participate as members of the community by writing and sharing their writing Writing Community and Writing Motivation 4.2 Give students writing choices 4.3 Encourage students to collaborate as writers 4.4 Provide students with opportunities to give and receive feedback throughout the writing process 4.5 Publish students' writing and extend the community beyond the classroom 6.5 explicit instruction in letter formation, spelling strategies, capitalization, punctuation, sentence construction, keyboarding (first expected by the end of grade 3, see the Practice Guide cited immediately above for detail), and word processing 4.5 daily opportunities to write meaningful texts in which they listen for the sounds in words to estimate their spellings 6.2 daily time for children to write, aligned with instructional practice #1 above 241 APPENDIX D TABLE D.1 Table D.1 Analytic Matrix of Round 1 Teacher Video Teacher ID ELA video 10101 10111 10121 10130 10150 16461 16471 16481 16490 16500 16700 16510 16710 16720 18581 18591 18601 18610 18620 18630 19641 19651 19661 19670 19680 19690 Yes Yes Yes Yes Yes half – 1 small group Yes – not student work time Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Content Area Notes Yes Yes Yes No No No No No Math Math No No No Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No District B District B District B District B District B District A District A District A District A District A District A District A District A District D District D District D District D District D District D District C District C District C District C District C District C Writing Video Separate Yes Yes Yes Yes Yes Yes - Spelling 
No No Yes - Spelling No - combined Yes – Spelling Yes - Spelling Yes - Spelling No No - combined Yes Yes Yes No Yes –T was testing Yes Yes Yes Yes Yes Yes 242 APPENDIX E TABLE E.1
Table E.1 Units Corresponding to Round 1 Observations
Curriculum | Grade | First Observation
Curriculum 3 | 1 | Unit 5: Technology
Curriculum 3 | 2 | Unit 5: Technology
Curriculum 1 | 1 | Sun, Moon, Stars
Curriculum 1 | 2 | Fossils
Curriculum 1 | 3 | Frogs
Curriculum 4 | 1 | Unit 1
Curriculum 2 | K | Expository
Curriculum 2 | 2 | Expository
Curriculum 2 | 3 | Expository
243