THE EFFECT OF TIER ONE LITERACY PRACTICES ON PRESCHOOLERS' EMERGENT LITERACY SKILLS

By

Tamela Jo Mannes

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Special Education - Doctor of Philosophy

2013

ABSTRACT

THE EFFECT OF TIER ONE LITERACY PRACTICES ON PRESCHOOLERS' EMERGENT LITERACY SKILLS

By

Tamela Jo Mannes

Preschool education has grown exponentially over the last 50 years, demonstrating long-term benefits (Aos, Lieb, Mayfield, & Pennucci, 2004; Barnett, 2008; Camilli, Vargas, Ryan, & Barnett, 2010; Gorey, 2001). Despite the long-term benefits, children are leaving preschool and entering kindergarten with inadequate literacy skills (Carta, Greenwood, & Atwater, 2010). This investigation sought to examine the effect of a class-wide literacy intervention and the intervention's impact on alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print. This study evaluated the impact on preschoolers' emergent literacy skills of implementing a hybridized version of two evidence-based curricula, which have been shown to improve preschool students' alphabet knowledge, phonological awareness, comprehension, vocabulary, and concepts about print. These interventions included the Kindergarten Peer Assisted Literacy Strategies (Mathes, Clancy-Menchetti, & Torgesen, 2001) and Developing Talkers: PreK (The Children's Learning Institute, 2010). The hybridized intervention was administered over 12 weeks, for a total of 48 sessions.

The first research question focused on the impact of the literacy intervention on specific emergent literacy skills: preschool students' alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print. The second research question examined the overall performance of the experimental group and control group on broader measures of literacy, specifically cumulative scores on non-targeted literacy components (i.e., sight words). The final three research questions compared the experimental and control group scores concerning the proportion of students who met benchmark goals on assessments, as well as the proportion of students who were deemed well below benchmark or below benchmark based on cut scores.

The results of the experimental study revealed that participants in both conditions developed emergent literacy skills over the duration of the study. However, the experimental group outperformed the control condition on knowledge of letter sounds, phonological awareness, and vocabulary measures. The results are presented, with implications for teaching practice and assessment in preschool literacy programs.

Copyright by
TAMELA JO MANNES
2013

Dedicated to William for 15 years of steadfast love and support.

ACKNOWLEDGEMENTS

First and foremost, thank you to Dr. Carol Sue Englert, my advisor and dissertation chair. Her constant support and encouragement over the last three years are sincerely appreciated. Since our paths crossed, Carol Sue has encouraged me and provided honest feedback throughout the process. Thank you for the endless hours of support and for helping mold me into a better researcher, learner, teacher, and person. I would also like to thank my dissertation committee members, Dr. Troy Mariage, Dr. Sara Bolt, and Dr. Josh Plavnick, for their guidance on this project. Your questions, comments, and critiques challenged me to become a more critical thinker, researcher, and scholar. For that I am eternally grateful.
Thank you to all of my peers over the course of my studies. The conversations we have had from differing perspectives have broadened my view of the world; thank you. Thank you to all of the educators, parents, and students who participated in this study. Without you, this would never have come to fruition. I am especially thankful to Kate Augustyn for her collaboration over the years and her vision to support and enhance the education of some of the youngest learners. Additionally, thank you to my colleagues at work. Their support, flexibility, and shared vision are greatly appreciated as I have pursued my dreams!

I would not be here without the continual love and support of my family. I am forever indebted to you for all of the opportunities I have been given because of the sacrifices you have made. Your dedication and support are the reason I am at this juncture.

Finally, words cannot express the amount of gratitude I have for William. You and Olivia are a blessing and remind me of the true purpose in life. Your tireless patience, support, trust, and love are the reason for the completion of this milestone. The weekends of working on one project or another, the late nights, and the early mornings – thank you. You have taught me to challenge the status quo, think deeper, and follow my dreams.

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

CHAPTER ONE
INTRODUCTION
    Problem Statement
    Study Purpose
    Research Questions

CHAPTER TWO
LITERATURE REVIEW
    Conceptual Frameworks
        Holdaway's theory of literacy development (1979).
        Emergent literacy theory.
        Social constructivism.
    Preschool Education
        Preschool curricula.
    Multi-Tiered Systems of Support
        Benefits of an MTSS approach in preschool settings.
        Challenges of an MTSS approach in preschool settings.
        Research on preschool MTSS.
    Emergent Literacy
        Alphabetic principle.
        Phonological awareness.
        Vocabulary.
        Comprehension.
        Print awareness (Concepts about print).
    Preschool Emergent Literacy Intervention
        Kindergarten Peer Assisted Literacy Strategies research.
        Developing Talkers: PreK research.
        Intervention theoretical framework.
    Study Context

CHAPTER THREE
METHODS
    Participants
        Preschool classrooms.
        District demographics.
        Participant demographics.
    Design
    Setting
    Data Sources
        Assessment.
            Preschool Early Literacy Indicators (PELI).
                Background on the PELI.
                Administration information on the PELI.
                Reliability and validity measures of the PELI.
            Dynamic Indicators of Basic Early Literacy Skills Next (2011; DIBELS).
                Letter Naming Fluency (LNF).
                First Sound Fluency (FSF).
                Phoneme Segmentation Fluency (PSF).
            Michigan Literacy Progress Profile (MLPP).
                Concepts about print.
            Teacher Rating of Oral Language and Literacy (TROLL).
            Curriculum-based vocabulary measure.
            Lower case letter names and sounds.
            Sight words.
    Intervention
        Alphabet knowledge and phonological awareness instruction.
            KPALS instruction.
            KPALS modifications.
        Vocabulary, comprehension, and concepts about print instruction.
            Developing Talkers: PreK.
            Concepts about print.
        Time spent in intervention.
    Procedural Fidelity
    Control Group
    Additional Common Instructional Themes/Components

CHAPTER FOUR
RESULTS
    Data Analysis
        Independent variable.
        Dependent variables included in data analysis.
            PELI.
            DIBELS.
            TROLL.
            Additional variables.
        Preliminary data analysis.
            Fall analysis results.
            Winter analysis results.
        Research question one.
            Alphabet knowledge.
            Phonological awareness.
            Vocabulary.
            Comprehension.
            Concepts about print.
        Research question two.
            Total PELI score.
            TROLL assessment.
            Sight word assessment.
        Research question three.
        Research question four.
        Research question five.

CHAPTER FIVE
DISCUSSION
    Research Question One
        Alphabet knowledge.
        Phonological awareness.
        Vocabulary.
        Comprehension.
        Concepts about print.
    Research Question Two
        PELI.
        TROLL.
        Sight words.
    Research Question Three
    Research Question Four
    Research Question Five
    Implications for Practice
    Limitations of Research
    Future Research
    Conclusion

APPENDICES
    Appendix A Curriculum-Based Vocabulary Assessment Measure and Rubric
    Appendix B Sight Word Assessment List
    Appendix C Procedural Fidelity Measure
    Appendix D Concepts about Print Cue Card

BIBLIOGRAPHY

LIST OF TABLES

Table 1 Literacy Skills, Intervention, and Assessment
Table 2 Group Assignment for Dissertation Sample
Table 3 Classroom Characteristics
Table 4 Classroom Characteristics and Participant Demographics
Table 5 Teacher Characteristics
Table 6 Assessments
Table 7 Correlations of PELI with Other Standardized Preschool Assessments
Table 8 Daily Intervention Components
Table 9 Classroom Schedules
Table 10 Fall Independent t-test Results
Table 11 Fall and Winter Group Means and Standard Deviations
Table 12 Winter Independent t-test Results
Table 13 Pearson Correlations for Spring Dependent Variables
Table 14 Spring ANCOVA Results
Table 15 Spring Group Means, Adjusted Means, and Standard Deviations
Table 16 Spring MANCOVA Results
Table 17 Preliminary PELI Benchmark Goals and Cut Points
Table 18 Winter and Spring Percentages of Students Well Below Benchmark, Below Benchmark, and At Benchmark as Measured on the PELI
Table 19 KPALS Instruction
Table 20 DTPK Instruction
Table 21 Zoo Phonics

LIST OF FIGURES

Figure 1 Zoo Phonics Sample Letter Sequence
Figure 2 Proportion of Well Below Benchmark Students Based on the PELI
Figure 3 Proportion of Students Reaching Benchmarks on the PELI
Figure 4 Proportion of Students Who Were Well Below Benchmark on the DIBELS
Figure 5 Percentage of Students Who Were At Benchmark on the DIBELS

CHAPTER ONE
INTRODUCTION

Problem Statement

Preschool education has changed dramatically over the past several decades (Barnett, 2008; Barnett & Frede, 2011; Bayat, Mindes, & Covitt, 2010; Cabell et al., 2010; Gorey, 2001; Sylva, Cahr, Melhuish, Sammons, Siraj-Blatchford, & Taggart, 2011), with preschool enrollment growing exponentially (Barnett, 2008). In 1960, only 10% of three- to four-year-olds were enrolled in preschool (Barnett, 2008). Currently, nearly three-quarters of four-year-olds and almost half of three-year-olds are receiving preschool education (Barnett, 2008). To meet the increasing enrollment demands, options for preschool programs have also expanded. Current preschool options include tuition-based preschool, preschool in child care settings, state-funded preschool programs (e.g., Great Start Readiness Preschool in Michigan), federally funded programs (e.g., Head Start), and preschool special education.
For the past 50 years, research studies have been conducted to determine the effects of preschool education (Aos et al., 2004; Barnett, 2008; Camilli et al., 2010; Gorey, 2001; McKey et al., 1985; Nelson, Westhues, & MacLeod, 2003). Research indicates that high-quality early childhood programming has a long-term academic impact on young children into elementary school (Aos et al., 2004; Barnett, 2008; Camilli et al., 2010; McKey et al., 1985; Winter & Kelley, 2008). Preschool education has been shown to produce moderate effect sizes, positively impacting children's cognitive development (Camilli et al., 2010). The effect sizes were larger when preschool education had a direct instruction component (Camilli et al., 2010). Additionally, preschool education has proven to have a substantial impact on social-emotional development, as measured by young children's ability to handle daily life experiences (McKey et al., 1985). Research indicates that high-quality preschool education has positive effects on school progress, specifically in the reduction of grade retention, reductions in special education referral and placement, and an increase in high-school graduation rates (Aos et al., 2004; Barnett, 2008; Camilli et al., 2010). More specifically, preschool attendance has been associated with a 12% decrease in special education identification (Greenwood et al., 2011) as well as a 200% special education cost savings (Bartik, 2012). High-quality preschool education also has a long-term impact, with preschool attendees earning 7% more by the age of 26 (Bartik, 2012).

Despite the research indicating the benefits of preschool education, many children attending preschool do not receive research-based literacy instruction that is designed to impact their skill proficiency at the start of kindergarten (Carta et al., 2010). Kindergarten literacy skills are vital because they are associated with later reading achievement in both elementary school (Denton, Vaughn, & Fletcher, 2003; Diamond, Gerde, & Powell, 2008) and high school (Cunningham & Stanovich, 1997; Diamond et al., 2008). Consequently, children who lack literacy skills in kindergarten rarely catch up to their peers (Juel, 1988; Justice & Pullen, 2003). Preschool environments have the potential to impact long-term literacy achievement through the explicit provision of instruction in evidence-based literacy skills and through the implementation of efficacious teaching practices (Aos et al., 2004; Barnett, 2008; Burger, 2010; Camilli et al., 2010; Greenwood et al., 2011). National reports (National Institute of Literacy, 2008) further indicate that emergent literacy instruction should incorporate instruction in phonemic awareness, alphabet knowledge, vocabulary, comprehension, and concepts about print, presented through a literacy-rich classroom and explicit, intentional, and targeted literacy instruction (Bailet, Pepper, Piasta, & Murphy, 2009; Cabell, Justice, Konold, & McGinty, 2011; Christie, 2008; Wang & Algozzine, 2006). As Cunningham (2010) states, "The key to early literacy development is a rich, well-organized environment that can support teachers' goals for children, in other words, a high-quality literacy environment" (p. 501). Research indicates that increasing preschoolers' literacy and language skills promotes school readiness and long-term academic success (Aos et al., 2004; Baker, Kupersmidt, Voeger-Lee, Arnold, & Willoughby, 2010; Barnett, 2008; Camilli et al., 2010; McKey et al., 1985; Winter & Kelley, 2008).
The research findings have been bolstered by two federal policies. First, Good Start, Grow Smart (2002) was a federal initiative that focused on early learning in hopes of preparing children who are developmentally and academically ready for kindergarten. Good Start, Grow Smart (2002) also encouraged federal-state partnerships to develop high-quality preschools with an emphasis on literacy and language development, aligning with the kindergarten through twelfth grade state standards. Second, the importance of language and literacy was strengthened by the early childhood component of No Child Left Behind (2001), prompting states to develop preschool literacy standards and expectations. Currently, there is an increased focus on early childhood education through President Obama's Race to the Top Early Learning Challenge, which has granted states millions of dollars dedicated to improving early childhood education.

Despite the research supporting preschool emergent literacy practice and the federal platform acknowledging and mandating emergent literacy practices, many preschool programs do not implement research-based emergent literacy practices (Barnett & Frede, 2011; Justice & Pullen, 2003; Powell, Diamond, Burchinal, & Koehler, 2010). This is partly due to the gap between research and practice, that is, the difficulty of translating research into instructional recommendations that practitioners can implement (Justice & Pullen, 2003). There are several other reasons for the lack of implementation of research-based literacy practices in preschool environments. First, many preschool teachers are insufficiently trained to implement research-based literacy interventions and practices (Greenwood et al., 2011). Preschool teachers' preparatory training is often limited, with few preschool teachers securing a bachelor's degree (Greenwood et al., 2011). Second, there is a high rate of turnover in preschool staff, due to the low wages that preschool teachers tend to make (Greenwood et al., 2011). Third, there is a lack of quality, continuing professional development that is designed to increase teachers' knowledge of the best practices that should be implemented in the early childhood education curriculum (Hawken, Johnston, & McDonnell, 2005). Finally, there remains minimal conclusive research about using research-based literacy intervention programs in preschool settings (Lonigan & Cunningham, 2013; Preschool Curriculum Evaluation Research Consortium [PCER-C], 2008). When there is a lack of research surrounding efficacious preschool literacy interventions (Lonigan & Cunningham, 2013; PCER-C, 2008), school districts and personnel do not have instructional frameworks or guidelines that can inform the instructional practices of preschool teachers as they strive to improve literacy outcomes.

Study Purpose

The purpose of this study was to investigate and intervene to improve preschoolers' emergent literacy skill development in (1) alphabet knowledge, (2) phonological awareness, (3) comprehension, (4) vocabulary, and (5) concepts about print, using a class-wide, research-based literacy intervention.
The literacy intervention incorporated research-based practices that were essential to the development of emergent literacy skills, including an explicit emphasis on instruction in alphabetic knowledge (i.e., letter names and letter sounds) and phonological awareness (i.e., sound segmentation, initial sound phonemes, and rhyming), as well as shared book reading activities, which promoted comprehension, vocabulary, and concepts about print skills.

The intervention was based on the Kindergarten Peer-Assisted Literacy Strategies program (KPALS; Mathes et al., 2001), which targets alphabetic knowledge and phonological awareness. In addition, the vocabulary, comprehension, and concepts about print components of the intervention were based on Developing Talkers: PreK (DTPK; The Children's Learning Institute, 2010) intervention strategies. The hybridized intervention that was based on the integration of these two literacy programs took approximately 20-25 minutes per day. The intervention was delivered over the course of 12 weeks in daily sessions that were implemented by preschool teachers (typically four days per week). In addition, since the intervention was implemented in a preschool program located in a school district where a Multi-Tier System of Support (MTSS) was being implemented by teachers across the grade levels, the intervention was employed and evaluated for its efficacy as a tier 1 intervention.

Research Questions

The specific research questions addressed in this study included:

(1) Does implementation of a research-based literacy intervention have an effect on preschool students' (4- to 5-year-olds) emergent literacy skills, including their (a) alphabetic knowledge, (b) phonological awareness, (c) vocabulary, (d) comprehension, and (e) concepts about print?

(2) Does a research-based, class-wide (tier 1) intervention have a differential effect on the overall literacy development of intervention students compared to same-age peers who received a standard preschool literacy curriculum implemented by comparison preschool teachers?

(3) Does implementation of a tier 1 emergent literacy intervention reduce the percentage of students who are considered well below benchmark (tier 3) and below benchmark (tier 2) on the Preschool Early Literacy Indicator (Kaminski & Bravo-Aguayo, 2010) assessment?

(4) Does implementation of a tier one intervention increase the percentage of students who are at benchmark on the Preschool Early Literacy Indicator (Kaminski & Bravo-Aguayo, 2010) assessment?

(5) What impact does the implementation of a tier 1 intervention have on students' Dynamic Indicators of Basic Early Literacy Skills (Dynamic Measurement Group, 2011) scores and the potential to achieve Dynamic Indicators of Basic Early Literacy Skills (Dynamic Measurement Group, 2011) fall (beginning of the year) kindergarten benchmarks?

CHAPTER TWO
LITERATURE REVIEW

Conceptual Frameworks

During the preschool years, children are developing rapidly as they participate in various literacy experiences, which facilitate the development of emergent literacy skills, ultimately leading to conventional literacy skills (Missall, McConnell, & Cadigan, 2006). The development of emergent literacy skills is a multifaceted process in which multiple theoretical frameworks are needed to fully understand the complexity of literacy learning and development in preschool literacy environments.
The theoretical frameworks that inform this research include Holdaway's theory of literacy development (1979), emergent literacy theory (Tracey & Morrow, 2006), and social constructivism (Englert & Mariage, in press; Mallory & New, 1994; Tracey & Morrow, 2006).

Holdaway's theory of literacy development (1979). Holdaway's (1979) theory of literacy development encompasses three assumptions: (1) acquisition of literacy skills follows a natural developmental pattern; (2) there are four processes central to learning literacy; and (3) the utilization of specific teaching methods will enhance literacy development.

First, Holdaway (1979) asserts that the development of emergent literacy reflects a natural progression in literacy-rich environments, mimicking the development of oral language (Tracey & Morrow, 2006). For example, oral language development begins with adults talking to children; eventually children start babbling and imitating sounds, followed by imitating and vocalizing words, and language development continues to become more complex as children master the developmental oral language progression, ultimately understanding that utterances carry meaning (Genishi & Dyson, 2009). The process of oral language development is socially mediated and scaffolded by adults (Justice & Ezell, 1999).

Similar to oral language development, emergent literacy also shows a developmental progression based on children's participation in interactive environments with an adult language user (Tracey & Morrow, 2006). First, children observe adults engaging in literate behaviors, such as reading and writing (Holdaway, 1979). The children begin to explore these literacy behaviors by creating stories, memorizing and reciting storybooks, and scribbling to mimic writing (Justice & Ezell, 1999). Finally, as children progress and internalize the emergent literacy skills, they are able to become independent literate individuals (Holdaway, 1979).

Second, according to Holdaway (1979), there are several processes that form the foundation of literacy development, all of which are rooted in meaning-based instruction. The first process is the child's observation of literacy behaviors (e.g., being read to). For example, the child observes specific linguistic and cognitive actions that are taken by adults (e.g., page turning, tracking the lines of print, pointing to pictures), and these behaviors come to be assimilated into the child's own metalinguistic performance (Justice, Chow, Capellini, Flanigan, & Colton, 2003). In the second process, the adult and child work together to jointly participate in book reading through interactions that are rich with encouragement, motivation, and assistance. For example, the adult may invite the child to participate in the book-reading routine, stepping back to transfer control of specific aspects of the linguistic processes to the child, but stepping in to scaffold and prompt performance when the child is not able to perform the processes independently (Justice & Ezell, 1999). The third process is allowing ample opportunity for additional practice of learned skills so that the child becomes a fluent, literate individual. The adult, for example, may provide frequent opportunities for the child to reread the book with the support of adult or taped models (Justice, Kaderavek, Fan, Sofka, & Hunt, 2009). The final process is having the child perform or share their knowledge with adults and peers without the guided or scaffolded assistance that characterized earlier interactions.
Finally, the third dimension of Holdaway's theory of literacy development (1979) is that specific teaching methods enhance literacy development. Holdaway (1979) asserted that certain literacy practices facilitate literacy growth. The first aspect is developing a literacy-rich classroom. For example, providing access to a multitude of books, explicitly placing print throughout the classroom (e.g., labeling objects), and systematically embedding print in every aspect of the classroom (e.g., free play, centers, classroom routines) are instructional instances that contribute to the creation of a literacy-rich classroom (Kantor, Miller, & Fernie, 1992). The second literacy practice Holdaway (1979) espoused was engaging children in high-quality literacy practices that are socially mediated through both peer interaction and adult-directed scaffolds. Shared book reading would be an example of a socially mediated activity in which peers and adults are actively engaged, while allowing for social interactions that support young readers through prompts, gestures, and linguistic or cognitive models.

Emergent literacy theory. A second body of literature that contributes to the design and development of early literacy programs is emergent literacy theory, which builds upon Holdaway's (1979) theory of literacy development (Tracey & Morrow, 2006). The tenets of emergent literacy theory are that (1) emergent literacy begins at birth and emergent literacy development is continuous; (2) reading, writing, listening, and speaking development are all interrelated; and (3) emergent literacy theory emphasizes the role of the home environment in literacy development (Tracey & Morrow, 2006).

Emergent literacy theorists believe literacy development begins at birth and is continuous (Tracey & Morrow, 2006). Furthermore, ". . . children's development in the areas of listening, speaking, reading, and writing are all interrelated . . . The interrelatedness of these skills suggests that positive growth in one area of literacy development will have a beneficial effect on the other areas of development" (Tracey & Morrow, 2006, p. 85). This interrelated development impacts how researchers and educators conceptualize emergent literacy instruction. Emergent literacy instruction needs to encompass multiple aspects of emergent literacy (i.e., alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print) and not just select components. "Emergent literacy theory underscores the finding that although many factors are important to children's reading success, including parents' education, occupation, and socioeconomic level, the quality of the literacy environment correlates most closely with children's early literacy ability" (Tracey & Morrow, 2006, p. 86).

Emergent literacy skills, as explained by Tracey and Morrow (2006), are fostered through rich literacy and language interactions that occur at home and at school. Children who are afforded rich literacy experiences at home come to school with an understanding of the basic structure of narrative stories as a result of their prior book-reading interactions with their parents. Through their prior participation in discursive events centered on answering and asking meaning-making questions, they know how to comprehend and converse about narrative texts (Justice et al., 2003). However, many young children who are at risk for reading failure do not have rich literacy experiences at home (Justice et al., 2003).
Such children often lack linguistic knowledge about sounds and words in print, and they do not have highly developed vocabularies or a fundamental understanding of how to comprehend or discuss story narratives (Cabell, Justice, Zucker, & Kilday, 2009; Koutsaftas, Harmon, & Shelley, 2009; Missall, McConnell, & Cadigan, 2006). These early literacy discrepancies put the children at great risk for later reading failure (Cabell et al., 2009; Koutsaftas et al., 2009; Missall et al., 2006). For these students, it is essential that they receive literacy-rich experiences in preschool settings that are arranged to produce the best literacy outcomes in elementary school and beyond.

Social constructivism. Emergent literacy can also be examined through a social constructivist lens. Social constructivism is rooted in the belief that knowledge is created through explicit, guided experiences and interactions (Tracey & Morrow, 2006). A premise of social constructivism is the inextricable link between mental activity and the social context. Vygotsky, a pioneer of social constructivism, formed his theory on the following: (1) learning through social interactions, (2) the use of signs through which children learn the process of semiotic mediation, (3) the zone of proximal development and scaffolding, and (4) apprenticeship in communities (Englert & Mariage, 2011; Tracey & Morrow, 2006).

First, social interactions develop higher-level skills, which ultimately foster proficient, literate individuals (Englert & Mariage, in press; Tracey & Morrow, 2006). Social interactions are an impetus to development. These interactions with adults, peers, parents, and teachers serve as the catalyst for development, via an orchestration and calibration of moves that progress in a seamless fashion from child-directed experiences to adult-mediated experiences, each supported and informed by the other, and vice versa. Simultaneously, the social context that promotes the acquisition of emergent literacy skills is one in which children are positioned to be active members and participants in a literacy community, and where children are encouraged to actively observe, appropriate, participate in, and transform literacy practices through the social exchange and development of new knowledge (i.e., student-to-student, teacher-to-student, and student-to-teacher).

Au, as cited in Tracey and Morrow (2006), summarizes the impact of environment on literacy development: "School literacy is seen as a social process, affected not only by present but historical circumstances. Learning to read cannot logically be separated from the particular milieu in which it takes place. When children learn to read, or fail to learn to read, they do so in a particular social, cultural, and historical environment. Their success or failure in reading cannot be understood apart from that environment." (p. 106)

The second tenet of social constructivism is semiotic mediation. Semiotic mediation is the process of developing connections between cognitive functions, symbolic tools, and discourse (Tracey & Morrow, 2006). The development of reading requires the development of a secondary academic discourse and a level of metalinguistic knowledge about the symbol system and the arrangement of words and ideas in texts that is not always part of a natural literacy-learning environment that features intuitive learning (Englert & Mariage, in press; Gee, 1990).
For example, students must develop an understanding of how to analyze oral and printed words using their knowledge of the phonemic and graphemic properties of words (Cabell et al., 2009; Koutsaftas et al., 2009; Missall et al., 2006), as well as how to identify the meaning relations in a story based on their understanding of story structure and narrative elements (Justice et al., 2003). In addition, students must have access to the self-regulating talk that might be first employed by parents and teachers on the social or inter-psychological plane, but that students then acquire and turn inward to direct their own literacy performance on the intra-psychological or independent plane (Justice et al., 2003; Justice & Ezell, 1999). Through participation in conversations about words in books and book meanings, students come to understand the nature of the talk, language, structures, and the symbol system that characterizes social interactions about and with print, and use that knowledge to mediate their subsequent interactions with language (oral and written), other language users, and printed symbols in words, books, and stories (Mallory & New, 1994).

The third tenet of Vygotsky's theory of social constructivism is the zone of proximal development (ZPD) (Englert & Mariage, in press; Mallory & New, 1994; Tracey & Morrow, 2006). The ZPD is defined as the distance between the child's actual developmental level, as determined by independent problem solving, and the level of potential development attained with adult guidance or in collaboration with more capable others (Englert & Mariage, in press). Essentially, the problems presented within environmental contexts are challenging enough to foster interest among learners, but sufficiently complex that the children are unable to complete the task independently, requiring the adult to provide temporary scaffolds which support the students' learning and acquisition of knowledge so that they can participate at a more advanced level (Englert & Mariage, in press; Mallory & New, 1994).

Theorists have proposed that the zone of proximal development is bridged when the child participates in mediating environments where there is an active collaboration and co-participation by the child and adult in the entirety of the literacy activity. In their moment-to-moment interactions, the adult or teacher appraises the child's current and evolving state of knowledge or cognitive functioning, and uses that knowledge to provide the necessary language or cognitive supports through just-in-time adjustments to help the child achieve what lies just beyond his/her current level of independent attainment (Englert & Mariage, in press). For example, if the goal is reading and discussing the story, the teacher might read the portions of the text that lie beyond the child's current level of performance, but ask the child to supply the linguistic or cognitive actions related to book reading that lie just beyond the child's developmental level of performance. As opposed to modifying or diluting the goal of book reading, therefore, the adult and child co-participate and share the entirety of the linguistic and cognitive functions associated with book reading and thinking. Then, through a gradual release of responsibility, the responsibility for performing the linguistic and cognitive operations is transferred from the adult to the child to ensure greater literacy success and independence over time (Englert & Mariage, in press; Tracey & Morrow, 2006).
In this way, the apprenticeship model thrives in the creation of zones of proximal development where the child participates in a complex cognitive process with the guided and graduated assistance of adults who mediate literacy performance through the moment-to-moment provision and removal of cues, prompts, models, and supports that are elegantly attuned to the child's developmental levels (Englert & Mariage, in press; Mallory & New, 1994; Tracey & Morrow, 2006).

The subsequent section delves into multiple areas of research, highlighting the key findings as well as how the research-based practices are informed by the theories listed above. First, preschool education and the impact of preschool curricula will be examined. Second, I consider how preschool education is being influenced by a reform-oriented movement that features early literacy intervention and frequent progress monitoring in a multi-tiered systems of support (MTSS) framework. MTSS is shaping when and how school districts monitor students' literacy progress, with implications for the delivery and evaluation of preschool literacy programs. Third, I examine and review the variables that are most closely aligned with effective literacy interventions and assessments implemented in early literacy programs, including alphabet knowledge, phonological awareness, comprehension, vocabulary, and concepts about print. The final section of the literature review examines how all of the previous literature and theory informs the proposed development, implementation, and evaluation of preschool emergent literacy interventions.

Preschool Education

Despite the known benefits of early education, preschool education is not universally available (Greenwood, 2009; Greenwood et al., 2011). The lack of universal preschool programs forces educational and governmental agencies to address the gap by offering preschool programs that target underserved and at-risk populations. These preschool programs include federally funded programs (e.g., Head Start), state-funded programs (e.g., Great Start Readiness Preschool), tuition-based programs, preschool special education, and preschool offered through child care facilities (Greenwood et al., 2011). Each of these programs brings with it unique guidelines, entry criteria, goals for education, and teacher accreditation standards (Greenwood et al., 2011; Mashburn, 2008). Furthermore, there is little consensus among the programs' developers about how and what to teach in preschool (Greenwood et al., 2011; Mashburn, 2008).

Traditionally, the primary goal of preschool education was to provide children with social-emotional opportunities in which they were exposed to math, science, social studies, arts, and literature through play-based instruction, allowing children to explore freely (Pretti-Frontczak, 2011). As the importance of preschool education has been highlighted by empirical research and translated into federal policies, traditional preschool practices have been questioned, bringing an increased focus on the content of instruction and the manner in which content is taught in preschool environments (Pretti-Frontczak, 2011). As the landscape surrounding preschool education has changed, the debate about developmentally appropriate practices in preschool settings continues to create a division between theoretical and practical implementation decisions (Greenwood et al., 2011; Pretti-Frontczak, 2011).
Some practitioners believe instruction should be play-based, while other practitioners believe in the provision of explicit instruction that is developmentally appropriate (Greenwood et al., 2011; Lonigan & Cunningham, 2013). Despite the qualitative differences in explicit and play-based approaches to preschool education, the National Association for the Education of Young Children (NAEYC) put forth recommendations for developmentally appropriate instruction that support the intentional teaching of skills to young children (NAEYC, 2009). According to the NAEYC (2009), "Developmentally appropriate teaching practices provide an optimal balance of adult-guided and child-guided experiences" (p. 17). Epstein (2007), as cited in NAEYC (2009), further defines adult-guided and child-guided experiences: "Adult-guided experience proceeds primarily along the lines of the teacher's goals, but is also shaped by the children's active engagement; child-guided experience proceeds primarily along the lines of children's interests and actions, with strategic teacher support" (p. 17). NAEYC's position pushes teachers away from the traditional model of preschool instruction that features discovery and play as the only developmentally appropriate manner of instruction and redirects teachers to models that emphasize the explicit teaching of specific skills to preschool children while maintaining developmentally appropriate practices (Lonigan & Cunningham, 2013; Pretti-Frontczak, 2011).

Much of the debate stems from a narrow view of social constructivism as the theoretical framework in which many preschools were developed (Snow, 2011). Original conceptualizations of this model in preschool settings featured the constructivist component of the framework, where students were viewed as active agents who constructed knowledge through their own learning and self-directed activities, directing both the course and content of what was learned in preschool. Currently, the field is beginning to grapple with the view of social constructivism in preschool settings as representing a blend of child-initiated activities that are informed and mediated by adults in appropriate zones of proximal development (Lonigan & Cunningham, 2013; Snow, 2011). This broadening view of social constructivism has researchers evaluating evidence-based practices in preschool, specifically curricula (Lonigan & Cunningham, 2013).

Preschool curricula. There is a lack of research on effective class-wide preschool literacy curricula (PCER-C, 2008). This is startling given the fact that "In 2005, nearly half (47%) of all three to five-year-old children from low-income families were enrolled in part-day or full-day prekindergarten programs" (PCER-C, 2008, p. 32). The Preschool Curriculum Evaluation Research Consortium (PCER-C; 2008) conducted a large-scale, nationwide study to evaluate the effectiveness of currently available and widely used preschool curricula. The PCER-C project (2008) investigated the impact of 14 different preschool curricula on both student and classroom-quality outcomes. The study included 2,911 children from low-income families and 315 preschool classrooms, with random assignment at the classroom level. The mean age of the children was 4.6 years at baseline and 6.1 years at follow-up data collection.
The performance of the preschool students at these sites was evaluated on several key dimensions: (1) classroom-quality outcomes, which measured factors such as teacher-child interaction and instructional practices; (2) student-level outcomes (including early reading skills, phonemic awareness, language development, early mathematical knowledge, and behavior); and (3) follow-up data on kindergarten assessments.

The PCER-C project (2008) concluded that eight preschool curricula had some impact on classroom-quality outcomes, which measured teacher-child interactions, quality of the environment, and instructional practices. Classroom quality was measured through scales administered via multiple observations, teacher interviews, and parent interviews. The curricula that impacted classroom quality included: Bright Beginnings (Pellin & Edmonds, 2001), Creative Curriculum (Dodge, Colker, & Heroman, 2002), Creative Curriculum with Ladders to Literacy (Notari-Syverson, O'Connor, & Vadasy, 1998), Curiosity Corner (Success for All Foundation, 2003), DLM Early Childhood Express (SRA/McGraw-Hill, 2003a) supplemented with Open Court Reading Pre-K (SRA/McGraw-Hill, 2003b), Doors to Discovery (Wright Group, McGraw-Hill, 2001), Let's Begin with the Letter People (Abrams & Company, 2000), and Literacy Express (Lonigan & Farver, 2002).

Additionally, the PCER-C project (2008) evaluated student-level outcomes, which included language, reading, phonological awareness, mathematics, and behavior. When evaluated against these variables, 12 of the curricula had no impact on student-level outcomes, whereas two curricula had some impact on student-level outcomes in preschool. The two curricula were DLM Early Childhood Express (SRA/McGraw-Hill, 2003a) with Open Court Reading Pre-K (SRA/McGraw-Hill, 2003b), and Pre-K Mathematics (Klein, Starkey, & Ramirez, 2002) supplemented with DLM Early Childhood Express Math software (Clements & Sarama, 2003).

The aforementioned research indicates a need for additional curricula that impact both classroom quality and student-level academic outcomes. Determining preschool curricula that facilitate student outcomes is vital because literacy skills at kindergarten entry are predictive of later literacy skills (Juel, 1988; Justice & Pullen, 2003; Spencer, Spencer, Goldstein, & Schneider, 2013). Furthermore, the gap in literacy skills that exists at the end of kindergarten is likely to persist through school (Juel, 1988; Justice & Pullen, 2003; Spencer et al., 2013). If we can impact emergent literacy skills during preschool, we will likely have an impact on the students' long-term literacy trajectory (Bayat et al., 2010; Greenwood, 2009; Greenwood et al., 2011).

Multi-Tiered Systems of Support

In addition to the federal initiative to improve preschool education, there is a second initiative that prompts schools and educators to pay close attention to the instructional efficacy of a literacy intervention based on student outcomes. Multi-Tiered Systems of Support (MTSS) is a multi-tiered, problem-solving model linking assessment to instruction to address the needs of all learners (Bayat et al., 2010; Denton et al., 2010; Fuchs, Fuchs, & Stecker, 2010; Gettinger & Stoiber, 2007; Jackson, Pretti-Frontczak, Harjusola-Webb, Grisham-Brown, & Romani, 2009; VanDerHeyden & Snyder, 2006; VanDerHeyden, Snyder, Broussard, & Ramsdell, 2007; Walker, Carta, Greenwood, & Buzhardt, 2008).
Commonly, the terms MTSS and response to intervention (RtI) are used interchangeably (National Association of State Directors of Special Education, 2006). However, the concept of MTSS is broader than RtI because it encompasses all aspects that impact systems change, in addition to the features of RtI, which are data-based decision making, multiple intervention tiers, and a problem-solving approach (National Association of State Directors of Special Education, 2006). In an MTSS framework, there are multiple tiers or levels of interventions that are recommended by proponents of MTSS in the literature (Bayat et al., 2010; McMaster, Kung, Han, & Cao, 2008). The primary goal of MTSS is prevention of reading problems by implementing evidence-based and timely interventions in the regular classroom (Bayat et al., 2010; Fuchs & Fuchs, 2005; Fuchs et al., 2010; Fuchs, 2003; McMaster et al., 2008; VanDerHeyden & Snyder, 2006). At the first tier (tier 1) in the MTSS framework, all students receive research-based instruction that is implemented by their general classroom teachers, while academic progress is monitored using benchmark assessments (Bayat et al., 2010; Fuchs & Fuchs, 2005; Fuchs et al., 2010; Fuchs, 2003; McMaster et al., 2008; VanDerHeyden & Snyder, 2006).

If, however, students fail to reach expected benchmarks, then they receive additional and more concentrated interventions (tier 2) that are delivered in small group or individualized instructional settings (Bayat et al., 2010; McMaster et al., 2008; Torgesen, Wagner, & Rashotte, 1997; VanDerHeyden & Snyder, 2006). Tier 2 interventions are research-based, and supplement rather than replace the tier 1 intervention. Students within tier 2 placements receive additional instruction and progress monitoring in deficit areas, without referral to special education (Bayat et al., 2010; Xu & Drame, 2008). Finally, the students who make insufficient progress in tier 1 and tier 2 receive intensive, individualized literacy instruction (tier 3) (Bayat et al., 2010; McMaster et al., 2008; Xu & Drame, 2008). Students who receive more intensive services in tier 3 programs obtain all tier 1, tier 2, and tier 3 supports with more concentrated instruction and frequent progress monitoring. Students in tier 3 can be considered for a special education referral based on the data from benchmark and progress monitoring assessments (Bayat et al., 2010; Xu & Drame, 2008).

MTSS holds promise for remediation of learning challenges (Jackson et al., 2009). The field of early childhood education is attempting to "conceptualize and interpret RtI within the context of the preschool program" (Jackson et al., 2009, p. 425). The application of an MTSS framework in preschool shows promise. However, due to the structure of preschool environments, there are inherent benefits and challenges.

Benefits of an MTSS approach in preschool settings. The implementation of an MTSS approach in preschool settings has many benefits. The primary goal of MTSS within a preschool context is to intervene at earlier ages in hopes of mitigating a potential reading or learning disability (Bayat et al., 2010; Coleman, Roth, & West, 2009; Greenwood, 2009; Greenwood et al., 2011; VanDerHeyden, Snyder, Broussard, & Ramsdell, 2007). Furthermore, research indicates time and time again that learning begins at birth and that the preschool years offer great opportunity to provide quality early learning experiences (Greenwood, 2009; Greenwood et al., 2011; Heckman, 2000).
These targeted early learning experiences serve as a potential avenue to close the achievement gap, preparing the youngest learners to be ready for kindergarten. Heckman (2000), as cited in Greenwood et al. (2011), summarizes the potential impact of quality early learning experiences: "Recent research in psychology and cognition demonstrates how vitally important the early preschool years are for skills formation . . . Early learning begets later learning and early success begets later success" (p. 5). An MTSS framework has the potential to greatly enhance the quality of early instruction, hence facilitating more positive social and academic outcomes for students. Additionally, an MTSS framework provides preschool students with social-emotional experiences, especially students who are at risk for academic failure (Bayat et al., 2010; Greenwood et al., 2011; VanDerHeyden et al., 2007). The use of an MTSS framework in preschool settings allows teachers and staff to explicitly target students' early literacy skills (Bayat et al., 2010; Greenwood et al., 2011; VanDerHeyden et al., 2007). Finally, the use of an MTSS model enhances the inclusion opportunities for children with disabilities, granting access to peer models and explicit, targeted interventions (Greenwood et al., 2011).

Challenges of an MTSS approach in preschool settings. Despite the many potential benefits, there are also challenges inherent in the implementation of preschool MTSS. The primary challenge is the lack of a universal preschool system (Greenwood, 2009; Greenwood et al., 2011). Specifically, preschool programs within the United States consist of a variety of programs, including federally funded programs (e.g., Head Start), state-funded programs (e.g., Michigan's Great Start Readiness Preschool), tuition-based programs, and daycare-based programs (Greenwood et al., 2011). Each of these programs operates under different policies, has different funding sources, and operates for varying amounts of time (days per week and time per day) (Greenwood et al., 2011). The myriad of stakeholders involved in Head Start, state-funded preschool, and tuition-based preschool creates systemic challenges in the creation of more uniform policies, procedures, and selection criteria.

Another set of challenges arises when evaluating intervention practices implemented at all levels (tier 1, tier 2, tier 3) of instruction. First, there is debate about the content taught in preschool settings and which instructional practices are considered developmentally appropriate (Greenwood et al., 2011). Not only are how and what to teach a challenge, but there is also a lack of consensus about which of the many commercially available curricula have a strong evidence base (Lonigan & Cunningham, 2013; Mashburn, 2008; PCER-C, 2008). The lack of evidence-based interventions for the design and implementation of tier 1 class-wide programs presents a monumental challenge to the achievement of one of the founding principles of MTSS. At the same time, there are minimal research-based tier 2 and tier 3 interventions (Buysse & Peisner-Feinberg, 2009; Greenwood, Carta, Goldstein, Kaminski, & McConnell, 2009; Greenwood et al., 2011), further complicating the implementation of research-based tiered interventions. More tier 1 research is needed, as it provides the foundation from which the most appropriate tier 2 and tier 3 strategies will be further developed.
Research on preschool MTSS. There is a lack of research focusing on preschool tier 1 instruction, specifically language and emergent literacy instruction. According to VanDerHeyden, Snyder, Broussard, and Ramsdell (2007), "The ways in which RtI can be linked to efforts to support preschool children with and without disabilities and children with known risk factors has yet to be widely discussed or examined empirically" (p. 233). Despite the paucity of research, there are a few studies that have examined preschool literacy instruction within an MTSS framework. The few studies that do exist focus on a variety of topics: curriculum-based measurement, professional development, and a descriptive study of tier 1 instruction (Buysse & Peisner-Feinberg, 2009; Carta et al., 2010; VanDerHeyden et al., 2007). These efforts are described in the section that follows.

First, VanDerHeyden et al. (2007) conducted a study examining the utility of curriculum-based measurement in a preschool MTSS decision-making framework. The study revealed that curriculum-based measurement can be done in preschool and can lead to data-based instructional decisions, which in turn led to specific interventions. The study was conducted with 4-year-old children enrolled in either an urban Head Start program or a rural public preschool. There were two classrooms per setting, one of which was randomly selected as the treatment group, while the second classroom was assigned to the control group. Both settings employed The Creative Curriculum, a preschool curriculum that guides instructional content based on the children's needs and interests (Dodge et al., 2002). Several child-level measures were used to gather data targeting emergent literacy development, including letter identification, alliteration, rhyming, letter sound knowledge, and initial sound fluency. The researchers found that curriculum-based measurement enhanced the accuracy of decision making about children's literacy skills, helped teachers select targeted literacy interventions, and led to the treatment group's literacy performance surpassing that of the control group.

An additional study was conducted by Buysse and Peisner-Feinberg (2009) and outlined in Greenwood et al. (2011). This study included 300 four-year-old children in 24 different classrooms. The researchers examined the role of professional development in implementing an MTSS framework, the utility of a universal screener, and the effectiveness of tier 2 interventions. Unfortunately, the authors did not delineate the screener or the intervention used, and the results from this study were not outlined in either article; instead, Buysse and Peisner-Feinberg (2009) presented the study as a theoretical framework for preschool MTSS implementation. Greenwood et al. (2011) referenced the study as a current investigation of preschool MTSS, but the ongoing nature of the research precluded the reporting of the research findings. As a consequence, the study failed to fully chronicle the impact or implementation of tier 1 instruction in a preschool MTSS model.

Carta, Greenwood, and Atwater (2010) conducted a descriptive study of preschool classrooms' tier 1 literacy instruction. The study evaluated 68 classrooms with a total of 840 children. Preschool classrooms included state-funded preschool, Head Start, Title 1, and tuition-based preschool. Student-level data collection included standardized assessments, observation data, and literacy screener measures.
Classroom-level data collection was also conducted via observational checklists. The Carta et al. (2010) study found higher proportions of students who needed tier 2 or tier 3 support in preschool classrooms than in kindergarten through fifth grade classrooms. Additionally, as in other educational environments, children who had more risk factors were more likely to need tier 2 and tier 3 supports. Finally, the data reflected the fact that there was a higher incidence of tier 2 and tier 3 children in preschool because of their entry-level characteristics on the literacy assessments (Carta et al., 2010).

At a classroom level, the Carta et al. (2010) study found that teacher support for literacy and language varied considerably from classroom to classroom. Teachers who focused on literacy skills saw sizable growth in student academic achievement. However, teacher behaviors that provided explicit literacy and language instruction were infrequent, with only 16% of the instructional time dedicated to a literacy focus, equating to less than 30 minutes in a three-hour class.

The lack of research concerning the effectiveness of tier 1 instruction is evident and was identified as a need at the 3rd Annual Preschool Response to Intervention Summit (Santa Ana Pueblo, New Mexico, September 26-27, 2011). Additionally, much of the research surrounding preschool MTSS concerns measurement and tier 2 and tier 3 interventions (Preschool Response to Intervention Summit, 2011). Prior to implementing tier 2 and tier 3 interventions, research needs to be conducted to implement and to evaluate the impact of research-based tier 1 literacy interventions. In the next section, I review the essential literacy components that must be incorporated into tier 1 literacy interventions, examining the tenets of emergent literacy, which form the foundation of MTSS in preschool settings.

Emergent Literacy

The National Early Literacy Panel Report (2008) indicates there are several literacy skills that are strong indicators of a child's later reading success. These skills include alphabet knowledge, phonological awareness, rapid naming of letters and digits, writing/name writing, and definitional vocabulary (oral language). The National Early Literacy Panel Report (2008) further suggests that the relationships between the above variables and later reading success are relatively stable and reliable. Other moderate indicators of later reading success include concepts about print, print knowledge, visual processing, and expressive and receptive vocabulary (oral language) (National Institute of Literacy, 2008).

The statistics concerning early reading skill acquisition and the long-term impact on school achievement are startling. A child who is not able to read by the end of second grade has only a 25% chance of catching up by the end of elementary school (Gettinger & Stoiber, 2007).
In the next sections, the literacy skills that are considered foundational in the design of PreK literacy interventions are reviewed, given their importance in predicting future reading performance.

Alphabetic principle. A child's alphabetic knowledge is vital to long-term reading outcomes and is one of the single best predictors of later literacy success (Dion, Brodeur, Gosselin, Campeau, & Fuchs, 2010; Elliott & Olliff, 2008; Kim, Foorman, Petscher, & Zhou, 2010; Phillips & Piasta, 2013; Piasta & Wagner, 2010). The National Early Literacy Panel Report (2008) indicates alphabet knowledge has a strong correlation (.50) with conventional literacy skills (reading, comprehension, and writing). Students who encounter difficulty with letter names are more likely to encounter reading problems in later grades and to be diagnosed with learning disabilities (Piasta & Wagner, 2010). Entering kindergarten with limited knowledge of letter names and sounds correlates with weaker word recognition abilities at the end of first grade (Dion et al., 2010). The NAEYC and International Reading Association put forth a joint position statement recognizing the importance of learning letter names and sounds during preschool so that students enter kindergarten as prepared as possible (NAEYC & IRA, 1998).

Share (2004) argues that letter name knowledge emerges prior to letter sound knowledge and that letter name knowledge makes it easier for children to develop letter-sound knowledge. Share (2004) investigated letter name and sound acquisition with Israeli children who were not familiar with English letter names and sounds. Forty-six children were paired based on pre-test assessments and randomly assigned to either treatment or control conditions. There were ten 10-minute sessions, with two to three sessions per week. The experimental group learned six letter names and sounds, which were paired with a symbol instead of the actual letter. This choice was made to ensure accurate measurement of learning because Latin letters are common in Israeli culture. The control group learned real-word names for the symbols. The statistical analyses indicated that students in the experimental group were at an advantage when learning letter sounds because of the explicit letter name instruction.

Another study, conducted by Bunn, Burns, Hoffman, and Newman (2005), taught letter identification using incremental rehearsal. This study targeted a preschooler who had difficulty mastering letter name knowledge. The incremental rehearsal strategy was used to teach letter names by incorporating new content into already mastered content. The Bunn et al. (2005) study found incremental rehearsal to be an effective way to teach letter names: the subject learned 22 of the 26 letters in three weeks of instruction. This study was characterized by multiple weaknesses (design, lack of specificity); however, it demonstrates that explicit letter name instruction leads to increases in letter name knowledge.

Many studies in emergent literacy do not target alphabet knowledge in isolation; instead, alphabet knowledge and phonological awareness are targeted together (Schneider, Roth, & Ennemoser, 2000). Combining literacy skills in interventions is based on Hatcher, Hulme, and Ellis (1994), who proposed the "phonological linkage hypothesis." This hypothesis was based on the premise that instruction of skills in isolation is less effective than integrating emergent literacy skills (Hatcher, Hulme, & Ellis, 1994).
Hatcher et al. (1994) tested the theory by developing a reading-only intervention group, a phonological-training-only group, a combined reading and phonological training group, and a control group. They found the combined reading and phonological training group surpassed all other groups in word identification, word reading, comprehension, and spelling. The following section outlines phonological awareness acquisition, research on phonological awareness taught in isolation, and the combination of alphabet knowledge and phonological awareness instruction.

Phonological awareness. In addition to the alphabetic principle, a child's phonological awareness skills are critical for long-term literacy outcomes (Kim et al., 2010), demonstrating a moderate relationship (.40) with conventional reading skills (National Institute of Literacy, 2008). Phonological awareness refers to the sublexical structure of oral language (Justice & Pullen, 2003). This involves the ability of students to segment spoken words into phonemes or sounds, and to rearrange, rhyme, or substitute sounds to construct other words. Phonological awareness includes alliteration, blending, phoneme identification, rhyming awareness, segmenting, and word awareness (Justice & Pullen, 2003). The development of phonemic awareness skills is typically viewed on a continuum, developing from the relatively simpler tasks of segmenting and blending larger segments of words (e.g., compound words and syllables) and progressing to the more difficult tasks involving the manipulation, isolation, and construction of smaller segments of words, such as individual sounds in words (National Institute of Literacy, 2008).

Bradley and Bryant (1985) were the pioneers in conducting research on effective phonological awareness instruction (Bailet et al., 2009; Schneider et al., 2000). Bradley and Bryant (1985) investigated sound categorization instruction (e.g., "What word doesn't belong: hat, teeth, hop?") with five- to seven-year-olds who were struggling. Results showed that learning to categorize sounds in words in combination with manipulating the corresponding plastic letter tiles produced better results in reading and writing than practicing sound categorization in isolation.

It is widely accepted that best practice in emergent literacy combines both phonological awareness skills and alphabetic principle skills (Bailet et al., 2009). Developing phonological awareness skills fosters the development of letter name and sound knowledge, and vice versa (Lonigan, 2006). To facilitate maximum development of emergent literacy skills, teaching phonological awareness and letter naming simultaneously is preferable. One example is research conducted by Justice, Chow, Capellini, Flanigan, and Colton (2003). This study examined the effect of a six-week literacy intervention, administered twice weekly for 30 minutes each session. The experimental group received explicit literacy instruction focused on name writing, reciting the alphabet, and phonological awareness, with an emphasis on rhyming, syllable segmentation, and initial sound fluency. The comparison intervention focused on shared storybook reading. The experimental group, which received the targeted literacy instruction, demonstrated superior alphabet knowledge and phonological awareness skills relative to the control group. Furthermore, Bailet, Pepper, Piasta, and Murphy (2009) conducted an emergent literacy intervention study in which the intervention lasted nine weeks, with two 30-minute lessons per week.
The intervention targeted letter names and sounds, syllable counting and segmentation, rhyming, alliteration, and onset-rime. The intervention was administered to two groups of participants in a delayed-onset design. Children who received the intervention made significant gains in emergent literacy skills, specifically rhyming and alliteration.

The present study aims to encompass multiple components of an effective literacy intervention, including not only alphabet knowledge and phonological awareness but also vocabulary, comprehension, and concepts about print. When targeting emergent literacy skills, it is paramount to include multiple emergent literacy domains to maximize student growth in all areas (National Institute of Literacy, 2008). The subsequent sections evaluate vocabulary, comprehension, and concepts about print instruction and research.

Vocabulary. Vocabulary encompasses semantic, syntactic, and conceptual knowledge, and is linked to later reading proficiency (Whitehurst & Lonigan, 1998). The ability to contextualize unknown vocabulary is key to the comprehension of text, and vocabulary is built upon a child's oral language abilities (Whitehurst & Lonigan, 1998). Vocabulary development begins years before preschool and is impacted by environmental factors (Hart & Risley, 1995). Hart and Risley's (1995) seminal study examined vocabulary development and found that children from upper-class homes, working-class homes, and welfare homes were exposed to 11.2 million, 6.5 million, and 3.2 million words, respectively. These differences are vast and show the need to focus on vocabulary learning in preschool classrooms, especially for at-risk populations.

Vocabulary instruction in preschool has not been extensively researched (Justice, Bowles, Pence Turnbull, & Skibbe, 2009; Tuckwiller et al., 2010). The emerging research has shown that vocabulary instruction can yield powerful effects (Neuman, 2011). However, vocabulary instruction in preschool classrooms is often inadequate, with no targeted, rich vocabulary instructional opportunities for young learners (Neuman, 2011). Additionally, ". . . normal preschool read alouds contain an average of 0.94 vocabulary explanations" (Zucker, Solari, Landry, & Swank, in press). Despite the troubling statistics, research has shown that remediation of language deficits by age three or four reduces the likelihood of later reading difficulties (Justice et al., 2009).

Tuckwiller, Pullen, and Coyne (2010) conducted a vocabulary intervention study to address the paucity of vocabulary intervention studies at the kindergarten level. The study consisted of 92 kindergarteners from six classrooms. The students were selected for the study based on a predetermined cutoff score. The intervention consisted of a class-wide shared storybook reading session, with two follow-up sessions to reinforce the vocabulary concepts in the story; one story was read twice with two follow-up sessions. The study found small effects for the treatment group in vocabulary acquisition.

Silverman and Crandell (2010) completed a correlational study examining the relationship between teachers' vocabulary practices and children's vocabulary development. There were 244 participants from 16 kindergarten teachers' classrooms.
The results indicated that certain teaching and learning practices led to better vocabulary outcomes, specifically acting out or illustrating words, analyzing words, defining words in rich contexts, using the word in multiple contexts, and word study.

Comprehension. In addition to building vocabulary, successful readers not only decode the letters on the page to read words but must also comprehend the words being read to make meaning of the text (Zucker et al., in press). Preschool children who present with weak oral language and vocabulary skills are more likely to have reading comprehension difficulties in later grades (Cabell et al., 2011; Zucker et al., in press). In order to comprehend text, a reader must make inferences, activate prior knowledge, and make meaning (Cabell et al., 2011; Zucker et al., in press).

Comprehension and vocabulary instruction for preschoolers can be achieved through an apprenticeship model of shared book reading (Hindman, Connor, Jewkes, & Morrison, 2008; Zucker et al., 2011). Shared book reading is a dialogic activity in which the adult and child interact with the pictures, concepts, and words in the book. Through these interactions, the adult brings attention to the print, words, vocabulary, letters, structure, and the visual and phonological qualities of the print, demonstrating book handling skills and building comprehension. Furthermore, three specific literacy skills that develop in young readers through shared book reading (alphabet knowledge, print awareness, and phonological awareness) produce moderate effect sizes in predicting reading comprehension (0.48, 0.48, and 0.44, respectively) (National Institute of Literacy, 2008).

Hargrave and Senechal (2000) conducted a study of the impact of shared book reading with 36 preschoolers who had poor expressive language abilities. Teachers administered the intervention, with a supplemental parent component. The intervention involved teachers and parents asking targeted questions and responding to the child's interests throughout the story, and consisted of ten books, each read twice. The outcome measures included teacher-level behavior checklists, standardized vocabulary assessments, and a curriculum-based vocabulary measure. The parent component was measured by parent report of having read or not read; there was no examination of specific parent-level shared reading behaviors. The results indicated that teacher behavior changed significantly and that the intervention had a small effect on children's vocabulary development on the curriculum-based measure but not on the standardized measure.

Print awareness (Concepts about print). Finally, print awareness involves a child's ability to understand the various components and features of print (Whitehurst & Lonigan, 1998). A student must understand the conventions of print in order to fully comprehend the text when reading (Whitehurst & Lonigan, 1998). The conventions of print refer to reading left to right, reading front to back, the difference between the text and the pictures, the meaning of punctuation, and spaces between words (Whitehurst & Lonigan, 1998). A child's ability to understand various components of print may predict later reading comprehension abilities (0.48 effect size) (Whitehurst & Lonigan, 1998). Justice and Ezell (2000, 2002) have been pioneers in researching the effectiveness of print awareness interventions.
The 2000 study evaluated the impact of a print awareness intervention administered within the home environment. Results from parent behavior checklists indicated that parents in the treatment group increased print and non-print references to the book compared to the control group. Justice and Ezell conducted a classroom study in 2002, which evaluated the impact of a print awareness intervention in 30 Head Start classrooms. The treatment group received shared reading sessions that were print focused, while the control group received shared reading sessions that focused on the pictures within the text. The results showed that students in the treatment group outperformed students in the control group in knowledge about words in print, print recognition, and alphabet knowledge. An additional classroom study by Justice and Ezell was conducted with 59 teachers assigned to two conditions over a 30-week period. The treatment group had 120 read-aloud sessions with specific print targets, whereas the control group read the same 120 books in a "business as usual" manner, with the teacher determining which aspects to focus on. All studies found a statistically significant difference between children who received the print awareness intervention and those children who did not.

McGinty, Breit-Smith, Fan, Justice, and Kaderavek (2011) measured the impact of dose (the number of print strategies per session) and dose frequency (the number of sessions per week) on print knowledge acquisition. The study included 367 children who were randomly selected for the 30-week intervention. The results indicated a statistically significant difference in children's print knowledge when either the dose or the dose frequency was increased. However, when both the number of strategies per session and the number of sessions per week were increased simultaneously, the results were not as positive. This finding indicates a need to provide explicit, targeted instruction in print knowledge.

Preschool Emergent Literacy Intervention

As the literature has shown, preschool emergent literacy development encompasses a multitude of skills, including alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print, and these skills are essential to children's later reading proficiency (National Institute of Literacy, 2008). However, commercially available preschool curricula typically do not contain specific emergent literacy lessons and strategies; instead, they provide suggestions for incorporating emergent literacy instruction into child-directed activities (e.g., the Creative Curriculum by Dodge et al., 2002; Lonigan & Cunningham, 2013; PCER-C, 2008). A seminal study by Whitehurst et al. (1994) indicates that "modest additions" (p. 552) to a curriculum can significantly increase preschoolers' emergent literacy skills. The NAEYC and IRA joint position statement (2009) highlights the need for continued understanding of providing emergent literacy instruction in preschool settings: "The early childhood profession now recognizes that gaining literacy foundations is an important facet of children's experience before kindergarten, although the early literacy component still needs substantial improvement in many classrooms" (p. 7).
The remainder of the literature review examines the components of the intervention used in this study.

Kindergarten Peer Assisted Literacy Strategies research. A published curriculum with a strong evidence base is Kindergarten Peer-Assisted Learning Strategies (KPALS; Fuchs et al., 2001; Mathes et al., 2001). KPALS was based on the work of researchers at Vanderbilt University and the University of Kansas and is designed to improve literacy instruction within kindergarten classrooms (Fuchs et al., 2001; Mathes et al., 2001). The program is designed to be administered to the whole class in three 20-minute sessions weekly (Mathes et al., 2001). The students are paired, allowing the entire class to be actively engaged in building literacy skills. This program has been empirically validated as effective for children at risk of reading failure (Mathes et al., 2001).

The KPALS program involves three activities that target letter knowledge, phonological and phonemic awareness, and phonics (Mathes et al., 2001). The letter knowledge portion involved students learning letter names and letter sounds in tandem with each other (Mathes et al., 2001). In the phonological and phonemic awareness component, students learned rhyming, initial sound fluency, phoneme blending, and phoneme segmentation skills (Mathes et al., 2001). The phonics section of the KPALS program involved students sounding out words, making letter-sound correspondences, and reading words quickly after they had been sounded out (Mathes et al., 2001).

Fuchs et al. (2001) conducted a large-scale study to determine the impact of the KPALS program on kindergarten students' phonological awareness and letter naming skills. The results showed significant increases in the students' phonological awareness skills, gains that were still apparent five months after the intervention. McMaster, Kung, Han, and Cao (2008) investigated the use of the KPALS curriculum with English Language Learners to remediate potential reading challenges. The intervention was administered for 18 weeks, four times a week, measuring student performance using a pre-test, post-test design. The results indicated that students who received KPALS instruction outperformed those in the control group in phonemic awareness and letter recognition, with effect sizes ranging from .58 to .69 (McMaster et al., 2008). A recent study conducted by Rafdal, McMaster, McConnell, Fuchs, and Fuchs (2011) confirmed the previous research findings that students who received KPALS instruction had alphabet knowledge and phonological awareness skills that surpassed those of their counterparts.

The KPALS curriculum focuses on specific instruction in the alphabetic principle and phonological awareness, which are the cornerstones of emergent literacy development (Mathes et al., 2001). Despite the effectiveness of KPALS, it does not place strong emphasis on several key emergent literacy components: vocabulary development, comprehension, and concepts about print. The present study therefore uses KPALS to target the alphabetic principle and phonological awareness and supplements the program with specific instruction in vocabulary, comprehension, and concepts about print through DTPK.

Developing Talkers: PreK research.
Zucker, Solari, Landry, and Swank (in press) piloted a curriculum, Developing Talkers: PreK (DTPK; The Children's Learning Institute, 2010), which targeted vocabulary and comprehension acquisition through shared storybook reading. DTPK (The Children's Learning Institute, 2010) consists of curricular activities centered on themes, with eight authentic children's books for each theme (i.e., all about me and my body, animals, and nature around us). Within the themes, the researchers developed specific instructional routines to teach the comprehension and vocabulary strategies for each book. For example, there are multiple levels of questions that teachers ask before, during, and after reading the text, targeting the continuum of higher-level thinking. These questions, which target comprehension development, include: (1) guiding questions, (2) checking-for-understanding questions, (3) contextualized questions, which focus on simple recall of story details, and (4) decontextualized questions, which include strategy instruction in comparing, summarizing, inferring, explaining, predicting, and problem solving.

Furthermore, to support the children's comprehension, DTPK (The Children's Learning Institute, 2010) incorporates text structure organizers into the curriculum to provide children with the semantic tools to mediate and organize their comprehension performance (Landry, 2011; Neuman, 2011). The DTPK (The Children's Learning Institute, 2010) curriculum includes several organizing tools to aid teachers in comprehension strategy instruction, including: (1) a characters and setting t-chart, (2) an event sequencing map, (3) story retell with pictures from the story, (4) storytellers, which allow the students to use their own language to retell the story, (5) concept sorts, (6) cause and effect, (7) a graphic organizer for evaluating the story by using the senses, (8) Venn diagrams, and (9) KWL charts.

Additionally, DTPK (The Children's Learning Institute, 2010) includes instruction on tier two vocabulary words that are not part of most children's everyday vocabulary (Beck, McKeown, & Kucan, 2002) and on high-frequency words that have multiple meanings (Beck et al., 2002). Each story has six preselected tier two words, which are highlighted during the story. As each word is introduced, the teacher apprentices the students by presenting a child-friendly definition, using the word in a supportive sentence, and having the children act out the word. Research has shown that children need to interact repeatedly with new vocabulary words in order to ensure that mastery and retention are achieved (Tuckwiller et al., 2010). The DTPK (The Children's Learning Institute, 2010) program has enhancements that allow teachers to provide repeated exposures to the tier two vocabulary words. These enhancement activities include: (1) examples and non-examples of each of the tier two words, (2) acting the word out, (3) asking questions about pictures to enhance deep understanding of tier two words, (4) development of a semantic web, (5) word associations, (6) drawing and writing about the tier two word, (7) discussion about pictures that depict various contexts of the tier two vocabulary words, and (8) "how much can you tell me," which starts with a simple sentence to which students add additional information to develop higher-level complex sentences.

Zucker et al. (in press) evaluated DTPK (The Children's Learning Institute, 2010) in a study that consisted of 39 preschool classrooms that included students at risk for learning difficulties.
The study was conducted in daily sessions over four weeks. A pretest-posttest design was utilized to study the performance gains of students over time. The results showed that students who received the intervention made significant improvements in their receptive vocabulary skills (d = .81).

Intervention theoretical framework. Both KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010) instantiate features of the theoretical frameworks that were outlined in the initial section of the literature review, all of which overlap in how children's learning is constructed. A primary theme that characterizes both programs is that children learn through engaging in literate behaviors with adults as they acquire literacy skills along a developmental continuum. KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010) feature the design of activity settings in which the adult and student participate in emergent literacy activities in both small-group and large-group settings, with adults engaging in literacy behaviors through read-aloud or modeling formats. Furthermore, both the KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010) interventions feature lessons in which the teacher and child work together, with the teacher apprenticing the child in alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print.

A second common theme is the emphasis on providing students with multiple practice opportunities. KPALS (Mathes et al., 2001) teaches the same concepts multiple times within and across lessons prior to advancing to more challenging activities. It also employs a Model (Watch Me) – Lead (Do It with Me) – Test (By Yourself) format, transferring control of learning from the adult to the child. DTPK (The Children's Learning Institute, 2010) teaches vocabulary and comprehension by using explicit strategies and re-reading the stories, allowing additional practice.

The final common theme that characterizes both programs is the recognition that the acquisition of emergent literacy skills requires students to learn a secondary discourse, a discourse that is embedded in the development of a new shared understanding of the literacy language, behaviors, actions, and knowledge that characterizes effective reading. Through social interactions with skilled language users, KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010) scaffold the development of literacy discourses in multiple ways. DTPK (The Children's Learning Institute, 2010) provides access to a literacy discourse by explicitly teaching vocabulary and comprehension and by offering symbolic tools (i.e., graphic organizers) that scaffold learning. The KPALS (Mathes et al., 2001) program develops a literacy and metalinguistic discourse surrounding letter names, sounds, and phonological awareness through multiple practice opportunities and through learning arrangements that progress from teacher modeling to partner tutoring. In summary, the ultimate goal is for children to become independent, literate individuals. To reach this goal, the social interactions surrounding literacy must be designed so that children can access and internalize the literacy processes that support such independence.
The purpose of this study was to bundle and integrate two effective literacy programs (KPALS and DTPK) in order to offer a solid foundation for developing emergent literacy skills among preschool students, and to evaluate the efficacy of the multicomponent, hybridized preschool intervention in improving the literacy outcomes of the students.

Study Context

As stated, the purpose of this study was to evaluate the effect of a tier one literacy intervention in a preschool classroom on emergent literacy skills. The literacy intervention incorporated evidence-based practices that were essential to the development of emergent literacy skills, including emphases on: alphabetic knowledge (i.e., letter names and letter sounds), phonological awareness (i.e., sound segmentation, initial sound phonemes, and rhyming), and book reading activities to promote vocabulary, comprehension, and concepts about print. The intervention was a programmatic hybrid based on the KPALS program (Mathes et al., 2001), which targets alphabetic knowledge and phonological awareness. In addition, the intervention incorporated DTPK (The Children's Learning Institute, 2010), which offered instruction in vocabulary, comprehension, and print awareness.

To measure performance gains, a preschool universal literacy screener was employed, the Preschool Early Literacy Indicator (PELI; Kaminski & Bravo-Aguayo, 2010). This screener evaluated performance on alphabetic knowledge, phonological awareness, vocabulary and oral language, and comprehension. These data were verified by the use of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) (Dynamic Measurement Group, 2009). The study also utilized a curriculum-based vocabulary assessment, sight words, lower case letter names and sounds, and the Teacher Rating of Oral Language and Literacy (TROLL) (Dickinson, McCabe, & Sprague, 2001). To gather additional information about print awareness, this study included a measure of concepts about print through the Michigan Literacy Progress Profile (MLPP, 2001). (See Table 1, below, for the match among literacy skills, intervention components, and assessments.)

Table 1
Literacy Skills, Intervention, and Assessment

Literacy Skill (Indicator According to the NELP) | Intervention Component | Assessment
Alphabetic Knowledge | KPALS; Zoo Phonics (mandated by district) | PELI; DIBELS; lower case letter names; lower case letter sounds
Phonological Awareness | KPALS | PELI; DIBELS
Vocabulary | Developing Talkers: PreK | PELI; TROLL; curriculum-based vocabulary measure
Concepts about print | Developing Talkers: PreK | MLPP
Comprehension | Developing Talkers: PreK | PELI; TROLL

This study was designed to contribute to the research literature in several ways. First, the study investigated the effectiveness of integrating two effective programs, KPALS (Mathes et al., 2001) and Developing Talkers: PreK (The Children's Learning Institute, 2010), which focus on different aspects of early literacy, into a single intervention. Second, this study targeted alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print.
Most intervention studies implement and evaluate the efficacy of only a single instructional component related to emergent literacy (Bailet et al., 2009; Bradley & Bryant, 1985; Bunn et al., 2005; Hargrave & Senechal, 2000; Hindman et al., 2008; Justice et al., 2003; Justice & Ezell, 2000; McGinty, Breit-Smith, Fan, Justice, & Kaderavek, 2011; Schneider et al., 2000; Share, 2004; Silverman & Crandell, 2010; Tuckwiller et al., 2010; Zucker et al., in press). However, given the relationship of all five components to reading achievement, this study's intervention offered a multicomponent, integrated literacy approach that was designed to meet the needs of the majority of students in a tier 1 preschool program. This intervention program was implemented in combination with the district-mandated curriculum, Zoo Phonics. Third, this intervention was delivered for 12 weeks and implemented four times weekly for a total of 48 sessions to offer a more intense and concentrated dosage of literacy instruction. Most interventions range from ten to 24 sessions over a three- to ten-week period, offering two to three sessions per week. Thus, this study's intervention offered a more intense program that nearly doubled the dosage most typically provided in literacy interventions at the preschool level.

CHAPTER THREE
METHODS

Participants

Preschool classrooms. Participants in this investigation were preschool-aged children (four-year-olds to five-year-olds). Participants were enrolled in various preschool classes in a large public school district in western Michigan. The district offered a variety of early childhood programs, including: daycare (ages two and a half through fifth grade), Head Start programs, Great Start Readiness Preschool (GSRP), Early Childhood Special Education, and tuition-based developmental preschool classrooms. This investigation targeted the GSRP programs, a tuition-based developmental preschool, and a young five's class.

The GSRP program was funded by the State of Michigan. It served children who were four years old prior to December 1st and was free for students who met the family income and risk factor criteria set forth by the State of Michigan. GSRP was located within three of the district's elementary school buildings. GSRP had adopted the Creative Curriculum (Dodge et al., 2002), which focused on meeting the individual students' strengths and needs. The GSRP program met four days a week for three hours daily. All classrooms had one classroom teacher, a teacher assistant, and 16 children.

The tuition-based developmental preschool program was located in one of the seven elementary schools, serving children who were four years old. The tuition-based developmental preschool program adopted the Creative Curriculum (Dodge et al., 2002). Classes were held five days a week for two and a half hours a day, with a lead teacher, an assistant teacher, and 18 children. The cost for the five-day program was $505. Scholarships were available for those who needed tuition assistance; the criterion for tuition assistance was the same as for the GSRP classrooms. Specific group characteristics are discussed further in the participant demographics section.

The young five's classroom was located in an elementary building and served children who were five years old by December 1st of the school year. The young five's program also used the Creative Curriculum (Dodge et al., 2002) and was in session five days a week for three hours.
The classroom consisted of a teacher, a teacher assistant, and up to 18 enrolled children. All programs were aligned with the State of Michigan Pre-Kindergarten Early Learning Standards (Michigan State Board of Education, 2005) and the district's kindergarten through twelfth grade curriculum. Finally, all preschool programs were located within the local elementary buildings and were licensed according to the State of Michigan Child-Care Standards.

Due to the teaching assignment variables already in place (i.e., Teacher A having two sections of GSRP and Teacher C having one section of GSRP and one section of Young Five's), the groups were assigned to condition to ensure comparability in student ages and class size across conditions (see Table 2 and Table 3, below). Teachers A and C were balanced across the experimental and control conditions, respectively, because each taught two classes of 16 students. Likewise, Teachers B and D were balanced across the experimental and control conditions, respectively, because each taught one class of 18 students. The teachers were not randomly assigned to condition. Because random assignment was not possible, the statistical analysis controlled for any group differences at pre-test to ensure comparability among groups. There were 50 students each in the experimental and control groups, for a total of 100 students.

Table 2
Group Assignment for Dissertation Sample

Class types: Young Five's, 5 days/week, 18 students per class (max); GSRP, 4 days/week, 16 students per class (max); 4-5 year old, 5 days/week, 18 students per class (max).

Assignment | Teacher | Total Students
Experimental | Teacher A | 16
Experimental | Teacher A | 16
Experimental | Teacher B | 18
Control | Teacher C | 32
Control | Teacher D | 18

Total number of students in all classes: 100
Total number of students in the experimental group: 50
Total number of students in the control group: 50

Table 3
Classroom Characteristics

Program | Creative Curriculum | Location | Child-Care Licensed | Students At-risk
Young Five's | Yes | Elementary building | Yes | 88%
GSRP | Yes | Elementary building | Yes | 100%
Tuition | Yes | Elementary building | Yes | 78%

District demographics. The school district served students preschool through twelfth grade. Enrollment in kindergarten through twelfth grade was approximately 6,000 students. There were seven elementary buildings (grades K-5), two middle schools (grades 6-8), and two high schools (grades 9-12). The school district had a graduation rate of 84%.

According to the district's 2009-2010 annual report, all schools received a rating of A on the State of Michigan's report card except the alternative high school. All schools except the alternative high school also made adequate yearly progress (AYP), which is based on Michigan Education Assessment Program (MEAP) results, the percentage of students taking the MEAP, and attendance and/or graduation rates; in addition, all subgroups of students made AYP. The district exceeded the proficiency standards set forth by the state in reading, mathematics, and science, with proficiency rates of 94.1%, 92.8%, and 91%, respectively. The students in the kindergarten through 12th grade system were 91% Caucasian, 4% Hispanic, 2% multi-racial, 2% Black-not Hispanic, 2% Asian, less than 1% American Indian, and less than 1% Hawaiian Pacific Islander. The district had 12% of students in special education and 32% of students qualifying for free or reduced lunch.

Participant demographics. Participants in this study were four-year-old to five-year-old children (n=100) enrolled in pre-kindergarten programming.
There were a total of 100 children enrolled in the pre-kindergarten programming over the course of the intervention. The experimental group and control group included 50 participants each. The demographics of the entire sample (n=100) consisted of 56% males (n=56) and 44% females (n=44). The experimental group consisted of 50% females (n=24) and 50% males (n=24). The control group was 62% male (n=31) and 38% female (n=19). The average age of the students at the onset of the intervention was four years, six months, and at the conclusion of the intervention the average age was four years, nine months. For specific information on demographics, see Table 4.

The demographics of the preschool classrooms mirrored the demographics of the district at large. The students overall were 88% White, 8% Hispanic, 3% Black, and 1% Asian. There were no American Indian or Hawaiian Pacific Islander students within the sample. The experimental group consisted of 88% White students, 6% Hispanic students, 4% Black students, and 2% Asian students. The control group contained 88% White students, 10% Hispanic students, and 2% Black students.

In the entire sample, 89% of the students were below 200% of the federal poverty guidelines, which equated to an annual income of less than $46,000 for a family of four, and 11% of the students were above the federal poverty threshold, which was $69,000 for a family of four. The experimental group had 96% of the families living in poverty and 4% who were not in poverty. The control group had 82% of the families in poverty and 18% above the poverty threshold. See Table 4 for specifics on classroom demographics.

Table 4
Classroom Characteristics and Participant Demographics

Group | Gender | Age (onset) | Ethnicity | Working Parents | Income
Overall | 56 males, 44 females | 4 yrs 6 mo | 88 White, 8 Hispanic, 3 Black, 1 Asian | 93 working, 7 not working | 89 below poverty, 11 above poverty
Experimental | 24 males, 24 females | 4 yrs 8 mo | 44 White, 3 Hispanic, 2 Black, 1 Asian | 46 working, 4 not working | 48 below poverty, 2 above poverty
Control | 31 males, 19 females | 4 yrs 5 mo | 44 White, 5 Hispanic, 1 Black, 0 Asian | 47 working, 3 not working | 41 below poverty, 9 above poverty

Design

A pretest-posttest research design was utilized in this investigation. Data were collected three times throughout the study. The first data collection, in the fall, was administered during the first two weeks of October. The second data collection, in the winter, occurred during the first week of January. The final period of data collection, in the spring, occurred during the third week of April.

The fall data collection was conducted prior to the start of the study, so the data obtained were not as comprehensive as those from the winter and spring data collection periods. Due to these complexities, the fall data collection was used to compare groups to determine their equivalence at the beginning of the school year. However, fall data were not used in the analysis of intervention impact, other than to offer additional information about the entire sample as well as the relative comparability of the treatment and control groups. Since the intervention was implemented starting in January, the winter data collection (January) served as the baseline or pre-test measurement of students' literacy performance. The third and final data collection, in the spring (April), was used as a post-test to determine the relative performance of the two groups at the conclusion of the intervention.

Setting

This research was conducted in an authentic preschool classroom setting.
This naturalistic environment created challenges that precluded the random assignment of subjects to conditions or the matching of students on specific traits before assignment to classrooms. Due to these constraints, the researcher assigned teachers to the experimental and control groups, making every attempt to create groups that were as similar as possible on certain demographics (i.e., age of students, tuition vs. grant programs, etc.). After assignment there were minor differences between groups (see Table 2, Table 3, and Table 4). However, initial data analysis indicated that student performance across groups was equivalent, which will be discussed in the data analysis section. Due to the group complexity, group differences were accounted for to the maximum extent possible in the data analysis.

There were four teachers who participated in the study. See Table 5 for detailed characteristics of the teacher participants. All teachers were White females. The experimental group included one teacher (Teacher A) with nine years of experience. Teacher A had her professional teaching certificate through the State of Michigan and was certified in elementary education kindergarten through fifth grade and as a reading specialist for sixth through eighth grade, and she had her early childhood endorsement. Additionally, Teacher A had obtained her master's degree in early childhood education. The other experimental teacher (Teacher B) had two years of teaching experience. Teacher B had her provisional teaching certificate through the State of Michigan, was certified in elementary education kindergarten through fifth grade and language arts sixth through eighth grade, had her early childhood endorsement, and was halfway through her early childhood master's coursework.

The control group teachers were similar to the treatment group teachers. One teacher (Teacher C) had two years of experience and a provisional teaching certificate through the State of Michigan. Teacher C was certified in elementary education kindergarten through fifth grade and math sixth through eighth grade, and had her early childhood endorsement. Teacher C was approximately halfway through her master's in early childhood education. Teacher D, the final control group teacher, had her professional teaching certificate through the State of Michigan, was certified in elementary education kindergarten through fifth grade and language arts sixth through eighth grade, had her early childhood endorsement, and held a master's degree in early childhood education. Teacher D had six years of prior experience.

Table 5
Teacher Characteristics

Teacher | Assignment | Certificate | Certifications | Masters | Years of Experience
Teacher A | Experimental | Professional | Elementary K-5; Reading 6-8; Early Childhood | Yes | 9
Teacher B | Experimental | Provisional | Elementary K-5; Language Arts 6-8; Early Childhood | No | 2
Teacher C | Control | Provisional | Elementary K-5; Math 6-8; Early Childhood | No | 2
Teacher D | Control | Professional | Elementary K-5; Reading 6-8; Early Childhood | Yes | 6

Data Sources

Assessment. There were multiple data sources used to obtain information about the preschool intervention programs and student outcomes. See Table 6 for the developmental domains measured by each assessment. The researcher and classroom teacher administered the fall data collection.
All teachers were trained to administer the Preschool Early Literacy Indicator (PELI; Kaminski & Bravo-Aguayo, 2010) assessment in a three-hour training session, and inter-rater reliability checks were conducted on 33% of all assessments, ensuring an inter-rater reliability of at least 80%. The researcher administered the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Dynamic Measurement Group, 2011) assessment, specifically the First Sound Fluency, Letter Naming Fluency, and Phoneme Segmentation Fluency subtests. The teachers, who had all been previously trained by the local district, administered the concepts about print assessment. To guarantee that the data were accurately collected, inter-rater reliability checks were conducted, and scores were at 100% inter-rater reliability.

The winter and spring data collection included the PELI (Kaminski & Bravo-Aguayo, 2010), DIBELS (Dynamic Measurement Group, 2011), and the concepts about print assessment. Additionally, there was a lower case letter name and sound assessment, a sight word assessment, a curriculum-based vocabulary assessment, and the Teacher Rating of Oral Language and Literacy assessment (TROLL; Dickinson, McCabe, & Sprague, 2003). Advanced psychology students from a local university administered the winter and spring data collection to ensure non-biased assessment. The only assessment that was not administered by the advanced psychology students was the TROLL (Dickinson, McCabe, & Sprague, 2003), which was completed by the teacher. All university students were trained in the data collection procedures during a four-hour session, which covered the PELI (Kaminski & Bravo-Aguayo, 2010), DIBELS (Dynamic Measurement Group, 2011), concepts about print, sight words, lower case letter names and sounds, and the vocabulary assessment. The inter-rater reliability across all of the aforementioned assessments was 98%. Individual inter-rater reliability was 97% for the PELI (Kaminski & Bravo-Aguayo, 2010), 100% for all DIBELS (Dynamic Measurement Group, 2011) measures, 100% for concepts about print, 100% for sight words, 100% for lower case letter names and sounds, and 100% for the vocabulary assessment.

Table 6
Assessments

Assessment | Literacy Skills Measured
PELI | Alphabet knowledge; phonological awareness; vocabulary; comprehension
DIBELS | Alphabet knowledge; phonological awareness
MLPP | Concepts about print
TROLL | Vocabulary; comprehension
Lower case letter names | Alphabet knowledge
Lower case letter sounds | Alphabet knowledge
Sight words | General measure of literacy skills

Preschool Early Literacy Indicators (PELI).

Background on the PELI. The PELI (Kaminski & Bravo-Aguayo, 2010) is a preschool literacy screener. Currently, the PELI (Kaminski & Bravo-Aguayo, 2010) is an experimental measure. In 2010, Kaminski and Bravo-Aguayo completed the first large-scale (n=130) pilot study using the PELI tasks. Kaminski and Bravo-Aguayo (2010) are continuing to collect data throughout the United States. The emerging PELI (Kaminski & Bravo-Aguayo, 2010) data show promise that the instrument is a reliable and valid measure of emergent literacy skills.

Administration information on the PELI. The PELI (Kaminski & Bravo-Aguayo, 2010) was administered in its entirety for each data collection session. The PELI (Kaminski & Bravo-Aguayo, 2010) was presented to each child in a print-based book format, with the assessor sitting next to the child (Kaminski & Bravo-Aguayo, 2010). The assessor administered the PELI (Kaminski & Bravo-Aguayo, 2010) by following the standard directions, questions, and prompts set forth by the manual.
The PELI (Kaminski & Bravo-Aguayo, 2010) was an untimed test containing four sections: comprehension, alphabet knowledge, phonemic awareness, and vocabulary/oral language. For the comprehension section, the child orally answered questions and made predictions about the text. The comprehension questions included prediction, information gathering, and recall, and all questions were embedded throughout the text. In the alphabet knowledge portion of the assessment, the child was asked to name upper case and lower case letters, presented in random order. The phonemic awareness section measured the child's ability to identify initial segments and phonemes of words. The vocabulary/oral language portion assessed the students' expressive vocabulary, definitional vocabulary, and story retelling. Each section was scored using raw scores.

Reliability and validity measures of the PELI. Although the PELI (Kaminski & Bravo-Aguayo, 2010) was in the experimental phase of establishing the reliability and validity of the measure, the inter-rater reliability on the PELI was .99 (Kaminski & Bravo-Aguayo, 2010), signifying that the measure was consistently administered with similar results. Additional studies concerning the reliability of the PELI (Kaminski & Bravo-Aguayo, 2010) are in progress. Preliminary results of the validation study (Kaminski & Bravo-Aguayo, 2010) indicated the measure also demonstrated concurrent validity in alphabet knowledge, comprehension, vocabulary and oral language, and phonemic awareness skills. The preliminary results were analyzed by comparing the subtests of the PELI (Kaminski & Bravo-Aguayo, 2010) to various standardized preschool assessments. These standardized assessments included: the Test of Preschool Early Literacy - Phonemic Awareness portion (TOPEL PA; C.J. Lonigan, Wagner, Torgesen, & Rashotte, 2007), the Test of Preschool Early Literacy - Print Knowledge portion (TOPEL PK; C.J. Lonigan et al., 2007), Get Ready to Read (GRTR; G.J. Whitehurst, Lonigan, Fletcher, Molfese, & Torgesen, 2010), the Clinical Evaluation of Language Fundamentals (CELF; Semel, Wiig, & Secord, 1995), and the Peabody Picture Vocabulary Test (PPVT; Dunn & Dunn, 2007).

The alphabet knowledge total portion of the PELI (Kaminski & Bravo-Aguayo, 2010) correlated highly (.77) with the TOPEL PK (C.J. Lonigan et al., 2007). Upper case and lower case letter knowledge also correlated highly with the TOPEL PK (C.J. Lonigan et al., 2007), with coefficients of .79 and .65, respectively (Kaminski & Bravo-Aguayo, 2010). The phonemic awareness total correlated with the CELF (Semel et al., 1995) at .67. The vocabulary/oral language portion of the PELI correlated with the CELF (Semel et al., 1995) at .68 (Kaminski & Bravo-Aguayo, 2010). Additionally, the comprehension portion of the PELI correlated at .67 with the CELF (Semel et al., 1995). The results are all preliminary, but overall the PELI total score correlated more highly with the language tests (Kaminski & Bravo-Aguayo, 2010), and the alphabet knowledge portion was highly correlated with the TOPEL PK (C.J. Lonigan et al., 2007). Information concerning the validity of the PELI (Kaminski & Bravo-Aguayo, 2010) is presented in Table 7.
Table 7
Correlations of the PELI with Other Standardized Preschool Assessments

PELI Score | TOPEL PA | TOPEL PK | GRTR | CELF | PPVT
Total PELI Score | .43 | .42 | .59 | .77 | .62
Alphabet Knowledge Total | .36 | .77 | .56 | .13 | .48
Upper Case Letters | .38 | .79 | .60 | .17 | .45
Lower Case Letters | .29 | .65 | .45 | .11 | .46
Phonemic Awareness Total | .28 | .06 | .40 | .67 | .45
Vocabulary/Oral Language Total | .37 | .36 | .56 | .68 | .59
Name Pictures | .27 | .29 | .50 | .57 | .67
Tell about Pictures | .30 | .29 | .42 | .61 | .42
Story Retell | .29 | .26 | .39 | .42 | .31
Comprehension | .34 | .42 | .51 | .67 | .54

Note. TOPEL PA = Test of Preschool Early Literacy - Phonemic Awareness portion; TOPEL PK = Test of Preschool Early Literacy - Print Knowledge portion; GRTR = Get Ready to Read; CELF = Clinical Evaluation of Language Fundamentals; PPVT = Peabody Picture Vocabulary Test. Sample sizes for the comparison measures ranged from 44 to 130 (n = 44, 45, 45, 46, and 130).

Dynamic Indicators of Basic Early Literacy Skills Next (2011; DIBELS). The DIBELS measures were created to assess literacy skills in kindergarten through sixth grade students (Dynamic Measurement Group, 2011). The goal of the DIBELS assessment is to measure the foundational skills that impact reading success (Dynamic Measurement Group, 2011). For the purposes of this study, the DIBELS (Dynamic Measurement Group, 2011) subtests for letter naming fluency, first sound fluency, and phoneme segmentation fluency were administered. The DIBELS (Dynamic Measurement Group, 2011) measure was selected as a secondary confirmatory measure because the concepts being measured (letter naming fluency, first sound fluency, and phoneme segmentation fluency) were also assessed by the PELI (Kaminski & Bravo-Aguayo, 2010), just in a different manner. The students' scores were not compared to kindergarten benchmarks; instead, the scores were used to further describe the data collected surrounding letter naming and phonemic awareness. In the next section, the DIBELS (Dynamic Measurement Group, 2011) subtests are further explained.

Letter Naming Fluency (LNF). The LNF measure assessed a child's knowledge of letter names (Dynamic Measurement Group, 2011). Upper and lower case letters were presented to the child in a random order (Dynamic Measurement Group, 2011). The measure was administered using the standardized administration and scoring protocols. The scores for LNF were raw scores.

First Sound Fluency (FSF). The FSF measure was designed as an indicator of a child's phonemic awareness abilities (Dynamic Measurement Group, 2011). FSF measured the child's ability to identify the initial sound in a word. The measure was administered using the standardized assessment and scoring protocols. FSF was scored using raw scores.

Phoneme Segmentation Fluency (PSF). The PSF measure assessed a child's phonemic awareness abilities (Dynamic Measurement Group, 2011). PSF tested children's ability to segment words into individual phonemes. The measure was administered using the standardized assessment and scoring protocols. Raw scores were used for the PSF measure.

Michigan literacy progress profile. The district had adopted the use of the Michigan Literacy Progress Profile (MLPP) for the concepts about print measure.

Concepts about print. The assessor administered the concepts about print assessment individually to each child. The assessor presented the child with a developmentally appropriate storybook and asked the child to point to specific book components (e.g., cover, period, word). For each correct answer the child was awarded one point, with a total of 22 points possible.
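Because each of the 22 book components is simply marked correct or incorrect, the concepts about print raw score can be expressed as a sum of binary item scores. The following is a minimal sketch of that tally; the notation is illustrative only and is not drawn from the MLPP materials:

\[
\text{Concepts about print score} = \sum_{i=1}^{22} x_i, \qquad x_i =
\begin{cases}
1 & \text{if book component } i \text{ is identified correctly} \\
0 & \text{otherwise}
\end{cases}
\]

Under this tally, a child who correctly identifies, for example, 15 of the 22 components would receive a raw score of 15.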
Teacher Rating of Oral Language and Literacy (TROLL). The TROLL (Dickinson, McCabe, & Sprague, 2003) was designed to evaluate students' oral language and literacy skills based on teacher ratings. The TROLL (Dickinson, McCabe, & Sprague, 2003) consisted of 25 questions surrounding language and literacy skill development. The teacher questionnaire was completed in five to ten minutes and included language, reading, and writing subscales. The tool had high internal consistency, with an alpha of .89 (Dickinson, McCabe, & Sprague, 2003). Additionally, the TROLL (Dickinson, McCabe, & Sprague, 2003) showed moderate correlations with the Peabody Picture Vocabulary Test-III (Dunn & Dunn, 1997), the Emergent Literacy Profile (Dickinson & Chaney, 1997), and the Early Phonemic Awareness Profile (Dickinson & Chaney, 1997). The TROLL (Dickinson, McCabe, & Sprague, 2003) was administered twice, at the winter and spring assessment times.

The TROLL (Dickinson, McCabe, & Sprague, 2003) assessment was divided into three sections: language use, reading, and writing. Each section had multiple questions that were scored on a one- to four-point Likert scale. The points were summed for each section as well as for an overall raw score. The language portion of the assessment evaluated a student's ability to start conversations, communicate about personal experiences, and ask questions, as well as the quality of language use in the classroom and the quality of vocabulary. The reading section examined the student's behavior toward books (i.e., liking to be read to), the child's ability to "read" on their own, and pre-reading skills (i.e., word recognition, name recognition, ability to recognize letters). The writing section of the TROLL (Dickinson, McCabe, & Sprague, 2003) considered the quality of writing, imitation of writing behaviors, and attitude toward writing.

Curriculum-based vocabulary measure. The curriculum-based vocabulary measure was added as an assessment measure based on the recommendation of the dissertation committee. The curriculum-based vocabulary measure consisted of specific vocabulary targets that were included in DTPK (The Children's Learning Institute, 2010). The DTPK (The Children's Learning Institute, 2010) curriculum consisted of three themes, with eight books in each theme, from which three books were randomly selected. From the randomly selected books, three of the six vocabulary targets were also randomly selected. The vocabulary assessment was videotaped (pre and post) to ensure that the assessor recorded each child's response verbatim. After reviewing ten percent (n=20) of the videos, the reliability was 99%, calculated using a point-by-point method. There were no patterns in the errors. The most common error was recording the word "a" instead of "the," which did not lead to misinterpretation of the child's response.

The vocabulary assessment was scored with a rubric developed for this purpose. To develop an appropriate rubric, 20% of the vocabulary assessments, both pre and post, were selected, which equated to 40 assessments. The point allocations were as follows: zero points, gave an unrelated or incorrect answer; one point, gave an example of the word meaning; two points, gave an example with a partial definition; three points, gave a full, correct definition of the target word. For initial rubric development, the responses for each answer were recorded on the rubric to determine patterns of answers.
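The reliability figures reported for the videotaped transcriptions above, and for the rubric checks that follow, are consistent with the standard point-by-point agreement calculation. The sketch below is illustrative only, and the example counts are hypothetical rather than taken from the study's data:

\[
\text{Percent agreement} = \frac{\text{number of agreements}}{\text{number of agreements} + \text{number of disagreements}} \times 100
\]

Under this formula, for instance, two scorers who agree on 198 of 200 recorded responses would reach 99% agreement.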
During initial rubric development, the researcher and an advanced psychology student both scored the curriculum-based vocabulary measure. The inter-rater reliability was 90% on the initial rubric, as certain answers needed clarification. After the initial development, a tenured faculty member reviewed the rubric and recommended applying it to the larger sample. To ensure unbiased scoring, the researcher and the psychology student independently scored another 10% of the curriculum-based vocabulary measures; inter-rater reliability was 100% based on the final rubric. The psychology student, an unbiased party who was blind to condition, then scored the remainder of the curriculum-based vocabulary assessments (See Appendix A Curriculum-Based Vocabulary Assessment Measure and Rubric for the complete measure and rubric).

Lower case letter names and sounds. Lower case letter names and sounds were assessed by presenting all of the lower case letters in a random order on a single sheet of paper. The assessor pointed to each letter and asked the student to name it. This was done for a total of 28 lower case letters, because the letters g and a were each presented in two different print formats. Lower case letter sounds were assessed using the same page of letters as the letter name assessment; the assessor first probed the student for letter names and then followed up with letter sounds after all 28 letter names had been requested. For both assessments, the student's response was marked correct or incorrect, with one point granted for correct responses and zero points for incorrect responses. The total score was a summed score of correct responses for both lower case letter names and lower case letter sounds.

Sight words. The sight word assessment was generated from the district's kindergarten sight word list. The list contained Dolch words as well as common beginning sight words. The students were presented with each word on a 4x6 index card and were given a chance to respond. Correct responses were granted one point and incorrect responses were allocated zero points. The total score was a summed score of correct responses. See Appendix B Sight Word Assessment List for the sight word list.

Intervention

The intervention consisted of targeted instruction in alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print. The intervention lasted 12 weeks and was administered by the classroom teacher with the support of the classroom assistant. Both experimental teachers attended a three-hour training on the intervention. Topics covered in the training included an introduction to the intervention and the targeted literacy areas in the interventions. Teachers also received a detailed manual on both the K-PALS (Mathes et al., 2001) and the DTPK (The Children's Learning Institute, 2010). Furthermore, the training included practicing the two interventions with feedback from the researcher. Teachers left the training with all materials, handbooks, and a calendar outlining the lessons for each day. For DTPK (The Children's Learning Institute, 2010), teachers received a copy of the picture books corresponding to each theme, together with the DTPK (The Children's Learning Institute, 2010) daily lesson plans related to the weekly themes. After the training session, the researcher provided assistance to the teachers twice within the first two weeks of the intervention.
The goal of these sessions was to assist the teachers in the implementation of the intervention. The researcher was present for an entire session, a total of three hours per visit. During this time the researcher assisted with daily activities, but the primary purpose was to provide feedback to the teachers on the implementation of the intervention, specifically K-PALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010). After the first session, the researcher provided the teachers with feedback about DTPK (The Children's Learning Institute, 2010), which encompassed a reminder not only to read the prompts within the story on the correct day, but also to ask the before reading and after reading questions. The teachers were implementing K-PALS (Mathes et al., 2001) with fidelity at the first coaching session. The second coaching session occurred the subsequent week to see how the feedback had been incorporated into the lessons and to ensure fidelity of implementation. At this session the teachers were implementing both the K-PALS (Mathes et al., 2001) and the DTPK (The Children's Learning Institute, 2010) interventions with 100% fidelity (See Appendix C Procedural Fidelity Measure for the procedural fidelity checklist).

Alphabet knowledge and phonological awareness instruction.

KPALS instruction. The KPALS (Mathes et al., 2001) intervention incorporated explicit, targeted instruction in letter names and sounds, phonological awareness, and phonics. This evidence-based practice was used within the preschool setting as a tier 1 intervention, meaning all students received the intervention. The curriculum was created for students in kindergarten; however, it shows promise for use in a preschool setting with slight modifications (Mannes, 2011). The KPALS (Mathes et al., 2001) intervention was scripted, which allowed the teacher to read the instructional prompts throughout the lesson. The teachers completed 12 weeks of intervention with four sessions a week, which allowed 48 of the 60 lessons to be completed.

The lessons consisted of the students engaging in the name game, which had students name each of the letters presented in the lesson. The letter sequence was presented twice, at the beginning and in the middle of each lesson. The letter name (upper and lower case) and letter sound correspondences were taught in tandem, starting with the first lesson and continuing through lesson 48. The letter names and sounds were introduced according to the schedule set forth by the KPALS (Mathes et al., 2001) curriculum and included consistent review as new letter names and sounds were introduced. An additional component of KPALS (Mathes et al., 2001) taught students phonological awareness skills. The lessons taught students about rhyming, initial sounds, last sounds, and blending and segmenting words. By lesson 48, students had been presented with 21 upper and lower case letter names and corresponding letter sounds (I, U, X, Y, and Z were not introduced), phonological awareness instruction involving CVC words, as well as decoding of VC and CVC words with short vowels. See Table 8 (below) for specific information on the intervention components, procedures, and target skills.

During KPALS (Mathes et al., 2001) instruction, the students each received a copy of the corresponding reproducible sheet. The student was then able to follow along with the teacher. Additionally, the sheet was sent home daily with the child.

KPALS modifications.
There were two modifications made to the K-PALS (Mathes et al., 2001) curriculum. First, the instruction was delivered in a small-group context instead of whole-group instruction with paired students. The rationale for this modification was that explicit emergent literacy instruction is vital to ensuring that all students are as prepared as possible to become conventional readers and writers (Landry, 2011; National Reading Panel, 2000; Phillips & Piasta, 2013).

The other modification made to the K-PALS (Mathes et al., 2001) curriculum concerned the duration of the activity. Instead of 20-minute sessions, the teachers presented students with 10-minute sessions. This modification was made because of preschool children's shorter attention spans, as well as research indicating that 10 minutes of repeated instruction can make a difference in student skills when it is targeted and explicit (Landry, 2011).

Vocabulary, comprehension, and concepts about print instruction. The K-PALS (Mathes et al., 2001) program provided a strong base for many emergent literacy skills; however, three additional early literacy components were included to complete the essential emergent literacy skills necessary for effective early literacy programs, as outlined by the National Early Literacy Panel Report (2008). These three literacy components were concepts about print, vocabulary, and comprehension, all of which were targeted during explicit implementation of shared book reading using the DTPK curriculum (The Children's Learning Institute, 2010).

Developing Talkers: PreK. The DTPK (The Children's Learning Institute, 2010) program included 12 weeks of vocabulary and comprehension instruction, targeted in a shared book reading context. The DTPK (The Children's Learning Institute, 2010) program included detailed and scripted lesson plans for the teachers to follow concerning vocabulary and comprehension strategies as a tier 1 shared reading strategy. The story served as the platform to target both comprehension and vocabulary. The target vocabulary words in each of the intervention stories were tier two words (Beck et al., 2002). Tier two words are words that are encountered infrequently in reading and have multiple meanings (Pentimonti, Justice, & Piasta, 2013).

The series was designed so that each story was read on two consecutive days, with specific targets each day. Each reading always began with a comprehension question, which was used to evaluate and elicit student knowledge prior to reading the story. The first day focused on generalized knowledge, whereas on the second day the questions focused on application. For example, the book My Five Senses by Aliki presented a reading comprehension question on the first day about students' background knowledge, e.g., "What are the five senses?" On the second day, the question was, "When was a time you used your five senses?" The opening questions always offered teachers additional scaffolds to help probe children's thoughts. For example, the first-day follow-up question was "What makes you think so?" whereas the day-two follow-up statement was, "Tell me something you learned about our five senses." This strategy is an example of how the literacy learning was scaffolded to support students' participation in the lesson discourse, moving from contextualized to decontextualized or applied comprehension questions. Furthermore, prior to reading the story, the DTPK (The Children's Learning Institute, 2010) guide always provided a cognitive prompt that cued the students about what to think about while the story was being read.
The example for this story was, "After we finish reading we are going to talk about the book. There is one question I especially want you to think about as we read: 'What are the five senses?'"

Throughout the story read aloud, there were both contextualized questions and decontextualized questions. On the first day, the focus was on understanding the story through contextualized questions, which were embedded into the story (i.e., What can he see?). On the second day, the questions focused on the experiences of individual students, allowing for additional internalization of the story and how it related to them. These decontextualized questions were also embedded throughout the story (i.e., What do you see?). Additionally, the decontextualized questions entailed students using contextual cues to answer and predict answers (i.e., "How does the boy use all four senses at once?"). On both days, the concluding comprehension question after reading was a restatement of the opening question. All of the strategies listed within the book followed a dialogic reading sequence in which the adult and child interact, making the child an active participant in the story and promoting verbal participation (Pentimonti et al., 2013). The tenets of dialogic reading include cloze procedures, a form of scaffolding to guide children to success, asking wh- questions, and making text-to-life connections (Pentimonti et al., 2013). All of the comprehension strategies and vocabulary instruction followed this scaffolding of children's learning to promote errorless learning. The vocabulary instructional components are described below.

The vocabulary target words were also embedded into the story. All vocabulary targets were pre-determined by the curriculum and were all tier two words, meaning that they are words with "relatively low frequency but are considered to be very useful; they often offer a more precise and mature way of describing a basic concept or idea" (Pentimonti et al., 2013, p. 123). This manner of selecting words from books coincides with the current research, in that children are more likely to learn words that are expanded upon within a shared reading context than words that are not (Justice et al., 2005; Pentimonti, Justice, & Piasta, 2013). The instruction surrounding the vocabulary words included definitions, ways in which to teach the vocabulary, supporting picture cues, and activities. DTPK (The Children's Learning Institute, 2010) included a scope and sequence for each unit as well as scripted lesson plans and all supporting materials. See Table 8 for specific information on the intervention components, procedures, and target skills.

Concepts about print. Print referencing (Ezell & Justice, 2000) encourages early literacy skills. Print referencing was performed when the adult was reading to the child in a shared storybook context. The adult provided both non-verbal and verbal references to print, providing exposure to key literacy principles (Justice & Pullen, 2003). The print referencing portion of the intervention was guided by specific principles from Ezell and Justice (2000): "(1) ask questions about print, (2) make comments about print, (3) pose requests about print, (4) point to print when talking about the story, and (5) track the print when reading" (p. 107). Because the DTPK did not include print referencing in the intervention, the researcher developed cue cards based on Ezell and Justice's (2000) work.
The cards were included in each storybook and served as a prompt to reference print throughout the story. See Appendix D Concepts about Print Cue Card for a sample. See Table 8 (below) for specific information on the intervention components, procedures, and target skills.

Time spent in intervention. The multi-component, integrated literacy intervention occurred at various times throughout the instructional day. The K-PALS component of the intervention occurred in small groups for 10 minutes per instructional day. Furthermore, the Developing Talkers: PreK program, along with the concepts about print instruction, was administered during story time and was approximately 10-15 minutes in length. See Table 8 for specific information on the intervention components, procedures, and target skills.

Table 8
Daily Intervention Components

KPALS: 10 minutes per session, 4 sessions/week for 12 weeks (480 total minutes); instructional group: small group; target literacy skills: alphabet knowledge and phonological awareness; procedures: follow scripted lessons; treatment group: experimental.

Developing Talkers: PreK: 10-15 minutes per session, 4 sessions/week for 12 weeks (480-720 total minutes); instructional group: large group; target literacy skills: vocabulary, comprehension, and concepts about print; procedures: follow scripted lessons; treatment group: experimental.

Zoo Phonics (mandated by the district): 5 minutes per session, 4 sessions/week for 12 weeks (240 total minutes); instructional group: large group; target literacy skill: alphabet knowledge; procedures: reviewed all letters daily; treatment group: experimental and control.

*See procedural checklist for specific procedural components

Procedural Fidelity

There were multiple measures used to account for the procedural fidelity of the intervention implementation. First, each teacher's instruction was videotaped five times throughout the intervention to examine procedural fidelity. Additionally, there were three unannounced observations to ensure the instruction was occurring as proposed, in addition to the two initial coaching sessions within the first two weeks of the intervention. Finally, the teachers provided the researcher with copies of the lesson plans. These measures were conducted to ensure the accurate implementation of the integrated literacy intervention.

The teachers received training and were provided with two follow-up sessions to ensure full understanding of the intervention and curriculum. The teachers were assigned five weeks in which to videotape. The videotapes were reviewed for multiple components, which included adherence to the KPALS (Mathes et al., 2001) instruction, adherence to the DTPK (The Children's Learning Institute, 2010) curriculum, and concepts about print references. The KPALS (Mathes et al., 2001) instruction was implemented with high fidelity within the videotaped sessions, at 94% fidelity of implementation. The DTPK (The Children's Learning Institute, 2010) implementation was broken down into before reading, during reading, and after reading components, which were at 80%, 85%, and 82% fidelity, respectively. Concepts about print implementation was at 95% (See Appendix C Procedural Fidelity Measure for procedural fidelity checklists). The three unannounced visits were similar to the videotaped sessions in fidelity of implementation: KPALS (Mathes et al., 2001) was implemented at 96%, DTPK (The Children's Learning Institute, 2010) before reading at 83%, during reading at 90%, after reading at 86%, and concepts about print at 96%. After the three unannounced visits, the researcher reviewed the results with the teachers.
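These fidelity percentages are simply the proportion of required checklist components observed across the monitored sessions. The sketch below illustrates that calculation under assumed, hypothetical checklist sizes and observation counts; it is not the actual fidelity instrument.

```python
# Illustrative sketch of the procedural fidelity calculation: the percentage of
# required checklist components observed across monitored sessions.
# Checklist sizes and observation data below are hypothetical.

def fidelity_percentage(observations):
    """observations: list of (components_observed, components_expected) per session."""
    observed = sum(o for o, _ in observations)
    expected = sum(e for _, e in observations)
    return 100 * observed / expected

# Example: five videotaped KPALS sessions, each scored against a 10-item checklist
kpals_sessions = [(10, 10), (9, 10), (10, 10), (9, 10), (9, 10)]
print(round(fidelity_percentage(kpals_sessions)))        # 94

# DTPK fidelity was tracked separately for before-, during-, and after-reading components
dtpk_before_reading = [(4, 5), (4, 5), (4, 5), (4, 5), (4, 5)]
print(round(fidelity_percentage(dtpk_before_reading)))   # 80
```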
Additionally, the lesson plans were collected, and the correct lessons were indicated in the plans; however, there was not a mechanism to ensure that the lessons were actually taught.

Control Group

The control group teachers, who were different from the experimental group teachers, implemented the literacy instruction strategies set forth by the district. The intervention was offered to all teachers and classrooms after the completion of the study. Literacy instruction in the control group was videotaped eight times to determine the nature and scope of the instruction, and there were three unannounced visits. Lesson plans were provided to the researcher to verify that the literacy practices occurred daily within the classroom environment. Literacy instruction included a letter of the week, for which the children had multiple opportunities and activities focused on the sound and letter name associated with that letter. In addition, the children participated in centers, some of which had a literacy focus. Every session the teacher selected a story to read aloud to the children. Additionally, all preschool classrooms used the Zoo Phonics curriculum to teach letter names and sounds.

The control group of students received the standard readiness instruction offered by the district in the developmental preschool programs. This included literacy instruction during circle time, center time, and story time (See Table 9 for classroom schedules). The literacy instruction was rooted in a developmental perspective. The students were introduced to one letter each week, and the teachers created activities to target the weekly letter and letter sound. The curriculum was generated by the individual teacher and did not follow a research-based or standard published curriculum.

During story time, some common themes emerged from reviewing the videos. The control group teachers engaged in before reading, during reading, and after reading strategies. This included making predictions about the text before reading based on the front cover, as well as pointing out concepts about print such as the title, author, and where to begin reading. Throughout the story the control group teachers asked questions about the story sequence (i.e., What is the character doing?) as well as text-to-self questions (i.e., Have you ever played with balloons?). Furthermore, throughout the reading of the text the teacher would point out concepts about print, such as the first word on a page, the first and last letter of a word, and punctuation. After reading strategies included asking questions about what occurred within the text (i.e., Who was the story about? What happened in the story?). The control group teachers had access to the books from the intervention; however, they were not provided with any of the DTPK (The Children's Learning Institute, 2010) lessons or curriculum materials. According to teacher report, the teachers read 50% of the titles aloud in a large-group reading session.

Additional Common Instructional Themes/Components

There were some common curricular features and components that characterized the two instructional conditions. Both the experimental and control groups determined the sequence for the week by focusing on a letter. For example, one week would focus on the letter "Rr." During the week the activities at centers focused on "Rr" activities. For example, the children made rainbows, found rectangles, and cut out pictures that started with the /r/ sound.
This theme was present in all of the classrooms. The classrooms were all studying the letter "Oo" when the intervention started and had completed each letter of the alphabet by the conclusion of the intervention. Additionally, both experimental and control groups used Zoo Phonics (Bradshaw & Wrighton, 1985), a multi-sensory instructional method for teaching letter names and sounds. This curriculum was mandated for implementation by the kindergarten and pre-kindergarten teachers in the district, and it had been used for several years.

The Zoo Phonics (Bradshaw & Wrighton, 1985) curriculum has an animal that represents each lower case letter of the alphabet. Instruction focuses on letter sounds prior to letter names. The learning progression began with the student stating the animal name and letter sound. For example, the letter 'a' was allie alligator, and the instruction was 'allie alligator, /a/, /a/, /a/'. All of the letters/animals are introduced at the same time (See Figure 1). As children begin to master the animals, the letter names are introduced along with the animal in a blended fashion. The instruction then was, 'allie alligator', 'a', /a/, /a/, /a/. Finally, the children stop stating the animal name and transition to 'a', /a/, /a/, /a/. This program was adopted by the district many years ago and is still used consistently in the prekindergarten and kindergarten classrooms. Despite the popularity of the Zoo Phonics program, I was not able to find any peer-reviewed research articles that establish it as an evidence-based intervention. However, the Zoo Phonics website does post dissertations and master's theses delineating the results of such research (www.zoophonics.com/aboutresearch).

Figure 1
Zoo Phonics Sample Letter Sequence

The classrooms all had similar schedules (See Table 9 for classroom schedules) because they were all district-operated programs. The classroom schedule included opening, circle time, free choice time, small group, snack, story time, and recess. The opening time consisted of the students entering the classroom and checking in by either writing their name or finding their name, based on the teacher's preference. Following 'check-in' the students were able to choose from two to three fine motor activities set at the tables throughout the classrooms. Following opening, the students attended circle time, in which the teacher provided instruction on the calendar and beginning math concepts and reviewed the Zoo Phonics letter names and sounds. Additionally, music was integrated at some point during circle time.

Free choice time was provided in all classrooms. All of the materials were open in the classroom and the students were allowed to choose the activities in which they participated. During free choice time the experimental teachers pulled the students to receive the K-PALS instruction, for eight minutes per small group. The control group teachers occasionally pulled students to work on particular skills, but this was inconsistent. All classrooms provided a snack for the children, which met all child care licensing standards. The classrooms also included a story time in which the teacher selected a book to read to the large group. The experimental group read the selected book based on the intervention scope and sequence. The control group also participated in a story time in which the students participated in a read aloud.
To close out the day, all of the preschool classrooms had recess outside on the school playground.

Table 9
Classroom Schedules

Daily Activity      Classroom A      Classroom B      Classroom C   Classroom D
                    (Experimental)   (Experimental)   (Control)     (Control)
Opening             X                X                X             X
Circle Time         X                X                X             X
Free Choice Time    X                X                X             X
Small Group         X                X                X             X
Snack               X                X                X             X
Story Time          X                X                X             X
Recess              X                X                X             X

CHAPTER FOUR
RESULTS

Data Analysis

Independent variable. The independent variable in this study was the emergent literacy intervention, which included instruction in alphabet knowledge, phonological awareness, comprehension, vocabulary, and concepts about print through the hybridized and integrated intervention that was anchored by the teachers' implementation of the KPALS and DTPK programs.

Dependent variables included in data analysis.

PELI. The variables included in data analysis were the total PELI (Kaminski & Bravo-Aguayo, 2010) score variable, the alphabet knowledge variable, the phonological awareness variables, the comprehension variables, and the vocabulary and oral language variables. The total PELI (Kaminski & Bravo-Aguayo, 2010) score was a composite score, consisting of a summed score obtained from the alphabet knowledge, phonological awareness, comprehension, and vocabulary and oral language variables.

DIBELS. Letter naming fluency, first sound fluency, and phoneme segmentation fluency were included in data analysis. The raw scores for each dependent measure at the three points in time were entered into the SPSS database.

TROLL. The variables included in data analysis for the TROLL (Dickinson, McCabe, & Sprague, 2003) assessment were the summed scores for the individual subtests associated with reading, language, and writing, and the total aggregated score. The variables were collected and analyzed for both the winter and spring test administrations.

Additional variables. There were additional variables that were also included in the data analyses: the curriculum-based vocabulary assessment, lower-case letter names, lower-case letter sounds, sight words, and concepts about print. All of the aforementioned variables were summed as raw correct scores from the assessments.

Preliminary data analysis. All data were tested for normality to determine whether they met statistical assumptions. This analysis was conducted by examining skewness and kurtosis, in combination with the Shapiro-Wilk test of normality and visual inspection of the normal Q-Q plot. The data were normally distributed for all variables.

The sample at the fall data collection was N=95, with n=50 in the control condition and n=45 in the experimental condition. From the fall collection to the winter collection, the overall attrition was 3%, with two students leaving the experimental condition and one student leaving the control condition. From fall to winter, the experimental condition replaced the two students who left and added three additional students, bringing the experimental condition sample size to 48 at the winter collection. The student who left the control condition was not replaced by the winter collection, so the control condition sample size at the winter data collection was 49. Attrition from winter to spring was 3%, with two students leaving the control group and one student leaving the experimental group. To equalize group sizes, students who were not present for both the winter and spring data collections were removed from the analysis.
This action brought the overall sample size to 94, with 47 participants in each of the control and experimental conditions.

Fall analysis results. To ensure equality between the groups at the start of the school year, independent t-tests were conducted on the fall scores. The results for the fall data collection showed no statistically significant differences between the performance of the experimental and control groups across all variables, with the exception of DIBELS First Sound Fluency, t(94) = -1.47, p > .05. See Table 10 for fall independent t-test results. On the First Sound Fluency assessment, the control group students surpassed the performance of the experimental students (Experimental: 9.3; Control: 13.42). See Table 11 for fall group means and standard deviations.

Table 10
Fall Independent t-test Results

Dependent variables (df = 94 for each): Total PELI, Alphabet Knowledge, Phonemic Awareness, Comprehension, Vocabulary and Oral Language, DIBELS FSF, LNF, PSF, Concepts of Print
t: -.67, -.45, -.22, .31, -1.07, -.38, .43, .74, -1.47, -.39, -.86, -.13
p-value: .10, .13, .01**, .36, .12, .74
Note: * p < .05; **p < .01; ***p < .001

Table 11
Fall and Winter Group Means and Standard Deviations

Winter analysis results. Furthermore, to confirm the comparability of the experimental and control groups at the start of the study, the winter scores were also analyzed by independent t-tests to determine group comparability in literacy performance across all dependent measures. The January pretest assessment revealed that the groups demonstrated similar performance across all dependent variables, with the exception of two variables: DIBELS first sound fluency and DIBELS phoneme segmentation fluency. Examination of mean scores revealed that the students in the control group outperformed the students in the experimental group on both measures (DIBELS first sound fluency: t(94) = 1.5, p < .05; DIBELS phoneme segmentation fluency: t(94) = -.90, p < .05). The means for DIBELS first sound fluency for the experimental and control groups were 11.43 and 16.14, respectively (See Table 12 for the winter independent t-test results). These differences were fairly large, and the control group was approximately half a standard deviation above the experimental group. Similarly, on the phoneme segmentation fluency assessment, the experimental group mean was 5.96, whereas the control group mean was 8.04 (See Table 11 for winter group means and standard deviations). This suggested that the two instructional groups were somewhat different in their phonological knowledge at the start of the study. Since these two variables are predictive of and correlated with later reading achievement (National Institute of Literacy, 2008; Phillips & Piasta, 2013), this result suggested that the control group might possess literacy skills that predicted early reading success. Based on this finding, the remaining data were analyzed by covarying for winter pretest differences.
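A compact sketch of this kind of preliminary screening, assuming a hypothetical data frame with one pretest column, is shown below; it pairs the normality checks (skewness, kurtosis, Shapiro-Wilk) with an independent-samples t-test comparing the two conditions. The original analyses were conducted in SPSS, so this is only an illustration of the logic.

```python
# Illustrative sketch of the preliminary screening and group-equivalence checks
# described above; the data frame and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "condition": ["experimental"] * 4 + ["control"] * 4,
    "fsf_winter": [10, 14, 8, 12, 15, 18, 13, 19],   # toy pretest scores
})

scores = df["fsf_winter"]
print(scores.skew(), scores.kurt())        # skewness and kurtosis
print(stats.shapiro(scores))               # Shapiro-Wilk test of normality

exp = df.loc[df["condition"] == "experimental", "fsf_winter"]
ctl = df.loc[df["condition"] == "control", "fsf_winter"]
print(stats.ttest_ind(exp, ctl))           # independent-samples t-test
```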
Table 12
Winter Independent t-test Results

Dependent variables (df = 93; 95 for Vocabulary and Oral Language): Total PELI, Alphabet Knowledge, Phonemic Awareness, Comprehension, Vocabulary and Oral Language, DIBELS FSF, LNF, PSF, TROLL Total, TROLL Language, TROLL Reading, TROLL Writing, Lower Case Letter Names, Lower Case Letter Sounds, Concepts of Print, Sight Words
t: .98, .98, .30, 1.68, -1.01, .47, .83, -1.75, .69, -.90, 6.51, 4.49, 6.15, 7.20, .03, .001***, .43, .05*, .39, .69, .38, .008**, .79, -1.64, .56, -.57, 1.02, .37, .17, -.35
p-value: .83, .62
Note: * p < .05; **p < .01; ***p < .001

Research question one. The first research question was stated as follows: "Does implementation of a research-based literacy intervention have an effect on preschool students' (4-5-year-olds) emergent literacy skills: (a) alphabetic knowledge, (b) phonological awareness, (c) vocabulary, (d) comprehension, (e) concepts about print?"

To answer this question, Pearson correlations were run, followed by analyses of covariance (ANCOVA) to further examine the differences between the preschool groups (experimental, control). Spring posttest scores were adjusted and analyzed using the winter pretest scores as covariates. The following sections outline the results for alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print.

Alphabet knowledge. There were multiple dependent measures for the construct of alphabet knowledge, which included PELI alphabet knowledge, DIBELS letter naming fluency, lower case letter names, and lower case letter sounds. The dependent variables measuring the construct of alphabet knowledge were highly correlated with each other (See Table 13 for Pearson correlations). The PELI alphabet knowledge subtest was correlated with lower case letter names (.89), lower case letter sounds (.64), and DIBELS letter naming fluency (.71). These results indicated that the construct of alphabet knowledge, as measured by each dependent variable, was highly predictive of performance on similar alphabet knowledge subtests obtained from other instruments. Among the measures of alphabet knowledge, lower case letter sounds showed the lowest correlations, with the letter naming assessments being more highly intercorrelated, though all correlations were still significant at the .01 level.

Following the Pearson correlations, each alphabet knowledge dependent variable on the spring posttest was analyzed via ANCOVA to determine group differences, while covarying for the winter pretest score on each instrument. At the time of the posttest, the ANCOVA revealed statistically significant differences between the treatment groups in their knowledge of lower case letter sounds, F(1, 91) = 53.48, p < .001, ηp² = .37, which was associated with a large effect size. Upon further examination of group means, the experimental group was almost a full standard deviation above the control group in the production of letter sounds for lower case letters (experimental adjusted M = 19.53, control adjusted M = 12.32, group SD = 7.83).

The remaining alphabet knowledge variables measured the students' ability to name lower case and upper case letters, and these revealed no statistically significant differences between the groups on the PELI alphabet knowledge subtest, F(1, 91) = 0.84, p > .05; DIBELS letter naming fluency, F(1, 91) = 0.05, p > .05; or lower case letter names, F(1, 91) = 1.35, p > .05. Despite the lack of statistical significance, the experimental group means were somewhat higher than the control group means on all of the aforementioned measures.
Given the fact that the experimental and control groups were both provided instruction on the letter of the week, the similar letter recognition performance of students in the two conditions was not unexpected (See Table 15 for spring assessment group means, adjusted means, and standard deviations).

Phonological awareness. Phonological awareness was assessed through multiple measures, which included DIBELS first sound fluency, DIBELS phoneme segmentation fluency, and the PELI phonemic awareness section. The Pearson correlation analysis indicated that these variables were highly correlated, with PELI phonemic awareness correlating with DIBELS first sound fluency (.72) and DIBELS phoneme segmentation fluency (.46). See Table 13 for Pearson correlations.

The ANCOVA analyses on spring scores, covarying for winter performance, revealed statistically significant differences between the groups across all phonological awareness dependent measures: PELI phonemic awareness, F(1, 91) = 6.28, p < .05; DIBELS first sound fluency, F(1, 91) = 8.28, p < .01; and DIBELS phoneme segmentation fluency, F(1, 91) = 8.32, p < .01, with medium effect sizes for the partial eta squared values obtained on each assessment (ηp² = .07, .08, and .08, respectively). When the mean scores of students in the experimental and control groups were examined, the results suggested that students in the experimental group significantly surpassed the performance of students in the control group in phonemic awareness by almost one-half of a standard deviation (PELI phonemic awareness: experimental adjusted M = 15.58, control adjusted M = 13.29, group SD = 5.68; DIBELS first sound fluency: experimental adjusted M = 23.43, control adjusted M = 16.61, group SD = 13.51; DIBELS phoneme segmentation fluency: experimental adjusted M = 15.14, control adjusted M = 8.43, group SD = 14.54). The results also showed that there were large individual differences in the performance of students, which was reflected in the large standard deviations for the groups. This also made the group differences more striking, because main effects for condition were less likely to emerge in the face of large individual variation.

Vocabulary. The dependent measures for vocabulary and oral language were the PELI vocabulary and oral language component and the curriculum-based vocabulary assessment. The vocabulary measures were significantly correlated, r = .43, p < .001. See Table 13 for Pearson correlations.

When ANCOVA analyses were performed on both vocabulary posttest assessments, covarying for the winter pretest scores, there were statistically significant differences between the experimental and control groups. The PELI vocabulary and oral language analysis revealed a statistically significant result for the two groups with a small effect size, F(1, 91) = 2.72, p < .05, ηp² = .03. Likewise, the curriculum-based vocabulary measure showed a statistically significant result for the treatment conditions that was associated with a large effect size, F(1, 91) = 20.74, p < .01, ηp² = .18.
Group means and standard deviations were examined to determine the direction of the differences, which revealed that the students in the experimental group surpassed the performance of the students in the control group on both the PELI vocabulary assessment (experimental adjusted M = 22.51, control adjusted M = 20.91, group SD = 5.56) and the curriculum-based vocabulary measure (experimental adjusted M = 19.33, control adjusted M = 12.82, group SD = 8.89).

Comprehension. The comprehension variable was measured by the comprehension section of the PELI. The comprehension variable was most highly correlated with the total PELI score, at r = .72, and with the PELI phonological awareness measure, at r = .52. ANCOVA analysis performed on posttest scores that were adjusted for the winter pretest scores demonstrated a slight difference between the experimental and control groups that was significant and associated with a small effect size, F(1, 91) = 4.27, p < .05, ηp² = .04. Upon inspection of the adjusted posttest scores, the control group adjusted mean (M = 14.56) was higher than the experimental group adjusted mean (M = 12.94), with a group standard deviation of 4.51.

Concepts about print. Concepts of print was measured via a single measure that examined students' ability to identify various components of books and text. Pearson correlation analysis showed that the concepts of print variable was correlated at the .01 level with all variables except the curriculum-based vocabulary variable, which was correlated at the .05 level (See Table 13 for Pearson correlations). The concepts of print variable was most highly correlated with lower case letter sounds, at .61. When the ANCOVA analysis was conducted, the results indicated there was not a statistically significant difference between the experimental and control groups, F(1, 91) = 1.34, p > .05.
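The ANCOVA logic used throughout this research question (spring score as the outcome, condition as the factor, winter score as the covariate, and partial eta squared as the effect size) can be sketched briefly. The example below is illustrative only; the data, column names, and use of statsmodels are assumptions, since the original analyses were run in SPSS.

```python
# Illustrative ANCOVA sketch: spring posttest analyzed by condition while
# covarying for the winter pretest, with partial eta squared as the effect size.
# Data and column names are hypothetical; the original analyses used SPSS.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "condition": ["experimental"] * 5 + ["control"] * 5,
    "winter": [10, 12, 9, 14, 11, 10, 13, 9, 12, 11],    # pretest covariate
    "spring": [20, 23, 18, 25, 21, 15, 18, 13, 17, 16],  # posttest outcome
})

model = smf.ols("spring ~ C(condition) + winter", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)   # Type II sums of squares
print(anova)

# Partial eta squared for condition: SS_condition / (SS_condition + SS_error)
ss_condition = anova.loc["C(condition)", "sum_sq"]
ss_error = anova.loc["Residual", "sum_sq"]
print(ss_condition / (ss_condition + ss_error))
```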
Table 13
Pearson Correlations for Spring Dependent Variables

       PT     PP     PA     PC     PV     DT     FSF    LF     PSF
PT     1
PP     .81**  1
PA     .80**  .54**  1
PC     .72**  .52**  .39*   1
PV     .68**  .38**  .31**  .43**  1
DT     .79**  .69**  .68**  .53**  .44**  1
FSF    .65**  .72**  .46**  .45**  .38**  .78**  1
LF     .63**  .44**  .71**  .36**  .28**  .77**  .45**  1
PSF    .56**  .46**  .41**  .42**  .40**  .78**  .38**  .38**  1
TT     .73**  .63**  .62**  .43**  .49**  .70**  .52**  .60**  .54**
TR     .68**  .54**  .64**  .40**  .40**  .68**  .48**  .63**  .47**
TL     .54**  .43**  .40**  .37**  .43**  .47**  .32**  .38**  .38**
TW     .58**  .52**  .52**  .29**  .37**  .63**  .45**  .53**  .48**
SW     .42**  .29**  .37**  .36**  .23*   .48**  .23*   .46**  .40**
LN     .75**  .56**  .89**  .39**  .28**  .73**  .52**  .77**  .43**
LS     .74**  .66**  .64**  .47**  .42**  .76**  .64**  .63**  .50**
CP     .59**  .38**  .54**  .39**  .42**  .53**  .36**  .47**  .40**
CV     .35**  .24*   .19    .24*   .43**  .41**  .25*   .32**  .37**

Table 13 (con't)

       TT     TR     TL     TW     SW     LN     LS     CP     CV
TT     1
TR     .68**  1
TL     .84**  .60**  1
TW     .82**  .68**  .44**  1
SW     .55**  .51**  .26*   .42**  1
LN     .71**  .73**  .48**  .59**  .48**  1
LS     .65**  .68**  .31**  .67**  .45**  .71**  1
CP     .49**  .48**  .31**  .32**  .42**  .61**  .57**  1
CV     .32**  .26*   .27**  .26*   .18    .20    .38**  .24**  1

Note: PT = PELI total score, PP = PELI phonological awareness, PA = PELI alphabet knowledge, PC = PELI comprehension, PV = PELI vocabulary, DT = DIBELS total score, FSF = DIBELS first sound fluency, LF = DIBELS letter naming fluency, PSF = DIBELS phoneme segmentation fluency, TT = TROLL total score, TR = TROLL reading, TL = TROLL language, TW = TROLL writing, SW = sight words, LN = lower case letter names, LS = lower case letter sounds, CP = concepts about print, CV = curriculum vocabulary measure.

Table 14
Spring ANCOVA Results

Measure                          df       F        Partial η²    p-value
PELI
  Alphabet Knowledge             1, 91    .84      .00           .84
  Phonemic Awareness             1, 91    6.28     .07           .01**
  Comprehension                  1, 91    4.27     .04           .04**
  Vocabulary and Oral Language   1, 91    2.72     .03           .10*
Concepts of Print                1, 91    1.42     .02           .23
Vocabulary Assessment            1, 91    20.74    .18           .00**
DIBELS
  FSF                            1, 91    8.28     .08           .01**
  LNF                            1, 91    .05      .00           .82
  PSF                            1, 91    8.32     .08           .01**
Lower Case Letter Names          1, 91    1.35     .01           .25
Letter Sounds                    1, 91    53.48    .37           .00**

Table 15
Spring Group Means, Adjusted Means, and Standard Deviations

                                Experimental     Control          Group            Experimental   Control      Group
Dependent Variable              Spring M (SD)    Spring M (SD)    Spring M (SD)    Adjusted M     Adjusted M   Adjusted M
Total PELI                      72.26 (15.3)     68.48 (19.45)    70.37 (17.5)     71.32          69.42        70.37
  Alphabet Knowledge            21.04 (7.4)      19.87 (7.61)     20.46 (7.5)      20.42          20.62        20.52
  Phonemic Awareness            15.52 (4.83)     13.35 (6.29)     14.43 (5.68)     15.58          13.29        14.44
  Comprehension                 13.48 (4.23)     14.02 (4.81)     13.75 (4.51)     12.94          14.56        13.75
  Vocabulary and Oral Language  22.24 (5.13)     21.24 (6.13)     21.74 (5.56)     22.57          20.91        21.74
DIBELS Total                    58.24 (31.35)    46.38 (31.43)    52.18 (31.43)    59.25          45.41        52.33
  FSF                           22.24 (13.91)    17.75 (12.88)    19.95 (13.51)    23.43          16.61        20.02
  LNF                           21.63 (13.51)    19.46 (12.98)    20.52 (13.22)    20.70          20.35        20.53
  PSF                           14.37 (15.37)    9.17 (13.37)     11.71 (14.54)    15.14          8.43         11.78
Concepts about Print            9.98 (3.69)      11.10 (5.71)     10.55 (4.84)     10.08          11.00        10.54
TROLL Total                     76.13 (11.78)    72.27 (10.25)    74.18 (11.15)    69.97          78.30        74.14

Research question two.
The second research question was: "Does a research-based, class-wide (tier one) intervention have a differential effect on the overall literacy development of intervention students compared to same-age peers who received a standard preschool literacy curriculum implemented by comparison preschool teachers?"

There were multiple variables that measured the overall impact of student learning in both the experimental and control groups: the total PELI (Kaminski & Bravo-Aguayo, 2010) score, the TROLL (Dickinson et al., 2003) assessment (language, reading, writing, and total score), and sight word recognition. Each of the aforementioned dependent variables was analyzed using Pearson correlations, ANCOVA, and multivariate analysis of covariance (MANCOVA). For this analysis, the focus was on how non-targeted or integrated literacy components might be impacted by the two instructional conditions. The subsequent sections discuss the results of the analysis for each dependent variable.

Total PELI score. One measure of early literacy performance was the total PELI score, which was a composite of all of the emergent literacy measures: PELI alphabet knowledge, PELI comprehension, PELI phonological awareness, and PELI vocabulary and oral language. Due to the highly correlated nature of the PELI dependent measures, a MANCOVA analysis was conducted. The MANCOVA results showed no significant effect for treatment condition, F(5, 88) = 0.52, p > .05. Despite the non-significant difference between the groups, the experimental group's adjusted mean total PELI score (adjusted M = 71.32) was slightly higher than that of the control group (adjusted M = 69.42) (See Table 16 for MANCOVA results).

TROLL assessment. The TROLL (Dickinson et al., 2003) assessment was a teacher rating scale yielding language, reading, and writing scores, as well as a total score that was the cumulative sum of the first three. The TROLL (Dickinson et al., 2003) total score was highly correlated with the subsections of reading (.89), writing (.82), and language (.84). Because of the high correlations among the variables, a MANCOVA was conducted on the TROLL (Dickinson et al., 2003) subtests, covarying for pretest performance. The MANCOVA revealed an overall statistically significant main effect, which suggested that there were differences between the two instructional conditions, F(4, 89) = 17.38, p < .05, ηp² = .45. When the univariate results were analyzed, they indicated a significant difference between groups on teachers' ratings of students' language (F(1, 91) = 27.82, p < .05, ηp² = .24) and reading development (F(1, 91) = 40.05, p < .05, ηp² = .31), with the control group students being assigned higher teacher ratings than the experimental students in language and reading. There were no statistically significant differences on teachers' ratings of students' writing development. See Table 16 for MANCOVA results.

Sight word assessment. The Pearson correlation analysis showed variability in the sight word variable's correlations with the other dependent variables; it was correlated at the .05 or .01 level with all variables except the curriculum-based vocabulary assessment. The sight word variable was most highly correlated with the TROLL total score, at .56 (See Table 13 for Pearson correlations). ANCOVA analysis was conducted on the spring sight word assessment, with the winter scores as the covariate, to examine group differences.
There was not a significant difference between the groups, F(1, 91) = 0.1, p > .05. The finding of non-significance was not unexpected, given that this was a transfer measure that evaluated students' performance in a non-instructed area of the literacy curriculum, one potentially above the students' developmental levels.

Table 16
Spring MANCOVA Results

Measure                          df       F        Partial η²    p-value
PELI Total Score                 5, 88    5.88     .007          .112
  Alphabet Knowledge             1, 91    .84      .00           .84
  Phonemic Awareness             1, 91    6.28     .07           .01**
  Comprehension                  1, 91    4.27     .04           .04**
  Vocabulary and Oral Language   1, 91    2.72     .03           .10*
TROLL                            4, 89    17.38    .45           .00**
  Reading                        1, 91    40.05    .31           .00**
  Language                       1, 91    27.82    .24           .00**
  Writing                        1, 91    .006     .00           .94
DIBELS                           3, 87    5.27     .15           .00**
  FSF                            1, 91    8.28     .08           .01**
  LNF                            1, 91    .05      .00           .82
  PSF                            1, 91    8.32     .08           .01**
Note: * p < .05; **p < .01; ***p < .001

Research question three. The third research question was as follows: "Does implementation of a tier one emergent literacy intervention reduce the percentage of kids who are considered well below (tier three) and below (tier two) benchmark on the Preschool Early Literacy Indicator assessment?"

To answer this question, the focus shifted to the percentage of students who met the benchmarks, were below the benchmark, or were well below the benchmark. Dynamic Measurement Group, the publisher of the PELI (Kaminski & Bravo-Aguayo, 2010) assessment, released the benchmark goals and cut points for the PELI measure over the course of the previous year (Kaminski, Abbot, & Bravo-Aguayo, 2012). The benchmark goals and cut points were determined for preschool students using validation studies that correlated the PELI (Kaminski & Bravo-Aguayo, 2010) with other external assessments, including the DIBELS (Dynamic Measurement Group, 2011) composite scores and the CELF (Semel et al., 1995; Kaminski, Abbot, & Bravo-Aguayo, 2012). However, the recent release meant that the benchmarks were preliminary.

The preliminary benchmark goals were established to identify students who achieved benchmark and therefore had favorable odds (an 80%-90% likelihood) of achieving end-of-year prekindergarten literacy standards, and hence were performing on track concerning language and literacy outcomes (Kaminski, Abbot, & Bravo-Aguayo, 2012). On the other hand, the preliminary cut scores identified students who were below and well below the established benchmarks and who would most likely need intensive remediation in language and literacy in order to meet grade-level expectations at the end of the year (Kaminski, Abbot, & Bravo-Aguayo, 2012). Students achieving below the benchmark goal were predicted to have a 40%-60% likelihood of achieving end-of-year literacy outcomes, whereas students who were well below the benchmark had only a 10%-20% likelihood of achieving end-of-year language and literacy outcomes (Kaminski, Abbot, & Bravo-Aguayo, 2012). The remainder of this section examines the proportion of students in the experimental and control groups who were at risk (well below benchmark), based on the number of students who scored at or below the cut point scores. For specific benchmark and cut scores on the PELI assessment, see Table 17.
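The three-way classification this implies (at benchmark, below benchmark, well below benchmark) can be sketched as follows. The sketch is illustrative only: the data frame and column names are hypothetical, the thresholds are the spring PELI composite values reported in Table 17 below, and the handling of the cut point boundary follows the coding described in the next section.

```python
# Illustrative sketch of classifying students against a benchmark goal and a
# cut point. Data and column names are hypothetical; the thresholds are the
# spring PELI composite benchmark and cut point reported in Table 17.
import pandas as pd

SPRING_BENCHMARK = 221    # composite benchmark goal (end of year)
SPRING_CUT_POINT = 185    # composite cut point

def classify(score):
    """Three-way status: meeting the cut point (but not the benchmark) counts as
    'below benchmark'; falling short of the cut point is 'well below benchmark'."""
    if score >= SPRING_BENCHMARK:
        return "at benchmark"
    if score >= SPRING_CUT_POINT:
        return "below benchmark"
    return "well below benchmark"

df = pd.DataFrame({
    "condition": ["experimental", "experimental", "control", "control"],
    "peli_composite_spring": [230, 190, 180, 225],
})
df["status"] = df["peli_composite_spring"].apply(classify)

# Percentage of each condition in each category
print(pd.crosstab(df["condition"], df["status"], normalize="index") * 100)
```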
Table 17
Preliminary PELI Benchmark Goals and Cut Points

                         Beginning of Year/Fall    Middle of Year/Winter    End of Year/Spring
                         Benchmark (Cut Point)     Benchmark (Cut Point)    Benchmark (Cut Point)
Composite Score          157 (123)                 191 (157)                221 (185)
Alphabet Knowledge       7 (3)                     20 (10)                  24 (16)
Comprehension            14 (11)                   15 (12)                  16 (13)
Phonological Awareness   4 (1)                     9 (6)                    13 (9)
Vocabulary               21 (17)                   23 (19)                  23 (19)

The dependent measures were all coded for both the benchmark and cut score criteria. If the student met or exceeded the benchmark, he/she received a 1; if the student did not meet the benchmark, a score of 0 was assigned. Similarly, if a student met or exceeded the cut point, a score of "1" was allocated, whereas if the student did not meet the cut point, a score of "0" was assigned and the student was then deemed well below benchmark. All PELI dependent measures of alphabet knowledge, phonemic awareness, vocabulary and oral language, and comprehension were coded according to the criteria set forth above.

At the time of the winter data collection, there were significant differences between the groups in the number of students who were well below benchmark (i.e., students with only a 10%-20% likelihood of achieving end-of-year outcomes based on the national benchmarks) in the areas of alphabet knowledge, vocabulary, and comprehension. In the area of alphabet knowledge, 30% of the students in the experimental group (n=14.1) were deemed at risk because they scored well below benchmark, compared with 44% (n=20.68) who were judged to be well below benchmark in the control group. Breaking performance down further on the other PELI subtests, the vocabulary and oral language measure showed that 28% of the students in the experimental group (n=13.16) were considered to be well below benchmark, whereas about half that proportion of the control group, 16% (n=7.52), was considered to be at risk. Finally, in the area of comprehension, the experimental group included 15% of students who were considered at risk (n=7.05), compared to 35% of the control group (n=16.45). On the other hand, the groups were fairly equal on the phonological awareness measure of the PELI, with 21% (n=9.87) of the experimental group and 22% (n=10.34) of the control group well below benchmark.

Due to group differences across the measures, ANCOVA analysis was conducted on the spring scores using the winter scores as the covariate. The ANCOVA analysis indicated similarity across conditions in all areas of the PELI, with no significant differences between the groups. Both groups showed a reduction in the proportion of students who were below benchmarks. Only 17% of the experimental group was considered to be well below benchmark in alphabet knowledge (n=7.99) on the spring posttest, compared to the control group, which had 26% of students (n=12.22) considered to be well below benchmark. This represented reductions of 13 and 18 percentage points, respectively, in the number of students who performed well below benchmark across the experimental and control groups. The result was similar for the vocabulary and oral language measure as well. The experimental group had fewer students who were judged to be well below benchmark based on the cut points than the control group (experimental=15%, n=7.05; control=29%, n=13.63), just the opposite of the winter measures, in which the experimental group had been more at risk than the control group.
Finally, the phonological awareness measure indicated that the proportion of students from the experimental group who were considered well below benchmark fell to 3% (n=1.41) from 21% in the winter, whereas the proportion of students in the control group fell to 8% (n=3.76) from the winter level of 22%. On the comprehension measure, the control group was judged to be slightly less at risk than the experimental group (experimental=35%, n=16.45; control=33%, n=15.51). However, the relatively high number of students in both groups who performed poorly on the comprehension component of the PELI revealed an area of concern for the preschool students that remained unresolved during the course of the study. (See Table 18 for percentages of students who were well below benchmark, below benchmark, and at benchmark based on the PELI). The following figure shows the proportion of students in the experimental and control groups who performed at or below the cut-offs for at-risk status based on the PELI norms for the various subtests on the pretest (winter) and posttest (spring) assessments.

Figure 2
Proportion of Well Below Benchmark Students Based on the PELI
(For interpretation of the references to color in this figure and all other figures, the reader is referred to the electronic version of this dissertation.)

Additionally, there were children who did not achieve benchmarks but who were not well below benchmark (tier 3), as reported in the preceding paragraphs. These students were deemed slightly below benchmark (tier 2) and were only 40%-60% likely to achieve end-of-year benchmarks. In the area of alphabet knowledge, the experimental group had a smaller percentage of students who were below benchmark (22%, n=7.99) than the control group (27%, n=12.69). The experimental and control groups were similar on both the vocabulary and comprehension measures in the proportion of students who were below benchmark but not well below benchmark, with 28% (n=13.16) of the students in this category on comprehension and 2% (n=0.94) on vocabulary. The control group had fewer students who did not meet benchmark but were not well below benchmark in the area of phonological awareness (experimental=13%, n=6.11; control=11%, n=5.17).

Table 18
Winter and Spring Percentages of Students Well Below Benchmark, Below Benchmark, and At Benchmark as Measured on the PELI

                                        Winter          Winter     Spring          Spring
                                        Experimental    Control    Experimental    Control
Alphabet Knowledge
  Well Below Benchmark                  30%             44%        17%             26%
  Below Benchmark                       19%             12%        22%             27%
  At Benchmark                          51%             44%        61%             47%
Phonological Awareness
  Well Below Benchmark                  21%             22%        3%              8%
  Below Benchmark                       11%             9%         13%             11%
  At Benchmark                          68%             69%        84%             81%
Vocabulary
  Well Below Benchmark                  28%             16%        15%             29%
  Below Benchmark                       21%             21%        2%              2%
  At Benchmark                          51%             63%        83%             69%
Comprehension
  Well Below Benchmark                  15%             35%        35%             33%
  Below Benchmark                       19%             22%        28%             28%
  At Benchmark                          66%             43%        37%             39%

Research question four. Research question four was: "Does implementation of a tier one intervention increase the percentage of students who are at benchmark on the Preschool Early Literacy Indicator assessment?" In addition to the proportion of students who were at risk based on the PELI, consideration was given to the proportion of students who performed at the high criterion levels established for the literacy benchmarks.
For specific information on how the benchmark goals were determined, see the opening paragraph under research question three; the same methods were employed in determining the benchmark goals (Kaminski, Abbot, & Bravo-Aguayo, 2012). The dependent measures were all coded as meeting or exceeding the benchmark goals on a 0/1 scale: if the student met the benchmark goal, he/she received a "1," and if the student did not meet the benchmark goal, a "0" was allocated. The PELI dependent measures of alphabet knowledge, phonological awareness, vocabulary and oral language, and comprehension were all coded to indicate whether the student was at risk or at benchmark.

On the winter pretest, there were no significant differences between the groups in the proportion of students who were at benchmark in the areas of alphabet knowledge, vocabulary and oral language, comprehension, and phonological awareness. In the area of alphabet knowledge, the experimental group had 51% (n=23.97) of students who achieved benchmark and the control group had 44% (n=20.68) at benchmark. In the area of vocabulary and oral language, the control group had 63% (n=29.61) of students who performed at or above benchmark, compared to the experimental group, which had 51% (n=23.97) at or above benchmark. Students reaching benchmark in the area of comprehension favored the experimental group, with 66% (n=31.02) attaining benchmark compared to the control group with 43% (n=20.21) at benchmark. The groups were almost equal on the phonemic awareness measure (experimental=68%, n=31.96; control=69%, n=32.43).

Again, ANCOVA analysis was performed on the spring scores, covarying for the pretest performance. The ANCOVA analysis indicated similarity across conditions in all areas of the PELI. Despite the non-significance, it is interesting to note that 61% (n=28.67) of the experimental group achieved benchmark in alphabet knowledge, compared to the control group, which had 47% (n=22.09) of students at benchmark. The result was similar for the vocabulary and oral language measure as well: the proportion of students in the experimental group meeting benchmark standards surpassed the control group (experimental=83%, n=39.01; control=69%, n=32.43). The phonological awareness measure indicated that 84% (n=39.48) of the experimental group achieved benchmark compared to 81% (n=38.07) of the control group. The comprehension measure favored the control group, with 39% (n=18.33) of its students meeting benchmark compared to 37% (n=17.39) of the experimental group. These results are summarized in Figure 3 (See Table 18 for specific benchmark percentages).

Figure 3
Proportion of Students Reaching Benchmarks on the PELI

Research question five. Research question five was: "What impact does the implementation of a tier one intervention have on students' DIBELS scores and the potential to achieve DIBELS fall (beginning of the year) kindergarten benchmarks?" The DIBELS assessment subtest results were discussed above and are not repeated here. Relative to the PELI, the DIBELS was a more established criterion, using kindergarten literacy benchmarks that are associated with reading achievement in the elementary grades.

In order to assess the impact of the emergent literacy intervention on DIBELS benchmarks, two additional scores were created. First, all of the DIBELS scores were combined into a total DIBELS score, which included first sound fluency, letter naming fluency, and phoneme segmentation fluency.
The second variable created was a kindergarten DIBELS composite score, which included only first sound fluency and letter naming fluency, not phoneme segmentation fluency. This score was created based on how the Dynamic Measurement Group (2010) constructed the fall kindergarten DIBELS composite score.

The first analysis focused on the total DIBELS winter score (first sound fluency, letter naming fluency, and phoneme segmentation fluency). An independent t-test demonstrated that there was not a significant difference between the experimental and control groups (t(93) = -.692, p > .05). Despite the comparability of the groups, the control group mean (M=36.23) surpassed that of the experimental group (M=32.02). The total DIBELS score was highly correlated with the first sound fluency, letter naming fluency, and phoneme segmentation fluency measures at .78, .77, and .77, respectively (see Table 13 for Pearson correlations). Due to the highly correlated nature of the DIBELS variables, a MANCOVA was conducted on the spring total DIBELS score, which included first sound fluency, letter naming fluency, and phoneme segmentation fluency, while covarying for winter scores. The results indicated a significant main effect for group, F(3,87) = 5.27, p < .05, ηp² = .15. Examination of the univariate analyses showed that first sound fluency, F(1,89) = 6.82, p < .05, ηp² = .07, and phoneme segmentation fluency, F(1,89) = 9.22, p < .05, ηp² = .09, contributed to the significant multivariate result. The experimental group outperformed the control group in both first sound fluency and phoneme segmentation fluency. Additionally, the experimental group adjusted mean on phoneme segmentation fluency was 15.26, whereas the control group adjusted mean was 8.3. Furthermore, when comparing the total DIBELS score, the experimental group adjusted mean (M=59.25) surpassed the control group adjusted mean (M=45.41). See Table 15 for group means, adjusted means, and standard deviations.

Additional DIBELS analysis was conducted on the kindergarten DIBELS composite score, which comprised the first sound fluency and letter naming fluency measures only. The DIBELS publisher recommends that school districts employ a composite score based on these two variables in making judgments about a student's level of risk. Consequently, the analyses in the subsequent section focus on the proportion of students who met the fall kindergarten DIBELS composite score benchmark and the proportion of students who were considered at-risk based on a composite score that represented whether they were above or below the criterion levels for kindergarteners in the fall. The kindergarten DIBELS composite score was coded exactly like the PELI for both benchmark and cut scores, as described above. The benchmark goal for the beginning-of-the-year composite score was 26 and the cut point for students being at-risk was 13 (Dynamic Measurement Group, 2010). Finally, since the DIBELS phoneme segmentation fluency assessment is not given at kindergarten entry, the scores at the spring collection were compared to the middle-of-kindergarten benchmark and cut scores. These scores were coded like the PELI, as detailed in the previous section. The mid-kindergarten benchmark goal was 20 and the cut point for at-risk was 10.
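A compact way to express the composite construction and benchmark coding described above is sketched below. This is a hypothetical illustration, not the analysis code used in the study (the statistical software is not identified in the text); the file name and column names are assumed, and the winter composite is assumed to serve as the covariate in the spring ANCOVA.

# Hypothetical sketch of the kindergarten DIBELS composite coding and the
# spring ANCOVA described in the text; column names and file name are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

BENCHMARK, CUT_POINT = 26, 13  # fall kindergarten composite goal and at-risk cut point (Dynamic Measurement Group, 2010)

def code_composite(df: pd.DataFrame, season: str) -> pd.DataFrame:
    """Add the FSF + LNF composite, a 0/1 benchmark indicator, and a risk category."""
    out = df.copy()
    comp = out[f"fsf_{season}"] + out[f"lnf_{season}"]
    out[f"composite_{season}"] = comp
    out[f"at_benchmark_{season}"] = (comp >= BENCHMARK).astype(int)
    out[f"risk_{season}"] = np.select(
        [comp >= BENCHMARK, comp >= CUT_POINT],
        ["at benchmark", "below benchmark"],
        default="well below benchmark",
    )
    return out

# One row per child: group ('experimental' or 'control'), fsf_winter, lnf_winter, fsf_spring, lnf_spring.
df = code_composite(code_composite(pd.read_csv("dibels_scores.csv"), "winter"), "spring")

# ANCOVA on the spring benchmark indicator, covarying the winter composite score.
model = smf.ols("at_benchmark_spring ~ composite_winter + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))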
At the winter assessment, an ANOVA was conducted on the winter kindergarten DIBELS composite score (the first sound fluency and letter naming fluency measures), with no covariate. The ANOVA was conducted on the number of students meeting or exceeding benchmark (F(1,91) = .014, p > .05) and on the number of students who were well below the benchmark (F(1,91) = .449, p > .05); both analyses indicated that there was not a significant difference between groups. The experimental group had slightly fewer students at benchmark (47%, n=22.09) than the control group (48%, n=22.56). However, the control group had more students well below benchmark, at 34% (n=15.98), than the experimental group, at 28% (n=13.16). The students who were below benchmark (those who did not achieve benchmark but did not fall below the cut point for well below benchmark) constituted 25% (n=11.75) of the experimental group and 18% (n=8.46) of the control group.

An ANCOVA was performed on the fall kindergarten DIBELS composite spring score, entering the winter score as the covariate, which revealed no significant main effect for treatment, F(1,91) = 2.67, p > .05. However, when examining the proportion of students meeting grade-level benchmarks for kindergarten, the experimental group had 83% (n=39.01) of the students meet kindergarten fall benchmarks, whereas the control group had 69% (n=32.43). This was a fairly sizable difference even though it was not statistically significant. This finding is especially compelling given that the control students outperformed the experimental students on the winter pretest assessment. Yet by the administration of the spring posttest assessment, the experimental students had made substantial gains over time, as indicated by the finding that 83% had met the fall kindergarten benchmarks. The 83% of students achieving benchmark exceeds the recommended criterion that 80% of students should achieve benchmark based on effective tier 1 instruction (Horner, 2012). In addition, there was not a significant difference between the groups in the percentage of students who fell below the cut point indicating that they were at-risk (F(1,91) = 0.8, p > .05). However, the experimental group had 11% (n=5.17) of the students who were judged to be at-risk, whereas the control group had 17% (n=7.99) of the students who were judged to be at the same level of risk.

Finally, consideration was given to phoneme segmentation fluency on the DIBELS. This is considered to be a fairly difficult subtest and is not administered to kindergarteners until the winter of the kindergarten year. However, this delay also presented a dilemma in the study. Since the phoneme segmentation fluency assessment has benchmarks that only apply to students in the middle of kindergarten, the application of these phonological awareness standards to students in the preschool experimental and control conditions set a stringent standard of academic achievement. At the winter assessment, an ANOVA was conducted on the winter phoneme segmentation fluency measure, compared to the kindergarten middle-of-the-year phoneme segmentation fluency benchmarks. The ANOVA was conducted on the number of students meeting or exceeding benchmark (F(1,91) = .222, p > .05) and on the number of students who were well below the benchmark (F(1,91) = .699, p > .05). At the winter assessment the experimental group had fewer students who met the benchmark (6%, n=2.82) than the control group (10%, n=4.7). The control group had slightly more students well below benchmark, at 76% (n=35.72), than the experimental group, at 72% (n=33.84).
Furthermore, the students who were below benchmark but did not fall below the cut point for well below benchmark constituted 22% (n=10.34) of the experimental group and 14% (n=6.58) of the control group. See Figure 4 for the percentage of students who were well below benchmark.

Figure 4
Proportion of Students Who Were Well Below Benchmark on the DIBELS

An ANCOVA was conducted on the spring scores to compare the proportion of students who were at benchmark on the phoneme segmentation fluency subtest, with winter scores as the covariate. The ANCOVA revealed a significant main effect for group, F(1,91) = 7.32, p < .05, ηp² = .07. Examination of the means revealed that 37% (n=17.39) of the experimental group achieved mid-year kindergarten benchmarks, whereas 21% (n=9.87) of the control group reached mid-year kindergarten benchmarks. However, when evaluating the proportion of at-risk students on the DIBELS phoneme segmentation fluency measure, the ANCOVA revealed no significant differences between groups (F(1,91) = 3.3, p > .05). This result should be interpreted in light of the fact that the experimental group had 48% of the students (n=22.56) who were judged to be at-risk on that basis, whereas the control group had 77% of the students (n=31.49) at-risk as measured by the mid-year kindergarten benchmark. See Figure 5 for the percentages of students who were at benchmark for each DIBELS assessment.

Figure 5
Percentage of Students Who Were at Benchmark on the DIBELS

CHAPTER FIVE
DISCUSSION

The primary purpose of this investigation was to determine the effects of an emergent literacy intervention, KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010), on preschoolers' emergent literacy skills. The emergent literacy skills assessed were alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print. The present chapter discusses the results within the framework of each research question. Additionally, implications for practice, limitations of the current study, and directions for future research will be discussed.

Research Question One
The first research question was: "Does implementation of a research-based literacy intervention have an effect on preschool students' (4-5-year-olds) emergent literacy skills: (a) alphabetic knowledge, (b) phonological awareness, (c) vocabulary, (d) comprehension, (e) concepts about print?"

Alphabet knowledge. The first question focused on alphabet knowledge skills, as measured using the PELI (Kaminski & Bravo-Aguayo, 2010) alphabet knowledge subtest, DIBELS (The Dynamic Measurement Group, 2011) letter naming fluency, lower case letter names, and lower case letter sounds. Analyses indicated that there were no significant differences in the students' ability to name letters across all assessments: PELI (Kaminski & Bravo-Aguayo, 2010) alphabet knowledge, DIBELS (The Dynamic Measurement Group, 2011) letter naming fluency, and lower case letter names and sounds. This finding contradicted the Mannes (2011) study, which reported a statistically significant difference between experimental and control groups. However, the Mannes (2011) study was conducted in a classroom that met two days a week with children ranging from three to four years old, whereas this study targeted a specific population of children who were all four years old, with a smaller percentage of five-year-olds, all of whom met four to five days a week. The difference between the results of the two studies could be attributable to a multitude of reasons.
First, both the experimental and control groups received instruction in alphabet knowledge and letter recognition. The results indicate that Zoo Phonics in combination with teacher-created materials and Zoo Phonics in combination with K-PALS (Mathes et al., 2001) are equally effective in teaching letter names. This finding is consistent with previous research in that specific, direct instruction of letter names results in children acquiring letter name knowledge and skills (Bunn et al., 2005; Mannes, 2011; Phillips & Piasta, 2013; Share, 2004). Despite the non-significant effect for letter names, there was a significant difference between the experimental and control groups on lower case letter sounds, with the experimental group almost a full standard deviation above the control group. This finding was not surprising given the increased attention to letter sounds, in isolation and in VC, CV, and CVC words, in the K-PALS (Mathes et al., 2001) component of the intervention. This finding is encouraging in light of Spencer, Spencer, Goldstein, and Schnieder (2013), who stated, "Learning letter names is a strong predictor of learning to read because it facilitates learning letter sounds (Ehri & Wilce, 1979), but naming letters without phonological awareness and letter-sound association has little effect on reading development" (p. 47). The intentionality of the K-PALS (Mathes et al., 2001) curriculum in combining letter naming, letter sounds, and phonological awareness shows promise as an intervention that could influence long-term reading development for students in preschool settings.

Phonological awareness. Phonological awareness was measured by multiple assessments, which included the PELI phonological awareness subtest (Kaminski & Bravo-Aguayo, 2010), DIBELS first sound fluency (The Dynamic Measurement Group, 2011), and DIBELS phoneme segmentation fluency (The Dynamic Measurement Group, 2011). There were significant differences between groups on all of the phonological awareness assessments, with the experimental group surpassing the control group means on the multiple measures of phonological awareness. Most likely, this result is attributable to the KPALS (Mathes et al., 2001) intervention, which targets phonological awareness and phonics. This finding is consistent with the results of previous research on the positive impact K-PALS (Mathes et al., 2001) has on phonological awareness skills (Fuchs et al., 2001; Mannes, 2011; Mathes et al., 2001; McMaster et al., 2008; Rafdal et al., 2011). This finding suggests that KPALS (Mathes et al., 2001) shows promise as an appropriate intervention for developing phonological awareness skills among preschool students. The finding also supports the importance of addressing both alphabet knowledge and phonological awareness in an explicit manner to facilitate literacy outcomes in preschool programs (Bailet et al., 2009; Justice et al., 2003; Phillips & Piasta, 2013; Spencer et al., 2013). Furthermore, the experimental group was underperforming compared to the control group on all phonological awareness measures (i.e., PELI phonological awareness (Kaminski & Bravo-Aguayo, 2010), DIBELS first sound fluency (The Dynamic Measurement Group, 2011), and DIBELS phoneme segmentation fluency (The Dynamic Measurement Group, 2011)) prior to the implementation of the intervention, at both the fall and winter assessment periods. In contrast, the spring results revealed a reversal in the performance status of the groups.
The experimental students had caught up to and surpassed the performance of the control students in their phonological awareness performance. This finding is significant in that K-PALS (Mathes et al., 2001) seems to represent a superior intervention for phonological awareness skills, and it has the utility to be used as part of a tier 1 intervention program with good prospects for improving the phonological awareness outcomes of students who are below benchmark in preschool literacy environments. This finding enhances the current corpus of literature by extending KPALS (Mathes et al., 2001) into a new environment, preschool, with minor implementation modifications. To understand the full potential impact, additional research must be conducted. However, the initial use of KPALS (Mathes et al., 2001) in a preschool environment indicated a potentially promising intervention.

Vocabulary. Vocabulary was measured with the PELI (Kaminski & Bravo-Aguayo, 2010) and the curriculum-based measure. The PELI (Kaminski & Bravo-Aguayo, 2010) assessment provided a broad assessment of vocabulary as well as a generalization measure, whereas the curriculum-based measure assessed specific vocabulary targets from the DTPK (The Children's Learning Institute, 2010) intervention. Results from the curriculum-based measure showed a statistically significant difference between the experimental and control groups; however, the result did not generalize to the broader vocabulary measure in the PELI (Kaminski & Bravo-Aguayo, 2010). This finding is consistent with previous research in that vocabulary instruction yields small effects on targeted vocabulary words but tends not to generalize to other measures of broader vocabulary skills (Dickinson & Darrow, 2013; Pentimonti, Justice, & Piasta, 2013; Pollard-Durodola et al., 2011; Zucker et al., in press). The non-significance of results on the generalized PELI (Kaminski & Bravo-Aguayo, 2010) vocabulary measure indicates that producing generalized vocabulary growth was beyond the scope of the interventions administered in this study.

These results are contrary to the initial study of DTPK (The Children's Learning Institute, 2010) conducted by Zucker et al. (in press), which showed a significant impact on children's receptive vocabulary (d=.81). The initial study and this study have several differences, which may explain the variation in results. These differences include the number of classrooms in the study, the duration of the intervention, and the frequency of coaching. The primary difference between the Zucker et al. (in press) study and the current study is that Zucker et al. measured receptive vocabulary, whereas this study measured expressive vocabulary. Receptive and expressive vocabulary are two different sets of skills, which may be part of the reason the results were different.

Comprehension. Student comprehension was measured by the PELI (Kaminski & Bravo-Aguayo, 2010) assessment and was targeted in the DTPK (The Children's Learning Institute, 2010) intervention. The findings indicated there was a slight difference between groups, with the control group surpassing the experimental group. Teaching comprehension to students is an abstract task that involves many integrated strategies, which presents various challenges to teachers (Center for the Improvement of Early Reading Achievement, 2001).
The comprehension instruction relied on teacher knowledge and ability, but it was explicitly included as part of the intervention program. At the same time, the control condition also featured read-alouds using picture books in which the teachers asked before-, during-, and after-reading questions to promote comprehension. Based on these findings, it might seem that the development of comprehension requires more explicit instruction in the preschool program to produce desired effects. It is also possible that another comprehension assessment might be more sensitive to comprehension differences among the groups. The PELI (Kaminski & Bravo-Aguayo, 2010) uses many scaffolding questions to evaluate students' understanding of literal comprehension elements in the story (i.e., who, when, what, etc.). This type of supported and contextualized comprehension assessment might have obscured differences in the inferential comprehension processes that were developed as part of the DTPK (The Children's Learning Institute, 2010) program. Another possibility is that the DTPK (The Children's Learning Institute, 2010) instruction in the experimental classrooms and the guided read-alouds in the control classrooms were more similar than dissimilar in the fabric and substance of the comprehension instruction. This was evidenced by the control group teachers asking pre-reading, during-reading, and after-reading questions similar to those asked in the experimental group. Finally, the students in the experimental group scored lower on all assessment measures, which could indicate that the students in the experimental group needed more than tier one instruction in comprehension to make up the difference and surpass the control group performance. At this point, however, the primary conclusion that can be reached pertains to the difficulty of impacting comprehension, and the fact that this subtest produced the highest number of students in both groups who performed at at-risk levels.

Currently, the consensus surrounding teaching comprehension skills to young children is that children can construct meaning from text (Gregory & Cahill, 2010). However, there is little research available about how to teach comprehension skills to young children, specifically how to teach children to think critically and acquire new knowledge (Gregory & Cahill, 2010). The importance of early comprehension skills and the need for more research was underscored by the Institute of Education Sciences, which awarded a large federal grant to better understand early comprehension development as well as how to measure comprehension in preschool (Longigan & Shannahan, 2013).

Concepts about print. The concepts about print assessment was administered through the Michigan Literacy Progress Profile, which assessed 22 different concepts about print (e.g., capital letter, period, quotation marks). There was not a significant difference between the experimental and control groups. It is possible that the similarity in instructional methodologies used in shared reading and read-alouds in the two conditions might account for the lack of significant differences between groups.
Research Question Two
The second research question was as follows: "Does a research-based, class-wide (tier one) intervention have a differential effect on the overall literacy development of intervention students compared to same-age peers who received a standard preschool literacy curriculum implemented by comparison preschool teachers?" Overall, the results suggest that there were few significant differences between the experimental and control students in their overall literacy development, based on transfer measures and composite scores. These results typified the groups' performance on the PELI (Kaminski & Bravo-Aguayo, 2010), the TROLL (Dickinson et al., 2003), and sight words. Each of these measures will be discussed in turn.

PELI (Kaminski & Bravo-Aguayo, 2010). First, the overall impact of the intervention was not significantly different between groups as measured by the total PELI (Kaminski & Bravo-Aguayo, 2010) score. Initially, one would expect the experimental group to surpass the control group when evaluating overall literacy components. However, upon further reflection, there were components of emergent literacy instruction embedded in the mandated Zoo Phonics and read-aloud instruction that might be comparable in both conditions. Further research must be two-fold. The first line of research should tightly control teaching strategies to determine the differential impact of those strategies on student-level skills. Second, additional research must be conducted to evaluate the reliability and validity of the PELI (Kaminski & Bravo-Aguayo, 2010) instrument as a screening measure with preschool students. Since the PELI (Kaminski & Bravo-Aguayo, 2010) was in its first year of field-testing, the instrument may need additional minor adjustments to be more sensitive to child-level skill differences.

TROLL (Dickinson et al., 2003). The TROLL (Dickinson et al., 2003) measure was a teacher rating scale that evaluated several components of emergent literacy. On this measure the control group surpassed the experimental group in language and reading on the posttest. Previous research indicated this was a reliable and valid assessment (Dickinson et al., 2003) that was predictive of students' literacy achievement. However, the consistency of rating literacy skills across teachers would need to be further investigated to draw any causal conclusions. The teachers rated their own students without any controls on the ratings to determine the accuracy of the ratings. With the small sample size of four teachers, the ratings could have been highly influenced by teacher bias and potential inconsistency of rating.

Sight words. The sight word measure was administered as a transfer variable to determine if there was a significant difference between groups on a more sophisticated skill, sight word recognition. There was not a significant difference between the experimental and control groups on the sight word measure. This was not entirely unexpected given that neither group of teachers instructed the preschool students in sight words. Furthermore, because the sample consisted of preschoolers, the developmental potential for sight word acquisition is different than it is for students in elementary school, who would be expected to master sight word recognition through a process of explicit instruction.
In summary, no conclusion can be reached regarding the potential of KPALS (Mathes et al., 2001) or DTPK (The Children's Learning Institute, 2010) to impact sight word acquisition in preschool students.

Research Question Three
The third research question was stated as: "Does implementation of a tier one emergent literacy intervention reduce the percentage of kids who are considered well below (tier three) and below (tier two) benchmark on the Preschool Early Literacy Indicator assessment (Kaminski & Bravo-Aguayo, 2010)?" The results indicated there were no significant differences between groups in the proportion of students deemed to be well below PELI (Kaminski & Bravo-Aguayo, 2010) benchmarks. Despite this non-significance, there were meaningful differences in the percentages of students who were judged to be well below benchmarks in the experimental and control groups. In the area of alphabet knowledge, the control group had 26% of the students who were at-risk because they fell at or below the lowest 10-20% cut point, whereas the treatment group had 17% of the students who were judged to be at-risk. Converting these proportions into actual numbers of students, 12 control students fell well below benchmark levels, compared to 8 students in the experimental group. Although these results are exploratory, this figure has potential economic consequences for school districts. When considering the cost of providing tier 3 services on a 1:1 basis for 8 students relative to 12 students, it is apparent that the experimental intervention could yield economic benefits.

Furthermore, these statistics were consistent for phonological awareness and for vocabulary and oral language as well. In phonological awareness the experimental group had one student at risk compared to three within the control group. On the vocabulary measure the experimental group had seven students who were well below benchmark compared to the control group's 14 students.

On the other hand, the PELI (Kaminski & Bravo-Aguayo, 2010) comprehension measure yielded a slight advantage for the control group, which had 15 students who performed well below benchmarks compared to the experimental group, which had 16 students, a difference of one. This slight difference is not especially consequential, but the result compelled further consideration of the comprehension instruction, which yielded several insights and possible implications. First, the experimental and control group treatments may have been similar in the instructional strategies that teachers employed. At the same time, there remained a fairly sizable percentage of students who were deemed well below comprehension benchmarks. It could be argued that additional and more intensive literacy methods need to be employed in preschool to impact reading comprehension, since neither treatment was as effective as it needed to be. Furthermore, it is possible that the PELI (Kaminski & Bravo-Aguayo, 2010) may not be as accurate or sensitive as possible in measuring comprehension abilities. The PELI (Kaminski & Bravo-Aguayo, 2010) questions focus on general wh-questions (who, what, when, where, etc.) asked before, during, and after reading.
On the other hand, the DTPK (The Children's Learning Institute, 2010) comprehension questions feature inferential comprehension, problem-solving, and text-to-self connections, so the assessment questions asked in the PELI (Kaminski & Bravo-Aguayo, 2010) did not match the comprehension instruction within DTPK (The Children's Learning Institute, 2010).

That the experimental group had fewer students at-risk is encouraging, but it also invites consideration of potential reasons why the groups were different but not significantly different. First, the cut points and benchmarks are preliminary and need to be interpreted with caution. That is, the PELI (Kaminski & Bravo-Aguayo, 2010) cut scores are being field-tested this year, so they may not reflect the true picture of risk. Additionally, the K-PALS (Mathes et al., 2001) intervention may need to be delivered in its entirety, instead of for only 12 weeks; if the entire intervention were delivered, the results might be different. Finally, more rigorous procedural fidelity measures, ensuring the most accurate implementation possible, might have strengthened the results to the point of significance.

Research Question Four
The fourth research question was: "Does implementation of a tier one intervention increase the percentage of students who are at benchmark on the Preschool Early Literacy Indicator assessment (Kaminski & Bravo-Aguayo, 2010)?" The spring analysis indicated the treatment groups were not significantly different in the number of students at benchmark as measured by the PELI (Kaminski & Bravo-Aguayo, 2010). However, there were differences in the percentages of students achieving benchmark, with the experimental group surpassing the control group in every area except comprehension. For example, in alphabet knowledge (which consistently across all measures had been non-significant), the experimental group had 61% (n=28.67) of the students meeting benchmarks compared to the control group with 47% (n=22.09). The control group thus had seven more students who would require a more intensive level of services (i.e., tier 2) to address the students' alphabet knowledge. This has an impact on the cost, materials, and staff needed. Practically, for example, with seven children needing additional remediation, a district would be looking at employing a teacher or paraprofessional who would need to teach two to three groups of students to provide the additional support in alphabet knowledge. Tier 2 instruction could include an additional 10 minutes of instruction per group for up to eight weeks.

To help put this in perspective, a typical paraprofessional makes $10 to $12 an hour (wage information was gathered from a wage and salary study conducted at the local level during 2012); delivering this instruction over four days a week, 10 minutes each day, for eight weeks would equate to an estimated $3,200 to $3,840 of additional cost. Breaking down the cost on a per-pupil basis, this amounts to an estimated $457 to $549 per child to remediate learning challenges in this setting. The employment of a teacher to provide this remediation reflects the real cost to a school district: with the average teacher making $30 to $35 an hour, the estimate rises to $1,371 to $1,600 per child to remediate in tier 2 instruction. The increased cost of remediation has implications.
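The per-child figures above follow directly from the reported totals and hourly wage ranges. The following is a minimal sketch of that arithmetic, assuming the stated totals and seven children needing tier 2 support in alphabet knowledge.

# Reproduce the per-child remediation estimates from the totals reported above.
children = 7
para_total_low, para_total_high = 3200, 3840   # reported totals at $10-$12/hour

per_child_para = (para_total_low / children, para_total_high / children)
print(per_child_para)                           # ~ (457.14, 548.57), i.e., $457-$549 per child

# Scaling the same delivery model to average teacher wages of $30-$35/hour:
per_child_teacher = (para_total_low * (30 / 10) / children,
                     para_total_high * (35 / 12) / children)
print(per_child_teacher)                        # ~ (1371.43, 1600.0), i.e., $1,371-$1,600 per child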
However, we also know that the gap between the lowest and highest learners is much smaller in preschool than it is in the later elementary grades (Greenwood et al., 2011); therefore, it is easier and more cost effective to remediate in the preschool program than in later years. The results of this study reveal a unique opportunity for explicit, targeted instruction to close the achievement gap prior to students entering kindergarten. Furthermore, the more students who enter kindergarten possessing grade-level literacy skills, the more likely those students are to achieve later literacy benchmarks, hence meeting literacy expectations with tier 1 instruction (National Institute of Literacy, 2008; Phillips & Piasta, 2013; Spencer et al., 2013).

Research Question Five
The fifth research question was: "What impact does the implementation of a tier one intervention have on students' DIBELS (The Dynamic Measurement Group, 2011) scores and the potential to achieve DIBELS (The Dynamic Measurement Group, 2011) fall (beginning of the year) kindergarten benchmarks?"

The findings on the DIBELS (The Dynamic Measurement Group, 2011) assessment revealed statistically significant differences between the experimental and control groups. This finding reinforces other research showing that the KPALS (Mathes et al., 2001) intervention produces superior phonological awareness skills (Fuchs et al., 2001; Mannes, 2011; McMaster et al., 2008; Rafdal et al., 2011). The combination of letter naming with letter sounds and phonological awareness is a catalyst for later reading achievement (Spencer et al., 2013), which hopefully puts the experimental group at an advantage upon kindergarten entry. Unfortunately, collecting follow-up DIBELS (The Dynamic Measurement Group, 2011) data was outside the scope of this study, so this conjecture cannot be substantiated.

When the spring collection of the kindergarten DIBELS (The Dynamic Measurement Group, 2011) composite score was evaluated against fall benchmarks, 83% of the students in the experimental group were at benchmark, surpassing the criterion of 80% of students achieving benchmark (Horner, 2012). This finding is promising because it suggests that the district is likely to meet its building goals and standards for students entering kindergarten at benchmark on the DIBELS (The Dynamic Measurement Group, 2011) assessment. Again, follow-up data would be needed to substantiate whether the literacy skills were maintained through kindergarten entry.

When evaluating the composite score and the phoneme segmentation score, there were not significant differences between the experimental and control groups. However, there was a difference in the percentage of students at-risk or achieving benchmarks. One example is phoneme segmentation fluency, where the experimental group had 48% of the students at-risk compared to the control group, which had 77% of the students at-risk. Though these numbers are high, it must be remembered that the skill being measured here is a middle-of-the-year kindergarten benchmark. One would not expect preschoolers to achieve this standard at the end of the preschool program for 4-year-olds.
However, research indicates that the acquisition of emergent literacy skills is interrelated (Phillips & Piasta, 2013; Spencer et al., 2013), meaning that we should be teaching a variety of skills along the developmental continuum to expose students to phonological awareness, including skills similar to those measured by the phoneme segmentation fluency measure. Again, these findings suggest that the KPALS (Mathes et al., 2001) intervention may be an effective tier 1 strategy within preschool learning environments.

Implications for Practice
This study evaluated a research-based emergent literacy intervention within multiple preschool classrooms. The intervention contained several components that were unique and enhanced the current corpus of literature. First, K-PALS (Mathes et al., 2001) is typically delivered in a whole-class format (Mathes et al., 2001), whereas this study used small groups (3-4 students) for instructional delivery. Second, many of the current intervention studies target a single emergent literacy skill, for example isolating alphabet knowledge or vocabulary, but do not target the skills in combination (Bailet et al., 2009; Bunn et al., 2004; Justice et al., 2003; Share, 2004; Tuckwiller et al., 2010). This study focused on alphabet knowledge, phonological awareness, vocabulary, comprehension, and concepts about print, which encompass many of the foundational emergent literacy skills (NELP, 2008). The combination of the KPALS (Mathes et al., 2001) intervention and the DTPK (The Children's Learning Institute, 2010) intervention created a holistic intervention, and this integration has not been fully attempted in the current published, peer-reviewed literature (Bailet et al., 2009; Bunn et al., 2004; Justice et al., 2003; Share, 2004; Tuckwiller et al., 2010). Third, the intervention was delivered as a program within a preschool setting, whereas many of the prior intervention studies focused on students in kindergarten classes (Fuchs et al., 2001). The results indicate the combination of KPALS (Mathes et al., 2001) and DTPK (The Children's Learning Institute, 2010) is a potentially beneficial intervention within the preschool setting for developing emergent literacy skills.

Additionally, this study adds to the literature by showing that direct, systematic, and targeted instruction that is still developmentally appropriate can produce enhanced literacy outcomes (Neuman, 2012). This is especially vital given that 59% to 70% of the Head Start and early childhood programs that serve children at-risk for school failure use the High/Scope Curriculum or Creative Curriculum, for which there is "no causally interpretable evidence regarding effective early childhood curricula" (Longigan & Cunningham, 2013, p. 176). This study shows promise in that there are other potential curricula that may facilitate enhanced literacy outcomes. This finding does need to be interpreted with caution, as the results were not significant across all domains, nor did the intervention encompass all components of preschool (e.g., math, social-emotional development). However, it challenges the status of High/Scope and Creative Curriculum as child-directed approaches in which the teacher integrates all instruction into areas of children's interest throughout the classroom. This study begins to elucidate how small amounts of direct instruction can improve emergent literacy outcomes.
Finally, the body of research regarding explicit preschool literacy instruction within a multi-tiered system of support model is limited (Bayat et al., 2010; VanderHeyden et al., 2007), so this study informs the current literature by indicating that explicit tier 1 interventions in preschool are beneficial. This intervention adds to the current corpus of literature exploring the impact of tier 1 instruction on preschoolers' emergent literacy skill acquisition. By no means does it answer all of the questions surrounding the effectiveness and impact of MTSS in preschool settings; instead, it scratches the surface and raises additional questions that need to be clarified and answered.

Limitations of Research
There are several limitations of the current research. First, there was not random assignment to the conditions. Random assignment of students to groups would have enhanced the group comparability in the study, as well as provided additional control. Second, there was not a standardized measure administered at pretest and posttest. Such a measure would have added to the current research in several ways. First, it would have provided additional evidence of group comparability. Including standardized measures would also have determined whether the impact of the intervention generalized to a broader skill set. Finally, it would have determined whether any of the effects of the intervention were generalizable to achievement and intelligence measures.

There were other limitations to the current research concerning the assessment measures utilized. First, there was high variability (large effect sizes) in the DIBELS (The Dynamic Measurement Group, 2011) phoneme segmentation scores. This variability could be related to many factors, one of which is a potential lack of reliability in scoring this measure. Second, the curriculum-based vocabulary measure required children to verbalize a definition of the target vocabulary word, which relies heavily on a student's ability to clearly articulate the correct definition. In future research, the vocabulary measure could be enhanced by asking for the definition of the target vocabulary word along with four yes-no questions to elicit the student's understanding of the target vocabulary word. Finally, the addition of having students read CV and CVC words would have been a direct measure of the impact of KPALS (Mathes et al., 2001), as reading of CV and CVC words is included in the KPALS (Mathes et al., 2001) instructional sequence.

An additional limitation is that implementation fidelity of the DTPK (The Children's Learning Institute, 2010) was between 80% and 85%, leaving room for improvement. If this study were conducted again, procedural fidelity would need to be monitored in a more strategic, comprehensive manner to ensure closer alignment with the prescribed intervention. A further limitation was the use of an emergent preschool literacy screener, the PELI (Kaminski & Bravo-Aguayo, 2010); though it aligns with the key emergent literacy skills, it is in the initial stages of reliability and validity testing. There were also limitations in the small sample size of teachers. Even though there were 94 subjects, there were only four teachers in the study, of which two were assigned to the control group and two to the experimental group. This small sample size of teachers makes it difficult to determine whether the effects were solely related to the instructional methodologies or whether the teacher, not the intervention, was the impetus for change.
Furthermore, due to conditions beyond the control of this study, the teachers had classes that were already assigned, and the classes were not identical in class days and hours. These conditions may have influenced the outcomes and should be controlled for in subsequent studies. Another potential limitation is the makeup of the control and experimental groups. The experimental group had a higher ratio of female to male students than the control group. It is difficult to determine if this had an impact on the results; however, group differences were controlled for in the data analysis.

Future Research
There are multiple directions for future research. First and foremost, the potential impact of direct instruction within preschool environments must continue to be researched. Currently, within early childhood education there is debate about whether play and direct instruction can be blended, or whether together they create too broad an instructional framework to be blended (Hirsh-Pasek & Golinkoff, 2011). This study begins to delve into the framework of utilizing a curriculum that is focused on the whole child, Creative Curriculum (Dodge et al., 2002), while embedding a small amount of direct instruction in emergent literacy skills. The initial results show promise. Future research needs to further investigate numerous components of this concept, including longitudinal data on emergent literacy gains, which skills are best taught through direct instruction versus play-based instruction, and the ideal amount of time in direct instruction.

Future research also needs to further enhance the research standards. First, conducting the study with additional standardized and fully researched emergent literacy screeners would enhance the measurement of outcomes. Additionally, random assignment of students to condition would enhance the study. Another area for future research would be to conduct the study again with different intervention combinations. For example, one group would implement just KPALS, another would implement just DTPK, another group would implement both K-PALS and DTPK, another group would implement K-PALS, Zoo Phonics, and DTPK, and a final group would implement just Zoo Phonics. This arrangement, with large enough sample sizes, would allow additional conclusions to be drawn about the impact of the interventions and potential interactions between interventions, which this study does not aim to address. Finally, future research should evaluate student-level responsiveness to the interventions, for example, which students benefit most from direct instruction and what learning profiles are associated with particular learning trajectories.

Conclusion
In conclusion, this study adds to the body of research in that the KPALS and DTPK interventions produced superior literacy skills in the areas of phonological awareness and vocabulary. This work serves as a starting point for continued investigation of the impact of direct, targeted instruction in preschool settings. If the study can be conducted again with additional features, it has the potential to enhance the corpus of literature surrounding preschool emergent literacy outcomes.
APPENDICES

Appendix A
Curriculum-Based Vocabulary Assessment Measure and Rubric

Curriculum-Based Vocabulary Measure

Vocabulary Assessment
Child's Name:
Directions: Assessor says, "What does (insert target word) mean?" If the student does not answer in 5 seconds, restate the question, "What does (insert target word) mean?" If the child again does not respond, move on to the next target word. Ask all 27 target words. WRITE DOWN THE CHILD'S RESPONSE VERBATIM.
1. Safe
2. Pretend
3. Responsible
4. Upside-down
5. Tool
6. Find
7. Inside
8. Seed
9. Outside
10. Beach
11. Shatter
12. Sandcastle
13. Crush
14. Worry
15. Destroy
16. Protect
17. Build
18. Nest
19. Shadow
20. Tracks
21. Warning
22. Weather
23. Join
24. Heavy
25. Ripe
26. Share
27. Cranky

Rubric

Target Vocabulary Word: Safe
Definition - Story: Not in danger
0 points (incorrect) – No response, Something you need to keep, I don't know, Keep safe, Be safe, Your safe, Keep safe, Have to be safe, Got to be safe, Better be safe, S, Do, You'll be safe, Doing something safe
1 point (example) – Hurt, Be safe and sound, no one getting you, be safe and not get hit by car, don't hit, middle of road, walking feet, not family, fire, be careful
2 points (partially correct definition and example) – dangerous, have to be safe and good
3 points (correct definition) – not in danger, be careful

Target Vocabulary Word: Pretend
Definition - Story: Make-believe you are someone or something else
0 points (incorrect) – punching, you're playing, no response, you're pretending, eating in room, play with stuff, pretend, playing pretend, wanna play, something not right
1 point (example) – pretend to be . . . , different human, with babies, eat, costume, dead, save, people, snow angel, animal, puppy, scare people, monster, puppet
2 points (partially correct definition and example) – be something, something else (play/pretend), something doesn't happen, dressing up and you're not that person, to wish something is true
3 points (correct definition) – pretend your someone else, not real

Target Vocabulary Word: Responsible
Definition: Someone you can count on
0 points (incorrect) – responsible, no response, be responsible, nice, be safe, respectful, don't let on one in, everything you want, not be mean, don't hit anyone and play nice
1 point (example) – be safe with sisters, pick up, keep something, responsible for child or baby, be responsible to teacher, have to listen (mom, dad, teacher), don't let anyone in, not in trouble,
2 points (partially correct definition and example)
3 points (correct definition) – respectful, being good

Target Vocabulary Word: Tool
Definition: Something that helps you do something else
0 points (incorrect) – doing something, no response, breaks (something, stuff breaks fix it), too ot T, using a tool, work on stuff, no touching
1 point (example) – fixing something, fix (fix stuff), hammering, sewing, building, fixing car or trucks, hit down board, dad uses
2 points (partially correct definition and example) – you use something to work on and fix a car
3 points (correct definition) – something you use to fix

Target Vocabulary Word: Find
Definition: To get something you are looking for
0 points (incorrect) – find something, no response, someone wants something, be fined, someone is mean you say find, fined, find stuff and fix it, finding to play, better be find, you're okay, P, not safe, good
1 point (example) – Treasure hunt/buried treasure/surprises/treasure, found somebody, finding your dog, find your friend, find someone, hiding, find toys, find Easter egg, could play and have to find someone
2 points (partially correct definition and example) – Find something that is yours, lost and someone found you
3 points (correct definition) – find something lost

Target Vocabulary Word: Upside-Down
Definition: To turn something so the wrong side is up
0 points (incorrect) – silly/funny, laugh, upside down, no response, walking upside down, going to fall, facing upside down, faking upside down, go upside down, something's upside down, stand upside down, upside down on something
1 point (example) – upside down on ceiling, hand upside down on feet, looking upside down, hanging upside down, looks like walking on ceiling, I do upside down on couch, summersault, headstand, standing on hands, hang from something like monkey bar, head on flood and feet on ceiling, bar under legs
2 points (partially correct definition and example) – hanging on something
3 points (correct definition)

Target Vocabulary Word: Seed
Definition: A thing a plant grows from
0 points (incorrect) – no response, sees something, comes from a bird, seed(ing), finding a side, eat something with sees, R, seeing, S, throw away, get something, guard plant
1 point (example) – bird seed, planting flower, planting seed, plant a pumpkin, trying to plant tree/garden, plant food, plant blueberries, plant flower with water and sun, put in dirt and want to grow, put seed in dirt, plant
2 points (partially correct definition and example) – grow something, grow a seed, grow, growing flower, grow seed
3 points (correct definition) – grows plant, growing something, plant seed and grows into plant, plants and grow plants

Target Vocabulary Word: Inside
Definition: The inner part of something
0 points (incorrect) – Don't want to be outside, walk into door, be inside/inside, have to stay inside, you're inside, run in circle, home, coming inside, no response
1 point (example) – Inside house, when raining go inside, inside voice/quiet, inside bedroom house/building, things you can do inside: drinking hot coco, play, warm, shoes off, play toys, not outside, safe
2 points (partially correct definition and example)
3 points (correct definition)

Target Vocabulary Word: Outside
Definition: The outer part of something
0 points (incorrect) – want to play outside, play, you're cold, you're outside, going outside somewhere, outside, sunny day, scream, have to play, better be outside, go outside people working inside, when it rains (can rain), no response, when you go outside
1 point (example) – walk out door, playing outside: monkey bars, playgroup, garden, paying in rain and puddles, bike, run and play
2 points (partially correct definition and example)
3 points (correct definition)

Target Vocabulary Word: Shatter
Definition: To break into pieces
0 points (incorrect) – No response, scared, color pictures outside, shatter (something), whole, cold, go in water, shivering, yelling, sunny day, like your cold, like a shatter, go in water
1 point (example) – shatter glass, no touch
2 points (partially correct definition and example) – things break, glass
3 points (correct definition) – something breaks, you break it

Target Vocabulary Word: Sandcastle
Definition: A pretend castle people make out of sand
0 points (incorrect) – you're building your own, playing, build something, sand, you're making, beach (at beach), sandcastle, we put a sandcastle, you're sand castling, no response
1 point (example) – build(ing), build it, build a sandcastle at beach, at beach making something, building sand, making sandcastle, bring cars and crash, water knock down, breaking it
2 points (partially correct definition and example) – build something out of sand, made out of sand, go to beach build out of wet sand, something built of sand that might get destroyed, building something the princess and queen live in
3 points (correct definition) – build castle out of sand, castle made out of sand, go to beach and make castle out of sand not sure if wet sand, making sandcastle out of sand

Target Vocabulary Word: Beach
Definition: The ground that is next to the water
0 points (incorrect) – going to beach, at the beach, something's a beach, find surprises, take bus to beach and walk on, beach, going in water to swim, have to play, means play, play in water, no response, where you play, summer time
1 point (example) – playing in: sand, beach, water, sandcastles, in Holland beach, hot and go to beach to cool down, sand, get to play at beach, swim, bathing suit
2 points (partially correct definition and example) – sunny day go to beach and play in sand, get to play in water and sand, playing at beach and making sandcastles, play on beach in the water, swimming and playing in sand
3 points (correct definition) –

Target Vocabulary Word: Crush
Definition: To press something so hard it changes its shape
0 points (incorrect) – Falling, no response, crushing, a rock, when on a date with a boy, fall crush, did something mean to friend, crushed, crush somebody/something, getting over, C, can't be safe, mean to friend
1 point (example) – Step on something, crushing: mud, sandcastle, bottle, pill, car, nut with hammer, bottle, something, can, stuff
2 points (partially correct definition and example) – Crush things into pieces
3 points (correct definition) –

Target Vocabulary Word: Worry
Definition: To feel upset
0 points (incorrect) – don't worry about something, no response, say something, about crushing stuff, you're worrying, don't worry, worry about I don't know, worried somewhere, when you're worried, M, where you can be safe, you get worried
1 point (example) – scared someone will get hurt, worry about: people, something, your mom is not home from work, mom and dad in jail, you're lost, mom, don't know you have school, alone
2 points (partially correct definition and example) –
3 points (correct definition) – afraid, scared, feel sad

Target Vocabulary Word: Destroy
Definition: To smash and break apart
0 points (incorrect) – no response, not being nice, destroy something/someone, zombies coming, went and destroyed, you're destroying, we destroy, S, can't touch, get destroyed, you're safe
1 point (example) – destroy stuff, destroy nut, destroy tree, destroy house, destroy school, crane picks something up and throws it, hammering stuff
2 points (partially correct definition and example) –
3 points (correct definition) – ruin something

Target Vocabulary Word: Protect
Definition: To keep out of danger
0 points (incorrect) – no response, protecting someone/you/self/something, protect, talking to someone else, B, made fart and doesn't destroy fart,
1 point (example) – protect dog/fire/child/mom/family/cat/stuff/baby/people, you're safe, protecting yourself, someone's protecting you, family will protect you
2 points (partially correct definition and example) – on a mission with kids and have to protect them
3 points (correct definition) – make someone safe, safe
don’t knock down, build something from dumpsters, building something at home, B, do, building, hiding stuff 1 point (example) – trucks build, build something with blocks, build a pretend house/tower/truck/car/castle, you’re building, when you build something and get wood cut 2 points (partially correct definition and example) – 3 points (correct definition) – Target Vocabulary Word: Nest Definition: A place where animals lay their eggs 0 points (incorrect) – birds, no response, building something, nest, it means baby, the birds get the egg, you’re nesting, N, when back, birds can hatch 1 point (example) – a bird is in the nest, birds, next, birds lay in nests, birdy in there, building a nest, birds live in them, a bird builds a nest, home for birds 2 points (partially correct definition and example) – when you’re building a nest for your birds kids, coming out of an egg 3 points (correct definition) – Some bird or chicken in nest, lay eggs, birds eat and lay eggs Target Vocabulary Word: Shadow Definition: dark shape made when someone blocks the light 0 points (incorrect) – That you pretend to be something, there's a shadow, your shadow, no response, person made out of light, something is making a shadow, people, cold, have your shadow, scary, shadow, you go deep in forest, means that someone is there she’s a shadow like a door, shadowing, Sh, summer with sand, make a picture, dark 1 point (example) – Shadow covers the sun with the trees, when you have a shadow you get scared at night, when your right by the tree you see shadow, make shadow with hands, pretend monster with flashlight, suns out 132 2 points (partially correct definition and example) – cover sun with tree, when it’s sunny out you can see your shadow, sun shining on you 3 points (correct definition) – sun making shadow Target Vocabulary Word: Tracks Definition: Marks made by the feet of the person or animal 0 points (incorrect) – no response, means you’re driving, cars go on tracks, for trains, tracks that go by, going on a bus ride, train, a choo-choo trains coming, tracks, follow stuff, stopped in the car, tractor, train tracks, tractor 1 point (example) – things that make tracks (ducks/trains/shoes/quad/bike/trucks), stepping on sand, leaving tracks on floor with dirt, cheetahs find tracks, 2 points (partially correct definition and example) – you leave tracks on something, leaving tracks, when drive a car in dirt and grass and leave tracks, humans and animals make tracks, tracks show 3 points (correct definition) – footstep in snow/sand Target Vocabulary Word: Warning Definition: Something that tells you about danger 0 points (incorrect) – Warn something/someone, no response, warning, vampire after you, morning, get a up and play outside, it’s warm, only get one warning Someone’s coming, sunny day, W, wake up, you’re scared or something, something coming 1 point (example) – warning if there is a fire, don’t go in road, your mom warns you, no hit when playing, not listening 2 points (partially correct definition and example) – Telling someone to not go places, if there is a fire you will warn someone, you blow a whistle that a storm or fox is coming 3 points (correct definition) – telling someone to watch out Target Vocabulary Word: Weather Definition: What is feels like outside 133 0 points (incorrect) – watching the weather, check the weather, no response, weather is bad, a movie and watch to see weather, when don’t know weather, look at weather on tv, L 1 point (example) – windy, rainy, raining, sunny, snowy, cloudy, 
warm, bad weather scary, something makes weather like a tornado 2 points (partially correct definition and example) – cold outside and warm, weather out today 3 points (correct definition) – Target Vocabulary Word: Join Definition: To connect or link something together 0 points (incorrect) – happy, come back, no response, in people, join in good thing/something/someone/me, if you don’t you just go with then, you’re going to your house, letting someone in, you’re joining, jellyfish, have a good day, joining 1 point (example) – when you go to someone’s birthday, everyone comes to join your house, invite someone over, when you join a friend and take car of them, join the club/people/party/game/play/song 2 points (partially correct definition and example) – join in and ask friend if you come there 3 points (correct definition) – Target Vocabulary Word: Heavy Definition: Something that weighs a lot 0 points (incorrect) – you’re building something heavy, something’s heavy, really heavy, play with micky mouse, nice to people, heavy, no response, when you brothers, I am heavier, I don’t know, you’re heavy, H, when you die, something’s really low 1 point (example) – weight, something too heavy for you, a bunch of apples, heavy when carrying something, can’t pick stuff up, carrying a car, tree is heavy, when to pull something and heavy, heavy to pick up, really heavy like stone or wood 2 points (partially correct definition and example) – carrying heavy backpack and have to take stuff out, weight, strong guy can lift, can’t carry, pull something too heavy 3 points (correct definition) – 134 Target Vocabulary Word: Ripe Definition: Ready to pick and eat 0 points (incorrect) – no response, ripe, ripe a potato, red/green/blue, writing something, when you’re playing with a tiger and you get stripes, ripe something, you’re being nice, you write pictures, it’s ripe, B, safe, something ripe, cooking something, you have stripes 1 point (example) – strawberry, apple, banana, grapes, eating fruit 2 points (partially correct definition and example) – 3 points (correct definition) – ready to pick, ready to eat, when fruit is ripe and you can eat them Target Vocabulary Word: Share Definition: Give some to somebody else 0 points (incorrect) – no response, share, take turns, sharing, share something 1 point (example) – share toys with sister, share toys, you share, you’re sharing, you have to share, share with someone, sharing is caring, share your dress, share stuff, share sand or play dough, share to for 10-15 days 2 points (partially correct definition and example) – giving toys 3 points (correct definition) – give and take Target Vocabulary Word: Cranky Definition: Bad mood, upset easily 0 points (incorrect) – you’re cranky, no response, you’re playing, being cranky, you don’t like any, cranky, crankin, get all cranky, you’re crankier, cranking someone, C, robbers, you’re scared 1 point (example) – didn’t get enough sleep, go to room if you wanna to be like that, cranky to someone, being annoying, cranky and want to go to bed, dad is being cranky, you’re whining 2 points (partially correct definition and example) – you get cranky and mad, sad 3 points (correct definition) – you’re mad, someone’s mad, you get mad, mad at someone, you’re really angry 135 Appendix B Sight Word Assessment List 136 Sight Word Assessment List I, my, and, it, a, one, see, big, into, in, goes, is, am, green, as, to, can, take, make, the, for its, do, no, she, he, said, so, made, was, of, you, went, pull, we, now, then, at, have, an, 
out, by, up, from, some, come, me, go, their, long, what, why, when, like, look, yes, mom, dad, cat, love, they, on, will, are, stop, dog, here, not, where, who

Appendix C
Procedural Fidelity Measure

Procedural Fidelity Checklist – Experimental Condition
Date: Evaluator: Teacher:

Table 19
KPALS Instruction
Intervention Component – Indicators
Delivered in small group – Yes / No
Time of small group – _____ 8-10 minutes _____ less than 8 minutes _____ more than 10 minutes
Game sheet for each child – Yes / No
Proper materials – Yes / No
Completed first letter naming – Yes / No
Completed first ‘game’ – Game Name: __________ Yes / No
Completed second letter naming – Yes / No
Completed second ‘game’ – Game Name: __________ Yes / No
Followed scripted lesson plan – Yes / No
Group responses – Yes / No
Individual responses – Yes / No
Additional Comments:

Table 20
DTPK Instruction
Intervention Component – Indicators
Delivered in large group – Yes / No
Time of large group – _____ 10 minutes _____ 15 minutes _____ 20+ minutes
Proper materials – Yes / No
Completed before reading strategies – _____ out of ______
During reading questions – _____ out of ______
Completed after reading questions – _____ out of ______
Completed vocabulary targets – _____ out of ______
Followed scripted lesson plan – Yes / No
Group responses – Yes / No
Individual responses – Yes / No
Concepts about print – Yes / No
Additional Comments:

Table 21
Zoo Phonics
Intervention Component – Indicators
Delivered in large group – Yes / No
Duration – _____ 5 minutes _____ less than 5 minutes _____ more than 5 minutes
Reviewed all zoo phonics animal names – Yes / No
Reviewed all zoo phonics animal sounds – Yes / No
Additional Comments:

Appendix D
Concepts about Print Cue Card

Print Cue Card:
- Point out the cover of the book
  o Author
  o Title
  o Where to start reading
- Throughout the book point out:
  o Punctuation
  o Words
  o Letters
  o Letters in words (beginning, middle, and end)
  o Where to start reading
  o Where to go after finishing a line

BIBLIOGRAPHY

Abrams & Company. (2000). Let’s read with the letter people. Waterbury, CT: Abrams & Company.
Adlof, S. M., Catts, H. W., & Lee, J. (2010). Kindergarten predictors of second versus eighth grade reading comprehension impairments. Journal of Learning Disabilities, 43(4), 332-345. doi: 10.1177/0022219410369067
Aos, S., Lieb, R., Mayfield, J., Miller, M., & Pennucci, A. (2004). Benefits and costs of prevention and early intervention programs for youth. Olympia, WA: Washington State Institute for Public Policy. Retrieved June 30, 2011 from http://www.wsipp.wa.gov/pub.asp?docid=04-07-3901
Arnett, J. (1989). Caregivers in day-care centers: Does training matter? Journal of Applied Developmental Psychology, 10, 541-552.
Bailet, L. L., Pepper, K. K., Piasta, S. B., & Murphy, S. P. (2009). Emergent literacy intervention for pre-kindergarteners at risk for reading failure. Journal of Learning Disabilities, 42(4), 336-355. doi: 10.1177/0022219409335218
Baker, C. N., Kupersmidt, J. B., Voegler-Lee, M. E., Arnold, D. H., & Willoughby, M. T. (2010). Predicting teacher participation in a classroom-based, integrated preventative intervention for preschoolers. Early Childhood Research Quarterly, 25, 270-283. doi: 10.1016/j.ecresq.2009.09.005
Barnett, W. S. (2008). Preschool education and its lasting effects: Research and policy implications. Boulder and Tempe: Education and the Public Interest Center & Educational Policy Research Unit. Retrieved July 1, 2011 from http://epicpolicy.org/publication/preschool-education
Barnett, W. S., & Frede, E. C. (2011). Preschool education’s effects on language and literacy. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 435-449). New York, NY: The Guilford Press.
Bartik, T. (2012, March). Investing in kids: Early childhood programs and local economic development. Grand Haven Chamber of Commerce, Grand Haven, MI.
Bayat, M., Mindes, G., & Covitt, S. (2010). What does RTI (Response to Intervention) look like in preschool? Early Childhood Education Journal, 37, 493-500. doi: 10.1007/s10643-010-0372-6
Beck, I. L., McKeown, M. G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York: Guilford Press.
Bradley, L., & Bryant, P. (1985). Rhyme and reason in reading and spelling. Ann Arbor: University of Michigan Press.
Browder, D. M., Ahlgrim-Delzell, L., Courtade, G., Gibbs, S. L., & Flowers, C. (2008). Evaluation of the effectiveness of an early literacy program for students with significant developmental disabilities. Exceptional Children, 75(1), 33-52.
Bulotsky-Shearer, R. J., & Fantuzzo, J. W. (2011). Preschool behavior problems in classroom learning situations and literacy outcomes in kindergarten and first grade. Early Childhood Research Quarterly, 26, 61-73. doi: 10.1016/j.ecresq.2010.04.004
Bunn, R., Burns, M. K., Hoffman, H. H., & Newman, C. L. (2005). Using incremental rehearsal to teach letter identification with a preschool-age child. Journal of Evidence-Based Practices for Schools, 6(2), 124-133.
Burger, K. (2010). How does early childhood care and education affect cognitive development? An international review of the effects of early interventions for children from different social backgrounds. Early Childhood Research Quarterly, 25, 140-165. doi: 10.1016/j.ecresq.2009.11.001
Buysse, V., & Peisner-Feinberg, E. (2009). Recognition and response (R&R) implementation sites in Florida and Maryland. In M. R. Coleman, F. P. Roth, & T. West (Eds.), Roadmap to Pre-K RtI. National Center for Learning Disabilities. Retrieved from http://www.ncld.org
Cabell, S. Q., Justice, L. M., Konold, T. R., & McGinty, A. S. (2011). Profiles of emergent literacy skills among preschool children who are at risk for academic difficulty. Early Childhood Research Quarterly, 26, 1-14. doi: 10.1016/j.ecresq.2010.05.003
Cabell, S. Q., Justice, L. M., Zucker, T., & Kilday, C. R. (2009). Validity of teacher report for assessing the emergent literacy skills of at-risk preschoolers. Language, Speech & Hearing Services in Schools, 40(2), 161-173.
Camilli, G., Vargas, S., Ryan, S., & Barnett, W. S. (2010). Meta-analysis of the effects of early education interventions on cognitive and social development. Teachers College Record, 112(3), 579-620.
Carta, J. J., Atwater, J., & Bradfield, T. (2010, February). What’s happening in tier one early literacy instruction? Examination of the foundations of RtI in pre-k. Panel presentation at the 7th biennial Conference on Research Innovations in Early Intervention (CREI), San Diego, CA.
Carta, J. J., & Buysse, V. (2008). What do we know and what does it look like? Response to intervention (RTI) in early childhood. Presentation at the OSEP National Early Childhood Conference, Washington, DC.
Carta, J. J., Greenwood, C. R., & Atwater, J. (2010). Tier 1 instruction in early education classrooms: Implications for response to intervention. Presentation at the RtI Early Childhood Summit, Kansas City, MO.
Center for the Improvement of Early Reading Achievement. (2001). Put reading first. Retrieved from http://www.nationalreadingpanel.org/Publications/researchread.htm
Center for Response to Intervention in Early Childhood. (2011). RTI Early Childhood Summit, Santa Ana Pueblo, NM.
Children’s Learning Institute. (2010). Developing Talkers: PreK. Texas: University of Texas.
Christie, J. F. (2008). The scientifically based reading research approach to early literacy instruction. In L. M. Justice & C. Vukelich (Eds.), Achieving excellence in preschool literacy instruction (pp. 25-40). New York: Guilford.
Clements, D. H., & Sarama, J. (2003). DLM early childhood express math resource guide. Columbus, OH: SRA/McGraw-Hill.
Coleman, M. R., Roth, F. P., & West, T. (2009). Roadmap to pre-K RtI. In M. R. Coleman, F. P. Roth, & T. West (Eds.), Roadmap to Pre-K RtI. National Center for Learning Disabilities. Retrieved from http://www.ncld.org
Cosgrove, M., Fountain, C., Wehry, S., Wood, J., & Kasten, K. (2006). Randomized field trial of an early literacy curriculum and instructional support system. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
Cunningham, D. D. (2010). Relating preschool quality to children’s literacy development. Early Childhood Education Journal, 37, 501-507. doi: 10.1007/s10643-009-0370-8
Cunningham, A. E., & Stanovich, K. E. (1997). Early reading acquisition and its relation to reading experience and ability 10 years later. Developmental Psychology, 33(6), 934-945.
Denton, C. A., Nimon, K., Mathes, P. G., Swanson, E. A., Kethley, C., Kurz, T. B., & Shih, M. (2010). Effectiveness of a supplemental reading intervention scaled up in multiple schools. Exceptional Children, 76(4), 394-416.
Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in reading intervention to scale. Learning Disabilities Research & Practice, 18(3), 201-211. doi: 10.1111/1540-5826.00075
Dial, A. R., & Payne, R. L. (2010). Recasting the role of family involvement in early literacy development: A response to the NELP report. Educational Researcher, 39(4), 330-33.
Diamond, K. E., Gerde, H. K., & Powell, D. R. (2008). Development in early literacy skills during the pre-kindergarten year in Head Start: Relations between growth in children’s writing and understanding of letters. Early Childhood Research Quarterly, 23, 467-478. doi: 10.1016/j.ecresq.2008.05.002
Dickinson, D. K., & Darrow, C. L. (2013). Methodological and practical challenges of curriculum-based language interventions. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 195-216). Baltimore, MD: Paul H. Brookes Publishing Company.
Dickinson, D. K., Golinkoff, R. M., & Hirsh-Pasek, K. (2010). Speaking out for language: Why language is central to reading development. Educational Researcher, 39(4), 305-310. doi: 10.3102/0013189x10370204
Dickinson, D. K., McCabe, A., & Sprague, K. (2003). Teacher rating of oral language and literacy: Early literacy instruction with a standards-based rating tool. The Reading Teacher, 56(5), 554-564.
Dion, E., Brodeur, M., Gosselin, C., Campeau, M., & Fuchs, D. (2010). Implementing research-based instruction to prevent reading problems among low-income students: Is earlier better? Learning Disabilities Research and Practice, 25(2), 87-96. doi: 10.1111/j.1540-5826.2010.00306.x
Dodge, D. T., Colker, L. J., & Heroman, C. (2002). The creative curriculum for preschool (4th ed.). Washington, DC: Teaching Strategies, Inc.
Dynamic Measurement Group. (2010). DIBELS Next benchmark goals and composite scores. Retrieved from https://dibels.org/response_to_uo_goals.html
Dynamic Measurement Group. (2011). DIBELS Next. Retrieved from https://dibels.org/next/index.php
Elliot, E. M., & Olliff, C. B. (2008). Developmentally appropriate emergent literacy activities for young children: Adapting the early literacy and learning model. Early Childhood Education Journal, 35, 551-556. doi: 10.1007/s10643-007-0232-1
Englert, C. S., & Mariage, T. (in press). The sociocultural model in instructional intervention research in literacy.
Frey, N., & Fisher, D. (2010). Reading and the brain: What early childhood educators need to know. Early Childhood Education Journal, 38, 103-110. doi: 10.1007/s10643-010-0387-z
Fuchs, L. S. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18(3), 172-186. doi: 10.1111/1540-5826.00073
Fuchs, D., & Fuchs, L. S. (2005). Peer-assisted learning strategies: Promoting word recognition, fluency, and reading comprehension in young children. The Journal of Special Education, 39(1), 34-44. doi: 10.1177/00224669050390010401
Fuchs, D., Fuchs, L. S., Al Otaiba, S., Thompson, A., Yen, L., McMaster, K. N., . . . Yang, N. J. (2001). K-PALS: Helping kindergarteners with reading readiness: Teachers and researchers in partnerships. Teaching Exceptional Children, 33(4), 76-80.
Fuchs, D., Fuchs, L., Mathes, P. G., & Simmons, D. C. (1997). Peer-assisted learning strategies: Making classrooms more responsive to diversity. American Educational Research Journal, 34(1), 174-206. doi: 10.3102/00028312034001174
Fuchs, D., Fuchs, L. S., & Stecker, P. M. (2010). The “blurring” of special education in a new continuum of general education placements and services. Exceptional Children, 76(3), 301-323.
Fuchs, D., Fuchs, L., Thompson, A., Al Otaiba, S., Yen, L., Yang, N., Braun, M., & O’Connor, R. (2001). Is reading important in reading-readiness programs? A randomized field trial with teachers as program implementers. Journal of Educational Psychology, 93(2), 251-267. doi: 10.1037/0022-0663.93.2.251
Gee, J. (1990). Social linguistics and literacies. London: Falmer Press.
Genishi, C., & Dyson, A. H. (2009). Children, language, and literacy. New York: Teachers College Press.
Gettinger, M., & Stoiber, K. (2007). Applying a response-to-intervention model for early literacy development in low-income children. Topics in Early Childhood Special Education, 27(4), 198-213.
Gorey, K. M. (2001). Early childhood education: A meta-analytic affirmation of the short- and long-term benefits of educational opportunity. School Psychology Quarterly, 16(1), 9-30. doi: 10.1521/scpq.16.1.9.19163
Greenwood, C. R. (2009). Foreword. In M. R. Coleman, F. P. Roth, & T. West (Eds.), Roadmap to Pre-K RtI. National Center for Learning Disabilities. Retrieved from http://www.ncld.org
Greenwood, C. R., Bradfield, T., Kaminski, R., Linas, M., Carta, J. J., & Nylander, D. (2011). The response to intervention (RTI) approach in early childhood. Focus on Exceptional Children, 43(9), 1-22.
Greenwood, C. R., Carta, J., Goldstein, H., Kaminski, R., & McConnell, S. (2009). Center for Response to Intervention in Early Childhood (CRTIEC) consortium partner states: Kansas, Minnesota, Ohio, Oregon. In M. R. Coleman, F. P. Roth, & T. West (Eds.), Roadmap to Pre-K RtI. National Center for Learning Disabilities. Retrieved from http://www.ncld.org
Gregory, A. E., & Cahill, M. A. (2010). Kindergartners can do it, too! Comprehension strategies for early readers. The Reading Teacher, 63(6), 515-520. doi: 10.1598/RT.63.6.9
Grisham-Brown, J., Pretti-Frontczak, K., Hawkins, S. R., & Winchell, B. N. (2009). Addressing early learning standards for all children within blended preschool classrooms. Topics in Early Childhood Special Education, 29(3), 131-142. doi: 10.1177/0271121409333796
Good, R., & Kaminski, R. (2011). Dynamic Indicators of Basic Early Literacy Skills, Next. Eugene, OR: Dynamic Measurement Group.
Hair, E., Halle, T., Terry-Humen, E., Lavelle, B., & Calkins, J. (2006). Children’s school readiness in the ECLS-K: Prediction to academic, health, and social outcomes in first grade. Early Childhood Research Quarterly, 21, 431-454. doi: 10.1016/j.ecresq.2006.09.005
Hamre, B. K., Justice, L. M., Pianta, R. C., Kilday, C., Sweeney, B., Downer, J. T., & Leach, A. (2010). Implementation fidelity of My Teaching Partner literacy and language activities: Association with preschoolers’ language and literacy growth. Early Childhood Research Quarterly, 25, 329-347. doi: 10.1016/j.ecresq.2009.07.002
Hargrave, A. C., & Senechal, M. (2000). A book reading intervention with preschool children who have limited vocabularies: The benefits of regular reading and dialogic reading. Early Childhood Research Quarterly, 15, 75-90. ISSN: 0885-2006
Harms, T., Clifford, R. M., & Cryer, D. (1998). Early childhood environment rating scale–Revised edition. New York: Teachers College Press.
Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Brookes.
Hatcher, P., Hulme, C., & Ellis, A. (1994). Ameliorating early reading failure by integrating the teaching of reading and phonological skills. Child Development, 65(1), 41-57.
Hawken, L. S., Johnston, S. S., & McDonnell, A. P. (2005). Emerging literacy views and practices: Results from a national survey of Head Start preschool teachers. Topics in Early Childhood Special Education, 25(4), 232-242. doi: 10.1177/02711214050250040401
Heckman, J. J. (2000). Invest in the very young. Chicago: Ounce of Prevention and the University of Chicago. Retrieved from http://www.ounceofprevention.org/downloads/publications/Heckman.pdf
Helm, J. H., & Katz, L. (2001). Young investigators: The project approach in the early years. New York: Teachers College Press.
Henry, A. E., & Pianta, R. C. (2011). Effective teacher-child interactions and children’s literacy: Evidence for scalable, aligned approaches to professional development. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 308-321). New York, NY: The Guilford Press.
Hindman, A. H., Connor, C. M., Jewkes, A. M., & Morrison, F. J. (2008). Untangling the effects of shared book reading: Multiple factors and their associations with preschool literacy outcomes. Early Childhood Research Quarterly, 23, 330-350. doi: 10.1016/j.ecresq.2008.01.005
Hirsh-Pasek, K., & Golinkoff, R. M. (2011). The great balancing act: Optimizing core curricula through playful pedagogy. In E. Zigler, W. S. Gilliam, & W. S. Barnett (Eds.), The pre-k debates: Current controversies and issues.
Holdaway, D. (1979). The foundations of literacy. Sydney, Australia: Ashton Scholastic.
Invernizzi, M., Landrum, T. J., Teichman, A., & Townsend, M. (2010). Increased implementation of emergent literacy screening in pre-kindergarten. Early Childhood Education Journal, 37, 437-446. doi: 10.1007/s10643-009-0371-7
Jackson, S., Pretti-Frontczak, K., Harjusola-Webb, S., Grisham-Brown, J., & Romani, J. M. (2009). Response to intervention: Implications for early childhood professionals. Language, Speech, and Hearing Services in Schools, 40(4), 424-435.
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80(4), 437-447.
Justice, L. M., Bowles, R. P., Pence Turnbull, K. L., & Skibbe, L. E. (2009). School readiness among children with varying histories of language difficulties. Developmental Psychology, 45, 460-476. doi: 10.1037/a0014324
Justice, L. M., Chow, S., Capellini, C., Flanigan, K., & Colton, S. (2003). Emergent literacy intervention for vulnerable preschoolers: Relative effects of two approaches. American Journal of Speech-Language Pathology, 12(3), 320-332.
Justice, L. M., & Ezell, H. K. (1999). Vygotskian theory and its application to language assessment: An overview for speech-language pathologists. Contemporary Issues in Communication Science and Disorders, 26, 111-118.
Justice, L. M., & Ezell, H. K. (2000). Enhancing children’s print and word awareness through home-based parent intervention. American Journal of Speech-Language Pathology, 9(3), 257-269.
Justice, L. M., & Ezell, H. K. (2002). Use of storybook reading to increase print awareness in at-risk children. American Journal of Speech-Language Pathology, 11(1), 17-29.
Justice, L. M., Kaderavek, J. N., Fan, X., Sofka, A., & Hunt, A. (2009). Accelerating preschoolers’ early literacy development through classroom-based teacher-child storybook reading and explicit print referencing. Language, Speech, and Hearing Services in Schools, 40(1), 67-85.
Justice, L., McGinty, A., Piasta, S., Kaderavek, J., & Fan, X. (2010). Print-focused read-alouds in preschool classrooms: Intervention effectiveness and moderators of child outcomes. Language, Speech, and Hearing Services in Schools, 41(4), 504-520.
Justice, L. M., & Pullen, P. C. (2003). Promising interventions for promoting emergent literacy skills: Three evidence-based approaches. Topics in Early Childhood Special Education, 23(3), 99-113.
Kaminski, R., & Bravo-Aguayo, K. (2010). Preschool Early Literacy Indicators. Eugene, OR: Dynamic Measurement Group.
Kantor, R., Miller, S. B., & Fernie, D. E. (1992). Diverse paths to literacy in a preschool classroom: A sociocultural perspective. Reading Research Quarterly, 27(3), 184-201.
Kim, Y., Foorman, B. R., Petscher, Y., & Zhou, C. (2010). The contributions of phonological awareness and letter-name knowledge to letter-sound acquisition – A cross-classified multilevel model approach. Journal of Educational Psychology, 102(2), 313-326. doi: 10.1037/a0018449
Klein, A., Starkey, P., & Ramirez, A. B. (2002). Pre-K mathematics curriculum. United States of America: Scott Foresman.
Koutsoftas, A. D., Harmon, M. T., & Shelley, G. (2009). The effect of tier 2 intervention for phonemic awareness in a response-to-intervention model. Language, Speech & Hearing Services in Schools, 40(2), 116-130.
Landry, S. H. (2011). Tier one: Key components of effective tier one instruction: Supporting young children’s language and literacy development. Presentation at The Preschool Response to Intervention Summit, Santa Ana Pueblo, NM.
Landry, S. H., Crawford, A., Gunning, S., & Swank, P. (2002). Teacher behavior rating scale. Center for Improving the Readiness of Children for Learning and Education: Unpublished research document.
Landry, S. H., Swank, P. R., Assel, M. A., & King, T. (2009). The CIRCLE phonological awareness language and literacy system (C-PALS): Technical manual. Children’s Learning Institute: Unpublished research.
Loftus, S. M., Coyne, M. D., McCoach, B., Zipoli, R., & Pullen, P. C. (2010). Effects of a supplemental vocabulary intervention on the word knowledge of kindergarten students at risk for language and literacy difficulties. Learning Disabilities Research & Practice, 25(3), 124-136. doi: 10.1111/j.1540-5826.2010.00310.x
Lonigan, C. J. (2006). Development, assessment, and promotion of literacy skills. Early Education and Development, 17(1), 294-311. doi: 10.1207/s15566935eed1701_5
Lonigan, C. J., & Cunningham, A. E. (2013). Significant differences: Identifying the evidence base for promoting children’s early literacy skills in early childhood education. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 161-194). Baltimore, MD: Paul H. Brookes Publishing Company.
Lonigan, C. J., & Shanahan, T. (2013). Reflections on the National Early Literacy Panel: Looking back and moving forward. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 273-303). Baltimore, MD: Paul H. Brookes Publishing Company.
Lonigan, C. J., & Farver, J. M. (2002). Literacy express: A preschool emergent literacy curriculum. Tallahassee, FL: Author.
Mannes, T. (2011). The effect of tier one literacy practices on preschoolers’ emerging literacy skills. Unpublished practicum research, Michigan State University, East Lansing, MI.
Mashburn, A. (2008). Evidence for creating, expanding, designing, and improving high-quality preschool programs. In L. M. Justice & C. Vukelich (Eds.), Achieving excellence in preschool literacy instruction (pp. 5-24). New York: Guilford.
Mathes, P. G., Clancy-Manchetti, J., & Torgesen, J. K. (2001). Kindergarten Peer Assisted Literacy Strategies. Longmont, CO: Sopris West Educational Services.
McDonald, C., & Figueredo, L. (2010). Closing the gap early: Implementing a literacy intervention for kindergarteners in urban schools. The Reading Teacher, 63(5), 404-419. doi: 10.1598/RT.63.5.6
McGinty, A. S., Breit-Smith, A., Fan, X., Justice, L. M., & Kaderavek, J. N. (2011). Does intensity matter? Preschoolers’ print knowledge development with a classroom-based intervention. Early Childhood Research Quarterly, 26, 255-267. doi: 10.1016/j.ecresq.2011.02.002
McKey, R. H., Condelli, L., Ganson, H., Barrett, B. J., McConkey, C., & Plantz, M. C. (1985). The impact of Head Start on children, families and communities. U.S. Department of Health and Human Services. Stock No. D17-092-00098-7
McMaster, K. L., Kung, S., Han, I., & Cao, M. (2008). Peer-assisted learning strategies: A “Tier 1” approach to promoting English learners’ response to intervention. Exceptional Children, 74(2), 194-214.
Missall, K. N., McConnell, S. R., & Cadigan, K. (2006). Early literacy development: Skill growth and relations between classroom variables for preschool children. Journal of Early Intervention, 29(1), 1-21. doi: 10.1177/105381510602900101
Michigan Board of Education. (2001). Michigan Literacy Progress Profile. Lansing, MI: Central Michigan University Printing Office.
Michigan State Board of Education. (2005). Early Childhood Standards of Quality for Prekindergarten. Lansing, MI: Author.
National Association for the Education of Young Children (NAEYC). (2009). Developmentally appropriate practice and intentionality: Big ideas. Washington, DC: Author. Retrieved from http://www.naeyc.org/files/naeyc/file/ecprofessional/DAP%20and%29Intentionality%20Big%20Ideas.pdf
National Association for the Education of Young Children (NAEYC) and International Reading Association (IRA). (2009). Learning to read and write: Developmentally appropriate practices for young children. Washington, DC: Author. Retrieved from www.naeyc.org/positionstatements/learning_readwrite
National Association of State Directors of Special Education. (2006). Response to intervention: Policy considerations and implications. Alexandria, VA: Author.
National Institute for Literacy. (2008). Developing early literacy: Report of the National Early Literacy Panel. Retrieved from http://www.nifl.gov/earlychildhood/NELP/NELPshanahan.html
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute for Literacy.
Nelson, G., Westhues, A., & MacLeod, J. (2003). A meta-analysis of longitudinal research on preschool prevention programs for children. Prevention and Treatment, 6, 134.
Neuman, S. B. (2010). Lessons from my mother: Reflections on the National Early Literacy Panel report. Educational Researcher, 39(4), 301-304. doi: 10.3102/0013189x10370475
Neuman, S. B. (2011). The effects of vocabulary interventions for children at risk: Evidence from a meta-analytic review. Presentation at The Preschool Response to Intervention Summit, Santa Ana Pueblo, NM.
Neuman, S. B., & Dwyer, J. (2009). Missing in action: Vocabulary instruction in pre-k. The Reading Teacher, 62(5), 384-392. doi: 10.1598/RT.62.5.2
No Child Left Behind Act of 2001 (NCLB), Pub. L. No. 107-110, 115 Stat. 1425.
Notari-Syverson, A., O’Connor, R. E., & Vadasy, P. F. (1998). Ladders to literacy: A preschool activity book. Baltimore: Paul H. Brookes Publishing Company.
Paris, S. G. (2011). Developmental differences in early reading skills. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 228-241). New York, NY: The Guilford Press.
Pellin, B., & Edmonds, E. (2001). Bright Beginnings. Charlotte, NC: Charlotte-Mecklenburg Schools.
Pentimonti, J. M., Justice, L. M., & Piasta, S. B. (2013). Sharing books with children. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 117-134). Baltimore, MD: Paul H. Brookes Publishing Company.
Phillips, B. M., & Piasta, S. B. (2013). Phonological awareness and alphabet knowledge: Key precursors and instructional targets to promote reading success. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 95-116). Baltimore, MD: Paul H. Brookes Publishing Company.
Piasta, S. B., & Wagner, R. K. (2010). Developing early literacy skills: A meta-analysis of alphabet learning and instruction. Reading Research Quarterly, 45(1), 8-38. doi: 10.1598/RRQ.45.1.2
Pollard-Durodola, S. D., Gonzalez, J. E., Simmons, D. C., Kwok, O., Taylor, A. B., Davis, M. J., . . . Simmons, L. (2011). The effects of an intensive shared book reading intervention for preschool children at risk for vocabulary delay. Exceptional Children, 77(2), 161-183.
Powell, D. R., Diamond, K. E., Burchinal, M. R., & Koehler, M. J. (2010). Effects of an early literacy professional development intervention on Head Start teachers and children. Journal of Educational Psychology, 102(2), 299-312. doi: 10.1037/a0017763
Preschool Curriculum Evaluation Research Consortium. (2008). Effects of preschool curriculum programs on school readiness (NCER 2008-2009). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education. Washington, DC: U.S. Government Printing Office.
Pretti-Frontczak, K. (2011). Understanding the practical and system level recommendations for ensuring access and participation of all young children. Presentation at the Michigan Association for Administrators in Special Education, Lansing, MI.
Pullen, P. C., Tuckwiller, E. D., Konold, T. R., Maynard, K. L., & Coyne, M. D. (2010). A tiered intervention model for early vocabulary instruction: The effects of tiered instruction for young students at risk for reading disability. Learning Disabilities Research and Practice, 25(3), 110-123. doi: 10.1111/j.1540-5826.2010.00309.x
Rafdal, B. H., McMaster, K. L., McConnell, S. R., Fuchs, D., & Fuchs, L. S. (2011). The effectiveness of kindergarten peer-assisted learning strategies for students with disabilities. Exceptional Children, 77(3), 299-316.
Roth, F. P., Rogers, P., Michney, J., & Mahon, N. (2009). The literacy partnership implementation site, Washington, DC. In M. R. Coleman, F. P. Roth, & T. West (Eds.), Roadmap to Pre-K RtI. National Center for Learning Disabilities. Retrieved from http://www.ncld.org
Schneider, W., Roth, E., & Ennemoser, M. (2000). Training phonological skills and letter knowledge in children at risk for dyslexia: A comparison of three kindergarten intervention programs. Journal of Educational Psychology, 92(2), 284-295. doi: 10.1037/0022-0663.92.2.284
Scott-Little, C., Lesko, J., Martella, J., & Milburn, P. (2007). Early learning standards: Results from a national survey to document trends in state-level policies and practices. Retrieved from http://ecrp.uiuc.edu/v9n1/little.htm
Senechal, M. (2011). A model of the concurrent and longitudinal relations between home literacy and child outcomes. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 175-188). New York, NY: The Guilford Press.
Shanahan, T., & Lonigan, C. J. (2013). The National Early Literacy Panel: A summary report. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 1-20). Baltimore, MD: Paul H. Brookes Publishing Company.
Share, D. L. (2004). Knowing letter names and learning letter sounds: A causal connection. Journal of Experimental Child Psychology, 88, 213-233. doi: 10.1016/j.jecp.2004.03.005
Silverman, R., & Crandal, J. D. (2010). Vocabulary practices in prekindergarten and kindergarten. Reading Research Quarterly, 45(3), 318-340. doi: 10.1598/RRQ.45.3.3
Snow, C. E., Burns, M. S., & Griffin, P. (1999). Language and literacy environments in preschools. ERIC Digest. ERIC Identifier: ED426818
Spencer, E. J., Spencer, T. D., Goldstein, H., & Schneider, N. (2013). Identifying early literacy learning needs: Implications for child outcome standards and assessment systems. In T. Shanahan & C. J. Lonigan (Eds.), Early childhood literacy: The National Early Literacy Panel and beyond (pp. 45-70). Baltimore, MD: Paul H. Brookes Publishing Company.
SRA/McGraw-Hill. (2003a). DLM Early Childhood Express. DeSoto, TX: Author.
SRA/McGraw-Hill. (2003b). Open Court Reading Pre-K. DeSoto, TX: Author.
Success For All Foundation. (2003). Curiosity Corner. Baltimore: Author.
Sylva, K., Chan, L. S., Melhuish, E., Sammons, P., Siraj-Blatchford, I., & Taggart, B. (2011). Emergent literacy environments: Home and preschool influences on children’s literacy development. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 97-117). New York, NY: The Guilford Press.
Sylva, K., Melhuish, E., Sammons, P., Siraj-Blatchford, I., & Taggart, B. (2011). Preschool quality and educational outcomes at age 11: Low quality has little benefit. Journal of Early Childhood Research, 9(2), 109-124. doi: 10.1177/1476718x10387900
Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1997). Prevention and remediation of severe reading disabilities: Keeping the end in mind. Scientific Studies of Reading, 1(3), 217-234. doi: 10.1207/s1532799xssr0103_3
Tracey, D. H., & Morrow, L. M. (2006). Lenses on reading: An introduction to theories and models. New York, NY: Guilford Press.
Tuckwiller, E. D., Pullen, P. C., & Coyne, M. D. (2010). The use of the regression discontinuity design in tiered intervention research: A pilot study exploring vocabulary instruction for at-risk kindergarteners. Learning Disabilities Research & Practice, 25(3), 137-150. doi: 10.1111/j.1540-5826.2010.00311.x
U.S. Department of Education. (2002). Good Start, Grow Smart. Washington, DC: U.S. Government Printing Office.
VanDerHeyden, A. M. (2005). Intervention-driven assessment practices in early childhood/early intervention: Measuring what is possible rather than what is present. Journal of Early Intervention, 28(1), 28-33.
VanDerHeyden, A. M., & Snyder, P. A. (2006). Integrating frameworks from early childhood intervention and school psychology to accelerate growth for all young children. School Psychology Review, 35(4), 16.
VanDerHeyden, A. M., Snyder, P. A., Broussard, C., & Ramsell, K. (2007). Measuring response to early literacy intervention with preschoolers at risk. Topics in Early Childhood Special Education, 27(4), 232-249. doi: 10.1177/0271121407311240
Walker, D., Carta, J. J., Greenwood, C. R., & Buzhardt, J. F. (2008). Developmental indicators for progress monitoring and intervention decision making in early education. Exceptionality, 16, 33-47. doi: 10.1080/09362830701796784
Wang, C., & Algozzine, B. (2008). Effects of targeted intervention on early literacy skills of at-risk students. Journal of Research in Childhood Education, 22(4), 425-439.
Whitehurst, G. J., Epstein, J. N., Angell, A. L., Payne, A. C., Crone, D. A., & Fischel, J. E. (1994). Outcomes of an emergent literacy intervention in Head Start. Journal of Educational Psychology, 86(4), 542-555.
Whitehurst, G. J., & Lonigan, C. J. (1998). Child development and emergent literacy. Child Development, 69, 848-872.
Winter, S. M., & Kelley, M. F. (2008). Forty years of school readiness research: What have we learned? Childhood Education, 84(5), 260-266. doi: 10.1080/00094056.2008.10523022
Wood, J. (2002). Early Literacy and Learning Model. Jacksonville, FL: Florida Institute of Education and the University of North Florida.
Wright Group/McGraw-Hill. (2001). Doors to discovery: A new prekindergarten program. Bothell, WA: Wright Group/McGraw-Hill.
Wrighton, C., & Bradshaw, G. (1985). Zoo Phonics. Retrieved from www.zoophonics.com
Xu, Y., & Drame, E. (2008). Culturally appropriate context: Unlocking the potential of response to intervention for English language learners. Early Childhood Education Journal, 35, 305-311. doi: 10.1007/s10643-007-0213-4
Zucker, T. A., Solari, E. J., Landry, S. H., & Swank, P. R. (in press). Early language intervention for pre-kindergarteners at risk: A tier one and tier two response to intervention pilot study. Early Education and Development.