EFFECTS OF SELF- OR PEER-REFERENCED CBM FEEDBACK ON ORAL READING FLUENCY AND SELF-EFFICACY OF STRUGGLING READERS

By

Kristen Girard

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

School Psychology–Doctor of Philosophy

2014

ABSTRACT

EFFECTS OF SELF- OR PEER-REFERENCED CBM FEEDBACK ON ORAL READING FLUENCY AND SELF-EFFICACY OF STRUGGLING READERS

By

Kristen Girard

Achievement data from national assessments have highlighted that many students in the United States are still failing to meet proficiency standards in reading (U.S. Department of Education, 2011). Many school districts are using curriculum-based measurements (CBMs) in reading, such as Oral Reading Fluency, to collect data on student progress to help guide the use of evidence-based instruction and intervention (Dorn, 2010). These data are shared with school staff, and even parents, but are not necessarily shared with students directly. Sharing these CBM data with students would fit easily into current school procedures, and research shows that some types of performance feedback can be motivating for students (Kluger & DeNisi, 1996).

The motivation of students who are struggling in reading is especially important to consider. Students who experience reading failure early may be less motivated to read and may develop negative attitudes towards reading (Oka & Paris, 1986). Students' beliefs and attitudes, especially those regarding their own skills, can influence their future behavior and performance. Self-efficacy, or a student's self-perception of competence, can affect a student's motivation and subsequent performance (Bandura, 1982). Students who lack confidence in their reading abilities put forth less effort and concentration (Dweck & Bempechat, 1983) and engage in less reading (Cox & Guthrie, 2001). Interventions designed to promote learning may be particularly successful if they address the low self-efficacy of students struggling in reading in addition to students' reading skills.

One way to improve students' self-efficacy is to provide performance feedback. The current study focused on what type of CBM feedback might improve students' reading skills and their self-efficacy. Performance feedback can help improve the reading achievement of students, but the research on CBM feedback is more limited (Alber-Morgan, Matheson Ramp, Anderson, & Martin, 2007; Chafouleas, Martens, Dobson, Weinstein, & Gardner, 2004; Eckert, Ardoin, Daly, & Martens, 2002) and has not explicitly considered student self-efficacy. Understanding the effects of certain types of CBM feedback on students' self-efficacy and reading skills could inform current school practices and potentially maximize the effects of evidence-based reading interventions on student learning.

The oral reading fluency and self-efficacy of six struggling second grade students were analyzed using a multiple-baseline-across-participants design. Students participated in a baseline phase and two intervention phases in which they received a repeated readings intervention and either self-referenced or peer-referenced feedback. It was hypothesized that self-referenced feedback would be related to greater growth in oral reading fluency as well as higher reported self-efficacy. Visual analysis and effect size calculations were used to examine the effects of the different types of feedback.
Students generally made progress in oral reading fluency rates over the intervention sessions, but for the majority of participants this progress was most apparent in whichever intervention phase occurred first. There was also considerable variability in student oral reading fluency performance in each phase. Data were insufficient to support functional relations between self-referenced feedback and greater growth in oral reading fluency or higher self-efficacy ratings and scores. Results are discussed in terms of the current literature on performance feedback and CBMs, and applied implications are provided.

ACKNOWLEDGEMENTS

I want to thank my advisor, Dr. Sara Witmer, for her support and guidance throughout the entire dissertation process. She spent many hours brainstorming potential topics, discussing current literature, reviewing drafts, providing constructive feedback, and writing letters of support for grants. I also want to thank my other dissertation committee members, Drs. Evelyn Oka, Joshua Plavnick, Gary Troia, and Peter Youngs, who provided help whenever I needed it, even when they were out of the state. This project would have been impossible without the dedication of my graduate research assistants, Danielle Balaghi, Jessica Hanold, and Heather Schmitt, who gave up many hours of their summer vacation to run intervention sessions with the participants. Finally, I want to express my extreme gratitude to my parents, David and Diane Girard, and to Matthew Schrauben, for their love and support during my entire graduate school career. I would not be where I am today without their encouragement and help during the past five years. This research was supported in part by grants from the Michigan State University College of Education and the Michigan State University Graduate School.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
KEY TO ABBREVIATIONS

CHAPTER 1: INTRODUCTION
    Purpose
    Background
    Importance
    Rationale
    Research Questions

CHAPTER 2: LITERATURE REVIEW
    Importance of Reading
    Reading Fluency
    Progress Monitoring and Curriculum-Based Measurements
    Motivation and Self-Efficacy
    Performance Feedback
        Performance Feedback & Reading Interventions
        Different Types and Forms of Performance Feedback
    Gap in the Research
    Research Questions and Hypotheses

CHAPTER 3: METHODS
    Design
    Participants
        Abby
        Becca
        Claire
        Diane
        Eric
        Frank
    Measures
        Oral Reading Fluency
        Motivations for Reading Questionnaire
        Achievement Goal Orientation Questionnaire
        Self-Efficacy Measure
        Reader Self-Perception Scale
    Procedures
        Reading Assessment
        Research Assistant Training
        Interscorer Agreement
        Intervention Integrity
        Initial Data Collection
        Baseline (BL) Phase
        Peer-Referenced Feedback (PRF) Phase
        Self-Referenced Feedback (SRF) Phase
    Data Analysis

CHAPTER 4: RESULTS
    Initial Student Data
        Abby
        Becca
        Claire
        Diane
        Eric
        Frank
    Interscorer Agreement
    Intervention Integrity
    Research Question 1
        Abby (Visual analysis; Effect size)
        Becca (Visual analysis; Effect size)
        Claire (Visual analysis; Effect size)
        Diane (Visual analysis; Effect size)
        Eric (Visual analysis; Effect size)
        Frank (Visual analysis; Effect size)
    Research Question 2
        Abby (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Becca (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Claire (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Diane (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Eric (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Frank (Self-Efficacy Ratings; Reader Self-Perception Scale)
    Research Question 3
        Abby (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Becca (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Claire (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Diane (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Eric (Self-Efficacy Ratings; Reader Self-Perception Scale)
        Frank (Self-Efficacy Ratings; Reader Self-Perception Scale)

CHAPTER 5: DISCUSSION
    Research Question 1
    Research Questions 2 & 3
    General Discussion
    Limitations and Future Research
    Implications for Practice
    Conclusions

APPENDICES
    Appendix A: Initial Parent Consent Form
    Appendix B: Parent Demographic Survey
    Appendix C: Final Parent Consent Form
    Appendix D: Student Assent Form
    Appendix E: Motivation for Reading Questionnaire
    Appendix F: Achievement Goal Orientation Questionnaire
    Appendix G: Self-Efficacy Rating
    Appendix H: Reader Self-Perception Scale
    Appendix I: Script for Peer-Referenced Feedback Phase
    Appendix J: Script for Self-Referenced Feedback Phase
    Appendix K: Intervention Integrity Checklists
    Appendix L: Intervention Quality Ratings

REFERENCES

LIST OF TABLES

Table 1 Student Demographics
Table 2 Average Student Ratings on Motivation for Reading Questionnaire (MRQ)
Table 3 Student Achievement Goal Orientation Questionnaire (AGOQ) Data
Table 4 Average Percentage Intervention Integrity and Average Intervention Quality Ratings
Table 5 Oral Reading Fluency (Words Read Correctly in One Minute)
Table 6 Variability in Oral Reading Fluency (Words Read Correctly in One Minute)
Table 7 Oral Reading Fluency Accuracy (Percentage of Words Read Correctly)
Table 8 Results of Improvement Rate Difference (IRD) Visual Analysis and Calculations for PRF Phase
Table 9 Results of Improvement Rate Difference (IRD) Visual Analysis and Calculations for SRF Phase
Table 10 Self-Efficacy Ratings
Table 11 Reader Self-Perception Scale Data
Table 12 Reader Self-Perception Scale Score Interpretation (Based on Henk & Melnick, 1995)
Table 13 Self-Efficacy Ratings
Table 14 Reader Self-Perception Scale Data for Abby
Table 15 Reader Self-Perception Scale Data for Becca
Table 16 Reader Self-Perception Scale Data for Claire
Table 17 Reader Self-Perception Scale Data for Diane
Table 18 Reader Self-Perception Scale Data for Eric
Table 19 Reader Self-Perception Scale Data for Frank

LIST OF FIGURES

Figure 1. Visual Conceptual Framework
Figure 2. Sample Multiple Baseline Design
Figure 3. Flowchart for Screening and Recruitment Procedures
Figure 4. Sample Feedback Graph for Peer-Referenced Feedback Phase
Figure 5. Sample Feedback Graph for Self-Referenced Feedback Phase
Figure 6. Oral Reading Fluency (Number of Words Read Correctly) for Abby, Becca, and Claire
Figure 7. Oral Reading Fluency (Number of Words Read Correctly) for Diane, Eric, and Frank
Figure 8. Oral Reading Accuracy (Percentage of Words Read Correctly) for Abby, Becca, and Claire
Figure 9. Oral Reading Accuracy (Percentage of Words Read Correctly) for Diane, Eric, and Frank
Figure 10. Self-Efficacy Ratings for Abby, Becca, and Claire
Figure 11. Self-Efficacy Ratings for Diane, Eric, and Frank
Figure 12. Reader Self-Perception Scale Data for Abby, Becca, and Claire
Figure 13. Reader Self-Perception Scale Data for Diane, Eric, and Frank

KEY TO ABBREVIATIONS

AGOQ    Achievement Goal Orientation Questionnaire
AYP     Adequate Yearly Progress
CBM     Curriculum-based measurement
DIBELS  Dynamic Indicators of Basic Early Literacy Skills
DORF    DIBELS Next Oral Reading Fluency
IDEA    Individuals with Disabilities Educational Improvement Act
IRD     Improvement Rate Difference
MRQ     Motivation for Reading Questionnaire
MTSS    Multi-Tiered Systems of Support
NAEP    National Assessment of Educational Progress
NCLB    No Child Left Behind
PRF     Peer-Referenced Feedback
R-CBM   Reading Curriculum-based measurement
RSPS    Reader Self-Perception Scale
RtI     Response to Intervention
SCD     Single-case design
SRF     Self-Referenced Feedback

CHAPTER 1: INTRODUCTION

Purpose

The purpose of the current study was to examine the effects of a skill-based reading intervention (i.e., repeated readings) in combination with performance-based feedback on the oral reading fluency and self-efficacy of elementary students struggling in reading. This research examined whether sharing curriculum-based measurement data directly with students in a certain format improved their performance and self-beliefs about their competence. Specifically, it was hypothesized that student performance and self-efficacy in reading would improve more when the feedback was based on individual performance over time than when it was based on peer-based norms. The focus on individual performance may foster a mastery goal orientation, which is related to more positive student outcomes. A multiple-baseline-across-participants design with two feedback phases was used to examine the effects of a repeated readings intervention in combination with two different types of performance feedback: self-referenced or peer-referenced. The oral reading fluency rates and self-ratings of self-efficacy for second grade students struggling with reading fluency were examined across the baseline and the two intervention/feedback phases.

Background

Many children in the United States are failing to meet expected levels of proficiency in reading (U.S. Department of Education, 2011). According to the National Center for Education Statistics' Nation's Report Card, only 34% of fourth grade students and 34% of eighth grade students demonstrated expected proficiency levels on national assessments of reading (U.S. Department of Education, 2011). Data like these, from national and state assessments, have brought attention to student achievement levels and to those students who are struggling in reading. Concurrently, federal educational initiatives and educational support systems have been developed in order to improve student reading achievement.
This legislation and these systems were created in order to hold schools accountable for student achievement and to promote the use of evidence-based instruction, intervention, and assessment in order to improve teaching and learning. One such piece of legislation was the No Child Left Behind Act (NCLB) of 2001, also known as the Elementary and Secondary Education Act. This act requires public schools that receive federal funding to administer standardized tests annually to show that students are meeting Adequate Yearly Progress (AYP) targets in academic areas such as reading. The act also allocates funds to state and local educational agencies to establish early literacy programs for kindergarten through third grade students. These programs must be based on scientific research to ensure that all students are reading on grade level by the end of third grade. Additionally, NCLB provides funds to help school districts and state departments select or administer reading assessments for screening, diagnostic, or instructional purposes. These assessment tools should be brief but still scientifically based. Many school districts are collecting reading data using curriculum-based measurements (CBMs) for these purposes (Dorn, 2010).

CBMs are also crucial in Response to Intervention (RtI) or Multi-Tiered Systems of Support (MTSS) models, which are being used in schools to guide general education service delivery and special education eligibility (Wanzek & Vaughn, 2008). Within this type of tiered model, the progress of students receiving additional support is monitored weekly (Wanzek & Vaughn, 2008). This progress monitoring can provide school professionals and parents with data regarding student improvement or lack thereof. These data are often represented graphically, and they illustrate the student's progress over time and performance relative to grade-based norms (Deno, 1992).

The Individuals with Disabilities Educational Improvement Act of 2004 (IDEA 2004) indicated that a lack of response to intervention (RtI) could be used as a means to identify students with specific learning disabilities. Many schools are using RtI frameworks to provide tiered levels of research-based instruction and intervention to meet students' needs and to use their responses to make further educational decisions (Batsche et al., 2005). As of December 2007, 37 states had begun implementing RtI on a statewide level (Berkeley, Bender, Peaster, & Saunders, 2009). RtI models can differ somewhat from district to district, or from state to state, but all RtI models are similar in that they involve providing individual students with research-based interventions targeted to students' needs and with consistent monitoring of student progress (Berkeley et al., 2009).

With a heightened focus on implementing evidence-based practices in reading, school districts are seeking to implement research-based reading interventions and to use research to guide instructional programming more than ever before. Many research-based reading interventions focus on providing students with instruction in basic reading skills; however, information on students' beliefs, and specifically their self-efficacy in reading, may also be important to consider.

Albert Bandura's social cognitive theory emphasizes how a combination of factors, both internal and external, influences human motivation and behavior. Behavior can be explained through the interaction of cognitive, behavioral, and environmental factors (Bandura, 1986).
Humans are influenced by their own direct experiences and by observing the experiences of others; these experiences are then analyzed and transformed through cognition (Bandura, 1977a). Any given individual has beliefs about whether a certain behavior will lead to a certain outcome (i.e., outcome expectations) and about whether the individual can perform in a way that will produce the outcome (i.e., efficacy expectations; Bandura, 1977a). These expectations, or anticipations, can motivate individuals and affect how much effort they will put forth, as well as how they will cope and persist when faced with difficulties (Bandura, 1977a).

Bandura (1977b) explored these efficacy expectations and outlined a theoretical framework specifically for the concept of self-efficacy. Self-efficacy, or a person's perceived competence in performing a certain activity, takes into account four main factors: performance, observational comparison (i.e., vicarious experience), social feedback (i.e., verbal persuasion), and physiological states (i.e., emotional arousal; Bandura, 1977b; Henk & Melnick, 1995). These four sources inform the judgments that individuals make about their own abilities to perform tasks, which can then affect motivation and subsequent performance, which in turn inform future beliefs (Bandura, 1982).

These theories about human behavior and cognition have a wide range of applications, but they could be especially useful in considering the motivation and behavior of children in schools. Students' mental states are an important component in the learning process, especially when they are expected to perform to demonstrate their learning. "How individuals interpret the results of their performance attainments informs and alters their environments and their self-beliefs, which in turn inform and alter their subsequent performance" (Pajares, 1996, p. 544). Research has highlighted the link between self-efficacy beliefs and achievement by examining individuals' self-efficacy judgments, choices of activities, amounts of effort, and persistence and avoidance behaviors (Bandura & Schunk, 1981; Schunk, 1984).

These theories may be used to explain the motivation and behavior of students who are struggling to read at an early age. Students who struggle in reading may have low self-efficacy, or low perceived competence in their own abilities to perform certain tasks, and therefore behave in ways that maintain their difficulties in reading. For example, if students experience repeated failure during reading tasks, their beliefs and expectancies about reading can become more negative. Students are then less likely to put forth effort and concentration on the tasks (Dweck & Bempechat, 1983) and tend to engage in less reading, leaving them fewer opportunities to improve. Bandura's theories can be used not only to explain the motivation and behavior of students struggling in reading, but also potentially to inform strategies and interventions that promote better learning by increasing students' sense of self-efficacy. Efforts to help struggling readers improve may be particularly successful if they address the negative self-beliefs that students adopt in addition to addressing skill development (Chapman & Tunmer, 2010).
Importance

In addition to using research-based reading interventions to improve student outcomes, research is needed to identify how student learning can be improved efficiently, helping students improve quickly without placing additional burdens on general classroom teachers. Providing students with feedback is one way to promote student learning without taking much additional time from teachers. In particular, feedback based on student progress on CBMs in reading could easily fit into the current frameworks and procedures used by schools.

Many schools are already collecting CBM data on student progress as part of RtI or MTSS models. Student progress is typically monitored when students are provided with research-based interventions designed to improve their skills. These data are then shared with teachers, parents, and other school staff to inform instructional and placement decisions. Sharing these data directly with students may be an efficient way to enhance the positive effects on struggling readers: by improving students' self-efficacy, such feedback could help them benefit more from skill-based interventions. It would be useful to know how these data should be shared with students to enhance the learning outcomes of those who receive evidence-based reading interventions.

Rationale

Providing feedback to students will not automatically improve student learning. In fact, there is some evidence to suggest that certain types of feedback have no effect on achievement, and that other types can actually negatively affect learning (Kluger & DeNisi, 1996). The feedback teachers most often provide in the classroom is corrective feedback about the current task (Hattie & Timperley, 2007). Approximately 90% of teachers' questions in classrooms seek to gather information about student task accomplishment (e.g., correctness, neatness, or behavior; Airasian, 1997). Teachers also often provide students with personal feedback in the classroom that contains little information about learning tasks (e.g., "good girl" or "great effort"; Hattie & Timperley, 2007). This type of praise or performance feedback has little effect on student achievement (Kluger & DeNisi, 1996) and rarely enhances student self-efficacy because it is uninformative for students (Hattie & Timperley, 2007). According to a review of feedback intervention studies, the effect size for feedback is higher when it provides information on correct responses (average effect size of 0.43) rather than incorrect responses (0.25), and when it provides feedback about changes from previous trials (0.55; Kluger & DeNisi, 1996). Praise about task performance has little effect (0.09).

Based on the results of this review and other research, Hattie and Timperley (2007) outlined a model of feedback based on the conditions that positively affect learning. They indicated that feedback should be provided in a way that answers three questions: Where am I going? How am I going? Where to next? Feedback that addresses these questions provides specific information regarding learning goals and progress towards meeting those goals (Hattie & Timperley, 2007). This type of feedback may be most beneficial for students because it fosters a mastery goal orientation, or the belief that an individual's progress is based on effort and success is based on improvement (Dweck, 1986). This is significantly different from a performance goal orientation, which instead focuses on social comparison and doing better or worse than others (Dweck, 1986).
Providing specific feedback on their performance may be one way to enhance students' self-efficacy, or feelings of competence, and enhance their learning. Considering the different effects of certain types of feedback, it is important to determine the type of CBM feedback that might be most beneficial to student learning. If feedback on CBM performance could be provided in a way that fosters a mastery goal orientation, focusing on improvement and feelings of competence, it might enhance students' self-efficacy and their learning. CBM data are often illustrated using graphs that show the student's scores over time and the grade-based norm (Deno, 1992). It is unknown whether sharing information about students' progress over time or their performance relative to peers is more beneficial to students who are struggling in reading.

The current study sought to explore the oral reading fluency and self-efficacy of second grade students struggling in reading when they were given a repeated readings intervention and different types of feedback on their CBM performance. Specifically, students were exposed to one condition in which the feedback was self-referenced and showed their current CBM score relative to all their previous CBM scores. The same students were exposed to a second condition in which the feedback was peer-referenced and showed only the current CBM score relative to the grade-based norm. The self-referenced feedback focused on individual performance, which aligns with a mastery goal orientation; the peer-referenced feedback focused on peer comparison and aligned more with a performance goal orientation. Students' oral reading fluency rates and self-ratings of self-efficacy were examined to explore whether and how the repeated readings intervention with different types of feedback affected students' performance and feelings of competence. A visual conceptual framework illustrating the current study is presented in Figure 1.

Research Questions

The present study was intended to address three main research questions regarding students who are struggling with reading fluency. First, the study examined whether combining a repeated readings intervention with self-referenced performance feedback contributed to higher oral reading fluency rates than combining repeated readings with peer-referenced feedback. Second, the study examined whether the repeated readings intervention combined with performance feedback affected students' self-efficacy ratings. Finally, the study considered whether the self-referenced performance feedback contributed to higher self-efficacy ratings than the peer-referenced performance feedback.

[Figure 1. Visual Conceptual Framework: a diagram contrasting hypothesized outcomes of self-referenced feedback (effort; belief that success is related to effort; persistence; time reading; internalized goal setting; increased self-efficacy and the highest increase in oral reading rate), peer-referenced feedback (effort; belief that success is related to ability; competition; a moderate increase in oral reading rate), and no repeated readings intervention or performance feedback (little/no increase in oral reading rate, based on normal development and services received).]

CHAPTER 2: LITERATURE REVIEW

The following sections describe the importance of developing reading skills, reading fluency in particular, and how CBMs can be used to inform educators about student progress in reading.
Next, the motivation, and specifically the self-efficacy, of struggling readers is introduced in order to explain how this information might be used to improve current reading interventions. Finally, the research on the effects of performance feedback, including different types of feedback, on student achievement and motivation is presented. This literature highlights one way that educators might use CBM data as feedback to improve students' academic and psychological outcomes. The effect of different forms of CBM feedback on students' reading and self-efficacy has not been explored in existing research. The background literature described below stresses the need to consider the types of feedback (i.e., self-referenced and peer-referenced) provided to struggling students and the effects on their reading and feelings of competence.

Importance of Reading

Reading is a foundational skill that is important for accessing advanced learning opportunities. Yet according to the 2011 Nation's Report Card, only 34% of fourth grade students were performing at or above the Proficient reading achievement level on the National Assessment of Educational Progress (NAEP; U.S. Department of Education, 2011). This is concerning because early reading failure can have harmful and long-lasting effects on students (deBettencourt & Zigmond, 1990; Slavin, 1994; Slavin et al., 1994; Torgesen, 2002). For example, students who struggle to read in the first years of school are at risk for grade retention and placement in special education (Slavin, 1994). In addition, students' reading skills at the end of third grade predict whether students will graduate from high school (Slavin et al., 1994), with many students with learning disabilities in reading dropping out of school before graduating (deBettencourt & Zigmond, 1990).

Delays in reading development affect vocabulary development (Cunningham & Stanovich, 1998) and attitudes towards and motivation to read (Oka & Paris, 1986), and they lead to missed opportunities for the development of comprehension skills (Brown, Palincsar, & Purcell, 1986). Research has shown that lost practice opportunities make it difficult for students who struggle with reading during the first three years of elementary school ever to reach average levels of reading fluency (Torgesen, Rashotte, & Alexander, 2001). Cunningham and Stanovich (1998) found differences in the amount of out-of-school reading done by fifth grade students of different skill levels. They determined that a child reading at the 10th percentile may read the same number of words in a year that a child at the 90th percentile reads in just two days outside of school (Cunningham & Stanovich, 1998). Several longitudinal studies provide evidence "…that children who are poor readers at the end of first grade almost never acquire average-level reading skills by the end of elementary school" (Torgesen, 2002, p. 8).

Reading Fluency

One particularly important reading skill is fluency, or the ability to read quickly and accurately with proper expression (National Institute of Child Health and Human Development, 2000). Reading fluency includes one's reading rate, reading accuracy, and reading prosody (i.e., phrasing, intonation, rhythm). A historical discussion of oral reading by Rasinski and Hoffman (2003) highlighted that oral reading has been a part of classroom literacy instruction since colonial times, and the focus on fluency developed out of research from the 1970s through the 1990s.
A large survey of public elementary school teachers indicated that they spend a significant amount of classroom reading instruction time on activities that are linked to student fluency development (Baumann, Hoffman, Duffy-Hester, & Moon Ro, 2000). Teachers spent moderate to considerable amounts of instructional time on student oral reading, student silent reading, and reading aloud to students (Baumann et al., 2000). The National Research Council's report, Preventing Reading Difficulties in Young Children (1998), recommended that fluency instruction be part of the first through third grade curricula (Snow, Burns, & Griffin, 1998), and the National Reading Panel included an entire section on reading fluency in its research report (National Institute of Child Health and Human Development, 2000). The National Reading Panel indicated that developing readers should first be taught to recognize and manipulate sounds in isolation and in words (National Institute of Child Health and Human Development, 2000). When these skills are mastered, students should begin to identify common words automatically, and then instruction may focus on fluency with written text (National Institute of Child Health and Human Development, 2000).

Although some fluency measures allow a rater to indicate a student's prosody, or rhythm and intonation when reading, fluency is most commonly measured as the number of words read correctly per minute (Fuchs, Fuchs, Hosp, & Jenkins, 2001). This allows fluency to be quantified so that performance levels can easily be compared between students (Fuchs et al., 2001). In addition, it allows quantifying reading development within a single student to measure individual growth over time (Fuchs et al., 2001); a minimal worked example of this calculation appears at the end of this section. Norms have been established and published for oral reading fluency measures, starting with the middle of first grade or the beginning of second grade (Good & Kaminski, 2002; Hasbrouck & Tindal, 1992; Osborn, Lehr, & Hiebert, 2003).

In a theoretical, empirical, and historical summary of oral reading fluency, Fuchs and colleagues (2001) indicated that it is commonly used as an overall indicator of reading competence in research. More recently, researchers have found significant, strong correlations between CBM oral reading measures and other standard tests of reading achievement (Reschly, Busch, Betts, Deno, & Long, 2009). Specifically, oral reading fluency has been found to be a significant predictor of comprehension for White, African American, and American Indian elementary students (Hintze, Callahan, Matthews, Williams, & Tobin, 2002; Pearce & Gayle, 2009). CBM in reading (i.e., oral reading fluency) has a moderate to strong relationship with one state's high-stakes assessment measure and two norm-referenced, standardized achievement tests, with correlations close to or above .70 (Shapiro, Keller, Lutz, Santoro, & Hintze, 2006). Other research has confirmed the relationship between oral reading fluency and reading proficiency as measured by standardized reading assessments for students in first through third grade (Baker et al., 2008).

The National Institute of Child Health and Human Development's National Reading Panel (2000) identified fluency as an important component of reading, particularly because of its close relationship with reading comprehension. Students who read too slowly or without proper expression may not understand the meaning of what they read.
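To make the words-correct-per-minute metric concrete, the following minimal sketch computes the two quantities reported throughout this study: fluency (words read correctly in one minute) and accuracy (percentage of words read correctly). The function names and sample values are hypothetical illustrations, not the scoring procedure used in the study.

```python
def words_correct_per_minute(words_attempted: int, errors: int, seconds: float) -> float:
    """Convert a timed oral reading sample to words read correctly per minute."""
    words_correct = words_attempted - errors
    return words_correct * 60.0 / seconds

def accuracy_percentage(words_attempted: int, errors: int) -> float:
    """Percentage of attempted words that were read correctly."""
    return 100.0 * (words_attempted - errors) / words_attempted

# Hypothetical one-minute sample: 58 words attempted with 6 errors.
print(words_correct_per_minute(58, 6, 60))    # 52.0 words correct per minute
print(round(accuracy_percentage(58, 6), 1))   # 89.7 percent accuracy
```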
Although there has been some debate about the extent to which a fluency-based intervention can affect the primary goal of reading instruction, namely reading comprehension, there is some evidence that fluency instruction can have positive effects on student comprehension. For example, Chard, Vaughn, and Tyler (2002) indicated that growth in fluency was associated with growth in comprehension, even if an intervention did not specifically target comprehension. Studies of other fluency practice interventions have shown positive effects on the fluency and reading comprehension of elementary students compared to control students (O'Connor, White, & Swanson, 2007).

Repeated reading is one type of reading fluency intervention with an extensive research base (Therrien, 2004). Literature reviews of repeated reading studies have highlighted the improvements in the fluency and comprehension of students with and without learning disabilities (Kuhn & Stahl, 2003; Therrien, 2004). Kuhn and Stahl's (2003) review of remedial fluency research included 33 studies of repeated readings. In addition, the National Reading Panel (2000) analysis of studies on fluency development indicated that repeated reading and other guided repeated oral reading techniques are effective ways to improve reading fluency and reading achievement overall. According to the National Reading Panel analysis, repeated reading procedures have positive effects on students through fourth grade, with the highest effect on reading accuracy (mean effect size of 0.55), followed by reading fluency (mean effect size of 0.44) and reading comprehension (mean effect size of 0.35; National Institute of Child Health and Human Development, 2000).

Progress Monitoring and Curriculum-Based Measurements

Schools are responsible for assessing the development of students' reading skills and ensuring that they make progress to meet the required standards. Traditionally, teachers' academic assessments have not focused on student progress. A survey of over 900 elementary teachers' assessment and grading practices revealed that, on average, teachers place the most emphasis on academic performance, performance compared to a set scale of percentage correct, and specific learning objectives mastered (McMillan, Myran, & Workman, 2002). They put less emphasis on improvement of performance since the beginning of the year (McMillan et al., 2002). Another study surveyed 228 second, fifth, eighth, and eleventh grade teachers (Stiggins & Bridgeford, 1985). Seventy-nine percent of these teachers reported some level of use of teacher-made objective tests, 65% reported some use of published tests, 90% reported some use of structured performance assessments, and 97% reported some use of spontaneous performance assessments (Stiggins & Bridgeford, 1985). These assessments are often referred to as subskill mastery measures because they break the curriculum down into smaller domains that can be examined (Hintze, Christ, & Methe, 2006). When students master one domain, the teacher can move on to the next (Hintze et al., 2006). These types of assessments do not provide data on students' use of combined subskills for ultimate learning goals (e.g., learning to read).

RtI or MTSS service delivery models, on the other hand, typically involve using specific tools designed to monitor progress systematically and examine student growth.
Student progress is measured using standardized assessments that can be used repeatedly and consistently in order to compare a student's performance at one time point to the same student's performance at another time point. This type of assessment is referred to as general outcome measurement because it measures more global outcomes (e.g., reading) rather than specific subskills (Hintze et al., 2006). This differs from the assessment strategies and purposes that teachers have traditionally used. Reading CBM (i.e., an oral reading fluency measure) is considered a general outcome measure because it measures a student's overall proficiency in reading. The measure is designed to be sensitive to student improvement in order to indicate whether instructional changes are needed.

Using CBM assessment tools is now more common in United States schools (Coulter, Shavin, & Gichuru, 2009). The online AIMSweb system reported oral reading fluency scores for 680,000 students for the 2006-2007 school year (Coulter et al., 2009). A similar tool, Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Next Oral Reading Fluency, has been used with more than 3 million children in 3,000 districts in the United States (Coulter et al., 2009).

In their practical RtI guide for school professionals, Brown-Chidsey and Steege (2010) outlined recommended RtI procedures to address learning difficulties using a tiered model, including how to utilize CBM data. They argued that this type of model can be applied in schools to support elementary students' fluency development. At tier one, they pointed out that general education teachers should utilize scientifically based curricula to provide all students with core instruction (Brown-Chidsey & Steege, 2010). For fluency development specifically, general education teachers should provide reading instruction using a research-based curriculum that includes fluency-building activities such as oral reading fluency practice. Data should be collected three times a year on all students to determine whether students are making adequate progress (Brown-Chidsey & Steege, 2010). A school might use CBMs, such as DIBELS Next Oral Reading Fluency benchmark assessments, to measure the reading fluency of all students in the school. A curriculum-based measure is specifically defined as a "…systematic, standardized, reliable procedure for documenting a student's progress…" (Swain, 2005, p. 259). These measures are designed to be efficient and are also technically adequate tools for following student progress (Stecker, Fuchs, & Fuchs, 2005).

According to Brown-Chidsey and Steege (2010), students' performance on a benchmark screening can be used to identify students who might need additional support. Students' scores can be compared to the DIBELS Next Oral Reading Fluency norms to determine those who are at risk for later reading problems (Brown-Chidsey & Steege, 2010); a minimal sketch of this screening logic follows below. Further analysis of these students' difficulties might indicate that they would benefit from a scientifically based small-group intervention (Brown-Chidsey & Steege, 2010). For example, students who scored low on the DIBELS Next Oral Reading Fluency screening and are accurate but slow readers could be pulled out for a repeated readings fluency intervention with a Title 1 reading instructor two or three times a week. This provides students with more individualized, intensive practice (Wanzek & Vaughn, 2008).
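The benchmark-screening logic described above amounts to comparing each student's score against norm-based cut points. The sketch below illustrates only that decision rule; the cut scores and function name are hypothetical placeholders, as actual DIBELS Next benchmark goals vary by grade level and time of year.

```python
# Hypothetical cut points for illustration only; actual DIBELS Next
# benchmark goals differ by grade level and season of administration.
RISK_CUTOFF_WCPM = 55       # scores below this flag a student as at risk
ACCURACY_CUTOFF_PCT = 95.0  # distinguishes accurate-but-slow readers

def screening_recommendation(wcpm: float, accuracy_pct: float) -> str:
    """Map a benchmark score onto a tiered-support recommendation."""
    if wcpm >= RISK_CUTOFF_WCPM:
        return "core (tier one) instruction only"
    if accuracy_pct >= ACCURACY_CUTOFF_PCT:
        # Accurate but slow: a candidate for a fluency-building
        # intervention such as repeated readings.
        return "small-group repeated readings intervention"
    return "further analysis of decoding/accuracy difficulties"

print(screening_recommendation(72, 97.0))  # core (tier one) instruction only
print(screening_recommendation(40, 96.5))  # small-group repeated readings intervention
print(screening_recommendation(40, 88.0))  # further analysis of decoding/accuracy difficulties
```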
Monitoring a student's response to this more intensive, evidence-based intervention is an important component of an RtI model (Brown-Chidsey & Steege, 2010). CBMs such as DIBELS Next Oral Reading Fluency progress monitoring probes, or AIMSweb R-CBM, can be used as progress monitoring tools for fluency. This monitoring process often involves administering the quick progress monitoring probes once or twice a week (Deno, 1992; Madelaine & Wheldall, 2004), although more recent research has suggested that less frequent monitoring may be sufficient (Jenkins & Terjeson, 2011).

CBMs can provide data for teachers regarding student progress and signal when an instructional change may be needed. Without changes in instruction, CBM administration alone will not necessarily improve student achievement, but the research is mixed. A review of research on using CBM to improve achievement highlighted that students with mild and moderate disabilities are more likely to make growth if the progress monitoring is used in combination with instructional modifications (Stecker et al., 2005). For example, Stecker and colleagues (2005) reviewed a study by Jones and Krouse (1988). In this study, student teachers were taught to apply CBM reading procedures, including making necessary instructional changes, with students with disabilities in third through sixth grade (Jones & Krouse, 1988). These student teachers were provided with weekly supervision to discuss the data. Their students gained an average of 13 correct words per minute after eight weeks of intervention, compared to the students of control student teachers, who gained an average of only 3 correct words per minute over the same period. In contrast, a study of resource room teachers trained to monitor the progress of their students on oral reading fluency illustrated different results (Skiba, Wesson, & Deno, 1982). No significant differences were found between students whose progress was measured and those whose progress was not measured (Skiba et al., 1982). Although the teachers were encouraged to make instructional changes, they made only 1.4 teaching changes on average across the entire study (Skiba et al., 1982).

Student oral reading fluency data are collected, organized, and often presented graphically (Deno, 1992; Swain, 2005). A graph is often used to illustrate a student's progress over time towards a goal established by the teacher, grade-level or school-based team, or district. These goals may be based on individual growth (e.g., reading a certain number of additional words each week for a certain number of weeks) or on comparisons to established norms for a certain grade (Deno, 1992). A goal line is sometimes used on a progress graph to connect the student's starting (baseline/benchmark) score(s) and the end-of-year goal typically identified by the teacher (Swain, 2005); a minimal example of computing such a goal line appears at the end of this section. While these goals are important to the teacher or instructional aide administering the measures and adjusting instruction, the school psychologist analyzing the data, or the child study team determining whether a student will qualify for special education or other instructional services, it is not required that these goals or CBM data be shared with the student being assessed.

As previously discussed, research on the benefits of CBM data has focused on how teachers might use the data to inform or change instruction and intervention. This is consistent with how CBM data use is described in documents focused on practical applications for school professionals.
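As a concrete illustration of the goal line just described, the sketch below computes the expected score for each week of monitoring from a baseline score and an end-of-year goal, and compares observed probe scores against that line. All numbers and names are hypothetical and chosen only to illustrate the arithmetic.

```python
def goal_line(baseline_score: float, goal_score: float, weeks_to_goal: int) -> list[float]:
    """Expected score at each week along a straight line from the
    baseline (week 0) to the end-of-year goal (final week)."""
    weekly_growth = (goal_score - baseline_score) / weeks_to_goal
    return [baseline_score + weekly_growth * week for week in range(weeks_to_goal + 1)]

# Hypothetical case: 40 words correct per minute at baseline, with a goal
# of 70 after 30 weeks (an aim of 1 additional word correct per week).
expected = goal_line(40, 70, 30)

# Hypothetical weekly probe results: week number -> words correct per minute.
observed = {5: 43, 10: 52}
for week, score in observed.items():
    status = "at/above" if score >= expected[week] else "below"
    print(f"Week {week}: {score} observed vs. {expected[week]:.0f} expected ({status} the goal line)")
```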
The RtI manual developed by the National Center on Response to Intervention describes procedures for teachers to create graphs and interpret CBM scores. The National Center on Student Progress Monitoring has published a fact sheet on the benefits of CBM which describes how the "…graphs can help create a common understanding among parents, teachers, administrators, and other professionals (such as school psychologists)" (McLane, 2008, p. 1). This fact sheet also indicates that the visual graphs can help students track their own growth, and that doing so may motivate students to persevere to reach their goals (McLane, 2008), although this is not always a required part of an RtI model. Sharing the CBM data regarding student progress with a student may be particularly beneficial to one who is struggling.

There is some evidence to suggest that progress monitoring with CBMs might have psychological benefits for students. Davis, Fuchs, Fuchs, and Whinnery (1995) surveyed special education students regarding their perceptions of the benefits of CBM assessment in reading and their involvement in the learning process. Compared to control students who were not engaged in CBM assessment procedures, students in the CBM group felt more responsible for their own efforts in learning to read, although their academic self-concepts and perceptions of themselves as learners did not improve (Davis et al., 1995). Glor-Scheib and Zigmond (1993) found that sixth graders' overall attitudes towards reading did not change, but their self-perceptions of competence in reading improved after their progress was monitored and they were directly involved in plotting their own scores and reviewing their own progress.

In summary, monitoring the progress of students who are struggling in reading is an important component of an RtI or MTSS model. CBMs are frequently used in the United States to measure the progress of students' oral reading fluency (Coulter et al., 2009). The data collected using CBMs can be used to evaluate whether students are making progress with additional instruction and intervention. The progress monitoring process using CBMs may have educational and psychological benefits for students who are struggling, but the evidence is mixed. The process can provide useful data for the teacher to use for instruction, or for school-based teams to use when making educational decisions. Involving students directly in the progress monitoring process may not directly improve students' reading skills the way instructional modifications can, but it may affect students' self-perceptions of learning and reading. Students may feel more responsible for their learning when they are involved in the process (Davis et al., 1995), and their self-perceptions of competence may improve when they review their own progress using CBM data (Glor-Scheib & Zigmond, 1993).

Motivation and Self-Efficacy

Bandura's (1977a) social cognitive theory of motivation indicates that behavior is influenced by one's self-beliefs. According to this theory, a student's performance on a reading task may be altered by the student's self-beliefs about his or her reading skills. If the student performs poorly, these self-beliefs may become more negative, which then may affect the student's future performance on reading tasks. The student may be less motivated the next time he or she is asked to read because of the student's mental state.
This is why it is particularly important to consider the motivation of students with difficulties in reading. The long-lasting and harmful effects of early reading failure may be the result of students being caught in a cycle of failure.

Another established view of motivation is expectancy-value theory, the idea that individuals' effort, persistence, and performance are directly related to their beliefs about their abilities to perform successfully and their subjective value of the task (Wigfield & Eccles, 2000). According to this theory, students are not likely to invest effort in tasks at which they do not expect to be successful or in tasks they do not value (i.e., find meaningful or beneficial; Wigfield & Eccles, 2000). Furthermore, an expectation of success is predicted to lead students to invest more effort in a given task and to be increasingly persistent (Wigfield & Eccles, 2000).

This connection between beliefs and behavior or performance has been discussed extensively in the literature on motivation. Students who struggle in reading because of skill deficits or disabilities may lack confidence in their abilities or have negative attitudes towards reading. Oka and Paris (1986) found that third, fourth, and fifth grade underachievers in reading had lower levels of comprehension and used less effective strategies when reading, but also had more negative attitudes about reading and lower self-perceptions of competence. Students with lower perceptions of competence, self-efficacy, or confidence in reading skills are likely to engage in behavior that will perpetuate their difficulties in reading.

Students' beliefs about themselves influence their choices of activities, effort, and persistence (Bandura, 1982; Nicholls, 1983; Weiner, 1993). For example, students who lack confidence in their reading abilities are less likely to engage in reading activities (McCabe, 2006), and children's reading motivation can be used to predict the amount and breadth of their reading (Wigfield & Guthrie, 1997). Individuals who lack confidence in their abilities to complete academic tasks may give up easily, engage in off-task behavior, and avoid the tasks (Baker & Wigfield, 1999; Schunk & Swartz, 1993; Zimmerman & Martinez-Pons, 1992). There is a relationship between students' low perceptions of their abilities and their low expectations for future outcomes (Chapman & Tunmer, 2003). As students' attitudes, beliefs, and expectancies become negative due to failure in reading, students are likely to put forth less effort and concentration in reading tasks (Dweck & Bempechat, 1983).

The amount of reading activity of third, fourth, fifth, and sixth graders is positively correlated with these students' reading achievement (Cipielewski & Stanovich, 1992; Cunningham & Stanovich, 1991; Taylor, Frye, & Maruyama, 1990). There is a correlation between student exposure to print and student achievement in reading. This relationship is long-lasting; in fact, individual differences in exposure to print can be used to predict growth in reading comprehension in the elementary grades and even in tenth grade (Cunningham & Stanovich, 1997). By engaging in less reading, investing less effort, and not persisting when the task is difficult, students lose opportunities to practice and continue to struggle in reading. These lost practice opportunities make it difficult for children who are poor readers in early elementary school ever to catch up with their average peers (Torgesen, Rashotte, & Alexander, 2001).
This pattern of behavior has been studied with students with learning disabilities, who show, compared to general education students, "…lower levels of achievement, effort investment, [and] academic self-efficacy…" (Lackaye & Margalit, 2006, p. 432). The motivation level of general education students also significantly predicts students' amounts of reading for enjoyment (Cox & Guthrie, 2001). Stanovich (1986), in his theoretical discussion of the development of individual differences in reading skills, highlighted the cycle of failure by noting how struggles in reading may result in behavioral or motivational changes, which then affect reading achievement.

Efforts to remediate students' early struggles in reading may be particularly successful when they promote skill development as well as students' self-beliefs. While overcoming skill deficiencies is an essential component of growth in reading abilities, the negative reading- or achievement-related self-beliefs that develop as a result of reading difficulties should also be addressed (Chapman & Tunmer, 2010). For example, Helping Early Literacy with Practice Strategies (HELPS), a research-supported, multi-component reading intervention program that targets oral reading fluency, includes motivational elements such as goal setting, performance feedback, and incentives (Begeny, Braun, Lynch, Ramsay, & Wendt, 2012; Begeny et al., 2010). Despite their addition, there have not been any specific investigations of the value these motivational components add to student outcomes.

Other researchers have recommended the incorporation of motivational components into more reading instruction and intervention programs. Torgesen and colleagues (2008) from the Center on Instruction, funded by the U.S. Department of Education, recommended that teachers attempt to increase the motivation of adolescents during literacy instruction. Quirk and Schwanenflugel (2004), in their analysis of five popular remedial reading programs, claimed that although several of the programs do not address the motivation of program participants, some components could easily be added.

One specific belief that can influence behavior is self-efficacy. Self-efficacy refers to one's perceived competence for a certain task or within a certain area (Bandura, 1977a). Although it is built upon past performance and experiences with competence, it is based on an individual's subjective beliefs about his or her capabilities (Bandura, 1982; Lackaye & Margalit, 2006). In research, self-efficacy is often assessed by asking individuals to report the level of confidence they feel in succeeding at a certain task (Pajares, 1996). These self-efficacy items must therefore be written with the appropriate level of specificity regarding the topic or task of interest (Pajares, 1996). Even early elementary students "…have distinct beliefs about what they are good at…" (Wigfield & Eccles, 2000). Self-efficacy beliefs begin to form from a young age because children have already experienced a wide variety of tasks and situations (Bandura, 1982). Understanding students' self-efficacy may provide teachers with useful insights about their students to aid learning (Pajares, 1996).

Three decades of research have focused on the relationships between self-efficacy, motivation, and achievement. The results of an investigation of students with and without learning disabilities supported self-efficacy as a significant predictor of academic effort investment (Lackaye & Margalit, 2006). In addition, students with
In addition, students with learning disabilities reported lower academic self-efficacy scores compared to students without disabilities. This may be the result of the negative cycle in which students with disabilities do not have many experiences where they feel competent or positive about their learning (Lackaye & Margalit, 2006). One study of undergraduate students indicated that outcome expectancy beliefs (i.e., beliefs about the relationships between successful task performance and outcomes) and self-efficacy beliefs jointly accounted for significant variance in reading achievement, but "…self-efficacy beliefs were more strongly related to achievement…" (Shell, Murphy, & Bruning, 1989, p. 96).

However, it is important to note that focusing solely on students' beliefs about themselves may not be an effective way to develop their skills. In their review of research on self-esteem, Baumeister and colleagues (2003) found little evidence that programs and interventions can affect the self-esteem of students. Scheirer and Kraut (1979) reviewed studies evaluating the effects of self-concept change on educational achievement. Their review indicated that there was not an association between self-concept change and academic achievement, and that any associations that were found were temporary (Scheirer & Kraut, 1979). These large reviews of research highlight that interventions targeting only students' negative self-beliefs may not be enough to break this negative cycle of failure. Self-efficacy beliefs, or perceived competence, as defined by Bandura (1982), come from direct performance experiences as well as observational comparisons, social feedback, and physiological feelings. An intervention designed to improve students' self-efficacy could tap into these components.

In summary, according to the expectancy-value theory of motivation, student motivation and effort are directly related to students' expectancies of success and students' subjective task values. One student expectancy belief is self-efficacy, or an individual's belief about his or her competence, which begins to develop at a very young age through performance, observed comparisons, feedback, and physiological feelings (Bandura, 1982). Self-efficacy is related to overall motivation and achievement, so understanding how to increase self-efficacy may help educators working with struggling students. How do educators motivate students who are struggling in reading and may have low self-efficacy? Researchers have established that future work should focus on how to encourage struggling students, such as those with learning disabilities, and how to enhance their self-efficacy, so that they put forth more effort even when faced with difficulty (Lackaye & Margalit, 2006). Interventions designed to improve students' self-beliefs alone have been largely ineffective (Baumeister et al., 2003; Scheirer & Kraut, 1979). Because self-efficacy beliefs come from direct performance experiences as well as observational comparisons, social feedback, and physiological feelings, an intervention designed to improve these factors may be more effective in improving students' self-efficacy. One strategy for increasing the achievement and self-efficacy of struggling students is providing feedback to students on their performance.

Performance Feedback

Self-efficacy beliefs develop from students' experiences with competence, or incompetence, but sometimes it is difficult for students to judge their own abilities.
Performance feedback can provide students with information about their performance and has the potential to help develop students' self-efficacy and sustain their motivation. Receiving feedback on performance can be an intervention in itself and can improve performance and motivation (Kluger & DeNisi, 1996). Kluger and DeNisi (1996) conducted a meta-analysis of the literature on the effects of performance feedback on performance (e.g., reading errors, puzzles, reaction time, adherence to regulations). They concluded that feedback interventions improved performance on average, but the feedback in more than one third of the studies included in their meta-analysis actually decreased performance. As a result, it is important to consider what types of performance feedback can have positive effects on students and how to avoid feedback that leads to decreases in performance. Kluger and DeNisi (1996) found support for their theory that attention may moderate the effects of feedback. Namely, they found that feedback is more effective when individuals focus on the task and learning rather than on themselves and their affective feelings.

There are different theories as to why feedback is linked to improvements in performance and motivation. One theory presents positive feedback as a type of reinforcement and negative feedback as a form of punishment (Kluger & DeNisi, 1996). Within this operant conditioning framework, feedback as a consequence will shape learning and therefore performance (Hattie & Timperley, 2007; Kluger & DeNisi, 1996). Students receiving positive feedback or praise are being rewarded, and their correct reading behaviors will increase. On the other hand, errors in reading behavior will decrease when followed by negative feedback. Other theories focus on the regulation of goal-directed behavior that occurs when feedback is compared to certain goals or standards (Kluger & DeNisi, 1996). Feedback interventions may tap into individuals' executive functioning processes such as attention and metacognition. Receiving feedback can focus an individual's attention on his or her own performance, cue an individual to a certain task component, or cause an individual to reevaluate his or her progress (Kluger & DeNisi, 1996). In this case, students learn to regulate their own behavior when they receive feedback to evaluate progress towards their goals. Students may increase their effort when they receive feedback that their performance falls short of their goals. Based on this theory, setting high goals can be beneficial for students, but it is unclear what type of feedback should be provided to students as they work towards these goals.

Performance Feedback & Reading Interventions

Some researchers have attempted to expand the literature on evidence-based reading interventions and to improve their efficiency by combining skill-based instructional strategies with performance-based instructional strategies. These performance-based strategies include providing different types of consequences, reinforcement, and feedback after skill-based instruction (Chafouleas, Martens, Dobson, Weinstein, & Gardner, 2004). For example, in one study, four middle school students received a repeated reading intervention to help improve their reading fluency and comprehension (Alber-Morgan, Matheson Ramp, Anderson, & Martin, 2007).
All four students were reading one or more grade levels below their current grade level in reading and other academic subjects, and they were participating in an outpatient day treatment program for behavior problems. The repeated reading intervention included systematic error correction and performance feedback, which involved the data collector telling the student the number of words he or she read correctly and encouraging the student to exceed this number during the next timed reading. In addition, the student was given praise for improvement and encouragement when improvement was lacking. The intervention had significant positive effects on three of the four students in the study; their reading fluency (i.e., number of words read correctly in one minute) increased from the baseline data collection, when students did not receive the intervention with performance feedback. After completing data collection, the students were interviewed to assess their opinions about the intervention. "All four students indicated they knew their reading improved (e.g., 'It's pretty cool because I'm reading a lot better.' 'I can read faster now and I'm finishing more of my work')" (Alber-Morgan et al., 2007, p. 25).

Other researchers have attempted to isolate and combine skill- and performance-based intervention components. Eckert, Ardoin, Daly, and Martens (2002) used brief experimental analysis to examine whether combining an intervention (listening passage preview and repeated readings) with contingent reinforcement (i.e., receiving a small prize when exceeding the initial passage reading rate by 5%), performance feedback, or both would improve elementary students' oral reading fluency. In the performance feedback condition, the experimenter worked with a student to create reading goals (i.e., number of errors and reading time). First, the experimenter shared the previous session's performance data with the student. Then the experimenter identified realistic goals (i.e., 1% to 5% improvement from the previous session) and presented them to the student, who needed to accept the goals or work with the experimenter to make modifications. After each passage reading, the experimenter shared the data (i.e., number of errors and number of minutes to read the passage) with the student, who was responsible for recording the data on the corresponding reading goal graphs. For four of the six students in the study, combining the intervention with one or both components enhanced the effectiveness of the intervention. Reading rates were higher when the listening passage preview and repeated readings were used with contingent reinforcement or performance feedback, but performance did not increase any further when both reinforcement and feedback were combined. These improvements in performance were specific to the passages practiced as part of the repeated readings intervention and did not necessarily generalize to fluency on new passages.

Chafouleas et al. (2004) also examined a repeated reading intervention and combined it with performance feedback, and with performance feedback plus contingent reward, to explore which condition led to the greatest increase in three elementary students' reading fluency. The performance feedback simply involved telling the student how many words were read correctly in one minute after reading a passage. In the contingent reward condition, students were allowed to select a small prize if they read at least one more word correctly than in the last reading.
Students were explicitly told how many words they read correctly and how many they needed to read the next time in order to earn the self-selected prize. The two students with the highest reading rates and lowest error rates at baseline improved the most when receiving repeated readings alone (i.e., without performance feedback or feedback and contingent rewards). One student, however, who began the study with the lowest reading rate and highest error rate, seemed to benefit more when the repeated readings intervention was combined with performance feedback or with performance feedback and contingent reward. Chafouleas et al. (2004) speculated that more specific feedback on accuracy might have helped this student, who continued to have higher error rates even when making improvements in reading rate.

One specific type of feedback involves providing information about progress towards pre-determined goals. Having a goal in mind gives an individual a framework for responding to events (Dweck & Leggett, 1988). Students can use goals in the process of becoming better learners and to help them focus on an academic task; evaluating progress towards their goals can provide students with realistic achievement expectations and help them consider the strategies needed to reach those goals (Locke & Latham, 2002; McMasters, 2011; Schunk, 2001). For example, Gaa (1973) conducted a study of the effects of individual goal-setting conferences on reading achievement. Fifty-four first and second grade students were assigned to one of three conditions: goal-setting, conference, or control. Students in the goal-setting condition were asked to choose certain goals to accomplish during the next week, and they also had individual weekly meetings during which they received feedback about the appropriateness and accuracy of self-set goals and general achievement. Students in the conference group received individual conferences similar to those of the goal-setting group, but they did not set specific goals or receive feedback related to specific goals. Students in the control group did not receive any individual conferences. Reading achievement, attitudes towards reading, and goal-setting behaviors were measured for all students. Students in the goal-setting group showed significantly higher achievement scores, as measured by the Wisconsin Tests of Reading Skill Development Battery, than those in the conference group or control group. These students also set fewer goals and had smaller differences between the number of goals set and the number of goals achieved, which may indicate that practice in goal setting led the students in this group to be more realistic and accurate in setting their own goals. However, there were no significant differences between the conference group and the control group with regard to their attitudes towards reading.

The research on goal-setting feedback in combination with early reading interventions for more targeted groups of students is much more limited. Swain (2005) conducted a study to examine the use of goal setting with CBMs, as little is known about the effect of student involvement in the CBM process. Previous studies have shown that students respond better to goals they set themselves compared to goals set by their teachers (Fuchs, Bahr, & Rieth, 1989; Johnson, Graham, & Harris, 1997). Combining goal setting and CBM might help students understand their reading goals and work with teachers to monitor their progress.
Training students in goal-setting procedures is not typically part of the CBM process. Swain's study examined the ability of sixth and seventh graders to set realistic goals and how goal setting might increase their understanding of their reading goals. The study included 19 students with learning disabilities from an urban Midwest area. The students' special education teachers tracked their progress toward their reading goals using CBM for seven weeks. Students in the treatment group were trained on how to set goals using their CBM data. "Students were taught (a) the purpose of a goal and goal line, (b) how to identify their daily goal, (c) to try to reach their daily goal, and (d) how to determine if they were successful in reaching their daily goal" (p. 261). The control group students still completed CBM assessments but were not trained to self-set goals and did not see any graphs with their CBM scores. The Student Awareness of Goals and Progress Interview was administered as a measure of goal knowledge. The results of the study supported previous research by illustrating that students with learning disabilities have difficulty setting realistic goals. Students in the goal-setting group met their self-set goals only 38% of the time. However, despite these difficulties, eight of the nine students in the goal-setting group did meet the end-of-intervention goals set by their teachers. During post-intervention testing, 44% of the students in the goal-setting group, compared to 0% of the control group, were able to state specific, rather than broad, reading goals. This research suggests that students might benefit from goal setting with CBM but that teacher support is needed. Teacher feedback on student progress can help make assessment data more meaningful for students with learning disabilities, who often experience academic failure.

Different Types and Forms of Performance Feedback

The research on motivation for students who are struggling suggests that providing feedback to students may improve performance and also improve students' self-efficacy (Conte & Hintze, 2000; Eckert, Dunn, & Ardoin, 2006; Schunk, 1983a; Schunk & Rice, 1991; Schunk & Swartz, 1993; Shih & Alexander, 2000). However, there is evidence to suggest that certain types of feedback can have negative effects on learning. In particular, when feedback is focused solely on outcomes, it can hinder the learning of complex tasks and performance on the task (Kluger & DeNisi, 1996). As a result, there is debate as to what the format and content of performance feedback should be in order to improve performance and motivation.

Researchers have compared the learning and motivational effects of different types of performance feedback (Codding, Chan-Iannetta, Palmer, & Lukito, 2009; Conte & Hintze, 2000; Eckert et al., 2006; Schunk, 1983a; Schunk & Rice, 1989, 1991; Schunk & Swartz, 1993; Shih & Alexander, 2000). Some studies compare two different types of goal-based performance feedback, but many researchers examine essentially the same types of goals, even when they label them differently than other researchers do. For example, studies of proximal versus distal goals are similar to studies of short-term versus long-term goals (Bandura, 1982; Dunn & Eckert, 2002). Some research on the effects of different types of performance feedback is highlighted below.
Performance feedback could include information regarding attainment levels (i.e., number produced), velocity (i.e., change from previous trials), normative information (i.e., comparison to others), or norms (i.e., performance of others; Kluger & DeNisi, 1996). This information can be presented in a variety of ways, including verbally, in writing, or graphically (Kluger & DeNisi, 1996). There can also be variations in the frequency of the feedback (i.e., the number of times it is provided) and in whether it is designed to discourage or praise (Kluger & DeNisi, 1996). A review of studies on performance feedback examined these different types and formats of feedback as moderators of the relationship between performance feedback and performance (Kluger & DeNisi, 1996). Significant moderators, including discouraging feedback and verbal feedback, weakened the effects of feedback interventions on performance. In addition, velocity feedback focusing on changes from previous trials, as well as more frequent feedback, enhanced the effects. Normative feedback was a nonsignificant moderator in the explored literature.

When considering progress towards goals as a form of performance feedback, researchers have examined different types of goals and goal orientations that may lead to improved achievement. Achievement goal orientations involve "…an integrated pattern of beliefs, attributions, and affect…[that result in] different ways of approaching, engaging in, and responding to achievement-type activities" (Ames, 1992, p. 261). The two main categories have been identified as mastery and performance orientations (Ames, 1992; Dweck, 1986). A mastery goal orientation refers to a focus on effort and individual progress (Ames, 1992; Dweck, 1986). An individual with this orientation believes that as effort increases, outcomes also improve. Students with this orientation are motivated to continue learning. Those with a performance goal orientation focus more on social comparisons (Ames, 1992; Dweck, 1986). Individuals with performance goal orientations think that their abilities are defined by doing better or worse than others and/or by surpassing norm-based standards (Ames, 1992; Dweck, 1986). Researchers in this area have encouraged educators to organize classrooms, design learning activities, and use language in ways that promote mastery orientations (Ames, 1992; Meece, Anderman, & Anderman, 2006). Mastery goal structures in a classroom positively predict students' adoption of personal mastery goals (Murayama & Elliot, 2009; Wolters, 2004). There is some evidence that mastery goal orientations are related to achievement. For example, eleventh grade science students with mastery goal orientations had higher grades than those with performance goal orientations (Sins, van Joolingen, Savelsbergh, & van Hout-Wolters, 2008).

The research on achievement goal orientations has led others to compare the effects of different types of goals. According to Schunk (2003), an expert in goal-setting and self-efficacy research, merely setting a goal does not improve learning or motivation. The specificity, proximity, and difficulty level of the goals all matter (Schunk, 2001, 2003). Goals based on specific performance standards are more likely to enhance learning than general goals without specific standards in mind (Locke & Latham, 2002; Schunk, 2003).
When goals are specific and proximal (i.e., closer in time or short-term), students can better perceive their progress, which can enhance their self-efficacy and lead them to set more challenging goals in the future (Zimmerman, 1990).

Like goals, certain types of feedback from teachers can promote certain goal orientations. Students who were praised for their intelligence chose easier learning tasks so that they would continue to appear smart (Dweck, 1999). These students also believed that intelligence was an innate quality that one either has or does not have. On the other hand, students who were praised for their effort chose more challenging learning tasks (Dweck, 1999). These students viewed intelligence in terms of skills, knowledge, and motivation, all of which can be improved. Feedback conveys messages to students about what is important and how they should think about themselves, which can then affect their performance (Dweck, 1999). University students who were taught about the relationship between intelligence and effort earned significantly higher grades than students who did not receive the same instruction (Dweck, 1999).

A line of research within the school psychology field has focused on the effect of performance feedback on academic outcomes with groups of students. For example, Codding et al. (2009) conducted a study of mathematics fact fluency with third graders. The students' teachers used the cover-copy-compare strategy as a class-wide intervention to improve students' subtraction fluency (i.e., number of problems correctly solved in two minutes). Students were assigned to a control group, a cover-copy-compare group, or one of two experimental goal-setting groups. These groups received goals based on the number of problems solved correctly or based on the number of errors. Results showed that students who set goals based on the number of problems correct had significantly higher scores at the end of the intervention and greater growth between sessions than those in the control group and those who set goals based on the number of errors.

Behavioral research utilizing single-case designs has also examined the effect of performance feedback on academic outcomes. Eckert et al. (2006) examined the effects of feedback on the oral reading fluency of second graders struggling in reading using a multielement single-case design in which the students experienced a baseline and two feedback phases. The examiner held sessions with each student twice a week; each session started with the presentation of a performance feedback bar graph illustrating the student's performance from the previous session. In one condition, the student received feedback on the number of words read correctly, and in the other condition the student received feedback on the number of words read incorrectly (i.e., number of errors). Although it was hypothesized that the feedback on words read correctly would result in higher oral reading fluency rates, the feedback on words read incorrectly actually resulted in higher rates of fluency for five of the six participants. The remaining student showed an increase in fluency from the baseline level when presented with feedback on words read correctly. Most of the students had lower numbers of oral reading errors when receiving feedback on words read correctly. More research on the effects of different types of feedback may better inform practical applications for schools and also allow for examination of the effects on student motivation.
Another line of research, within the field of educational psychology, has focused on motivational outcomes in addition to academic outcomes. For example, Schunk (1983a) explored how feedback can enhance students' abilities and self-efficacy in mathematics and writing. In one study, third grade children were provided with frequent feedback during subtraction training sessions with proctors (Schunk, 1983a). Every eight minutes the proctor would ask each student, "What page are you working on?" and provide feedback following the student's response. In one condition the students received feedback focused on their abilities ("You're good at this"). In a different condition the students received feedback focused on their effort ("You've been working hard"). In a final condition students received both types of feedback. Students who received ability feedback reported higher levels of self-efficacy and had the highest levels of subtraction skills. There was no difference between students in the effort-feedback and both-feedback conditions; however, both of these groups performed better on subtraction tasks than students in a control group that received no feedback.

Additional research within educational psychology has focused specifically on performance feedback regarding goals that focus on individual progress compared to goals that focus on social comparisons or peer performance (Schunk, 1983b; Shih & Alexander, 2000). Focusing on social comparisons makes the relationship between an individual's performance and peers' performance salient (Ames & Ames, 1984). This type of goal setting can lead to adopting a performance goal orientation and attributing success to abilities (Ames & Ames, 1984). This forced competitive structure may be motivating for some students striving to perform better than their peers, particularly those who experience some success in doing so. For example, one study of fourth and fifth graders examined whether providing comparative information could increase students' growth in division skills and self-efficacy (Schunk, 1983b). Students were assigned to one of three treatment groups (providing information on the number of problems completed by 50% of other children, providing a goal number of problems to work on, or providing a combination of both) or a control condition (Schunk, 1983b). Students who received both goals and comparative information demonstrated better division skills than students in the other conditions. In addition, students in the combined condition reported higher self-efficacy than students who received comparative information only and those in the control condition (Schunk, 1983b). In summary, providing students with specific, proximal goals that include social comparative information improved the skills and self-efficacy of elementary students struggling with division.

The focus on peer comparisons and competition may not be appropriate for all students, however. Students who continually experience academic failure may feel more discouraged and less motivated when they are aware of the discrepancy between their performance and that of their peers. For these students, a noncompetitive, individualistic motivational system may be more appropriate. Self-referenced goals focus on an individual's improvement from past performance and have been classified as fitting with a mastery goal orientation (Ames & Ames, 1984; Shih & Alexander, 2000).
According to Ames and Ames (1984), experts in the area of goal orientations and their consequences for student motivation, students whose goals are self-referenced are likely to attribute success to effort. If students believe that success is due to their effort, then they are more likely to put in effort and persist longer in order to experience future success.

Does the goal and goal orientation theory described above fit with the effects of different types of feedback on CBMs of oral reading fluency? Only a few researchers have directly examined this idea. One study of fourth graders in Taiwan directly compared the effects of goal setting and different types of comparative feedback (i.e., self- versus social-referenced) on student self-efficacy and math fraction skills (Shih & Alexander, 2000). Students in the self-referenced feedback condition were encouraged to meet goals based on the number of fraction problems they had solved correctly in the previous session. Students in the social-referenced feedback condition were encouraged to meet goals based on the average number of problems that other students had correctly solved in the previous session. It was hypothesized that the students in both of these goal-setting groups would show better fraction skills and higher self-efficacy than students in a control, no-goal group, and that children in the self-referenced group would show better skills and higher self-efficacy than those in the social-referenced group. While the goal-setting and no-goal groups did not differ on the fraction skill test and self-efficacy measure, the self-referenced group did show improvements over the social-referenced group, as expected.

Another study that examined this combination of performance feedback and different types of goal setting was Conte and Hintze's (2000) smaller study of second grade students, which focused on oral reading fluency CBMs. Students were assigned to one of three groups: a control group, a dynamic goal line group, or a static goal line group. In the goal line feedback phases, students were presented with graphs before and after being instructed to read aloud. The dynamic goal line condition involved presenting the student's performance on CBM oral reading fluency probes in comparison to a diagonal line connecting the student's oral reading rate on the very first passage with the student's expected rate eight weeks from the beginning of the study. A student in this condition "…was provided with a biweekly goal to be met, which corresponded to an oral reading rate score on the diagonal line. Before and at the end of each session, students were encouraged to meet their goal and provided feedback toward the attainment of the goal" (Conte & Hintze, 2000, p. 91). Students with static goal lines were consistently provided graphs with a horizontal line highlighting the long-term goal for a reading rate at the end of the eight weeks. Students in the control group were asked to read aloud but did not receive any graphs with goals or performance feedback. Effect size calculations revealed positive effects of goal setting and performance feedback on oral reading fluency relative to the control group (i.e., an effect size of .47 for the dynamic goal line and .32 for the static goal line). Visual analysis of week-to-week data showed that students in the dynamic goal line group made the most growth, followed by the static goal line group, and then the control group.
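The exact effect size formula used by Conte and Hintze (2000) is not detailed here; one common approach for comparing a treatment group to a control group is a standardized mean difference. The following is a hedged sketch under that assumption, with invented group data, and is not a reconstruction of the original analysis.

```python
# A minimal, hypothetical standardized mean difference (Cohen's d-style)
# effect size for comparing a goal-line group to a control group.
# The scores below are invented for illustration only.
from statistics import mean, stdev

def standardized_mean_difference(treatment, control):
    """(treatment mean - control mean) divided by the control group SD."""
    return (mean(treatment) - mean(control)) / stdev(control)

dynamic_goal_group = [62.0, 58.0, 71.0, 66.0]  # hypothetical WCPM outcomes
control_group = [55.0, 49.0, 60.0, 52.0]
print(round(standardized_mean_difference(dynamic_goal_group, control_group), 2))
```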
The Conte and Hintze (2000) results can be taken as preliminary evidence that using goal setting and performance feedback with CBM progress monitoring data can have positive effects on students' fluency. However, the motivational effects were not examined by Conte and Hintze (2000).

In summary, performance feedback has been combined with other evidence-based interventions to determine whether it can augment their positive effects. However, receiving feedback can be an intervention in itself because it acts as a social reinforcement, directs attention to the task, and encourages metacognition. Typically it includes information about how one is performing, and it can therefore affect one's motivation and achievement. Many studies have examined the potential effects of different types and formats of performance feedback. One area of the literature has examined feedback on progress towards meeting goals. Different achievement goal orientations and types of goals appear to be more motivating for some students. Mastery orientations are more focused on individual growth and progress, whereas performance orientations are more focused on social comparisons and meeting external standards. Students who are struggling in reading and who are continually experiencing failure may respond better to performance feedback on goals that are focused on individual growth and progress, particularly when they are far behind their peers.

Gap in the Research

Research has shown that "goals motivate people to exert effort necessary to meet task demands and persist over time [and] …goal attainment builds self-efficacy and leads people to select new, challenging goals" (Schunk, 2001, p. 2). Ames (1992), an expert on goal orientations in education, indicated that if educators want students to adopt a mastery orientation towards their academic achievement (i.e., one concerned with increasing competence rather than gaining favorable judgments of competence), reading interventions and progress monitoring for students who continually experience academic failure must be designed to focus on individual improvement and self-monitoring. Students must be supported as they learn to establish short-term, self-referenced goals that offer a reasonable challenge (Ames, 1992). One way to support students' self-efficacy and academic growth is with the use of performance feedback in the context of progress monitoring with CBMs. Several studies have shown the positive effects of different types and forms of performance feedback on student achievement and/or self-efficacy, but they did not examine feedback specifically on CBMs, which are frequently used in schools as part of an RtI or multi-tiered support model. The few studies that have examined performance feedback or goal setting using CBMs did not consider student motivation but rather focused solely on academic outcomes (Conte & Hintze, 2000; Eckert et al., 2006). It is essential to consider the motivation, and specifically the self-efficacy, of students who are struggling in reading. Students who are more confident in their abilities and have higher self-efficacy may be able to escape the cycle of failure associated with many struggling readers. Students who are more confident may be more motivated and more likely to put forth effort, be persistent, and attribute their success to their effort. This study filled this gap by examining the effect of different types of performance feedback using self-referenced versus peer-referenced CBM data for oral reading fluency.
In addition to oral reading fluency rates, the self-efficacy of students was examined to consider the potential positive motivational outcomes of providing different types of CBM feedback. Self-efficacy was explored using student self-report measures.

Research Questions and Hypotheses

The current study focused on three main research questions:

(1) Does self-referenced performance feedback, when combined with a repeated readings intervention, contribute to higher oral reading fluency for students struggling in reading than peer-referenced performance feedback combined with a repeated readings intervention?

It was hypothesized that students' rates of oral reading fluency would be higher when they received a repeated readings intervention and their performance feedback was self-referenced compared to when it was peer-referenced. The National Reading Panel's (2000) analysis of studies on fluency development indicated that guided repeated oral reading had positive effects on students up to fourth grade. Previous research has also shown that providing performance feedback to students can improve academic outcomes (Alber-Morgan et al., 2007; Chafouleas et al., 2004; Dunn & Eckert, 2002; Eckert et al., 2006; Schunk, 1983a). Some research on goal setting has shown that self-referenced goals can have positive effects on students' academic skills (Shih & Alexander, 2000). The self-referenced performance feedback was expected to lead students to adopt a mastery achievement goal orientation, which has been linked to increased achievement (Sins et al., 2008). In addition, this condition was similar to the dynamic goal line condition used by Conte and Hintze (2000), in which students received more proximal goals and focused on week-to-week progress rather than a long-term, norm-based standard. Students in the dynamic goal condition made more growth in oral reading fluency than students in other conditions (Conte & Hintze, 2000).

(2) Does the sharing of progress monitoring data (i.e., performance feedback) in combination with a repeated reading intervention affect the self-efficacy ratings of students struggling in reading?

It was hypothesized that students' self-efficacy ratings would be higher in the intervention conditions, when they received the repeated reading intervention and performance feedback, compared to when they received no feedback or intervention. Research on performance feedback has illustrated that it can be used to affect students' self-efficacy in specific academic areas (Schunk, 1983a; Schunk & Rice, 1989, 1991; Schunk & Swartz, 1993). In addition, students' skills should improve with the intervention, providing them with opportunities to observe their increases in performance, which can also inform self-efficacy beliefs (Bandura, 1982).

(3) Does self-referenced performance feedback, when combined with a repeated readings intervention, contribute to higher self-efficacy ratings for students struggling in reading than peer-referenced performance feedback combined with a repeated readings intervention?

It was hypothesized that students' self-efficacy ratings would be higher when their performance feedback was self-referenced compared to when it was peer-referenced. While studies have shown that different types of feedback have varying effects on self-efficacy, no research has examined the effects of self- versus peer-referenced performance feedback on student self-efficacy.
It was expected that the self-referenced feedback would lend itself to students developing a mastery achievement goal orientation, rather than a performance achievement goal orientation, which may be more motivating for struggling readers (Ames, 1992; Dweck, 1986).

CHAPTER 3: METHODS

Design

The effects of peer-referenced and self-referenced CBM performance feedback were investigated through a multiple baseline design across participants with two feedback phases. Participants were exposed to both feedback phases. In order to control for order effects, participants were matched with another participant with a similar initial oral reading fluency score, creating three pairs. Then one student from each pair (n = 3) was given peer-referenced performance feedback followed by self-referenced performance feedback, and their partners (the other three participants) were given self-referenced performance feedback followed by peer-referenced performance feedback.

Like traditional group research designs, single-case designs involve a comparison of performance under different conditions (Murphy & Bryan, 1980). Experts in single-case design methodology indicate that single-case designs (SCDs) have been used for decades in psychology and education and can be particularly useful in documenting the effects of educational interventions at a time when schools are focused on accountability and improvement (Kazdin, 2011; Riley-Tillman & Burns, 2009). A multiple-baseline SCD is "…particularly well suited to the practical demands operating in typical educational…settings" (Murphy & Bryan, 1980, p. 326). In addition, this design does not require a reversal to pre-intervention performance levels, which is an important consideration when behavior or performance changes are irreversible (Murphy & Bryan, 1980). While there is some variability in oral reading fluency (i.e., words correct per minute) scores because of the standard error in curriculum-based measurements (Christ & Silberglitt, 2007), it was not expected that there would be significant declines in students' fluency after they made progress. A multiple-baseline-across-individuals design provides an opportunity for replication of the intervention effects with multiple participants in similar settings (Murphy & Bryan, 1980). See Figure 2 for a sample multiple baseline design.

Participants

Participants for the present study were six (N = 6) second grade students who struggled with reading fluency in a Midwestern elementary school. Second grade was selected because previous research indicates that by this time, many students are working on developing fluency, although classroom instruction will continue to focus on fluency in third, fourth, and even fifth grade as some students continue to struggle (Osborn et al., 2003). Teachers provided consent forms to parents whose children the teachers perceived would benefit from summer reading support. This consent form explained to parents that their child might or might not be selected and given the opportunity to participate in the study. Once these students were identified and parental consent was obtained, screening procedures were used to determine whether students met the criteria for inclusion in the present study. In addition, the consenting parents completed a demographic survey to determine whether the students met the inclusionary/exclusionary criteria. Only those who met the criteria were recruited as participants for this study.
These criteria were used to select participants who were similar on relevant features (e.g., second grade, native English speakers) and who were likely to benefit from the feedback phases (i.e., the student's primary academic difficulty was with reading fluency; Riley-Tillman & Burns, 2009). In addition, students needed to be available for at least 7 weeks of the summer vacation to be included in the current study. See Figure 3 for a flow chart outlining the screening and recruitment process for the current study as described below.

[Figure 2. Sample Multiple Baseline Design. Three stacked panels (Student 1, Student 2, and Student 3) plot words read correctly in a minute against session number (sessions 1 through 25).]

[Figure 3. Flowchart for Screening and Recruitment Procedures. Step 1 (initial teacher referral): second grade teachers gave initial parent consent forms to any parents whose children struggled with reading fluency and might benefit from summer reading support; students were discontinued if parent consent was not obtained. Step 2: parents completed a demographic survey, and the student completed a reading assessment and the MRQ and AGOQ surveys; students were discontinued unless they met the study criteria (English was the primary language; available for at least 7 weeks during the summer; reading less than 82 words correct per minute with 92% or higher accuracy). Step 3: the parent was given the final parent consent form and the student was given an assent form; students were discontinued if parent consent and student assent were not obtained. Step 4: the researcher conducted a school file review, and the study began.]

As part of screening procedures, school data (e.g., reading screeners or other assessment results) and a reading assessment were used to determine that the participants had deficits in reading fluency only (i.e., not decoding or phonemic awareness). Specifically, this reading fluency deficit was defined as reading less than 82 words correct per minute on spring benchmark assessments (i.e., performance below the 25th percentile according to spring second grade AIMSweb norms) but reading with 92% or higher accuracy; a minimal illustration of this screening rule is sketched below. Students could be included in the current study if they were slow but accurate readers. Although students' reading rates varied, the use of these criteria helped ensure that participants had similar reading difficulties and would likely benefit from the intervention. If students met these criteria, final parent consent (Appendix C) and student assent (Appendix D) were obtained. Students were not excluded for having received Title 1 or other supplemental reading services as part of their school's service delivery model. Students were also not excluded if they received speech therapy through the school without academic support. School record reviews were conducted after consent/assent to gather data on these services. This study incentivized participation by providing free reading support during the summer months in participants' homes. In addition, each participant got to choose one book to keep at the end of each week throughout the study, for a total of 8 books across the 8 weeks of the summer. Details regarding each participant are provided below. Actual student names were replaced with pseudonyms to protect their privacy. In addition, Table 1 provides an overview of student demographic information, including reading screening data.
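Before turning to the individual participants, the slow-but-accurate inclusion rule can be summarized concretely. The following is a minimal sketch, with function names, variable names, and example data invented for illustration; it is not code used in the study.

```python
# Hedged sketch of the study's fluency screen: the median of the spring
# benchmark probes must fall below 82 words correct per minute (the 25th
# percentile on spring second grade AIMSweb norms) while accuracy stays
# at or above 92%.
from statistics import median

WCPM_CUTOFF = 82        # words correct per minute
ACCURACY_CUTOFF = 0.92  # proportion of attempted words read correctly

def slow_but_accurate(wcpm_scores, accuracy_scores):
    """Return True if a student meets the inclusion criteria."""
    return (median(wcpm_scores) < WCPM_CUTOFF
            and median(accuracy_scores) >= ACCURACY_CUTOFF)

# Example with made-up probe data: a median of 48 WCPM at 96% accuracy
# (comparable to Abby's screening result) passes the screen.
print(slow_but_accurate([45, 48, 52], [0.95, 0.96, 0.97]))  # True
```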
Although there was some variability in students' ages, the students had all completed the same grade (i.e., second) in the same school district, indicating that they had been exposed to similar curricula prior to the study. It was not expected that age would influence the findings of the current study. Tables 2 and 3 provide information on the participants' Achievement Goal Orientation Questionnaire (AGOQ) and Motivations for Reading Questionnaire (MRQ) data.

Table 1
Student Demographics

Student^a   Gender   Age    Race/Ethnicity   Median Words Read Correctly   Median Accuracy
Abby        Female   7:10   Caucasian        48                            96.0%
Becca       Female   8:9    Asian            58                            94.8%
Claire      Female   8:3    Caucasian        77                            97.5%
Diane       Female   8:3    Caucasian        67                            95.7%
Eric        Male     7:9    Multiracial      58                            93.5%
Frank       Male     8:1    Caucasian        79                            92.3%

^a Actual student names were replaced with pseudonyms to protect anonymity.

Abby

Abby was a seven-year-old Caucasian female. Her mother reported that she had a diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD), but that her symptoms were being controlled through diet rather than medication. Abby had never received special education services, but she participated in leveled reading groups in school. Her second grade report card indicated that she was just beginning to meet grade level expectations in reading fluency and other reading skills by the end of the school year. Her teacher reported that she made a lot of gains in reading, but recommended that she continue to practice reading aloud on a daily basis over the summer with appropriately leveled books. To examine students' reading fluency prior to the start of the intervention, students were administered the DIBELS Oral Reading Fluency (DORF) spring second grade benchmark assessments. In order to meet the criteria for inclusion in the study, students' median scores on the benchmark assessments needed to indicate that they were reading less than 82 words correct per minute with at least 92% accuracy. Abby read 48 words correctly in one minute with 96% accuracy on this reading screening.

Becca

Becca was an eight-year-old Asian female. She had received special education services for speech-language since she was four years old. Her Individualized Education Plan goals focused on syntax, grammar, and listening comprehension. Becca was identified by her school for Tier 2 small group reading support focusing on decoding and comprehension in second grade. Her second grade report card indicated that she was just beginning to meet grade level expectations in reading fluency by the end of the school year. Her teacher reported that she made a lot of gains in reading fluency over the year, but recommended that she continue to work on comprehension. Becca also received private tutoring two mornings a week in reading and math over the summer while the research study took place. On the reading screening, she read 58 words correctly in one minute with 94.8% accuracy.

Claire

Claire was an eight-year-old Caucasian female and the twin sister of Diane. She had received special education services for speech-language since she was four years old. Her Individualized Education Plan goals focused on syntax and articulation. Claire was identified by her school for Tier 2 small group reading support in kindergarten and first grade. Her second grade teacher indicated that she could read on grade level and met the end-of-year expectations in fluency, but recommended that she continue working on fluency over the summer vacation.
On the reading screening, she read 77 words correctly in one minute with 97.5% accuracy.

Diane

Diane was an eight-year-old Caucasian female and the twin sister of Claire. Her mother reported that she had been diagnosed with ADHD and took medication during the school year only. Her second grade report card indicated that she was just beginning to meet grade level expectations in reading fluency and other reading skills by the end of the school year. Her teacher reported that she worked very hard in reading but recommended that she continue to work on reading fluency over the summer vacation. On the reading screening, she read 67 words correctly in one minute with 95.7% accuracy.

Eric

Eric was a seven-year-old multiracial male. He was evaluated for special education services in kindergarten but was found ineligible. In first grade he received small group reading instruction as part of an "extended education" program. His second grade report card indicated that he was just beginning to meet grade level expectations in reading fluency and other reading skills by the end of the school year. His teacher recommended that he continue to work on his reading fluency and comprehension over the summer. On the reading screening, he read 58 words correctly in one minute with 93.5% accuracy.

Frank

Frank was an eight-year-old Caucasian male. His mother indicated that he had never received special education services, but he participated in reading interventions in school for students who were struggling. His second grade report card indicated that he was just beginning to meet grade level expectations in reading fluency by the end of the school year, although his other reading skills were well developed. On the reading screening, he read 79 words correctly in one minute with 92.3% accuracy.

Measures

Oral Reading Fluency

Students' reading fluency was measured using curriculum-based measurements of oral reading fluency. The researcher originally planned to use AIMSweb Oral Reading Fluency (R-CBM) probes (Edformation, 2005); however, the participants' school district already used AIMSweb for progress monitoring. Because it was important that students not have previous exposure to the oral reading fluency probes prior to the research study, DIBELS Next Oral Reading Fluency (DORF) was chosen as a replacement with similar technical adequacy. The examiner administered a probe by having a student read aloud from a designated passage for 1 minute. The examiner used the examiner copy of the assessment to note student errors. If the student failed to say a word after 3 seconds, the examiner told the word to the student and marked it as an error. The number of words read correctly, which excludes errors (i.e., pausing on a word for 3 seconds, mispronouncing words, substituting words, or omitting words), was calculated for each administered passage. The median score of the three administered probes was used as the measure of performance.

The difficulty of DORF passages was determined using a combination of readability formulas. The assessment publisher's difficulty index examined word difficulty (i.e., characters per word, percent of words with three or more syllables, percent of words with seven or more characters, and number of syllables per word), semantic difficulty, and syntactic difficulty. The difficulty of the DIBELS Next passages was designed to be approximately equivalent to that of the DIBELS 6th Edition passages. Second grade DORF passages were 225 to 275 words.
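To make the scoring rule above concrete, the following is a minimal sketch of the words-correct-per-minute calculation and the median-of-three-probes session score. The function name and probe data are invented for illustration only.

```python
# Hedged sketch of DORF-style scoring as described above: each 1-minute
# probe is scored as words attempted minus errors (hesitations over 3
# seconds, mispronunciations, substitutions, omissions), and the median
# of three probes is the session score.
from statistics import median

def words_correct_per_minute(words_attempted: int, errors: int) -> int:
    """Score one 1-minute probe."""
    return words_attempted - errors

probes = [(70, 5), (68, 2), (75, 6)]  # hypothetical (words attempted, errors)
session_score = median(words_correct_per_minute(w, e) for w, e in probes)
print(session_score)  # 66: the median of the probe scores 65, 66, and 69
```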
The readability of the DORF passages and their reliability and validity were examined in a study of actual student performance (Powell-Smith, Good, & Atkins, 2010). Research on the reliability of DORF probes has produced alternate-form correlations for a single probe and for the median of three probes at each grade level (Powell-Smith et al., 2010). In second grade, DORF alternate-form reliability is .89 for a single probe and .96 for the median of three probes (Powell-Smith et al., 2010). The National Center on Response to Intervention (http://www.rti4success.org/) identified DIBELS 6th Edition Oral Reading Fluency as a progress monitoring tool with convincing evidence of meeting technical adequacy standards. The center had not yet reviewed the DORF tool because the assessment manual was only published in 2011 and the technical manual in 2013. For the previous edition, DIBELS 6th Edition Oral Reading Fluency, the center found convincing evidence in several key areas: students' performance level scores are adequately reliable and validly measure the construct of oral reading, and the tool is designed to be sensitive to student improvement and includes at least nine alternate forms of comparable difficulty.

Third grade DORF passages were not used in case students would be exposed to those passages the following school year. The order of the second grade reading probes was randomized for each participant. Given the number of probes available in DIBELS Next (i.e., 20), no probe needed to be repeated more than once for any participant.

Motivations for Reading Questionnaire

The Motivations for Reading Questionnaire (MRQ; Wigfield & Guthrie, 1997) was used to collect descriptive data on the reading beliefs of participating students. Students' reading beliefs were assessed with this questionnaire as part of the screening procedures, although no specific scores were used as criteria for inclusion in the study. The original measure contained 82 items (Wigfield & Guthrie, 1995). The revised version was shortened to 53 items that measure 11 constructs related to reading motivation (Wigfield & Guthrie, 1997). Appendix E includes a copy of this measure. The 11 constructs are reading efficacy, reading challenge, reading curiosity, reading involvement, importance of reading, reading work avoidance, competition in reading, recognition for reading, reading for grades, social reasons for reading, and compliance. Students responded to each item using a 4-point Likert-type scale (1 = very different from me, 2 = a little different from me, 3 = a little like me, 4 = a lot like me). The MRQ is typically completed independently by students in a group setting and takes 15 to 20 minutes. For the present study, the researcher read each of the items to the participants in a one-on-one session. The MRQ has been used in published research with students in third grade (Wigfield, Guthrie, Tonks, & Perencevich, 2004), fourth and fifth grade (Guthrie et al., 2004), and sixth, seventh, and eighth grade (Unrau & Schlackman, 2006). Wigfield and Guthrie (1997) reported reliabilities for all constructs ranging from .43 to .81. Nine of the 11 constructs (all except Work Avoidance and Reading for Grades) showed consistent reliabilities between .52 and .81 (Wigfield & Guthrie, 1997). Factor analyses conducted by the measure developers were used during the revision process (Wigfield & Guthrie, 1997).
Items loading at less than .40 on the appropriate domain (27 items) were eliminated from the scale (Wigfield & Guthrie, 1997). In addition, the majority of items (all but 7) were moderately to strongly correlated with the total score for the corresponding domain (Wigfield & Guthrie, 1997). Unrau and Schlackman (2006) also found evidence for the 11-factor model (confirmatory fit index of .90) with a sample of sixth through eighth graders. For the current study, students had completed the second grade and been promoted to third grade. Minor changes in wording were made to nine items to account for students' reading skills. In addition, the oral reading of items by the researchers to the participants accounted for participants' vocabulary and level of understanding. The research assistant stressed to the students that there were no "right" or "wrong" answers and that he or she was just interested in what the student thought about reading.

Achievement Goal Orientation Questionnaire

The Achievement Goal Orientation Questionnaire (AGOQ) was used to collect descriptive data on the achievement goal orientations of participating students. Students were assessed with this questionnaire as part of the screening procedures, although no specific scores were used as criteria for inclusion in the study. This was important information to collect because achievement goal orientations are related to student achievement (Ames, 1992; Dweck, 1986), and it is possible that certain types of performance feedback may be more effective with students holding certain achievement goal orientations. The AGOQ includes three scales (Mastery Goal Orientation, Performance-Approach Goal Orientation, and Performance-Avoid Goal Orientation) drawn from the Patterns of Adaptive Learning Scales (PALS; Midgley et al., 2000). The Mastery scale includes five items, the Performance-Approach scale includes five items, and the Performance-Avoid scale includes four items. Students respond to each item using a 5-point Likert-type scale (1 = Not at all true, 3 = Somewhat true, 5 = Very true). The list of items is included as Appendix F. Confirmatory factor analysis was used to validate the use of the revised goal scales. Internal consistency reliability coefficients for the scales range from .74 to .89 (Midgley et al., 2000). These scales have been used with third through ninth grade students, but the reliability coefficients are typically higher for older students. The researcher read each of the items to the participants in a one-on-one session. For the current study, students had completed the second grade and been promoted to third grade. Minor changes in wording were made to four items to account for students' vocabulary and level of understanding. In addition, the oral reading of items by the researchers to the participants accounted for participants' reading difficulties. The research assistant stressed to the students that there were no "right" or "wrong" answers and that he or she was just interested in what the student thought about reading.

Self-Efficacy Measure

A self-efficacy item related to oral reading fluency was constructed using Bandura's guide for constructing self-efficacy scales (Bandura, 2006). Bandura (2006) indicated that a measure of self-efficacy must be written as a judgment of capability ("can do," not "will do") to execute a certain type of performance within a specific domain (e.g., reading aloud).
In addition, responses should ideally be based on a 0 to 100 scale with ten-unit intervals for the most sensitivity and reliability (Bandura, 2006). For younger children this can be simplified to a 0 to 10 scale with single-unit intervals (Bandura, 2006). For the current study, the scale was further simplified to a 1 to 5 scale to account for students' developmental levels. The scale used was similar to that used by Shell, Colvin, and Bruning (1995) with fourth grade students. This measure was first introduced to students using a practice item with a physical task to familiarize them with the scale and the strength of their perceived efficacy (Bandura, 2006). Markers at increasing distances were placed on the floor. Students were asked to rate their degree of confidence that they could jump to the indicated distances by choosing a number on the 1 to 5 scale. After providing a confidence rating for a distance, a student was asked to attempt to jump the distance. This concrete demonstration can be used to teach children how to use the numerical scale value to convey their confidence level or perceived self-efficacy (Bandura, 2006). Students were then asked, "If you were asked to read this passage right now, how sure are you that you can read aloud well?" and indicated their confidence rating on the same 1 to 5 scale as the concrete demonstration. This sequence of a concrete demonstration followed by academic self-efficacy questions has been used frequently in self-efficacy research (Schunk, 1983a; Schunk, 2003; Zimmerman & Martinez-Pons, 1992). See Appendix G for a copy of this measure. Once the student was familiar with the measure after several sessions, the concrete demonstration was skipped. Research has typically examined self-efficacy in a pre/post-intervention design, rather than with weekly or daily measures of confidence. Because these self-efficacy measures were created for the specific purpose of the present study, validity and reliability information was not available. As a result, the portion of research question 3 using the self-efficacy ratings was more exploratory in nature.

Reader Self-Perception Scale

Due to the lack of validity and reliability information on the self-efficacy rating created for the study, research question 3 also considered student self-efficacy using a more technically adequate measure: the Reader Self-Perception Scale (RSPS; Henk & Melnick, 1995). This scale was administered at the end of each condition (baseline, intervention 1, and intervention 2). See Appendix H for a complete copy of the RSPS. This questionnaire was created by Henk and Melnick (1995) for use with fourth through sixth grade students. All 33 items on the RSPS are worded as questions that were administered verbally to participants by the researcher. Participants responded to items on a 5-point scale (5 = Strongly Agree, 4 = Agree, 3 = Undecided, 2 = Disagree, 1 = Strongly Disagree). According to Henk and Melnick (1995), the scale takes 15 to 20 minutes to administer. The scale was introduced to students as a questionnaire about reading that has no right answers, so they should be as honest as possible. The scale consists of 1 general item and 32 items that assess self-perceptions from four dimensions of self-efficacy: Progress (9 items), Observational Comparison (6 items), Social Feedback (9 items), and Physiological States (8 items). The internal consistency alpha reliabilities for the four subscales are as follows: Progress .84, Observational Comparison .82, Social Feedback .81, and Physiological States .84.
The validity of the items was determined through exploratory factor analysis on an initial pool of items developed to reflect the four factors affecting self-efficacy (i.e., performance, observational comparison, social feedback, and physiological states) as defined by Bandura (1977b; Henk & Melnick, 1995). This factor analysis supported the scales for Observational Comparison, Social Feedback, and Physiological States, but it also indicated that revisions were needed for the Performance scale (Henk & Melnick, 1995). A panel of experts helped adjust this scale to measure perceptions of personal progress and renamed it accordingly (i.e., Progress; Henk & Melnick, 1995). A second factor analysis provided support for all four scales (Henk & Melnick, 1995). This second pilot project also showed moderate significant relationships between RSPS scores and the Elementary Reading Attitudes Survey as well as a variety of standardized reading achievement measures (Henk & Melnick, 1995). Since the initial pilot projects by Henk and Melnick (1995), the measure has been used to measure the self-efficacy of other students in fourth through sixth grade (Hinson, DiStefano, & Daniel, 2003), and a second version was created for older students (Melnick, Henk, & Marinak, 2009). The scale has also been administered at three separate time points as part of another single case design study of a student struggling in reading (Nes Ferrara, 2005). An adapted version of the scale was created for use with younger children (i.e., 8- and 9-year-olds; Lynch, 2002). That version used corresponding happy faces rather than words for student responses, which may have changed students' understanding of the items (Lynch, 2002). It was therefore necessary to adapt the scale for the current study's participants. The wording of the responses was simplified for students but remained closely aligned with the original scale. Students responded to items using a 1 to 5 scale similar to the MRQ scale, with the addition of an Undecided category (1 = very different from me, 2 = a little different from me, 3 = not sure, 4 = a little like me, 5 = a lot like me). In addition, the oral reading of items by the researchers to the participants accommodated participants' reading difficulties. The research assistant stressed to the students that there were no "right" or "wrong" answers and that he or she was simply interested in what the student thought about reading.

Procedures

As part of screening procedures, a reading assessment was conducted. Baseline and experimental sessions were conducted over the 8-week summer period. Each intervention-with-feedback session lasted 30 minutes and occurred four or five times per week, depending on student availability. The final session of each phase was longer than 30 minutes to allow for the administration of the RSPS survey. Efforts were made to schedule sessions at the same time each day. The number of sessions in the baseline phase was expected to vary across participants, because the baseline phase continued until the oral reading fluency data were stable. Stability was defined as baseline data without an ascending trend (i.e., no three successive data points forming an ascending trend) and with little variability (i.e., all data points falling within the low and high range defined by M ± ½M). At least five data points were collected per participant in each phase.
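To make these stability criteria concrete, the following is a minimal sketch of the check as described above; the function name and example scores are hypothetical, and "no ascending trend" is interpreted here as the last three data points not being strictly increasing.

```python
def is_stable(scores):
    """Return True if baseline scores meet the stability criteria:
    no ascending trend across the last three data points, and all
    points within the band from M - (1/2)M to M + (1/2)M."""
    if len(scores) < 3:
        return False
    a, b, c = scores[-3:]
    ascending = a < b < c  # three successive points in an ascending trend
    mean = sum(scores) / len(scores)
    in_band = all(0.5 * mean <= s <= 1.5 * mean for s in scores)
    return (not ascending) and in_band

# Hypothetical baseline scores (words read correctly in one minute):
print(is_stable([42, 47, 41, 44, 43]))  # True: flat trend, low variability
```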
Students were matched based on their initial reading fluency scores. One student from each match (for a total of three students) received the intervention and peer-referenced performance feedback first, followed by the intervention and self-referenced performance feedback. The other three students experienced the feedback phases in the opposite order to control for any order effects associated with the feedback phases. The length of the feedback phases was randomly assigned to participants in advance, but included a minimum of five sessions per phase.

Reading Assessment

A reading assessment was conducted with students who met initial screening criteria and for whom initial parental consent was obtained. A benchmark assessment in oral reading fluency using second grade spring probes from DIBELS Next was conducted. The same procedures previously described for the oral reading fluency progress monitoring were used. In addition, students' accuracy levels were calculated. This reading assessment was used to determine whether the student had a fluency deficit (i.e., reading fewer than 82 words correct in 1 minute while reading with 92% accuracy or higher).

Research Assistant Training

Data collection and intervention sessions were conducted by three graduate students in school psychology. These graduate students had at least one year of prior experience working with students and engaging in assessment and intervention work similar to the current study (i.e., administering DORF probes, conducting surveys of students, and implementing reading interventions). These graduate assistants were trained as research assistants by the research study's primary investigator/study coordinator. Training involved direct instruction and modeling in the research protocols, including DORF administration and scoring, survey administration, creating DORF feedback graphs, providing scripted DORF feedback to students, and providing the fluency intervention to students. During training the research assistants were also observed engaging in the research protocols and were provided feedback by the primary investigator. Research assistants completed a minimum of three sample intervention sessions with the primary investigator during training and also observed the other research assistants completing their sample intervention sessions. The research assistants also completed one additional intervention session with a sample child, which was observed. Interscorer agreement calculations on DORF scoring, intervention integrity checklists, and intervention quality ratings were utilized during training to provide feedback to research assistants.

Interscorer Agreement

Interscorer agreement on DORF scoring was examined for all training administrations. AIMSweb (Edformation, 2005) does not specify a required percentage of agreement. Many studies using inter-rater reliability require at least 95% agreement, and a similar oral reading fluency tool (i.e., DIBELS ORF) identified 98-99% agreement as an appropriate target (Kaminski & Good, 2009). For the current study, research assistants were required to reach interscorer agreement (words read correctly) of at least 98% on training administrations before training was considered complete and before data collection began with participants. While the primary research assistant administered and scored the probe given to the participant, a second research assistant also listened and shadow-scored the administration. Interscorer agreement was calculated between the primary and secondary scorers of the DORF probe administrations by dividing agreements by agreements plus disagreements, multiplied by 100. In addition, interscorer agreement (words read correctly) was calculated for a selection of intervention sessions for each participant; the interscorer agreement on these intervention sessions is discussed in the results section.
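As a worked illustration of this calculation, using hypothetical counts rather than actual study data, two scorers who agree on 52 words and disagree on 1 would obtain

\[
\text{Agreement} = \frac{52}{52 + 1} \times 100 \approx 98.1\%,
\]

which would just meet the 98% training criterion.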
Intervention Integrity

An intervention integrity checklist (Appendix K) was used to evaluate the occurrence and nonoccurrence of session components, similar to the checklist used by Wanzek and Vaughn (2008). A secondary observer completed the checklist and also rated the overall quality of the intervention on a Likert-type scale ranging from 1 (low) to 3 (high) for four criteria (Appendix L). These criteria included: following of session protocol (i.e., including required components and applying scripted feedback), research assistant engagement and enthusiasm, participant attentiveness and engagement, and participant frustration. The intervention integrity checklist and intervention quality ratings were used in 100% of sample intervention sessions during the training of research assistants. In addition, intervention integrity checklists and intervention quality ratings were used during a selection of intervention sessions for each participant. These intervention integrity data are discussed in the results section.

Initial Data Collection

Students who were administered the reading assessment were also administered the MRQ and AGOQ measures orally. In addition, the research assistant spent some one-on-one time with the child to build rapport. Each research assistant was assigned to two participants. With the exception of a few sessions, the same research assistant worked with the same participant for each session in the baseline and two intervention phases.

Baseline (BL) Phase

Each student spent the first twenty minutes of a baseline phase session reading aloud to the research assistant. A selection of late first-grade and second-grade level books was provided for the student to choose from. The student was then directed to respond to the self-efficacy item. At the end of the session, the research assistant administered one DORF probe using the standardized instructions. During the baseline phase, students did not receive any feedback on their performance on these probes. Upon completion of the baseline phase sessions, the research assistant orally administered the RSPS. Each session ended with the research assistant and student playing the student's choice of card game for approximately five minutes, as time allowed, for rapport-building.

Peer-Referenced Feedback (PRF) Phase

During this phase, the student spent the first few minutes receiving feedback on the previous session's DORF performance. This feedback involved the research assistant presenting and explaining a graph illustrating the student's median score from the last session in comparison to the spring second grade norm (i.e., 106 words correct per minute). Sample scripted feedback for this phase is provided in Appendix I and was used with a feedback graph (see Figure 4). The feedback did not include an explicit goal for the student to meet. Twenty minutes were then spent on the guided repeated reading intervention. A guided repeated oral reading intervention was conducted using teacher-student assisted reading methods described by Osborn and colleagues (2003) and discussed by Kuhn and Stahl (2003). This assisted reading provides a model of fluent reading to the student.
Participants were allowed to choose their own short book from a selection of first-grade and second-grade texts but were encouraged to select texts that were not too difficult based on their reading level. These books were between 130 and 460 words in length and corresponded to Fountas and Pinnell (2005) levels E through M (i.e., first- and second-grade-level books). The research assistant started by asking the participant, "Why did you choose this book?" and "What do you think this book will be about?" The research assistant then read the book while the participant followed along. Then the participant read the same book to the research assistant, who provided encouragement and helped the student with unfamiliar or difficult words. The participant read the book a second time to encourage fluency. Then the research assistant asked the participant to verbally answer three comprehension/engagement questions about the text. One question asked about a fact or story detail from the text; another asked about personal experiences or connections to the text material; a third question asked about the student's favorite part of the text. If the participant had difficulty answering a question, the research assistant and the student reviewed the text or discussed ideas together to identify an answer. After answering these questions, the participant was asked to read the book one more time. If time remained after these activities, the procedure continued using a new book. Research has shown that three to five practice readings are optimal for fluency development (O'Shea, Sindelar, & O'Shea, 1985; Spring et al., 1981). The majority of repeated reading research studies reviewed by Kuhn and Stahl (2003) had students read passages a set number of times (i.e., three or four) rather than using a criterion reading rate. The student was then directed to respond to the self-efficacy item. At the end of the session the research assistant instructed the student to read one DORF passage using the standardized instructions. The student's score was immediately added to a new feedback graph and shared with the student. The student received feedback illustrating his or her new score in comparison to the norm. Upon completion of sessions in this phase, the research assistant orally administered the RSPS. Each session ended with the research assistant and student playing the student's choice of card game for approximately five minutes, as time allowed, for rapport-building.

Figure 4. Sample Feedback Graph for Peer-Referenced Feedback Phase. [Line graph of words read correct in 1 minute (0-120) by session number (1-10).]

Self-Referenced Feedback (SRF) Phase

During this phase, the student spent the first few minutes receiving feedback on the previous sessions' DORF performance. This feedback involved the research assistant presenting and explaining a graph illustrating the student's median score from all of the previous sessions. A sample graph is provided in Figure 5, and scripted feedback for this phase is provided in Appendix J. The feedback did not include an explicit goal for the student to meet. Twenty minutes were spent on the repeated reading intervention as previously described. The student was then directed to respond to the self-efficacy item. At the end of the session the research assistant instructed the student to read one DORF passage using the standardized instructions. The student's score was immediately added to the current feedback graph and shared with the student.
The student received feedback illustrating his or her new score in comparison to the median performance of all of the previous sessions. Upon completion of sessions in this phase, the research assistant orally administered the RSPS. Each session ended with the research assistant and student playing the student's choice of card game for approximately five minutes, as time allowed, for rapport-building.

Figure 5. Sample Feedback Graph for Self-Referenced Feedback Phase. [Line graph of words read correct in 1 minute (0-120) by session number (1-10).]

Data Analysis

To assess the effects of the different types of feedback, the oral reading fluency scores and self-efficacy ratings were graphed using visual representations similar to the research design figure (see Figure 2). Once all of the data were entered into the graph, they were visually analyzed within each phase and across phases for each participant. The data were examined for patterns regarding the level (e.g., number of words read correctly per minute, high/low self-efficacy ratings), trends in the data (e.g., whether scores or ratings were increasing or decreasing), changes in trends (e.g., slope), and variability in the data (e.g., consistency). Slopes (i.e., changes in number of words read correctly in a minute per session) for visual analysis were calculated using a basic statistical software program as the slope of a linear regression line through the data points. The data were also analyzed across participants to determine whether phase changes led to similar patterns of responding and whether there was evidence to support a functional relation. These visual analyses were used as the primary way of determining the effects of different types of performance feedback on the oral reading fluency and self-efficacy ratings of the participants. This type of analysis is typical in single case design research (Kazdin, 2011; Riley-Tillman & Burns, 2009). Because there is not much research available to support the repeated use of the RSPS, and because only limited data were available with six participants, statistical analysis of the differences in scores between the phases was not appropriate. Visual analysis of the scores on the four factors measured by the questionnaire (Progress, Observational Comparison, Social Feedback, and Physiological States) was also used. The data were examined for patterns regarding the level of scores across phases.

Effect size calculations were also used to supplement visual inspection of the effect of the different types of performance feedback on reading fluency (research question 1). Effect sizes were calculated using an improvement rate difference (Parker, Vannest, & Brown, 2009). Many types of effect size calculations are difficult to use with single-case design research because the data violate assumptions needed for certain calculations, the calculations are difficult to complete, or the effects can be misinterpreted. Improvement rate difference (IRD) "…is defined as the improvement rate (IR) of the treatment phase(s) minus the improvement rate of the baseline phase(s)…" (Parker et al., 2009, p. 138). To address research question 1, the IRD effect sizes were calculated using oral reading fluency data to compare each of the two intervention phases to the baseline phase. "An improved data point in baseline is defined as one that ties or exceeds any data points in a treatment phase… [and] an improved data point in a treatment phase is defined as any which exceeds all data points in the baseline phase" (Parker et al., 2009, p. 139). Visual analysis was used to determine the number of improved data points in each phase, which was used to calculate an improvement rate for each phase (i.e., number of points improved divided by total number of points in the phase; Parker et al., 2009). An IRD of .50 (i.e., 50%) indicates that improvements could be related to chance (Parker et al., 2009). IRDs between .50 and 1.00 indicate some level of improvement from baseline to a treatment phase (Parker et al., 2009). In the current study, negative IRD scores indicate a large amount of overlap between scores in the baseline phase and an intervention phase; they represent maintenance, or no increase, in oral reading fluency scores in an intervention phase. Effect size calculations were not used for research questions 2 and 3 due to the limited data available from the RSPS and the lack of validity and reliability information on the one-item measure of self-efficacy.
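The two phase-level statistics described above can be expressed in a minimal computational sketch. The function names below are illustrative rather than drawn from the study's materials; the improvement-rate logic follows the Parker et al. (2009) definitions quoted above.

```python
import numpy as np

def phase_slope(scores):
    """Slope of a linear regression line through a phase's data points,
    in words read correctly per session."""
    sessions = np.arange(1, len(scores) + 1)
    slope, _intercept = np.polyfit(sessions, scores, 1)
    return slope

def ird(baseline, treatment):
    """Improvement rate difference: the improvement rate of the treatment
    phase minus the improvement rate of the baseline phase. A baseline
    point is 'improved' if it ties or exceeds any treatment point; a
    treatment point is 'improved' if it exceeds all baseline points
    (Parker et al., 2009)."""
    base_improved = sum(b >= min(treatment) for b in baseline)
    treat_improved = sum(t > max(baseline) for t in treatment)
    return treat_improved / len(treatment) - base_improved / len(baseline)
```

For example, Abby's PRF comparison (Table 8 in the results) counted 5 of 5 baseline points and 6 of 17 PRF points as improved, so IRD = 6/17 − 5/5 ≈ −.65, the −65% reported there.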
CHAPTER 4: RESULTS

The purpose of the study was to examine the effects of a skill-based reading intervention in combination with performance-based feedback on oral reading fluency and self-efficacy. By measuring the potential changes in the oral reading fluency and self-efficacy ratings of students struggling with reading fluency across baseline and two intervention conditions, the effects of performance-based feedback and of different types of feedback (i.e., self-referenced and peer-referenced) were examined.

Initial Student Data

The following paragraphs describe the results of the MRQ and AGOQ for each of the individual participants as well as information about the intervention sessions. Tables 2 and 3 present detailed information regarding participants' scores on these surveys.

Abby

Abby's responses on the MRQ indicated that she was most motivated by the importance of reading, recognition for reading, and reading efficacy. She was least motivated by competition in reading and reading curiosity. Her responses on the AGOQ indicated that she equally endorsed a mastery orientation and a performance-approach goal orientation. Abby participated in a total of 38 intervention sessions. Abby was generally a cooperative participant and developed good rapport with the research assistant. Although her intervention sessions took place in close proximity to her siblings, she was able to stay focused for the majority of each session. Abby was less interested in the intervention activities when choosing unfamiliar books and during the last four weeks of the intervention.
Table 2
Average Student Ratings on Motivation for Reading Questionnaire (MRQ)

Construct         Abby   Becca  Claire  Diane  Eric   Frank
Efficacy          5.00   3.00   4.67    3.67   3.67   4.67
Challenge         4.20   3.00   2.80    2.80   4.20   4.40
Curiosity         3.50   2.67   4.17    5.00   4.00   4.33
Involvement       4.50   4.33   4.17    4.17   4.67   4.33
Importance        5.00   4.00   5.00    3.00   5.00   4.00
Work Avoidance    2.33   4.00   4.00    3.33   3.83   2.67
Competition       5.00   3.80   4.60    5.00   5.00   3.50
Recognition       4.00   4.25   3.75    4.75   4.75   4.00
Grades            4.14   4.14   4.43    5.00   4.14   3.14
Social Reasons    4.20   3.60   4.40    4.80   4.60   4.20
Compliance        4.50   2.50   3.50    3.75   3.50   1.75

Table 3
Student Achievement Goal Orientation Questionnaire (AGOQ) Data

Student  Mastery  Performance-Approach  Performance-Avoidance
Abby     4.2      4.2                   2.0
Becca    4.8      3.6                   3.0
Claire   5.0      4.2                   2.0
Diane    4.6      3.4                   1.5
Eric     4.6      4.4                   5.0
Frank    4.4      4.4                   3.0

Becca

Becca's responses on the MRQ indicated that she was most motivated by grades in reading, social reasons for reading, and involvement in reading. She was least motivated by reading work avoidance and reading curiosity. Her responses on the AGOQ indicated that she most highly endorsed a mastery goal orientation. Becca participated in a total of 40 intervention sessions. Becca was always an engaged and cooperative participant and developed good rapport with the research assistant.

Claire

Claire's responses on the MRQ indicated that she was most motivated by the importance of reading, reading efficacy, and recognition for reading. She was least motivated by reading challenge, reading work avoidance, and reading grades. Her responses on the AGOQ indicated that she most highly endorsed a mastery goal orientation, followed by a performance-approach goal orientation. Claire participated in a total of 34 intervention sessions, with a maximum of four sessions per week due to scheduling conflicts. Claire was generally a cooperative participant and developed good rapport with the research assistant. However, during the final weeks of the intervention, Claire had difficulty focusing and became defiant with the research assistant. Specifically, Claire had difficulty transitioning from preferred summer activities to intervention sessions. The research assistant determined that beginning with intervention session 27, an external reinforcement system would be used for behavior management with Claire and her twin sister (Diane). It was clearly articulated to the student that this reinforcement was for her behavior and was unrelated to her reading performance. Claire could earn one "good behavior ticket" per intervention session. After the student earned four tickets, the research assistant gave the student a coloring book. Between intervention sessions 27 and 34, Claire earned one coloring book.

Diane

Diane's responses on the MRQ indicated that she was most motivated by reading curiosity, recognition for reading, social reasons for reading, compliance, and reading grades. She was least motivated by reading challenge and the importance of reading. Her responses on the AGOQ indicated that she most highly endorsed a mastery goal orientation. Diane participated in a total of 34 intervention sessions, with a maximum of four sessions per week due to scheduling conflicts. Diane was usually unfocused and defiant during intervention sessions. Although her mother indicated that Diane would begin taking a small dose of ADHD medication to improve her behavior during sessions, it was unclear whether the mother followed through. There were some days when Diane was very focused, but on most days she required frequent prompting and redirection.
Diane had particular difficulty transitioning from preferred summer activities to intervention sessions. The research assistant determined that beginning with intervention session 27, an external reinforcement system would be used for behavior management. The research assistant used the same external reinforcement system that was used with Claire. Between intervention sessions 27 and 34, Diane earned one coloring book.

Eric

Eric's responses on the MRQ indicated that he was most motivated by recognition for reading, importance of reading, reading involvement, reading grades, and compliance. He was least motivated by reading work avoidance and reading efficacy. His responses on the AGOQ indicated that he most highly endorsed a performance-avoidance orientation, followed by a mastery orientation, and then a performance-approach goal orientation. Eric participated in a total of 39 intervention sessions. Eric was generally an engaged and cooperative participant and developed good rapport with the research assistant. His sleep schedule during the summer vacation, however, was variable, and he was often observed yawning during intervention sessions.

Frank

Frank's responses on the MRQ indicated that he was most motivated by reading efficacy, reading challenge, reading curiosity, reading involvement, and compliance. He was least motivated by reading work avoidance and competition in reading. His responses on the AGOQ indicated that he equally endorsed a mastery orientation and a performance-approach goal orientation. Frank participated in a total of 30 intervention sessions. Frank's mother frequently altered her schedule and cancelled many of the student's intervention sessions as a result. He also attended a different elementary school in the same district as the other students. That elementary school had recently changed to a "balanced calendar," resulting in a shorter summer vacation for students. Frank began third grade right before his 22nd intervention session. Frank was generally a compliant and engaged participant and developed good rapport with the research assistant. He often questioned or commented on having to read texts over as part of the repeated readings intervention; however, he responded well to redirection without complaint.

Interscorer Agreement

Interscorer agreement (words read correctly) on DORF scoring was calculated for 27 to 43% of all probe administrations per participant (between 11 and 13 administrations per participant) during participant sessions. These probe administrations were selected randomly. As in training, the primary research assistant administered and scored each probe while a second research assistant listened and shadow-scored the administration, and agreement was calculated by dividing agreements by agreements plus disagreements, multiplied by 100. The average interscorer agreement was 99.41% for words read correctly (range 92-100%).

Intervention Integrity

Research assistants conducting intervention sessions with participants kept logs of the dates/times of sessions, texts used during the fluency intervention, DORF probes administered, participants' DORF scores, and feedback provided. A second research assistant observed 27 to 43% of all intervention sessions with a participant and a primary research assistant (i.e., 11 to 13 sessions per participant). These sessions were selected randomly.
This second research assistant completed the intervention integrity checklist and intervention quality rating during each observation (see Appendices K and L). The average intervention integrity was 99.6% and ranged from 90.9% to 100%. The average score for each of the four criteria of intervention quality was also calculated. The average score for following session protocol was 3.0 and ranged from 2 to 3. All research assistants received a score of 3 for research assistant engagement and enthusiasm across all randomly selected sessions. The average participant attentiveness and engagement level was 2.8 and ranged from 2 to 3. The average participant frustration level was 1.0 and ranged from 1 to 3. Participant attentiveness/engagement levels were generally high and frustration levels were generally low, even for Claire and Diane, who seemed less interested in the intervention in the final weeks. These participants' behavior was most challenging during the transition from preferred summer activities to the intervention sessions. The average intervention integrity and intervention quality ratings for each participant are presented in Table 4. Overall, participants were attentive and engaged during sessions and showed few signs of frustration.

Table 4
Average Percentage Intervention Integrity and Average Intervention Quality Ratings

Student  Intervention  Following          Research Assistant        Participant Attentiveness  Participant
         Integrity     Session Protocol   Engagement & Enthusiasm   & Engagement               Frustration
Abby     100%          3.0                3.0                       2.9                        1.0
Becca    100%          3.0                3.0                       3.0                        1.0
Claire   98.5%         2.8                3.0                       2.8                        1.0
Diane    99.2%         2.9                3.0                       2.5                        1.0
Eric     100%          3.0                3.0                       2.9                        1.0
Frank    100%          3.0                3.0                       2.6                        1.2

Research Question 1

The first research question was intended to examine the effects of self-referenced performance feedback (SRF) compared to peer-referenced performance feedback (PRF) when combined with a repeated readings intervention. Specifically, the effects on the oral reading fluency of students struggling in reading were tested. It was hypothesized that students' rates of oral reading fluency would be higher (i.e., show a higher increasing trend) when they received the repeated readings intervention with self-referenced performance feedback than when the feedback was peer-referenced. It was particularly important to consider the trends in the data within phases, given that all participants were expected to improve their oral reading fluency while receiving the repeated reading intervention. The oral reading fluency rate for each phase, as measured by an average, was expected to increase from the first treatment phase to the second due to the cumulative amount of intervention received. For example, a student who received PRF first and SRF second was likely to have a higher oral reading fluency rate (i.e., a higher mean) in the SRF phase simply because it was the second treatment phase. Similarly, a student who received SRF first and PRF second might have a higher oral reading fluency rate in the PRF phase because it was the later treatment phase, which could mask any positive effects of SRF on students' rates of growth in reading rate. Closer examination of the changes in oral reading fluency rate trends could help determine whether an increase in performance was related to a specific type of feedback. The level, variability, and trend of oral reading fluency data were examined using visual inspection, with careful consideration given to trend, comparing the two intervention phases.
Visual inspection is used in single-case design methodology to compare these different dimensions (i.e., level, variability, trend) of the dependent variable. A functional relation between the independent variable (i.e., type of feedback combined with the intervention) and the dependent variable (i.e., oral reading fluency) would be identified for a participant if the individual read a considerably higher number of words and, more importantly, showed a considerably higher rate of growth in oral reading fluency during one treatment phase compared to the other. The consistency of this functional relation was examined by comparing the relation between the variables across the participants, who switched to the different intervention phases at different times. IRD effect sizes were calculated for each individual participant to compare the first intervention phase to the second intervention phase. The improvement rate of the baseline phase was subtracted from the improvement rate of a treatment (i.e., intervention) phase to obtain an IRD effect size. This IRD was compared to the IRD effect size calculated by subtracting the improvement rate of the baseline from the improvement rate of the other intervention phase.

Tables 5 through 7 compare baseline and intervention phase means, standard deviations, and trends for all six participants. Figures 6 through 9 illustrate students' oral reading fluency (words read correctly in one minute) and accuracy (percentage of words read correctly) across baseline, PRF, and SRF phases on DIBELS Next Oral Reading Fluency passages. Visual analysis for this research question focused on oral reading fluency rates (i.e., words read correctly in one minute) rather than accuracy levels. Ideally, students would read passages with 95% accuracy. The students in the current study read the majority of passages with 90% accuracy or higher, but there were exceptions for all six participants when their accuracy levels fell below this threshold. A simple visual analysis of oral reading fluency rates in combination with accuracy rates was conducted to determine whether students were "speed reading" and making a considerable number of errors. It was determined that gains in oral reading fluency rates did not correspond to decreases in accuracy on the same passages. Tables 8 and 9 present the results of the IRD effect size calculations.

First, general patterns identified through visual analysis across participants that correspond to this research question are described. Next, the results for each individual participant are described, in addition to the results of the effect size calculations. Overall, results of the visual analysis were variable across participants. Oral reading fluency averages were higher during the SRF phase than the PRF phase for three participants. However, this was potentially influenced by the order of the phases for two of these participants. Namely, it was expected that those who received the SRF phase last would have higher overall scores in this phase given the cumulative effects of the intervention over time on participants' reading development. For four participants, the oral reading fluency average was higher for whichever phase was last. The trend, or slope, of oral reading fluency is less likely to be influenced by these cumulative effects. Only two participants had higher trends (i.e., slopes) in the SRF phase compared to the PRF phase.
IRD effect sizes were negative for both intervention phases for all six participants, indicating maintenance, rather than progress, in scores between the baseline and intervention phases for each participant. IRD effect sizes, however, were influenced by the variability in oral reading fluency scores. For example, a lower number of words correct per minute in an intervention phase would result in more "improved data points" in the baseline phase for the purpose of the IRD calculations. For four participants, effect sizes were slightly higher for whichever intervention phase occurred second. For two students this was a slightly higher effect size for the PRF phase, and for two other students this was a slightly higher effect size for the SRF phase, indicating less overlap between baseline oral reading fluency scores and the final intervention phase.

Table 5
Oral Reading Fluency (Words Read Correctly in One Minute)

Student  Baseline  Baseline  Baseline  PRF    PRF    PRF    SRF    SRF    SRF
         Mean      SD        Slope     Mean   SD     Slope  Mean   SD     Slope
Abby     41.80     4.32      1.70      42.76  9.64   -0.34  47.13  10.09  -0.68
Becca    57.57     8.10      3.57      58.14  12.39  1.10   68.63  8.62   0.61
Claire   67.31     12.33     1.05      78.44  12.42  1.63   74.00  10.40  -0.23
Diane    61.38     4.84      -0.42     69.86  14.66  1.30   66.67  9.54   0.57
Eric     53.30     9.36      -0.24     60.09  6.80   -0.82  60.89  10.18  0.46
Frank    72.00     9.77      -0.54     84.29  11.03  1.61   74.00  9.46   3.02

Table 6
Variability in Oral Reading Fluency (Words Read Correctly in One Minute)

Student  Baseline  Baseline  PRF      PRF      SRF      SRF
         Minimum   Maximum   Minimum  Maximum  Minimum  Maximum
Abby     36        47        27       56       32       67
Becca    48        70        35       81       49       83
Claire   44        85        55       98       55       84
Diane    56        72        25       84       57       86
Eric     36        66        46       70       41       74
Frank    56        90        63       96       62       89

Table 7
Oral Reading Fluency Accuracy (Percentage of Words Read Correctly)

Student  Baseline  Baseline  Baseline  PRF    PRF   PRF    SRF    SRF    SRF
         Mean      SD        Slope     Mean   SD    Slope  Mean   SD     Slope
Abby     93.62     3.51      -0.09     92.30  7.78  0.07   95.60  3.19   0.14
Becca    93.22     2.05      -0.07     92.64  3.95  -0.18  94.45  3.39   0.07
Claire   97.07     2.18      -0.09     97.30  2.26  0.08   96.70  2.56   0.46
Diane    97.17     1.78      0.06      94.03  5.11  0.02   92.13  11.89  -0.64
Eric     90.23     6.87      -0.27     93.68  2.30  -0.14  94.05  3.63   -0.13
Frank    86.95     11.07     0.51      91.21  8.28  2.37   92.80  3.41   -0.49

Table 8
Results of Improvement Rate Difference (IRD) Visual Analysis and Calculations for PRF Phase

         Baseline                                PRF
Student  Improved  Total   Improvement  Improved  Total   Improvement  IRD
         Points    Points  Rate         Points    Points  Rate
Abby     5         5       100%         6         17      35%          -65%
Becca    7         7       100%         2         14      14%          -86%
Claire   13        16      81%          2         9       22%          -59%
Diane    8         8       100%         6         14      43%          -57%
Eric     8         10      80%          2         11      18%          -62%
Frank    11        14      79%          2         7       29%          -50%

Table 9
Results of Improvement Rate Difference (IRD) Visual Analysis and Calculations for SRF Phase

         Baseline                                SRF
Student  Improved  Total   Improvement  Improved  Total   Improvement  IRD
         Points    Points  Rate         Points    Points  Rate
Abby     5         5       100%         8         16      50%          -50%
Becca    6         7       86%          10        19      53%          -33%
Claire   13        16      81%          0         9       0%           -81%
Diane    7         8       88%          3         12      25%          -63%
Eric     9         10      90%          7         18      39%          -51%
Frank    11        14      79%          0         9       0%           -79%

Figure 6. Oral Reading Fluency (Number of Words Read Correctly) for Abby, Becca, and Claire. [Line graphs of words read correctly (0-100) by intervention session (1-40), with phase changes between baseline, PRF, and SRF marked.]
Figure 7. Oral Reading Fluency (Number of Words Read Correctly) for Diane, Eric, and Frank. [Line graphs of words read correctly (0-100) by intervention session (1-40), with phase changes marked.]

Figure 8. Oral Reading Accuracy (Percentage of Words Read Correctly) for Abby, Becca, and Claire. [Line graphs of accuracy (50-100%) by intervention session (1-40), with phase changes marked.]

Figure 9. Oral Reading Accuracy (Percentage of Words Read Correctly) for Diane, Eric, and Frank. [Line graphs of accuracy (50-100%) by intervention session (1-40), with phase changes marked.]

Abby

Visual analysis. When Abby first switched from baseline to PRF, her oral reading fluency scores appeared to be increasing. This trend was not maintained, and Abby's oral reading fluency scores were fairly stable across both intervention phases. There was also a considerable amount of variability in her scores in both intervention phases. Abby's baseline oral reading fluency scores ranged from 36 to 47 with an average score of 41.80; her baseline trend was positive but small (1.70 words correct in a minute per session). Abby received PRF first and SRF second. In the PRF phase, Abby's oral reading fluency scores ranged from 27 to 56, with an average score of 42.76. Her score trend was negative but nearly flat (-0.34) in this phase. In the SRF phase, Abby's oral reading fluency scores ranged from 32 to 67, with an average score of 47.13. The trend in the scores was negative but small (-0.68). These results suggest that Abby did not make considerable gains in either intervention phase. Abby's average oral reading fluency score increased only 4.37 between the PRF and SRF phases. The results do not demonstrate a functional relation between SRF with repeated readings and improved oral reading fluency. Based on the trends in each phase, Abby's oral reading fluency scores reflected a more negative slope in the SRF phase.

Effect size. Using IRD visual analysis and calculations, Abby had an effect size of -65% for the PRF phase and -50% for the SRF phase when each was compared to the baseline phase.
This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase. The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Abby, the IRD for the SRF phase was less negative than the IRD for the PRF phase.

Becca

Visual analysis. Although she met the criteria for switching to the intervention phase (i.e., the last three data points were not in an ascending trend and all data points fell within the low and high range of M ± ½M), Becca's oral reading fluency scores in baseline were generally increasing. This may have been the result of the private tutoring she received two mornings a week throughout the summer while the research study took place. Becca struggled with reading books independently during the baseline phase. To prevent Becca from becoming frustrated and discouraged, she was switched to the intervention phase as soon as she met the stability criteria, despite her generally increasing trend. When Becca first switched from baseline to PRF, her scores were quite variable. Becca appeared to make some limited progress over time in the PRF phase. There was slightly less variability in oral reading fluency scores after she switched to the SRF phase, and there was still a general increase in scores over time, but this increase was small. Becca's baseline oral reading fluency scores ranged from 48 to 70 with an average score of 57.57; her baseline trend was positive (3.57). Becca received PRF first and SRF second. In the PRF phase, Becca's oral reading fluency scores ranged from 35 to 81, with an average score of 58.14. Her score trend was positive but small (1.10) in this phase. In the SRF phase, Becca's oral reading fluency scores ranged from 49 to 83, with an average score of 68.63. The trend in the scores was positive but small (0.61). These results suggest that Becca made some gains in the SRF phase. Becca's average oral reading fluency score increased 10.49 between the PRF and SRF phases. The results do not demonstrate a clear functional relation between SRF with repeated readings and improved oral reading fluency. Although her scores were more consistent (i.e., less variable) and her average oral reading fluency score was higher in the SRF phase, Becca's oral reading fluency scores reflected a more positive slope in the PRF phase. She continued to receive private tutoring throughout the summer, but it is unclear whether this tutoring focused specifically on her oral reading fluency or related reading skills.

Effect size. Using IRD visual analysis and calculations, Becca had an effect size of -86% for the PRF phase and -33% for the SRF phase when each was compared to the baseline phase. This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase. The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Becca, the IRD for the SRF phase was less negative than the IRD for the PRF phase.

Claire

Visual analysis. Based on visual analysis, it appeared that Claire's reading performance was similar across the baseline and PRF phases. Oral reading fluency scores were variable but increased in those phases. Claire's progress appeared to stall in the SRF phase.
Claire's baseline oral reading fluency scores ranged from 44 to 85 with an average score of 67.31; her baseline trend was positive but small (1.05). Claire received PRF first and SRF second. In the PRF phase, Claire's oral reading fluency scores ranged from 55 to 98, with an average score of 78.44. Her score trend was positive but small (1.63) in this phase. In the SRF phase, Claire's oral reading fluency scores ranged from 55 to 84, with an average score of 74.00. The trend in the scores was negative but nearly flat (-0.23). These results suggest that Claire made limited gains in the intervention phases. Claire's average oral reading fluency score decreased 4.44 between the PRF and SRF phases. The results do not demonstrate a clear functional relation between SRF with repeated readings and improved oral reading fluency. Instead, Claire's data suggest a functional relation between PRF with repeated readings and improved oral reading fluency. There was more variability in her scores in the PRF phase, but her average oral reading fluency score was higher in that phase, and her oral reading fluency scores reflected a more positive slope there as well.

Effect size. Using IRD visual analysis and calculations, Claire had an effect size of -59% for the PRF phase and -81% for the SRF phase when each was compared to the baseline phase. This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase. The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Claire, the IRD for the PRF phase was less negative than the IRD for the SRF phase.

Diane

Visual analysis. Based on visual analysis, Diane had a positive, though small, response to the intervention phases. There was a considerable amount of variability in her scores, but Diane's performance generally increased across the SRF and PRF intervention phases. Diane's baseline oral reading fluency scores ranged from 56 to 72 with an average score of 61.38; her baseline trend was negative but small (-0.42). Diane received SRF first and PRF second. In the SRF phase, Diane's oral reading fluency scores ranged from 57 to 86, with an average score of 66.67. Her score trend was positive but small (0.57) in this phase. In the PRF phase, Diane's oral reading fluency scores ranged from 25 to 84, with an average score of 69.86. The trend in the scores was positive but small (1.30). These results suggest that Diane made limited gains in the intervention phases. Diane's average oral reading fluency score increased only 3.19 between the SRF and PRF phases. The results do not demonstrate a clear functional relation between SRF with repeated readings and improved oral reading fluency. Instead, Diane's data show some support for a functional relation between PRF with repeated readings and improved oral reading fluency. Although her oral reading fluency scores were more consistent (i.e., less variable) in the SRF phase, the average score was higher and the scores reflected a more positive slope in the PRF phase.

Effect size. Using IRD visual analysis and calculations, Diane had an effect size of -57% for the PRF phase and -63% for the SRF phase when each was compared to the baseline phase. This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase.
The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Diane, the IRD for the PRF phase was less negative than the IRD for the SRF phase.

Eric

Visual analysis. Based on visual analysis, it appeared that Eric's performance was similar across conditions, with no clear patterns suggesting that one phase was associated with a higher performance level or slope than the other. There was more variability in scores in the SRF phase, and performance appeared to be generally stable. Eric's baseline oral reading fluency scores ranged from 36 to 66 with an average score of 53.30; his baseline trend was negative but nearly flat (-0.24). Eric received SRF first and PRF second. In the SRF phase, Eric's oral reading fluency scores ranged from 41 to 74, with an average score of 60.89. His score trend was positive but small (0.46) in this phase. In the PRF phase, Eric's oral reading fluency scores ranged from 46 to 70, with an average score of 60.09. The trend in the scores was negative but small (-0.82). These results suggest that Eric did not make progress between the intervention phases. Eric's average oral reading fluency score decreased only 0.80 between the SRF and PRF phases. The results do not demonstrate a clear functional relation between SRF with repeated readings and improved oral reading fluency. Eric's oral reading fluency scores were similarly variable across the SRF and PRF phases. His scores did reflect a more positive slope in the SRF phase, but this slope was very small.

Effect size. Using IRD visual analysis and calculations, Eric had an effect size of -62% for the PRF phase and -51% for the SRF phase when each was compared to the baseline phase. This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase. The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Eric, the IRD for the SRF phase was less negative than the IRD for the PRF phase.

Frank

Visual analysis. Frank's baseline performance was higher than that of the other participants. Based on visual analysis, Frank responded positively to the SRF phase. His oral reading fluency performance generally increased during this phase. He continued to make limited gains in the PRF phase, although his progress seemed to stall in that phase. There was a considerable amount of variability in Frank's baseline and PRF phases. The trend in scores was clearest in the SRF phase, although he appeared to continue to make limited progress after switching to the PRF phase. Frank's baseline oral reading fluency scores ranged from 56 to 90 with an average score of 72.00; his baseline trend was negative but small (-0.54). Frank received SRF first and PRF second. In the SRF phase, Frank's oral reading fluency scores ranged from 62 to 89, with an average score of 74.00. His score trend was positive and larger (3.02) in this phase. In the PRF phase, Frank's oral reading fluency scores ranged from 63 to 96, with an average score of 84.29. The trend in the scores was positive but small (1.61). These results suggest that Frank made limited gains in the intervention phases. Frank's average oral reading fluency score increased 10.29 between the SRF and PRF phases.
It should be noted that Frank's third grade school year began toward the end of his SRF phase and continued throughout his PRF phase, which may also have improved his oral reading. The results do not demonstrate a clear functional relation between SRF with repeated readings and improved oral reading fluency. Although his oral reading fluency scores were similarly variable across the SRF and PRF phases, the average score was higher in the PRF phase and the scores reflected a more positive slope in the SRF phase.

Effect size. Using IRD visual analysis and calculations, Frank had an effect size of -50% for the PRF phase and -79% for the SRF phase when each was compared to the baseline phase. This indicates that there was not a positive effect of SRF on oral reading fluency rates when compared to the prior PRF phase. The negative IRD scores indicate that there was substantial overlap in oral reading fluency scores between each intervention phase and the baseline phase, and a general maintenance in the level of scores. For Frank, the IRD for the PRF phase was less negative than the IRD for the SRF phase.

Research Question 2

The second research question addressed the effect of a repeated reading intervention and the sharing of any type of performance feedback on the self-efficacy ratings of students struggling in reading. It was hypothesized that students' self-efficacy ratings would be higher in the intervention phases than in the baseline phase, when they received no intervention or feedback. The level, variability, and trend of self-efficacy ratings were examined using visual inspection, comparing the baseline phase to the combined intervention phases. In addition, the levels of the RSPS subscale scores were examined, comparing the baseline phase to the combined intervention phases. Figures 10 and 11 illustrate students' self-efficacy ratings across baseline, PRF, and SRF phases. Table 10 compares baseline and intervention phase means, ranges, and trends for all six participants. Students rated their degree of confidence in reading a passage aloud well by recording a number from 1 to 5 (1 = I'm sure I cannot, 2 = Don't think I can, 3 = Maybe I can, 4 = Pretty sure I can, 5 = I'm sure I can). Figures 12 and 13 illustrate students' Reader Self-Perception Scale (RSPS) subscale scores across baseline, PRF, and SRF phases. Table 11 compares baseline and intervention phase scores on each RSPS subscale for each of the six participants. All 33 items on the RSPS are worded as questions that were administered verbally to participants by the researcher. Students rated their agreement with one general perception item and with items on four subscales (i.e., Progress, Observational Comparison, Social Feedback, and Physiological States) on a 5-point scale (5 = Strongly Agree, 4 = Agree, 3 = Undecided, 2 = Disagree, 1 = Strongly Disagree). For each of the four subscales, Henk and Melnick (1995) outlined score interpretations as low, average, or high. Score interpretation, as defined by Henk and Melnick, is presented in Table 12. First, general patterns identified through visual analysis across participants that correspond to this research question are described. Next, the results for each individual participant are described. Self-efficacy rating averages were higher during the intervention phases (i.e., SRF and PRF combined) than during baseline for four participants: Abby, Becca, Claire, and Frank.
Overall, these four participants rated their confidence in their abilities higher when receiving intervention to improve their skills and some form of feedback about their performance. Only two participants had higher self-efficacy rating trends (i.e., slopes) in the feedback phases compared to the baseline phase, in which they received no intervention and no feedback. The limited data provided by the RSPS do not allow for the clear demonstration of a functional relation between repeated readings with performance feedback and higher self-efficacy ratings, but they may provide some support that providing intervention and feedback is associated with higher self-efficacy ratings. Although there were some differences for individual subscales for some participants, RSPS scores were generally consistent with baseline levels or slightly higher in the intervention phases for five of the six participants.

Table 10
Self-Efficacy Ratings

Student   Baseline Mean   Baseline Range   Baseline Slope   PRF & SRF Combined Mean   PRF & SRF Combined Range   PRF & SRF Combined Slope
Abby      3.8             3-5               0.0             4.1                       3-5                         0.1
Becca     3.0             2-4              -0.1             4.1                       3-5                         0.0
Claire    1.8             1-5              -0.1             4.3                       1-5                         0.0
Diane     4.0             4-4               0.0             4.0                       4-4                         0.0
Eric      4.0             3-5               0.0             3.4                       3-4                         0.0
Frank     2.7             1-3               0.1             2.9                       1-3                         0.0

[Figure 10 appeared here: line graphs of self-efficacy ratings (1-5) by intervention session for Abby, Becca, and Claire across the baseline and the two feedback phases.]
Figure 10. Self-Efficacy Ratings for Abby, Becca, and Claire

[Figure 11 appeared here: line graphs of self-efficacy ratings (1-5) by intervention session for Diane, Eric, and Frank across the baseline and the two feedback phases.]
Figure 11. Self-Efficacy Ratings for Diane, Eric, and Frank

Table 11
Reader Self-Perception Scale Data

Baseline scores:
Student   General Perception   Progress   Observational Comparison   Social Feedback   Physiological States
Abby      3                    33         19                          31                32
Becca     4                    37         23                          36                36
Claire    5                    43         30                          42                40
Diane     5                    44         23                          43                35
Eric      5                    44         18                          37                33
Frank     3                    41         20                          29                36

PRF & SRF combined scores:
Student   General Perception   Progress   Observational Comparison   Social Feedback   Physiological States
Abby      4.5                  36         18                          35.5              34.5
Becca     4.5                  43         22.5                        36                34
Claire    5                    44         19.5                        43                38
Diane     1                    31         17.5                        27                23.5
Eric      4                    33         18                          37.5              31.5
Frank     5                    45         29.5                        25                38.5

Table 12
Reader Self-Perception Scale Score Interpretation (Based on Henk & Melnick, 1995)

Score Interpretation   Progress   Observational Comparison   Social Feedback   Physiological States
High                   44+        26+                         38+               37+
Average                35-43      17-25                       28-37             26-36
Low                    0-34       0-16                        0-27              0-25
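Throughout the participant summaries that follow, raw subscale scores are translated into these low, average, and high descriptors. Below is a minimal sketch of that lookup using the Table 12 cutoffs; the function and dictionary names are my own and are not part of the RSPS itself.

    # Table 12 cutoffs (Henk & Melnick, 1995), stored per subscale as
    # (lowest score in the average range, lowest score in the high range).
    RSPS_BANDS = {
        "Progress": (35, 44),
        "Observational Comparison": (17, 26),
        "Social Feedback": (28, 38),
        "Physiological States": (26, 37),
    }

    def interpret_rsps(subscale, score):
        """Map a raw RSPS subscale score to its Table 12 interpretation."""
        average_min, high_min = RSPS_BANDS[subscale]
        if score >= high_min:
            return "high"
        if score >= average_min:
            return "average"
        return "low"

    # Example: a Progress score of 33 falls in the low range (0-34).
    # interpret_rsps("Progress", 33)  ->  "low"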
[Figure 12 appeared here: line graphs of RSPS subscale scores (Progress, Observational Comparison, Social Feedback, Physiological States) by intervention session for Abby, Becca, and Claire across the baseline and the two feedback phases.]
Figure 12. Reader Self-Perception Scale Data for Abby, Becca, and Claire

[Figure 13 appeared here: line graphs of RSPS subscale scores (Progress, Observational Comparison, Social Feedback, Physiological States) by intervention session for Diane, Eric, and Frank across the baseline and the two feedback phases.]
Figure 13. Reader Self-Perception Scale Data for Diane, Eric, and Frank

Abby

Self-Efficacy Ratings. Visual analysis of Abby's self-efficacy ratings in the baseline phase compared to the intervention phases indicated that there was less variability in her ratings, but the overall level of her ratings did not change. There were no clear patterns suggesting that the intervention phases were related to higher levels or trends in self-efficacy ratings. Abby's baseline self-efficacy ratings ranged from 3 to 5 with an average rating of 3.8; her baseline trend was flat (0.0). In the intervention phases, Abby's self-efficacy ratings ranged from 3 to 5, with an average rating of 4.1. The trend in her ratings in the intervention phases was positive but nearly flat (0.1). These results suggest little difference in Abby's self-efficacy ratings after receiving intervention and feedback. Abby's average self-efficacy ratings increased only slightly between the baseline and intervention phases. The results do not demonstrate a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale. Based on simple visual analysis of Abby's RSPS scores, it appeared that Abby rated her self-efficacy higher at the end of the intervention phases compared to the baseline phase, but the differences in ratings were minimal. When administered at the end of the baseline phase, Abby's ratings indicated she scored in the low range on the Progress subscale (33), the average range on the Observational Comparison subscale (19), the average range on the Social Feedback subscale (31), and the average range on the Physiological States subscale (32). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. Abby reported higher ratings in the intervention phases on the Progress subscale (36), placing her score in the average range. Her ratings did not change much for the Observational Comparison subscale, and her score (18) remained in the average range. On the Social Feedback (35.5) and Physiological States (34.5) subscales, Abby reported slightly higher ratings in the intervention phases, but she still scored in the average ranges. These data illustrate that Abby rated the Progress and Social Feedback dimensions of self-efficacy, as defined by the RSPS, higher in the intervention phases. Abby rated herself similarly on the Observational Comparison and Physiological States dimensions of self-efficacy. Abby's data show some support that providing intervention and feedback may be associated with higher self-efficacy ratings for some aspects of self-efficacy (i.e., Progress and Social Feedback).
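Each participant summary in this chapter reports a mean, a range, and a trend (slope) for the session-by-session scores in a phase. The dissertation does not spell out the trend computation; an ordinary least-squares slope over session numbers, sketched below with hypothetical ratings, is one conventional way to produce such values and is offered here only as an illustration.

    def phase_summary(scores):
        """Return (mean, range, slope) for one phase's session scores,
        where slope is the ordinary least-squares trend per session."""
        n = len(scores)
        sessions = list(range(1, n + 1))
        mean_x = sum(sessions) / n
        mean_y = sum(scores) / n
        slope = (
            sum((x - mean_x) * (y - mean_y) for x, y in zip(sessions, scores))
            / sum((x - mean_x) ** 2 for x in sessions)
        )
        return mean_y, (min(scores), max(scores)), slope

    # Hypothetical self-efficacy ratings for one phase:
    # phase_summary([3, 4, 3, 5, 4, 4])  ->  (3.83..., (3, 5), 0.2)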
Becca

Self-Efficacy Ratings. Although there was still considerable variability in ratings across the baseline and intervention phases, Becca's self-efficacy ratings increased at the start of feedback and generally increased across the intervention phases. Becca's baseline self-efficacy ratings ranged from 2 to 4 with an average rating of 3.0; her baseline trend was negative but nearly flat (-0.1). In the intervention phases, Becca's self-efficacy ratings ranged from 3 to 5, with an average rating of 4.1, but the trend in her ratings in the intervention phases was flat. These results suggest that Becca rated her self-efficacy higher after receiving intervention and feedback. Becca's average self-efficacy ratings increased between the baseline and intervention phases. The results support a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale. Based on simple visual analysis of Becca's RSPS scores, it appeared that Becca rated her self-efficacy higher at the end of the intervention phases compared to the baseline phase, but the differences in ratings were minimal. When administered at the end of the baseline phase, Becca's ratings indicated she scored in the average range for all subscales (Progress 37, Observational Comparison 23, Social Feedback 36, Physiological States 36). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. On the Progress subscale, Becca reported slightly higher ratings in the intervention phases, but she still scored in the average range (43). Her ratings did not change much for the Observational Comparison subscale (22.5), and her score remained in the average range. Her ratings remained the same on the Social Feedback subscale (36). On the Physiological States subscale, Becca reported slightly lower ratings in the intervention phases, but she still scored in the average range (34). These data illustrate that Becca rated the Progress dimension of self-efficacy, as defined by the RSPS, higher in the intervention phases. Becca rated herself similarly on the Observational Comparison, Social Feedback, and Physiological States dimensions of self-efficacy. Becca's data show some support that providing intervention and feedback may be associated with higher self-efficacy ratings for one aspect of self-efficacy (i.e., Progress).

Claire

Self-Efficacy Ratings. Although there was still considerable variability in ratings across the baseline and intervention phases, Claire's self-efficacy ratings were higher at the start of feedback and generally increased across the intervention phases. Claire's baseline self-efficacy ratings ranged from 1 to 5 with an average rating of 1.8; her baseline trend was negative but nearly flat (-0.1). In the intervention phases, Claire's self-efficacy ratings ranged from 1 to 5, with an average rating of 4.3. The trend in her ratings in the intervention phases was flat. Although the variability and slope of the ratings did not differ considerably between the baseline and intervention phases, these results suggest that Claire rated her self-efficacy higher on average after receiving intervention and feedback. The data provide some support for a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale.
Based on simple visual analysis of Claire's RSPS scores, it appeared that Claire rated herself similarly at the end of the intervention phases compared to the baseline phase. When administered at the end of the baseline phase, Claire's ratings indicated she scored in the average range on the Progress subscale (43), the high range on the Observational Comparison subscale (30), the high range on the Social Feedback subscale (42), and the high range on the Physiological States subscale (40). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. Claire reported slightly higher ratings in the intervention phases on the Progress subscale, placing her score (44) in the high range. Claire reported lower ratings on the Observational Comparison subscale, placing her intervention phase score (19.5) in the average range. On the Social Feedback subscale, Claire's ratings did not increase much, and her score (43) was still in the high range. On the Physiological States subscale, Claire reported lower ratings in the intervention phases, but her score (38) was still in the high range. These data illustrate that Claire rated the Observational Comparison dimension of self-efficacy, as defined by the RSPS, lower in the intervention phases. Claire rated herself similarly on the Progress, Social Feedback, and Physiological States dimensions of self-efficacy. Claire's data do not show any support that providing intervention and feedback may be associated with higher self-efficacy.

Diane

Self-Efficacy Ratings. Diane rated her confidence as 4 (Pretty sure I can) for every single session. Because there was no variability in her ratings, Diane's data were not visually analyzed as part of this research question. The results do not demonstrate a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale. Based on simple visual analysis of Diane's RSPS scores, it appeared that Diane rated her self-efficacy lower at the end of the intervention phases compared to the baseline phase. When administered at the end of the baseline phase, Diane's ratings indicated she scored in the high range on the Progress subscale (44), the average range on the Observational Comparison subscale (23), the high range on the Social Feedback subscale (43), and the average range on the Physiological States subscale (35). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. Diane reported considerably lower ratings in the intervention phases on the Progress (31), Social Feedback (27), and Physiological States (23.5) subscales, placing her scores in the low ranges. She reported slightly lower ratings for the Observational Comparison subscale, but her score (17.5) stayed in the average range. These data illustrate that Diane rated herself lower on all dimensions of self-efficacy, as defined by the RSPS, in the intervention phases. Diane's data do not show any support that providing intervention and feedback may be associated with higher self-efficacy.

Eric

Self-Efficacy Ratings. Based on visual analysis of Eric's self-efficacy ratings, it appeared that Eric rated his self-efficacy lower in the intervention phases compared to the baseline phase. Eric's baseline self-efficacy ratings ranged from 3 to 5 with an average rating of 4.0; his baseline trend was positive but nearly flat. In the intervention phases, Eric's self-efficacy ratings ranged from 3 to 4, with an average rating of 3.4.
The trend in his ratings in the intervention phases was negative but nearly flat. These results suggest that Eric rated his self-efficacy slightly lower after receiving intervention and feedback. Eric's average self-efficacy ratings decreased slightly between the baseline and intervention phases. The results do not demonstrate a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale. It should be noted that, due to research assistant error, Eric's first RSPS was not administered at the end of the baseline phase as planned. It was administered late, specifically seven sessions into the SRF phase, which means these data should be interpreted with caution. Based on simple visual analysis of Eric's RSPS scores, his ratings at the end of the intervention phases were generally similar to or lower than those from the initial administration in the middle of an intervention phase, and most differences in ratings were minimal. When administered partway through the SRF phase, Eric's ratings indicated he scored in the high range on the Progress subscale (44), the average range on the Observational Comparison subscale (18), the average range on the Social Feedback subscale (37), and the average range on the Physiological States subscale (33). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. Eric reported considerably lower ratings in the intervention phases on the Progress subscale, placing his score (33) in the low range. His ratings did not change for the Observational Comparison subscale, and his score (18) remained in the average range. Eric reported slightly higher ratings on the Social Feedback subscale (37.5) and slightly lower ratings on the Physiological States subscale (31.5) in the intervention phases, but he still scored in the average ranges. These data illustrate that Eric rated the Progress dimension of self-efficacy, as defined by the RSPS, lower in the intervention phases. Eric rated himself similarly on the Observational Comparison, Social Feedback, and Physiological States dimensions of self-efficacy. Eric's data do not show any support that providing intervention and feedback may be associated with higher self-efficacy.

Frank

Self-Efficacy Ratings. Although there was less variability in ratings in the intervention phases, visual analysis indicated that Frank's self-efficacy ratings were similar across the baseline and intervention phases, and there were no consistent patterns to suggest that intervention was associated with higher ratings. Frank's baseline self-efficacy ratings ranged from 1 to 3 with an average rating of 2.7; his baseline trend was positive but nearly flat (0.1). In the intervention phases, Frank's self-efficacy ratings ranged from 1 to 3, with an average rating of 2.9. The trend in his ratings in the intervention phases was positive but nearly flat. These results suggest little difference in Frank's self-efficacy ratings after receiving intervention and feedback. Frank's average self-efficacy ratings increased only slightly between the baseline and intervention phases. The results do not demonstrate a functional relation between repeated readings with performance feedback and higher self-efficacy ratings.

Reader Self-Perception Scale.
Based on simple visual analysis of Frank's RSPS scores, it appeared that Frank rated his self-efficacy higher at the end of the intervention phases compared to the baseline phase, although there were some differences in this pattern among the individual subscale scores of the RSPS. When administered at the end of the baseline phase, Frank's ratings indicated he scored in the average range for all subscales (Progress 41, Observational Comparison 20, Social Feedback 29, Physiological States 36). The scores at the end of the PRF and SRF phases were averaged to form intervention phase scores on the RSPS. On the Progress (45), Observational Comparison (29.5), and Physiological States (38.5) subscales, Frank reported higher ratings in the intervention phases, placing his scores in the high ranges. Frank reported lower ratings in the intervention phases on the Social Feedback subscale, and his score (25) was in the low range. These data illustrate that Frank rated the Progress, Observational Comparison, and Physiological States dimensions of self-efficacy, as defined by the RSPS, higher in the intervention phases. He rated the Social Feedback dimension lower in the intervention phases. Frank's data show some support that providing intervention and feedback may be associated with higher self-efficacy ratings for some aspects of self-efficacy (i.e., Progress, Observational Comparison, Physiological States).

Research Question 3

The third research question examined the effects of self-referenced performance feedback (SRF) compared to peer-referenced performance feedback (PRF), when combined with a repeated readings intervention, on the self-efficacy ratings of students struggling in reading. It was hypothesized that students' self-efficacy ratings would be higher (i.e., a higher mean) when receiving self-referenced feedback compared to peer-referenced feedback. The level, variability, and trend of self-efficacy ratings were examined using visual inspection, comparing the two intervention phases. In addition, the levels of the RSPS subscale scores were examined, comparing the two intervention phases. First, any general patterns identified through visual analysis across participants that correspond to this research question are described. Next, the results for each individual participant are described. Figures 10 and 11 illustrate students' self-efficacy ratings across baseline, PRF, and SRF phases. Table 13 compares baseline and intervention phase means, ranges, and trends for all six participants. Information on participants' ratings during the baseline phase is as previously described in the results for the second research question. Again, because Diane rated her confidence as 4 (Pretty sure I can) for every single session and there was no variability in her ratings, Diane's data were not given much consideration in the examination of this research question. Figures 12 and 13 illustrate students' Reader Self-Perception Scale (RSPS) subscale scores across baseline, PRF, and SRF phases.
Tables 14 through 19 compare baseline and intervention phase scores for each RSPS subscale for each of the six participants. Self-efficacy rating averages were higher during the SRF phase compared to the PRF phase for three participants. Overall, these three participants rated their confidence in their abilities higher when receiving self-referenced feedback. However, this was potentially influenced by the order of the phases for two of these participants. Namely, it is possible that those who received the SRF phase last would have higher overall self-efficacy ratings in this phase given the potential cumulative effects of the intervention and feedback over time on participants' self-efficacy. For three participants, average self-efficacy ratings were higher for whichever phase came last. One participant had a higher self-efficacy rating trend (i.e., slope) in the SRF phase compared to the PRF phase. The limited data provided by the RSPS do not allow for the demonstration of a clear functional relation between a certain type of feedback and higher reported self-efficacy ratings, but they may provide some support that providing a certain type of feedback is associated with higher self-efficacy ratings. Although there were some differences for individual subscales for some participants, RSPS scores were similar for the SRF and PRF phases for three participants (i.e., Becca, Claire, and Frank). Two participants (i.e., Abby and Diane) had generally higher RSPS scores for the SRF phase compared to the PRF phase.

Table 13
Self-Efficacy Ratings

Student   Baseline Mean   Baseline Range   Baseline Slope   PRF Mean   PRF Range   PRF Slope   SRF Mean   SRF Range   SRF Slope
Abby      3.8             3-5               0.0             3.2        3-5          0.0        5.0        5-5          0.0
Becca     3.0             2-4              -0.1             4.0        3-5          0.1        4.1        3-5          0.0
Claire    1.8             1-5              -0.1             4.3        3-5          0.0        4.2        1-5         -0.1
Diane     4.0             4-4               0.0             4.0        4-4          0.0        4.0        4-4          0.0
Eric      4.0             3-5               0.0             3.6        3-4          0.0        3.1        3-4         -0.1
Frank     2.7             1-3               0.1             3.0        3-3          0.0        2.8        1-3          0.0

Table 14
Reader Self-Perception Scale Data for Abby

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline                         3                    33 (Low)       19 (Average)                31 (Average)      32 (Average)
Peer-Referenced Feedback Phase   4                    30 (Low)       14 (Low)                    33 (Average)      33 (Average)
Self-Referenced Feedback Phase   5                    42 (Average)   22 (Average)                38 (High)         36 (Average)

Table 15
Reader Self-Perception Scale Data for Becca

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline                         4                    37 (Average)   23 (Average)                36 (Average)      36 (Average)
Peer-Referenced Feedback Phase   4                    42 (Average)   23 (Average)                35 (Average)      34 (Average)
Self-Referenced Feedback Phase   5                    44 (High)      22 (Average)                37 (Average)      34 (Average)

Table 16
Reader Self-Perception Scale Data for Claire

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline                         5                    43 (Average)   30 (High)                   42 (High)         40 (High)
Peer-Referenced Feedback Phase   5                    45 (High)      19 (Average)                42 (High)         38 (High)
Self-Referenced Feedback Phase   5                    43 (Average)   20 (Average)                44 (High)         38 (High)

Table 17
Reader Self-Perception Scale Data for Diane

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline                         5                    44 (High)      23 (Average)                43 (High)         35 (Average)
Self-Referenced Feedback Phase   1                    29 (Low)       22 (Average)                19 (Low)          18 (Low)
Peer-Referenced Feedback Phase   1                    33 (Low)       13 (Low)                    35 (Average)      29 (Average)
Table 18
Reader Self-Perception Scale Data for Eric

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline (a)                     5                    44 (High)      18 (Average)                37 (Average)      33 (Average)
Self-Referenced Feedback Phase   4                    33 (Low)       17 (Average)                37 (Average)      31 (Average)
Peer-Referenced Feedback Phase   4                    33 (Low)       19 (Average)                38 (High)         32 (Average)

(a) Due to research assistant error, the first administration of the RSPS was given seven sessions into the SRF phase (Session 17), rather than at the end of the baseline phase (Session 10).

Table 19
Reader Self-Perception Scale Data for Frank

Phase                            General Perception   Progress       Observational Comparison   Social Feedback   Physiological States
Baseline                         3                    41 (Average)   20 (Average)                29 (Average)      36 (Average)
Self-Referenced Feedback Phase   5                    45 (High)      30 (High)                   21 (Low)          40 (High)
Peer-Referenced Feedback Phase   5                    45 (High)      29 (High)                   29 (Average)      37 (High)

Abby

Self-Efficacy Ratings. Based on visual analysis, it appeared that Abby's self-efficacy ratings increased when she began the SRF phase, and her ratings remained consistent through the rest of this phase. Abby received PRF first and SRF second. In the PRF phase, Abby's self-efficacy ratings ranged from 3 to 5, with an average rating of 3.2. The trend in her ratings was flat in this phase. In the SRF phase, Abby had no variability or trend in her self-efficacy ratings; she selected a rating of 5 in every session of that phase. These results suggest that Abby rated her self-efficacy higher in the SRF phase. Abby's average self-efficacy rating and the consistency of her ratings increased between the PRF and SRF phases. The results demonstrate a functional relation between SRF and higher self-efficacy ratings.

Reader Self-Perception Scale. Based on visual analysis, Abby's RSPS scores were higher at the end of the SRF phase compared to the end of the PRF phase. When administered at the end of the PRF phase, Abby's ratings indicated she scored in the low range on the Progress subscale (30), the low range on the Observational Comparison subscale (14), the average range on the Social Feedback subscale (33), and the average range on the Physiological States subscale (33). At the end of the SRF phase, Abby rated herself higher on all subscales. Her higher ratings on the Progress (42) and Observational Comparison (22) subscales placed her scores in the average ranges. On the Social Feedback subscale, Abby's higher ratings indicated her score (38) was in the high range. Although Abby rated herself higher on the Physiological States subscale, her score (36) remained in the average range. These data illustrate that Abby rated each dimension of self-efficacy, as defined by the RSPS, higher in the SRF phase. Abby's data show some support that providing SRF may be associated with higher self-efficacy ratings across all measured aspects of self-efficacy (i.e., Progress, Observational Comparison, Social Feedback, Physiological States).

Becca

Self-Efficacy Ratings. Visual analysis of Becca's self-efficacy ratings indicated that she rated herself similarly across the SRF and PRF phases. The main difference between ratings in these phases was the increased variability in ratings in the SRF phase. Becca received PRF first and SRF second. In the PRF phase, Becca's self-efficacy ratings ranged from 3 to 5, with an average rating of 4.0. Her rating trend was positive but nearly flat (0.1) in this phase. In the SRF phase, Becca's self-efficacy ratings ranged from 3 to 5, with an average rating of 4.1. Her rating trend was flat. These results suggest that Becca rated her self-efficacy similarly in the PRF and SRF phases. Becca's average self-efficacy ratings increased only slightly between the PRF and SRF phases. The results do not demonstrate a functional relation between SRF and higher self-efficacy ratings.

Reader Self-Perception Scale.
Based on visual analysis, Becca's RSPS scores were similar across the PRF and SRF phases, with no clear patterns suggesting that one phase resulted in higher RSPS scores. When administered at the end of the PRF phase, Becca's ratings indicated she scored in the average range for all subscales (Progress 42, Observational Comparison 23, Social Feedback 35, Physiological States 34). At the end of the SRF phase, Becca rated herself higher on the Progress subscale, indicating her score (44) was in the high range. She rated herself slightly lower on the Observational Comparison subscale (22) and slightly higher on the Social Feedback subscale (37), but her scores remained in the average ranges. Becca's ratings were the same on the Physiological States subscale, so her score (34) remained in the average range. These data illustrate that Becca rated the Progress and Social Feedback dimensions of self-efficacy, as defined by the RSPS, higher in the SRF phase. Becca rated herself similarly on the Observational Comparison and Physiological States dimensions of self-efficacy. Becca's data show some support that providing SRF may be associated with higher self-efficacy ratings for some aspects of self-efficacy (i.e., Progress, Social Feedback).

Claire

Self-Efficacy Ratings. Visual analysis of Claire's self-efficacy ratings did not identify any consistent patterns suggesting that one phase was associated with higher ratings. There was less variability in her ratings in the PRF phase, but overall there were not large differences in ratings. Claire received PRF first and SRF second. In the PRF phase, Claire's self-efficacy ratings ranged from 3 to 5, with an average rating of 4.3. Her rating trend was flat in this phase. In the SRF phase, Claire's self-efficacy ratings ranged from 1 to 5, with an average rating of 4.2. Her rating trend was negative but nearly flat (-0.1). These results suggest that Claire rated her self-efficacy similarly in the PRF and SRF phases. Claire's average self-efficacy ratings decreased only slightly between the PRF and SRF phases. The results do not demonstrate a functional relation between SRF and higher self-efficacy ratings.

Reader Self-Perception Scale. Visual analysis of RSPS scores indicated that Claire rated herself similarly across the PRF and SRF phases, and there were no consistent patterns to identify a functional relation. When administered at the end of the PRF phase, Claire's ratings indicated she scored in the high range on the Progress subscale (45), the average range on the Observational Comparison subscale (19), the high range on the Social Feedback subscale (42), and the high range on the Physiological States subscale (38). At the end of the SRF phase, Claire rated herself lower on the Progress subscale, indicating her score (43) was in the average range. Claire rated herself slightly higher on the Observational Comparison (20) and Social Feedback (44) subscales, so her scores remained in the average and high ranges, respectively. Claire's ratings were the same on the Physiological States subscale, so her score (38) remained in the high range. These data illustrate that Claire rated the Progress dimension of self-efficacy, as defined by the RSPS, lower in the SRF phase. Claire rated herself similarly on the Observational Comparison, Social Feedback, and Physiological States dimensions of self-efficacy.
Claire’s data show some support providing SRF feedback may be associated with higher self-efficacy ratings for some aspects of self-efficacy (i.e., Progress, Social Feedback, Physiological States). Diane Self-Efficacy Ratings. Diane received SRF first and PRF second. Diane rated her confidence as 4 (Pretty sure I can) for every single session. Due to no variability in her ratings, Diane’s data were not visually analyzed for the research question. The results do not demonstrate a functional relation between SRF and higher self-efficacy ratings. Reader Self-Perception Scale. Visual analysis of RSPS scores indicated that Diane generally rated herself lower in the PRF phase compared to the SRF phase. When administered at the end of the SRF phase, Diane’s ratings indicated she scored in the low range on the Progress subscale (29), average range on the Observational Comparison subscale (22), low range on the Social Feedback subscale (19), and low range on the Physiological State subscale (18). At the end of the PRF phase, Diane rated herself higher on the Progress subscale, but her score (33) remained in the low range. She rated herself considerably lower on the Observational Comparison subscale, indicating her score (13) was in the low range. Diane rated herself higher on the Social Feedback (35) and Physiological States (29) subscales, and her scores were in the average ranges. These data illustrate that Diane rated her Progress, Social Feedback, and Physiological States dimensions of self-efficacy, as defined by the RSPS, higher in the SRF 112 phase. Diane rated herself lower on the Observational Comparison dimension of self-efficacy. Diane’s data show some support providing SRF feedback may be associated with higher selfefficacy ratings for some aspect of self-efficacy (i.e., Observational Comparison). Eric Self-Efficacy Ratings. Visual analysis of Eric’s self-efficacy ratings did not identify any consistent patterns suggesting that one phase was associated with higher ratings. There was less variability in his ratings in the PRF phase, but overall there were not large differences in ratings. Eric received SRF first and PRF second. In the SRF phase, Eric’s self-efficacy ratings ranged from 3 to 4, with an average rating of 3.1. His rating trend was negative but nearly flat (-0.1) in this phase. In the PRF phase, Eric’s self-efficacy ratings ranged from 3 to 4, with an average rating of 3.6. His rating trend was flat. These results suggest that Eric rated his self-efficacy slightly higher in the PRF phase. Eric’s average self-efficacy ratings increased between the SRF and PRF phases. The consistency of his ratings and the slopes in his ratings were similar across the phases. The results do not demonstrate a functional relation between SRF and higher selfefficacy ratings. Reader Self-Perception Scale. Visual analysis of RSPS scores indicated that Eric generally rated himself higher in the PRF phase compared to the SRF phase. When administered at the end of the SRF phase, Eric’s ratings indicated he scored in the low range on the Progress subscale (33), average range on the Observational Comparison subscale (17), average range on the Social Feedback subscale (37), and average range on the Physiological State subscale (31). At the end of the PRF phase, Eric’s ratings were the same on the Progress subscale so his score (33) remained in the low range. 
Eric rated himself slightly higher on the Observational Comparison (19) and Physiological States (32) subscales, but his scores remained in the average ranges. Eric rated himself higher on the Social Feedback subscale, indicating his score (38) was in the high range. These data illustrate that Eric rated himself similarly on the Progress, Observational Comparison, Social Feedback, and Physiological States dimensions of self-efficacy in the PRF and SRF phases. Eric's data do not show any support that providing intervention and feedback may be associated with higher self-efficacy.

Frank

Self-Efficacy Ratings. Visual analysis of Frank's self-efficacy ratings indicated that he rated himself similarly across the SRF and PRF phases. Frank received SRF first and PRF second. In the SRF phase, Frank's self-efficacy ratings ranged from 1 to 3, with an average rating of 2.8. His rating trend was flat in this phase. In the PRF phase, Frank had no variability or trend in his self-efficacy ratings; he selected a rating of 3 in every session of that phase. These results suggest that Frank rated his self-efficacy slightly higher in the PRF phase. The results do not demonstrate a clear functional relation between SRF and higher self-efficacy ratings.

Reader Self-Perception Scale. Although there were some differences among individual subscale scores, RSPS scores indicated that Frank rated himself similarly across the SRF and PRF phases, and there were no consistent patterns to identify an overall functional relation. When administered at the end of the SRF phase, Frank's ratings indicated he scored in the high range on the Progress subscale (45), the high range on the Observational Comparison subscale (30), the low range on the Social Feedback subscale (21), and the high range on the Physiological States subscale (40). At the end of the PRF phase, Frank's ratings were the same on the Progress subscale, so his score (45) remained in the high range. Frank rated himself lower on the Observational Comparison (29) and Physiological States (37) subscales, but his scores remained in the high ranges. Frank rated himself higher on the Social Feedback subscale, indicating his score (29) was in the average range. These data illustrate that Frank rated his Social Feedback dimension of self-efficacy, as defined by the RSPS, higher in the PRF phase. Frank rated himself similarly on the Progress, Observational Comparison, and Physiological States dimensions of self-efficacy. Frank's data show some support that providing SRF may be associated with higher self-efficacy ratings for one aspect of self-efficacy (i.e., Physiological States).

CHAPTER 5: DISCUSSION

The purpose of the current study was to examine the effects of a skill-based reading intervention (i.e., repeated readings) in combination with different types of performance-based feedback (i.e., self-referenced or peer-referenced) on the oral reading fluency and self-efficacy of elementary students struggling in reading. The present chapter discusses the results of the study's three research questions within the context of previous literature, the limitations of the study and directions for future research, and the implications for practice.

Research Question 1

Does self-referenced performance feedback, when combined with a repeated readings intervention, contribute to higher oral reading fluency for students struggling in reading than peer-referenced performance feedback that is combined with a repeated readings intervention?
It was hypothesized that students' rates of oral reading fluency would be higher when they received a repeated readings intervention and their performance feedback was self-referenced compared to when it was peer-referenced. Repeated reading interventions have been shown to improve students' oral reading fluency (National Institute of Child Health and Human Development, 2000), and performance feedback has been shown to positively affect students' academic outcomes (Alber-Morgan et al., 2007; Chafouleas et al., 2004; Dunn & Eckert, 2002; Eckert et al., 2006; Schunk, 1983a). Some research on goal setting has shown that self-referenced goals (Shih & Alexander, 2000) and more proximal goals that are focused on week-to-week progress (Conte & Hintze, 2000) can have positive effects on students' academic skills.

Responses to the intervention were mixed across students. The majority of students responded positively in one or both intervention phases; namely, their average oral reading fluency score was higher than baseline and the scores reflected a positive slope. The exception was one participant who showed only small differences in average oral reading fluency scores and whose scores reflected negative slopes for both the PRF and SRF phases. For four of the six students, improvement (i.e., slope) in oral reading fluency rates was greater during the first intervention phase, whether it was PRF or SRF. This may indicate that oral reading fluency improvements were associated with receiving any intervention and feedback, rather than a specific type (i.e., PRF or SRF). The data do not suggest a clear functional relation between SRF and improved oral reading fluency. Two participants' data actually provide more support for a functional relation between PRF and improved oral reading fluency.

The lack of support for the research hypothesis is inconsistent with some similar lines of research. Conte and Hintze (2000) also studied oral reading fluency with second grade students. They designed conditions similar to those of the current study, but students had different types of goal lines on their CBM graphs. The dynamic goal line condition of Conte and Hintze's (2000) study was most similar to the SRF phase in the current study, and their static goal line condition was most similar to the PRF phase. Conte and Hintze (2000) found positive effects for students in both goal setting groups but showed that students in the dynamic goal line group made the most growth. Perhaps the form of the feedback provided by Conte and Hintze (2000), presenting students' scores relative to specific goals, was the key to their positive results. In the current study's SRF phase, students saw their current score relative to all their previous scores, but did not receive information about a goal to aim for next session based on this reading rate. The SRF feedback may have caused students to focus on their previous performance, which was poor in many instances, rather than on making improvements in future performances. Kluger and DeNisi (1996), in their meta-analysis of studies of performance feedback, discussed that performance feedback might be associated with improved learning because it helps regulate goal-directed behavior. Feedback might lead students to increase their effort when performance falls short of goals (Kluger & DeNisi, 1996).
High-performing students might naturally set goals for themselves, whereas lower performing students, such as those in the current study, may have needed more explicit information regarding goals within their performance feedback to more clearly direct their behavior.

Other researchers have conducted studies in which students were provided feedback on oral reading fluency without specific goals. In Chafouleas and colleagues' (2004) study, students were simply told how many words they read correctly in one minute after reading a passage. Two of the three participants improved the most when receiving a repeated reading intervention alone (i.e., without performance feedback or feedback with contingent rewards). Only the student who began the study with the lowest reading rate and highest error rate benefited most when the intervention was combined with performance feedback. This small study might provide preliminary evidence that feedback has differential effects on students. Considering the data collected during the screening procedures of the current study, the two students who started with the highest oral reading fluency rates responded more positively to the PRF phase than to the SRF phase. Perhaps these students, who started with higher oral reading fluency scores, were more motivated by the competition inherent in peer-referenced feedback.

Eckert and colleagues (2002) also found positive effects on elementary students' oral reading fluency when providing performance feedback or contingent reinforcement. However, the researchers noted that the improvements in students' oral reading fluency rates were specific to the intervention passages, and there was little to no generalized improvement in oral reading fluency rate when a new probe was administered. Only one of the six students in their study made improvements in words read correctly in a minute on readings of novel passages (Eckert et al., 2002). Students in the current study read a book three times after hearing a research assistant model the reading of the text. Students in the current study may have made improvements within sessions on the intervention texts, but did not fully generalize these improvements to the CBM probe texts.

It is possible that another type of feedback may have led to more improvements in students' oral reading fluency rates. For example, Eckert and colleagues (2006) found that feedback on the number of words read incorrectly, compared to feedback on words read correctly, resulted in higher oral reading fluency rates; although it had been hypothesized that feedback on words read correctly would produce higher rates, feedback on words read incorrectly actually resulted in higher rates of fluency for five of the six participants in their study.

It is also possible that the novelty of the intervention and feedback wore off over time for participants, who were therefore not engaged enough to fully benefit from the intervention. Although research assistants attempted to keep students engaged by introducing new reading materials for the intervention, the students may have gotten bored with rereading texts. Students may have responded better to a second form of feedback if a new intervention had also been implemented to re-engage them.
Research Questions 2 & 3

Does the sharing of progress monitoring data (i.e., performance feedback) in combination with a repeated reading intervention affect the self-efficacy ratings of students struggling in reading? Does self-referenced performance feedback, when combined with a repeated readings intervention, contribute to higher self-efficacy ratings for students struggling in reading than peer-referenced performance feedback that is combined with a repeated readings intervention?

It was hypothesized that students' self-efficacy ratings would be higher in the intervention conditions, when they received the repeated reading intervention and performance feedback, compared to when they received no feedback or intervention. Several studies by Schunk and colleagues have shown that performance feedback can affect students' self-efficacy (Schunk, 1983a; Schunk & Rice, 1989, 1991; Schunk & Swartz, 1993). It was also hypothesized that students' self-efficacy ratings would be higher when their performance feedback was self-referenced compared to when it was peer-referenced. Although there is some research on the effects of different types of feedback on students' self-efficacy, no research has specifically examined the effects of self-referenced versus peer-referenced feedback. The hypothesis was based on the theoretical link between self-referenced feedback and the development of a mastery achievement goal orientation (Ames, 1992; Dweck, 1986).

Providing a skill-based intervention and performance feedback had mixed effects on students' reported self-efficacy. Two participants' average reported self-efficacy ratings were higher in the intervention phases. Two different participants' average reported self-efficacy ratings were only slightly higher in the intervention phases. Another participant's average reported self-efficacy rating was lower in the intervention phases. Although self-efficacy ratings were higher, even if only slightly, for four of the six participants in the intervention phases compared to the baseline, a full visual analysis does not suggest a clear functional relation between intervention with performance feedback and higher reported self-efficacy.

When comparing the self-efficacy ratings between the PRF and SRF intervention phases, students' reports were mixed. One participant's self-efficacy rating trend was higher in the SRF phase. Two participants had higher average reported self-efficacy ratings in the SRF phase compared to the PRF phase. Two participants reported similar average self-efficacy ratings across the PRF and SRF phases. One participant's average reported self-efficacy ratings were higher in the PRF phase compared to the SRF phase. As was discussed previously regarding research question 2, one participant reported the same rating for every baseline and intervention session. The data do not suggest a clear functional relation between self-referenced performance feedback and higher reported self-efficacy.

There was little evidence overall that providing intervention and any performance feedback, or a specific type of performance feedback, was associated with higher self-efficacy ratings for the current participants. These results were inconsistent with other studies of academic ability and self-efficacy. Schunk (1983a, 1983b) conducted studies with third, fourth, and fifth graders in mathematics and writing.
In one study of third graders, groups who received any type of feedback improved their performance on a math task, but students who received ability-referenced feedback had the best performance and also reported higher levels of self-efficacy (Schunk, 1983a). In another study, fourth and fifth graders demonstrated better division skills and reported higher self-efficacy when they received both goals and comparative feedback, rather than only one piece of feedback (i.e., goals or comparative information). Results may have differed if students had been involved in plotting their own scores and evaluating their progress. Glor-Scheib and Zigmond (1993) found that sixth graders' self-perceptions of competence in reading improved when they were directly involved in graphing their progress. Although students in the current study received frequent feedback, they did not graph their own data.

Additional consideration was also given to whether students' self-efficacy ratings were related to the performance feedback they received. Data were examined for any patterns suggesting a decrease in student self-efficacy after receiving feedback on a decrease in performance. There were not consistent data across participants to suggest that this was a general pattern. Only two participants (i.e., Becca and Claire) had five sessions in which lower self-efficacy ratings followed a decrease in oral reading fluency performance. Considering the number of sessions for a given individual, these data are insufficient to show that the performance feedback made students aware of their progress or changed their corresponding effort and interest in reading.

The current results may be related to participants' ages and an inability of the students to accurately understand and interpret the feedback in a way that would affect their beliefs. One participant reported the same self-efficacy rating during every single session (baseline and intervention phases). This may indicate that her self-efficacy truly did not change, that she did not understand the ratings, or that the measure was not sensitive enough to pick up on changes in self-efficacy. The other students' data also did not provide support for the hypotheses that intervention and performance feedback were related to increased self-efficacy ratings. Although other studies have examined the self-efficacy beliefs of third grade students related to academic tasks, some researchers have found that students' beliefs may not be accurate (Pajares, 1996). The self-efficacy rating developed for the current study may not have measured the construct it was intended to measure. Students may have responded to this item in reference to their performance on the repeated reading intervention texts or to how good or bad they were feeling about reading that day, rather than specifically reporting on their confidence regarding the new oral reading fluency probe.

Some previous findings suggest that a majority of students are overconfident about their academic abilities (Pajares, 1996). Other researchers have found developmental differences among students regarding their beliefs related to academics. Comparing fourth through twelfth graders, younger children's performance attributions, or beliefs about the causes of performance, were more influenced by information regarding their task success or failure (Frieze & Bar-Tal, 1980).
The older students were more influenced by previous history of performance and by the performance of others (i.e., social norms; Frieze & Bar-Tal, 1980). Interviews with children four to eight years of age illustrated that even some children in kindergarten and first grade begin to compare their performance with others and to consider the difficulty of the task (Stipek & Tannatt, 1984). However, students in second and third grade did not clearly distinguish between effort and ability (Stipek & Tannatt, 1984). Nicholls (1978) claims that students do not begin to make this distinction until between 9 and 11 years of age. Students in the current study were younger than nine and therefore may still have viewed ability and effort as one and the same. The feedback provided in the current study may not have been sufficient to affect students' beliefs about their own abilities or to influence the amount of effort they exerted on the CBM probes.

General Discussion

Despite the lack of support for the current research questions' hypotheses, the intervention provided by the current study did have positive effects on the participating students. Although oral reading fluency scores varied greatly from session to session for some students, every participant made progress in oral reading fluency during the summer. The amount of progress varied from student to student and also depended on whether one compares average oral reading fluency rates in the baseline phase to those in the final treatment phase or examines the minimum and maximum scores across all sessions. Despite these variations, each student made some level of growth in the number of words read correctly in one minute. Decades of research have shown that average students lose more than a month's worth of skills in math and reading/language arts over the summer vacation (Cooper et al., 1996). Often referred to as the "summer setback" or "summer slide," this loss can be particularly detrimental for students from low socioeconomic status backgrounds (Cooper et al., 1996). The current study's intervention may have reduced or even eliminated the loss of skills that some students experience between the end of one school year and the start of the next.

In general, most students had difficulty staying motivated as the summer sessions progressed. Although participants did not show many visible signs of frustration during sessions, as measured by the intervention quality ratings, it was sometimes difficult to begin sessions with participants, who did not want to switch from preferred summer activities to academic work during the vacation from school. Several efforts were made to keep the sessions engaging and linked to student interests. For example, students were surveyed on their interests prior to the start of the study, and texts related to these interests (e.g., sports, Star Wars, animals) were added to the intervention materials. Intervention texts were exchanged for new texts every two to three weeks. In addition, at the end of each week, participants were able to choose one book to keep. During the repeated reading interventions, a participant was asked why he chose a book, what his favorite part was, and whether he had ever had a similar experience. Despite these efforts to maintain student interest and engagement, most students lost some level of interest throughout the summer. The repetition inherent in a repeated readings intervention can be monotonous and uninteresting for students.
Changing the intervention activities or incorporating some type of contingent reward system during the sessions may have maintained student motivation and fostered higher engagement as the sessions progressed.

The current study was designed to examine the effects of different types of CBM feedback with intervention on struggling students' oral reading fluency and reported self-efficacy. The repeated reading intervention involved the research assistant modeling the reading of a text and the student reading the text three times. Students may have informally noticed that they made improvements during multiple readings of these texts, but students were not timed during the repeated readings and did not receive explicit feedback regarding these intervention texts. Students only received feedback on their performance on CBM probes. Perhaps the effects on oral reading fluency and reported self-efficacy would have been different or stronger if the feedback had focused on within-session improvements on the intervention texts, rather than on the CBM probes. This type of feedback might more explicitly teach struggling readers the connection between practice (i.e., repetition of texts) and improvement. In addition, although sessions were generally scheduled four or five weekdays in a row for participants, there were several cancellations for most participants over the summer, which led to longer periods of time between sessions and the receipt of feedback. This may have made it harder for students to connect the feedback on their performance to their reading practice.

Due to the variable results across students regarding the research questions, additional consideration was given to research assistant assignment and to students' initial reading scores and questionnaire responses. There were no clear patterns to suggest that students assigned to certain research assistants made more or less progress in oral reading fluency or reported higher or lower self-efficacy than other participants. Patterns in the research question results were examined considering participants' oral reading fluency scores during screening and their scores on the Achievement Goal Orientation Questionnaire (AGOQ) and Motivation for Reading Questionnaire (MRQ) administered prior to the start of the study. No strong conclusions can be drawn based on this limited information, but a few points are worth mentioning to guide future research.

With regard to the first research question, the one student whose data suggested a functional relation between PRF and oral reading fluency was also the student who most highly endorsed a mastery achievement goal orientation. This orientation corresponds to the belief that an individual's progress is based on effort and that success is based on improvement. Perhaps this student's strong focus on effort and improvement led to her making more improvements in oral reading fluency during any intervention and feedback. This student was provided with PRF first, and the novelty of the intervention and provision of feedback may have resulted in more effort and progress, which then diminished as the summer continued. SRF might also have improved this student's oral reading fluency more if it had been provided first.

With regard to the second research question, two students had a slight increase in self-efficacy ratings between the baseline and intervention phases but did not have enough overall evidence to support a functional relation between intervention and higher self-efficacy ratings.
These students both equally endorsed mastery and performance-approach achievement goal orientations on the initial AGOQ survey. These two students were also the youngest among the group of participants. This may suggest that academic intervention and CBM feedback have limited effects on students who equally endorse mastery and performance goal orientations and/or on students who are younger. With regard to the third research question, only one student's data suggested a functional relationship between SRF and higher self-efficacy ratings. This student was the least motivated by competition in reading according to the MRQ. PRF may not have increased this student's reports of self-efficacy as SRF did because the focus on peer comparison was not motivating for this particular student. This student also had the lowest oral reading fluency score during the screening procedures. Perhaps SRF is most helpful for improving the self-efficacy of students whose academic skills are lowest compared to their peers.

Limitations and Future Research

The current study did not identify functional relationships to support the research hypotheses; however, the limitations of the present study could be addressed by future research to further explore the relationship between CBM feedback and students' academic skills and self-efficacy. First, the current study was a single-case design study with only six rising third-grade participants. A larger study with a group design could be used to explore similar research questions. It may also be worthwhile to conduct a similar study with older students, who may better understand performance feedback and more accurately report self-efficacy than the younger students in the current study. In addition, other studies might consider different types of performance feedback in different academic subjects or different ways to present the feedback to students (e.g., verbally versus graphically).

A second limitation was the use of oral reading fluency CBM probes as the measure of reading skills for the current study. Other researchers have highlighted the difficulty of establishing reliable estimates of skills to inform educational decisions using these types of probes. Ardoin and Christ (2009) examined three different passage sets (i.e., DIBELS, AIMSweb, and an experimental passage set) used to measure oral reading fluency. The researchers were interested in the use of the passage sets for comparing a student's progress to his or her own previous performance, considering the standard errors associated with inconsistent passage difficulty. Ardoin and Christ (2009) determined that AIMSweb R-CBM probes, another curriculum-based measurement of oral reading fluency, had significantly smaller standard errors of the slope and standard errors of the estimate than DIBELS passage sets, but both passage sets were inferior in this respect when compared to the experimental passage set. The newer passage set, referred to as FAIP-R, resulted in significantly less measurement error than the two commercially available passage sets that are frequently used in schools. A passage set with a larger standard error of the estimate may result in considerable variability in student performance across probes. Ardoin, Christ, Morena, Cormier, and Klingbeil (2013) conducted a broad review of the literature on CBMs of oral reading fluency (CBM-R).
There was strong evidence to support the use of CBM-R for screening and benchmarking purposes; however, there was limited evidence to support the use of CBM-R for progress monitoring and for evaluating responses to instruction and intervention. "On the contrary, evidence has recently emerged to suggest that CBM-R progress monitoring outcomes lack reliability and validity unless extensive data are collected for an extended time period" (Ardoin et al., 2013, p. 12). Some of the studies reviewed indicated that a student's oral reading fluency performance may vary by 10 to 20 words correct in one minute around the student's trend line. Despite this variability and limited evidence, there are few alternatives to CBM-R for monitoring student progress over time and creating short-term estimates of growth in skills.

The poor reliability in measuring students' oral reading fluency and the differences in passage difficulty inherent in CBM probes have implications for the current study. Due to the participants' previous exposure to AIMSweb R-CBM probes, DIBELS Next oral reading fluency probes were selected as the next best alternative for the current study. In addition, although there is poor reliability in the administration of a single probe, only one probe was administered per intervention session in order to reduce the repetition of probes across the summer and control for potential practice effects on the passages. Future studies of oral reading fluency might use AIMSweb R-CBM probes or newer passage sets with stronger technical adequacy than those used in the current study. The limitations in measuring students' oral reading fluency may also have unintentionally affected the feedback the students received. For example, if a student was assigned slightly more difficult passages in the SRF phase, resulting in worse performance, this may have affected the student's self-efficacy in a direction that did not support the current hypotheses. Due to these limitations in measurement, interpretation of the current study's results should focus on students' overall trends in performance rather than on detailed examinations of levels and slopes, which are more likely to be influenced by outlier scores.

In addition, the variability in student performance on the oral reading fluency probes influenced the feedback that participants received. It was expected that students' oral reading fluency rates would increase with the implementation of the repeated reading intervention: participants were identified as students struggling with reading fluency, and repeated reading has been identified as a research-based intervention for reading fluency. If students had made improvements in their oral reading fluency rates, they would have received feedback about their improving performance relative to the grade-level norms or relative to their previous performances. Based on the data provided by the oral reading fluency probes, students made limited improvements in their oral reading fluency rates. They received feedback about varying levels of performance with sometimes little to no improvement. Students may have responded differently to the self-efficacy measures or to subsequent oral reading fluency probes if they had made continuous progress with the intervention.
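To illustrate the scale of this measurement error, the sketch below fits an ordinary least squares trend line to a short series of words-correct-per-minute (WCPM) scores and computes the standard error of the estimate, the statistic discussed by Ardoin and Christ (2009). The scores are hypothetical, not data from the current study; the sketch is offered only to make concrete how individual probe scores can sit well above or below a student's trend line.

import math

# Hypothetical CBM-R progress monitoring data (session number, WCPM).
sessions = [1, 2, 3, 4, 5, 6, 7, 8]
wcpm = [42, 58, 45, 63, 49, 66, 52, 70]

n = len(sessions)
mean_x = sum(sessions) / n
mean_y = sum(wcpm) / n

# Ordinary least squares slope (growth per session) and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sessions, wcpm)) \
        / sum((x - mean_x) ** 2 for x in sessions)
intercept = mean_y - slope * mean_x

# Residuals around the trend line; the standard error of the estimate
# (SEE) uses n - 2 degrees of freedom for a two-parameter line.
residuals = [y - (intercept + slope * x) for x, y in zip(sessions, wcpm)]
see = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))

print(f"Estimated growth: {slope:.1f} WCPM per session")   # about 2.6
print(f"Standard error of the estimate: {see:.1f} WCPM")   # about 8.7

With an SEE near 9 WCPM in this hypothetical series, individual scores fall as much as 10 words above or below the trend line even though the underlying growth is steady, consistent with the 10- to 20-word variability reported in the literature reviewed above. A single probe score is therefore a weak basis for judging short-term growth.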
The limitations of the tool used to measure oral reading fluency progress may be related to a third issue with the current study: some participants had increasing trends during the baseline phase, which makes establishing functional relations difficult. The baseline stability criteria for the current study relied heavily on the latest three data points in the baseline phase, which may not have been the most reliable measurement of stability considering the variability in oral reading fluency performance. The decision to move to the intervention phase was made, in some cases, to meet the minimum number of sessions for each intervention phase and to ensure there were enough data to evaluate trends for each student in each phase. For one participant (i.e., Becca), the decision was ethical in nature: because she struggled significantly with reading texts independently during the baseline phase, the change was made to prevent her frustration and discouragement before the intervention even began. The increasing trends for some of the participants make it difficult to ensure that student progress in the first intervention phase was due to the intervention and feedback rather than to the natural growth in reading rate from reading aloud to the research assistant.

A fourth limitation of the current study was that it was implemented as a summer program in participants' homes. Other studies of performance feedback and repeated reading interventions have been conducted in school settings. Although the current study's format allowed for the study of students' skills without the confounding variable of classroom instruction, while also utilizing typical school procedures, the home may not have been an ideal setting for intervention sessions, particularly for participants with attention and behavior difficulties. It led to variable levels of distraction for each participant, with parents and some siblings in close proximity. Future research studies should consider implementing similar interventions in schools. This could potentially decrease student distractibility and lead to a more authentic learning experience. Students' association of school with a place of learning and improvement might lead to more engagement in the intervention for some students. Similar research procedures could be incorporated into a school's normal MTSS procedures, and students could be provided with feedback on their performance.

Finally, another limitation of the current study involves the measures of student motivation, achievement goal orientation, and self-efficacy. As with most survey research that relies on individuals' self-reports of beliefs and feelings, participants' responses on the surveys (i.e., MRQ, AGOQ, RSPS, and self-efficacy ratings) might not represent their true beliefs. Children in particular may have been biased to provide socially desirable responses to please the adult researchers, despite reassurance that "there are no right or wrong answers." In addition, the one-item self-efficacy question that was used to briefly assess student self-efficacy during each baseline and intervention session may not have been sensitive enough to measure changes in student self-efficacy. Future research might determine what type of scale is most sensitive to changes in student self-efficacy or use measures of self-efficacy with more items in a pretest-posttest design.

Implications for Practice

Despite the limitations and the lack of support for the hypotheses, the results of the current study provide some limited implications for educational practice.
First, many school districts are currently using curriculum-based measurements (CBMs) to assess students' reading skills for screening, diagnostic, or instructional purposes and to monitor students' progress, without considering the reliability of these measurements. These CBM data are often shared with parents and school professionals and, in some cases, are also shared with students. The implications of sharing these data with students have not been thoroughly considered in educational research. Further research is needed to examine whether CBM feedback will encourage or discourage students. There was insufficient evidence in the present study to indicate that self-referenced performance feedback with a repeated reading intervention led to improvements in students' oral reading fluency or self-efficacy. However, providing feedback also did not seem to greatly frustrate students or systematically reduce their self-efficacy ratings. School professionals should think carefully about the poor reliability of measuring student oral reading fluency using CBM probes. The large amount of variability in the participants' oral reading fluency performance in the current study is consistent with other research demonstrating the unreliability of this type of measurement and the variability in student performance (Ardoin et al., 2013). Although there have been improvements in the design of reading CBMs, these quick measures of students' skills still have some level of unreliability (Ardoin et al., 2013). Sharing potentially unreliable data with students may be deceiving or discouraging. It may also lead some students to focus too much on competition with peers, or to focus only on reading speed and disregard accuracy or comprehension.

A second implication of the current study concerns student motivation related to reading. Students' mental states and beliefs should be considered as a component of the learning process. The current study sought to highlight the importance of assessing and addressing student self-efficacy, but there was limited support that higher reported self-efficacy corresponded to better academic performance among the student participants. Despite efforts to maintain student interest and engagement, most participants lost some level of interest over the summer. Changing the intervention activities or incorporating a reward system during the intervention sessions may be needed for monotonous interventions such as repeated readings. Low self-efficacy, or low perceived competence in one's own ability to perform certain tasks, may not be the most important motivational construct to consider or the driving force for student behavior and performance. Instead, consideration might need to be given to students' performance attributions, as knowing these attributions might inform school professionals about how a child is likely to interpret feedback and allow teachers to structure feedback accordingly (Schunk, 1983a). In discussing the results of their study, Stipek and Tannatt (1984) indicated that teachers should be careful because negative feedback to students regarding effort and work habits might be interpreted as negative feedback regarding their abilities. Performance attributions were outside the scope of the current study but may represent an area for future research.
Conclusions

Performance feedback is one way to effect change in students' academic skills and self-efficacy, but little is known about the effects of specific types of CBM performance feedback on the skills and self-efficacy of young, struggling readers. In this study, there was insufficient evidence to support a functional relation between self-referenced CBM feedback (SRF) and higher growth in oral reading fluency, or between SRF and higher reported self-efficacy ratings. While students made growth in oral reading fluency over the course of the summer intervention sessions, this growth was limited, and students made more progress at the start of the intervention. Despite efforts to maintain student engagement, most students lost some motivation over the summer. The repetition inherent in some evidence-based interventions can be monotonous for students. School professionals might consider changing intervention activities or incorporating a contingent reward system to foster student motivation. Given the significant variability in student oral reading fluency performance across the study, careful consideration should also be given to the unreliability of curriculum-based measurements when monitoring student progress and providing feedback to students on their skills.

APPENDICES

Appendix A

Initial Parent Consent Form

Dear Parent:

Your child is being considered for participation in a research study this summer. Through this study, we hope to learn about the effects of a reading intervention and feedback from reading assessments on the oral reading fluency and self-beliefs of students having difficulty in reading.

Study Title: Effects of Self- or Peer-Referenced CBM Feedback on Oral Reading Fluency and Self-Efficacy of Struggling Readers
Researchers and Titles: Kristen Girard, M.A., Doctoral Candidate and Sara (Bolt) Witmer, Ph.D., Associate Professor
Department: Department of Counseling, Educational Psychology, and Special Education, Michigan State University

My name is Kristen Girard and I am a graduate student at Michigan State University in the School Psychology Program. I am completing my doctoral requirements by examining the effects of an intervention on the reading fluency and self-beliefs of students. This research may lead to improvements in reading interventions in schools. Your child is being considered for this study because your child's second grade teacher indicated that he/she might benefit from additional reading instruction this summer. In order to determine if your child qualifies for the study, your child would need to complete a short reading assessment. This reading assessment uses curriculum-based measures, which are commonly used in many schools today, to assess your child's oral reading fluency skills. If your child meets the criteria for the study, your child may be selected to participate in the summer reading intervention. Of those who participate in the initial screening, six students will be selected for participation. Participation is completely voluntary.

If you decide to participate in this initial screening, two trained research assistants from Michigan State University will come to your home, give you a demographic survey to complete, and give your son/daughter the reading assessment. This survey should take no more than 5 minutes to fill out. The reading assessment should take no more than 10 minutes and will be audiotaped.
In addition, your child may be asked to fill out two additional surveys that ask questions about their beliefs about reading and learning and to participate in some sample reading activities. These additional surveys and activities should take no longer than 45 minutes. There are no risks involved in participating in this initial study, and you will be provided with the results of the reading assessment. The results of your survey and the results of your child's reading assessment will be kept confidential whether or not your child is selected for participation. Your confidentiality will be protected, except when reporting is required by law or MSU policy. Only the researchers, research staff, and Michigan State University's Institutional Review Board (IRB) will have access to the data.

Participation is voluntary; you may choose not to participate at all, refuse to participate in certain procedures or answer certain questions, or discontinue your participation at any time without consequence (e.g., it will not affect the treatment you receive). The researcher is responsible for explaining the risks and benefits of participation so that you may make an informed decision regarding participation. Please ask the researchers any questions you may have about this study. If you have questions about this study, you may direct those to Dr. Witmer at sbolt@msu.edu or to myself at 401-439-8970 or girardk3@msu.edu. If you have any questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this research study, you may contact, anonymously if you wish, the Michigan State University Human Research Protection Program at 517-355-2180, FAX 517-432-4503, or e-mail irb@msu.edu, or regular mail at: 408 West Circle Drive, Room 207 Olds Hall, MSU, East Lansing, MI 48824.

Kristen Girard, M.A.
Sara (Bolt) Witmer, Ph.D.

If you sign your name below, it means that you agree to have research staff come to your home, complete a brief demographic survey, and have your child participate in the initial reading screening, answer questions about their thoughts about reading, and engage in some sample reading activities. It also means that you agree to have your child's reading assessment audiotaped and that you understand your child may or may not be selected for participation in the research study.

_____________________________ Your Name (Printed) _________________ Date
_____________________________ Your Child's Name (Printed) _________________ Date
_____________________________ Your Signature _________________ Date
______________________________ Signature of Person Obtaining Consent _________________ Date

Appendix B

Parent Demographic Survey

Your Name
Child's Name
Child's Teacher

Will your child be available for reading support this summer, Monday through Friday, for at least 7 weeks of the summer vacation? ☐Yes ☐No

Please list any dates your child will be unavailable this summer due to prior commitments, vacations, etc.
____________________________________________
____________________________________________
____________________________________________
____________________________________________

Is your child being promoted to 3rd grade? ☐Yes ☐No
Is English your child's primary language? ☐Yes ☐No
Does your child receive special education services? ☐Yes ☐No
Does your child receive any other tutoring or other educational supports? ☐Yes ☐No
If yes, please list services/supports below:
____________________________________________
____________________________________________
____________________________________________
____________________________________________

Appendix C

Final Parent Consent Form

Dear Parent:

My name is Kristen Girard and I am a graduate student at Michigan State University in the School Psychology Program. I am completing my doctoral requirements by examining the effects of an intervention on the reading fluency and self-beliefs of six students. This research may lead to improvements in reading interventions and assessments in schools. You were recently invited to have your child participate in an initial screening for a research study. Based on the results of this screening, your child has been selected to participate in the reading intervention study that will occur this summer.

Study Title: Effects of Self- or Peer-Referenced CBM Feedback on Oral Reading Fluency and Self-Efficacy of Struggling Readers
Researchers and Titles: Kristen Girard, M.A., Doctoral Candidate and Sara (Bolt) Witmer, Ph.D., Associate Professor
Department: Department of Counseling, Educational Psychology, and Special Education, Michigan State University

Participation Procedures

If you decide to have your child participate, a university researcher will briefly review your child's CA-60 (school file) to gather descriptive information about his/her previous achievement in reading (report cards with grades and teacher comments) and any previous reading interventions that have been conducted with your child at school. The results of this file review will not affect the procedures of the study or the participant's inclusion in the study. Your child will then be asked to attend 30-45 minute intervention sessions five times per week for the summer vacation (8 weeks). The intervention sessions will involve a research-based reading intervention and brief reading assessments. Your child will receive feedback regarding these reading assessments. Your child will also be asked to complete several surveys about their thoughts and beliefs about themselves and about reading. Two trained research assistants from Michigan State University will come to your home to run these intervention sessions. There are no risks involved in participating in this study, except that the intervention sessions are not guaranteed to improve your child's reading. In addition, it is possible that your child may be frustrated or bored with participating in these intervention sessions. Every effort will be made to make these intervention sessions helpful and enjoyable for your child. Your child will receive the benefit of additional reading support over the summer vacation and will receive eight books (one book per week) for participating.

Confidentiality and Privacy

Your confidentiality will be protected, except when reporting is required by law or MSU policy. All data collection will occur during private sessions with your child and one or two trained research assistants. The data for this project will be kept confidential to protect participants' privacy. All students will be assigned a confidential code by the principal researcher. Research data collected will use this code instead of your child's name. Records of this study will be kept confidential, and you and your child will not be identified in any written or verbal reports.
Data Protection

The researcher will provide you and your child's school with the results of the study in a summary with no individual data. Data will only be used for research purposes. All data will be stored on a password-protected computer and external hard drive. Any hard copies of data will be stored in a locked cabinet in a locked room at Michigan State University. Only the researchers, research assistants, and Michigan State University's Institutional Review Board (IRB) will have access to the data.

Contacts for Questions

Participation is voluntary; you or your child may choose not to participate at all, refuse to participate in certain procedures or answer certain questions, or discontinue your participation at any time without consequence (e.g., it will not affect the treatment you receive). The researcher is responsible for explaining the risks and benefits of participation so that you may make an informed decision regarding participation. Please ask the researchers any questions you may have about this study. If you have questions about this study, you may direct those to Dr. Witmer at sbolt@msu.edu or to myself at 401-439-8970 or girardk3@msu.edu. If you have any questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this research study, you may contact, anonymously if you wish, the Michigan State University Human Research Protection Program at 517-355-2180, FAX 517-432-4503, or e-mail irb@msu.edu, or regular mail at: 408 West Circle Drive, Room 207 Olds Hall, MSU, East Lansing, MI 48824.

Kristen Girard, M.A.
Sara (Bolt) Witmer, Ph.D.

If you sign your name below, it means that you agree to have your child participate in this research study. In addition, you agree to have your child's CA-60 file reviewed by the principal researcher.

_____________________________ Your Name (Printed) _________________ Date
_____________________________ Your Child's Name (Printed) _________________ Date
_____________________________ Your Signature _________________ Date
______________________________ Signature of Person Obtaining Consent _________________ Date

Appendix D

Student Assent Form

Title: Effects of Self- or Peer-Referenced CBM Feedback on Oral Reading Fluency and Self-Efficacy of Struggling Readers
Department: Department of Counseling, Educational Psychology, and Special Education
Institution: Michigan State University
Investigators: Kristen Girard, M.A., Doctoral Candidate; Sara (Bolt) Witmer, Ph.D., Associate Professor

The investigators named above are doing a research study. These are some things we want you to know about research studies: We are asking you to be in a research study. We can learn from research. Research is a way to test new ideas.

Why am I being asked to be in this research study?
You are being asked to be in the study because your parents and teachers think that some extra reading time might be helpful for you this summer. Whether or not to be in this research is your choice. You can say Yes or No. Whatever you decide is OK.

What is the study about?
School psychologists and teachers need to learn more about the best way to help kids who have trouble reading and to learn more about what children think about reading.

What will happen during this study?
If you agree to be in this study, you will get to work with an adult for 30-45 minutes, five times a week, on reading activities this summer in your home.
The adult will also ask you to answer some questions about how you feel about reading. There is very little risk if you decide to participate.

What are the good things that might happen?
The investigators might find out something that will help teachers teach other children like you how to read. The investigators might also find out more about how children like you feel about reading. You will also get to take home eight books (one book per week) while you are participating in the study.

Who should I ask if I have any questions?
If you have any questions about this study, you or your parents can call Kristen Girard at 401-439-8970.

Do I have to be in the study?
No, you do not have to be in the study. Even if you say yes now, you can change your mind later. It is up to you. No one will be mad at you if you don't want to do this.

Now that I have asked my questions and think I know about the study and what it means, here is what I decided:
_________ OK, I'll be in the study.
_________ No, I do not want to be in the study.

The adults have told me about the research. I could ask questions if needed. I know I can ask questions now and later. I want to be in the research. If you sign your name below, it means that you agree to take part in this research study.
_____________________________ Your Name (Printed) ____ Age _________________ Date
_____________________________ Your Signature _________________ Date
________________________________________ Signature of Person Obtaining Assent _________________ Date

Appendix E

Motivation for Reading Questionnaire

Participant #: __________ Date: _________________

Motivations for Reading Questionnaire

We are interested in your reading. The sentences tell how some students feel about reading. Listen to each sentence and decide whether it talks about a person who is like you or different from you. There are no right or wrong answers. We only want to know how you feel about reading. For many of the statements, you should think about the kinds of things you read in school. Here are some to try before we start on the ones about reading:

I like ice cream.
1 Very different from me   2 A little different from me   3 Not sure   4 A little like me   5 A lot like me

If the statement is very different from you, circle a 1. If the statement is a little different from you, circle a 2. If you are not sure if it is different or like you, circle a 3. If the statement is a little like you, circle a 4. If the statement is a lot like you, circle a 5.

I like spinach.
1 Very different from me   2 A little different from me   3 Not sure   4 A little like me   5 A lot like me

If the statement is very different from you, what should you circle? If the statement is a little different from you, what should you circle? If you are not sure, what should you circle? If the statement is a little like you, what should you circle?
 If the statement is a lot like you, what should you circle? Okay, we are ready to start on the ones about reading. Remember, when you give your answers you should think about the things you are reading in your class. There are no right or wrong answers, we just are interested in YOUR ideas about reading. To give your answer, circle ONE number on each line. The answer lines are right under each statement. Let’s turn the page and start. Please follow along with me while I read each of the statements, and then circle your answer. 143 1. I like being the best at reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 2. I like it when the questions in books make me think. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 3. I read to get better grades. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 4. If the teacher talks about something interesting I might read more about it. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 5. I like hard books. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 6. I enjoy a long story or fiction book. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 144 7. I know that I will do well in reading next year. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 
 8. If a book is interesting I don’t care how hard it is to read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 9. I try to get more answers right than my friends. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 10. I have favorite subjects that I like to read about. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 11. I visit the library often with my family. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 12. I make pictures in my mind when I read. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 145 13. I don’t like reading something when the words are too hard. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 14. I enjoy reading books about people in different countries. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 15. I am a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 16. I usually learn difficult things by reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 17. It is very important to me to be a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 
 18. My parents often tell me what a good job I am doing in reading. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 146 19. I read to learn new information about topics that interest me. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 
 20. If the project is interesting, I can read hard material. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 21. I learn more from reading than most students in the class. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 22. I read stories about fantasy and make believe. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 23. I read because I have to. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 24. I don’t like vocabulary questions. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 147 25. I like to read about new things. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 26. I often read to other family members. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 27. Compared to my other activities, it is very important to me to be a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 28. I like having the teacher say I read well. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 29. I read about my hobbies to learn more about them. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 30. I like mysteries. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 148 31. My friends and I like to trade things to read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 32. I think hard stories are no fun to read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 33. I read a lot of adventure stories. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 34. I do as little schoolwork as possible in reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 35. I feel like I make friends with people in good books. 
1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 36. Finishing every reading assignment is very important to me. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 149 37. My friends sometimes tell me I am a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 38. Grades are a good way to see how well you are doing in reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 39. I like to help my friends with their schoolwork in reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 40. I don’t like it when there are too many people in the story. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 41. I am willing to work hard to read better than my friends. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 42. I sometimes read to my parents. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 150 43. I like to get compliments for my reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 44. It is important for me to see my name on a list of good readers. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 45. I talk to my friends about what I am reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 46. I always try to finish my reading on time. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 47. I am happy when someone recognizes my reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 48. I like to tell my family about what I am reading. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me ______________________________________________________________________________ 151 49. I like being the only one who knows an answer in something we read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 50. I look forward to finding out my reading grade. 
1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 51. I always do my reading work exactly as the teacher wants it. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 52. I like to finish my reading before other students. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me ______________________________________________________________________________ 53. My parents ask me about my reading grade. 1 Very different from me 2 A little different from me 3 Not sure 152 4 A little like me 5 A lot like me Appendix F Achievement Goal Orientation Questionnaire Participant #: __________ Date: _________________ Student Questionnaire The sentences tell how some students feel about school. Listen to each sentence and decide whether it is true or not true for you. There are no right or wrong answers. We only want to know how you feel about school. Here is one to try before we start on the ones about school: I like strawberry ice cream. 1 NOT AT ALL TRUE 2 3 SOMEWHAT TRUE 4 5 VERY TRUE If the statement is not at all true, circle a 1. If the statement is a little true, circle a 2. If the statement is somewhat true, circle a 3. If the statement is pretty true, circle a 4. If the statement is very true, circle a 5.
 Okay, we are ready to start on the ones about school. There are no right or wrong answers, we just are interested in YOUR ideas about reading. To give your answer, circle ONE number on each line. The answer lines are right under each statement. Let’s turn the page and start. Please follow along with me while I read each of the statements, and then circle your answer. 153 1. One of my goals in class is to avoid looking like I have trouble doing the work. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 2. One of my goals is to show others that I’m good at my class work. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 3. One of my goals is to look smart compared to other students in my class. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 4. It’s important to me that I learn a lot of new information this year. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 5. One of my goals is to keep others from thinking I’m not smart in class. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 6. One of my goals in class is to learn as much as I can. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 7. It’s important to me that my teacher doesn’t think that I know less than others in the class. 1 NOT AT ALL TRUE 2 3 SOMEWHAT TRUE 154 4 5 VERY TRUE 8. One of my goals is to get a lot of new skills this year. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 9. It’s important to me that I completely understand my class work. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 10. It’s important to me that other students in my class think I am good at my class work. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 11. One of my goals is to show others that class work is easy for me. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 12. It’s important to me that I look smart compared to others in my class. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 13. It’s important to me that I don’t look stupid in class. 1 2 3 4 5 NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE __________________________________________________________________ 14. It’s important to me that my skills get better this year. 1 NOT AT ALL TRUE 2 3 SOMEWHAT TRUE 155 4 5 VERY TRUE Appendix G Self-Efficacy Rating Practice Rating If you were asked to jump different distances right now, how sure are you that you can jump each of the distances described below? Rate your degree of confidence by recording a number from 1 to 5 using the scale given below. 1 I’m sure I can not 2 3 4 5 Don’t think I can Maybe I can Pretty sure I can I’m sure I can Confidence (1-5) ____ ____ ____ ____ Distance 1 foot 3 feet 5 feet 7 feet Reading Rating If you were asked to read this passage right now, how sure are you that you can read aloud well? Rate your degree of confidence by recording a number from 1 to 5 using the scale given below. 
1 I’m sure I can not 2 Don’t think I can 3 Maybe I can 156 4 Pretty sure I can 5 I’m sure I can Appendix H Reader Self-Perception Scale Participant #: __________ Date: _________________ Reader Self-Perception Scale Listed below are statements about reading. Please read each statement carefully and decide whether it talks about a person like you or different from you. Then circle the number that matches what you think. There are no right or wrong answers. We only want to know how you feel about reading. Example: I think pepperoni pizza is the best. 1 Very different from me 2 A little different from me 3 Not sure 4 A little like me 5 A lot like me If the statement is really different from you, circle 1. If the statement is a little different from you, circle 2. If you are not sure if the statement is more like you or more different from you, circle 3. If the statement is a little like you, circle 4. If the statement is a lot like you, circle 5. Okay, we are ready to start on the ones about reading. There are no right or wrong answers, we just are interested in YOUR ideas about reading. To give your answer, circle ONE number on each line. The answer lines are right under each statement. Let’s turn the page and start. Please follow along with me while I read each of the statements, and then circle your answer. 157 1. I think I am a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 2. I can tell that my teacher likes to listen to me read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 3. My teacher thinks that my reading is fine. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 4. I like to read faster than other kids. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 5. I like to read aloud. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 6. When I read, I can figure out words better than other kids. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 158 7. My classmates like to listen to me read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 8. I feel good inside when I read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 9. My classmates think that I read pretty well. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 10. When I read, I don’t have to try as hard as I used to. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 11. I seem to know more words than other kids when I read. 
1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 12. People in my family think I am a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 159 13. I am getting better at reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 14. I understand what I read as well as other kids do. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 15. When I read, I need less help than I used to. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 16. Reading makes me feel happy inside. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 17. My teacher thinks I am a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 18. Reading is easier for me than it used to be. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 160 19. I read faster than I could before. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 20. I read better than other kids in my class. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 21. I feel calm when I read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 22. I read more than other kids. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 23. I understand what I read better than I could before. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 24. I can figure out words better than I could before. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 161 25. I feel comfortable when I read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 26. I think reading is relaxing. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 27. I read better now than I could before. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 28. 
When I read, I recognize more words than I used to. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 29. Reading makes me feel good. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 30. Other kids think I’m a good reader. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 162 31. People in my family think I read pretty well. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 32. I enjoy reading. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 33. People in my family like to listen to me read. 1 2 3 4 5 Very A little Not A little A lot different different sure like me like me from me from me __________________________________________________________________ 163 Appendix I Script for Peer-Referenced Feedback Phase Research assistant explains graph: Remember that the goal of our sessions together is to help you improve your reading. Here is a graph that shows us your score on your most recent reading. The numbers at the bottom of the graph (points at X axis) show us the number of sessions we have had together. The numbers on the side of the graph (points at Y axis) shows us the number of words read correctly when you are timed for 1 minute. This line at the top (points to norm line) shows us the number of words read correctly by other students at the end of second grade. This point (points to data point) shows us your score today/the last time when I timed you for 1 minute. We can see that you read ___ (insert median score here) words correctly in 1 minute today/ the last time when I timed you. Research assistant says one of the following based on the student’s performance: If the student scores lower than the norm line… Your score is below the number of words read by other students at the end of second grade. But I know you have been trying to do your best reading with me. Let’s see how you do today/next time. If the student scores at the norm line (106 words )… Your score is the same number of words read by other students at the end of second grade. I know you have been trying to do your best reading with me. Let’s see how you do today/next time. If the student scores higher than the norm line… Your score is higher than the number of words read by other students at the end of second grade. I know you have been trying to do your best reading with me. Let’s see how you do today/next time. 164 Appendix J Script for Self-Referenced Feedback Phase Research assistant explains graph: Remember that the goal of our sessions together is to help you improve your reading. Here is a graph that shows us your score on your most recent reading. The numbers at the bottom of the graph (points at X axis) show us the number of sessions we have had together. The numbers on the side of the graph (points at Y axis) shows us the number of words read correctly when you are timed for 1 minute. This point (points to latest data point) shows us your score today/the last time when I timed you for 1 minute. 
Appendix J

Script for Self-Referenced Feedback Phase

Research assistant explains graph:

Remember that the goal of our sessions together is to help you improve your reading. Here is a graph that shows us your score on your most recent reading. The numbers at the bottom of the graph (points at X axis) show us the number of sessions we have had together. The numbers on the side of the graph (points at Y axis) show us the number of words read correctly when you are timed for 1 minute. This point (points to latest data point) shows us your score today/the last time when I timed you for 1 minute. The other points (points to previous data points) show us your scores for every time I have timed you before. We can see that you read ___ (insert median score here) words correctly in 1 minute today/the last time when I timed you.

Research assistant says one of the following based on the student's performance:

If the student scores lower than the previous session… You read fewer words than the last time I timed you. But I know you have been trying to do your best reading with me. Let's see how you do today/next time.

If the student scores the same as the previous session… You read the same number of words as the last time I timed you. But I know you have been trying to do your best reading with me. Let's see how you do today/next time.

If the student scores higher than the previous session… You read more words than the last time I timed you. I know you have been trying to do your best reading with me. Let's see how you do today/next time.
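A parallel sketch covers the self-referenced branch. Here the comparison is with the student's own previous session rather than a norm; the sketch assumes the script's "median score" is the median of the session's timed readings, and srf_message is an illustrative name only, not the study's software.

    # Illustrative only: selecting the Appendix J statement by comparing the
    # current session's median WCPM with the previous session's score.
    from statistics import median

    def srf_message(current_passage_scores, previous_score):
        """current_passage_scores: WCPM from each timed reading this session
        (the script inserts the median of these); previous_score: the
        median WCPM from the last session."""
        current = median(current_passage_scores)
        if current < previous_score:
            return "You read fewer words than the last time I timed you."
        if current == previous_score:
            return "You read the same number of words as the last time I timed you."
        return "You read more words than the last time I timed you."

    print(srf_message([52, 58, 55], previous_score=51))  # "You read more words..."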
Appendix K

Intervention Integrity Checklists

Baseline Phase Session (for each step, circle one: Present / Missing)
1. Child selects book(s) and reads aloud for 20 minutes without modeling, repetition, or feedback from RA
2. Self-Efficacy Rating administered
3. DIBELS Next Oral Reading Fluency probe administered and scored
4. Reader Self-Perception Scale administered (last session of phase only)
5. Child and RA play card game

Self-Referenced Feedback (SRF) Phase Session (for each step, circle one: Present / Missing)
1. SRF scripted feedback provided using previous session's score
2. Child selects book with help from RA
3. RA asks child 2 questions: Why did you choose this book? What do you think this book is about?
4. RA reads book aloud and child follows along
5. Child reads book aloud two times
6. RA asks and child answers 3 questions (provides guidance as needed): 1 fact/story detail; 1 personal experience/connection to the content; 1 favorite part
7. Child reads book aloud two times
8. RA repeats steps 2-7 until 20 minutes have elapsed
9. Self-Efficacy Rating administered
10. DIBELS Next Oral Reading Fluency probe administered and scored
11. SRF feedback graph created
12. SRF scripted feedback provided using current session's score
13. Reader Self-Perception Scale administered (last session of phase only)
14. Child and RA play card game

Peer-Referenced Feedback (PRF) Phase Session (for each step, circle one: Present / Missing)
1. PRF scripted feedback provided using previous session's score
2. Child selects book with help from RA
3. RA asks child 2 questions: Why did you choose this book? What do you think this book is about?
4. RA reads book aloud and child follows along
5. Child reads book aloud two times
6. RA asks and child answers 3 questions (provides guidance as needed): 1 fact/story detail; 1 personal experience/connection to the content; 1 favorite part
7. Child reads book aloud two times
8. RA repeats steps 2-7 until 20 minutes have elapsed
9. Self-Efficacy Rating administered
10. DIBELS Next Oral Reading Fluency probe administered and scored
11. PRF feedback graph created
12. PRF scripted feedback provided using current session's score
13. Reader Self-Perception Scale administered (last session of phase only)
14. Child and RA play card game

Appendix L

Intervention Quality Rating Criteria

Session Protocol Quality Rating (circle one):
1 - Low: 1 component missing AND small deviation from scripted feedback (i.e., changing just 1-3 words without changing the meaning of the sentence), OR 2+ components missing, OR large deviation from script (i.e., changing words and changing the meaning of the sentence)
2 - Med: 1 component missing OR small deviation from scripted feedback
3 - High: All required components present and feedback provided as scripted

Child Attentiveness/Engagement (circle one):
1 - Low: Child demonstrates no or low levels of attentiveness/engagement (e.g., frequently engaging in off-task behavior, appearing bored)
2 - Med: Child demonstrates medium levels of attentiveness/engagement (e.g., occasionally engaging in off-task behavior or appearing bored)
3 - High: Child demonstrates high levels of attentiveness/engagement (e.g., being responsive to RA, appearing happy/excited)

RA Engagement/Enthusiasm (circle one):
1 - Low: RA demonstrates no or low levels of engagement and enthusiasm (e.g., using monotone voice or ignoring child while child reads)
2 - Med: RA demonstrates medium levels of engagement and enthusiasm (e.g., occasionally using friendly facial expressions, using warm tone, being responsive to child while child reads)
3 - High: RA demonstrates high levels of engagement and enthusiasm (e.g., consistently using friendly facial expressions, using warm tone, being responsive to child while child reads)

Child Frustration (circle one):
1 - Low: Child does not demonstrate any visible signs of frustration
2 - Med: Child appears slightly frustrated as evidenced by needing minor prompting (i.e., 1-3 statements from RA) or encouragement from RA to continue
3 - High: Child appears significantly frustrated as evidenced by appearing visibly angry/upset/sad, and/or needing excessive prompting (i.e., 4+ statements from RA) or encouragement from RA, and/or refusing to participate in the session
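The appendices do not state how checklist and rating data were aggregated. As one hypothetical illustration, session integrity from the Appendix K checklists could be summarized as the percentage of components marked Present, a common convention in single-case research; the helper below (integrity_percent) is an invented example, not the study's procedure.

    # Hypothetical helper, not part of the study materials: summarizing one
    # completed Appendix K checklist as the percentage of components marked
    # "Present," a common way single-case studies report treatment integrity.
    def integrity_percent(checklist):
        """checklist maps each numbered component to True (Present) or False (Missing)."""
        present = sum(1 for marked in checklist.values() if marked)
        return 100.0 * present / len(checklist)

    # Example: an SRF session with step 11 (feedback graph created) missed.
    srf_session = {step: True for step in range(1, 15)}
    srf_session[11] = False
    print(f"Integrity: {integrity_percent(srf_session):.0f}%")  # prints "Integrity: 93%"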
REFERENCES

Airasian, P. W. (1997). Classroom assessment (3rd ed.). New York: McGraw-Hill.
Alber-Morgan, S. R., Ramp, E. M., Anderson, L. L., & Martin, C. M. (2007). Effects of repeated readings, error correction, and performance feedback on the fluency and comprehension of middle school students with behavior problems. Journal of Special Education, 41 (1), 17-30. doi: 10.1177/00224669070410010201
Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84, 261-271. doi: 10.1037/0022-0663.84.3.261
Ames, C., & Ames, R. (1984). Systems of student and teacher motivation: Toward a qualitative definition. Journal of Educational Psychology, 76 (4), 535-556. doi: 10.1037/0022-0663.76.4.535
Ardoin, S. P., & Christ, T. J. (2009). Curriculum based measurement of oral reading: Estimates of standard error when monitoring progress using alternate passage sets. School Psychology Review, 38, 266-283. Retrieved from http://www.nasponline.org/publications/spr
Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1-18. doi: 10.1016/j.jsp.2012.09.004
Baker, L., & Wigfield, A. (1999). Dimensions of children's motivation for reading and their relations to reading activity and reading achievement. Reading Research Quarterly, 34 (4), 452-477. doi: 10.1598/RRQ.34.4.4
Baker, S. K., Smolkowski, K., Katz, R., Fien, H., Seeley, J. R., Kame'enui, E. J., & Beck, C. T. (2008). Reading fluency as a predictor of reading proficiency in low-performing, high-poverty schools. School Psychology Review, 37 (1), 18-37. Retrieved from http://www.nasponline.org/publications/spr
Bandura, A. (1977a). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1977b). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84 (2), 191-215. doi: 10.1037/0033-295X.84.2.191
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37 (2), 122-147. doi: 10.1037/0003-066X.37.2.122
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (2006). Guide for constructing self-efficacy scales. In T. Urdan & F. Pajares (Eds.), Self-efficacy beliefs of adolescents (pp. 307-337). Charlotte, NC: Information Age Publishing.
Bandura, A., & Schunk, D. H. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41, 586-598. doi: 10.1037/0022-3514.41.3.586
Baumann, J. F., Hoffman, J. V., Duffy-Hester, A. M., & Moon Ro, J. (2000). The First R yesterday and today: U.S. elementary reading instruction practices reported by teachers and administrators. Reading Research Quarterly, 35 (3), 338-377. doi: 10.1598/RRQ.35.3.2
Baumeister, R. F., Campbell, J. D., Krueger, J. I., & Vohs, K. D. (2003). Does high self-esteem cause better performance, interpersonal success, happiness, or healthier lifestyles? Psychological Science in the Public Interest, 4 (1), 1-44. doi: 10.1111/1529-1006.01431
Begeny, J. C., Braun, L. M., Lynch, H. L., Ramsay, A. C., & Wendt, J. M. (2012). Initial evidence for using the HELPS reading fluency program with small instructional groups. School Psychology Forum: Research in Practice, 6 (3), 50-63. Retrieved from http://www.nasponline.org/publications/spf/index-list.aspx
Begeny, J. C., Laugle, K. M., Krouse, H. E., Lynn, A. E., Tayrose, M. P., & Stage, S. A. (2010). A control-group comparison of two reading fluency programs: The Helping Early Literacy with Practice Strategies (HELPS) program and the Great Leaps K-2 reading program. School Psychology Review, 39 (1), 137-155. Retrieved from http://www.nasponline.org/publications/spr
Brown, A. L., Palincsar, A. S., & Purcell, L. (1986). Poor readers: Teach, don't label. In U. Neisser (Ed.), The school achievement of minority children: New perspectives (pp. 105-143). Hillsdale, NJ: Erlbaum.
Brown-Chidsey, R., & Steege, M. W. (2010). Response to intervention: Principles and strategies for effective practice (2nd ed.). New York, NY: Guilford Press.
Chafouleas, S. M., Martens, B. K., Dobson, R. L., Weinstein, K. S., & Gardner, K. B. (2004). Fluent reading as improvement of stimulus control: Additive effects of performance-based interventions to repeated reading on students' reading and error rates. Journal of Behavioral Education, 13 (2), 67-81.
Chapman, J. W., & Tunmer, W. E. (2010). Reading difficulties, reading-related self-perceptions, and strategies for overcoming negative self-beliefs. Reading & Writing Quarterly: Overcoming Learning Difficulties, 19 (1), 5-24. doi: 10.1080/10573560390143003
Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35 (5), 386-406. doi: 10.1177/00222194020350050101
Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130-146. Retrieved from http://www.nasponline.org/publications/spr
Cipielewski, J., & Stanovich, K. (1992). Predicting growth in reading ability from children's exposure to print. Journal of Experimental Child Psychology, 54, 74-89. doi: 10.1016/0022-0965(92)90018-2
Codding, R. S., Chan-Iannetta, L., Palmer, M., & Lukito, G. (2009). Examining a classwide application of cover-copy-compare with and without goal setting to enhance mathematics fluency. School Psychology Quarterly, 24 (3), 173-185. doi: 10.1037/a0017192
Conte, K. L., & Hintze, J. M. (2000). The effects of performance feedback and goal setting on oral reading fluency within curriculum-based measurement. Assessment for Effective Intervention, 25 (2), 85-98. doi: 10.1177/073724770002500201
Cooper, H., Nye, B., Charlton, K., Lindsay, J., & Greathouse, S. (1996). The effects of summer vacation on achievement test scores: A narrative and meta-analytic review. Review of Educational Research, 66 (3), 227-268. doi: 10.3102/00346543066003227
Coulter, G., Shavin, K., & Gichuru, M. (2009). Oral reading fluency: Accuracy of assessing errors and classification of readers using a 1-min timed reading sample. Preventing School Failure, 54 (1), 71-76. doi: 10.3200/PSFL.54.1.71-76
Cox, K. E., & Guthrie, J. T. (2001). Motivational and cognitive contributions to students' amount of reading. Contemporary Educational Psychology, 26, 116-131. doi: 10.1006/ceps.1999.1044
Cunningham, A. E., & Stanovich, K. E. (1991). Tracking the unique effects of print exposure in children: Association with vocabulary, general knowledge, and spelling. Journal of Educational Psychology, 83, 264-274. doi: 10.1037/0022-0663.83.2.264
Davis, L. B., Fuchs, L. S., Fuchs, D., & Whinnery, K. (1995). "Will CBM help me learn?" Students' perception of the benefits of curriculum-based measurement. Education and Treatment of Children, 18 (1), 19-32. Retrieved from http://www.educationandtreatmentofchildren.net/
deBettencourt, L., & Zigmond, N. (1990). The learning disabled secondary school dropout: What teachers should know. What teachers can do. Teacher Education and Special Education, 13, 17-20. doi: 10.1177/088840649001300103
Deno, S. (1992). The nature and development of curriculum-based measurement. Preventing School Failure, 36, 5-10. doi: 10.1080/1045988X.1992.9944262
Dorn, S. (2010). The political dilemmas of formative assessment. Exceptional Children, 76 (3), 325-337. Retrieved from http://journals.cec.sped.org/ec/
Dunn, E. K., & Eckert, T. L. (2002). Curriculum-based measurement in reading: A comparison of similar versus challenging material. School Psychology Quarterly, 17 (1), 24-46. doi: 10.1521/scpq.17.1.24.19904
Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41 (10), 1040-1048. Retrieved from http://psycnet.apa.org/journals/amp/
Dweck, C. S. (1999). Caution – praise can be dangerous. American Educator, 23 (1), 4-9. Retrieved from http://www.aft.org
Dweck, C. S., & Bempechat, J. (1983). Children's theories of intelligence. In S. Paris, G. Olsen, & H. Stevenson (Eds.), Learning and motivation in the classroom (pp. 239-256). Hillsdale, NJ: Erlbaum.
Dweck, C. S., & Leggett, E. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95, 256-273. doi: 10.1037/0033-295X.95.2.256
Eckert, T. L., Ardoin, S. P., Daly, E. J., III, & Martens, B. K. (2002). Improving oral reading fluency: A brief experimental analysis of combining an antecedent intervention with consequences. Journal of Applied Behavior Analysis, 35 (3), 271-281. doi: 10.1901/jaba.2002.35-271
Eckert, T. L., Dunn, E. K., & Ardoin, S. P. (2006). The effects of alternate forms of performance feedback on elementary-aged students' oral reading fluency. Journal of Behavioral Education, 15, 149-162. doi: 10.1007/s10864-006-9018-6
Fountas, I., & Pinnell, G. S. (2005). Leveled books, K-8: Matching texts to readers for effective teaching. Portsmouth, NH: Heinemann.
Frieze, I. H., & Bar-Tal, D. (1980). Developmental trends in cue utilization for attributional judgments. Journal of Applied Developmental Psychology, 1, 83-94. doi: 10.1016/0193-3973(80)90001-5
Fuchs, L. S., Bahr, C. M., & Rieth, H. J. (1989). Effects of goal structures and performance contingencies on the math performance of adolescents with learning disabilities. Journal of Learning Disabilities, 22 (9), 554-560. doi: 10.1177/002221948902200907
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5 (3), 239-256. doi: 10.1207/S1532799XSSR0503_3
Gaa, J. P. (1973). Effects of individual goal-setting conferences on achievement, attitudes, and goal-setting behavior. Journal of Experimental Education, 42 (1), 22-28. Retrieved from http://www.tandf.co.uk/journals/titles/00220973.asp
Gage, N. A., & Lewis, T. J. (2013). Analysis of effect for single-case design research. Journal of Applied Sport Psychology, 25 (1), 46-60. doi: 10.1080/10413200.2012.660673
Glor-Scheib, S., & Zigmond, N. (1993). Exploring the potential motivational properties of curriculum-based measurement in reading among middle school students with learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 4, 35-43. Retrieved from http://www.ldanatl.org/journal.asp
Good, R. H., & Kaminski, R. A. (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: University of Oregon.
Guthrie, J. T., Wigfield, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis, M. H., … Tonks, S. (2004). Increasing reading comprehension and engagement through Concept-Oriented Reading Instruction. Journal of Educational Psychology, 96, 403-423. doi: 10.1037/0022-0663.96.3.403
Hasbrouck, J. E., & Tindal, G. (1992). Curriculum-based oral reading fluency norms for students in grades 2 through 5. Teaching Exceptional Children, 24, 41-44. Retrieved from http://journals.cec.sped.org/tec
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81-112. doi: 10.3102/003465430298487
Henk, W. A., & Melnick, S. A. (1995). The Reader Self-Perception Scale (RSPS): A new tool for measuring how children feel about themselves as readers. Reading Teacher, 48 (6), 470-482. Retrieved from http://www.reading.org/general/Publications/Journals/RT.aspx
Hinson, J., DiStefano, C., & Daniel, C. (2003). The Internet Self-Perception Scale: Measuring elementary students' levels of self-efficacy regarding internet use. Journal of Educational Computing Research, 29 (2), 209-228. doi: 10.2190/BWGN-84AE-9AR6-16DY
Hintze, J. M., Callahan, J. E., Matthews, W. J., Williams, S. A. S., & Tobin, K. G. (2002). Oral reading fluency and prediction of reading comprehension in African American and Caucasian elementary school children. School Psychology Review, 31 (4), 540-553. Retrieved from http://www.nasponline.org/publications/spr
Hintze, J. M., Christ, T. J., & Methe, S. A. (2006). Curriculum-based assessment. Psychology in the Schools, 43 (1), 45-56. doi: 10.1002/pits
Howe, K. B., & Shinn, M. M. (2002). Standard reading assessment passages (RAPs) for use in general outcome measurement: A manual describing development and technical features. Eden Prairie, MN: Edformation. Retrieved from http://www.aimsweb.com/uploads/pdfs/passagestechnicalmanual.pdf
Jenkins, J., & Terjeson, K. J. (2011). Monitoring reading growth: Goal setting, measurement frequency, and methods of evaluation. Learning Disabilities Research & Practice, 26 (1), 28-35. doi: 10.1111/j.1540-5826.2010.00322.x
Johnson, L., Graham, S., & Harris, K. R. (1997). The effects of goal setting and self-instruction on learning a reading comprehension strategy: A study of students with learning disabilities. Journal of Learning Disabilities, 30 (1), 80-91. doi: 10.1177/002221949703000107
Jones, E. D., & Krouse, J. R. (1988). The effectiveness of data-based instruction by student teachers in classrooms for pupils with mild learning handicaps. Teacher Education and Special Education, 11 (1), 9-19. doi: 10.1177/088840648801100102
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York, NY: Oxford University Press.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119 (2), 254-284. doi: 10.1037/0033-2909.119.2.254
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95 (1), 3-21. doi: 10.1037/0022-0663.95.1.3
Lackaye, T. D., & Margalit, M. (2006). Comparisons of achievement, effort, and self-perceptions among students with learning disabilities and their peers from different achievement groups. Journal of Learning Disabilities, 39 (5), 432-446. doi: 10.1177/00222194060390050501
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal-setting and task performance. American Psychologist, 57, 705-717. doi: 10.1037/0003-066X.57.9.705
Lynch, J. (2002). Parents' self-efficacy beliefs, parents' gender, children's reader self-perceptions, reading achievement and gender. Journal of Research in Reading, 25 (1), 54-67. doi: 10.1111/1467-9817.00158
Madelaine, A., & Wheldall, K. (2004). Curriculum-based measurement of reading: Recent advances. International Journal of Disability, Development and Education, 51 (1), 57-82. doi: 10.1080/1034912042000182201
McCabe, P. P. (2006). Convincing students they can learn to read: Crafting self-efficacy prompts. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 79, 252-257. doi: 10.3200/TCHS.79.6.252-257
McLane, K. (2008). Fact sheet: Benefits of curriculum based measurement. Washington, DC: National Center on Student Progress Monitoring.
McMasters, A. B. (2011). Use of a tier 3 evidence-based intervention with progress monitoring, formative assessment, and student goal-setting: An evaluation of the immediate and long-term effects on student reading achievement (Unpublished doctoral dissertation). Indiana University of Pennsylvania, PA.
McMillan, J. H., Myran, S., & Workman, D. (2002). Elementary teachers' classroom assessment and grading practices. Journal of Educational Research, 95 (4), 203-213. doi: 10.1080/00220670209596593
Meece, J. L., Anderman, E. M., & Anderman, L. H. (2006). Classroom goal structure, student motivation, and academic achievement. Annual Review of Psychology, 57, 487-503. doi: 10.1146/annurev.psych.56.091103.070258
Melnick, S. A., Henk, W. A., & Marinak, B. A. (2009, October). Validation of a Reader Self-Perception Scale (RSPS2) for use in grades 7 and above. Presented at the annual meeting of the Northeastern Educational Research Association, Rocky Hill, CT.
Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., … Urdan, T. (2000). Manual for the Patterns of Adaptive Learning Scales. Ann Arbor: University of Michigan.
Murayama, K., & Elliot, A. J. (2009). The joint influence of personal achievement goals and classroom goal structures on achievement-relevant outcomes. Journal of Educational Psychology, 101 (2), 432-447. doi: 10.1037/a0014221
Murphy, R. J., & Brown, A. J. (1980). Multiple-baseline and multiple-probe designs: Practical alternatives for special education assessment and evaluation. Journal of Special Education, 14, 325-335. doi: 10.1177/002246698001400306
National Institute of Child Health and Human Development (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (NIH Publication No. 00-4754). Washington, DC: U.S. Government Printing Office.
Nes Ferrara, S. L. (2005). Reading fluency and self-efficacy: A case study. International Journal of Disability, Development and Education, 52 (3), 215-231. doi: 10.1080/10349120500252858
Nicholls, J. (1978). The development of the concepts of effort and ability, perception of academic attainment, and the understanding that difficult tasks require more ability. Child Development, 49, 800-814. doi: 10.2307/1128250
Nicholls, J. (1984). Conceptions of ability and achievement motivation. In R. Ames & C. Ames (Eds.), Research on motivation in education: Student motivation (Vol. 1, pp. 39-73). New York: Academic Press.
O'Connor, R. E., White, A., & Swanson, H. L. (2007). Repeated reading versus continuous reading: Influences on reading fluency and comprehension. Exceptional Children, 74 (1), 31-46. Retrieved from http://journals.cec.sped.org/ec/
Oka, E., & Paris, S. (1986). Patterns of motivation and reading skills in underachieving children. In S. Ceci (Ed.), Handbook of cognitive, social, and neuropsychological aspects of learning disabilities. Hillsdale, NJ: Erlbaum.
Osborn, J., Lehr, F., & Hiebert, E. (2003). Research-based practices in early reading series: A focus on fluency. Pacific Resources for Education and Learning. Retrieved from http://prel.org/services/reading--literacy/products.aspx
O'Shea, L. J., Sindelar, P. T., & O'Shea, D. (1987). The effects of repeated readings and attentional cues on the reading fluency and comprehension of learning disabled readers. Learning Disabilities Research, 2, 103-109. doi: 10.1080/10862968509547535
Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66 (4), 543-578. doi: 10.3102/00346543066004543
Parker, R. I., Vannest, K. J., & Brown, L. (2009). The improvement rate difference for single-case research. Exceptional Children, 75 (2), 135-150. Retrieved from http://www.cec.sped.org/Publications/CEC-Journals/
Pearce, L. R., & Gayle, R. (2009). Reading fluency as a predictor of reading comprehension with American Indian and white elementary students. School Psychology Review, 38 (3), 419-427. Retrieved from http://www.nasponline.org/publications/spr
Powell-Smith, K. A., Good, R. H., III, & Atkins, T. (2010). DIBELS Next Oral Reading Fluency readability study (Technical Report No. 7). Eugene, OR: Dynamic Measurement Group. Retrieved from http://dibels.org/
Quirk, M. P., & Schwanenflugel, P. J. (2004). Do supplemental remedial reading programs address the motivational issues of struggling readers? An analysis of five popular programs. Reading Research and Instruction, 43 (3), 1-19. doi: 10.1080/19388070509558408
Rasinski, T. V., & Hoffman, J. V. (2003). Oral reading in the school literacy curriculum. Reading Research Quarterly, 38 (4), 510-522. doi: 10.1598/RRQ.38.4.5
Reschly, A. L., Busch, T. W., Betts, J., Deno, S. L., & Long, J. D. (2009). Curriculum-based measurement oral reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47, 427-469. doi: 10.1016/j.jsp.2009.07.001
Riley-Tillman, T. C., & Burns, M. K. (2009). Evaluating educational interventions: Single-case design for measuring response to intervention. New York, NY: Guilford Press.
Ross, S. G. (2012). Measuring response to intervention: Comparing three effect size calculation techniques for single-case design analysis (Unpublished doctoral dissertation). North Carolina State University, NC.
Scheirer, M. A., & Kraut, R. E. (1979). Increased educational achievement via self-concept change. Review of Educational Research, 49, 131-150. doi: 10.2307/1169929
Schunk, D. H. (1983a). Ability versus effort attributional feedback: Differential effects on self-efficacy and achievement. Journal of Educational Psychology, 75 (6), 848-856. doi: 10.1037/0022-0663.75.6.848
Schunk, D. H. (1983b). Developing children's self-efficacy and skills: The roles of social comparative information and goal setting. Contemporary Educational Psychology, 8, 76-86. doi: 10.1016/0361-476X(83)90036-X
Schunk, D. H. (1984). Enhancing self-efficacy and achievement through rewards and goals: Motivational and informational effects. Journal of Educational Research, 78 (1), 29-34. Retrieved from http://www.tandfonline.com/toc/vjer20/current
Schunk, D. H. (2001). Self-regulation through goal setting (ERIC/CASS Digest). Retrieved from http://www.eric.ed.gov.proxy2.cl.msu.edu/PDFS/ED462671.pdf
Schunk, D. H. (2003). Self-efficacy for reading and writing: Influence of modeling, goal setting, and self-evaluation. Reading and Writing Quarterly: Overcoming Learning Difficulties, 19 (2), 159-172. doi: 10.1080/10573560308219
Schunk, D. H., & Rice, J. M. (1989). Learning goals and children's reading comprehension. Journal of Reading Behavior, 21 (3), 279-293. doi: 10.1080/10862968909547677
Schunk, D. H., & Rice, J. M. (1991). Learning goals and progress feedback during reading comprehension instruction. Journal of Reading Behavior, 23 (3), 351-364. doi: 10.1080/10862969109547746
Schunk, D. H., & Swartz, C. W. (1993). Goals and progress feedback: Effects on self-efficacy and writing achievement. Contemporary Educational Psychology, 18 (3), 337-354. doi: 10.1006/ceps.1993.1024
Shapiro, E. S., Keller, M. A., Lutz, J. G., Santoro, L. E., & Hintze, J. M. (2006). Curriculum-based measures and performance on state assessment and standardized tests: Reading and math performance in Pennsylvania. Journal of Psychoeducational Assessment, 24 (1), 19-35. doi: 10.1177/0734282905285237
Shell, D. F., Colvin, C., & Bruning, R. H. (1995). Self-efficacy, attribution, and outcome expectancy mechanisms in reading and writing achievement: Grade-level and achievement-level differences. Journal of Educational Psychology, 87 (3), 386-398. doi: 10.1037/0022-0663.87.3.386
Shell, D. F., Murphy, C. C., & Bruning, R. H. (1989). Self-efficacy and outcome expectancy mechanisms in reading and writing achievement. Journal of Educational Psychology, 81 (1), 91-100. doi: 10.1037/0022-0663.81.1.91
Shih, S., & Alexander, J. M. (2000). Interacting effects of goal setting and self- or other-referenced feedback on children's development of self-efficacy and cognitive skill within the Taiwanese classroom. Journal of Educational Psychology, 92 (3), 536-543. doi: 10.1037/0022-0663.92.3.536
Sins, P. H. M., van Joolingen, W. R., Savelsbergh, E. R., & van Hout-Wolters, B. (2008). Motivation and performance within a collaborative computer-based modeling task: Relations between students' achievement goal orientation, self-efficacy, cognitive processing, and achievement. Contemporary Educational Psychology, 33, 58-77. doi: 10.1016/j.cedpsych.2006.12.004
Skiba, R., Wesson, C., & Deno, S. L. (1982). The effects of training teachers in the use of formative evaluation in reading: An experimental control comparison (Research Report No. 89). Minneapolis, MN: University of Minnesota, Institute for Research on Learning Disabilities.
Slavin, R. E. (1994). Preventing early school failure: The challenge and the opportunity. In R. E. Slavin, N. L. Karweit, & B. A. Wasik (Eds.), Preventing early school failure: Research, policy, and practice (pp. 1-12). Boston: Allyn & Bacon.
Slavin, R. E., Karweit, N. L., Wasik, B. A., Madden, N. A., & Dolan, L. J. (1994). Success for All: A comprehensive approach to prevention and early intervention. In R. E. Slavin, N. L. Karweit, & B. A. Wasik (Eds.), Preventing early school failure: Research, policy, and practice (pp. 175-205). Boston: Allyn & Bacon.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Spring, C., Blunden, D., & Gatheral, M. (1981). Effect on reading comprehension of training to automaticity in word reading. Perceptual and Motor Skills, 53, 779-786. doi: 10.2466/pms.1981.53.3.779
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21 (4), 360-407. doi: 10.1598/RRQ.21.4.1
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42 (8), 795-819. doi: 10.1002/pits.20113
Stiggins, R. J., & Bridgeford, N. J. (1985). The ecology of classroom assessment. Journal of Educational Measurement, 22 (4), 271-286. doi: 10.1111/j.1745-3984.1985.tb01064.x
Stipek, D. J., & Tannatt, L. M. (1984). Children's judgments of their own and their peers' academic competence. Journal of Educational Psychology, 76 (1), 75-84. doi: 10.1037/0022-0663.76.1.75
Swain, K. D. (2005). CBM with goal setting: Impacting students' understanding of reading goals. Journal of Instructional Psychology, 32 (3), 259-265. Retrieved from http://www.projectinnovation.biz/jip.html
Taylor, B. M., Frye, B. J., & Maruyama, G. M. (1990). Time spent reading and reading growth. American Educational Research Journal, 27 (2), 351-362. doi: 10.2307/1163013
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial and Special Education, 25 (4), 252-261. doi: 10.1177/07419325040250040801
Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40 (1), 7-26. doi: 10.1016/S0022-4405(01)00092-9
Torgesen, J. K., Houston, D. D., Rissman, L. M., Decker, S. M., Roberts, G., Vaughn, S., … Lesaux, N. (2007). Academic literacy instruction for adolescents: A guidance document from the Center on Instruction. Portsmouth, NH: RMC Research Corporation, Center on Instruction.
Torgesen, J. K., Rashotte, C. A., & Alexander, A. (2001). Principles of fluency instruction in reading: Relationships with established empirical outcomes. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 333-355). Parkton, MD: York Press.
Unrau, N., & Schlackman, J. (2006). Motivation and its relationship with reading achievement in an urban middle school. Journal of Educational Research, 100, 81-101. doi: 10.3200/JOER.100.2.81-101
U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics (2011). The nation's report card: Reading 2011. Retrieved from http://nces.ed.gov/nationsreportcard/pubs/main2011/2012457.asp
U.S. Department of Education, Office of Planning, Evaluation and Policy Development (2010). A blueprint for reform: The reauthorization of the Elementary and Secondary Education Act. Washington, DC: Author.
Wanzek, J., & Vaughn, S. (2008). Response to varying amounts of time in reading intervention for students with low response to intervention. Journal of Learning Disabilities, 41, 126-142. doi: 10.1177/0022219407313426
Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68-81. doi: 10.1006/ceps.1999.1015
Wigfield, A., & Guthrie, J. T. (1995). Dimensions of children's motivations for reading: An initial study (Research Rep. No. 34). Athens, GA: National Reading Research Center.
Wigfield, A., & Guthrie, J. T. (1997). Relations of children's motivation for reading to the amount and breadth of their reading. Journal of Educational Psychology, 89, 420-432. doi: 10.1037/0022-0663.89.3.420
Wigfield, A., Guthrie, J. T., Tonks, S., & Perencevich, K. C. (2004). Children's motivation for reading: Domain specificity and instructional influences. Journal of Educational Research, 97, 299-309. doi: 10.3200/JOER.97.6.299-310
Wolters, C. A. (2004). Advancing achievement goal theory: Using goal structures and goal orientations to predict students' motivation, cognition, and achievement. Journal of Educational Psychology, 96 (2), 236-250. doi: 10.1037/0022-0663.96.2.236
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25 (1), 3-17. doi: 10.1207/s15326985ep2501_2
Zimmerman, B. J., & Martinez-Pons, M. (1992). Perceptions of efficacy and strategy use in the self-regulation of learning. In D. H. Schunk & J. Meese (Eds.), Student perceptions in the classroom: Causes and consequences. Hillsdale, NJ: Erlbaum.