CONTENT AREA READING PRACTICES IN THE DEAF EDUCATION SOCIAL STUDIES CLASSROOM: A CASE STUDY OF FOUR TEACHERS

By

Michella Maiorana-Basas

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Special Education - Doctor of Philosophy

2015

ABSTRACT

CONTENT AREA READING PRACTICES IN THE DEAF EDUCATION SOCIAL STUDIES CLASSROOM: A CASE STUDY OF FOUR TEACHERS

By

Michella Maiorana-Basas

Research clearly indicates that readers who are deaf or hard of hearing (DHH) struggle with comprehension of content area text (Moores, 2001) and lack the skills associated with comprehension of these texts (e.g., connections to prior knowledge, understanding text structure/organization, attention to text features, vocabulary development, and inference) (Bringham & Hartman, 2010; Easterbrooks & Stephenson, 2006; Howell & Luckner, 2003; Marschark & Hauser, 2008; Strassman, 1997). Additionally, theories regarding the education of marginalized students, as discussed by Gee (2008), Morrell (2004), and Oakes (1985), suggest that there is a connection between how teachers perceive their students as learners and the instructional choices they make.

The present study used instrumental case study design (Stake, 2000) to investigate the frequency, duration, and intensity of content area reading (CAR) integration in four upper-grade deaf education social studies classrooms. The study also investigated the frequency and duration of instructional approaches used by teachers of the deaf when teaching content material.

Findings from this study revealed that the two middle school teachers dedicated a larger percentage of instructional time to CAR skills than did the two participating high school teachers, and that CAR skills were primarily addressed at low levels of intensity across all four participating teachers. Background knowledge (activation of background knowledge and building of background knowledge) and content-specific vocabulary were the CAR skills most commonly integrated by all teachers. Inference and text structure were the least integrated CAR skills. Only one teacher participant integrated text structure during the study; however, it was addressed for less than 1% of instructional time and was categorized at the lowest level of intensity.

Findings also revealed that the instructional practices of three teachers favored directive instructional approaches (e.g., lecture) when delivering content material. Only one teacher used instructional practices that favored a social-constructivist approach to delivering content material (e.g., discussion). Across all four teachers, questions were categorized most frequently at the knowledge and comprehension levels. Few higher-level questions (e.g., analysis, synthesis, evaluation) were asked by teachers during instruction.

Interviews with teachers revealed that they did not view themselves as teachers of reading, and that pressures from state mandates, in addition to curriculum requirements, limited the types of activities used to teach content material. Teachers expressed concerns regarding their students' abilities to read and learn content material from text. Additionally, they expressed concerns regarding their students' abilities with expressive and receptive language, specifically academic language.
Keywords: Content Area Reading; Deaf and Hard of Hearing; Adolescent Literacy; Expository Texts; Case Study; Social Studies; Teacher Preconceptions' Influence on Instructional Choice

Copyright by
MICHELLA MAIORANA-BASAS
2015

This opus is dedicated to my mother, Susan, my father, Phillip, and my husband, JC. You all had a bigger part in this than you realize. Thank you for being a source of inspiration and support through my journey.

ACKNOWLEDGEMENTS

A wise friend once told me that it takes a village to write a dissertation (thank you, Jen, for this beautiful metaphor). This journey has been a long and arduous one, and I would have never made it to the end without the help of my "village".

First, I would like to thank Dr. Harold Johnson for recognizing the potential I did not realize I had, and for encouraging me to begin my doctoral studies. Without you, Dr. Johnson, I don't believe I would have ever had the confidence to start my Ph.D. Your continuous faith in me has been a source of strength and encouragement, and for that, I am profoundly grateful.

I would also like to thank the National Leadership Consortium in Sensory Disabilities for not only providing me with a unique and enriching doctoral experience, but also for providing the financial support needed to complete my degree. To all the members of the leadership team (and to all the fellows!!!!), thank you for your guidance and support over these past four years. Your knowledge and expertise have been integral to my success.

Sergio, Kelly, Adam, and Gabby, thank you for providing a place for me to rest my head during data collection and for all of the hugs and love on the "difficult" days. I am so thankful for the time I was able to spend with you. This dissertation would have been impossible without your love and generosity.

Daphne, Daphne, Daphne. There are not enough words in the English language (or in ASL) to convey my appreciation for your friendship, your support, and your insight in ALL aspects of this dissertation. You, Jeff, and Savvy have been, and continue to be, an incredible source of happiness, encouragement, and motivation. I can't wait to help Savvy when she is ready to write her "tation" (I still have all of the early drafts!). I love you all to the moon and back, and much of this dissertation would not have existed without you.

Dr. Jen Knight, you have been an amazing friend and mentor throughout this whole process. I will forever treasure our many long hours at Grand River Coffee, drinking beautifully decorated lattes and agonizing, together, over literature reviews, data analyses, and impending deadlines. You are an amazing human being and I am so grateful for you.

To those of you who provided financial support during the dissertation phase: "DW", "RL", Mike Palmer, Tine Moe, The Boyan Family, Dad, and others who wish to remain "anonymous", thank you for providing a way for me to pay for driving across the country and back to collect data! You are all AMAZING!!!

Dr. Jessica Trussell, my fearless colleague and research hero. Thank you for all your assistance in certain aspects of this dissertation and for always being available at the drop of a hat. So glad to have you as part of my team!

To all of the teachers who participated in this research study, thank you for your time, your openness, and your willingness to allow me to sit in your classrooms and ask you so many questions. I have gained so much from this experience working with each of you.
Your generosity and enthusiasm about this study certainly kept me going!

Dr. Claudia Pagliaro, my dissertation chair and "best advisor ever", thank you for your guidance and patience throughout my entire doctoral program. You have been so much more than just a mentor. You are my role model and my friend. I could have never, ever, ever, ever, gotten this far without you. Thank you for allowing me to discover my own research path, for supporting and encouraging me through the whole process, for being tough on me when I needed it, and for giving me time and space when things got too overwhelming. Thank you for your incredibly detailed and thoughtful feedback, even after (what seemed like) a million revisions. You are amazing, and I can only hope that one day, I will be the same kind of mentor for one of my students.

Last, but certainly not least, I would like to extend my deepest gratitude to my husband, JC, for his unremitting support, encouragement, and love during this entire process. You have been my biggest cheerleader and greatest source of motivation. Thank you for your understanding and support during my late night writing marathons, times of panic, breakdowns over data, and for listening to me read and re-read each version of each chapter over, and over, and over… There are few people in this world who have a partner in life like you. I am the luckiest girl in the world.

So, here I am. I have finally made it to the end. To every teacher, professor, colleague, friend, and student I have ever had the good fortune of knowing, you all helped shape me into the person I am today. Thank you. Now, I begin the next part of my journey. I hope I serve you all, and the field of Deaf Education, well. Much love and aloha.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
KEY TO ABBREVIATIONS
CHAPTER 1
INTRODUCTION
    Problem Statement
    Rationale
    Significance

CHAPTER 2
REVIEW OF THE LITERATURE
    The Skills of Content Area Reading
        Background knowledge
            Activation of background knowledge
            Building of background knowledge
        Text structure
        Text features
        Content-specific vocabulary
        Inference
    Content Area Reading Skills Are Interconnected
    Theoretical Framework
        Social-constructivist theories of learning support higher levels of learning
        Teacher preconceptions influence the way they instruct
        Knowing how students think and learn is necessary for designing effective instruction
    Social Studies and Content Area Reading

CHAPTER 3
METHODS
    Study Design
        Participant recruitment
        Participant selection criteria
    Teacher Participants
    Teacher A
        The school
        Personal and professional background
        Participating class
            Audiological information
            Language use and preferences
            Reading levels
            QRI-4
        Observed instruction
    Teacher B
        The school
        Personal and professional background
        Participating class
            Audiological information
            Language use and preferences
            Reading levels
            QRI-4
        Observed instruction
    Teacher C
        The school
        Personal and professional background
        Participating class
            Audiological information
            Language use and preferences
            Reading levels
            QRI-4
        Observed instruction
    Teacher D
        The school
        Personal and professional background
        Participating class
            Audiological information
            Language use and preferences
            Reading levels
            QRI-4
        Observed instruction
    Data Collection
        QRI-4 procedures
            Vocabulary assessment procedures
            Reading comprehension assessment procedures
        Class observation procedures
        Response-to-instruction meetings
        Teacher interviews
    Software and Other Technology Used to Manage Data
        Wondershare
        Camtasia
        Screencast
    Data Coding and Analysis
        Video recorded data
        Coding system
        Inter-rater agreement measures
            Procedures for inter-rater agreement
            Inter-rater agreement scores
            Use of inter-rater agreement in qualitative research
            Minimum standards for inter-rater agreement in qualitative research

CHAPTER 4
RESULTS
    Overall Comparison Results
        Instructional approaches and content area reading integration of all teachers
        Instructional approaches duration
        Instructional approaches frequency
        Duration of content area reading integration
        Duration of background knowledge
        Frequency and intensity of background knowledge
        Duration and intensity of background knowledge
        Duration and intensity of text structure
        Duration of text features
        Frequency and intensity of text features
        Duration and intensity of text features
        Duration of content-specific vocabulary
        Frequency and intensity of content-specific vocabulary
        Duration and intensity of content-specific vocabulary
        Duration of inference
        Frequency and intensity of inference
        Duration and intensity of inference
        Frequency and complexity of questioning
        Summary of overall comparison results
    Within Teacher Results
    Teacher A
        Instructional approaches
        Integration of content area reading (CAR) skills
            Background knowledge
            Text structure
            Text features
            Content-specific vocabulary
            Inference
            Complexity of questioning
        Summary
    Teacher B
        Instructional approaches
        Integration of content area reading (CAR) skills
            Background knowledge
            Text structure
            Text features
            Content-specific vocabulary
            Inference
            Complexity of questioning
        Summary
    Teacher C
        Instructional approaches
        Integration of content area reading (CAR) skills
            Background knowledge
            Text structure
            Text features
            Content-specific vocabulary
            Inference
            Complexity of questioning
        Summary
    Teacher D
        Instructional approaches
        Integration of content area reading (CAR) skills
            Background knowledge
            Text structure
            Text features
            Content-specific vocabulary
            Inference
            Complexity of questioning
        Summary
    Across Teacher Results
        Summary of across teacher results
    Insights From Semi-Structured Interviews
        Instructional preferences
        Preconceptions/views about students as readers and learners
        Reading versus content knowledge
        Frustrations about curriculum and policy
        Frustrations about support at the district, school, and/or family levels

CHAPTER 5
INTERPRETATIONS, CONCLUSIONS, AND RECOMMENDATIONS
    Instructional Approaches
        In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction?
        What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom?
        Student independent work/activities
    Content Area Reading Integration
        What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom?
        Co-occurrences
    Teacher Preconceptions
        What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies?
        Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom?
        Is there a relationship between teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom?
    Suggestions for Teacher Preparation Programs and Professional Development
        Preparation in instructional practices and CAR
        Preparation in language and communication
        Preparation that fosters the belief that all teachers are teachers of reading
    Limitations
        Researcher bias
        Researcher presence
        Sample
        Observations
        Data analysis
    Directions for Future Research
    Conclusions

APPENDICES
    APPENDIX A
    APPENDIX B
    APPENDIX C
    APPENDIX D
    APPENDIX E
    APPENDIX F
    APPENDIX G
    APPENDIX H
    APPENDIX I
    APPENDIX J

REFERENCES

LIST OF TABLES

Table 1: Demographic Data of Teachers
Table 2: Duration and Intensity (Expressed in Total Minutes Across All Observations) of Activation and Building of Background Knowledge
Table 3: Duration and Intensity (Expressed in Total Minutes Across All Observations) of Text Structure
Table 4: Duration and Intensity (Expressed in Total Minutes Across All Observations) of Text Features
Table 5: Duration and Intensity (Expressed in Total Minutes Across All Observations) of Content-Specific Vocabulary
Table 6: Duration and Intensity (Expressed in Total Minutes Across All Observations) of Inference
Table 7: Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher A
Table 8: Duration and Intensity (Expressed in Total Minutes of Instruction Across All Analyzed Sessions*) of Instructional Behaviors of Teacher B
Table 9: Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher C
Table 10: Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher D
Table 11: Co-Occurrences of CAR Skills
Table 12: Questioning Codes Contextualized Within Bloom's Taxonomy
Table 13: Example of Inter-rater Agreement Documentation Form
Table 14: Overall Results of Duration of Instructional Practices
Table 15: Overall Results of Duration of CAR Integration

LIST OF FIGURES

Figure 1: Theoretical framework
Figure 2: Visual approximation of code applications determined to be in agreement via visual inspection in Dedoose
Figure 3: Visual approximation of code applications determined to be in partial agreement via visual inspection in Dedoose
Figure 4: Visual approximation of code applications determined not to be in agreement via visual inspection in Dedoose
Figure 5: Total duration of instructional approaches (expressed as percentage of instructional time)
Figure 6: Frequency of instructional approaches
Figure 7: Total duration of CAR integration (expressed as percentage of total instructional time)
Figure 8: Total duration of activation and building of background knowledge (expressed as percentage of total instructional time)
Figure 9: Frequency and intensity of activation of background knowledge and building of background knowledge
Figure 10: Total duration of text features (expressed as percentage of total instructional time)
Figure 11: Frequency and intensity of text features
Figure 12: Total duration of content-specific vocabulary (expressed as percentage of total instructional time)
Figure 13: Frequency and intensity of content-specific vocabulary
Figure 14: Total duration of inference (expressed as percentage of total instructional time)
Figure 15: Frequency and intensity of inference
Figure 16: Frequency and complexity of questioning

KEY TO ABBREVIATIONS

ABGK: Activation of Background Knowledge
AEBPD: ASL/English Bilingual Professional Development
ALD: Assistive Listening Device
ASL: American Sign Language
BBGK: Building of Background Knowledge
CAR: Content Area Reading
CI: Cochlear Implant
CSV: Content-Specific Vocabulary
DHH: Deaf/Hard of Hearing
DIR: Teacher Giving Directions/Instructions
HA: Hearing Aid
HO: Hands-On Learning
IND: Student Independent Work/Activities
INF: Inference
LVL: Level
NCLB: No Child Left Behind
NI: Non-Instructional
O: Other (in reference to instructional behaviors)
P2P: Peer-to-Peer
PSE: Pidgin Signed English
SimCom: Simultaneous Communication
TC: Total Communication
TDL: Teacher-Directed Learning
TF: Text Features
TFL: Teacher-Facilitated Learning
TS: Text Structure
ZPD: Zone of Proximal Development

CHAPTER 1

INTRODUCTION

Advanced literacy skills are an important component of achieving personal and professional success (Vacca & Vacca, 2010). Failure to develop these skills may result in higher rates of high school dropout, unemployment, underemployment, limited access to higher education, failure to obtain a high school diploma, and difficulty in managing one's personal and professional life (Biancarosa & Snow, 2006; Peterson, Caverly, Nicholson, O'Neal, & Cusenbary, 2000). While advanced literacy development is an obstacle for many students in general education, it is an even greater obstacle for students who are deaf or hard of hearing (DHH). Currently, only 5% of high school seniors who are DHH demonstrate literacy achievement rates at or above those of their hearing counterparts (Kelly & Barac-Cikoja, 2007), and over half struggle to develop reading comprehension levels commensurate with those of a typical 4th grader (Easterbrooks & Beal-Alvarez, 2012; Traxler, 2000).

One piece of the advanced literacy puzzle is the ability to read and comprehend content area texts (i.e., content area reading). Content area reading (CAR) skills include the skills and strategies needed to mediate and comprehend content area texts, which include: textbooks, leveled readers specifically related to content topics, content-related magazines and articles, content-related graphic novels, and content-related learning materials presented through presentation software (e.g., Microsoft PowerPoint) or online.

It is important to understand that CAR is not the same as narrative text reading (Vacca & Vacca, 2010). First, content area texts differ from narrative texts in their structure and accessibility in relation to concepts, vocabulary, and discourse (Sanacore & Palumbo, 2009).
This means that strategies commonly used to help students mediate and comprehend narrative texts, such as phonemic awareness, phonics, fluency, vocabulary, and text comprehension (National Institute of Child Health and Human Development, 2000), may not be enough to support students in mediating and comprehending content area texts. Second, CAR requires additional skills (Vacca & Vacca, 2010). These additional skills include: background knowledge (activation of background knowledge and building of background knowledge), text structure, text features, content-specific vocabulary, and inference. These skills are important components for supporting readers in making sense of content texts, aiding in the transition from "learning to read" to "reading to learn" (Chall, 1996). Third, once a text requires the reader to "read to learn" (Chall, 1996), as is the case with content area text, reading and comprehending becomes a more complex task. For example, when reading a content area text, a reader must activate background knowledge in order to make connections between what they know and what they are learning while reading (building background knowledge). The reader must also identify the structure of the text in order to understand how information is organized within and across the text, pay attention to and interpret text features that support information presented in the text, identify and understand new and/or specialized vocabulary presented in the text, and make inferences, specifically those that help the reader "fill in the gaps" (e.g., elaborative inferences) and support the reader in making connections across a text (e.g., global inferences).

Vacca and Vacca (2010) point out that even readers who are proficient with narrative texts may struggle with complex texts such as those used in the content area classroom. Proficient readers, while not burdened with the task of decoding when reading to learn from text, may have a strong foundation for reading and decoding but may not have the strategies necessary for successfully navigating content area texts. For readers who are still struggling with basic strategies such as decoding and comprehending, these types of texts become even more difficult (Faggella-Luby, Graner, Deshler, & Drew, 2012; Vacca & Vacca, 2010). As such, it is recommended that teachers build specific strategy instruction into content area instruction to support their students' comprehension of content area texts (Faggella-Luby, Graner, Deshler, & Drew, 2012; Fisher & Frey, 2014). This is especially critical for students who are DHH, as texts used in the content area classroom are often too difficult for these students to navigate (Moores, 2001).

Many students who are DHH may require additional instructional strategies when approaching content area texts (Moores, 2001). For example, students who are DHH may not have enough proficiency with the English language to fully grasp complex texts (Marschark, Lang, & Albertini, 2002; Moores, 2001) such as those used in the content area classroom. These students often approach content area and expository texts with limited content-specific vocabulary (and English vocabulary in general) and limited knowledge of English grammar and text structures (Marschark, Lang, & Albertini, 2002; Moores, 2001; Strassman, 1997).
Some of the ways that teachers of the deaf have tried to circumvent these additional obstacles faced by their students are by avoiding the use of textbooks altogether (Moores, 2001) or by re-writing content area text into simplified formats. Easterbrooks and Stephenson (2006) point out that while there are identified "best practices" for supporting advanced literacy with students who are DHH, there are none that support the re-writing of texts as a strategy to deliver content-related information.

Problem Statement

Despite the recognition of CAR as a component of advanced literacy development and the understanding of the differences between narrative text reading and CAR, recent studies have revealed that CAR skills are not being incorporated into content area instruction (Chall, 1996; Neufeld, 2005; Shanahan & Shanahan, 2008). Many content area teachers, including those in the deaf education classroom (Moores, 2001), avoid teaching CAR skills, which may be a result of the lack of research that addresses specific strategies necessary for teaching these skills (Easterbrooks & Stephenson, 2006; Sanacore & Palumbo, 2009). Further, previous research concerned with the reading comprehension abilities of students who are DHH has focused primarily on narrative texts (e.g., Ewoldt, Israelite, & Dodds, 1992; Jackson, Paul, & Smith, 1997; Schirmer, 2003; Strassman, 1997; among others), leaving out an entire genre of texts that adolescent students encounter on a daily basis.

Rationale

Few studies address CAR in the deaf education classroom, and even fewer address the needs of adolescent readers who are DHH. Studies that do address CAR with readers who are DHH (Boyd & George, 1973; Bringham & Hartman, 2010; Diebold & Waldron, 1998; Howell & Luckner, 2003; Hyde, Zevenbergen, & Power, 2003) do not follow a particular line of inquiry, making it difficult to come to valid, evidence-based conclusions regarding best practices that support the advanced literacy development needed in the content area classroom. Easterbrooks and Stephenson (2006) point out that few of the commonly used and accepted strategies for literacy development used in the deaf education classroom have a solid research base, and that more investigation is needed in this area. They go on to mention that while there is not an evidence base to show that many of the identified "best practices" are beneficial to students who are DHH, there is also no evidence demonstrating the contrary (Easterbrooks & Stephenson, 2006). This finding by Easterbrooks and Stephenson (2006), along with the scant research that addresses CAR in the deaf education classroom, supports the need for investigation into evidence-based strategies that foster advanced literacy development.

Currently, there are no studies that adequately describe CAR integration in the deaf education classroom. As such, we do not know how teachers in the deaf education classroom integrate CAR skills during instruction, if at all, and at what levels of intensity or complexity. Before effective interventions can be examined, it is necessary to first identify what teachers are currently doing in order to establish a starting point for identifying potential instructional strategies that may support CAR development during content instruction.
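To make the descriptive measures used throughout this study concrete, the minimal sketch below shows one way that how often, for how long, and at what intensity a CAR skill appears could be tallied once classroom observations have been coded. It is illustrative only: the segments, minute values, and intensity levels are hypothetical, and the code labels are simply borrowed from this study's abbreviation key (e.g., ABGK, CSV). The actual coding and analysis in this study were carried out on video-recorded observations in Dedoose, not with a script like this.

```python
from collections import defaultdict

# Hypothetical coded observation segments: (CAR skill code, intensity level, start_min, end_min).
# Codes mirror the study's abbreviations (ABGK = activation of background knowledge,
# CSV = content-specific vocabulary); values are made up for illustration.
segments = [
    ("ABGK", 1, 0.0, 2.5),
    ("CSV", 2, 2.5, 6.0),
    ("ABGK", 1, 14.0, 15.0),
]
total_instructional_minutes = 50.0  # total observed instructional time (hypothetical)

duration = defaultdict(float)   # minutes per (skill, intensity level)
frequency = defaultdict(int)    # number of coded occurrences per (skill, intensity level)

for code, level, start, end in segments:
    duration[(code, level)] += end - start
    frequency[(code, level)] += 1

for (code, level), minutes in sorted(duration.items()):
    pct = 100.0 * minutes / total_instructional_minutes
    print(f"{code} (intensity {level}): {minutes:.1f} min "
          f"({pct:.1f}% of instructional time), {frequency[(code, level)]} occurrence(s)")
```

Duration here is expressed as a percentage of total instructional time, frequency as a count of coded occurrences, and intensity as the level assigned to each occurrence, paralleling how the results are reported in Chapter 4.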
Significance

To address the lack of research (both qualitative and quantitative) regarding CAR in the deaf education classroom, the present study describes the instructional practices and CAR integration of four upper-grade social studies teachers in self-contained deaf education classrooms, specifically the frequency, duration, and intensity of CAR integration over the course of an instructional unit. An understanding of how teachers of the deaf integrate these skills during instruction will inform the field about areas of strength and areas of need as they relate to CAR.

This study is the first to comprehensively measure instructional approaches, CAR integration, and rigor (intensity of instruction) across an entire instructional unit in the upper-grade, content area deaf education classroom. Findings from this study will provide a starting point for researchers in investigating the effectiveness of instructional and CAR strategies commonly accepted by teachers of the deaf, as recommended by Easterbrooks and Stephenson (2006). In addition to the instructional and CAR practices used by teachers of the deaf, this study investigates teachers' preconceptions about their students as readers and learners and how those preconceptions influence their instructional practices and choices. As such, this study will also provide a glimpse into the related attitudes and beliefs held by teachers of the deaf, and will open the conversation for understanding how preconceptions about unique learners (such as those who are DHH) influence instruction.

The following research questions guided the present study:

• In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction?
    o What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom?
    o What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom?
• What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies?
    o Is there a relationship between these teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom?
    o Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom?

CHAPTER 2

REVIEW OF THE LITERATURE

Content area reading (CAR) skills are the skills that help students understand and remember what they have read in content areas such as mathematics, science, social studies, etc. (Howell & Luckner, 2003; Shanahan, 2012).
Examples of CAR skills include: 1) activating and building background knowledge to make meaningful connections to text; 2) understanding text structures/text organization typically used in expository and content area texts (e.g., problem/solution, description, cause/effect, categorizing, sequence, and comparison); 3) recognizing and understanding text features (e.g., bold words, titles, captions, headings, charts, graphs, etc.); 4) understanding specialized vocabulary (e.g., content/topic-specific vocabulary); and 5) inferring (e.g., "reading between the lines") as a way to make sense of and understand what is meant by text that is not written explicitly or in a straightforward manner.

These five CAR skills parallel the areas in which students who are DHH struggle with regard to literacy achievement. Findings indicate that many of these students struggle with: 1) making connections to background knowledge (Andrews & Mason, 1991; Bringham & Hartman, 2010; Boyd & George, 1971; Dymock & Nicholson, 2010; Easterbrooks & Stephenson, 2006; Schirmer, 1997); 2) understanding text structure/organization (Bringham & Hartman, 2010; Diebold & Waldron, 1998; Easterbrooks & Stephenson, 2006; Marschark & Hauser, 2008; Negin, 1987; Schirmer, 1997); 3) paying attention to and appropriately using text features (Diebold & Waldron, 1998; Dymock & Nicholson, 2010; Easterbrooks & Stephenson, 2006; Howell & Luckner, 2003); 4) knowing and using specialized vocabulary (Strassman, 1997); and 5) making inferences (Marschark & Hauser, 2008). Additionally, many readers who are DHH also struggle (both receptively and expressively) with the English language when reading text-based materials, writing, and speaking (Boyd & George, 1971; Hyde, Zevenbergen, & Power, 2003; Marschark et al., 2002; Serrano Pau, 1995; Skutnabb-Kangas, 1990), further obstructing comprehension of content area texts.

In this chapter, each CAR skill will be defined and summarized based on the current literature. This chapter will also include a critical review of studies that address these five CAR skills and their relationship to the comprehension of content area texts by students who are DHH, leading to the presentation of research questions for the current study.

The Skills of Content Area Reading

The skills of content area reading (CAR) are activating and building background knowledge, understanding text structures, recognizing and understanding text features, understanding content-specific vocabulary, and inferring.

Background knowledge. Background knowledge (also referred to as prior knowledge) is essentially what an individual knows about any given topic (Dochy, 1994; Jonassen & Grabowski, 1993) and is an important component of reading and understanding texts. Making connections to background knowledge helps readers with comprehension (Anderson & Pearson, 1984; Dymock & Nicholson, 2010; Fisher & Frey, 2009; Pressley, 2002) and supports readers in learning new information by helping them make connections to what they know and build on what they know (Kintsch, 2004; Fisher & Frey, 2009; Stahl, 2008). Thus, there are two skills under the umbrella of background knowledge: activating background knowledge and building background knowledge. In this section, explanations of these skills are provided, along with relevant studies that address how activation of background knowledge and building of background knowledge affect comprehension of content area texts by readers who are DHH.

Activation of background knowledge.
Activation of background knowledge supports reading comprehension by providing a starting point for students to approach text (Anderson & Pearson, 1984; Dymock & Nicholson, 2010; Fisher & Frey, 2009; Pressley, 2002). Typically done at the beginning of a lesson or before a new topic or sub-topic is introduced, activation of background knowledge can be used to capture the attention of students and prepare them for learning new information (Fisher & Frey, 2009). Fisher and Frey (2009) emphasize that comprehension of content area text is especially dependent on the activation of background knowledge, as it provides a tool for students to navigate information in a way that is personally meaningful to them while also supporting comprehension. Some activities for activating background knowledge include brainstorming, using images, having discussions about what students know, or using graphic organizers to determine what students know, what they do not know, and what gaps are present in their understanding of a particular topic (Fisher & Frey, 2009). Key studies identified by Strangman and Hall (2004), such as Davis and Winek (1989), Squire (1983), and Weisberg (1988), suggest that approaching texts with a lack of background knowledge, or the inability to activate background knowledge, may have a direct impact on a student's ability to access the general curriculum.

Activation of background knowledge is essential for mediating and comprehending complex texts (Afflerbach, 1990; Bartlett, 1932; Fisher & Frey, 2008; Gee, 2008; Rumelhart, 1980; Strangman & Hall, 2004; Vygotsky, 1978). In order to understand complex texts (such as those used in the content areas), readers must develop ways of knowing and interpreting information (Fisher & Frey, 2009; Gee, 2008; Strangman & Hall, 2004; Vygotsky, 1978). These ways of knowing are often referred to as "schemata" (Afflerbach, 1990; Bartlett, 1932; Rumelhart, 1980; Vygotsky, 1978). Essentially, schemata are units of understanding (Afflerbach, 1990; Bartlett, 1932; Rumelhart, 1980). As an individual interacts with his/her environment, each experience becomes part of a larger framework of understanding, which helps the individual understand and interpret the world around them (Afflerbach, 1990; Bartlett, 1932; Gee, 2008; Rumelhart, 1980; Vygotsky, 1978). Activation of background knowledge is one way of guiding students in the recall of their "frameworks of understanding" (i.e., schemata) (Afflerbach, 1990; Bartlett, 1932; Fisher & Frey, 2008; Gee, 2008; Rumelhart, 1980; Strangman & Hall, 2004; Vygotsky, 1978).

Andrews and Mason (1991) described how readers who are DHH relied on activation of background knowledge when determining replacements for words during a CLOZE activity. In their study, Andrews and Mason (1991) compared readers who were DHH to readers who were hearing, specifically in their ability to determine missing words and phrases. Participants in the study were limited to white males, and linguistic preferences among the group ranged from users of manual languages/communication systems (e.g., American Sign Language, Manually Coded English) to users of a combination of communication methods (e.g., spoken English with sign support, speech with lip reading, etc.). They found that during the CLOZE tasks, readers who were DHH relied more on activation of background knowledge than on text-dependent strategies (e.g., using context clues, using the title, etc.)
when compared to hearing readers with similar reading levels.

Although activation of background knowledge was a strong predictor of success in the CLOZE activity for both readers who were DHH and readers who were hearing, the participants who were DHH did far worse on matching the exact text and had more than triple the number of errors on the CLOZE passages than did the readers who were hearing. Andrews and Mason (1991) concluded that the disparity between groups was due to a number of factors: participants who were DHH demonstrated less background knowledge on the topics presented in the CLOZE passages than did the participants who were hearing; participants who were DHH did not utilize text-dependent strategies (e.g., context clues, passage titles, etc.) as often as participants who were hearing; and participants who were DHH did not have the same amount of vocabulary knowledge as the participants who were hearing. Based on their findings, Andrews and Mason (1991) suggested that instructional strategies include skills for using context clues and understanding vocabulary in addition to activating background knowledge when reading texts.

Andrews and Mason (1991) did not investigate specific strategies for improving performance on the CLOZE activities, but noted that mediation of passages through dialogue and scaffolding was helpful for some of the participants who were DHH. For example, in their observations, they noted that the experimenter often acted as an "explainer of English" when working with the participants who were DHH, and that the experimenters spent a lot of time explaining vocabulary, phrases, and idiomatic language to the participants who were DHH. Based on their observations, Andrews and Mason (1991) concluded that strategies that include continuous, interactive dialogue with students who are DHH may be an important component of helping these students in the comprehension of complex texts. Although these findings from Andrews and Mason's (1991) study provide support for activation of background knowledge as one important strategy for comprehension of complex texts, more information is needed on what specific strategies teachers in the deaf education classroom should use to activate background knowledge during instruction.

Building of background knowledge. Building of background knowledge is inherently connected to activation of background knowledge. Activation of background knowledge is the recall of information that is already known, whereas building of background knowledge helps to fill in the gaps and strengthens a person's schema about a specific topic. For example, when teachers build background knowledge, they must first start with what the students know (through the activation of background knowledge) and then determine what information is needed to build students' schema about that topic (Marzano, 2004; Fisher & Frey, 2009). This could be accomplished through a brainstorming activity, such as think-pair-share, or through questioning. Once a teacher is able to determine what information students are missing regarding a particular topic, s/he can then begin to fill in the gaps of knowledge through the building of background knowledge.

There are a variety of ways in which teachers can build on background knowledge. Marzano (2004) identifies two distinct methods of building background knowledge: direct methods and indirect methods.
Direct methods of building background knowledge are hands-on and/or project-based learning experiences and include socially mediated activities such as going on field trips, participating in field experiences, doing science labs, taking part in a simulation, and/or role-play. Indirect methods of building background knowledge are more cognitive in nature, and also include socially mediated activities such as group work, discussions, classroom debates on critical issues, and/or modeling by the teacher (Marzano, 2004; Fisher & Frey, 2009). Both direct and indirect methods of building background knowledge rely on social-constructivist instructional practices and help students build schemata through social interaction. It is suggested that these methods include activities that are meaningful and authentic (Marzano, 2004; Fisher & Frey, 2009; Vygotsky, 1978).

Bringham and Hartman (2010) observed that building background knowledge by making comparisons between historical events and students' experiences in a 6th-7th grade deaf education social studies classroom helped students make sense of content area text. The authors concluded that comparing historical events to students' life experiences helped students understand cause and effect relationships when engaging with the text as "active" readers. In this action research study, the researchers and classroom teacher used timelines, prediction, activation of background knowledge, and indirect methods of building background knowledge such as modeling, group learning, discussion, and presentation of content information within multiple contexts to assist students in the mediation and comprehension of their social studies textbooks. Bringham and Hartman (2010) found that using these skills to build students' background knowledge about social studies concepts helped them understand cause and effect relationships in history, make inferences about historical events, and retain new information.

Fisher and Frey (2009) argue that activation of background knowledge and building of background knowledge are the two most important things teachers can do to improve student learning and understanding, and suggest a direct relationship between the skills of background knowledge (activation of background knowledge and building of background knowledge) and reading comprehension (Marzano, 2004; Marmolejo-Ramos, de Juan, Gygax, Madden, & Roa, 2009; Fisher & Frey, 2009). Marmolejo-Ramos and colleagues (2009) state that the printed word acts only as a scaffold for the meaning of text, and that actual comprehension of the printed word requires background knowledge. They go on to report that without background knowledge, printed words would have virtually no meaning, as the reader would not be able to form a mental representation of what he or she is reading about, thereby compromising comprehension (Marmolejo-Ramos, et al., 2009). The importance of activating and building background knowledge is especially true for readers who are DHH (Schirmer & Williams, 2011). Many readers who are DHH have limited background knowledge (Trezek, Wang, & Paul, 2011). One reason could be language delays, as discussed by Marschark, Lang, and Albertini (2002). Another possible reason is the lack of opportunities for incidental learning (Furth, 1996; Marschark & Hauser, 2008; Nunes & Moreno, 2002; Rapin, 1986).
For example, students who are DHH may lack or have limited access to a variety of information sources, such as overhearing conversations and/or listening to information presented over the television, radio, or other auditory channels. This dearth of access to auditory information sources might cause what Furth (1996) describes as "information deprivation." Therefore, concepts often learned through incidental learning may need to be explicitly taught (Nunes & Moreno, 1998) to DHH learners, thereby establishing or strengthening background knowledge on which further learning can be built.

Still, while Andrews and Mason (1991) and Bringham and Hartman (2010) address activation of background knowledge and building of background knowledge with students who are DHH, more information is needed regarding specific, evidence-based strategies for mediation of textual information, and regarding how language and socially mediated instruction (e.g., discussion, cooperative learning, role play, field experiences, and inquiry-based instructional models) support the skills of background knowledge in the deaf education content area classroom, as these were not measured in their studies.

Text structure. Understanding text structure is defined in the literature as an awareness of how texts are organized for specific and meaningful purposes (Meyer & Rey, 2011; Vacca & Vacca, 2010). When readers understand how texts are structured, they are able to think more critically about what they are reading and make stronger connections between the concepts presented in the text (Akhondi, Malayeri, & Samad, 2011; Vacca & Vacca, 2010). Meyer and Rey (2011) identified six categories of text structure found specifically in content area texts: comparison, problem/solution, causation, sequence, collection, and description. Several studies have addressed text structure of narrative texts with students who are DHH (Akamatsu, 1988; Donin, Doehring, & Browns, 1991; Luetke-Stahlman, Griffiths, & Montgomery, 1998; Schirmer, 1993; Schirmer & Bond, 1990), but only two studies have addressed how understanding text structures affects comprehension of expository and content area texts (Bringham & Hartman, 2010; Negin, 1987).

Although not specifically addressing one of the text structures described by Meyer and Rey (2011), Negin (1987) investigated text structure at the sentence level. In his study, sentences were segmented into larger units of meaning (e.g., subject and predicate) and presented to twenty students who were DHH and educated using a total communication philosophy. Chronological ages of students were not reported, but participants were described as having instructional reading levels equivalent to the second grade level. Prior to participating in the study, students were pre-tested for vocabulary knowledge and were required to demonstrate a proficiency level of at least 95% with regard to the specific vocabulary presented in the passages used to evaluate them. In Negin's study, student participants were asked to read one narrative and one expository reading passage, both approximately 500 words in length and developed by the researcher. Ten of the student participants were randomly assigned to a control group (no text segmentation), and ten were randomly assigned to an experimental group (with text segmentation). Comprehension was measured by a multiple-choice test that included eight literal and two inferential questions.
Negin's (1987) findings indicated that segmenting text significantly improved reading comprehension for both narrative and expository reading passages (p < .01). Negin (1987) reported that segmenting text at the sentence level provided meaningful visual cues for student participants (with regard to English text structures) and chunked information in a way that helped focus the reader's attention on phrases rather than words, aiding comprehension. Negin (1987) mentioned that, in addition to segmenting texts, it was necessary to provide some instructional scaffolds. For example, before providing reading passages to students, it was recommended that teachers introduce key vocabulary and prompt students to mentally recognize phrases when reading. This suggests that although segmenting text into larger units of meaning is beneficial in supporting reading comprehension, it may not be the sole contributing factor to success; other elements (e.g., vocabulary instruction) must also be considered when implementing this strategy. Additionally, although Negin's (1987) study does provide valuable information regarding the use of this strategy, the study only addressed texts at the second grade level. More investigation is needed to determine the efficacy of this particular intervention with more complex texts, such as content area texts, and with older students, such as those in the upper grades.

An action research study by Bringham and Hartman (2010) investigated four strategies for supporting the understanding of social studies concepts with five middle school students who were DHH. The four strategies were: using a timeline, teaching the meaning of prediction in a "constructivist" manner, repeatedly reviewing the concept of prediction in multiple contexts, and having students compare situations in their own lives with those in the historical events they were learning about. Bringham and Hartman (2010) found that using a timeline in combination with comparing historical events to students' life experiences was beneficial for helping students understand cause-and-effect relationships (a text structure identified by Meyer and Rey, 2011) and helped students engage with the text as "active" readers. Although the study did address one of the text structures described by Meyer and Rey (2011) (i.e., cause and effect), Bringham and Hartman (2010) did not measure the impact of the prediction strategy on reading comprehension. Additionally, the prediction strategy was implemented by a student teacher (under the supervision of a mentor teacher), and findings were based on her anecdotal observations of student progress, not on precise measures. The perspectives and insights of student teachers can be valuable; however, student teachers still require mentorship and guidance in understanding and executing instructional practices as well as in assessing and monitoring student progress. Bringham and Hartman (2010) did not report measures of fidelity of implementation or fidelity in documentation of data, a limitation of their study that could have had an impact on findings. In addition to the prediction strategy, other strategies were used in the study, including facilitative instruction (discussion), activation of background knowledge (having students connect concepts to their own lives and conducting daily reviews), and use of text features (timelines).
As such, it is difficult to determine which of these strategies had the greatest contribution to the overall improvement in student performance as observed by the student teacher. Although the strategy of prediction may have some promise in helping students who are DHH better understand text structures, more investigation is needed to determine the impact of this strategy on reading comprehension.

Text features. Text features assist readers in understanding expository/content area texts by bringing attention to important information and details through the use of a wide range of visual cues (Bluestein, 2010; Kelley & Clausen-Grace, 2010; Moss, 2005). Text features also help students navigate content area texts and provide supporting or additional information that aids in comprehension (Kelley & Clausen-Grace, 2010; Moss, 2005). Some examples of text features include: table of contents; index; glossary; bold words; italicized words; bullet points; titles; headings; graphs; charts; maps; labeled diagrams; and timelines (Bluestein, 2010; Kelley & Clausen-Grace, 2010; Moss, 2005; Nolan, 1991). Teaching students how to use and interpret these features is an important component of developing CAR proficiency (Bluestein, 2010; Kelley & Clausen-Grace, 2010; Moss, 2005). Two studies were identified that addressed how text features specifically support comprehension of content area texts among readers who are DHH (Diebold & Waldron, 1998; Howell & Luckner, 2003).

Diebold and Waldron (1998) investigated the impact of various text formats on the reading comprehension of a 6th grade level science textbook with adolescents and adults (ranging in age from 12 years to 22 years) identified as pre-lingually deaf. Participants in the study had reading levels ranging from a 2.95 grade level (reading at the level of a second grade student in their 9th month of schooling) to a 3.94 grade level (reading at the level of a third grade student in their 9th month of schooling). Participants' understanding of science concepts presented in text was compared across several conditions, which were categorized by the researchers as: standard text format, simplified text format, simplified text and labeled diagram format, and labeled diagram format. Diebold and Waldron (1998) measured participants' gain scores (improvement between pre-test and post-test conditions) and found that including visual-spatial information through the use of text features (e.g., pictorial representations of concepts) in combination with English text (i.e., the simplified text and labeled diagram format) supported comprehension more effectively than text alone (p = .05). Diebold and Waldron (1998) emphasized the importance of how information is presented textually, pointing out that certain text features may play an important role in how readers who are DHH comprehend complex texts. Diebold and Waldron (1998) concluded that using texts that rely heavily on complex English structures with limited visual supports is not beneficial to learners who are DHH, and that instructional materials used in the deaf education classroom should be designed to meet the specific linguistic and reading comprehension (processing) needs of those students.
Although these findings are helpful in understanding the learning needs of students who are DHH, simplification and/or modification of texts is not considered an acceptable educational practice for developing reading comprehension among students who are DHH (Easterbrooks & Stephenson, 2006; Moores, 2001; Schirmer, 1997). It is not realistic to assume that all texts can be designed in the manner suggested by Diebold and Waldron (1998), and students who are DHH require strategies for understanding and using the text features commonly found in unmodified content area texts (Easterbrooks & Stephenson, 2006; Moores, 2001; Schirmer, 1997).

A second research study on text features was also identified. Howell and Luckner (2003) conducted an action research study with a student who was DHH, enrolled in an 8th grade science class, who had a reading level of 3.4 (reading at the level of a third grade student in their 4th month of schooling). It was reported that the student's reading level interfered with her ability to navigate the general education curriculum, resulting in her being placed in a resource room. To help the student access the textbook used in the regular education 8th grade science class at her school, Howell and Luckner (2003) introduced several learning strategies to the student through the use of mini-lessons. These mini-lessons included instruction in the following skills: knowledge and use of text features, explicit instruction of content-specific vocabulary, use of mental imagery when reading, and summarization. The researchers found that using mini-lessons to explicitly teach how to use specific text features (e.g., headings, bold words, captions, etc.), with an explanation of the purpose of each text feature, helped the student navigate information presented in her grade level textbook. The student reported that using text features was the most helpful for identifying where specific information could be found, eliminating the need to re-read the entire chapter to find salient information, and reducing the time needed to answer questions and complete assignments. This finding is particularly noteworthy as the student's documented reading level was well below that of the text, suggesting that use and knowledge of text features may be a powerful tool in helping readers who are DHH access content information through text.

Howell and Luckner (2003) also used text features (images and captions) to help the student learn content-specific vocabulary. Meanings of content-specific vocabulary and phrases were recorded on note cards along with text features (e.g., drawings and photos), printed English text, and pictorial representations of corresponding ASL signs. The student was also taught summarization strategies to assist in developing a schema for the identification of main information, deletion of redundant and trivial information, and identification of main ideas and supporting information in her science textbook. Howell and Luckner (2003) found that by introducing these strategies to the student, she was able to develop confidence in reading content material in science, and she independently applied the strategies she learned in her other content area courses.
By the end of the school year, the student had made almost one full year of progress in her reading comprehension skills, raising her reading level from 3.4 (reading at the level of a third grade student in their 4th month of schooling) to 4.2 (reading at the level of a fourth grade student in their 2nd month of schooling) as measured by the Woodcock Reading Mastery Test: Passage Comprehension. As a result of her progress, the student stated that she "felt confident enough to enroll in general education" for all of her content area classes the following school year. Both the Diebold and Waldron (1998) and the Howell and Luckner (2003) studies demonstrate the effectiveness of using text features to aid comprehension of content area texts. In addition, Howell and Luckner's (2003) findings are directly in line with the recommendations of Bluestein (2010), which indicate that simply identifying and using text features is not enough; explicit instruction in how to use text features, along with an explanation of their purpose, is necessary to foster independence in reading and comprehending complex text.

Content-specific vocabulary. Content-specific vocabulary refers to the technical terms and phrases particular to certain disciplines, such as democracy, election, and globalization in social studies; molecule, nucleus, and covalent bond in science; and radius, diameter, and circumference in mathematics (Yopp, et al., 2009). Understanding of content-specific vocabulary plays a critical role in both reading comprehension and academic success (Vacca & Vacca, 2008). Content-specific vocabulary words are complex (Vacca & Vacca, 2008), and learning them occurs gradually, often requiring multiple exposures (Nagy & Scott, 2008). In a review of academic vocabulary research, Nagy and Townsend (2012) concluded that content-specific vocabulary development is strongly tied to academic language development, and that one cannot be learned apart from the other. As such, there is a distinct relationship between content-specific vocabulary knowledge, linguistic access, language development, and literacy achievement (Beck & McKeown, 2007; Stahl & Nagy, 2006). This relationship between academic language development and content-specific vocabulary development becomes especially problematic for readers who are DHH, as the lack of linguistic access common to many students who are DHH (Hyde, et al., 2003; Marschark, et al., 2002; Skutnabb-Kangas, 1990; Serrano Pau, 1995; Williams, 2012), regardless of communication modality, can be a barrier to vocabulary development and, in turn, affect literacy achievement.

Incidental learning, in addition to developing a person's schema (Furth, 1996; Marschark & Hauser, 2008; Nunes & Moreno, 2002; Rapin, 1986), is also an important component of learning vocabulary. Fagan and Pisoni (2010) concluded that limited access to incidental learning from orally presented information was a factor in the vocabulary delays of students between 3 and 14 years of age who used cochlear implants and listening and spoken language as their primary method of communication. Similar conclusions regarding the impact of limited access to incidental learning on children who are DHH who use signed communication have also been discussed in the literature (Marschark, et al., 2002).
Content-specific vocabulary plays an important role in understanding and comprehending content area texts, yet no studies were identified that addressed strategies for fostering content-specific vocabulary with readers who are DHH. Much of the research on vocabulary development with students who are DHH is comparative in design, measuring the vocabulary proficiency of students who are DHH against those who are hearing (Trezek, et al., 2010), and is focused primarily on young DHH children's expressive and receptive vocabulary skills (e.g., Easterbrooks, Lederberg, Miller, Bergeron, & Connor, 2008; Lederberg & Beal-Alvarez, 2011; Lederberg & Spencer, 2001; Pittman, 2008; Williams, 2012, among others). As such, it is important to understand how teachers of the deaf teach content-specific vocabulary and to investigate evidence-based strategies that support learning and understanding of content-specific vocabulary words with students who are DHH in the upper-grade, content area classroom.

Inference. Making an inference in reading is defined as using two or more pieces of textual information to uncover details implied by the author (Graesser, Singer, & Trabasso, 1994; Kispal, 2008). The literature identifies many different skills within inference, which scholars have grouped into three main categories: coherence (or text-connecting) inferences, elaborative (or gap-filling) inferences, and global inferences (Graesser, Singer, & Trabasso, 1994; Kispal, 2008; McMackin & Lawrence, 2001). Coherence inferences are meaningful connections found within textual information (Graesser, Singer, & Trabasso, 1994; Kispal, 2008; McMackin & Lawrence, 2001). For example, given the following two statements, "Sandra is my best friend. She is in my math class," one could infer that the word "she" in the second sentence refers to Sandra. Elaborative inferences, on the other hand, are connections made between textual information and the reader's prior knowledge. Elaborative inferences require the reader to activate background knowledge in order to understand connections that are not explicitly stated within the text (Cain, Oakhill, & Bryant, 2004; Graesser, Singer, & Trabasso, 1994; Kintsch, 1998). For example, in the following two statements, "The bike was gone. Sue saw the broken lock in the grass and started to cry," the reader must activate background knowledge in order to infer that the bike was stolen, as this is not explicitly stated in the text (Kispal, 2008). Global inferences are those that create coherent connections and representations across a text (Graesser, Singer, & Trabasso, 1994; Kispal, 2008). When making global inferences, the reader must look for meaning that is implied rather than explicitly stated in order to identify themes, main ideas, and morals across a text. According to the descriptions provided in the literature, global inferences take the elaborative inference to the next level by examining the text as a whole instead of being limited to implied information between sentences or paragraphs. This means that the reader must take all of the information presented in a text (e.g., factual information, author's voice, main ideas, recurring themes, etc.) and use both coherence and elaborative inferences, in addition to activation of background knowledge, in order to make broad generalizations about the message the text is trying to convey.
For example, in a text discussing the perspectives of the British soldiers and the American colonists during the Revolutionary War, the reader would have to recognize the author's position on the topic, take into consideration all of the information presented, activate background knowledge, and draw on previous experiences in order to gain a more global understanding of the message the author is trying to convey. The message may be a moral one or a political one. It is up to the reader to use the information at hand, in addition to what they already know about similar topics, to understand the true meaning and intentions of the text.

In order to measure a student's ability to make inferences from a particular text, Cain and colleagues (2004) suggest using questioning. Questioning that examines the recollection of literal information measures a reader's ability to make coherence inferences. For example, given the following statement, "Suzie is five years old; she is in kindergarten," the following question could be asked: "Is Suzie a boy or a girl?" Using Bloom's Taxonomy (Bloom, 1956), this type of question would be classified as a comprehension level question. Questioning that measures a reader's understanding of relationships between information presented over more than one sentence would be used to determine a reader's ability to make elaborative inferences. For example, in the statements, "The jar fell off the shelf. Tony grabbed a broom from the back to sweep up the pieces," a question to measure the ability to make an elaborative inference might be, "What happened to the jar?" Using Bloom's Taxonomy (Bloom, 1956), this type of question would be classified as an analysis level question. For global inferences, questioning that requires the reader to use the information presented in the full text while also activating background knowledge is necessary. For example, some questions that could be asked to measure a reader's ability to make global inferences in a text about the experiences of the British soldiers during the American Revolution are: "If you were a British soldier, what would you have done in this situation?" or "Taking the perspective of the Bostonians, how would you have solved the problem of your fellow Bostonians being angry with the British soldiers?" Using Bloom's Taxonomy (Bloom, 1956), these types of questions would most likely be classified as synthesis or evaluation level questions.

Comparative and exploratory studies (e.g., Brown & Brewer, 1996; Davey, LaSasso, & MacReady, 1983; Sarachan-Diely, 1985) have reported that although students who are DHH do make inferences when reading, there are differences when compared with hearing readers of the same reading abilities. Essentially, readers who are DHH tend to make more errors when drawing inferences (Brown & Brewer, 1996; Davey, LaSasso, & MacReady, 1983; Sarachan-Diely, 1985) and tend to draw inferences from text at a slower pace than their hearing counterparts. These findings could be due to inadequate background knowledge (as discussed by Trezek, et al., 2011) or delays in language (as discussed by Marschark & Hauser, 2008). No studies, however, were found that specifically addressed why students who are DHH make more errors when drawing inferences, or why readers who are DHH draw inferences more slowly than their hearing counterparts.
Further, no studies were found that investigated strategies for developing the skill of inference among readers who are DHH when reading content area texts. Inference and background knowledge are important skills for reading comprehension (Cain, et al., 2004; Chall, 1996; Kintsch, 1998), and reading ability and experiential knowledge directly affect inference skills (Brown & Brewer, 1996; Davey, LaSasso, & MacReady, 1983; Sarachan-Diely, 1985). The more skilled a reader is, the more accurately s/he will be able to draw inferences from a text (Brown & Brewer, 1996; Davey, LaSasso, & MacReady, 1983; Sarachan-Diely, 1985). Because children who are DHH often approach text with limited reading proficiency (Easterbrooks & Beal-Alvarez, 2012; Traxler, 2000) and less background knowledge than their hearing peers (Andrews & Mason, 1991; Boyd & George, 1971; Bringham & Hartman, 2010; Dymock & Nicholson, 2010; Easterbrooks & Stephenson, 2006; Schirmer, 1997), the ability to make elaborative and global inferences becomes an even greater challenge (Marschark & Hauser, 2008).

Content Area Reading Skills Are Interconnected

The research reviewed in this chapter demonstrates that the five skills of CAR are interconnected. For example, making inferences, interpreting text features, and learning content-specific vocabulary require the reader to activate background knowledge (Bringham & Hartman, 2010; Cain, et al., 2004; Chall, 1996; Kintsch, 1998; Luckner, et al., 2006; Trezek, et al., 2010; Williams, 2012). In addition, text features and knowledge of content-specific vocabulary help readers who are DHH understand text structure (Bringham & Hartman, 2010; Diebold & Waldron, 1998; Howell & Luckner, 2003; Negin, 1987). Knowledge of text structure, text features, and content-specific vocabulary assists readers who are DHH in making inferences (Bringham & Hartman, 2010; Dymock & Nicholson, 2010; Negin, 1987; Williams, 2012). Thus, these five skills work together to foster comprehension of content area texts. Instructional strategies for addressing CAR development should be designed in a way that integrates all five skills instead of using them in isolation.

Theoretical Framework

The present study is framed by three theoretical constructs (see Figure 1), introduced here and explained below. The first construct is that social-constructivist instructional approaches (e.g., socially-mediated instruction) support higher levels of learning. The second construct is that a teacher's preconceptions about their students as readers and learners influence their instructional choices. These preconceptions influence the extent to which certain skills (e.g., CAR skills) are integrated during instruction as well as what types of instructional approaches are used (e.g., socially-mediated versus didactic instructional approaches). The third construct is that knowing how students think and learn is necessary for designing effective instruction. Figure 1 shows how these three theoretical constructs are related to the skills of CAR and to one another. Each construct is explained in greater detail in the sections below.

Figure 1: Theoretical framework.

Social-constructivist theories of learning support higher levels of learning.
Social-constructivist instructional practices that include socially mediated instruction such as teacher-facilitated learning, peer-to-peer learning activities, experiential learning activities (such as field trips and field experiences), and interactive learning experiences (such as discussion, role play, and simulations) support higher levels of thinking and learning and build schema (Gee, 2008; Vygotsky, 1978; among others). These instructional practices support academic language development (Gee, 2008), higher proficiency in critical reasoning (Gee, 2008; Morrell, 2004), and text comprehension (Stahl, 1994; Stewart & Kluwin, 2001). Academic language development, critical reasoning, and text comprehension have all been identified as skills that many readers who are DHH need support in developing (Krashen, 1982; Marschark & Hauser, 2008; Meath-Lang & Albertini, 1984, among others).

In order to read content area texts, the teacher must provide the reader with a schema for understanding these texts (Gee, 2008). This means that readers must learn how to interact with, think about, believe in, and communicate with text through socially mediated instruction such as discussions. For example, Brown (1997) stressed the importance of collaboration among learners in the K-12 setting. In a social-constructivist learning environment, collaboration contributes to knowledge building and sharing among learners (Jonassen, Davidson, Collins, Campbell, & Haag, 1995). In the deaf education classroom, the language barriers often faced by students who are DHH (Marschark, et al., 2002) may have an impact on the development of their schema for understanding content area texts.

Despite the support for instructional approaches that are social-constructivist in nature, there still seems to be a disconnect between theory and pedagogy. Gee (2008) points out a paradox between research and practice, and contradictions found in the report of the National Reading Panel from the National Institute of Child Health and Human Development (2000). He identifies one of these instances with the following quote:

"Note the paradox here: The report acknowledges Cain's claim that we know too little about comprehension difficulties because research has concentrated on word recognition, but then the report goes on to blithely concentrate on decoding and word recognition, as if we can safely ignore our ignorance about the difficulties in comprehension and make recommendations about reading instruction in the absence of such knowledge." (p. 37)

In this quote, Gee (2008) illustrates a very clear need for investigation into alternative methods for developing the advanced literacy skills that are necessary for all readers. His insights suggest that poor literacy achievement among readers (such as readers who are DHH) may be connected to their difficulties in using language to mediate texts of specific genres, such as content area texts.

Teacher preconceptions influence the way they instruct. The second theoretical construct that guides this research considers the relationship between teachers' perceptions of their students and the way they teach. That is, if a teacher believes that his/her students are capable of having deep and thoughtful conversations about a specific topic, their instruction might include more opportunities for teacher-facilitated learning (e.g., discussions/debates) or peer-to-peer learning.
Conversely, if a teacher does not believe that students are capable of constructing knowledge through discussions or cooperative learning opportunities, their instruction might favor more teacher-directed learning (e.g., didacticism or "banking" as discussed by Freire, 1993), relying on skills-based instruction and lecture. Freire (1993) argues that this type of instructional approach (didacticism) diminishes opportunities for critical thinking, understanding, and comprehension. It stands to reason that if a teacher believes that a particular group of students does not have the language skills needed to mediate learning through discussion or collaborative learning, instruction that includes social-constructivist learning activities would be abandoned for more skills/lecture-based pedagogical methods, which further impede growth of academic language and advanced literacy development (Freire, 1992; Gee, 2008; Morrell, 2004; Oakes, 1985; among others).

Gee (2008) argues that teachers who perceive their students as low-achieving tend to avoid instruction that incorporates higher-level skills, such as those needed for acquiring advanced literacy skills. Gee (2008) supports this theory with data from interviews of hearing junior high school students placed in low-track and high-track English classes, as cited in Jeannie Oakes' (1985) work. When asked about what skills they thought were valued by their English teachers, "low-track" students reported the following behaviors: obeying directions, raising their hand, and having good attendance. Conversely, students placed in "high-track" English classes reported that participating in discussions, asking questions, and thinking broadly about concepts while making connections to their lives were skills valued by their teachers and important for being successful in class. Gee (2008) and Oakes (1985) attribute this drastic difference to what teachers believe about their students' abilities as learners.

This phenomenon is also discussed in Ernest Morrell's (2004) work, which investigated the marginalization of low-performing urban students in what was described as a "two school system." Morrell (2004) revealed vast differences in how students identified as "underachievers" were educated compared to those identified as "high-achieving." Morrell (2004) found that the underachievers were given consistently fewer opportunities for developing critical thinking skills when compared with the opportunities available to their high-achieving counterparts. The "underachievers" were often placed into educational tracks that focused on basic skills instruction such as filling out job applications, behaving in class, and maintaining good attendance, whereas "high achievers" were placed into educational tracks that encouraged critical literacy practices and research-based learning with a focus on college preparation. Morrell (2004) found that when underachieving students were given the same educational opportunities as their high-achieving counterparts, not only did the students' perceptions of themselves change, but so did the preconceptions that teachers and administrators had of them. As these students gained more confidence in what was actually possible, the "underachievers" began to perform as well as, and in some cases better than, their "high-achieving" counterparts.
Morrell's (2004) findings indicate that raising expectations and providing educational opportunities focused on college preparation may have a profound effect on students' performance and on how they are perceived as learners. It is important to understand what preconceptions teachers of the deaf have regarding their students in order to better understand their instructional choices. Over the past century, the reported reading achievement outcomes for students who are DHH have not improved (Allen, 1986; Easterbrooks & Beal-Alvarez, 2012; Furth, 1966; Paul & Jackson, 1993; Pintner & Patterson, 1916; Traxler, 2000; Trybus & Karchmer, 1977). It could be argued that social categorization of readers who are DHH, that is, the mental process of categorizing individuals based on preconceptions and/or previous experiences with individuals who share similar traits (see Bruner, 1957, and Robinson & Tajfel, 1996, for more on social categorization), has primed teachers to believe that excellent readers who are DHH are the exception rather than the norm, thereby influencing their beliefs about what their current and future students may actually be capable of. These beliefs would subsequently influence the type of instructional approaches used by the teacher; for example, a teacher might favor low-level, skills-based instructional practices over critical thinking and collaborative practices, as reported and described by Gee (2008), Morrell (2004), and Oakes (1985), among others.

Knowing how students think and learn is necessary for designing effective instruction. The third theoretical construct to guide this research considers that, in order to design effective instruction, teachers must understand how students think and learn. For example, when teachers understand the processes behind how students think in different disciplines (e.g., mathematics, science, social studies, etc.), they are better able to transform their instructional practices in ways that directly promote higher levels of student achievement. Simply knowing the thinking processes that support CAR can have an effect on student achievement with regard to reading and comprehending content area texts. These three constructs are important for understanding and framing a teacher's pedagogical choices when it comes to CAR integration during content instruction.

In a study investigating teachers' knowledge and preconceptions about children's thinking, Carpenter, Fennema, Peterson, and Carey (1998) found that in classrooms where teachers had a good deal of knowledge about their students' thinking processes, the students demonstrated higher levels of achievement than did those placed in classrooms where the teachers had little or no knowledge regarding students' thinking processes. The researchers argue that in order to understand the educational choices made by a teacher, it is crucial to understand, first, how the teacher perceives his/her students as learners and, second, how much the teacher understands about students' thinking processes with regard to learning. Once teacher preconceptions about student performance can be identified and linked to current instructional practices, a framework can be implemented to help support teachers in transforming their educational practices to promote higher student achievement.
Carpenter and colleagues (2000) highlight several studies (Carpenter, Fennema, Peterson, Chiang, & Loef, 1989; Fennema, Franke, Carpenter, & Carey, 1993; Fennema, Carpenter, Franke, Levi, Jacobs, & Empson, 1996) showing that teachers' understanding of the development of students' mathematical thinking leads to fundamental changes in instructional approaches, which are also reflected in student achievement. Given such support, it makes sense to infer that if teachers are given a framework for understanding how advanced literacy skills are developed in other content areas, it would have a similar impact on literacy achievement.

Social Studies and Content Area Reading

Social studies curricula in the United States are designed in such a way that success is directly tied to reading proficiency (Stewart & Kluwin, 2001). For example, the National Council for the Social Studies (NCSS) position statement (2014) for developing state and local standards emphasizes that social studies education should have "direct and explicit connections to the Common Core State Standards for English Language Arts and Literacy in History/Social Studies" (NCSS, 2014, p. 199). Further, the standards applied to social studies through the Common Core State Standards (CCSS) guidelines pertain specifically to literacy skills, supporting the idea that social studies is one conduit for fostering literacy in the K-12 setting. For example, CCSS.ELA-LITERACY.RH.6-8.10 states, "By the end of grade 8, read and comprehend history/social studies texts in the grades 6-8 text complexity band independently and proficiently" (CCSS, 2010). Additionally, when examining the CCSS related to social studies, the skills of CAR begin to emerge. See below for some examples.

• CAR Skill: Text Features - CCSS.ELA-LITERACY.RH.6-8.7: "Integrate visual information (e.g., in charts, graphs, photographs, videos, or maps) with other information in print and digital texts."
• CAR Skill: Text Structure - CCSS.ELA-LITERACY.RH.6-8.5: "Describe how a text presents information (e.g., sequentially, comparatively, causally)."
• CAR Skill: Content-Specific Vocabulary - CCSS.ELA-LITERACY.RH.6-8.4: "Determine the meaning of words and phrases as they are used in a text, including vocabulary specific to domains related to history/social studies."
• CAR Skill: Inference - CCSS.ELA-LITERACY.RH.6-8.6: "Identify aspects of a text that reveal an author's point of view or purpose (e.g., loaded language, inclusion or avoidance of particular facts)."

Despite these explicit connections between social studies and literacy highlighted by the NCSS and the CCSS, general strategy instruction (e.g., CAR skills) for reading content area text is not often included during content area instruction (Early, 1957; Faggella-Luby, Graner, Deshler, & Drew, 2012; Ryder & Graves, 1994; Sanacore & Palumbo, 2009). One reason for the lack of strategy instruction that focuses specifically on CAR could be that literacy skills are not as universal as previously thought (e.g., what works for narrative text may not work or even make sense with content area/expository texts) and that there is a need to shift from theory-based instructional strategies to research-based strategies with regard to reading in the content areas (Shanahan, 2012).
Others argue that too much focus has been placed on disciplinary literacy in the content area classroom, which has been represented as a replacement for general strategy instruction (e.g., CAR) (Faggella-Luby, Graner, Deshler, & Drew, 2012; Fang, 2012; Shanahan & Shanahan, 2012). Shanahan (2012) points out that more attention has been given to disciplinary literacy in recent years as a result of the Common Core Standards movement in the United States. The Common Core Standards include expectations for students to independently read informational texts; to integrate, analyze, and evaluate informational texts; to interpret and analyze author intent; and to identify the main arguments and key details presented in informational texts through "close reading" (CCSS, 2010). However, disciplinary literacy is different from general strategy instruction, as the focus of disciplinary literacy is aimed more at what is taught than at how it is taught (Shanahan, 2012). General strategy instruction (e.g., CAR) is different in that the focus is on the skills needed to understand what is being taught. For example, disciplinary literacy focuses on the concepts, skills, vocabulary, and thought processes that are necessary for thinking like a scientist, mathematician, or historian (Shanahan, 2012), but may not include specific strategies or skills for reading content area texts. CAR strategies, however, include specific skills (e.g., activation of background knowledge, building of background knowledge, understanding text structure, knowledge and use of text features, content-specific vocabulary knowledge, and inference) needed to navigate and understand content area texts (Vacca & Vacca, 2010).

Faggella-Luby, Graner, Deshler, and Drew (2012), among others, argue that while disciplinary literacy instruction is a potentially powerful component for improving the literacy outcomes of adolescent readers, it should not be viewed as a replacement for general strategy instruction (e.g., CAR), especially for struggling adolescent learners, as these learners require additional support to build and develop the foundational skills for reading and learning in the content areas. Because disciplinary literacy addresses the specific skills, strategies, and practices that are fundamental to a particular discipline, these highly specific skills are not universally applicable to other disciplines, content areas, and instructional frameworks. Further, disciplinary literacy instruction is designed in a way that assumes students have foundational reading skills. For struggling students, Faggella-Luby and colleagues (2012) recommend a shift in focus to general strategy instruction as a means to build a strong foundation on which the skills of disciplinary literacy can stand.

To date, few studies have addressed discipline-specific strategies for comprehension of content area texts, and none have focused on strategies for comprehension of social studies texts (Faggella-Luby, Graner, Deshler, & Drew, 2012; Thornton, 2001). Faggella-Luby and colleagues (2012) found that research-based strategies addressing comprehension of content texts in the upper grades have focused solely on English Language Arts (i.e., story structure and narrative theme identification) and that few of these studies have focused on strategies targeting grade-level and/or disciplinary reading materials (Faggella-Luby, et al., 2012). Moreover, few studies address social studies instruction in the deaf education classroom.
A literature search for social studies and deaf education revealed just two research studies (Bringham & Hartman, 2010; Woolsey, Herring, & Satterfield, 2009). Woolsey and colleagues (2009) investigated how teachers of the deaf allocate instructional time for social studies in grades 3-5. Seventeen teachers in seven residential schools for the deaf were observed during academic instructional blocks, each for four consecutive school days. Results revealed that social studies was not given the same amount of instructional time as other subjects such as mathematics and language arts. Woolsey and colleagues (2009) cited several reasons that may explain why social studies was not given the same amount of time as other subjects in their study. First, they cited the National Council for the Social Studies (2007) report that blames policy mandates, such as No Child Left Behind (NCLB, 2002), for creating pressure for teachers and schools to dedicate more instructional time to mathematics and language arts. Second, Woolsey and colleagues (2009) referenced the lack of evidence-based strategies that support learning in social studies, as discussed by Thornton (2001), and third, insufficient teacher preparation in social studies methodology, as discussed in a report by the Bayer Corporation (2004).

The Woolsey and colleagues (2009) study did suggest some natural connections between social studies and reading instruction. For example, teacher participants who dedicated the most instructional time to social studies did so by incorporating social studies instruction within their language arts block, suggesting that there may be a natural fit between these subjects. The authors did not specifically indicate how language arts and social studies were integrated (e.g., text-based learning, read alouds, writing, etc.), but did mention that reading social studies trade books during the English/Language Arts block was how one teacher was able to integrate the two subjects. Woolsey and colleagues (2009) also found that the majority (72%, averaged across grades) of observed social studies lessons across teacher participants was delivered through interactive learning activities (e.g., hands-on activities such as building the pyramids and painting a state mural), and that 12% (averaged across grades) of observed instructional time was dedicated to socially-mediated instruction that included class discussion, further supporting the importance of social-constructivist instructional approaches in the social studies classroom.

Bringham and Hartman (2010) found that using social-constructivist instructional approaches, such as discussion and questioning, to teach the concept of prediction as it related to social studies helped students better understand the concepts they were learning in their social studies class. The authors indicated that social studies concepts were taught "as a story" through read alouds, although text-based learning did not appear to be used as part of their intervention. Bringham and Hartman's (2010) study emphasized the importance of building background knowledge (as discussed earlier) as well as the importance of using social-constructivist instructional approaches during social studies instruction in the upper grades.
They did not, however, report on the frequency or duration of socially-mediated instruction used by the teacher in their study, nor did they report the level of intensity at which each skill was addressed, thus making it difficult to determine how the teacher implemented these strategies during instruction. Although the studies of Bringham and Hartman (2010) and Woolsey et al. (2009) are positive first steps in the investigation of social studies instruction in the deaf education classroom, and in the identification of potentially important strategies for developing CAR proficiency among students who are DHH, there is much to be done in terms of building a framework of understanding of how reading instruction is incorporated in the social studies classroom, as neither study investigated CAR strategies or instructional approaches.

The studies reviewed in this chapter describe some of the challenges faced by students who are DHH as well as some insights into potential learning strategies that may support CAR development; however, none of them provide an in-depth understanding of how teachers of the deaf in upper-grade content classrooms (e.g., social studies) make use of their instructional time in relation to content area reading. It is not known which CAR skills teachers of the deaf integrate during a typical instructional unit, nor is it known how often they integrate these skills, or at what levels of complexity, if they are integrated at all. Further, little is known about how secondary content area teachers of the deaf perceive their students as readers and learners, and whether or not these preconceptions influence instructional practices. It is important to have an understanding of what instruction looks like in the upper-grade, deaf education content classroom so that educational interventions that support the development of CAR skills with students who are DHH can be developed and investigated in ways that meet the strengths, needs, and constraints of these classrooms. As such, the following research questions were used to guide the present study and thus provide context regarding the instructional practices used by upper-grade level teachers in the deaf education social studies classroom and how these teachers integrated CAR during social studies instruction:

• In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction?
o What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom?
o What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom?
• What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies?
o Is there a relationship between these teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom?
o Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom?

CHAPTER 3

METHODS

The reading outcomes for students who are DHH have stagnated over the last half century (see Furth, 1966; Moores, 2001; Robertson, 2009).
One reason for this stagnation could be that advanced literacy skills are not being taught when students enter the upper grades (e.g., middle school and high school) (Chall, 1996; Neufeld, 2005; Shanahan & Shanahan, 2008), especially in the deaf education classroom (Moores, 2001). To date, no studies have been conducted that evaluate or describe how teachers of the deaf incorporate content area reading (CAR) skills during instruction, or even which instructional approaches are typically used by these teachers. As such, little is known about the instructional and CAR practices of upper-grade teachers in the deaf education content classroom. In an attempt to fill this gap in the research, the present study used the following research questions as a guide to investigate the instructional and CAR practices of four teachers of the deaf in two middle school and two high school social studies classrooms.

• In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction?
o What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom?
o What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom?
• What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies?
o Is there a relationship between these teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom?
o Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom?

Study Design

Instrumental case study design (Stake, 2000) was used to answer the research questions. This design is used to gain insight into a specific problem (e.g., content area reading practices in the deaf education content area classroom) in order to understand a generalized problem (e.g., the poor literacy rates among high school graduates who are DHH) across multiple cases (Stake, 2000). According to Stake (2000), instrumental case study design differs from a traditional case study in that its focus is known prior to the collection of data and is designed around established theory. An instrumental case study investigates the larger issue at hand, and each case is used to support the understanding of that larger issue, instead of looking at each case as a single phenomenon. In the present study, multiple cases were selected. Stake (2000) states that multiple cases are included in an instrumental case study design when the investigator is looking to understand the larger collection of cases.

Participant recruitment. Before participants were recruited, the study was proposed to and approved by the Michigan State University Institutional Review Board (IRB). Once approval was granted, a "call to participate" notice was posted on social media (Facebook) and sent via email and SMS text message to personal contacts who had connections with administrators at various residential schools for the deaf and deaf education programs located in general education settings.
In addition, fourteen other schools and deaf education programs located in general education settings, which were located in logistically feasible areas and served the population targeted by the study, were contacted directly by the researcher via email and telephone. Recruitment was targeted towards upper grade (middle school and high school) social studies teachers in schools using sign-based communication (e.g., ASL, Simultaneous Communication) as the primary mode of instruction. For the purposes of the present study, social studies was defined as any course that would satisfy the social studies requirement for the participating school or school district. All teachers that participated in the study were assigned to United States History courses for the 2013-2014 school year. Recruitment was targeted in this way as a means to strengthen the generalizability of findings across teachers in similar settings. The participant recruitment process took approximately four months, with recruitment overlapping data collection for Teacher A.

Participant selection criteria. Selection priority was given to teachers identified as "highly qualified" to teach social studies based on No Child Left Behind (NCLB) as well as teachers who held a state-recognized social studies endorsement. To be "highly qualified", the United States Department of Education states that a teacher must have a baccalaureate degree, be licensed to teach in the state in which they work, and prove that they know the subject matter they are teaching (U.S. Department of Education, 2005). To prove that they know the subject matter, the United States Department of Education provides several options. Teachers can either have a college major in the subject matter they are teaching, pass a state-developed test to measure their level of competency in that subject area, have college credits equivalent to those needed to major in the subject area they are teaching, obtain an advanced certification from the state, or obtain a graduate degree (U.S. Department of Education, 2005). Teachers who are currently employed also have the option to demonstrate competency through the High, Objective, Uniform, State Standard of Evaluation (HOUSSE), which may include years of teaching experience, professional development, and/or demonstration of knowledge acquired over several years of being in the profession (U.S. Department of Education, 2005). Preference was given to participants with at least three years of teaching experience in the deaf education classroom. Three years of teaching experience was included as a selection criterion because teachers with at least three years of experience are eligible for tenure in their positions, are able to supervise a student teacher, and are thus no longer considered novice teachers. Once teacher participants were identified by the school's administration, an informal request was sent to each potential teacher participant to determine if there was an initial interest. A formal inquiry was then made with the school administrator responsible for addressing research inquiries at the school. Each administrator was asked if there was a requirement for additional external review or research approval in order to conduct the study. None of the participating schools indicated that additional permission was required. A research approval letter (Appendix A) was then sent to each of the participating schools and signed by the individual responsible for making executive decisions regarding research at the school.
As a result of the recruitment process, four social studies teachers of the deaf (two middle school teachers and two high school teachers) from three residential schools for the deaf located in three states, one in the Midwest, one in the East, and one in the Southeast regions of the United States participated in the study. The teachers and administrators were then informed about the role of the researcher in their classroom and the school, procedures for maintaining confidentiality of data, and their roles as participants in the study. The teachers were informed that upon the completion of data collection, each would be compensated with a gift card from a national chain bookstore in the amount of $25. The administrative team at each of the participating schools were informed that the school would also receive professional development in content area reading (CAR) conducted by the researcher upon the completion of the study. Once teachers provided formal consent (see Appendix B), they were each given parental consent forms (see Appendix C) to distribute to all families of students enrolled their social studies classes. Two schools (school 2 and school 3) requested to have parental consent forms drafted in Spanish (see Appendix D). A native Spanish speaker was contacted to translate the parental consent form into Spanish. After parental consent forms were returned to the schools, the participating class was chosen. The class with the highest number of parental consent forms returned was selected as the participating class for each teacher. The reason for this purposeful selection was to reduce complications that might arise from potentially capturing non-participants on video (data were collected 48 via video recording of lessons) and thus having to edit out footage and potentially complicating the data analysis process. After each participating class was identified, the researcher was formally introduced to the students. During this time, the researcher explained the role of a researcher in their classroom and provided a brief overview of the information outlined in the student assent form (see Appendix E). Students were also given the opportunity to ask the researcher questions about the study, the role of the researcher in their classroom, their roles as students in the participating classroom, and other personal questions about the researcher’s role in the Deaf community. This is an important step in establishing trust within Deaf culture and reducing anxiety as well as the potential for unnatural behavior. Student assent forms were presented to each student from each participating class, one-on-one, in their preferred mode of communication (e.g., American Sign Language, Spoken English, Simultaneous Communication, or a combination of these). During the assent process, all students appeared to clearly understand the information as it was presented to them. The majority of students asked to have the assent form explained in American Sign Language (ASL). Two students asked to have the information explained to them using spoken English without sign support, and three asked to have the information explained to them using sign-supported speech. During the assent process, the researcher emphasized that they could chose to opt out of the study, even if their parents signed the consent form, and that there would be no consequence for choosing not to participate. All students in Teacher A’s and Teacher B’s classrooms provided assent to participate in the study. 
Two students from Teacher C's class, who had obtained parental consent, chose to opt out of participating because they did not want to take part in the assessment process. Two students from Teacher D's class did not receive parental consent and, as such, did not participate in the assent process. No personal or educational data were collected for the four non-participating students.

Teacher Participants

Demographic data for each of the participating teachers were collected during a semi-structured interview. Some information was clarified through email communications with the teacher participants. Information about school settings, educational philosophies of the schools, and other pertinent information was collected from the teachers, school administration, and information posted on each school's website. Table 1 summarizes the demographic information for each teacher participant. Specific details for each of the participating schools and teachers are described in the narrative below.

Table 1: Demographic Data of Teachers

Teacher A
Grade level(s)/educational philosophy of the school: 8th / Bilingual Residential School for the Deaf
Audiological status: Deaf (pre-lingual)
Native language: Reported "no language"; learned ASL around 4-5 years old from a sign class
Years teaching/years teaching social studies: 17/14
Educational background/certifications: BA Elementary Education; MA Special Education (focus in deaf education); Certification in Educational Leadership; HQ in Social Studies

Teacher B
Grade level(s)/educational philosophy of the school: 8th / TC Residential School for the Deaf
Audiological status: Hearing
Native language: Spoken English; learned ASL in college
Years teaching/years teaching social studies: 8/6
Educational background/certifications: BA Deaf Education K-12/Middle Grades Integrated Curriculum (grades 5-9); ESOL endorsement; State Reading Initiative Certification

Teacher C
Grade level(s)/educational philosophy of the school: 9-12 mix / TC Residential School for the Deaf
Audiological status: Deaf (pre-lingual)
Native language: Spoken English; learned ASL at age 9 at the school for the deaf
Years teaching/years teaching social studies: 9/3
Educational background/certifications: BA Government/General Education; MA Deaf Education/Social Studies

Teacher D
Grade level(s)/educational philosophy of the school: 9-10 mix / Bilingual Residential School for the Deaf
Audiological status: Deaf (post-lingual, as adult)
Native language: Spoken English; learned ASL around age 30 by taking classes, visiting Gallaudet, and interacting with the Deaf community
Years teaching/years teaching social studies: 18/14
Educational background/certifications: BA History/Political Science; MA Soviet and European Economic and Social History; MA Deaf Education; ABD, Soviet Military History

Teacher A

The school. Teacher A taught in a residential school for the deaf in a Midwestern state in the United States. Based on information published on the school's official website, the school is a bilingual educational environment that "recognizes and values both the native language (ASL) of the Deaf community and the majority language (English)." During the period of data collection, the school's enrollment included 139 students (deaf and hard of hearing) from 21 counties and 67 school districts, as indicated by the school's annual report, serving students from 30 months of age through 26 years of age.

Personal and professional background. Teacher A reported that she was pre-lingually deaf and had "no language" until 4 or 5 years of age, at which time she was enrolled in an ASL class. Based on the researcher's observations, Teacher A used ASL with native-like fluency. This was determined based on the researcher's sixteen years of experience working directly with the Deaf community and twenty-one years of experience using ASL. Teacher A communicated effectively with students, as indicated by student engagement and participation during instruction as well as their responses to questions posed by the teacher.
For example, when Teacher A asked a question, students responded by raising their hands and answering with appropriate responses or by extending discussion on the topic by making a comment. There were no observed instances of students asking for linguistic clarification during instruction or otherwise. Teacher A reported having seventeen years of teaching experience, with fourteen years teaching social studies in the deaf education classroom. Teacher A reported that she had a baccalaureate degree in elementary education, a master’s degree in special 52 education (with a focus on deaf education), and a certification in educational leadership. Teacher A also reported that she was “highly qualified” in social studies based on the NCLB criteria (U.S. Department of Education, 2005). When asked about her certifications and endorsements, Teacher A added that she successfully completed the ASL/English Bilingual Professional Development (AEBPD) certification program. AEBPD is a two-year professional development program developed by Dr. Steven Nover as part of a language planning initiative at the New Mexico School for the Deaf (Nover, Andrews, Baker, Everhart, & Bradford, 2002). The training covers a comprehensive review of the history of deaf education practices over the past century as well as best practices and strategies used in schools that adopt the bilingual philosophy of educating students who are DHH. The main goal of the AEBPD training is to support teachers in identifying, reflecting on, and modifying their instructional approaches to support the development of ASL and English in a way that meets the needs of students who are DHH (Nover, et al., 2002). Participating class. Student demographic and educational data were collected over the course of the entire study and were gathered from a variety of sources including information documented on student IEPs, information provided by service providers (e.g., speech language pathologists), information provided by administrators, and information provided by the teachers. To protect the identity of students, the school requested that all student data be reported in aggregate. The participating class was an 8th grade United States History class. At the time of the study, six students were enrolled in the class and ranged in age from 13 years to 15 years. Parental consent and student assent were provided for all six students. Student IQ 53 information was not available for participants in this setting. When asked, a school administrator clarified that IQ data were only documented in cases when the score was below average and/or when students had a documented cognitive disability. The administrator provided assurance that all of the participating students had IQ scores in the average range and that none of the students had a documented disability that would impede learning. Audiological information. All of the students in the participating class had bilateral, sensorineural hearing losses at a variety of audiological levels including the following categories: mild/moderate, severe, and profound. The etiology of hearing loss was indicated on the IEP for three students as “unknown” and for three students as “hereditary deafness”. Only one participating student used an assistive listening device (ALD), a unilateral hearing aid. Of the five students that did not use ALDs during school, one student had bilateral hearing aids, but preferred not using them and one student used bilateral hearing aids only when at home. Language use and preferences. 
Four students reported that ASL was their preferred language and that signing was their preferred modality for communicating and for learning. One student reported a preference for sign-supported speech, and one student reported a preference for using spoken English. The student who reported a preference for spoken English was new to the school and had just started to learn ASL a few weeks prior to the start of the study. Kendall Communication Proficiency Levels (P-levels) were provided for each of the student participants. The Kendall Communication Proficiency Levels assessment was developed at Gallaudet University (Singleton & Supalla, 2005). This assessment uses checklists to determine the syntactic, semantic, and pragmatic development of language and is organized into five categories (Singleton & Supalla, 2005). The five categories of the Kendall Communication Proficiency Levels assessment are: reference (the type and variety of things and ideas communicated); content (semantic categories); cohesion (pragmatic and syntactic links between topics); use (pragmatic functions of language); and form (syntactic characteristics of communication) (Singleton & Supalla, 2005). Five of the participating students had P-levels of seven, which translates to communicative competency equivalent to 11-13 years of age. The student who preferred using spoken English as the primary method of communication did not have a reported P-level. Teacher A reported that because the student had just arrived at the school, a language assessment had not yet been scheduled. The school did not have any information regarding that student's language proficiency in spoken English.

Reading levels. Documented reading levels were obtained from the Individualized Education Plan (IEP) for each of the participating students. Documented reading levels ranged from a 2.4 grade level (reading at the same level as a second grader in their fourth month of schooling) to 12.6 (reading at the same level as a twelfth grader in their sixth month of schooling). These reading levels were based on zone of proximal development (ZPD) (Vygotsky, 1978) scores obtained from the Standardized Test for the Assessment of Reading (STAR). Scores on the STAR test are compared to national norms, and results from the STAR test are predictive of achievement on other standardized reading measures and are recommended by the National Center on Response to Intervention (Renaissance Learning, 2013). The STAR test is computerized and takes approximately 12 minutes to complete (Renaissance Learning, 2013). Scores are immediately calculated and are presented as grade equivalents (e.g., 4.7), percentile ranks, and normal curve equivalents (Renaissance Learning, 2013).

QRI-4. The Qualitative Reading Inventory, 4th Edition (QRI-4) was chosen to assess the instructional reading levels of all student participants with regard to expository text. This assessment was used to provide consistent reading scores across all participating schools and students, as the participating schools did not all use the same reading assessments to measure students' reading achievement levels. The QRI-4 is an informal reading inventory developed by Leslie and Caldwell (2006), and was chosen as an appropriate assessment for measuring content area reading for a variety of reasons, including reported reliability ratings above 0.80 (with 75% of the assessments reaching reliability ratings of 0.90 and higher) (Nilsson, 2008).
The QRI-4 includes measures for background knowledge, understanding of text structure, content-specific vocabulary, and inference (see Appendix F for an example passage). The QRI-4 also includes commonly used text features and social studies passages that assess readers from the pre-primer reading level through the 12th grade reading level. The QRI-4 was rated as an appropriate measure for content reading by the International Reading Association (Nilsson, 2008) and was highly recommended by colleagues who had experience using the assessment with students in the deaf education classroom. Based on the QRI-4 assessment, students in Teacher A’s classroom had instructional reading levels ranging from 3rd grade through middle school. Specific details about the administration of the QRI-4 assessment are described in the data collection section below. 56 Observed instruction. Observations took place over the course of a full unit of study on the American Revolution. Ten observations were scheduled across three calendar weeks, starting on October 29, 2013 and ending on November 21, 2013. Ten observational sessions were determined to be appropriate for providing a comprehensive view of the instructional approaches and CAR integration of a teacher based on a pilot study (Maiorana-Basas, 2013). The first day of observation occurred on the first day of the new unit, and the last day of observation occurred on the last day of the unit. The remaining eight observational sessions were scheduled as evenly spaced out as possible over the course of the entire unit. Specific details about data collection procedures are described in the data collection section below. Teacher B The school. Teacher B taught in a residential school for the deaf in a Southeastern state in the United States. Based on information published on the school’s official website the school was described as a “residential school for the sensory impaired”. Deaf education classrooms at the school followed a total communication educational philosophy where both sign-based communication (e.g., ASL and/or sign-supported speech) and spoken English were used as the languages of instruction. During the period of data collection, the school’s enrollment included 600 on campus students (deaf, blind, and deaf-blind), grades Pre-k-12th grade and 400 children from across the state, served by the infant/toddler program. Personal and professional background. Teacher B was hearing and reported having eight years of teaching experience, including six years of experience teaching social studies in the deaf education classroom. Teacher B had a baccalaureate degree in 57 deaf education (k-12) and middle grades integrated curriculum (grades 5-9), which included an endorsement to teach social studies. Teacher B reported being proficient in ASL, but stated that she was still learning. Based on the researcher’s observations, Teacher B communicated effectively with students, as indicated by student engagement and participation during instruction as well as their responses to questions posed by the teacher. Based on the researcher’s observations and knowledge of ASL, Teacher B did use conceptually accurate signs during the majority of interactions with students, using sign-based communication that resembled Conceptually Accurate Signed English (CASE). That is, using signs that appropriately convey the concepts being expressed, but that are executed in a way that resembles more of an English sentence structure. 
There were several instances when complex or abstract concepts were repeated by Teacher B using ASL (without voice support) in order to clarify what had been said using sign-supported speech. In those instances, the researcher noted that the level of conceptual accuracy increased when ASL was used in place of sign-supported speech. While most of the signs were executed with conceptual accuracy when communicating using sign-supported speech, ASL grammar was compromised in favor of English grammar and structure. There were no observed instances of students asking for linguistic clarification during instruction or otherwise.

Participating class. Student demographic and educational data were collected over the course of the entire study and were gathered from a variety of sources, including information documented on student IEPs, information provided by service providers (e.g., speech language pathologists), information provided by administrators, and information provided by the teachers. To protect the identity of students, the school requested that all student data be reported in aggregate. The participating class was an 8th grade United States History class. At the time of the study, students enrolled in the class ranged in age from 14 years to 15 years. Six students were enrolled in the participating class. Parental consent and student assent were provided for all six students. Student IQ scores were obtained from each student's IEP. Scores were obtained within each student's triennial reevaluation and were measured using the Wechsler Intelligence Scale for Children, 4th Edition (WISC-IV) and/or the Universal Nonverbal Intelligence Test (UNIT). Students' perceptual reasoning scores ranged from 77 to 108, as reported in each student's IEP. Four of the six students had documented disabilities reported on their IEPs; all four were identified as language impaired/speech impaired, one of whom was also identified as having a mild educational disability and a mild behavioral disability, and one of whom was identified as having an "other health disorder".

Audiological information. All students in the participating class had bilateral, sensorineural hearing losses. A variety of audiological levels were reported for the students in the participating class, including the following categories: mild/moderate, severe, and profound. The reported etiologies of hearing loss for three students included premature birth, Waardenburg Syndrome, and chronic ear infections. The etiology of hearing loss was indicated as "hereditary deafness" on the IEP for the remaining three students. All six student participants had access to an ALD, but there was variation in their use. Four students had unilateral cochlear implants. Of those four students, one did not use the cochlear implant at all, and one did not use the cochlear implant consistently. Two of the six students used hearing aids. Of these two students, one had a unilateral hearing aid that was not used consistently and one had bilateral hearing aids that were worn consistently.

Language use and preferences. Three of the six students reported that sign-supported speech was their preferred method of communication. Two students reported that ASL was their preferred language for communication. One student reported that ASL was the preferred method of communicating in the academic environment and spoken English was the preferred method of communicating in social settings.
The school did not conduct language assessments and was not able to provide documented language levels for any of the student participants.

Reading levels. Documented reading levels were obtained from the IEP for each of the participating students. Reading comprehension levels ranged from a pre-primer reading level to a 5.0 grade level (reading at the same level as a 5th grader before their first month of schooling). These reading levels were based on the Bader Reading and Language Assessment and the Wechsler Individual Achievement Test (WIAT). The Bader Reading and Language Assessment measures word recognition, reading comprehension, and writing (Bader & Pearce, 2008). The assessment is presented in three sections, each of which is based on performance on the previous section (Bader & Pearce, 2008). For example, a student is first given a word recognition task. Once a student reaches a certain ceiling for that section of the test, they are given a reading comprehension passage that is chosen based on the number of mistakes made on the word recognition task (Bader & Pearce, 2008). The school did not provide information about which edition of the WIAT test was used. The most current edition of the WIAT is the third edition (WIAT-III); as such, information for the third edition is provided. The WIAT-III assessment includes subtests for oral reading, math fluency, early reading skills, enriched listening comprehension, oral expression, written expression, and enhanced reading comprehension, and is nationally standardized (Wechsler, 2009).

QRI-4. Based on the QRI-4 assessments administered by the researcher, students had instructional reading levels ranging from pre-primer to third grade. Details about this assessment are described above (Teacher A, QRI-4), and details about the administration of the QRI-4 assessment are included in the data collection section below.

Observed instruction. Observations took place over the course of a full unit of study on the American Revolution. Ten observations were scheduled over six calendar weeks (including two weeks of winter break), starting on December 18, 2013 and ending on January 28, 2014. As with Teacher A, the first day of observation occurred on the first day of the new unit and the last day of observation occurred on the last day of the unit. The remaining eight observational sessions were spaced as evenly as possible over the course of the entire unit. Specific details about data collection procedures are described in the data collection section below.

Teacher C

The school. Teacher C taught in the same school as Teacher B. Descriptive information about the school can be found above (Teacher B, The school).

Personal and professional background. Teacher C was pre-lingually deaf and reported having nine years of teaching experience, with three years of experience teaching social studies in the deaf education classroom. Teacher C had a baccalaureate degree in government and general education, and a master's degree in deaf education and social studies. Based on her educational experience and teaching certifications, Teacher C met the "highly qualified" guidelines established by NCLB (U.S. Department of Education, 2005). Teacher C reported that spoken English was her native language and that she learned ASL at a "young age". Based on the researcher's observations, Teacher C used ASL with native-like fluency.
Teacher C communicated effectively with students, as indicated by student engagement and participation during instruction as well as their responses to questions posed by the teacher. For example, when Teacher C asked a question, students responded by raising their hands and answering appropriately or by extending discussion on the topic by making a comment. There were no observed instances of students asking for linguistic clarification during instruction or otherwise.

Participating class. Student demographic and educational data were collected over the course of the entire study and were gathered from a variety of sources, including information documented on student IEPs, information provided by service providers (e.g., speech language pathologists), information provided by administration, and information provided by the teacher. To protect the identity of students, Teacher C and the school requested that all student data be reported in aggregate. The participating class was a 9-12th grade mixed United States History class. Eight students were enrolled in the participating class. Parental consent was provided for all eight students, and student assent was given by six of the eight students. Two of the students declined assent because they did not want to participate in the assessment process. As such, no educational or demographic data were collected for the two non-participating students. Of the six participating students, two were in 9th grade, one was in 10th grade, two were in 11th grade, and one was in 12th grade. At the time of the study, the participating students enrolled in the class ranged from 14 years of age to 19 years of age. The IQ scores for participating students ranged from 90 to 110 based on the Wechsler Intelligence Scale, 4th Edition, as reported in each student's IEP. None of the participating students had documented disabilities other than deafness. The school did report that four of the students had speech impaired/language impaired indicated as areas of eligibility on their IEPs, and that these students came to the school with those areas of identification from their previous educational placements in the public school system.

Audiological information. All participating students were identified as having a bilateral, sensorineural hearing loss. A variety of audiological levels were reported for the students in the participating class, including the following categories: mild/moderate, severe, and profound. The school principal reported that the etiology of hearing loss for one student was a high fever at age 2. For the remaining five students, the principal reported that the etiology of deafness was "hereditary". Two of the participating students had access to ALDs. Of these two students, one had bilateral hearing aids but reported that they had not been used in the past two years, and one used a unilateral cochlear implant. Another student previously had a cochlear implant, but it had been surgically removed.

Language use and preferences. Five of the student participants reported that ASL was their preferred language and that signing was their preferred modality for communicating and for learning. One student reported using both ASL and spoken English, and that spoken English was used more often than ASL. No formal language assessments were conducted at the school, and no information was available regarding students' language proficiency.

Reading levels.
Reading levels for the participating students ranged from 1.2 (reading at the same level as a first grader in their second month of schooling) to 6.5 (reading at the same level as a sixth grader in their fifth month of schooling). All reading levels were based on the WIAT. The school was not able to report which edition of the WIAT was used to measure reading levels.

QRI-4. The instructional reading levels for participating students based on the QRI-4 assessment ranged from 1st grade to 3rd grade. Details about this assessment are described above (Teacher A, QRI-4), and details about the administration of the QRI-4 assessment are included in the data collection section below.

Observed instruction. Observations took place over the course of a full unit of study on "The Kennedy Years." Ten observational sessions were scheduled across two calendar weeks, starting on January 6, 2014 and ending on January 10, 2014. All observations were consecutive. An assessment was given to students on the last observation day. Specific details about data collection procedures are described in the data collection section below.

Teacher D

The school. Teacher D taught in a residential school for the deaf in an Eastern state in the United States. Based on information published on the school's official website, the school was described as a bilingual educational environment where both ASL and English are used as the languages of instruction, "with a mission to support and enhance the communicative, cognitive, and social-emotional skills of all students." During the period of data collection, the school's enrollment included 125 students from across the state, serving students who are deaf, hard of hearing, and deaf-blind from birth to age 21.

Personal and professional background. Teacher D reported having eighteen years of teaching experience, with fourteen years of experience in the social studies deaf education classroom. Teacher D had a baccalaureate degree in History and Political Science. Teacher D reported having two master's degrees, one in Soviet and European Economic and Social History and one in Deaf Education. Teacher D also reported doctoral studies focused on Soviet Military History and had completed all requirements except the dissertation. Like Teacher A, Teacher D reported that he completed the AEBPD certification program. Teacher D reported that he was post-lingually deafened and that he learned ASL at the age of 30 years through courses taught at Gallaudet University and through interacting with the Deaf community. Spoken English was Teacher D's preferred method of communication. All interviews and response-to-instruction meetings were conducted using spoken English. Teacher D reported having better receptive ASL skills than expressive ASL skills, and during the semi-structured interview described his signing as "Pidgin Signed English." Based on the researcher's observations, Teacher D appeared to have difficulty with the execution of signs and with expressing complex ideas and information using sign-based communication. For example, in ASL, each sign is executed using a specific set of parameters (e.g., handshape, location of sign, movement, palm orientation, and non-manual markers such as facial expression). During observations, multiple instances of incorrect sign execution (e.g., violation of one or more parameters) were documented in the researcher's field notes.
For example, the handshape, location, and movement of a sign made by Teacher D may have been correct, but the palm orientation and/or non-manual marker was not. Teacher D did not appear to communicate effectively with students at all times, as indicated by student engagement and participation during instruction as well as their responses to questions posed by the teacher. Teacher D's frequent errors in sign execution resulted in several observed instances of students asking for linguistic clarification during instruction. For example, students were observed turning to one another during Teacher D's instruction to ask questions such as, "Did he say camera or photographer?"

Participating class. Student demographic and educational data were collected over the course of the entire study and were gathered from a variety of sources, including information documented on student IEPs, information provided by service providers (e.g., speech language pathologists), information provided by administrators, and information provided by the teacher. To protect the identity of students, the school requested that all student data be reported in aggregate. The participating class was a 9-10th grade mixed United States History class. Eight students were enrolled in the participating class. Parental consent and student assent were provided for six of the eight students. As such, no educational or demographic data were collected for the two non-participating students. Of the six participating students, four were in 9th grade and two were in 10th grade. At the time of the study, the participating students enrolled in the class ranged in age from 14 years to 18 years. The IQ scores for participating students ranged from low average to high average based on the Wechsler Nonverbal Scale of Ability, as reported in each student's IEP. IQ data were not available for one of the student participants. None of the participating students had documented disabilities other than deafness.

Audiological information. Three of the participating students had bilateral, sensorineural hearing losses within the following categories: moderate, severe, and profound. Audiological levels were not available for three of the participating students. The etiology of hearing loss was either not available or unknown for all of the participating students. Three of the participating students had access to ALDs. Of these three students, one had bilateral hearing aids that were not worn consistently, one had a unilateral cochlear implant but was reported as being a "non-user", and one had a unilateral hearing aid and a unilateral cochlear implant.

Language use and preferences. Four students reported that ASL was their preferred language and that signing was their preferred modality for communicating and learning. One student reported a preference for sign-supported speech and spoken English, and one student reported a preference for using ASL and spoken English. Kendall Communication Proficiency Levels (P-levels) were provided for five of the six participating students. One of the participating students had a P-level of 5+, which translates to a communicative competency equivalent to 4-5 years of age. Two of the participating students had P-levels of 6+, which translates to a communicative competency equivalent to 6-11 years of age. Two of the participating students had P-levels of 7, which translates to a communicative competency equivalent to 11-13 years of age.
Reading levels. Documented reading levels were obtained from the IEP for each of the participating students. Documented reading levels ranged from a 4th grade reading level to a high school grade level equivalent reading level. These reading levels were based on the Informal Reading Inventory (IRI). The IRI is described as a reading survey that is designed to determine the instructional reading level of a student and assesses a student’s strengths and weaknesses in word recognition, word meaning, reading strategies, and reading comprehension (Roe & Burns, 2010). QRI-4. The instructional reading levels for participating students based on the QRI-4 assessment ranged from 2nd to 4th grade. Details about this assessment are described above (Teacher A, QRI-4) and details about the administration of the QRI-4 assessment are included in the data collection section below. Observed instruction. Observations took place over the course of a full unit of study on “The Progressive Era.” Ten observational sessions were scheduled across five calendar weeks starting on February 6, 2014 and ending on March 13, 2014. Due to extreme weather and the block-scheduling calendar used by the school, only two observations could be scheduled each week. An assessment was given to students on the last observation day. Specific details about data collection procedures are described in the data collection section below. Data Collection To ensure quality of data collection (see Yin, 1994), data were collected from multiple sources including: regularly scheduled classroom observations; regularly 68 scheduled viewings of video recorded instruction with teacher participants (as a means of instructional responsiveness and member checking); and semi-structured interviews with teacher participants. All teacher interviews were transcribed and confirmed by member checks with each participant. QRI-4 procedures. Prior to observing instruction, the QRI-4 was administered to all students who provided parental consent and student assent. The researcher administered the QRI-4 in a quiet location designated by each school. Vocabulary assessment procedures. Before students were given a reading passage, the procedures of the QRI-4 indicated that a vocabulary assessment be completed to determine a starting point when administering the reading comprehension portion of the assessment. To determine a starting point for the vocabulary assessment, the QRI-4 recommended choosing a vocabulary list that was at or just below the student’s estimated reading level. This proved to be a challenge with several of the student participants. In several instances, there were discrepancies in the reported reading levels across assessments administered by each of the schools. For example, one student was assessed using the WIAT and scored at the 2.3 grade level (reading at the same level as a second grader in their third month of schooling), but scored at the 4.0 grade level (reading at the same level as a fourth grader before the first month of schooling) on the Bader Reading Inventory that was given during the same academic year. Because of these and other similar discrepancies, it was difficult to ascertain a true grade equivalent reading level for each student prior to administration of the QRI-4. As such, the researcher started the QRI-4 vocabulary assessment at the lowest reported reading level for each student, and 69 used the QRI-4 vocabulary scores as a guideline for choosing the reading passages, as suggested by the QRI-4 manual. 
For the majority of student participants, the QRI-4 vocabulary score was a reliable predictor of their reading level, as indicated by Leslie and Caldwell (2006). In a few cases, vocabulary scores on the QRI-4 were 1-3 grade levels higher than their QRI-4 instructional reading comprehension levels. There were no instances of QRI-4 reading comprehension level scores surpassing QRI-4 vocabulary scores. Once approximate reading levels were determined, student participants were given a short vocabulary assessment using word lists provided in the QRI-4 battery that correlated to their reported reading levels. Each word list contained 20 vocabulary words organized by grade level. Notecards were made for each of the word lists and were organized by grade level and by the order in which they appeared on the scoring checklist. The notecards were organized by color, and a marker was used to print the words on the front of each card. The word was also written on the back of each card in pencil so that the researcher was able to know what word the student was reading without having to flip the card around. Each student was shown one card at a time and asked to use their preferred method of communication to say or sign what the word on the card was. As the students read the words, a printed checklist was used to mark how many correct and incorrect responses were given. When a student hit his/her instructional level (between 14 and 17 words correctly, identified out of 20), the student was excused. Reading comprehension assessment procedures. After the completion of the vocabulary portion of the QRI-4, each student participant was given the reading comprehension portion (in most cases the day after the vocabulary assessment). Before 70 reading, each student was informed that after reading the passage, he/she would be asked to summarize what they read, and then answer some questions. Each student was told to use as much time as they needed to read the passage to themselves and to let the researcher know when they were done. While the students read, the researcher also read the passage silently. After reading the story, the student was asked to provide a summary of what was read. A checklist, included in the QRI-4 assessment materials, was used to indicate how many details of the story each student included in their retelling. Each student was then asked between 5 and 8 comprehension questions (depending on the passage), which was presented to them in their preferred mode of communication. Students responded to each question using their preferred mode of communication, and their answers were recorded on a scoring sheet. Fluency was not measured as it was not appropriate for the focus of the study, and also because it could not be measured accurately. Many of the students used ASL as their primary mode of communication. Because ASL does not map directly onto English, it would be difficult to obtain an accurate fluency score for those students. Class observation procedures. Each observational session was video recorded. To reduce potential distraction during the observed lessons, the video camera was set up before the start of class in an out-of-the-way location designated by each teacher. 
Video recordings of each session began at the official start of class, as identified by the school bell or other alert system (some schools for the deaf use a flashing light system to indicate the start and end of a class period), and continued until the official end of the class period as identified by that same alert system. In instances where the participating classroom included students who were non-participants (i.e., Teacher C's and Teacher D's classes), the video camera was positioned in a way that avoided capturing these students. However, because the classroom is not a static environment, there were some instances when non-participants were captured on video. To protect the identity of non-participants, all video data were carefully reviewed, and any portions that revealed the likeness of a non-participant were edited out using Camtasia. A description of Camtasia is provided in the software and technology section below. During the observational sessions, fieldnotes were recorded using a laptop computer. At the end of each observational session, a summary of observations and noteworthy insights was composed. After each observational session, the video recording of instruction was transferred to a laptop computer and converted into a Mac-compatible format using a free version of Wondershare video conversion software. A description of the software is provided in the software and technology section below.

Response-to-instruction meetings. As a means of contextualizing data and member checking, two response-to-instruction meetings were scheduled with each teacher participant. The purpose of these meetings was to review segments of video recorded instruction and to discuss with each teacher participant their instructional practices and choices. The first meeting was scheduled as close as possible to observation 5, and the last meeting was scheduled as close as possible to observation 10. Scheduling was done at the teacher's convenience, and each meeting typically lasted between 1 hour and 1 hour and 30 minutes. While viewing the video recorded instruction, the researcher asked teachers to explain what was happening and the purpose behind any instructional strategies being demonstrated. This also provided an opportunity for the researcher to gain an initial understanding of how each teacher perceived their students as learners and readers. Videos were viewed on a laptop computer, and handwritten notes were taken during the meeting. The response-to-instruction meetings were video recorded. At the end of each response-to-instruction meeting, a brief summary was composed with key information and insights.

Teacher interviews. After completing the 10 observational sessions, a semi-structured interview (see Appendix G) was scheduled with each participating teacher. Each interview was video recorded and transcribed for the purposes of data analysis. After the interview was transcribed, the interview videos were uploaded to a password-protected file on screencast.com. The interview transcript was sent to each teacher via email with a message asking them to review the transcript and to use the track changes tool in Microsoft Word to make any additions, changes, or general notes. All teachers were given the opportunity to access their video recorded interviews as part of the member checking process. Only one teacher (Teacher A) made this request.
The semi-structured interviews helped to provide a rationale for the instructional and CAR practices used by each of the teacher participants during content area instruction. These interviews also provided information about why a particular practice was valued over another, or at all, and helped to identify connections between teacher beliefs about their students as readers and learners and their instructional choices.

Software and Other Technology Used to Manage Data

Wondershare. Wondershare video conversion software is a program that converts digital video files from a .MOD file (the default file format of the video camera used to collect data for this study) to an .mp4 file so that they can be read by any Macintosh or Windows operating system (Wondershare Software Co., Ltd., 2013). The free version of this software leaves a transparent watermark in the upper left corner of the video, which did not impede data analysis in any way. After each video was converted using Wondershare, it was uploaded to Camtasia, where it was reviewed for any segments containing non-participants.

Camtasia. Camtasia is video capture and editing software that allows the user to edit and adjust imported digital video files (TechSmith Corporation, 2011). In Camtasia, a digital video file can be edited, and portions of video can be trimmed or removed and replaced with a placeholder or text box (TechSmith Corporation, 2011). The video canvas can also be adjusted to remove or crop out portions of the video frame (TechSmith Corporation, 2011). Video segments containing the likeness of a non-participant were edited out by either cropping the video canvas or replacing the video segment with a marker labeled "non-participant interaction". In instances where the video segment had to be replaced with a marker, a brief description of what was happening during that period of time was indicated on each removed segment using a text box. This was done to ensure the most accurate data analysis possible. Camtasia is directly linked to a password-protected screencast.com account (TechSmith Corporation, 2011). This feature allows the user to upload videos edited in Camtasia directly to screencast.com without having to save a copy of the file on the user's hard drive (TechSmith Corporation, 2011). After each video was edited, it was uploaded to Screencast.

Screencast. Screencast.com is an online video storage and sharing site that allows users to create password-protected files for video storage (TechSmith Corporation, 2014). Videos stored in password-protected files in Screencast can only be downloaded by authorized users with permission to access the file, and downloading also requires the user to enter a password (TechSmith Corporation, 2014). Once video transfer was confirmed by Screencast, all video data remaining on the designated laptop were immediately deleted. Raw video data files will be stored at screencast.com for up to 3 years following the conclusion of the study (August, 2017).

Data Coding and Analysis

Video recorded data. All edited video from the observational sessions was uploaded to Dedoose.com for the purposes of data coding and analysis. Dedoose is a cross-platform application used for coding and analyzing qualitative and mixed methods data (Dedoose, 2014). As an Internet-based program, it provides a way for sensitive information to remain off personal technology and makes it easy to share raw data files with an inter-rater (Dedoose, 2014).
In Dedoose, codes are applied directly to digital video files, and digital memos can be attached to any code for the purposes of providing context and/or considerations regarding a specific code (Dedoose, 2014).

Coding system. The coding system used in the present study (see Appendix H) was developed and refined in a pilot study conducted by the author during the previous year (Maiorana-Basas, 2013). Four major categories were used in the coding process: (a) instructional approaches; (b) content area reading skills; (c) other literacy skills; and (d) other content area integration. Instructional behavior codes were broken down into subcategories of: (a) teacher-directed learning; (b) teacher-facilitated learning; (c) peer-to-peer learning; (d) teacher-guided hands-on instruction; (e) student-independent work/activity; (f) teacher giving directions/instructions; (g) non-instructional; and (h) other. Instruction codes and detailed descriptions of these categories and their subcategories can be found in Appendix H. All instruction codes were designed to be mutually exclusive and did not overlap with one another.

Instruction codes were not given a weighted ranking, with the exception of peer-to-peer learning and student-independent work/activity. A weighted ranking of 1 or 2 was applied to peer-to-peer learning codes and student-independent work/activity codes. A weighted ranking of 1 was given when students were doing independent work with minimal or no assistance from the teacher. A weighted ranking of 2 was assigned when students were doing independent work with assistance and support from the teacher.

After applying instruction codes, the durations of each applied code were added together to ensure they were equivalent to the actual length of the session. There were some instances where the time recorded for instruction codes was not exactly equal to the length of the session. This was determined to be a limitation of using Dedoose. Because of the way the program is designed, it was nearly impossible to line up instruction codes exactly, which left fractions of a second between applied instruction codes uncoded. Additionally, Dedoose does not allow the user to apply or calculate codes at a unit of measure smaller than one second. Because of this limitation, a ¼-second gap at each end of an applied code (½ second total per code) was determined to be the maximum acceptable duration between instruction code applications. These small gaps between codes represented less than 0.01% of total instructional time.

The remaining codes (content area reading skills, other literacy skills, and other content integration) were not mutually exclusive and overlapped with other codes. Content area reading (CAR) codes were given a weighted ranking depending on the skill. Activation of background knowledge, building of background knowledge, and inference had weighted rankings of 1-3 based on the intensity of delivery. For example, if a weighted ranking of 1 was applied to an activation of background knowledge code, it indicated that the skill was addressed at the lowest level of intensity. If a weighted ranking of 3 was applied to that code, it indicated that the skill was addressed at the highest level of intensity. All other CAR codes (text features, text structure, and content-specific vocabulary) had weighted rankings of 1-4. These rankings were developed based on findings from a pilot study (Maiorana-Basas, 2013) and paired with themes from relevant literature.
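To make the duration bookkeeping described above concrete, the following is a minimal, illustrative sketch (in Python) of how mutually exclusive instruction-code segments could be checked against the session length and the ¼-second gap tolerance, and of the duration-to-percentage conversion used in the analysis described in the following paragraphs. It is not part of the original analysis, which was carried out within Dedoose; the function names and example values are hypothetical.

```python
# Illustrative sketch only; the study's coding and duration calculations were done in
# Dedoose. Segment times are in seconds, and all names and values here are hypothetical.
MAX_GAP = 0.5  # up to a 1/4-second gap at each end of adjacent instruction codes

def check_session_coverage(segments, session_length):
    """Check that mutually exclusive instruction codes cover a session within tolerance.

    segments: list of (start, end) tuples, in seconds, for one observational session.
    Returns total coded time, the uncoded fraction of the session, and any gaps
    between consecutive codes that exceed the tolerance.
    """
    segments = sorted(segments)
    coded_time = sum(end - start for start, end in segments)
    gaps = [nxt_start - end for (_, end), (nxt_start, _) in zip(segments, segments[1:])]
    oversized_gaps = [g for g in gaps if g > MAX_GAP]
    uncoded_fraction = (session_length - coded_time) / session_length
    return coded_time, uncoded_fraction, oversized_gaps

def percent_of_instructional_time(code_duration_minutes, total_minutes):
    """Convert a code's total duration to a percentage of total instructional time."""
    return 100.0 * code_duration_minutes / total_minutes

# Hypothetical example: a 50-minute (3,000-second) session with three instruction codes
segments = [(0.0, 600.2), (600.3, 1800.0), (1800.1, 3000.0)]
coded, uncoded, oversized = check_session_coverage(segments, 3000.0)
print(f"coded: {coded:.1f} s, uncoded: {uncoded:.4%}, oversized gaps: {oversized}")

# 62 minutes of a skill across 10 sessions totaling 500 minutes -> 12.4%
print(percent_of_instructional_time(62, 500))  # 12.4
```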
Other literacy skills (decoding, CLOZE, and spelling) and other content integration (e.g., mathematics) were not given weighted rankings of intensity, since the main focus of the study was to determine the type and complexity of CAR integration and the types of instructional practices used by each teacher. Because CAR codes were not applied exclusively, a blanket code was applied to individual and overlapping CAR codes in order to determine an accurate duration for CAR integration.

In addition to instructional behavior and CAR codes, the researcher also coded questions asked by the teacher participants during instruction. This included questions that were repeated by the teacher; thus, in instances where a teacher asked a question and then repeated the question two more times, three questioning codes were applied. Instances when a teacher repeated a student's question were not coded. Questioning codes were weighted with a ranking of 1-6 based on level of complexity. Weighted rankings were designed based on Bloom's Taxonomy (Bloom, 1956), with a level 1 ranking applied to knowledge-level questions (e.g., What is this? Who is that?) and a level 6 ranking applied to evaluation-level questions (e.g., How would you have solved the problem of the Bostonians being angry with the soldiers?). Questioning codes were only applied to questions asked during instruction (e.g., teacher-facilitated learning, teacher-directed learning, teacher giving directions, student-independent work/activity, and peer-to-peer learning). Questions not related to instruction (e.g., How was your weekend?) and rhetorical questions were not coded. Rhetorical questions are often used in ASL as a means to emphasize causal relationships between two or more pieces of information (e.g., signing the phrase boy cry why, swing fell knee scrape, which translates to, "The boy cried because he fell off the swing and scraped his knee.") (Valli, Lucas, & Mulrooney, 2005). As such, these types of questions were not included in the overall question count. Only questions asked by the teacher with a clear expectation that students would answer were included.

Each observational session was independently coded using the system described above, after which the total durations for each applied code were added together to determine the percentage of total time spent on each skill/instructional approach. For example, if "activation of background knowledge" occurred for a total duration of 62 minutes across 10 observational sessions totaling 500 minutes, the percentage of integration of that skill was calculated to be 12.4% of total instructional time. Frequency counts and co-occurrences of codes were also included as part of the analysis. Co-occurrences of codes were calculated as any two or more codes that occurred during the same segment of instruction (e.g., prior knowledge and vocabulary, inference and questioning, etc.) and were calculated using the code co-occurrence analysis tool in Dedoose.

Inter-rater agreement measures. In the present study, in addition to triangulation of data sources (field notes, video recorded observational sessions, response-to-instruction meetings, and semi-structured interviews), inter-rater agreement measures were used as a "bias-reducing tool" (Marques & McCall, 2005). The inter-rater agreement procedures, inter-rater agreement scores, and the rationale for the use and exclusion of inter-rater agreement measures in qualitative research in general are explained below.
Procedures for inter-rater agreement. The inter-rater selected for the current study was a hearing, former teacher of the deaf and recent Ph.D. graduate with extensive knowledge in deaf education and literacy. The inter-rater described herself as a "proficient" user of ASL who has been a member of the ASL community for fifteen years. All data were independently coded by the researcher, after which an online random number generator was used to determine which sessions were given to the inter-rater for agreement measures. Twenty percent of the raw video data (2 sessions from each teacher participant) were assigned to the inter-rater. Sessions 6 and 8 were randomly selected for agreement measures for Teacher A, sessions 5 and 7 for Teacher B, sessions 2 and 5 for Teacher C, and sessions 1 and 6 for Teacher D. Once instructional sessions were chosen for inter-rater agreement measures, they were renamed and uploaded to Dedoose. The inter-rater was provided with a copy of the coding system (Appendix H) via Google Docs and via email. In addition, a one-hour training session was conducted via Skype. During this session, the inter-rater was trained in how to log on to Dedoose and how to apply codes and code weights in Dedoose, and was given a basic overview of the codebook along with examples of how codes were defined. During this training, the inter-rater was encouraged to ask clarifying questions and was instructed to use the comments tool in Google Docs to make notes of anything that was not clear or to ask questions about specific codes that might arise after the training session. After the inter-rater was trained, she was provided with a username and password to access the inter-rater agreement sessions created in Dedoose. In addition to the initial training via Skype, a 30-minute supplementary training review video was created as a reference tool for the inter-rater using the screen-capture function of Camtasia. The supplementary video included step-by-step, narrated instructions on how to apply codes in Dedoose. An additional procedure for documenting applied codes using a code documentation form (see Appendix I) was also included in the supplementary video. This additional procedure was included after a system crash in Dedoose, in which some previously analyzed data were damaged and unable to be recovered. Because of the system crash and the inability to conduct a hard drive back-up of analyzed data in Dedoose, the researcher developed the inter-rater agreement code documentation form (see Appendix I) as an additional layer of data security. To calculate inter-rater agreement, the code documentation forms completed by the inter-rater were compared to the code documentation forms completed by the researcher. The inter-rater's recorded time segments for each applied code were compared with the researcher's documented time segments, along with visual inspection of code placement within Dedoose. For example, in Figure 2, example 1, the code applied by the inter-rater starts moments before the code applied by the researcher and ends moments before the code applied by the researcher. In Figure 2, example 2, the code applied by the researcher starts moments after the code applied by the inter-rater and ends moments before the code applied by the inter-rater.
Because of the limitations of code application in Dedoose, in these cases enough of the duration of the code applied by the inter-rater overlapped with the code applied by the researcher, and the codes were therefore determined to be in agreement.

Figure 2: Visual approximation of code applications determined to be in agreement via visual inspection in Dedoose. (In both examples, an "activation of background knowledge level 1" code applied by the inter-rater overlaps nearly completely with the same code applied by the researcher.)

The following diagram is an example of a code application that was determined to be in partial agreement by visual inspection of the data. In the example presented in Figure 3, the durations of the first and second "teacher-directed learning" codes overlap completely with the code applied by the researcher. However, the inter-rater also applied a "directions" code that overlapped with the "teacher-directed learning" code applied by the researcher. This code was determined to not be in agreement. In this example, the two "teacher-directed learning" codes applied by the inter-rater would be rated as in agreement, and the "directions" code would be rated as not in agreement.

Figure 3: Visual approximation of code applications determined to be in partial agreement via visual inspection in Dedoose. (The inter-rater applied "teacher-directed learning," "directions," and "teacher-directed learning" in sequence across a segment the researcher coded entirely as "teacher-directed learning.")

The following diagram is an example of a code application that was determined not to be in agreement by visual inspection of the data. In the example presented in Figure 4, none of the codes applied by the inter-rater were in agreement with the code applied by the researcher. These codes were determined not to be in agreement.

Figure 4: Visual approximation of code applications determined not to be in agreement via visual inspection in Dedoose. (The inter-rater applied "peer-to-peer level 1" followed by "teacher-directed learning" across a segment the researcher coded as "teacher-facilitated learning.")

The percentage of absolute agreement was used to calculate inter-rater agreement (Graham, Milanowski, & Miller, 2012) in the present study. The documented duration of disagreement was calculated and divided by the number of minutes in the session, and the resulting percentage of disagreement was subtracted from 100% in order to determine a percent of agreement. In instances where there was partial agreement on code placement for content area reading (CAR) codes, the documented duration of the disagreement was divided by the total number of minutes of all CAR codes in the session to determine the percent of agreement in the same way. Inter-rater agreement scores. Inter-rater agreement measures for instruction codes were calculated at a mean percentage of absolute agreement of 85.2% across all inter-rater sessions, with a range of 74.8%-90.8%. These agreement scores fell within the range of acceptable levels of agreement as reported by Graham, Milanowski, and Miller (2012), Hartmann (1977), and Stemler (2004). These scores also fell well within the acceptable range for qualitative research (e.g., two-thirds or 66.7% agreement) as suggested by Marques and McCall (2005).
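As a minimal sketch of the agreement calculation described above, under the assumption (implicit in the text) that the percent of agreement is the complement of the proportion of disagreement, the following illustrative function reproduces the kind of session-level scores reported; the 7.4-minute disagreement figure is hypothetical.

def percent_agreement(disagreement_minutes, denominator_minutes):
    # Percent of absolute agreement: the share of the denominator (session minutes for
    # instruction codes, or total CAR-coded minutes for CAR codes) not in disagreement.
    return round(100.0 * (1 - disagreement_minutes / denominator_minutes), 1)

print(percent_agreement(7.4, 50.0))   # 85.2, for a hypothetical 50-minute session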
Inter-rater agreement measures for CAR codes were calculated at an average percentage of absolute agreement of 66.6% across all inter-rater sessions, with a range of 50.0%-83.4%. While the agreement score for the CAR codes did not fall within the range of acceptable levels of agreement as reported by Graham, Milanowski, and Miller (2012), Hartmann (1977), and Stemler (2004), the overall inter-rater agreement score was only one-tenth of a percent lower than the recommended level suggested by Marques and McCall (2005). The CAR agreement scores in three out of eight sessions were below 66.7%, and the majority of the disagreements occurred in the placement of "activating background knowledge" and "building background knowledge" codes. There are several reasons why the agreement scores for CAR codes were low. First is the external rater's lack of context, as described by Marques and McCall (2005). The sessions assigned to the inter-rater for external coding were not sequential and were taken out of the context of the larger unit, possibly having an effect on how certain codes that require context (e.g., activating background knowledge and building background knowledge) were applied. Second, the inter-rater's coding was compared to the coding of the researcher, and not to that of another external rater, which may be another possible reason for the lower agreement scores (Marques & McCall, 2005). Third, the majority of disagreements in CAR code application were representative of instances when the inter-rater did not apply a code to a specific segment of the video and the researcher did. There were very few instances when the inter-rater applied a code that was in direct conflict with a code placed by the researcher during the same segment of video, further supporting the argument that the low reliability scores could have been due to the inter-rater's lack of context. It could also be argued that while activating background knowledge and building background knowledge are two different skills, they are often intertwined with one another, making it difficult to determine exactly when one is ending and the other is beginning. Because these skills are not as "black and white" as other content area reading codes (e.g., using a text feature, teaching a content-specific vocabulary word, etc.) or specific behaviors of instruction (e.g., giving directions to students, giving a lecture, having a discussion, etc.), a lower reliability score would be expected, as different observers may package the skills differently (Davey, Gugiu, & Coryn, 2010) due to potential ambiguity in separating the two. It may also be important to note that two of the three sessions with the lowest reliability scores for the CAR codes were from Teacher D's observational sessions. As mentioned previously, there were many times when it was difficult to understand what Teacher D was saying during instruction due to less than proficient communication skills using sign language. The challenge of trying to understand what was being said, in combination with a lack of context, may have affected the inter-rater agreement results for the CAR codes for those sessions. Use of inter-rater agreement in qualitative research. There are mixed opinions on the use of inter-rater agreement measures in qualitative research.
Some researchers (Armstrong, Gosling, Weinman, & Marteau, 1997; Morse, Barrett, Mayan, Olson, & Spiers, 2002) have suggested that quantitative methods for ensuring rigor in research, such as inter-rater agreement measures, may not be appropriate for qualitative research. Armstrong and colleagues (1997) argue that it is unrealistic to expect an external rater to come to the same insights and conclusions as the primary investigator in a qualitative study. While inter-rater agreement measures are one way of reducing researcher bias in qualitative studies, data triangulation in combination with transparent and detailed documentation of procedures is just as appropriate for ensuring rigor and reliability in qualitative research (Given & Saumure, 2008; Marques & McCall, 2005). Others state that ensuring rigor and trustworthiness of qualitative research is the responsibility of the researcher, not of external raters (Morse et al., 2002). The opposing side of the argument suggests that utilizing quantitative methods for establishing the reliability of qualitative findings may be helpful for eliminating bias and strengthening findings (Marques & McCall, 2005). However, there still remains a great deal of unexplored territory in determining appropriate methods for calculating reliability in qualitative research, and presently there are more questions than answers (Marques & McCall, 2005). Marques and McCall (2005) state that while inter-rater agreement measures have a place in all types of research, they acknowledge that there is still much work to be done in determining the role of inter-rater agreement in qualitative research, and that qualitative researchers who employ these strategies for ensuring rigor and reliability are viewed as "pioneers." Minimum standards for inter-rater agreement in qualitative research. In quantitative research, an acceptable range for inter-rater agreement measures using percentage of absolute agreement is between 75% (representing the minimum level of agreement) and 90% (representing a high level of agreement) (Graham, Milanowski, & Miller, 2012; Hartmann, 1977; Stemler, 2004). However, there remains a considerable amount of variation in what is considered an acceptable standard of agreement between raters in qualitative research (Marques & McCall, 2005). McMillian and Schumacher (2001) report that there is no easy answer to the question of what is an appropriate minimum standard for inter-rater agreement in qualitative research; they conclude that the answer to this question is, "it depends." Marques and McCall (2005) attempted to determine these minimum standards by investigating several sources, including Isaac and Michael (1997), McMillian and Schumacher (2001), and Tashakkori and Teddlie (1998). In addition to these sources, Marques and McCall (2005) also conducted an extensive search across academic databases such as ProQuest and other mainstream search engines, including Google. As a result of these efforts, Marques and McCall (2005) found many inconsistencies in the literature regarding acceptable levels of inter-rater agreement in qualitative research, with levels ranging from 66.7% (two-thirds) to 80%. They also found that higher levels of agreement occurred when inter-rater agreement measures were conducted by two raters, neither of them being the researcher.
They point out that lower levels of agreement were found in studies where inter-rater agreement measures were conducted by an external rater and compared to the researcher's analysis. They go on to note that lower levels of agreement in these instances could partly be due to the researcher having a larger bank of knowledge and context about the study and the data as a whole. As such, Marques and McCall (2005) argue that, depending on the nature of the study, and taking into consideration that inter-raters typically only review a small percentage (20%) of the data, obtaining a level of agreement higher than two-thirds (66.7%) might be difficult to achieve in qualitative research. Chapter 4 provides the results of the observations of the four teachers regarding their instructional approaches and CAR integration. The chapter begins with data that report the frequency and duration of instructional approaches for all teachers, as well as the frequency, duration, and intensity of CAR integration for all teachers. Results are contextualized with information provided in the semi-structured interviews, response-to-instruction meetings, and field notes.

CHAPTER 4
RESULTS

Instrumental case study design (Stake, 2000) was used to investigate the instructional approaches and content area reading (CAR) integration of four teachers of the deaf in upper-grade social studies classrooms. Content area reading skills are the skills needed to read and comprehend content area texts and include background knowledge (activation of background knowledge and building of background knowledge), text structure, text features, content-specific vocabulary, and inference. Few studies have addressed the topic of CAR in the deaf education classroom (Andrews & Mason, 1991; Bringham & Hartman, 2010; Diebold & Waldron, 1998; Howell & Luckner, 2003; Negin, 1987). As such, the aim of the present study was to describe how teachers in these classrooms address CAR. Results will establish the foundation for designing future educational interventions. The following research questions guided the present study:
• In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction?
o What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom?
o What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom?
• What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies?
o Is there a relationship between these teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom?
o Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom?
Four upper-grade social studies teachers from three residential schools for the deaf participated in the present study. Ten lessons across a full unit of study were observed and video recorded for each teacher. Results from the study revealed an inconsistency in the use of CAR during instruction across all four teachers, with one teacher including CAR skills more often than the three others.
Each teacher's instruction and the CAR skills targeted during observed instructional sessions are described below. CAR skills were not used frequently, for long durations of instructional time, or at high levels of intensity during the observed instructional sessions. Additionally, the majority of questioning by teacher participants (tied to the skill of inference) also occurred primarily at the lowest levels of intensity (knowledge and comprehension). Results are organized below as an overall comparison across all four teachers by their instructional approaches, CAR integration, and preconceptions about students as readers and learners. Contextualization and insights from field notes, teacher interviews, and response-to-instruction meetings are included.

Overall Comparison Results

Instructional approaches and content area reading integration of all teachers. The following figures and tables provide a side-by-side comparison of the instructional approaches and CAR integration of each teacher. See Appendix J for an omnibus table of overall results. Instructional approaches duration. The figure below (Figure 5) is a side-by-side comparison of the duration of instructional approaches for each teacher. All teachers incorporated socially-mediated instruction (as indicated by teacher-facilitated learning and peer-to-peer learning) to some extent. Teacher A had the largest percentage of instructional time dedicated to socially-mediated instruction when compared to other teachers. Teachers B, C, and D had larger percentages of instructional time dedicated to student independent work activities when compared to Teacher A. Teacher D had the largest percentage of instructional time dedicated to giving students directions. All teachers had similar percentages of instructional time categorized as non-instructional. Instructional approaches are contextualized with observational and interview data in the section (Overall Results) below.

Figure 5: Total duration of instructional approaches (expressed as percentage of total instructional time), shown as one chart per teacher: Teacher A (500 min total observed instruction), Teacher B (489 min total analyzed instruction*), Teacher C (880 min total observed instruction), and Teacher D (730 min total observed instruction). *One of the video recorded sessions (Teacher B, session 10) was damaged and not included in the data analysis; thus, Teacher B's instructional practice is based on 9 sessions. TFL- Teacher-Facilitated Learning; TDL- Teacher-Directed Learning; P2P- Peer-to-Peer Learning; H-O- Hands-On; IND- Student Independent Work/Activities; DIR- Teacher Giving Directions/Instructions; NI- Non-Instructional; O- Other. See Appendix H for definitions and descriptions of instructional behaviors.

Instructional approaches frequency. The figure below (Figure 6) is a side-by-side comparison of the frequency of instructional approaches for each teacher across all observational sessions. The frequency of teacher-directed learning is greater than the frequency of teacher-facilitated learning for all teachers.
Teacher D had the highest frequencies of teacher-directed learning, peer-to-peer learning, student independent work activities, non-instructional time, and other. Instructional approaches are contextualized with observational and interview data in the section (Overall Results) below.

Figure 6: Frequency of instructional approaches, shown as one bar chart per teacher (Teacher A, Teacher B*, Teacher C, and Teacher D) with frequency counts for TFL, TDL, P2P, IND, DIR, NI, and Other. *Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). See Appendix H for definitions and descriptions of instructional behaviors.

Duration of content area reading integration. The figure below (Figure 7) is a side-by-side comparison of the duration of CAR integration for each teacher. The two middle school teachers (Teacher A and Teacher B) had larger percentages of instructional time dedicated to CAR than did the high school teachers (Teacher C and Teacher D). Teacher A dedicated the largest percentage of instructional time to CAR as compared to the other teachers. Teachers B, C, and D had larger percentages of instructional time dedicated to teacher-directed learning and less dedicated to CAR when compared to Teacher A. Content area reading integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 7: Total duration of CAR integration (expressed as percentage of total instructional time): Teacher A (CAR, 31.6%; 500 min total observed instruction), Teacher B (CAR, 21.6%; 489 min total analyzed instruction), Teacher C (CAR, 10.9%; 880 min total observed instruction), and Teacher D (CAR, 7.5%; 730 min total observed instruction). CAR- Content Area Reading. See Appendix H for definitions and descriptions of CAR behaviors.

Duration of background knowledge. The figure below (Figure 8) is a side-by-side comparison of the duration of the CAR skills related to background knowledge (activation of background knowledge and building of background knowledge). Compared to all other CAR skills, all teachers addressed the skills of background knowledge (activation of background knowledge and building of background knowledge) for the largest percentage of instructional time. Within this time, all teachers, with the exception of Teacher C, spent more instructional time on activation of background knowledge than on building of background knowledge. Background knowledge integration is contextualized with observational and interview data in the section (Overall Results) below.
Figure 8: Total duration of activation and building of background knowledge (expressed as percentage of total instructional time): Teacher A (ABGK, 12.4%; BBGK, 3.7%), Teacher B (ABGK, 8.5%; BBGK, 3.6%), Teacher C (ABGK, 3.0%; BBGK, 3.7%), and Teacher D (ABGK, 0.9%; BBGK, 1.9%). ABGK- Activation of Background Knowledge; BBGK- Building Background Knowledge; CAR- Content Area Reading. See Appendix H for definitions and descriptions of CAR behaviors.

Frequency and intensity of background knowledge. The figure below (Figure 9) is a side-by-side comparison of the frequency and intensity of the CAR skill, background knowledge (including activation of background knowledge and building of background knowledge). Teacher A addressed the CAR skill, activation of background knowledge, more frequently and at higher levels of intensity than did any other teacher. Teacher A addressed the CAR skill, building of background knowledge, at higher levels of intensity than did any other teacher. Teacher B addressed the CAR skill, building of background knowledge, at the lowest level of intensity more frequently than did any other teacher. Teacher D addressed the CAR skills of background knowledge (including activation of background knowledge and building of background knowledge) less frequently than did any other teacher. Background knowledge integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 9: Frequency and intensity of activation of background knowledge and building of background knowledge, shown as one bar chart per teacher with frequency counts for ABGK and BBGK at intensity levels 1-3. ABGK- Activation of Background Knowledge; BBGK- Building Background Knowledge; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration and intensity of background knowledge. The table below (Table 2) is a side-by-side comparison of the duration and intensity for the CAR skill, background knowledge (including activation of background knowledge and building of background knowledge).
Teacher A dedicated more instructional minutes to the CAR skill of background knowledge, and addressed it at the highest levels of intensity, as compared to Teachers B, C, and D. Background knowledge integration is contextualized with observational and interview data in the section (Overall Results) below.

Table 2
Duration and Intensity (Expressed in Total Minutes Across All Observations) of Activation and Building of Background Knowledge

Participant   ABGK Lvl 1   ABGK Lvl 2   ABGK Lvl 3   BBGK Lvl 1   BBGK Lvl 2   BBGK Lvl 3
Teacher A     38.0         7.4          16.6         11.3         5.2          2.0
Teacher B*    32.6         3.1          5.7          13.8         3.9          0.0
Teacher C     18.9         4.0          3.1          7.4          25.5         0.0
Teacher D     5.9          0.6          0.0          6.1          8.3          0.0

*Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). ABGK- Activation of Background Knowledge; BBGK- Building of Background Knowledge; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration and intensity of text structure. The table below (Table 3) is a side-by-side comparison of the duration and intensity of the CAR skill, text structure. Teacher C was the only teacher who addressed the skill of text structure. Text structure was only addressed at the lowest level of intensity and for a short duration. Text structure integration for Teacher C is contextualized with observational and interview data in the section (Overall Results) below.

Table 3
Duration and Intensity (Expressed in Total Minutes Across All Observations) of Text Structure

Participant   TS Lvl 1   TS Lvl 2   TS Lvl 3   TS Lvl 4
Teacher A     0.0        0.0        0.0        0.0
Teacher B*    3.1        0.0        0.0        0.0
Teacher C     0.0        0.0        0.0        0.0
Teacher D     0.0        0.0        0.0        0.0

*Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). TS- Text Structure; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration of text features. The figure below (Figure 10) is a side-by-side comparison of the duration of the CAR skill, text features. Teacher A dedicated the largest percentage of instructional time to the CAR skill of text features as compared to Teachers B, C, and D. Text feature integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 10: Total duration of text features (expressed as percentage of total instructional time): Teacher A (TF, 7.3%), Teacher B (TF, 5.4%), Teacher C (TF, 2.3%), and Teacher D (TF, 2.5%). TF- Text Features; CAR- Content Area Reading. See Appendix H for definitions and descriptions of CAR behaviors.

Frequency and intensity of text features. The figure below (Figure 11) is a side-by-side comparison of the frequency and intensity of the CAR skill, text features. Teachers A, B, and C addressed the CAR skill, text features, for similar total frequencies (79 times, 70 times, and 73 times, respectively).
Teacher A addressed the CAR skill, text features, more frequently at an intensity of level 3 than did the other teachers. Teacher B was the only teacher to address the CAR skill, text features, at an intensity of level 4. Teacher D addressed the CAR skill, text features, less frequently than did the other teachers (48 times). All teachers typically addressed the CAR skill, text features, at the lowest level of intensity. Text feature integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 11: Frequency and intensity of text features, shown as one bar chart per teacher with frequency counts for TF at intensity levels 1-4. TF- Text Features; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration and intensity of text features. The table below (Table 4) is a side-by-side comparison of the duration and intensity for the CAR skill, text features. Teacher A dedicated more instructional minutes to the CAR skill, text features, than did Teachers B, C, and D. Teacher B was the only teacher who addressed the CAR skill, text features, at the highest level of intensity. Teacher A dedicated the most instructional minutes to the CAR skill, text features, at an intensity of level 3. Teacher D dedicated the fewest instructional minutes to the CAR skill, text features. Text feature integration is contextualized with observational and interview data in the section (Overall Results) below.

Table 4
Duration and Intensity (Expressed in Total Minutes Across All Observations) of Text Features

Participant   TF Lvl 1   TF Lvl 2   TF Lvl 3   TF Lvl 4
Teacher A     18.1       2.0        16.6       0.0
Teacher B*    14.8       5.9        5.1        0.8
Teacher C     14.2       5.7        0.6        0.0
Teacher D     9.9        3.5        4.1        0.0

*Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). TF- Text Features; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration of content-specific vocabulary. The figure below (Figure 12) is a side-by-side comparison of the duration of the CAR skill, content-specific vocabulary. Teacher A dedicated the largest percentage of instructional time to the CAR skill, content-specific vocabulary, as compared to Teachers B, C, and D. The middle school teachers (Teacher A and Teacher B) dedicated larger percentages of instructional time to the CAR skill, content-specific vocabulary, than did the high school teachers (Teacher C and Teacher D). Teacher D dedicated the least amount of instructional time to the CAR skill, content-specific vocabulary. Content-specific vocabulary behaviors are contextualized with observational and interview data in the section (Overall Results) below.
Figure 12: Total duration of content-specific vocabulary (expressed as percentage of total instructional time): Teacher A (CSV, 10.8%), Teacher B (CSV, 5.3%), Teacher C (CSV, 1.9%), and Teacher D (CSV, 1.5%). CSV- Content-Specific Vocabulary; CAR- Content Area Reading. See Appendix H for definitions and descriptions of CAR behaviors.

Frequency and intensity of content-specific vocabulary. The figure below (Figure 13) is a side-by-side comparison of the frequency and intensity of the CAR skill, content-specific vocabulary. Teacher A addressed the CAR skill, content-specific vocabulary, more frequently and at higher levels of intensity than did any other teacher. Teacher D addressed the CAR skill, content-specific vocabulary, less frequently and at lower levels of intensity than did any other teacher. Content-specific vocabulary integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 13: Frequency and intensity of content-specific vocabulary, shown as one bar chart per teacher with frequency counts for CSV at intensity levels 1-4. CSV- Content-Specific Vocabulary; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration and intensity of content-specific vocabulary. The table below (Table 5) is a side-by-side comparison of the duration and intensity for the CAR skill, content-specific vocabulary. Teacher A dedicated the most instructional minutes to the CAR skill, content-specific vocabulary, and at higher levels of intensity, when compared to Teachers B, C, and D. Teacher D spent the least amount of instructional minutes on the CAR skill, content-specific vocabulary, and at the lowest levels of intensity, when compared to Teachers A, B, and C. The middle school teachers (Teachers A and B) spent more instructional minutes on the CAR skill, content-specific vocabulary, at higher levels of intensity than did the high school teachers (Teachers C and D). Content-specific vocabulary behaviors are contextualized with observational and interview data in the section (Overall Results) below.
Table 5
Duration and Intensity (Expressed in Total Minutes Across All Observations) of Content-Specific Vocabulary

Participant   CSV Lvl 1   CSV Lvl 2   CSV Lvl 3   CSV Lvl 4
Teacher A     32.8        7.0         8.1         6.1
Teacher B*    23.2        1.2         0.0         1.8
Teacher C     14.1        1.6         0.2         0.5
Teacher D     9.4         1.1         0.0         0.0

*Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). CSV- Content-Specific Vocabulary; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration of inference. The figure below (Figure 14) is a side-by-side comparison of the duration of the CAR skill, inference. Teacher A dedicated the largest percentage of instructional time to the CAR skill, inference. Teacher C dedicated the smallest percentage of instructional time to the CAR skill, inference. Inference integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 14: Total duration of inference (expressed as percentage of total instructional time): Teacher A (INF, 6.4%), Teacher B (INF, 2.3%), Teacher C (INF, 0.8%), and Teacher D (INF, 1.5%). INF- Inference. See Appendix H for definitions and descriptions of CAR behaviors.

Frequency and intensity of inference. The figure below (Figure 15) is a side-by-side comparison of the frequency and intensity of the CAR skill, inference. Teacher A addressed the skill of inference more frequently than did any other teacher. Teacher D was the only teacher who addressed the CAR skill, inference, at the highest level of intensity (global inferences). Inference integration is contextualized with observational and interview data in the section (Overall Results) below.

Figure 15: Frequency and intensity of inference, shown as one bar chart per teacher with frequency counts for INF at intensity levels 1-3. INF- Inference; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Duration and intensity of inference. The table below (Table 6) is a side-by-side comparison of the duration and intensity for the CAR skill, inference. Teacher A dedicated the most instructional minutes to the CAR skill, inference, as compared to Teachers B, C, and D. Teacher C dedicated the fewest instructional minutes to the CAR skill, inference, as compared to Teachers A, B, and D. Teacher A dedicated more instructional minutes to elaborative inferences (level 2) than did any other teacher.
Teacher D was the only teacher who dedicated instructional time to global inferences (level 3). Integration of inference is contextualized with observational and interview data in the section (Overall Results) below.

Table 6
Duration and Intensity (Expressed in Total Minutes Across All Observations) of Inference

Participant   INF Lvl 1   INF Lvl 2   INF Lvl 3
Teacher A     25.5        6.4         0.0
Teacher B*    9.9         1.4         0.0
Teacher C     4.2         2.9         0.0
Teacher D     5.6         4.4         0.7

*Teacher B's data are based on 9 sessions (session 10 was damaged and not included in the analysis). INF- Inference; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of CAR behaviors.

Frequency and complexity of questioning. The figure below (Figure 16) is a side-by-side comparison of the frequency and complexity of questioning for each teacher. Teacher A asked the most questions (688) of all the teachers. Teacher C asked the fewest questions (213) of all the teachers. Teacher B asked 447 questions and Teacher D asked 651 questions. Teacher A asked more knowledge (level 1), application (level 3), and evaluation (level 6) questions as compared to Teachers B, C, and D. Teacher D asked more comprehension (level 2) and analysis (level 4) questions as compared to Teachers A, B, and C. Knowledge level questions (level 1) were asked more frequently than any other type of question across all teachers. Synthesis level questions (level 5) were not asked by any teacher during the study. Questioning behaviors are contextualized with observational and interview data in the section (Overall Results) below.

Figure 16: Frequency and complexity of questioning, shown as one bar chart per teacher with frequency counts for question levels 1-6. Question levels are based on Bloom's Taxonomy: Level 1 represents knowledge questions; Level 2, comprehension questions; Level 3, application questions; Level 4, analysis questions; Level 5, synthesis questions; and Level 6, evaluation questions. See Appendix H for definitions and descriptions of each level.

Summary of overall comparison results. The previous section summarized overall results as a side-by-side comparison of each teacher with regard to the frequency, duration, and intensity of instructional approaches and CAR integration over the course of an instructional unit. In the following section, these results are contextualized with observational and interview data.

Within Teacher Results

The following section organizes results by teacher. For each teacher, a description of their instructional approaches, CAR integration, and preconceptions about their students as readers and learners is provided and contextualized using insights from field notes, teacher interviews, and response-to-instruction meetings.
Teacher A

As described in Chapter 3, Teacher A was pre-lingually deaf and used American Sign Language as her preferred method of communication. Teacher A reported having a total of seventeen years of teaching experience, fourteen of those years teaching social studies in the deaf education classroom. Teacher A reported that her highest level of education was a post-graduate certification in Educational Leadership, and that she held a baccalaureate degree in Elementary Education and a master's degree in Special Education with a focus in Deaf Education. Teacher A also reported that she was "highly qualified" in social studies based on the NCLB standards (U.S. Department of Education, 2005). Instructional approaches. Teacher A's United States History class for middle school students was designed as a traditional, full-year course that met daily for 50 minutes, Monday through Friday. A total of 500 instructional minutes were observed and analyzed across a full instructional unit. The observed instructional practices of Teacher A reflected both directive instruction (such as lecture) and socially-mediated instruction (such as discussion and peer-to-peer learning) fairly equally. Teacher A devoted 28.2% of total observed instructional time to directive instruction (coded as teacher-directed learning) and 33.7% of total observed instructional time to socially-mediated instruction (coded as teacher-facilitated learning, 25.1%, and peer-to-peer learning, 8.6%). The remaining instructional time was categorized as student independent work (11.4%), teacher giving directions (3.7%), non-instructional time (19.5%), and other (3.5%), explained further below. Teacher A's instruction was centered on a common text and included teacher-led discussions and opportunities for students to examine and dissect text. Teacher A often asked students to read text from these sources and to translate the displayed text from English print to American Sign Language (ASL). As students did so, Teacher A followed up with questioning (e.g., asking what the text meant). Text was further dissected through discussions (with Teacher A or with peers). Teacher A facilitated these discussions by using statements and questions such as: "Tell me more.", "Does that make sense?", "Show me where it says that in the text.", and "Why do you think that?". During the semi-structured interview and response-to-instruction meetings, Teacher A reported that socially-mediated instruction, such as class discussion, was favored over traditional lecture, stating that, "It allows [the students] to draw on their language skills as well as expand their thinking." Teacher A's lessons were systematic and followed a whole-to-part sequence, with each lesson connecting to and building off of the previous one. New lessons often began with activation of students' background knowledge through a short review of previously covered material. During these brief reviews, Teacher A would ask students to summarize what they had learned. Teacher A's instructional style was interactive and engaging. Students were observed as being on task and actively involved for the majority of instruction. The culture of the classroom appeared to be one in which students shared in the learning process. Students often got up, wrote on the board, acted out historical events that were being discussed, and made references to information in their books during instruction.
Teacher A indicated during the semi-structured interview that discussion and hands-on activities were an important part of her instruction, although hands-on instructional activities were not observed during Teacher A's instruction. She commented that using discussions and hands-on instructional activities helped students make connections to what they were learning, and that the types of instructional practices she used were related to the language abilities and thinking skills of the students in her classroom. Teacher A indicated that the students in the participating classroom had the highest language and thinking skills of all the classes she taught, and thus more discussion was used over hands-on learning activities. She expressed that discussions were particularly important because students did not "have the benefits that hearing kids do, like watching TV or listening to people having discussions." Teacher A also reported, however, that she was torn between using discussion and directive teaching activities over hands-on instructional activities. Teacher A felt that although hands-on learning activities contributed to more authentic learning, these kinds of activities took up a great deal of instructional and planning time, and they required additional financial support, which she did not have. Student independent learning activities in Teacher A's classroom were designed with a clear and specific purpose, centered on text. The majority of student independent learning activities consisted of reading content texts, answering comprehension questions, and responding to writing prompts. Teacher A used leveled readers specific to the unit topic that matched each student's documented reading level (only one student in Teacher A's class used a grade-level textbook). During student independent learning activities, Teacher A was actively present and monitoring student progress. Teacher A reported that students often struggled with answering questions about the text and that it was during independent learning activities that students' struggles with reading became very apparent. During a response-to-instruction meeting, Teacher A expressed concerns that some students might not really understand the reading and simply lifted portions of text directly from the book to answer comprehension questions. Teacher A commented that students were constantly asking her, "What do I put here?" when doing independent work. It was for this reason that Teacher A indicated a preference for providing opportunities for discussion and activities that support the skills related to background knowledge (e.g., showing a film), as she felt that these types of activities helped students "visualize" content concepts and make connections while reading. Non-instructional time occurred primarily at the beginning and end of observational sessions, roughly totaling 10 minutes per session (19.5% of total observed instruction). Activities during these times included announcements, collecting assignments, passing out materials, transitions between activities, and students entering and exiting the classroom. The 3.5% of instructional time categorized as "other" was due to a school-wide activity that occurred during the official class meeting time.
Table 7 below shows the total number of instructional minutes dedicated to each category (teacher-facilitated learning activities, teacher-directed learning activities, peer-to-peer learning activities, hands-on learning activities, independent learning activities, teacher giving directions, non-instructional time, and other) by Teacher A over the ten observed instructional sessions.

Table 7
Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher A

TFL     TDL     P2P Lvl 1   P2P Lvl 2   H-O   IND Lvl 1   IND Lvl 2   DIR    NI     O
125.5   140.7   26.6        16.7        0.0   13.3        43.4        18.6   97.7   17.5

TFL- Teacher-Facilitated Learning; TDL- Teacher-Directed Learning; P2P- Peer-to-Peer Learning; H-O- Hands-On; IND- Student Independent Work/Activities; DIR- Teacher Giving Directions/Instructions; NI- Non-Instructional; O- Other; Lvl- Level of Intensity. See Appendix H for definitions and descriptions of instructional behaviors.

Integration of content area reading (CAR) skills. Teacher A had the highest number of CAR behaviors observed over the course of her instructional unit. Teacher A also had the highest percentage (31.6%) of instructional time (duration) dedicated to CAR skills integration. The frequency, duration, and intensity of CAR integration for Teacher A are provided in the following section. Background knowledge. Teacher A used socially-mediated instructional practices such as class discussions, questioning, reenacting of historical events, and viewing historical images and videos when integrating the skills of background knowledge. Background knowledge refers to what an individual knows about a specific topic (Dochy, 1994; Jonassen & Grabowski, 1993) and supports reading comprehension by helping readers make connections to what they know (Anderson & Pearson, 1984; Dymock & Nicholson, 2010; Kintsch, 2004; Fisher & Frey, 2009; Pressley, 2002; Stahl, 2008). Teacher A had the highest percentage (12.4%) of instructional time dedicated to the activation of background knowledge, addressing this skill at higher levels of intensity than did any of the teachers who participated in the study (see Figure 9). Activation of background knowledge was typically followed by the CAR skill, building of background knowledge. Teacher A, along with Teacher C, dedicated the highest percentage of instructional time (3.7%) to the CAR skill, building background knowledge, as compared to all of the teachers in the study (Teacher A and Teacher C dedicated equal percentages of instructional time to building background knowledge). Teacher A integrated building background knowledge primarily at an intensity level of 1 (e.g., making an attempt to find out what students know and what they do not know by asking knowledge level questions), and was the only teacher who integrated this skill at an intensity level of 3 (e.g., brainstorming about a topic and acting out historical events). During the semi-structured interview, Teacher A emphasized the importance of building background knowledge, commenting that the use of visual information such as videos, photos, and other media was key in helping students make connections for activation of background knowledge and building of background knowledge. Text structure. Text structure refers to how a text is organized (Meyer & Rey, 2011; Vacca & Vacca, 2010) and supports readers by helping them make connections between concepts presented in text (Akhandi, Malayeri, & Samad, 2011; Vacca & Vacca, 2010).
The CAR skill, text structure, was not addressed by Teacher A during observed instructional sessions. Text features. Text features are visual cues that assist readers in understanding content area texts (Bluestein, 2010; Kelley & Clausen-Grace, 2010; Moss, 2005), and include: table of contents; index; glossary; bold words; italicized words; bullet points; titles; headings; graphs; charts; maps; labeled diagrams; and timelines (Bluestein, 2010; Kelley & Clausen-Grace, 2010; Moss, 2005; Nolan, 1991). The most common text features used by Teacher A were images, captions below images, maps, and charts. Teacher A did have one observed instance of addressing the glossary and table of contents with students during independent work; however, the use and purpose of these types of text features were not modeled during teacher-directed learning activities or teacher-facilitated learning activities. Teacher A did not address text features such as bold or italicized words during observed instruction. When presenting a text feature to the class, Teacher A typically used a statement such as, "This is..." or a question such as "Who is…?" Compared to the other teachers, Teacher A spent the largest percentage of instructional time (7.3%) analyzing text features. Content-specific vocabulary. Content-specific vocabulary words are the technical terms and phrases particular to certain disciplines (Yopp et al., 2009), and they play a critical role in both reading comprehension and academic success (Vacca & Vacca, 2008). Compared to the other teachers, Teacher A spent the largest percentage (10.8%) of instructional time on the CAR skill of content-specific vocabulary. Teacher A addressed content-specific vocabulary at higher levels of intensity as compared to Teachers B, C, and D (see Table 5). Content-specific vocabulary was integrated primarily within the context of text, a best practice identified by Easterbrooks and Stephenson (2008). Teacher A addressed the skill of content-specific vocabulary largely during socially-mediated instruction, such as discussion, and through conversations about multiple meanings of words as well as working through students' misconceptions about the meanings of words. For example, Teacher A discussed how certain individuals were "appointed" to a leadership position, then asked students what the word "appoint" meant in this context. One student replied that "appoint" was the opposite of "disappoint". Teacher A used that student's comment as an opportunity to compare the words and discuss their morphographic components to help students gain a better understanding of the meanings of each word. Inference. The skill of inference requires a reader to use two or more pieces of information to uncover details implied by the author (Kispal, 2008). Inferences can be made within textual elements (text connecting inferences), across textual elements (coherence inferences), and across an entire text (global inferences) (Kispal, 2008). Teacher A dedicated the largest percentage of instructional time (6.4%), as compared to the other teachers, to inference (see Table 6). As with the other CAR skills, inference co-occurred primarily with socially-mediated instruction, including teacher-facilitated learning, discussion, and peer-to-peer learning activities that occurred as a result of students discussing and debating their inferences with one another. Teacher A also addressed the CAR skill of inference at higher levels of intensity than did any of the teachers in the study.
Teacher A often asked higher order questions such as “Why do you think that?” and “What do you think they were feeling at this moment?” when discussing historical events. After posing these questions, Teacher A facilitated discussions with students and provided opportunities for students to have discussions about what they inferred from the text and text features presented during instruction. The longer duration of inference observed during Teacher A’s instruction is most likely due to the length of time Teacher A allowed for discussion of inferences and points of view with regards to content information. Complexity of questioning. Because of the relationship between questioning and the skill of inference (Kispal, 2008), every observed instance of a question was categorized and included in the analysis. Questions were categorized by level of intensity based on Bloom’s Taxonomy, level 1 corresponding with knowledge level questions, level 2 corresponding with comprehension questions, level 3 corresponding with application questions, level 4 corresponding with analysis questions, level 5 corresponding with synthesis questions, and level 6 corresponding with evaluation questions (see Appendix H). Teacher A asked the most questions (688) of any teacher who participated in the study (see Figure 16). The majority of Teacher A’s questions (46.7%) were categorized at the knowledge level (level 1), 27% were categorized at the 137 comprehension level, and 23% were categorized at the application level (level 3). Although Teacher A did include analysis (level 4) and evaluation (level 6) questions during observed instructional sessions, each represented only a small percentage of the total questions asked (1.9% and 1.3%, respectively), averaging about one analysis and one evaluation question per observed session. However, longer durations of instructional time were dedicated for students to discuss and answer these types of questions, whereas the duration of instructional time dedicated to answering lower level questions was much shorter. This may be one reason for the lower frequency of these higher-level types of questions, that is, they take longer to answer. Teacher A was not observed asking any synthesis (level 5) questions during the study. Summary. Instruction in Teacher A’s classroom was primarily learner-centered and the teacher and students worked together as a community of learners. Teacher A’s instruction reflected a balanced approach integrating both direct instruction and sociallymediated instruction for similar percentages of instructional time. She provided multiple opportunities for discussion, and asked higher-level questions (such as analysis and evaluation questions) to support critical thinking, inference, and understanding of “big picture” concepts. Teacher A spent the most amount of instructional time addressing the skills of CAR and addressed these skills at higher levels of intensity than did any other teacher in the study. Teacher B As described in Chapter 3, Teacher B was hearing and used Spoken English as her preferred method of communication during one-on-one conversations. Teacher B used sign supported speech as the primary method of communicating when instructing. 138 Teacher B reported having a total of eight years of teaching experience, six of those years teaching social studies in the deaf education classroom. Teacher B reported that her highest level of education was a baccalaureate degree in Deaf Education (k-12) and Middle Grades Integrated Curriculum (5-9). 
Teacher B also reported having an ESOL (English for Speakers of Other Languages) endorsement and a state reading initiative certification. Instructional approaches. Teacher B’s United States History class for middle school students was designed as a traditional, full year course that met daily for 57 minutes, Monday through Thursday, and for 45 minutes on Fridays. A total of 489 instructional minutes were analyzed (One of the video recorded sessions, Teacher B, session 10, was damaged and not included in the data analysis. Thus, Teacher B’s instructional practice is based on 9 sessions). Teacher B had the lowest percentage of analyzed instructional time (3.1%) dedicated to socially-mediated instructional practices (e.g., teacher-facilitated learning activities such as discussion, and peer-to-peer learning activities), relying on lecture as the primary method of delivering content information. Students in Teacher B’s class were observed as being attentive, yet passive. There were few observed instances of students raising their hands to comment or ask questions, and student interaction was typically limited to answering questions asked by the teacher. Peer-to-peer interactions were representative of only 0.8% of Teacher B’s instruction. The remaining instructional time was categorized as teacher-directed learning (48.5%), teacher giving directions/instructions (3.1%), student independent work activities (27.3%), and non-instructional (17.2), explained further, below. 139 Teacher B dedicated 48.5% of instruction to teacher-directed learning activities (e.g., lecture), which was the highest percentage of all the teachers in the study. During the semi-structured interview, Teacher B stated that the reason instruction was primarily delivered through lecture was because, “…sometimes I feel like there is this information that I want to make sure [the students] have, and I can’t really come up with a good way for [the students] to figure it out on their own, or we don’t have time for [the students] to figure it out on their own, so I just need to give it to them. Sometimes, that is just how I feel, that I just need to give [the students] the information. So, to me, a lecture kind of helps me out that way”. Teacher B also indicated that there was a lot of pressure from state mandates to cover a large scope of content information, and lecture seemed to be the most effective way of accomplishing that in a timely manner. Teacher B described her lectures as “more like guided reading.” Although there are several variations on what defines guided reading, a common theme throughout these definitions is that the teacher talks through a selected text with a small group of students while making connections to background knowledge, bringing attention to key vocabulary, and monitoring comprehension through discussion (Marshall, 2014; Scholastic, 2014). The practice of guided reading as described by Marshall (2014) and Scholastic (2014) was not what was observed during Teacher B’s instruction. Teacher B often read chunks of text projected onto a SmartBoard (e.g., PowerPoint slides, selected portions of the electronic version of the textbook, and documents projected via a document camera) to students. These instances were interpreted as “decoding text” since they did not meet the requirements to be defined as guided reading as described by Marshall (2014) and Scholastic (2014). Few opportunities were provided for students to have discussions focused around text during observed instruction. 
140 There were several observed instances of students being asked to come to the front of the room to read portions of text to their classmates from their United States history textbook, which was projected onto a SmartBoard. In each instance where students were asked to read to the class, students decoded text, word-by-word, using signs borrowed from ASL and signed in English word order. Students were not asked to translate what they read into conceptually accurate ASL, to summarize what they were reading, identify or discuss problematic vocabulary, or to demonstrate comprehension. During these activities, the students decoded the text and then returned to their seat. After each student read their assigned portion of the text, Teacher B summarized what the student read, and provided additional context and explanation to the whole class. As students decoded text in sign, Teacher B voice interpreted what was signed. When asked her reason for this particular practice, Teacher B explained that the voice translation was for students who benefited from auditory input. Additionally, when a student came across a word s/he did not know, or when s/he fingerspelled a word that had a sign equivalent, Teacher B would supply the student with the appropriate sign. In one instance, a student fingerspelled the word “act” while reading a selected portion of text to the class. Teacher B interrupted the student, commenting to the class, “I am making up a sign for this”, and signed the word using an “A” handshape while executing the sign in a way similar to the sign for “rule” or “law”. This was the only observed instance of a teacher participant deliberately creating a sign for the purposes of instruction. When asked why a sign was created for the word “act” (instead of just fingerspelling the word) during the response-to-instruction meeting, Teacher B indicated that the fabricated sign 141 would help students understand the concept of the word by relating it to signs they were already familiar with (e.g., rule, law), but that making up signs was not a regular practice. There were several observed instances of student independent work activities (27.3% of total analyzed instruction). During student independent work activities, Teacher B was actively present and monitoring students’ progress and therefore, were measured at an intensity of level 2 (21.5%). Student independent work activities had a specific purpose and were designed by Teacher B to function as cooperative learning activities. However, despite repeated prompting by Teacher B to work collaboratively, students were observed completing their assignments individually. Observed collaboration and discussion between students during student independent work was limited to students asking each other about the answer they had on each other’s papers. As a result, the activities intended to serve as collaborative/ socially-mediated learning activities (for example, peer-to-peer learning) were in actuality “student independent work” since students were not observed collaborating with one another, even after repeated encouragement by Teacher B. The culture of the classroom reflected one where the teacher acted as the “keeper of information,” that is, where the majority of information and discussion about content was delivered by the teacher. One observed example of this occurred when Teacher B asked students a question during instruction. 
As a student began to answer the question, instead of directing the class to look at the student who was answering, Teacher B mirrored the students’ response by repeating the student’s comment simultaneously so that the entire class knew what was being said. Teacher B’s decision to use this strategy for relaying information during discussions may have been a reflection of the classroom 142 seating arrangement. The seating in Teacher B’s classroom was arranged into small round tables that were spread around the room. The type of seating arrangement in Teacher B’s class was not designed to facilitate communication access between students when using visual communication modalities such as ASL during whole group instruction, and may explain why peer-to-peer interactions did not occur frequently during Teacher B’s instruction. When asked why she chose this particular seating arrangement, Teacher B stated that she felt the arrangement promoted cooperative learning in small groups, and helped to reduce off task behaviors and conversations among students during whole group instruction. However, based on observations the students did not appear to be comfortable participating in large group discussions, and at times, seemed unsure as to how to have discussions with peers during whole group instruction. Similar to Teacher A, non-instructional activities occurred primarily at the beginning and ending of each observational session, roughly 9 minutes per session (17.2% of total analyzed instructional time). Non-instructional activities were typically reflective of the time needed for Teacher B to collect assignments and pass out materials, homeroom type activities such as morning announcements and reciting the Pledge of Allegiance, and transitions between activities and as students entered and exited the classroom. The remaining instructional time was categorized as “teacher giving directions” (3.1%) about assignments and projects. Like Teacher A, hands-on instructional activities were not observed during Teacher B’s instruction. Yet, also like Teacher A, Teacher B indicated that hands-on activities were an important part of instruction (rating it second behind having a 143 discussion during the semi-structured interview). When asked to rank instructional practices from most effective to least effective, Teacher B ranked text-based learning as most effective, followed by hands-on learning, discussion, lecture, and guided reading. Teacher B reported that designing and planning hands-on activities was difficult and time consuming, which was one of the reasons it was not addressed often, but that it was her preferred method of delivering content, noting that the rankings she provided for instructional practices during the semi-structured interview were the opposite of how she actually taught. When asked, Teacher B could not come up with a reason why this occurred, commenting that it was “interesting”. Teacher B reported that having class discussions was a challenge because students often did not come to class with the necessary background knowledge. She noted that if students did not have something to discuss, or if they did not have adequate background knowledge, that the discussion would not be productive. Teacher B commented that students in her class required a lot of guidance in learning social studies concepts, and that was why most of her instruction focused on lecture, and what she described as “guided reading”. 
During the interview, Teacher B stated that she felt block scheduling would allow more flexibility and opportunity to incorporate socially-mediated learning activities such as discussion and hands-on activities, and that the traditional class period (57 minutes on Monday-Thursday and 45 minutes on Fridays) was not conducive for including those types of activities and still covering all of the required content. The table below (Table 8) shows the total number of instructional minutes dedicated to each category (teacher-facilitated learning, teacher-directed learning, peer- 144 to-peer learning, hands-on learning, independent learning, teacher giving directions, noninstructional time, and other) by Teacher B over nine analyzed instructional sessions. 145 Table 8 Duration and Intensity (Expressed in Total Minutes of Instruction Across All Analyzed Sessions*) of Instructional Behaviors of Teacher B TFL TDL P2P Lvl 1 P2P Lvl 2 HO IND Lvl 1 IND Lvl 2 DIR NI O 15.0 237.0 2.2 1.9 0.0 28.4 105.2 15.2 84.1 0.0 *One of the video recorded sessions (Teacher B, session 10) was damaged and not included in the data analysis. Thus, Teacher B’s instructional practice is based on 9 sessions. TFL-Teacher-Facilitated Learning; TDL- Teacher-Directed Learning; P2P- Peer-to-Peer Learning; HO-Hands-On; IND- Student Independent Work/Activities; DIR- Teacher Giving Directions/Instructions; NI- Non-Instructional; O- Other; Lvl- Level of Intensity 146 Integration of content area reading (CAR) skills. Teacher B spent 21.6% of instructional time on CAR skills (see Figure 7). Content area reading skills were typically addressed during directive learning activities (lecture). Grade-level content texts were used as resources in the majority of observed sessions, regardless of students’ documented reading levels. When asked about the decision to use grade-level texts, Teacher B indicated that she was mandated by the state to use them since all of the students in the class were classified as being educated in a general education setting, noting that modified materials and/or curricula were only permitted in special education settings. Background knowledge. Teacher B spent more than half the amount of CAR time dedicated to the skills of background knowledge, totaling 12.1% of total analyzed instruction (see Figure 8). The skills of background knowledge often co-occurred with lecture. Activation of background knowledge was addressed at more than twice the duration of building of background knowledge (8.6% and 3.6% of total analyzed instruction, respectively). Both activation of background knowledge and building of background knowledge were addressed at the lowest level of intensity (see Figure 9), and never at an intensity of level 3 in any of the sessions. Similar to Teacher A, Teacher B typically incorporated the skills of background knowledge when introducing a new lesson or sub-topic, and the activation of background knowledge was often followed by the building of background knowledge. Teacher B typically activated background knowledge through questioning and the use of text features (e.g., images). Building of background knowledge was integrated by making connections to students’ previous experiences 147 through socially-mediated activities such as: discussions and the exploration of interactive websites, as well as viewing content related videos. Text structure. Teacher B was the only teacher in the study that was observed integrating the skill of text structure. 
Text structure was addressed only during one-onone interactions with students in sessions two, four, and nine for a total duration of 3.1 minutes (0.6% of total analyzed instruction) (see Table 3). Text structure was addressed only at the lowest level of intensity, at the sentence and paragraph level. For example, when working with a student during an independent work activity, Teacher B pointed out, “The first sentence in this paragraph tells you what the rest of the paragraph is about.” Although addressing text structure at the sentence level may support comprehension as discussed in Negin’s (1987) study, Teacher B did not address the text structures that support comprehension of content area texts as described by Meyer and Rey (2011). Text features. Images were the most commonly observed text feature used during Teacher B’s instruction, and were used in every observed class session. Teacher B commented on the importance of using visual information (such as text features) to support student learning. Teacher B was the only teacher participant to compare and contrast text features, the highest level of intensity. This occurred during an activity when students were asked to compare and contrast two artistic renditions of the same event in history. However, text features were only integrated at this level of intensity (level 4) for a short duration (0.8 minutes) while Teacher B modeled the skill for students. Once Teacher B modeled the skill, students were asked to continue comparing and contrasting the two images as part of an independent work activity. As such, it was not possible to 148 ascertain how long students spent on this particular skill at this level of intensity, given that the independent work activity also included other skills such as answering comprehension questions, defining vocabulary, and reading passages, and was only documented when it was addressed by the teacher. Teacher B did include opportunities for students to practice the analysis of text features (categorized at level 3) during independent work activities, however, text features addressed at this level of intensity only occurred for a duration of 5.1 minutes (1.0%) of total instruction. There were several observed instances of Teacher B calling attention to headings in the body of text, and one instance when Teacher B asked students to refer to the table of contents in their textbooks as a tool to find information. Other text features such as bold and italicized words were not addressed by Teacher B during observed instruction. Content-specific vocabulary. Teacher B addressed content-specific vocabulary for 5.3% of total analyzed instruction, which often co-occurred with the decoding of text (see Table 11). Teacher B did not assign a specific vocabulary list for students to study. When asked why a vocabulary list was not assigned, Teacher B indicated that she included a list of words in her lesson plans to target during instruction and that contentspecific vocabulary words were addressed as they came up during a lesson. Teacher B posted key content-specific vocabulary words on a word wall for students to reference on their own. There were several instances observed when Teacher B referred to words on the word wall during instruction. These instances frequently co-occurred with the skill of activation of background knowledge, as Teacher B asked students to recall words that had been discussed in previous lessons. 
When addressing content-specific vocabulary, 149 Teacher B typically identified the content-specific vocabulary word and then told students the meaning and/or corresponding ASL sign. There were not any observed instances of discussion or analysis of content-specific vocabulary. Inference. Teacher B addressed the skill of inference for 2.3% of analyzed instructional time, primarily categorized as text-connecting (level 1) inferences (2.0%) (see Table 7). The skill of inference was primarily addressed during teacher-directed learning activities, and co-occurred with the use of text features (see Table 11). An example occurred when Teacher B presented an image of soldiers camping out in severe weather conditions. Teacher B asked the students to infer how the soldiers were feeling. In this instance, students responded with one-word answers or short phrases (e.g., sad). Teacher B often repeated students’ responses, providing confirmation that their inference was correct and occasionally expanded on their comments by adding additional information and inferences. Complexity of questioning. Teacher B asked 447 questions across all analyzed sessions (see Figure 16). The majority of Teacher B’s questions (71.2%) were at the knowledge level (level 1), 13% were at the comprehension level, and 10% were at the application level (level 3). Teacher B did ask more level 4 questions than did Teacher A (4.7% and 1.9% respectively), however, less instructional time was spent on these questions as compared with Teacher A. When Teacher B asked higher-level questions during instruction, there were not any observed opportunities for students to discuss multiple points of view. For example, when asking a higher-level question like “why” or “what do you think”, a student would respond with a brief comment, after which Teacher B would provide confirmation that the student answered “correctly”, made additional 150 comments, and then moved on to the next topic or question. There was only one observed instance of students having a whole group discussion around a question posed by Teacher B. Summary. Instruction in Teacher B’s classroom was primarily delivered through lecture and teacher-directed learning activities. Students in Teacher B’s classroom were passive participants and few observed opportunities were provided for discussion. Teacher B did allow students to learn from and explore text-based information. Teacher B addressed the skills of CAR at lower levels of intensity than did Teacher A, but addressed CAR for a larger percentage of instructional time than did Teachers C and D. During the interview, Teacher B commented that her actual instructional approaches were not the same as her desired instructional approaches, and that “in a perfect world”, she would include more socially-mediated/social-constructivist learning activities such as hands-on learning, discussion, and facilitative instruction. Teacher C As described in Chapter 3, Teacher C was deaf and used American Sign Language (ASL) as her preferred method of communication during one-on-one conversations and as the language of instruction. Teacher C reported having a total of nine years of teaching experience, three of those years teaching social studies in the deaf education classroom. Teacher C reported that her highest level of education was a master’s degree in Deaf Education and Social Studies, and had a baccalaureate degree in Government and General Education. Instructional approaches. 
Teacher C’s United States History class of high school students was designed as a full year course that was condensed into a half year 151 through the use of a block scheduling model. Teacher C’s class met daily for 90 minutes, Monday through Thursday, and for 80 minutes on Fridays. A total of 880 instructional minutes were observed and analyzed. The observed unit was the final unit taught for the semester. Teacher C’s instruction favored a more directive approach with 32.0% of instructional time dedicated to teacher-directed learning activities and 9.0% of instructional time dedicated to socially-mediated instruction (teacher-facilitating learning activities, 7.4% and peer-to-peer learning, 1.6%). The remaining instructional time was categorized as teacher giving directions/instructions (4.9%), student independent work (33.3%), and non-instructional (20.8), explained further below. The majority of observed instructional time (33.3%) was dedicated to student independent learning activities. During these activities, Teacher C was not available to students for support or monitoring of progress. Instead, Teacher C was often observed working on a computer in the back of the classroom. Student independent work activities included completion of work packets related to content covered in the unit and included comprehension questions and vocabulary matching activities. Students were also given time to work on an end of the semester project. During student independent work activities, students were observed spending time primarily on off-task behaviors including conversations with other students about school related and personal activities, or surfing the Internet. In one session, a student was observed watching a movie on a laptop computer during independent work time, and in several other sessions, students were observed playing online games and searching the Internet for information that was not related to content (e.g., local basketball team statistics). 152 Only 36.6% of observed instruction (322 minutes) was dedicated to contentrelated instruction delivered by the teacher. Teacher C’s lessons were connected and followed a whole-to-part sequence. During instruction, Teacher C demonstrated a strong understanding of content knowledge, often supplementing information from the textbook with anecdotes and stories about historical events and generally presented content information as a story. Students were observed as being engaged, however, the culture of Teacher C’s classroom reflected one where information flowed through the teacher. Few opportunities were provided for socially-mediated instructional activities such as class discussion and peer-to-peer learning. Several observations were noted of students expressing an interest to discuss or ask questions about content information being presented by Teacher C (indicated by students raising their hand, or making brief interjections), however, these requests were often disregarded by Teacher C, telling students to, “let me finish” or “hold on,” and then continuing on with the planned lecture. During the semi-structured interview, Teacher C reported that socially-mediated instruction using dialogue, in addition to direct instruction through lecture, were key in preparing students for college. Teacher C stated, “I encourage my students to engage in dialogue. They need to be critical thinkers. I encourage them to be critical thinkers. That is what they are going to need to do when they go to college. 
In college, much of the learning is done through lecture. I have a tendency to spoil my students. I am always adding, expanding on the information, but really, when they get to college, the lecture will be more boring and dry. Also, in college, you are expected to do a lot of reading and discussion. Those are part of the expectations of college, so I try to internalize those skills within my students so that they will have a smoother transition to college.” Teacher C did not frequently use text-based information when presenting contentrelated material. Typically, Teacher C displayed content related information to the class by projecting segments of the textbook or results from Internet searches (Google images) 153 on a SmartBoard. Displayed information was limited to images of those individuals being discussed during the lesson and short video clips related to content. When asked about reading strategies such as guided reading, Teacher C indicated that guided reading was not a strategy used when teaching content, and that it would be difficult to do guided reading activities due to the varied reading levels of students. Teacher C also reported that guided reading was more appropriate for students at the elementary level, and that students might not be able to understand if the text was too difficult and they would “give up”. Teacher C also indicated that lecture and discussion helped students make connections when doing reading assignments for homework. Teacher C reported that in the past, students were required to come to class having read the material to prepare for the lecture, however, she found that strategy to be ineffective and began to assign readings after the lecture. Teacher C reported that when lecture was done before the reading assignments, the students understood more and their grades began to improve. Teacher C stated that during instruction she provided the students the information needed to learn the material, wrote key words/information on the board, and used discussion to support the learning of content information. Teacher C stated, “…students are more comfortable doing that then if I told them to just read. The students push away from that type of work. If I forced them to read, it would mean that I failed them. They would resist history, and I don’t want that. If I lecture and have discussions with them first, then they are ok with it. Then they are more accepting to doing the reading, because they begin to make the connections.” Teacher C reported that while reading materials (e.g., articles, book chapters, supplementary reading materials, etc.) were important for supporting content instruction, teaching students how to read was the job of the English/Language Arts teacher and not the content teachers. During the interview, Teacher C stated, “if I did [teach reading], that 154 would mean I would have to reduce the amount of time I spend on my social studies content. We have reading teachers. That is their job.” As with Teachers A and B, non-instructional behaviors such as entering and exiting the classroom typically occurred at the beginning and ending of each class. There were several observed instances of Teacher C passing out materials, engaging students in conversations not directly related to content, and setting up technology which were also categorized as non-instructional, resulting in 28.8% of instructional time categorized as non-instructional (averaging 18.3 minutes per session). 
The table below (Table 9) shows the total number of instructional minutes dedicated to each category (teacher-facilitated learning, teacher-directed learning, peerto-peer learning, hands-on learning, independent learning, teacher giving directions, noninstructional time, and other) by Teacher C over ten observed instructional sessions. 155 Table 9 Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher C TFL TDL P2P Lvl 1 P2P Lvl 2 HO IND Lvl 1 IND Lvl 2 DIR NI O 65.0 281.8 10.8 3.6 0.0 267.6 25.0 42.9 183.3 0.0 TFL-Teacher-Facilitated Learning; TDL- Teacher-Directed Learning; P2P- Peer-to-Peer Learning; HO-Hands-On; IND- Student Independent Work/Activities; DIR- Teacher Giving Directions/Instructions; NI- Non-Instructional; O- Other; Lvl- Level of Intensity 156 Integration of content area reading (CAR) skills. Teacher C integrated the skills of CAR for 10.9% of total observed instruction (see Figure 7). Teacher C’s instructional activities were less text centered. Similar to Teacher B, Teacher C used a grade-level textbook (mandated by the state) and a state approved, modified textbook. The company who published the grade level textbook provided an online version that was made available to all students. One feature of the online version of the text was that it could be adjusted to match the reading level of each student. Teacher C reported that this feature was a preferred option for her students, and that in addition to the benefit of not having to carry around a large textbook, many of the students told her that they did not want the hard copy version of the book because the reading level could not be adjusted to fit their needs. Guidance provided to students by Teacher C in supporting the understanding and decoding of text was limited to decoding of specific words and questions represented in English print, and occurred most often during teacher-directed learning activities. Background knowledge. The skills of background knowledge (activation of background knowledge and building of background knowledge) were addressed for the majority of total CAR time for Teacher C, representing 6.7% of total observed instruction (see Figure 8). Teacher C spent the most time of all the teacher participants on the skill of building background knowledge (3.7%), with the majority of building background knowledge categorized at an intensity of level 2 (2.9%). Building of background knowledge occurred primarily when Teacher C showed videos related to the unit topic. Teacher C was the only teacher that was observed doing guided viewing of video. When showing a video or video clip to students, Teacher C would often expand on the 157 information being presented, would pause the video and ask students questions or make additional comments, or rewind portions of the video for students to view a second time. Activation of background knowledge was addressed for 3.0% of total instructional time, primarily at an intensity level of 1 (2.1%). Activation of background knowledge was addressed by questioning students at the beginning of a lesson, or before moving on to a new topic. During Teacher C’s instruction, a logical pattern of activating background knowledge was followed by building background knowledge. During the semi-structured interview, Teacher C emphasized the importance of the skill of background knowledge, commenting that the use of visual information (e.g., photos, videos, etc.) 
was key in supporting students in the activation of background knowledge and building of background knowledge. Text structure. The CAR skill of text structure was not addressed by Teacher C during observed instructional sessions. Text features. The CAR skill of text features was addressed for 2.3% of total observed instructional time (see Figure 10). Text features were primarily categorized at an intensity of level 1 (1.6%), meaning that Teacher C would just bring attention to the text feature and move on with instruction without further explanation or analysis. Few opportunities were observed of Teacher C analyzing or interpreting text features. The text feature used most by Teacher C was images. When presenting a text feature to the class, Teacher C typically used the statement of “This is...”. Observations indicated that the use of text features by Teacher C were more of an afterthought, as many of the text features used during instruction were from Google image searches done spontaneously during Teacher C’s lecture to help emphasize a point or bring attention to a person, location, or 158 event that was being discussed. Use of other text features such as bold or italicized words, headings, tables, glossaries, etc. were not observed during Teacher C’s instruction. Content-specific vocabulary. Teacher C was the only teacher who assigned a specific list of content-specific vocabulary for students to learn. Content-specific vocabulary was addressed for 1.9% of total instructional time, with the majority of content-specific vocabulary categorized at the lowest level of intensity. When addressing content-specific vocabulary word(s), Teacher C identified the word by either pointing it out in a segment of text, writing the word on the white board, or by fingerspelling the word. After bringing attention to the content-specific vocabulary, Teacher C would tell students the meaning or corresponding sign with no further discussion. There were no observed instances of Teacher C dissecting the morphographical components of contentspecific vocabulary or providing students with opportunities to discuss content-specific vocabulary. Content-specific vocabulary primarily co-occurred with decoding of text (see Table 11). Inference. Teacher C spent 0.8% of instructional time of the skill of inference (see Table 6). Inference was typically at lower levels of intensity, including text connecting inferences (level 1) and elaborative inferences (level 2). The skill of inference was not addressed at an intensity level of 3 (global inference) (see Table 6). The skill of inference was primarily addressed during socially-mediated learning activities such as peer-to-peer learning and teacher-facilitated learning activities, and only occurred in 3 of the 10 observed sessions. One example of Teacher C integrating the skill of inference at higher levels of intensity occurred during a class discussion about the 159 conspiracy theory in reference to the Kennedy assassination. Teacher C had students watch video segments of the Kennedy assassination and of firsthand accounts as told by eyewitnesses. After providing some context through lecture and watching the video segments, students in the classroom had a lively discussion on the topic, providing their perspectives and theories about President Kennedy’s assassination. Complexity of questioning. Teacher C asked the fewest questions of any of the teacher participants (213) (see Figure 16). Questioning frequently occurred during teacher-directed learning activities. 
The majority of Teacher C’s questions (55.4%) were at the knowledge level (level 1), 30.5% were at the comprehension level, 7.5% were at the application level (level 3), and 6.6% were at the analysis level (level 4). Questions at an intensity level of 4 were typically asked during teacher-facilitated learning (for example, the discussion on the Kennedy Assassination described above). Generally, more instructional time was spent on questions that were at higher levels of intensity. Teacher C was not observed asking any level 5 (synthesis) or level 6 (evaluation) questions. Summary. Instruction in Teacher C’s classroom was primarily delivered through lecture and teacher-directed learning activities. Students in Teacher C’s classroom were highly engaged, however, discussion was often stymied so Teacher C could finish her planned lecture. A large portion of instructional time was allocated for student independent work and no opportunities for collaborative learning between peers were observed. Teacher C stated that students had to follow lectures if they were going to be successful in college, and that her instruction was specifically designed for that purpose. 160 Teacher C addressed CAR skills at lower levels of intensity and asked fewer high-level questions than did any of the teachers in the study. Teacher D As described in Chapter 3, Teacher D was post-lingually deaf and used spoken English as his preferred method of communication during one-on-one conversations. Sign-based communication was used by Teacher D as the language of instruction. Teacher D reported having a total of eighteen years of teaching experience, fourteen of those years teaching social studies in the deaf education classroom. Teacher D reported that his highest level of education was an ABD in Soviet Military History, and reported having a baccalaureate degree in History and Political Science and two master’s degrees, one in Soviet and European Economic and Social History and one in Deaf Education. Instructional approaches. Teacher D’s United States History class of high school students was designed as a traditional, full year course that met Mondays, Wednesdays, and alternating Thursdays for 73 minute long class periods. A total of 730 instructional minutes were observed and analyzed. Teacher D’s instructional approaches favored a more direct instruction approach with 35.3% of observed instructional time dedicated to teacher-directed learning and 12.4% of instructional time dedicated to socialconstructivist learning activities (teacher-facilitated learning, 6.8% and peer-to-peer learning, 5.6%). The remaining instructional time was categorized as student independent learning activities (23.2%), teacher giving directions (9.3%), noninstructional (18.1%), and other (1.7), explained further below. Teacher D’s lessons could best be described as “fragmented”. Lessons in Teacher D’s classroom were disconnected within and across sessions, following more of a part-to- 161 whole sequence. For example, one lesson did not lead into the next and instructional sessions did not begin with the activation of background knowledge or review of what was taught during the previous instructional session. There was also fragmentation within lessons. For example, Teacher D would jump from one sub-topic to another without providing a transition or information about how each sub-topic was related. 
Each observational session followed a similar pattern: passing out materials, giving a short lecture on a unit sub-topic, giving students directions for independent work activities, passing out materials, having students copy information from the board, giving a short lecture on a new sub-topic related to the unit, passing out materials, giving students directions, having students copy information from the board, etc. This pattern is evident in the data presented in Figure 5, which shows Teacher D as having the highest frequency counts for teacher-directed learning, teacher giving directions, independent student work/activities, and non-instructional. Although Teacher D’s lessons were related to the topic of the unit (The Progressive Era), few connections were made between lessons and activities, making it difficult for the students to follow, as indicated by their questions to Teacher D and nearby peers as well as their facial expressions and behaviors. Field notes taken during Teacher D’s instruction also indicated that the researcher had a difficult time following the lessons. Teacher D spent the most instructional time of any of the teacher participants (9.3%) on giving directions/instructions. This may be largely due to the fragmented nature of Teacher D’s instruction as described above. Teacher D dedicated 23.2% of total observed instruction to independent work activities. Student independent work and cooperative learning activities were largely text- 162 based, and required students to think critically about information solely through interaction with text with minimal instruction and support from Teacher D. During the semi-structured interview, Teacher D rated text-based learning and lecture as the least effective instructional approaches for delivering content information, which were the instructional approaches primarily observed across the instructional unit. Much of the instructional time categorized as student independent work addressed lower levels of intensity and typically included activities where students copied information from the board onto worksheets. There were several observed instances of students being asked to answer questions on a worksheet, followed by Teacher D directing them to write in the “correct” answers by copying information displayed onto a SmartBoard onto their papers. Like the other teacher participants, non-instructional behaviors typically occurred at the beginning and ending of each observational session as students transitioned into and out of class, and during the distribution of worksheets and materials, representing 16.5% of total observed instruction (roughly 12 minutes per session). There was also one instance of a fire drill (lasting approximately 12 minutes). In addition, there were several brief portions of instruction where content information was delivered by a paraprofessional. This occurred in two of the ten sessions, and only for brief periods of time (totaling 2.7 minutes of instructional time). These portions of instruction were categorized as “other”. Observed interactions between students and Teacher D during instruction were in the form of students using repair strategies to figure out what was being said by Teacher D. Teacher D’s fragmented teaching style combined with his less than proficient signing 163 skills appeared to make it especially difficult for students to establish connections during instruction. 
Students were often observed saying to one another, “What did he say?”, “I don’t understand what he said”, “What did he mean by that?”, and “I don’t understand him.” There were several observed instances of oversimplified communication used to convey complex content topics. One example occurred during a lesson when Teacher D was lecturing about a famous journalist and photographer, Jacob Riis, who is known for his anthology of photographs depicting child labor and impoverished areas of New York during the Progressive Era. In his explanation, Teacher D displayed a photo of Riis and signed the following: “man [Teacher D points to photo] famous book famous, famous, book camera, camera book famous” which would be loosely translated, at best, into English as “That man, famous book, taking photos, famous book.” There were also several observed interactions between students and Teacher D, when the student used ASL (without spoken language support) to communicate with Teacher D, and Teacher D did not appear to understand what was being said (as indicated by his response to the students). In each instance, the student repeated what was signed in ASL, and after multiple failed attempts, used sign-supported speech to communicate with Teacher D. It was only when the students used sign-supported speech that Teacher D appeared to understand what was being said. Teacher D’s lack of ASL proficiency seemed to be the biggest contributor to the fragmented instruction and confusion among students. Because Teacher D had so much difficulty in using ASL to express complex ideas and concepts, many of the concepts taught were paired down to their most basic 164 elements, and little opportunity was provided for authentic critical thinking activities and discussion during instruction. Table 10 shows the total number of instructional minutes dedicated to each category (teacher-facilitated learning, teacher-directed learning, peer-to-peer learning, hands-on learning, independent learning, teacher giving directions, non-instructional time, and other) by Teacher D over ten observed instructional sessions. 165 Table 10 Duration and Intensity (Expressed in Total Minutes of Instruction Across All Observations) of Instructional Behaviors of Teacher D TFL TDL P2P Lvl 1 P2P Lvl 2 HO IND Lvl 1 IND Lvl 2 DIR NI O 49.6 258.0 20.4 20.4 0.0 105.8 63.3 67.6 132.6 12.3 TFL-Teacher-Facilitated Learning; TDL- Teacher-Directed Learning; P2P- Peer-to-Peer Learning; HO-Hands-On; IND- Student Independent Work/Activities; DIR- Teacher Giving Directions/Instructions; NI- Non-Instructional; O- Other; Lvl- Level of Intensity 166 Integration of content area reading (CAR) skills. Teacher D integrated the skills of CAR for 7.5% of total observed instruction (see Figure 7). Teacher D’s instructional activities were not text centered, although text was displayed in the majority of the lessons observed. Teacher D had the lowest frequency of CAR skills integration (102) of all the teachers in the study. Teacher D did not use a textbook and stated during the semi-structured interview that he did not use them because they were often poorly written. Instead, Teacher D stated that he assigned articles for students to read. However, in observed instruction there was only one instance of Teacher D assigning any reading to students. The next class meeting after the reading assignment was given, Teacher D organized a lesson whereby students discussed the reading in small groups. 
The discussion occurred for a duration of 11.9 minutes, after which, students were asked to complete a worksheet. Any guidance provided to students in supporting the understanding and decoding of text was limited to decoding of specific words or questions represented in English print, and typically occurred during teacher-directed learning for Teacher D. When discussing comprehension questions related to the reading assignment, Teacher D would pose the question to students, and then display the correct answer on the SmartBoard. Once the answers were displayed, Teacher D directed the students to copy the correct answers onto their papers. Background knowledge. Background knowledge was the CAR skill that accounted for the highest percentage of Teacher D’s instructional time dedicated to CAR skills (2.8%) (see Figure 5). Activation of background knowledge was addressed for 0.9% of instructional time, primarily at the lowest level of intensity (0.8%). When 167 Teacher D activated students’ background knowledge, it typically occurred during questioning, often at the beginning of a lesson or when introducing a new sub-topic. Teacher D dedicated 1.9% of total observed instruction to the skill of building background knowledge, primarily at an intensity of level 2 (1.1%). Building background knowledge was primarily addressed by showing content-related videos to students. Teacher D spent the least percentage of instructional time on background knowledge of all the teachers in the study. When asked about which skills Teacher D believed were necessary for students to read and comprehend content area texts, Teacher D commented, “They have to have background knowledge”. Teacher D also noted the importance of background knowledge in accessing texts, however, during the interview, Teacher D stated, “I have not found a way to build in the background knowledge”. Text structure. The CAR skill of text structure was not addressed by Teacher D during observed instructional sessions. Text features. Teacher D used text features in 9 out of 10 observed instructional sessions for 2.5% of total observed instructional time. The most common text feature used by Teacher D was images and the captions below images to support learning of concepts. Teacher D called attention to text features through statements and questions (e.g., This is... Who is…?). Text features were typically integrated at the lowest level of intensity. There were several observed instances of Teacher D modeling the analysis of text features using political cartoons from the Progressive Era. These instances represent the highest level of intensity, accounting for 0.6% of total instructional time. The analysis of text features through the use of political cartoons co-occurred with the skill of inference (discussed below). Teacher D commented on the importance of using visual 168 information such as text features to support the learning of content material. Teacher D did not address any text features other than images (including the use of political cartoons). Content-specific vocabulary. Content-specific vocabulary was addressed for 1.5% of instructional time, and only at the lowest levels of intensity (1.3% at level 1 and 0.2% at level 2). Teacher D used the Frayer Model strategy to address content-specific vocabulary during instruction. 
The Frayer Model (Frayer, Frederick, & Klausmeier, 1969) includes the use of a graphic organizer consisting of four boxes surrounding a vocabulary word written inside an oval shape in located in the middle of the page (Frayer, et al., 1969). In each of the boxes, students write the definition of the word in their own words (box 1), facts and characteristics about the word, including an image or drawing (box 2), examples of the word (box 3), and non-examples of the word (box 4) (Frayer, et al., 1969). During observations, Teacher D projected a completed Frayer Model for each content-specific vocabulary word and directed students to copy the information onto their blank Frayer Model pages. After students copied the information, the Frayer Model sheets were filed in their social studies folders. No discussions or instruction were observed during the Frayer Model activities. When asked about the Frayer Model approach, Teacher D commented that the Frayer Model activity had been one where students discussed each of the components and developed responses as a class, but that the approach had taken too much time to complete and that the students often complained that they could not read Teacher D’s handwriting when student responses were transcribed onto the document. Teacher D 169 reported that it was easier to have all of the information pre-typed so that students were able to read the information in order to copy it to their papers. When asked why the Frayer Model was used, Teacher D reported that it was a school-wide policy, and that many of the teachers expressed frustration in using the tool. Administrators at the school, however, clarified that the Frayer Model was one of many graphic organizer tools recommended for teachers to use when teaching vocabulary, but was not required, and that teachers were free to select strategies and tools that worked with their instructional style and needs. Inference. Teacher D dedicated 1.5% of total observed instruction on inference (see Table 6). Inference was primarily addressed at an intensity of level 1 (0.8%). Teacher D incorporated inference for short durations of instructional time when asking students to interpret political cartoons from the Progressive Era, by asking students to infer the meaning of different messages hidden within the image. Teacher D was the only participant who addressed inference at an intensity of level 3 (global inference), but only for a short duration (0.7 minutes or < 0.001% of total instructional time). Inference, at an intensity of level 3, was addressed four times in one session (session 7) and was related to Teacher D translating questions from printed English to sign based communication on an assessment. No further interaction occurred after questions were translated for students, explaining the very short duration of integration at this level. There were several instances when students were asked to make inferences on independent assignments and assessments, but the duration of those inferences were only counted when there was direct interaction with the teacher or other 170 peers, as it was impossible to know how long students were using the skill of inference when doing independent work. Complexity of questioning. Questions asked by Teacher D were at the knowledge level (46.7%), comprehension level (32.3%), application level (15.3%), and analysis level (5.2%). Teacher D did not ask any synthesis questions, but did ask two evaluation questions. 
It is important to note that most instances of Teacher D asking analysis questions and all of the instances of asking evaluation questions occurred when Teacher D read questions to students from a handout or assessment. For example, during a formal assessment, Teacher D walked around the room and “translated” test questions from English print into sign based communication for students at their request. Each time Teacher D read a test item to a student, it was categorized as a question. As a result, the frequency for these questions are not reflective of questions that Teacher D asked as part of a discourse with students during instruction. For the majority of questions, Teacher D added a question marker (signing the word “inquiry”) to statements, transforming them into questions however, this is not considered a commonly used practice for asking questions in ASL. In each of the instances when Teacher D added a question marker to a statement, it was counted as a question. Summary. Instruction in Teacher D’s classroom was primarily delivered through lecture and teacher-directed learning activities. Students in Teacher D’s classroom were passive and appeared to have difficulty in understanding what Teacher D was saying. Instruction was not centered on a text, and lessons were fragmented and disconnected. In talking with Teacher D formally (during response-to-instruction meetings and during the semi-structured interview) and informally (before and after observation sessions), it 171 appeared that Teacher D had knowledge of what skills were important in fostering content knowledge and CAR skills, yet had a difficult time executing these skills in practice. This was a common pattern across all of the instructional practices and CAR integration with regards to Teacher D. Teacher D addressed CAR the least of all the teacher participants and addressed the skills of CAR typically at low levels of intensity. Across Teacher Results There were many instances when CAR skills overlapped or co-occurred with one another (see Table 11). Across all observations of teacher participants, activation of background knowledge had the highest frequency of co-occurrences (531 total) than any other CAR skill. Activation of background knowledge and content-specific vocabulary co-occurred the most often (223 co-occurrences), followed by use of text features and activation of background knowledge (194 co-occurrences). These two skills also had the highest frequency of co-occurrences with decoding of text (141 co-occurrences and 369 co-occurrences, respectively). With the exception of integration of the CAR skill, text structure, which did not co-occur with any CAR behavior (likely due to the fact that it was only integrated for less than 1% of instruction with only one teacher participant) building of background knowledge had the lowest frequency of co-occurrences (154 total), suggesting that other CAR skills may not be as dependent on building of background knowledge than activation of background knowledge. This is not to say that building of background knowledge is a “less important” skill, but that building of background knowledge is not necessarily an essential component of other CAR skills. 
Table 11

Co-Occurrences of CAR Skills

              ABGK    BBGK    CSV    INF    TS    TF
BBGK            31
CSV            223      69
INF             83      16     56
TS               0       0      0      0
TF             194      38     64    145     0
Decoding       141      41    369     68     1    52

Note. ABGK- Activation of Background Knowledge; BBGK- Building Background Knowledge; CSV- Content-Specific Vocabulary; INF- Inference; TS- Text Structure; TF- Text Features.

Summary of across teacher results. The majority of teacher participants in this study favored instruction that included lecture and teacher-directed learning activities, with the exception of Teacher A. Teacher A favored a more balanced approach to instruction, integrating both socially-mediated learning activities and teacher-directed learning activities (see Figure 5). Non-instructional behaviors were fairly evenly distributed across sessions and across teachers, typically occurring during transitions to and from class and during the distribution of materials. None of the teacher participants had large segments of time spent on non-essential activities (e.g., long, off-topic conversations with students or activities unrelated to content or school), as was observed in the pilot study (Maiorana-Basas, 2013). Teachers B, C, and D had similar percentages of observed instructional time spent on student independent work activities. Independent work activities assigned by Teachers A and B had a clear purpose, and students were monitored and supported by the teachers for the majority of independent work activities. The purpose of independent work assigned in Teacher C's and Teacher D's classes was less clear, and resembled more of what might be described by some as "busy work" (e.g., completing work packets or copying information projected on the SmartBoard). During these activities, Teachers C and D were less present and less apt to support students.

Insights From Semi-Structured Interviews

Each interview was video recorded and teacher responses were transcribed. Grounded theory was used to code the transcribed interviews. Five major themes were identified from the semi-structured interviews: 1) instructional preferences (e.g., socially-mediated instruction or lecture/teacher-directed instruction); 2) preconceptions/views about students as readers and learners; 3) teaching of reading versus teaching of content knowledge; 4) frustrations about curriculum and policy; and 5) frustrations about support at the district, school, and/or family levels. Each theme is explained and contextualized further below.

Instructional preferences. The theme "instructional preferences" was defined as the types of instructional activities used to teach content. Instructional practices fell into two categories: socially-mediated learning (e.g., discussion, hands-on learning, etc.) and teacher-directed learning (e.g., lecture). During the semi-structured interviews, the teachers highlighted their preferred methods of instruction. Teachers A, B, and D all commented that discussion and hands-on learning were preferred instructional practices. Teacher C was the only teacher who stated a preference for more directive methods (e.g., lecture). Both Teacher B and Teacher D commented that while they had a preference for using discussions, guided reading, and hands-on activities (what Vygotsky would label as more socially-mediated instructional practices), these preferences did not reflect what they actually did in the classroom.
When reflecting on her comment about her preferred methods of teaching, Teacher B stated, "That is the opposite of how I teach." Teacher D made a similar comment, stating that while he would prefer to include more discussion and activities where students are "doing history," "I really don't like to lecture, and I don't want to lecture. I have not found a better way to get the curriculum to the students who aren't going to read, other than direct instruction." The instructional practices observed in Teacher A's and Teacher C's classrooms reflected the preferred methods of instruction they highlighted during their interviews. Teacher A made frequent comments about the importance of using discussion during instruction, stating that: "My reason for doing this is to ensure their comprehension. Instead of lecturing throughout, I want to incorporate dialogues with my students. Also, it allows them to draw on their language skills as well as expand their thinking." Teacher C commented that the reason she used lecture was to prepare students for college, stating that: "In college, much of the learning is done through lecture. I have a tendency to 'spoil' my students. I am always adding, expanding on the information, but really, when they get to college, the lecture will be more boring and dry. Also, in college, you are expected to do a lot of reading and discussion. Those are part of the expectations of college, so I try to internalize those skills within my students so that they will have a smoother transition to college."

Preconceptions/views about students as readers and learners. The theme "preconceptions/views about students as readers and learners" was defined as comments and statements made by the teachers that reflected preconceptions/views that students were motivated/capable readers and learners (including statements that described students as having the potential to become capable readers and learners) or preconceptions/views that students were not motivated/capable readers and learners (including statements describing students as apathetic about reading and/or learning, or as disinterested in or unmotivated toward reading and/or learning). During the semi-structured interviews, each teacher commented that their students struggled with reading and language. Teachers A, B, and C had positive views of their students as readers and learners, making comments such as "when these students go to college…" and "with a little extra work, they will be able to pull up their reading levels." When asked, "Do you believe your students are good readers?", Teacher A commented: "They can be. They definitely can be. Definitely. Because, they already have their BICS [Basic Interpersonal Communication Skills], they can communicate with each other, and it is not just superficial communication. They really have the ability to have deep conversations and have discussion with each other and 'play' with language. In their other classes, they will have discussions and dialogues, but then they have a block when they get to the text. To me, it is a physical thing. As if their brain has not developed that ability yet. I think that they can. They can." When asked, "Do you believe your students are good readers?", Teacher B said, "I believe that they have more abilities than they let on. I think they are capable.
With regards to reading words and text, some maybe face frustration and some are more successful."

Over the course of the interview, Teacher C commented that students in her class did not like to read and that they preferred learning content "through the air." She stated, "When I first met the students, they would often tell me, 'I don't like to read. I don't like history.' I would hear these things again and again. So, I took that and put it in the back of my mind, and I thought about my expectations for them. Yes, I expect them to read. But, there are limitations to how much can be done in the dorm. These students have other classes that they need to worry about. So, if they are doing reading and vocabulary work in four classes, it can be exhausting for them. So, I thought it would be better if I lectured more, in a way that is fun, not dry, and the kids attend. I also check to see where they are. For example, I am assessing them to see if they understand, if there are things that they do not quite understand." She went on to comment, "I provide them with the information they need, and I support that information by writing key words/information on the board, having discussions with the students. They are more comfortable doing that than if I told them to just read. The students would push away from that type of work. If I forced them to read, it would mean that I failed them. They would resist history, and I don't want that. If I lecture and have discussions with them first, then they are ok with it. They are more accepting of doing the reading, because they begin to make the connections."

During the interview, Teacher D remarked that the students had the capability to learn from text, but that many of the reading selections were too hard for them and that they would not be able to access texts that covered the required content. Teacher D made the following statement: "I think reading is just really a hard skill and not just for deaf kids… For a 1st grade reader, the curriculum is not accessible to them. They don't have the skill to inference and do the kind of things that I am asking them to do. So, then you start asking, well, what's the point? And then, what should I be doing? And, do I just dump the whole curriculum? But, what will I replace it with? They are surface reading. I am not really sure they are reading, and we have had a lot of discussions. I just can't comprehend it. What it is to be a high school student and reading at a first or second grade level. I just don't know what you get from any material, even if I adapt something into PowerPoint with captions. What are you bringing into that? What background knowledge do you bring in if that is really your reading level? And, no one can give me a really good answer to that, and I think that's probably… we don't know." Teacher D also remarked that several students were able to access content texts, but that they were "lazy" and unmotivated to do the reading on their own. Teacher D provided the following example during the semi-structured interview: "I think, for a while here, and I don't know about other schools, we were into this phase where kids would just read because we said, 'you have homework, read', you know.
I remember, we were doing All Quiet on the Western Front and it was an adapted version of the book, so it was probably written between the third and fifth grade level, so it was accessible and [names a staff member] was involved too, so we got them in a literacy circle, I think that is what we called it, and we would sit around and they would have a chapter or two to read every night and then we would discuss it. And, I kept saying they are not going to read it, and if we don't give them questions they are not going to read it, and that is what happened. They came in, and they talked about life experiences and stuff that didn't connect to what was going on in the book, they were just inventing stuff. So, they weren't tethered to the book at all."

Reading versus content knowledge. The theme "reading versus content knowledge" was defined as statements describing how the teachers identified themselves (e.g., as teachers of reading or as teachers of content), as well as statements made by the teachers during the interview that emphasized either the importance of integrating reading during content instruction or the importance of instruction of content over the integration of reading. Teacher A and Teacher B were the only teacher participants who identified as both teachers of content and teachers of reading. During the interview, when Teacher A was asked, "As a content area teacher, do you feel it is your job to teach reading skills to your students?", Teacher A responded with the following: "Oh yes! All the classes are. However, again, it's a struggle. Am I teaching them content or am I teaching them how to read? How do you create that balance? I don't feel that it is enough."

Teacher B expressed a desire to incorporate more reading during instruction; however, her comments throughout the interview suggested that content was given priority. When asked, "As a content area teacher, do you feel it is your job to teach reading skills to your students?", Teacher B responded with the following: "Yes. There is another way to access information, that 'text-based' learning, and, honestly, a lot of what they get tested on at higher levels is going to be content-area passages, when they are tested on reading, or, you know, there is not going to be so many fairy tales when you get to high school, you know. So, who is necessarily going to teach that? If they are in language arts class and doing a lot of literature, do they get a lot of opportunities to tackle a textbook, and so, if we've got this textbook, readily available every day, why not teach them how to use it? It's a tool for the content I am trying to teach them." When asked, "If you had to choose, do you feel that it would be more important to teach the content or to teach the students how to read to learn content information on their own?", Teacher B responded: "In this course, the content. To me, this course is all about how America became this proud nation and you are part of it. I want you to be proud of America and know that you are one person but you can make a difference, and I think it would be the same for like a civics course, those kinds of things. Um, you know, I think I could say that in general about most content classes. We could talk about a science class, you know, there are principles in science that might motivate a kid to do or be something better, you know. Um, I do, I do want to argue, why do we have these classes in general? It is for the content, right?
I do think it is important for them to read, but I think we have kids who are way smarter than what they can read. You know, and just to say, you have got to be able to read it on your own and understand, I think would limit too many of our kids. So, if we can just teach the content, they can show what they have learned and they can think about things, that would be my choice I guess. [laughter] Not to 'diss' reading. Reading and writing is really important."

Neither Teacher C nor Teacher D identified as a teacher of reading, and both expressed a belief that reading instruction was primarily the job of the English/Language Arts teacher. During the semi-structured interview, both Teachers C and D expressed that taking time to teach reading skills would be taking time away from teaching the content. Teacher C made the following comments during the semi-structured interview: "I did take a reading methods course where I did learn some helpful tools. But, reading is not my primary area. I provide the information and the knowledge of the content. I do require reading, which supports what they are learning. Whatever they learned in class today, when they get to the reading part, they can see that there are connections, but as for my primary goal to teach reading, no. So, if I did [teach them how to read the text], that would mean I would have to reduce the amount of time I spend on my social studies content. We have reading teachers. That is their job." When Teacher D was asked, "As a content area teacher, do you feel it is your job to teach reading skills to your students?", he made the following comments: "[laughs] I like that question! I was just at a workshop, and it was by, I had the book on my desk for a while, Historical Literacy, and the guy was presenting, and first he said there are five aspects to historical literacy. The first one was the code, 'cracking the code', and this guy said, and he is renowned in his field, um, 'It's not your job to teach kids to break the code'. So, if they don't understand the word 'dog', they need more help than you can give them, and that is from a content area perspective." Teacher D went on to comment, "If they [the students] are reading at a 3rd or 4th grade level, they can't inference, and therefore they can't do what is needed. Your job is to teach them historical literacy, which is a whole special set of skills, and that's sourcing and corroboration and how to read the historical text. Having said that, being in deaf ed., being a special educator, yeah, it is my job to teach them the reading. But, then the question would be, actually teach them to read what?" During the interview, Teacher D did comment that a shift towards more cross-disciplinary planning across all content areas might help alleviate issues in covering content in a timely manner while also making time for reading instruction and opportunities for examining text on a deeper level.

Frustrations about curriculum and policy. The theme "frustrations about curriculum and policy" included any statement(s) made by the teachers that expressed discontent or restriction due to educational policy, educational reforms, and/or mandated curricula. During the semi-structured interviews, all teachers mentioned that they felt as if they were under a great deal of pressure to cover content, and that there was not enough time in the academic year to cover everything as in depth as they would have liked.
All teachers indicated that they were either "behind schedule" in teaching the required content, or that they would most likely not be able to cover everything by the end of the year. When asked, "What is your biggest frustration regarding content instruction?", Teacher A answered, "Following the rules. I am serious. I am serious." The interviewer clarified by asking, "You mean, Common Core?" Teacher A continued, "Well, that the state… the school... they have a long list of 'requirements'. You must do this, and this, and this, and most of the time it is just really frustrating for me to follow all of those rules." When asked why she felt that way, Teacher A stated, "Because they really just don't understand how deaf students learn. Also, my big frustration is that we really don't have… for example, they don't have the foundation, yet. Or, the foundation that they have is weak. So, I need to go back and re-teach. But, I can't do that. I am just rushing through it because I have to follow all of these requirements and things that the state and the school are telling me… I have to 'break the rules' sometimes. I have to go back. I have to fill in the missing gaps first. I can't just jump. Their brains can't handle all of that! They have to develop a foundation of knowledge first. But, by law, I have to do it. I have to do what the law says. They have to know the content. So, this is a very complex question. So, we have Common Core, and we have all of these objectives that we must teach, and then they have to take a state test, and it can be signed [in ASL]. They don't have to read it."

Teacher B expressed her frustrations about curriculum and policy, stating, "I feel stress with the amount of standards I am supposed to cover in a year. I was actually thinking this morning, it would be great if my content was The American Revolution, The Constitution…that much. I could spend a whole year on this segment of history. But I've got to do all this other stuff beyond it, and yeah, there is just a lot to cover, and I feel, especially with our kids, they are lacking a lot of skills. How to read English, and communicate effectively in English, and they are being tested in English, so they need that practice. So, that cuts into our time too. So, not only do I have to teach all of these standards, and I only have 45 minutes, if I am not interrupted and we don't have a fire drill and all of these other things."

When asked, "What is your biggest frustration regarding the instruction of content material?", Teacher C responded: "The state keeps modifying and changing things. That puts a lot of pressure on me. I am responsible for teaching them what they need to know and to do well on the EOC [End of Course Exam]. Basically, I have to teach them to pass the EOC. That is not what I want to do, and I only have one semester to do it in, and that is not enough time. Really, I need a full year. The content really needs to be spread out over a full year."

Teacher D expressed his frustrations about curriculum and policy with the following statement: "I do think, the way the curriculum is headed, not in a bad direction, but I do think the kids themselves are feeling more pressure, the teachers are feeling more pressure, and I think that is impacting, to some degree, the relationships we have built with our kids. And, a lot of the education has come through those relationships of trust and working with the kids.
So, now, when you are pressured to do the content and the curriculum, I think it is disintegrating that relationship, and that may lead to, I don't want to say rebellion, which is good, I like that in social studies, to be rebellious. Protest! Don't accept anything I say is true, because I could… But, that idea of, 'they don't care about me, I don't have to bring a pen, I don't have to do this, I don't have to say anything'. I think there is… I think some of the seeds are sowed with what we are asking them to do and they're frustrated, we are trying to do it, we are frustrated, so, when that relationship breaks, you know, the 'I don't care', that kind of attitude, they are basically saying, 'I'm done, here'."

Frustrations about support at the district, school, and/or family levels. The theme "frustrations about support at the district, school, and/or family levels" included any statement(s) made by the teachers that expressed discontent or restriction related to lack of support from their school district, their school, or from the families of their students. All teacher participants indicated that they felt a lack of support from the state, district, school, and home in supporting student learning and growth in social studies. All teacher participants commented on students' difficulties with language (both ASL and English) and their struggles with reading. When asked about recommendations for change in teaching content in the deaf education classroom, Teacher A expressed frustrations at both the school level and family level. Teacher A commented that many of her students did not have the support they needed at home, stating that in the home, "there are no high expectations for them." Teacher A went on to discuss her frustrations about the expectations in the dorm setting, stating, "In the dorm, it is just not very structured. If they had more reading… They really just have a lot of playtime and video games. They need more emphasis on the important stuff like reading. As well as bringing in Deaf role models to talk about their successes, so they can build up their self-esteem, or identify with them and think, 'Oh, I can be like that.' But, most of the time it is just eat and do your homework and they [the dorm staff] don't feel the need to enhance the student's education in the dorm."

Teacher A also expressed frustrations at the school level with regard to ASL proficiency among teachers at the school. Teacher A commented that, "They [school administrators] must make sure that our teachers are fluent in sign [ASL]. If you can't sign, what are you doing here? And really, there are no consequences." Teacher A continued, "You need to be fluent. That is just another big issue. I mean, I am fluent, and in my classroom I sign with my voice off, but you go into some other rooms, I mean, do they have the ability to connect their language to the concepts fluently? Because, in some of their other classes, they are not developing that understanding, and they [the students] will ask me, 'What is she [their teacher] talking about?' There is language mixing and that becomes confusing."

Teacher B also expressed concerns at the school level, stating that her biggest frustration regarding content instruction was "time". She commented, "Kids get pulled from my class for things. They can't get pulled from reading and math classes, we have a rule at school that they cannot be pulled from that, which, is a good thing.
They really need to be in those classes, but that leaves the content classes for all of these pullouts for counseling, audiology, and speech, and all of these other things. So, sometimes I do feel like some of the kids are getting robbed, if they are pulled multiple times a week, um, for other services and then they won't be able to go home to read just to catch up, you know."

Teacher C expressed concerns at the district level. When asked why she chose a particular textbook to use in her classroom, she stated, "That is what was given to me. Why, because of the EOC [End of Course Exam]. They [the students] need that book to prepare for the EOC. I have to use that book because it has everything they need to pass that test. If I did not use the book or complained that it was too high for them the response I would get would be they need it for the EOC test. This is the book that I was given, my hands are tied, and if the students struggle with it, I am expected to handle that." Teacher C also commented that the students in her classroom made negative comments about having to read, which she believed was tied to the communication philosophy at the school. She commented, "Here, we use a Total Communication approach. I think, maybe, if we changed to a BiBi [bilingual/bicultural] approach, that would be more successful. If we are talking about an approach, it needs to be BiBi. Not only that, we get many students from other countries that come in. We also get a lot from the mainstream setting. These students are raised in the tradition of the Total Communication approach, but I think the right approach is BiBi." She went on to comment that, "In the Total Communication approach, it is so structured, that students feel limited. Their thoughts and ideas get 'stuck'. For example, when my students give me writing assignments, I tell them not to worry so much about the structure and grammar. That is not my job. As long as they can convey the message and show me that they understand the content material, that is my job. I give the students the option to write 10 pages or to sign, and almost all of them chose to sign." The interviewer followed up Teacher C's comments with the following question: "Do you think that learning within a Total Communication approach prevents students from identifying themselves as successful readers?" Teacher C responded, "Yes, right. Still, reading is important. I know that, and I emphasize that. But, when I see their struggles, and the barriers… [shakes head]."

Teacher D expressed frustrations at the district level and the school level, emphasizing that there was just not enough time for planning lessons that would not only address the required content but also fill in gaps in knowledge. Teacher D commented, "I really think, when you teach history, one of the problems all teachers have, and what I think is a glaring problem here. If they [the students] don't have basic math facts, they miss out on a lot of stuff that I do, because I don't have the time. Or the lesson becomes a math lesson sometimes when I am just looking at statistics. A good example, is the other day, from the 8th grade class, and, it was one of these, uh, Stanford history lessons about loyalists in America. Who were the loyalists, what did they…? You know, so, that's it. Ten to 20% of all people in the American colonies were loyalists. So I said, there are 6 million people at the time in America, how many of them were loyalists? No idea.
So, the number itself, there was no number sense behind how big, how small a number is that. So, does the lesson really mean that. Twenty percent? Who cares? What does that mean? You know. So, when you lose the lesson at that point, because of the math problem, you can't get into the history stuff, because now it does not make any sense to them. So, we spent a while doing the math. So, again, there has to be a better way that we can all connect. You have to give us time in the summer to plan curriculums. But, at this point we are still saying we are doing the district stuff, and what they are putting out there. So, again, there is that conflict. So, I think, again, when you talk about the 'ideal world'. If my ideal world was five sections of US history in a deaf ed. school, I think, eventually I could get pretty good at adapting that curriculum and doing it and finding materials. If I do it every three years, eh, not so much, and who knows, if by the third year that curriculum then changes again, and we are introducing new stuff and we are doing it a whole different way! It feels like I am reinventing the wheel all the time. I can't tell you how many different American History curriculums I have, and even when I have this curriculum done, when we are done this year with this curriculum, I mean, when I teach it again, I will adapt it again based on the students I have. And, depending on who is here, I will try to adapt it lower or try to adapt it higher, so, it is never the same. I think deaf ed. teachers work way too hard. In my opinion [laughter]. Harder than any other group of teachers that I have ever seen before."

The subsequent chapter provides an interpretation of the results and conclusions of this study in the context of each of the research questions outlined at the beginning of this chapter and the themes discussed in the section above.

CHAPTER 5

INTERPRETATIONS, CONCLUSIONS, AND RECOMMENDATIONS

The present study investigated the instructional approaches and content area reading (CAR) skills integration of four, upper-grade, social studies teachers of the deaf. The instructional practices of these four teachers in three residential schools for the deaf were observed during ten instructional sessions that spanned a full unit of study. Each instructional session was coded for specific related instructional strategies (e.g., teacher-facilitated instruction, teacher-directed instruction, peer-to-peer learning, etc.) as well as instructional strategies specific to CAR (e.g., activation of background knowledge, building of background knowledge, text structure, text features, content-specific vocabulary, and inference). To understand why teachers used specific instructional approaches, teachers were given opportunities to review their video recorded instruction and provide context through response-to-instruction meetings. Additionally, semi-structured interviews were conducted with each teacher to gain a better understanding of how each teacher perceived their students as readers and learners and whether or not those preconceptions influenced their instructional choices. In this section, an interpretation of the results and conclusions of this study in the context of each of the research questions is provided, along with limitations, recommendations for teacher preparation and professional development, and recommendations for future research.
Instructional Approaches

In what ways do teachers of the deaf in the upper-grade, social studies classroom integrate the skills associated with content area reading (CAR) during instruction? All of the teachers in the study were observed using CAR skills to some degree during observed instructional sessions. Teachers integrated CAR skills primarily during lectures and class discussions. Three of the four teachers commented that, even though learning content from text was a challenge for the majority of students, teaching students to read was not feasible, emphasizing that pressure from state mandates and state testing requirements drove their instructional practices. All teachers had some form of text displayed during observed content instruction; however, the middle school teachers provided more opportunities for viewing, reading, and dissecting print-based information than did the high school teachers. All teachers admitted that they did not consistently assign content reading for homework, and that the majority of reading assignments were completed in class.

Teachers found it difficult to incorporate text-based learning into instruction for several reasons. First, because of the wide range of reading levels they had to contend with in each of their content classrooms, teachers found it difficult to incorporate text-based learning and reading instruction while teaching content. Second, several teachers commented that finding content-appropriate reading materials at appropriate reading levels for all of the students in their classes was difficult. Additionally, all teachers reported that they rarely assigned reading for homework due to a lack of appropriate reading materials, a lack of support at home and in the dorms, and a lack of student motivation to complete reading assignments outside of class.

A lack of reading experiences both in and out of the classroom may have an impact on a student's ability to develop the skills of CAR at high levels of intensity. It may also have an impact on a student's ability to learn content at deeper levels. Further, not regularly assigning reading both in and outside of the classroom places the "burden" of teaching all content-related material on the teacher and may have an impact on instructional flexibility, the quality and quantity of classroom discussions, and the quality and quantity of instructional activities meant to enhance students' knowledge about content topics.

Based on the results, the differences between teachers in upper-grade, social studies classrooms may be related to the roles with which teachers identify. The literature states that content area teachers traditionally do not view themselves as teachers of reading (Fisher & Ivey, 2005; Grey, 1920; Moores, 2001; O'Brien & Stewart, 1990; O'Brien, Stewart, & Moje, 1995; Sanacore & Palumbo, 2009; Vacca & Vacca, 2010; among others). The data in the present study support this conclusion. Results clearly showed that the middle school teachers in this study identified more as teachers of reading than did the high school teachers. One reason could simply be the age of the students in their classrooms.
For example, the middle school teachers may view themselves as an extension of elementary school teachers, and thereby feel a responsibility to teach reading skills, whereas high school teachers may feel that if students cannot read by the time they are in high school, they cannot be remediated and that the delivery of content, then, is their sole responsibility.

What instructional approaches are used by teachers of the deaf in the upper-grade, social studies classroom? Findings reveal that instruction in the upper-grade, deaf education social studies classroom is highly teacher-centered and directive, and that teachers in these classrooms may not use instruction that is socially-mediated frequently or for long durations of instructional time. Data indicated that instruction that was more directive and teacher-centered, such as lecture, accounted for the largest percentage of observed instructional minutes for three of the four teachers. These instructional approaches were used more often, and for longer durations of instructional time, than were social-constructivist instructional approaches (e.g., teacher-facilitated learning and peer-to-peer learning). These findings are similar to those found in a case study by Maiorana-Basas (2013), where the teacher dedicated 49.9% of observed instructional time to activities that were teacher-centered and directive, and included few opportunities for socially-mediated learning. One of the four teachers in the present study was observed as having a "balanced" approach, dedicating similar percentages of instructional time to lecture and to socially-mediated learning activities. This teacher dedicated more instructional time to CAR skills than did any other teacher.

According to the literature, socially-mediated instruction supports critical thinking, text comprehension, and academic language development (Gee, 2008; Morrell, 2004; Stahl, 1994; Stewart & Kluwin, 2001; among others). Instructional approaches that are more lecture-based may restrict a teacher's opportunity to incorporate CAR skills for longer durations of instructional time. As was the case with the teacher described above, socially-mediated instructional practices may naturally support the integration of CAR in the content area classroom. This style of teaching allows the teacher to address these skills for longer periods of time and at higher levels of intensity than relying primarily on teacher-centered approaches such as lecture. As such, the use of socially-mediated instructional approaches is an important consideration for teachers when designing CAR instruction.

Unfortunately, teachers may be deterred from using socially-mediated instruction for at least two reasons. First, teachers may feel "conflicted" between how they want to instruct and how they feel they must instruct in order to meet the demands and criteria outlined by state and federal governments. Mandates such as No Child Left Behind (NCLB, 2002) and yearly state assessments press teachers to cover a significant amount of content within a limited amount of instructional time, as referenced by Woolsey and colleagues (2009) and the National Council for the Social Studies (2007), thus forcing them to choose a more direct instructional approach with little discussion or active learning. Second, teachers may lack a full understanding of what social-constructivist instructional approaches include.
For example, an activity that one teacher described as a "cooperative learning activity" resembled more of an independent seatwork activity. Similar misconceptions were observed in other teachers' classrooms.

Student independent work/activities. All four teachers incorporated student independent work/activities into their instruction to some extent; however, the types of activities and the level of support provided by teachers during these activities were clearly different for the middle school teachers than for the high school teachers. For example, the middle school teachers designed student independent work that included problem solving and critical thinking activities (e.g., activities that required students to compare and contrast information presented from various resources to come to conclusions about what really happened during a particular period in history, and activities that required students to look for clues in the text in order to solve a problem). During these activities, the middle school teachers provided high levels of support to students, guiding student thinking and supporting the learning of concepts through dialogue. As a result, CAR skills were addressed more often during student independent work activities in these classrooms, and included higher levels of teacher support and dialogue. Student independent work activities in the high school classrooms typically consisted of completing work packets (filling in the blanks, matching vocabulary, answering comprehension questions, etc.) or copying information off the board. During these activities, the high school teachers provided little to no support, and there was limited interaction between students. As a result, CAR skills were not observed during these activities.

Although teachers did not feel that there was enough instructional time to cover content or to incorporate reading instruction during each class period, extended periods of instructional time were observed where these types of learning activities could have been utilized. Both high school teachers were in classrooms that utilized block scheduling, yet both of these teachers maintained the highest durations of instructional time dedicated to student independent work, with the least amount of instructional support. For example, one-third (33.3%) of Teacher C's observed instruction was categorized as student independent learning, meaning that for 30 minutes of each 90-minute block, students were doing seatwork activities. Based on these results, it is possible that extended class periods, or block scheduling, may not be used effectively in the upper-grade, deaf education social studies classroom. The effectiveness of block scheduling and its impact on student achievement have been widely discussed and debated in the literature (e.g., Hackmann & Waters, 1998; Queen, 2000; Queen, Algozzine, & Eaddy, 1997; Skrobarcek, Chang, Thompson, Johnson, Atteberry, Westbrook, & Manus, 1997; among others). Queen (2000) pointed out that in classrooms where teachers used "traditional" instructional approaches (such as lecture), teachers had a difficult time adjusting to block schedules and tended to overuse or overextend lectures to fill time.
Queen (2000) commented that when implementing block scheduling, teachers often required additional instruction and staff development in order to understand how to plan for and how to effectively use large blocks of instructional time. The longer durations of student independent learning activities observed in the high school classrooms in the present study may then be due to a lack of instruction or knowledge about how to use large blocks of instructional time effectively. Regardless, the extended student independent work time in these classrooms could be used for working on CAR skills with students either independently or in small groups.

Content Area Reading Integration

What is the frequency, duration, and intensity of CAR integration in the upper-grade, deaf education social studies classroom? Findings from this study indicate that CAR skills are not integrated frequently or for long durations of instructional time in the upper-grade, deaf education social studies classroom. When teachers do integrate CAR, they typically do so at lower levels of intensity. Of all the identified CAR skills, background knowledge was the most frequently integrated (specifically activation of background knowledge), followed by text features and content-specific vocabulary. Activation of background knowledge also co-occurred the most frequently with other CAR skills, specifically content-specific vocabulary, text features, and inference. This finding suggests that activation of background knowledge may be an important, and perhaps necessary, component of other CAR skills. Inference and text structure were the least frequently integrated CAR skills by all teachers. According to the literature, inference supports reading comprehension (Brown & Brewer, 1996) and the use of higher-level questioning supports the skill of inference (Cain et al., 2004). This is consistent with findings from the observed instruction. Teachers who included higher-level questions frequently during instruction addressed the skill of inference more often than did teachers who did not frequently include higher-level questions. Inference and the use of higher-level questions also occurred most frequently during socially-mediated instruction, further supporting the idea that socially-mediated instruction may support CAR.

Co-occurrences. Based on the co-occurrences of CAR skills found in this study (see Table 11) and in a previously conducted pilot study (Maiorana-Basas, 2013), there is evidence to suggest that certain CAR skills may be linked to one another. For example, findings from the present study indicate that activation of background knowledge co-occurred most frequently with all other CAR skills. This finding suggests that activation of background knowledge may be an important, and perhaps necessary, component of all CAR skills. Based on this finding, it may be important to take into consideration whether and how each of the CAR skills are linked when developing strategies and educational interventions for supporting CAR during instruction. This analysis was beyond the scope of the current study.

Teacher Preconceptions

What preconceptions do upper-grade, social studies teachers of the deaf have regarding the ability of their students who are DHH to read and understand content area text in the social studies? Overall, findings from the teacher interviews revealed that all the teachers had similar preconceptions regarding their students' abilities to read and learn from content area text.
Generally, teachers reported that their students were capable of learning how to read and understand content area texts, but that most of their students required significant support and guidance during reading activities. All teachers indicated that the majority of the students in their classes struggled to comprehend 50% of what they read, and that even their very "best" readers were only able to comprehend about 70% of the information presented in text. All of the teachers made some reference to their students' struggles with language (both social language and academic language), and indicated that language levels had an impact on their students' abilities to think critically and to understand content-area texts. There were some identified differences among teachers regarding their preconceptions about the reasons for their students' struggles with text. Some teachers identified external factors such as lack of intensive reading instruction, little or no practice with answering comprehension questions, and few opportunities to practice reading content-area texts as reasons their students had difficulties comprehending text. These teachers also identified difficulties visualizing, making inferences while reading, and limited academic language proficiency as obstacles to comprehension of text among their students. Other teachers identified internal factors such as lack of motivation to read and complete assignments outside the classroom as reasons their students struggled with comprehension of content-related text.

Is there a relationship between these preconceptions and how teachers of the deaf integrate CAR skills during social studies instruction in the upper-grade, deaf education classroom? As did Gee (2008), Morrell (2004), and Oakes (1985), among others, the present study also points towards a relationship between teacher preconceptions about their students as readers and learners and instructional approaches. Teachers who identified external factors as reasons for their students' difficulties with comprehension (e.g., need for more intensive reading instruction) spent the most instructional time addressing CAR skills (see Figures 6 and 7), suggesting a relationship between preconceptions and instructional practices. Even though these teachers recognized their students as struggling readers and learners, their generally optimistic view of their students' potential seems to have had an effect on the amount of instructional time dedicated to the skills needed to mediate and comprehend complex texts. Teachers who identified external factors as reasons for their students' difficulties with comprehension of content texts provided relatively more opportunities for viewing, reading, and dissecting print-based information than did teachers who blamed internal factors (e.g., motivation) for poor reading outcomes. Results indicated that teachers who identified internal factors as reasons for their students' struggles with comprehension of text tended to incorporate CAR skills less frequently and at lower levels of intensity during instruction than did teachers who blamed external factors for their students' struggles. These teachers also used texts less often than did teachers who identified external factors as reasons for students' struggles with reading and comprehending content area texts.
Is there a relationship between teacher preconceptions about students' ability to learn from text and the instructional approaches used by these teachers in the upper-grade, deaf education social studies classroom? When examining the relationship between preconceptions and instructional approaches, the data are inconclusive. Only one teacher incorporated a relatively larger amount of socially-mediated instructional practices when teaching content. Two of the participating teachers had generally optimistic views of their students' potential as readers and learners, yet had very different instructional approaches. Of the two teachers who had generally optimistic views of their students as readers and learners, one had a balance of socially-mediated instruction and teacher-directed learning, while the other had longer durations of instructional time dedicated to teacher-directed learning. Teachers who had less optimistic views of their students as readers and learners tended to have instruction that was more teacher-directed than socially-mediated.

Based on the findings from this study, it is evident that participating teachers recognize the struggles that their DHH students have with navigating and comprehending complex texts, and as a result, do not typically incorporate reading assignments as a regular part of their instruction. Data also suggest that preconceptions of students as readers and learners may be related to how content is presented; however, pressure to cover content assessed on standardized tests, time, and resources seem to have a larger influence on the instructional practices and choices of these teacher participants. Overall, while preconceptions may not have had an impact on the types of instruction used in these classrooms, teacher preconceptions may have an impact on the duration and intensity of CAR integration. The teachers who had generally positive views about their students as readers and learners tended to incorporate CAR for longer durations and at higher levels of intensity than did teachers who did not have positive views about their students as readers and learners. While the data do provide some possible insights regarding how preconceptions impact instructional choices and CAR integration, more investigation is needed in order to generalize findings.

Suggestions for Teacher Preparation Programs and Professional Development

The teachers in the present study recognized the need for support and strategies to help their students navigate and understand content area texts; however, none of them seemed to have a comprehensive understanding of the CAR skills that were necessary for cultivating advanced literacy development in the social studies classroom. The following suggestions are thus provided to help inform and guide teacher preparation and professional development with regard to integrating coursework that fosters advanced literacy development in the content area classroom.

Preparation in instructional practices and CAR. Based on the data from this study, it is clear that content area teachers are not incorporating CAR skills or text-based learning for long durations of instructional time. These findings suggest that an emphasis on CAR is needed in the area of teacher preparation for pre-service and in-service teachers. Teacher preparation programs should consider designing courses that specifically address CAR, and schools should seek professional development with CAR as part of its agenda.
Ongoing preparation for teachers is also needed to provide pre-service and in-service teachers with an understanding of advanced literacy development. These courses and workshops should focus on methodologies for teaching adolescent readers the skills of CAR, as well as identify specific strategies that can be used to teach these skills, such as activation of background knowledge, using socially-mediated instructional techniques, and incorporating higher order questioning to support the skill of inference. Moreover, courses and workshops that target advanced literacy and content area reading should take into consideration that students who are DHH have unique learning needs, and are not "hearing children who can't hear" (Marschark, 2014). Strategies that may work with children who are hearing may not be appropriate or as effective for children who are DHH (Marschark & Hauser, 2008). Children who are DHH have less background knowledge than hearing peers (Andrews & Mason, 1991; Boyd & George, 1971; Bringham & Hartman, 2010; Dymock & Nicholson, 2010; Easterbrooks & Stephenson, 2006; Schirmer, 1997) and thus require instructional strategies that help foster background knowledge. The amount of background knowledge a student has about a certain topic has an effect on the student's ability to make elaborative and global inferences (Graesser, Singer, & Trabasso, 1994; Kispal, 2008; McMackin & Lawrence, 2001) and to understand content-specific vocabulary (Fisher & Frey, 2009; Marschark & Hauser, 2008; Stahl, 2008).

When designing courses for pre-service teachers and specialized professional development for in-service teachers, the following should be considered. First, an emphasis on the importance of reading and literacy instruction, along with fostering the concept that "all teachers are teachers of reading" (discussed further below), should be included. Second, an outline of the skills and strategies that foster CAR should be highlighted. For example, activation of background knowledge may be a core strategy for fostering other CAR skills such as building background knowledge, understanding content-specific vocabulary, understanding and interpreting text features, and making inferences. Therefore, coursework and workshops should emphasize strategies for activating background knowledge during instruction. Third, a focus on the use of higher-order questioning during instruction should be emphasized. Skills related to asking complex questions and mediation strategies for helping students understand complex questions should be included as part of the curricula adopted by schools and districts, as these skills are tied to the CAR skill of inference (Cain et al., 2004), an important component for reading and comprehending content-area texts (Graesser, Singer, & Trabasso, 1994; Kispal, 2008; McMackin & Lawrence, 2001). Finally, since text structure was the least integrated CAR skill in both the present study and the pilot study (Maiorana-Basas, 2013), inclusion of strategies for identifying text structures and using text structures to support the understanding and comprehension of content area texts is necessary. While more research is needed on the overall effects of identifying and understanding text structures on reading comprehension, there is some evidence to support the role of text structure in comprehension of complex texts such as those used in the content areas (Bringham & Hartman, 2010; Chall, 1996; Marschark, Lang, & Albertini, 2002; Moores, 2001; Strassman, 1997; among others).
Preparation in language and communication. A teacher's ability to be a skilled communicator in the language and communication modalities of students in the deaf education classroom is a best practice, as identified by Easterbrooks and Stephenson (2006), and is listed as one of the ten standards of knowledge and skill required of all beginning teachers (Council for Exceptional Children, 2003). Programs that are designed to prepare teachers to work in educational settings that use American Sign Language (ASL) or other sign-based communication systems as the mode of communication for instruction should consider including advanced ASL courses that include in-depth study of academic ASL. In order to address CAR skills such as content-specific vocabulary, teachers in these educational settings must have knowledge of the signs for these higher academic concepts. Courses should include instruction in translating content-specific vocabulary from English to ASL and instruction in the use of classifiers to explain complex topics and concepts discussed in the content area classroom. A teacher's sign choice and execution directly impact students' abilities to understand content knowledge (Easterbrooks & Stephenson, 2006), including the ability to correctly and effectively ask questions and foster meaningful discussion during content instruction.

Preparation that fosters the belief that all teachers are teachers of reading. An important consideration for both teacher preparation programs and in-service professional development is a focus on fostering the belief that all teachers are teachers of reading, especially for teachers who plan to teach content in the upper grades. Results from the present study suggest that teachers in the upper-grade, deaf education social studies classroom may not incorporate reading as part of their instructional practices often, or at all. Easterbrooks and Stephenson (2006) point out that reading skills are a crucial component for success in the content areas, and reading instruction in the content areas is especially important for struggling readers. Given the current literacy outcomes for students who are DHH (Easterbrooks & Beal-Alvarez, 2012; Kelly & Barac-Cikoja, 2007; Traxler, 2000), it is even more critical for these students to have instruction that specifically addresses CAR beyond the elementary years.

Limitations

Like all research, this study presents its own strengths and limitations.

Researcher bias. Because data were collected firsthand, there remains a potential for bias in their interpretation. In order to reduce bias, several procedures were included. First, at the mid and end points of data collection, during the response-to-instruction meetings, all teachers were given an opportunity to view their video-recorded instruction and provide commentary as to what was happening and why specific instructional practices and strategies were used. Those meetings were also used as a member check to clarify and contextualize the researcher's observations and field notes. Member checks are an important tool for improving the accuracy and credibility of a study's findings, and give participants an opportunity to clarify and correct any misconceptions of the researcher (Walters, 2006). To further reduce bias, video recorded sessions were reviewed multiple times and were compared to additional information sources (e.g., field notes, teacher interviews, and teacher comments during response-to-instruction meetings) before any conclusions were generated.
Video recordings provide an accurate and detailed account of what was observed and are more reliable than field notes alone (Baily, 2008). In addition, inter-rater agreement measures help to establish the reliability of qualitative findings through the use of quantitative measures (Marques & McCall, 2005). Ten percent of all analyzed video was given to an independent inter-rater and coded for inter-rater agreement (averaging 85.2% agreement for instructional codes and 66.6% agreement for CAR codes) as a means to reduce bias in the analysis of instructional approaches and CAR integration. Inter-rater agreement measures also increased the reliability of findings.

Researcher presence. While all efforts were made not to influence instructional practices, it is possible that the researcher’s presence alone could have had an effect on the behaviors of the teacher and the students. In an attempt to reduce this influence, an introductory session was scheduled to give students an opportunity to meet the researcher and ask questions about the study and the research process. Meetings were also scheduled with each of the teachers prior to data collection to address any questions and concerns. As an added measure to reduce distraction, all video recording equipment was set up during transition periods in an out-of-the-way location in the classroom.

Sample. Data collection was restricted to residential schools for the deaf and classrooms where sign-based communication was the primary method of delivering content. While every attempt was made to find classrooms that had the same communication philosophy, two of the participating schools followed a bilingual/bicultural communication philosophy and one school followed a total communication philosophy. Programs that use other communication methods, such as listening and spoken language, were not included. These differences in communication philosophies and settings could have biased the data and do limit the generalizability of findings. According to www.deafed.net, there are 60 residential schools for the deaf in the United States. The current study took place in three different residential schools for the deaf, representing only 5% of all schools for the deaf, and an even lower percentage of upper-grade social studies teachers of the deaf from these schools. Other educational settings where students who are DHH are educated (e.g., general education, resource rooms, inclusion classrooms, etc.) were not included. While generalizability may be limited for this study, the findings do provide some initial insights into the instructional practices of upper-grade teachers, as well as into how teachers integrate CAR during instruction, and can serve as a foundation for building the literature base in this area.

Observations. The teachers in the study did not all teach the same unit of study. Although the focus of the study was restricted to one subject area (social studies), differences among the participating classrooms confound comparisons of findings across teachers, as instructional approaches may change based on the topic of study. Although two of the teacher participants (the middle school teachers) taught the same unit topic (The American Revolution), they did not use the same materials or texts, and they covered different topics within the scope of the unit. The high school teachers taught completely different topics (The Kennedy Years and The Progressive Era). 
Because of these differences, student learning and growth were not measured in the present study, making it difficult to discern how certain instructional approaches and CAR integration directly influence student learning and growth.

Data analysis. The frequency of questioning for all teacher participants should be interpreted with caution. As part of the coding rules, a Questioning code was applied each time a question was asked. For example, if a teacher repeated a question multiple times in short succession (e.g., Who is this? Who is this in this picture? Does anyone know who this is?), a Questioning code was applied for each question, which could have inflated frequencies for questions in some categories. Typically, lower-level questions, such as knowledge and comprehension questions, tended to be repeated multiple times.

Directions for Future Research

Content area reading is an understudied area, especially in the field of deaf education. The current study represents the first of its kind to investigate the frequency, duration, and intensity of instructional and CAR behaviors in the upper-grade, social studies deaf education classroom, thus providing a baseline for future research. Several recommendations are provided for such studies. First, investigating instructional practices and CAR integration in other content areas (e.g., mathematics, science), especially in the upper grades, is a logical next step in expanding the findings of the present study, as these subject areas were not included in the present study and are also under-researched in deaf education. Teachers in different content areas, such as mathematics or science, may include even less CAR and text-based learning during instruction, as those subjects tend to include more application, practice, and hands-on learning activities (e.g., conducting experiments, observing data, practicing mathematics skills). Second, investigating instructional practices and CAR integration in deaf education classrooms using different philosophies of communication (e.g., listening and spoken language) and in different educational settings (e.g., general education, resource rooms, inclusion classrooms) will provide additional information to document practice and increase generalizability. Findings may differ, for example, in educational settings where students are not learning and reading content in two different languages. Some areas to investigate include the frequency and duration of instructional practices used in deaf education classrooms that use a listening and spoken language approach; the frequency, duration, and intensity of CAR skills used in deaf education classrooms that use a listening and spoken language approach; and teachers’ preconceptions about their students as readers and learners in classrooms that use listening and spoken language to communicate. A third recommendation for future research is an investigation of specific evidence-based instructional strategies that support CAR skills, especially those most overlooked by teachers, such as text structure, inference, and the use of higher-level questioning in the upper-grade, deaf education content classroom. Intervention studies should be designed to target text structure, inference, and the use of higher-level questioning in the content area classroom. 
Investigation of interventions that target these skills and that take into account the unique learning needs of students who are DHH will provide support for teachers when designing and planning instruction, and may help improve the academic reading skills of students who are DHH. It would also be beneficial to investigate further what teachers in the deaf education classroom already know about CAR and advanced literacy development, in addition to how teachers of the deaf perceive their students as readers and learners. A fourth recommendation, then, is for researchers to investigate these areas through interviews, focus groups, or survey research. Data from these sources would provide better information about areas of need when developing intervention research and could help guide the development of professional development opportunities related to CAR. Since student achievement was not measured in the present study, a fifth recommendation is for researchers to investigate how specific instructional approaches affect student learning and achievement in the content area classroom, specifically with regard to reading proficiency. While findings suggest that teachers who favor a social-constructivist approach to instruction may incorporate CAR skills at higher levels of intensity and ask more complex questions during instruction, more investigation is needed in order to establish an evidence base for this practice in the deaf education classroom. Finally, it may also be beneficial to investigate how teachers of the deaf in upper-grade classrooms foster academic language proficiency during instruction, since this was a point of concern for all teacher participants in this study and in the pilot study.

Conclusions

Findings from the present study revealed that upper-grade, social studies teachers in the deaf education classroom are not incorporating CAR for long durations of instructional time and address only a limited range of CAR strategies. Data suggest that the lack of CAR integration may be due to teachers not having enough knowledge about CAR skills to incorporate them into instruction. Teachers may not structure their instruction so as to provide opportunities for the inclusion of such strategies, focusing instead on directive, teacher-centered models that do not lend themselves to integrating CAR for long durations of time or at high levels of intensity. Socially-mediated instruction, such as discussion and peer-to-peer learning, may lend itself to addressing specific skills for longer durations of time; other CAR skills, such as inference, which according to the literature is typically measured through questions (Kispal, 2008), might be addressed as a natural outcome of discussion. While it is not yet known what types of instructional approaches are best suited to foster CAR, inclusion of socially-mediated instruction may be an important consideration when designing interventions that target CAR.

Teacher preconceptions about their students as readers and learners may influence their instructional practices. Believing that their students cannot read, or cannot read at the level of instruction, and that reading instruction is not their responsibility may lead teachers to adopt a more directed instructional style when teaching content material and to dedicate little of their time to CAR. Additionally, these beliefs resulted in teachers not assigning reading outside of class and avoiding text-based learning. 
As a result, students spent little time interacting with content area texts and relied on the teacher to deliver all information related to content. Further, policy and state mandates that dictate what teachers should teach, how much time they have to teach it, and how students will be assessed on what they learn may have a strong influence on how content material is taught, and consequently on the amount of instructional time dedicated to teaching advanced literacy skills. The present study serves as a starting point for investigating teacher practice in relation to CAR with DHH adolescents. Findings can be used as a springboard for future studies investigating the types of instructional strategies that are most beneficial for supporting CAR and advanced literacy development for adolescent students who are DHH.

APPENDICES

Appendix A: Research Approval Letter

Date:
Michella Maiorana-Basas 866 E. Grand River #56 Brighton, MI 48116 maiora12@msu.edu
RE: Request to conduct research at _________(name of school here)__________
Dear Michella Maiorana-Basas,
The research study entitled “Content Area Reading in the Deaf Education Classroom: An Instrumental Case Study of Four Teachers” is authorized to be conducted at the ________(name of school here)______. Data collected for this study, conducted at ________(name of school here)______, are not subject to approval by any outside committee or Institutional Review Board other than what is required by Michigan State University. Because you have received final approval from Michigan State University’s Institutional Review Board, you have the permission of ________(name of school here)______ to begin participant recruitment and data collection.
Sincerely,
Name of person in charge of approving research
Address of School
Phone/email contact of person in charge of approving research

Appendix B: Teacher Participant Consent Form

Dear _________________,
You have been invited to participate in a research study about reading and literacy in social studies. The purpose of this study is to describe how teachers of the deaf incorporate literacy while teaching content area subjects (such as social studies) and how the reading achievement of students who are deaf and hard of hearing develops over half a school year. The researcher will observe instruction in your classroom for ten instructional sessions, three to four times a week for a period of four to five weeks. Before the observational sessions begin, the researcher will conduct a short interview with you (approximately 30-45 min) and may ask questions throughout the period of the study. A video camera will be used to film the interview and classroom instruction for the purposes of note taking and data analysis. All records, videos, and information collected during the study will be kept private, and your name or likeness will not be used in any way. This consent form, along with the results of the study, will be stored in a secure, locked location for seven years after the completion of the study. There are no foreseeable risks for participating in this study. Participation is completely voluntary, and you have the right to opt out of participation at any time. Should you have any questions regarding the nature of the study, please ask them before signing and agreeing to participate. Any questions you may have can be directed to me at maiora12@msu.edu. I am also happy to meet with you in person, by phone, or by Skype should you need further explanation of the project. 
Thank you, Michella Maiorana-Basas Doctoral Candidate and NLCSD Fellow Michigan State University Yes, I give permission for the researcher to collect data in my classroom. I understand that my identity and likeness will not be used or revealed in any way, and that I have the right to opt out of participation at any time during the study. No, I do not give permission for the researcher to collect data in my classroom. I understand that there is no penalty for opting out of this study. Signature of Teacher Participant __________________________ Date ____________ Signature of Investigator ________________________________ Date ____________ 211 Appendix C: Parent Consent Form Date: Dear ____________________, Your child is invited to participate in a research study about instruction in the content area deaf education classroom. You are receiving this invitation because your child’s teacher has agreed to participate. The purpose of the study is to describe how teachers in the deaf education content classroom (such as social studies) instruct their students. A researcher will observe your child’s classroom, several times a week over a period of three to five weeks (for a total of 10 instructional sessions). Before the researcher begins observing, your child will be given a short reading assessment. The purpose of the assessment is to understand how well your child reads and understands content area texts (such as a social studies text book). Most schools already administer regular reading assessments. Your child will not be subjected to excessive or additional testing for the purposes of this study. The results of your child’s assessment will not affect their grade in any way, and will be explained to their classroom teacher at the end of the research study. Your child’s scores will also be sent home to you, along with a brief explanation. Video cameras will be used to film classroom instruction for the purposes of note taking and data analysis. All records, video, and information collected during the study will be kept private. Your child’s name or likeness will not be used in any way. Your child’s name will be removed from any documents, and replaced with an assigned number to protect their privacy. There are no foreseeable risks for participating in this study. Participation is voluntary. You may choose not to allow your child to participate at all, or you may refuse to allow your child to participate in certain procedures or answer certain questions. You may choose to discontinue your child’s participation at any time without consequence. Full, partial or lack of participation will not affect the treatment your child receives, nor will it affect their grades or evaluations in any way. 212 This consent form will be stored in a secure location, along with results of the study, for five years after the completion of the study. If you have concerns or questions about this study, such as scientific issues, how to do any part of it, or to report an injury, please contact the researcher (Michella Maiorana-Basas) at 866 East Grand River #56 Brighton, MI 48116; maiora12@msu.edu; or 808-282-4258. Thank you, Michella Maiorana-Basas Doctoral Candidate at Michigan State University and NLCSD Fellow Your signature below means that you voluntarily agree to allow your child to participate in this research study. 
Signature of Parent/ Legal Guardian __________________________ Date ________________ 213 Appendix D: Parent Consent Form (Spanish Translation) Fecha: Estimado ______________, Su hijo esta invitado a participar en un estudio para la investigación de estudiantes con problema de audición en la clase de estudios sociales. Usted está recibiendo esta invitación porque el maestro de su hijo a acordado participar. El objetivo del esté studio es describer cómo los profesores de niños con problema de audición enseñan a sus alumnos en los estudios sociales. Un investigador entrará en la clase de su hijo para observar su maestro diez días durante varias semanas. Al inicio del estudio, su hijo se le dará una prueba de lectura corta. El propósito de la prueba es entender cómo su hijo lee su libro de estudios sociales. La mayoría de las escuelas ya administrar las pruebas de lectura con frecuencia. Su hijo no se le dará pruebas excesiva durante este estudio. Resultados de las pruebas de su hijo no afectará sus notas. Los resultados se explican a la maestra de su hijo cuando el estudio se ha completado. Los resultados serán enviados a casa con una breve explicación. Cámaras de video ser utilizados para registrar la instrucción en la clase. El vídeo será utilizado por el investigador para tomar notas y analzing datos. Todos los videos y la información recogida durante el estudio será privado. El nombre o imagen de su hijo no se utilizarán de ninguna manera. El nombre de su hijo será retirado de todos los documentos, y reemplazado con un número para proteger su privacidad. No se perceve probelmas por participar en este estudio y la participación es voluntaria. Usted puede decidir si su hijo no va a participar en este estudio. También puede optar permitir que su hijo participe sólo en ciertos procedimientos. Usted puede decidir suspender la participación de su hijo en cualquier momento sin consecuencias. No participar o participar parcialmente, o participar por completo, no afectará a la forma de tratar a su hijo, y no afectará a sus notas de ninguna manera. Este formulario de consentimiento se mantendrá en un lugar seguro, junto con los resultados del estudio, durante cinco años después de la finalización del estudio. Si tiene dudas o preguntas acerca de este estudio, sobre procedimientos científicos o para reportar un daño, por favor póngase en contacto con la investigadora (Michella Maiorana-Basas) a 866 East Grand River #56 Brighton, MI 48116; maiora12@msu.edu; or 808-282-4258. Gracias por su cooperación, Michella Maiorana-Basas Candidato Doctoral en la Universidad del Estado de Michigan y miembro del NLCSD Su firma significa que está de acuerdo con la participación de su hijo en este estudio de investigación. Firma del Padre / Tutor Legal __________________________ Fecha ____________ 214 Appendix E: Student Assent Form Dear _______________, Your teacher has agreed to participate in a research study about reading in the social studies classroom. The reason for the student is to describe how your teacher helps you learn reading skills while teaching social studies. The researcher would also like to know how your reading improves over time by giving you a short reading test at the beginning of the study and again right before the winter recess. The researcher will also use video cameras to help remember what happened in class each day. The researcher will not show the videos to anyone, and any information about you will be kept private. 
You are allowed to decide (yes or no) if you would like to be a part of this study. Even if your parents said yes, you can still say no. You can even decide later on that you do not want to be in the study by telling your teacher or the researcher. There are not any consequences for saying “no”. You can ask the researcher questions about the study, anytime, before or after class. Thank you, Michella Maiorana-Basas Doctoral Candidate and NLCSD Fellow Michigan State University Yes, I give permission for the researcher to observe me during my social studies class and to give me a short reading test at the beginning of the school year and again just before winter recess. I understand that it is ok to tell the researcher if I decide I do not want to be a part of the study any longer, and that I will not be punished in any way. No, I do not give permission for the researcher to observe me during my social studies class or to give me a short reading test at the beginning of the school year, and again just before winter recess. I understand that it is ok to say “no” to participating, and that I will not be punished in any way. Signature of the Student Participant _________________________ Date ___________ Signature of Investigator _________________________________ Date ___________ 215 Appendix F: Sample of a Level 5 Expository QRI-4 Passage 216 Appendix G: Semi-structured Interview Questions 1. Based on my observations, it appears that your primary approach to teaching content is (lecture, guided reading, text-based learning, hands-on activities, discussion, a combination of these, other). Why have you chosen ____ to be your primary approach to teaching content material? 2. In your opinion, what is the most effective way to deliver content material to your students (e.g. lecture, guided reading, text-based learning, hands on activities, discussion, a combination of these, other)? 3. If you had to rate the following (lecture, guided reading, text-based learning, hands on activities, discussion) in the order of effectiveness in delivering content material to your students, which would you give the highest rating? Lowest rating? 4. What is the range of reading levels of students enrolled in this class? (If the teacher does not know, I will reframe as: What would you estimate the average reading level of your students to be?) 5. As a content area teacher, do you feel it is your job to teach reading skills to your students? (Why/ why not?) (If “no” is given as a response, follow-up question: Who is responsible for teaching reading to your students?) 6. During my observations, I noticed that you typically use (or don’t use) the following types of reading materials in your classroom (text books, leveled readers, scholastic magazines, other?), what is your rationale for choosing these types of materials? 7. What types of skills do you believe are necessary for students to read and comprehend content area text? 8. How many students (your estimate) are able to grasp key information from content area text alone? (Follow up, how many grasp 100%, 50%, 25%, lower than that…?) 9. If you had to choose, is it more important to teach the content or to teach the students how to read to learn content information on their own? (Follow up: Why or why not?) 10. Do you assign content reading outside of class? (Follow up: Why or why not?) 11. Do you assign vocabulary related to content? (Follow up: Why or why not?) 12. 
How well do you think your students can communicate and articulate their thoughts and ideas about content-related material linguistically (e.g. sign/ speech/ sign and speech, respectively)? 13. What is your biggest frustration regarding content instruction? (Follow up: Do you have any frustrations regarding your student’s ability to read and comprehend content area text? Successes to share? 14. Do your students believe they are good readers (your opinion)? 217 15. Do you believe your students are good readers? (Follow up: Do you think your students are capable of being good/ proficient readers?) 16. This question will be asked re: each student participant: Tell me about student “x”. Is student “x” a good reader? Why or why not? (Follow up: What makes student “x” a good reader?) 218 Appendix H: Codebook The following codes will be used to analyze video data of instructional and content area reading (CAR) practices observed during 10 instructional sessions of four teachers of the deaf in upper-grade social studies classrooms. Instruction Codes*: Delivering Content Material These codes are exclusive and will not overlap in the coding process. Only “Peer-to-Peer Learning” and “Student Independent Work/Activity” codes are weighted… all other instructional practices do not have a weighting system. Non-instructional: Any transition that is longer than 10 seconds (e.g., restroom breaks, moving locations, breaking into learning groups, dealing with significant behavior issues) or any instance when the focus is not on content material (e.g., schedule change reminders, collection or passing out of materials, distributing incentives to students, talking about the inclement weather, school events, fire drills, “housekeeping” items such as grades, standardized test scores, etc.). If the teacher is giving instructions about what the students are supposed to do for a specific independent assignment or project, this will be coded as “Teacher Giving Directions/ Instructions”. Non-instructional codes are not weighted. Teacher-Directed Learning: Instruction that is directed, explicit, and primarily delivered by the teacher through lecturing or convergent discourse (attempting to elicit a correct answer during questioning or attempting to elicit a correct procedure, skill, or strategy from students through direct teacher prompting). This might be observed during a lecture, read-aloud, completing a handout/worksheet as a class where the teacher is guiding the students step-by-step, viewing a video* when the teacher is translating and elaborating, and/or through discussions** between teacher and student(s) that are directed or explicit (limited to eliciting a “correct” response from a student). This code will also be applied during instances when the teacher is writing something on the board that is directly related to instruction (e.g., writing a vocabulary word on the board, or drawing a diagram to help students understand content material. Not a homework assignment or other non-instructional related material.). This code is representative of instruction that is more directed, dominated, authoritative, and goal/skill directed. Teacher-Directed Learning codes are not weighted. Teacher-Facilitated Learning: Instruction that is less directed, and more facilitative, and attempts to extend student’s thinking through discussions** and conversations, providing students with an opportunity to construct meaning without the answer being explicitly provided by the teacher. 
This might be observed when the teacher is scaffolding instruction or during a discussion with a student(s) that is more conversational in nature, allowing the student(s) to explore multiple angles of a particular idea or answer and to reach their own conclusions about a particular idea or answer. Teacher-Facilitated Learning codes are not weighted. 219 Peer- to-Peer learning: Instruction or knowledge building that occurs between students (with minimal prompting from the teacher). This includes instances such as having a debate, students standing in front of the class to explain a concept or give their point of view, students working collaboratively to complete an activity or solve a problem, and discussion that occurs between students with minimal prompting/ elaboration from the teacher. Peer-to-Peer learning is weighted at 2 levels of intensity (described below). Weight Level 1: Instances when the student is explaining a concept to the class and information is coming directly from the student with minimal teacher prompting/guidance. Weight Level 2: Instances when peers are learning from each other through a conversation, discussion, or debate between students with minimal teacher prompting/guidance. Hands-On Instruction: Any instance when students are doing hands-on learning activities (e.g., matching vocabulary cards, holding a faux election, building/creating of any kind), when the teacher is monitoring progress and providing direction and or clarification to guide students learning. Teacher-Guided Hands-On Instruction codes are not weighted. Student Independent Work/Activity: Instances when students are given an assignment to work on during class. This code will also be used during formal paper and pencil type assessments. In instances when the teacher provides directions or clarification on test items will be coded as “Teacher Giving Directions/ Instructions”. If the assessment is interactive in nature, this will be coded as either “Teacher-Directed Learning” or “Teacher-Facilitated Learning”, depending on the context. Student Independent Work/Activity is weighted at 2 levels of intensity (described below). If students are off task, but their instructions are to do an independent assignment, this code will still be applied. In these instances, context will be provided in the description of the class session. Weight Level 1: Any instance when students are working on an assignment in class, without any guidance/facilitation by the teacher. Instances of students working together as a group will be coded as “Peer-to-Peer Learning” (only in instances when the students are actually interacting with one another. If students are physically in a group, or are told to work in groups, but are, in reality, not interacting with one another, this action will then be coded as “Student Independent Work/Activity). This includes copying content related material from the board. Weight Level 2: Any instance when the class is working independently on an assignment, but the teacher is "floating", providing minimal guidance or monitoring of individuals working. This is not instruction, just guidance and progress monitoring. Some students may need 1:1 assistance as 220 indicated on their IEP's as well. Instances where students are doing independent work/activities and the teacher is providing instruction and/or scaffolding, will be coded as either "Teacher-Directed Learning" or "Teacher-Facilitated Learning". 
Teacher Giving Directions/Instructions: Any instance longer than 10 seconds when the teacher is giving detailed directions regarding a task, assignment, class project, etc. This code will still be used when students are asking questions about a task, assignment, class project, etc., and the teacher clarifies or provides further explanation. This includes discussion about where to find resources and materials to complete a task, assignment, class project, etc. This may also occur in instances when the directions are not related to contentspecific instruction. For example, the teacher may say, “I would like you to get a green marker and a purple marker to fill in this map….” or “Let’s read the directions together…” (while referring to the written directions on an assignment) or “I want you to take the next 10 minutes to work on your class projects”. “Teacher Giving Directions/Instructions” codes are not weighted. Other: Any instance when there is a guest speaker, video, or other instruction that is not delivered primarily by the teacher or students. (e.g., passively watching a video without teacher support or elaboration, note: if the teacher is “interpreting” a video, this will be coded as “Teacher-Directed Learning” since the information is coming from the teacher, or instances when the class is attending a presentation delivered by another class, teacher, or guest speaker). “Other” codes are not weighted. * Passively viewing a video will be coded as “Other”. **Discussions that are between peers or that are primarily peer directed (whether through the teacher or across peers) will be coded as “Peer-to-Peer Level 2. Content Area Reading (CAR) and Other Literacy Skills Integration* *These codes are not exclusive and can overlap with one another. These codes specifically fall under “Teacher-Directed and Teacher-Facilitated” instructional practice codes (as they are meant for analyzing teacher behaviors only*). These codes would not be utilized in cases where a student asks a question or uses one of the five CAR skills and/or “other” literacy or content skills. For example, if a student is having a discussion with another student and uses text features, the segment would not be coded as “Attention to Text Features” as it is a student practice and not a teacher practice. *There may be FEW instances when a CAR code “bleeds” into a student code, for example, “Peer-to-Peer Level 1”. CAR codes may only “bleed” into these codes if they are initiated during “Teacher-Directed” or “Teacher-Facilitated” codes, and continue through a portion of a student code. 221 For the 5 CAR skills and teacher questioning, a weighted system was used to determine the level of complexity/intensity. CAR Skill 1: Background Knowledge. This skill is separated into two distinct categories: Activation of Background Knowledge and Building Background Knowledge. See below for further descriptions of these categories. Activation of Background Knowledge. Instances when the teacher makes connections to or refers to previous lessons, student knowledge/experiences, etc. Activation of Background Knowledge is weighted at 3 levels of intensity (described below). Weight Level 1: Instances when the teacher makes an attempt to determine what students know, don’t know, or have misconceptions about regarding content topics. This is typically accomplished through questioning at the knowledge level, or level 1 questioning (see questioning code levels for further clarification). 
Examples: What did we talk about yesterday?; What is an election?; Does anyone know what the term “progressivism” means?; Remember, this is what happened? This also includes instances when the teacher has students revisit a web page or when reviewing a video or photos. Weight Level 2: Instances when the teacher guides students to use a particular resource to assist them in the recall of information previously read or learned. (If the students use a resource without teacher prompting, for example, the teacher asks them a question and the students use a resource, this DOES NOT COUNT. Only in instances when the teacher prompts the students use a resource is coded). Examples: Teacher asking students, “Look back in your books on page 32. Check your notes from last week’s lesson. Have you seen in the news lately?” Having students do a quick write on an assigned topic. Weight Level 3: Instances when the teacher provides a comprehensive review of material before introducing a new concept or before building on a new concept through a lecture, discussion, brainstorming activity, acting something out, or some other experience. (Note: This is code is not applied in instances when the teacher is adding or “building” knowledge. This code is only for instances when there is a review or “activation” of prior knowledge or learning. In instances when the teacher is adding or “building” upon student’s knowledge, that will be coded as “Building Background Knowledge”. See below for a further description of coding “Building Background Knowledge”). 222 Examples: Having a discussion about previously learned content material as a review before introducing new information. Asking students to provide a comprehensive review of what they learned in a previous lesson through a discussion facilitated by the teacher. Relating familiar experiences to new experiences as a means to activate background knowledge (e.g., linking learning to living). Building Background Knowledge. Instances when the teacher builds upon or adds to student knowledge/experiences, etc. Building Background Knowledge is weighted at 3 levels of intensity (described below). Weight Level 1: Instances when the teacher clarifies or corrects student’s misconceptions about a specific concept or idea. Helping students connect and apply their background knowledge to understand a topic more deeply through comparison or analysis. Linking information to what the students know and building on that. Using a personal story to “parallel” a concept in order to build upon that. Using student’s personal experiences to parallel or connect to new information. Making connections to what students know at the most basic level. Examples: Student sees the word “progressivism” written on the board and makes a connection with Progressive Insurance. Teacher clarifies the use and meaning of the word and makes a comparison with how the word is used as the name of a company, and how the word is used within the context of the lesson on political progressivism in the 1920’s. Pointing to a map and saying “This is where PA would be in the 13 colonies, if PA was a state back then”. Weight Level 2: Instances when the teacher makes explicit connections through instruction, modeling, and/or student discussions between previous learning with the learning of new information, concepts, or ideas. Examples: Watching a film or video clip. Exploring a virtual museum on the Internet. Using interactive software or an interactive website. Going on a field trip. 
Direct experiences such as conducting a faux election, telling a personal story, “suppose you were a …” scenario. Having a discussion with students. Connecting and building upon what students know as a way to build new knowledge, for example, discussing idiomatic phrases or words with multiple meanings. “A battle is a small part of a war. A war is made of many battles. All together, all of these battles are one war.” 223 Weight Level 3: Instances where the teacher creates cooperative learning opportunities for students to use experiences to develop/expand background knowledge. These are what are referred to as “direct types of building background knowledge” as defined by Fisher and Frey (2010). Examples: Language Experience Activities, conducting a faux election and having students tally and report the results based on an “electoral college” system developed by the teacher to understand the results of the presidential election, and then having students discuss/ debate findings, etc. Acting out concepts in class. CAR Skill 2: Text Structure. Instances when the teacher identifies and explains how a text is organized. For example: “This text is organized by types of animals in Australia. This section is about koalas, this section is about kangaroos, and this section is about bandicoots”. Text Structure is weighted at 4 levels of intensity (described below). Weight Level 1: Instances when the teacher identifies/acknowledges the structure of a text without in depth explanation, or instances of text structure at the sentence or paragraph level. Examples: This chapter is organized as a sequence; this chapter is organized by categories; this article is organized as cause and effect; etc.) The teacher making the statement, “This page explains the British view”, or “The first sentence in this paragraph is going to let you know what details will be explained in that paragraph”. Weight Level 2: Instances when the teacher explains and instructs how and why a text is organized in a specific way. Examples: This chapter is organized sequentially because it talks about what happened in history in order. This event happened first, this event happened next, and this event happened last. Weight Level 3: Instances when the teacher identifies and compares examples of key words and phrases that help students understand how texts are organized. Examples: When the teacher explains “because/as a result/the effects ofare typically found in texts organized in a cause/effect, where terms such as: first/second/third/next/ initially/finally/before/after are words typically found in texts that are organized sequentially”. Weight Level 4: Instances when the teacher guides students in comparing and contrasting texts that are on the same subject but organized differently or guides students in organizing textual information from a variety of sources and organizing them in a logical way. 224 Examples: Assisting students in collecting information from various sources about animals in Australia to make a class “book” or report, and organizing the information categorically. CAR Skill 3: Text Features. Instances when the teacher acknowledges or uses text features such as visual aids, maps, charts, graphs, headings, titles, sub headings, glossary, bulleted lists, captions (under an image), italicized, underlined, and boldface text. For example: having students interpret a graph that represents the presidential election results. Text Features is weighted at 4 levels of intensity (described below). 
NOT coded when the teacher refers to/points to a text feature using it as a pronoun (e.g., pointing to a photo and saying “he”/“she”, without directly identifying or explaining the image, etc.). Weight Level 1: Instances when the teacher identifies/acknowledges a text feature (any image, chart, heading, map, graph, bold, or italicized word or any information presented visually) without an in depth explanation (not in cases when a text feature is used as a pronoun in ASL). Examples: This is a picture of president Obama; this is a map of Australia; This heading says “election results”; this is a chart of the Koala population in 1900, this word is bold, etc. Weight Level 2: Instances when the teacher uses/explains the basic features and/or functions of a text feature. Examples: Identifying the key on a map or chart and understand what symbols mean; explaining the headings within a chapter and how to navigate the text based on the headings, etc. The teacher asking, “what is the title of this chapter”? The teacher stating, “We are on chapter 5, lesson 5. Use the front of your textbook to find which page it is on” (referring to the table of contents). Weight Level 3: Instances when the teacher uses text features and/or the functions of a text feature to extrapolate and/or interpret information. Examples: Guiding the students in understanding how to interpret the information presented in a key/ legend to help students interpret the information presented on a map or chart. Analysis of a photo. Making inferences about a topic through analysis of a photo. Weight Level 4: Instances when the teacher guides or prompts students to use information from more than one text feature (e.g., charts and headings, or photos and maps; or bolded words, photos and captions) to develop a deeper understanding of information presented in the text. 225 Examples: Using a map and a photo together to develop a deep understanding of a concept. For example, showing a photo of soldiers camping out in terrible conditions next to a map of their wins and losses in battle to create a better understanding of the conditions and struggles of fighting the war. Comparing/contrasting 2 images. CAR Skill 4: Content-Specific Vocabulary. Instances when the teacher uses or provides instruction of content-specific vocabulary. For the purposes of this study, contentspecific vocabulary will be defined as any word(s) or phrase(s) that are pertinent to understanding content material. For example: in a lesson about the presidential election, content-specific vocabulary words would be “election, vote, candidate, electoral college, ballot, precinct, etc.) Content-Specific Vocabulary is weighted at 4 levels of intensity (described below). Weight Level 1: Any instance of decoding, recalling, defining, or matching content-specific vocabulary. This can occur during instruction and/or discussions with students to identify words at the “meaning” level only (not a breakdown or analysis or word parts or a comparison of multiple meanings). Limited to word identification. Examples: The word “election” means a specific time when people of voting age all come together to vote for something. Weight Level 2: Instances when the teacher guides students in applying the use of content-specific vocabulary. Examples: Prompting students to use the word “eucalyptus” instead of “leaf” when discussing or writing about what koalas eat. Guiding students in using content-specific vocabulary to understand the meaning of a word or phrase within the context of text. 
Making connections/giving context to vocabulary by providing examples or having a discussion around the meaning of a vocabulary word. Weight Level 3: Any instance of analyzing the composition of a word based on its parts (root, suffix, prefix) to understand its full meaning. Examples: Biped breaks down to “2 feet”. Discussion of multiple meanings and similar word pairs (militia v. military; appointed v. appointment). Weight Level 4: Instances when the teacher discusses and compares multiple meanings of words (e.g., electoral college vs. college of education) and/or instances when the teacher guides students in identifying synonyms and/or antonyms for content-specific vocabulary. Examples: What is another word for “habitat”? Discussion of word categories (weapons include: guns, knives, rifles, bow and arrow…) CAR Skill 5: Inference. Instances when the teacher assists students in making inferences (reading between the lines) based on the information presented in text, discussion, and/or experiences (background knowledge). For example: asking students, “Looking at this map, why do you think the koala population is dwindling?” or using open-ended questions to elicit an inference from students (e.g., Why?, How do you know?, What do you think?, If you lived during this time, what would life be like?, etc.). Inference is weighted at 3 levels of intensity (described below). Weight Level 1: Instances when the teacher draws inferences directly from text (e.g., coherence inferences, inter-sentence inferences, text-connecting inferences) during instruction. This includes when the teacher models how students can piece together information to make an inference or instances when the teacher is guiding the students on how to make an inference. Examples: Asking a “why” type of question that can be answered through recall of information presented in the text (e.g., Why did Obama win? Because according to this chart, he had more votes; Why is this different than _____?; How do you know that ______?) Weight Level 2: Instances when the teacher draws inferences from a text and manipulates that information (e.g., elaborative/gap-filling inferences) during instruction. Examples: Using background knowledge to create new knowledge or to elaborate on learned knowledge that is not explicitly present in the text (e.g., What is the difference between the population of koalas in 1900 and now?; Do you think most Americans are happy with the election results?; How do you think the soldiers felt when they ran out of rations?; What do you think about _____?); Comparing/contrasting information. For example, the teacher asking, “Do you think these two social groups have the same beliefs about how society should be run?” Weight Level 3: Instances when the teacher uses global inferences (e.g., making inferences with regard to the theme, main idea, or moral of a text) during instruction. Examples: If you could go back in time, what would you change to help koalas in Australia? If all the koalas in the world disappeared forever, what would that do to the environment? Other Literacy Skills: Any other literacy skill that is not included as one of the 5 CAR skills (described below). (These codes will not be weighted.) Spelling: Instances when the teacher focuses on the spelling of a particular word. This includes spelling of specialized vocabulary words. In those instances when the teacher focuses on the spelling of specialized vocabulary, both codes (spelling and specialized vocabulary) will be used. 
In cases where spelling of specialized vocabulary is the focus, a level 1 code will be used for that CAR skill. CLOZE: Instances when the teacher uses “blanks” (e.g., _ _ _ _ _ _) to prompt students to figure out a word, when filling in a particular word in a phrase or sentence (e.g., koala’s have _______ appetites), or when filling in a missing phrase or sentence within an excerpt of text (e.g., Electoral college votes are ________________________.) This can also be accomplished “through the air”. For example, attempting to have students recall a content-specific vocabulary word by saying… “It starts with an ‘A’…”. Decoding Text: Instances when the teacher points to text AND interprets, reads, or deciphers what is written word-by-word or phrase-by-phrase (not just pointing at text). This action MUST include actual decoding text. This code is to be applied when the teacher is assisting the students in decoding a chunk of text, and NOT when students are reading aloud. This can occur in a facilitative nature when the teacher is assisting students in decoding (interpreting, reading, deciphering) a chunk of text through a discussion, or in a directive nature when the teacher is decoding (interpreting, reading, deciphering) text for the students word-by-word or phrase-by-phrase. The duration of this code will be applied at the beginning of the decoding action and may continue through discussions around a chunk of text during teacher-directed or teacher-facilitated codes. This code may “bleed” into peer-to-peer codes as long as it initiates during a teacher code, and that the teacher is actively monitoring the discussion between students while deciphering text. Mathematics Integration: (This code will not be weighted) In some instances, mathematics was used to help students interpret text features, determine how long ago something happened, build/support background knowledge, or to problem solve. Any instance when the teacher integrates mathematics skills (e.g., calculations, making mathematical comparisons such as greater than or less than, etc.) should have this code. Questioning Codes: Instances when the teacher asks questions of the students based on what was read or discussed. These will be directly correlated to Bloom’s Taxonomy. Only frequency and intensity of questioning will be measured. Questioning codes are weighted at 6 levels of intensity (see chart below). Note: The teacher must actually use language (ASL, English, or a combination of both) to ask a question. Instances when a teacher points to several options, implying a question of “which” will not be coded. 228 Rhetorical (RH) questions will not be coded. In instances where a teacher asks a question and does not provide enough wait time for students to answer (just moves on) or does not prompt students to answer (e.g., asks a question, waits, no one answers, teacher decides to move on) WILL be coded, as in many of these instances, it cannot be determined if the teacher intended for the question to be RH. Only CLEAR instances of RH questions will not be coded. For example, when the use of “WHY” in ASL is used as “BECAUSE” (e.g., READ THIS WHY? REVIEW FOR TEST.) Only questions that occur during instruction will be coded. For example, if the teacher asks a student “How was your weekend” as the students are coming into the classroom, it will NOT be coded. Instances when a teacher repeats a student’s question will NOT be coded. Only questions that are generated by the teacher will be coded. Each instance of a question WILL be coded. 
For example, if a teacher asks a question and repeats it three times, it will be coded three separate times.

Table 12: Questioning Codes Contextualized Within Bloom’s Taxonomy

Weight Level 1 (Knowledge level): Questions that require students to demonstrate knowledge of content material through remembering/recall. These are typically “who, what, where, when” questions. Occasionally, “why” is coded at the knowledge level when it is used to recall knowledge-level information, for example, when the teacher says, “These soldiers are sad because it is snowing and they have little food and clothing. WHY are they sad?”
Examples:
• What is this picture of?
• Who is this story about?

Weight Level 2 (Comprehension): Questions that require students to demonstrate comprehension/understanding of content material. Comprehension, translation, and interpretation type questions. This includes “WHY” questions that are used to measure comprehension.
Examples:
• Can you explain, in your own words…?
• What do you think the phrase ______ means?
• What is the main point of this passage?
• Can you describe what the author’s message is?
• What is the author trying to warn us about?
• What does E-X-I-L-E-S mean?
• What happened?

Weight Level 3 (Application): Questions that require students to use a learned/familiar concept in a new situation. These are questions that require students to predict or make judgements about something, “how and why” type questions.
Examples:
• Can you give an example in your life when this happened to you?
• How would you solve the problem of the dwindling koala population in Australia?
• What do you think is happening in this picture?
• Why do you think this battle was lost?
• Why is the United States not a 3rd World Country?

Weight Level 4 (Analysis): Questions that require students to analyze, break down information, compare/contrast, deconstruct information, infer, or outline.
Examples:
• Based on this graph, what part of Australia has the most koalas in danger?
• What are the facts regarding the dwindling population of koalas in Australia? What are some opinions?
• What actions do you think led to the soldiers’ success in this battle?
• How do you think the soldiers felt?
• Why are these different?
• What does this picture NOT show?
• If JFK and Nixon had not had their debate on TV, do you think the outcome would have been the same or different?

Weight Level 5 (Synthesis): Questions that require students to evaluate, explain, or summarize learned information.
Examples:
• If you were a koala living in Australia, what would you tell the humans?
• Is this painting an accurate representation of the events as described in the text? If so, what is missing from this description?

Weight Level 6 (Evaluation): Questions that require students to categorize, modify, or create new information.
Examples:
• What is your opinion about how environmentalists are trying to save the koalas in Australia?
• Do you agree with them?
• What would you do the same/differently?
• Do you think it would be worth it to do _______ in this situation?
• If you were a soldier, what would you have done in this situation?
• How would you organize this information? 
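The codebook above treats each observed behavior as a time-stamped, weighted code, and Appendix J reports those codes as frequencies, total durations, and percentages of observed instruction. For readers who want a concrete picture of that aggregation step, the following is a minimal, illustrative Python sketch; it is not the tooling used in the study, and the segment fields, function names, sample data, and the matched-pair agreement check are assumptions introduced only for illustration.

```python
# Minimal sketch (not the study's analysis tooling): how time-stamped codes of the
# kind defined in Appendix H could be rolled up into the frequency, duration, and
# percent-of-instruction figures reported in Appendix J, plus a simple percent-
# agreement check like the one used for inter-rater reliability. All names and the
# sample data below are illustrative assumptions.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Segment:
    code: str         # e.g., "TDL", "TFL", "A-BGK", "CSV"
    start: float      # minutes from the start of the session
    end: float        # minutes from the start of the session
    level: int = 1    # intensity/weight level (1 = default)

    @property
    def minutes(self) -> float:
        return self.end - self.start

def summarize(segments, total_session_minutes):
    """Per-code frequency, total observed minutes, and % of observed instruction."""
    totals = defaultdict(lambda: {"frequency": 0, "minutes": 0.0})
    for seg in segments:
        totals[seg.code]["frequency"] += 1
        totals[seg.code]["minutes"] += seg.minutes
    for stats in totals.values():
        stats["percent"] = round(100.0 * stats["minutes"] / total_session_minutes, 1)
    return dict(totals)

def percent_agreement(coder_a, coder_b):
    """Percent agreement on segments rated by both coders, matched by position."""
    pairs = list(zip(coder_a, coder_b))
    if not pairs:
        return 0.0
    agreements = sum(1 for a, b in pairs if a.code == b.code and a.level == b.level)
    return round(100.0 * agreements / len(pairs), 1)

# A hypothetical 50-minute session with three coded segments.
session = [
    Segment("TDL", 0.0, 12.5),
    Segment("A-BGK", 12.5, 15.0, level=2),
    Segment("TDL", 15.0, 27.0),
]
print(summarize(session, total_session_minutes=50.0))
# {'TDL': {'frequency': 2, 'minutes': 24.5, 'percent': 49.0},
#  'A-BGK': {'frequency': 1, 'minutes': 2.5, 'percent': 5.0}}
```

A chance-corrected statistic such as Cohen’s kappa could replace the raw percent agreement, but the simple form above mirrors the agreement percentages reported in the Limitations section.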
232 Appendix I: Inter-rater Agreement Code Documentation Form Table 13: Example of Inter-rater Agreement Documentation Form Site Code: Code Type (INST/ CAR) (frequency) Session: Code Length (duration) Code Start/ End Time Length of Session: Code Level (1= default) (intensity) Page ___ of ___ Code Notes/ IOA check Memos (if applicable) Helpful Abbreviations: TDL= teacher-directed learning; TFL= teacher-facilitated learning; P2P= peer-to-peer giving directions; NI= non instructional; IND= student independent work O= other DIR= teacher A-BGK= activate background knowledge; B-BGK= build background knowledge; CSV= content-specific vocabulary; INF= inference; TF= text features; TS= text structure 233 Appendix J: Omnibus Tables of Overall Results Table 14: Overall Results of Duration of Instructional Practices TFL TDL P2P Lvl 1 P2P Lvl 2 106 124 47 15 HO IND Lvl 1 IND Lvl 2 DIR NI Other 8 15 26 74 3 2.7% 8.7% 3.7% 19.5% 3.5% 13.3 min 43.4 min 18.6 min 97.7 min 17.5 min 13 31 20 39 5.8% 21.5% 3.1% 17.2% 28.4 min 105.2 min 15.2 min 84.1 min 25 15 27 83 30.4% 2.9% 4.9% 20.8% 267.6 min 25.0 min 42.0 min 183.3 min 94 58 85 92 13 14.5% 8.7% 9.3% 18.1% 1.7% 105.8 min 63.3 min 67.6 min 132.6 min 12.3 min Teacher A (500 total minutes across 10 sessions) Total Frequency % of Total Observed Instruction 25.1% 28.2% 5.3% 3.3% Total Observed Minutes 125.5 min 140.7 min 26.6 min 16.7 min 26 67 8 4 Not Observed Teacher B (489 total minutes across 9 sessions*) Total Frequency % of Total Observed Instruction 3.1% 48.5% 0.4% 0.4% Total Observed Minutes 15.0 min 237.0 min 2.2 min 1.9 min 64 102 9 4 Not Observed Not Observed Teacher C (880 total minutes across 10 sessions) Total Frequency % of Total Observed Instruction 7.4% 32.0% 1.2% 0.4% Total Observed Minutes 65.0 min 281.8 min 10.8 min 3.6 min 60 161 36 28 Not Observed Not Observed Teacher D (730 total minutes across 10 sessions) Total Frequency % of Total Observed Instruction 6.8% 35.3% 2.8% 2.8% Total Observed Minutes 49.6 min 258.0 min 20.4 min 20.4 min Not Observed *One of the video recorded sessions (Teacher B, session 10) was damaged and not included in the data analysis. Thus, Teacher B’s data is based on 9 sessions. 
Table 15: Overall Results of Duration of CAR Integration

ABGK Lvl 1 / ABGK Lvl 2 / ABGK Lvl 3 / BBGK Lvl 1 / BBGK Lvl 2 / BBGK Lvl 3
Total Frequency: 67, 8, 13, 17, 6, 3
% of Total Observed Instruction: 7.6%, 1.5%, 3.3%, 2.3%, 1.0%, 0.4%
Total Observed Minutes: 38.0 min, 7.4 min, 16.6 min, 11.3 min, 5.2 min, 2.0 min
64, 4, 5, 31, 3
TS Lvl 1 / TS Lvl 2 / TS Lvl 3 / TS Lvl 4
Not Observed (seven cells)
Teacher A (500 total minutes across 10 sessions)
Teacher B (489 total minutes across 9 sessions*)
Total Frequency
% of Total Observed Instruction: 6.7%, 0.6%, 1.2%, 2.8%, 0.8%
Total Observed Minutes: 32.6 min, 3.1 min, 5.7 min, 13.8 min, 3.9 min
Total Frequency: 34, 5, 2, 12, 13
% of Total Observed Instruction: 2.1%, 0.5%, 0.4%, 0.8%, 2.9%
Total Observed Minutes: 18.9 min, 4.0 min, 3.1 min, 7.4 min, 25.5 min
Total Frequency: 8, 1, 4, 5
% of Total Observed Instruction: Not Observed, 0.8%, 0.1%, 0.8%, 1.1%
Total Observed Minutes: 5.9 min, 0.6 min, 6.1 min, 8.3 min
3, Not Observed, 0.6%, 3.1 min
Teacher C (880 total minutes across 10 sessions)
Not Observed (ten cells)
Teacher D (730 total minutes across 10 sessions)

Table 15 (cont'd)
TF Lvl 1 / TF Lvl 2 / TF Lvl 3
57, 6
TF Lvl 4 / CSV Lvl 1 / CSV Lvl 2 / CSV Lvl 3 / CSV Lvl 4 / INF Lvl 1 / INF Lvl 2
53, 9, 13, 7, 64, 9
6.6%, 1.4%, 1.6%, 1.2%, 5.1%, 1.3%
32.8 min, 7.0 min, 8.1 min, 6.1 min, 25.5 min, 6.4 min
67, 6, 1, 31, 5
0.4%, 2.0%, 0.3%
1.8 min, 9.9 min, 1.4 min
INF Lvl 3
Teacher A (500 total minutes across 10 sessions)
Total Frequency: 16
% of Total Observed Instruction: 3.6%, 0.4%, 3.3%
Total Observed Minutes: 18.1 min, 2.0 min, 16.6 min
51, 10, 7
Not Observed, Not Observed
Teacher B (489 total minutes across 9 sessions*)
Total Frequency: 2, Not Observed, Not Observed
% of Total Observed Instruction: 3.0%, 1.2%, 1.0%, 0.2%, 4.7%, 0.2%
Total Observed Minutes: 14.8 min, 5.9 min, 5.1 min, 0.8 min, 23.2 min, 1.2 min
57, 15, 1, 28, 2, 1, 2, 6, 4
1.6%, 0.2%, 0.02%, 0.1%, 0.5%, 0.3%
14.1 min, 1.6 min, 0.2 min, 0.5 min, 4.2 min, 2.9 min
17, 2, 8, 5, 4
Not Observed, Not Observed
0.8%, 0.6%, 0.1%
5.6 min, 4.4 min, 0.7 min
Teacher C (880 total minutes across 10 sessions)
Total Frequency
% of Total Observed Instruction: 1.6%, 0.6%, 0.1%
Total Observed Minutes: 14.2 min, 5.7 min, 0.6 min
32, 7, 9
Not Observed, Not Observed
Teacher D (730 total minutes across 10 sessions)
Total Frequency
% of Total Observed Instruction: 1.4%, 0.5%, 0.6%
Total Observed Minutes: 9.9 min, 3.5 min, 4.1 min
Not Observed
1.3%, 0.2%
9.4 min, 1.1 min

*One of the video recorded sessions (Teacher B, session 10) was damaged and not included in the data analysis. Thus, Teacher B's data is based on 9 sessions.
ABGK = Activation of Background Knowledge; BBGK = Building Background Knowledge; TS = Text Structure; TF = Text Features; CSV = Content-Specific Vocabulary; INF = Inference; Lvl = Level of Intensity
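The inter-rater agreement checks recorded on the Appendix I form are commonly summarized as point-by-point percent agreement. The sketch below shows such a calculation under the assumption that two coders rated the same sequence of intervals; the code labels and the specific statistic are illustrative and are not necessarily the exact agreement procedure reported in the study.

```python
def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Point-by-point percent agreement across paired code decisions."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same number of intervals.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return round(100 * matches / len(coder_a), 1)

# Example: two coders assigning instructional-practice codes to six intervals.
primary   = ["TDL", "TDL", "TFL", "DIR", "TDL", "NI"]
secondary = ["TDL", "TDL", "TFL", "TDL", "TDL", "NI"]
print(percent_agreement(primary, secondary))  # 83.3
```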
REFERENCES

Afflerbach, P. (1990). The influence of prior knowledge on expert readers' main idea construction strategies. Reading Research Quarterly, 25(1), 31-46.

Akamatsu, C. T. (1988). Summarizing stories: The role of instruction in text structure in learning to write. American Annals of the Deaf, 133, 294-302.

Akhondi, M., Malayeri, F. A., & Samad, A. A. (2011). How to teach expository text structure to facilitate reading comprehension. The Reading Teacher, 64(5), 368-372. doi: 10.1598/RT.64.5.9

Allen, T. E. (1986). Patterns of academic achievement among hearing impaired students: 1974 and 1983. In A. N. Schildroth & M. A. Karchmer (Eds.), Deaf children in America. San Diego, CA: College-Hill Press.

Anderson, R. C., & Pearson, P. D. (1984). A schema-theoretic view of basic processes in reading. In P. D. Pearson (Ed.), Handbook of reading research (Vol. 1, pp. 255-291). New York: Longman.

Andrews, J. F., & Mason, J. M. (1991). Strategy usage among deaf and hearing readers. Exceptional Children, 57(6), 536-545.

Armstrong, D., Gosling, A., Weinman, J., & Marteau, T. (1997). The place of inter-rater reliability in qualitative research: An empirical study. Sociology, 31(3), 597-606.

Bader, L. A., & Pearce, D. L. (2008). Bader reading and language inventory (6th ed.). Upper Saddle River, NJ: Prentice Hall.

Bailey, J. (2008). First steps in qualitative data analysis: Transcribing. Family Practice, 25(2), 127-131. doi: 10.1093/fampra/cmn003

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.

Bayer Corporation. (2004). The Bayer facts of science education X: Are the nation's colleges and universities adequately preparing elementary school teachers of tomorrow to teach science? Executive summary. Mission, KS: Market Research Institute.

Beck, I., & McKeown, M. (2007). Increasing young low-income children's oral vocabulary repertoires through rich and focused instruction. Elementary School Journal, 107(3), 251-271.

Biancarosa, G., & Snow, C. E. (2006). Reading next: A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York (2nd ed.). Washington, DC: Alliance for Excellent Education. Retrieved from http://www.all4ed.org/publications/ReadingNext/ReadingNext.pdf

Bluestein, N. (2010). Unlocking text features for determining importance in expository text: A strategy for struggling readers. The Reading Teacher, 63(7), 597-600.

Bloom, B. S. (1956). Taxonomy of educational objectives, Handbook I: The cognitive domain. New York: David McKay Co Inc.

Boyd, E., & George, K. (1973). The effect of science inquiry on the abstract categorization behavior of deaf children. Journal of Research in Science Teaching, 10, 91-99.

Bringham, M., & Hartman, M. C. (2010). What is your prediction? Teaching the metacognitive skill of prediction to a class of sixth- and seventh-grade students who are deaf. American Annals of the Deaf, 155(2), 137-143.

Brown, A. L. (1997). Transforming schools into communities of thinking and learning about serious matters. American Psychologist, 52(4), 399-413.

Brown, P., & Brewer, L. (1996). Cognitive processes of deaf and hearing skilled and less skilled readers. Journal of Deaf Studies and Deaf Education, 1, 263-270.

Bruner, J. S. (1957). Going beyond the information given. Originally published in Contemporary approaches to cognition, and reprinted in J. M. Anglin (Ed.), Jerome S. Bruner: Going beyond the information given. New York: Norton.

Cain, K., Oakhill, J., & Bryant, P. (2004). Children's reading comprehension ability: Concurrent prediction by working memory, verbal ability and component skills. Journal of Educational Psychology, 96(1), 31-42.

Carpenter, T. P., Fennema, E., Peterson, P. L., & Carey, D. A. (1988). Teachers' pedagogical content knowledge of students' problem solving in elementary arithmetic. Journal for Research in Mathematics Education, 19, 385-401.
Carpenter, T. P., Fennema, E., Peterson, P. L., Chiang, C. P., & Loef, M. (1989). Using knowledge of children's mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26(4), 499-531.

Chall, J. S. (1996). Learning to read: The great debate. Fort Worth, TX: Harcourt Brace College Publishers.

Davey, B., LaSasso, C., & Macready, G. B. (1983). A comparison of reading comprehension task performance for deaf and hearing subjects. Journal of Speech and Hearing Research, 26(6), 622-628.

Davey, J. W., Gugiu, P. C., & Coryn, C. L. S. (2010). Quantitative methods for estimating the reliability of qualitative data. Journal of MultiDisciplinary Evaluation, 6(13), 140-162.

Davis, S. J., & Winek, J. (1989). Improving expository writing by increasing background knowledge. Journal of Reading, 33(3), 178-181.

Dedoose Version 5.0.11 [Web application for managing, analyzing, and presenting qualitative and mixed method research data]. (2014). Los Angeles, CA: SocioCultural Research Consultants, LLC (www.dedoose.com).

Dewey, J. (1932/1990). The child and the curriculum / The school and society. Chicago: University of Chicago Press.

Diebold, T. J., & Waldron, M. B. (1988). Designing instructional formats: The effects of verbal and pictorial components on hearing-impaired students' comprehension of science concepts. American Annals of the Deaf, 133(1), 30-35.

Dochy, F. (1994). Prior knowledge and learning. In T. Husen & T. N. Postlethwaite (Eds.), International encyclopedia of education (2nd ed., pp. 4698-4702). Oxford/New York: Pergamon Press.

Donin, J., Doehring, D. G., & Browns, F. (1991). Text comprehension and reading achievement in orally educated hearing impaired children. Discourse Processes, 14, 307-337.

Dymock, S., & Nicholson, T. (2010). "High 5!" Strategies to enhance comprehension of expository text. The Reading Teacher, 64(3), 166-178.

Easterbrooks, S., & Beal-Alvarez, J. (2012). Literacy acquisition in students who are deaf and hard of hearing (Oxford University Press Series, Professional Perspectives on Deafness: Evidence and Applications). New York, NY: Oxford University Press.

Easterbrooks, S. R., Lederberg, A. R., Miller, E. M., Bergeron, J. P., & Connor, C. M. (2008). Emergent literacy skills during early childhood in children with hearing loss: Strengths and weaknesses. Volta Review, 108(2), 91-114.

Easterbrooks, S., & Stephenson, B. (2006). An examination of twenty literacy, science, and mathematics practices used to educate students who are deaf and hard of hearing. American Annals of the Deaf, 151(4), 385-397.

Ewoldt, C., Israelite, N., & Dodds, R. (1992). The ability of deaf students to understand text: A comparison of the perceptions of teachers and students. American Annals of the Deaf, 137, 351-361.

Fagan, M. K., & Pisoni, D. B. (2010). Hearing experience and receptive vocabulary development in deaf children with cochlear implants. Journal of Deaf Studies and Deaf Education, 15(2), 149-161. doi: 10.1093/deafed/enq001

Fennema, E., Carpenter, T. P., Franke, M., Levi, L., Jacobs, V., & Empson, S. (1996). Learning to use children's thinking in mathematics instruction: A longitudinal study. Journal for Research in Mathematics Education, 27(4), 403-434.

Fennema, E., Franke, M., Carpenter, T. P., & Carey, D. A. (1993). Using children's knowledge in instruction. American Educational Research Journal, 30(3), 555-583.

Fisher, D., & Frey, N. (2009). Background knowledge: The missing piece of the comprehension puzzle. Portsmouth, NH: Heinemann.
Fisher, D., & Frey, N. (2014). Close reading as an intervention for struggling middle school readers. Journal of Adolescent and Adult Literacy, 57(5), 367-376. doi: 10.1002/jaal.266

Fisher, D., & Ivey, G. (2005). Literacy and language as learning in content area classes: A departure from "every teacher is a teacher of reading." Action in Teacher Education, 27(2), 3-11.

Frayer, D., Frederick, W. C., & Klausmeier, H. J. (1969). A schema for testing the level of cognitive mastery. Madison, WI: Wisconsin Center for Education Research.

Freire, P. (1993). Pedagogy of the oppressed. New York, NY: The Continuum International Publishing Group.

Furth, H. G. (1966). A comparison of reading test norms of deaf and hearing children. American Annals of the Deaf, 111(2), 461-462.

Furth, H. G. (1966). Thinking without language: Psychological implications of deafness. New York: Free Press.

Gee, J. (2008). Social linguistics and literacy: Ideology in discourse (3rd ed.). New York: Taylor and Francis.

Given, L. M., & Saumure, K. (2008). Trustworthiness. In L. M. Given (Ed.), The Sage encyclopedia of qualitative research methods (Vol. 2, pp. 895-896). Thousand Oaks, CA: Sage.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Company.

Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101(3), 371-395.

Graham, M., Milanowski, A., & Miller, J. (2012). Measuring and promoting inter-rater agreement of teacher and principal performance ratings. Center for Educator Compensation Reform, 1-33.

Hackmann, D. G., & Waters, D. L. (1998). Breaking away from tradition: The Farmington High School restructuring experience. NASSP Bulletin, 82(596), 83-92. doi: 10.1177/019263659808259615

Hartman, D. P. (1977). Considerations in the choice of interobserver reliability measures. Journal of Applied Behavior Analysis, 10, 103-116.

Howell, J. J., & Luckner, J. L. (2003). Helping one deaf student develop content literacy skills: An action research report. Communication Disorders Quarterly, 25(1), 23-27.

Hyde, M., Zevenbergen, R., & Power, D. (2003). Deaf and hard of hearing students' performance on arithmetic word problems. American Annals of the Deaf, 148(1), 56-64.

Jackson, D. W., Paul, P. V., & Smith, J. C. (1997). Prior knowledge and reading comprehension ability of deaf adolescents. Journal of Deaf Studies and Deaf Education, 2(3), 172-184.

Johnson, H. (2004). U.S. deaf education teacher preparation programs: A look at the present and a vision for the future. American Annals of the Deaf, 149(2), 75-91.

Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning, and instruction. Part VII, Prior knowledge (pp. 417-430). Hillsdale, NJ: Lawrence Erlbaum Associates.

Kelly, L. P., & Barac-Cikoja, D. (2007). The comprehension of skilled deaf readers: The roles of word recognition and other potentially critical aspects of competence. In K. Cain & J. Oakhill (Eds.), Children's comprehension problems in oral and written language (pp. 244-280). New York: The Guilford Press.

Kelley, M. J., & Clausen-Grace, N. (2010). Guiding students through expository text with text feature walks. The Reading Teacher, 64, 191-195. doi: 10.1598/RT.64.3.4

Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
Kintsch, W. (2004). The Construction-Integration model of text comprehension and its implications for instruction. In R. B. Ruddell & N. J. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1270-1328). Newark, DE: International Reading Association.

Kispal, A. (2008). Effective teaching of inference skills for reading (DCSF-RR031). Department for Children, Schools and Families. Retrieved from http://dera.ioe.ac.uk/7918/1/DCSF-RR031.pdf

Krashen, S. (1982). Principles and practice in second language acquisition. Oxford, UK: Pergamon.

Lang, H., Huppa, M., Monte, D., Brown, S., Babb, I., & Scheifele, P. (2006). A study of technical signs in science: Implications for lexical database development. Journal of Deaf Studies and Deaf Education, 12(1), 65-79.

Lederberg, A. R., & Beal-Alvarez, J. (2011). Expressing meaning: From communicative intent to building vocabulary. In M. Marschark & P. E. Spencer (Eds.), Oxford handbook of deaf studies, language, and education (2nd ed., pp. 258-275). New York, NY: Oxford University Press.

Lederberg, A. R., & Spencer, P. E. (2001). Vocabulary development of deaf and hard of hearing children. In M. D. Clark, M. Marschark, & M. Karchmer (Eds.), Context, cognition and deafness (pp. 88-112). Washington, DC: Gallaudet University Press.

Leslie, L., & Caldwell, J. (2006). Qualitative reading inventory-4 (4th ed.). Boston, MA: Pearson.

Luckner, J., & Cooke, C. (2010). A summary of the vocabulary research with students who are deaf or hard of hearing. American Annals of the Deaf, 155(1), 38-67.

Luckner, J., Sebald, A. N., Cooney, J., Young, J., & Goodwin Muir, S. (2006). An examination of the evidence-based literacy research in deaf education. American Annals of the Deaf, 150(5), 443-456.

Luetke-Stahlman, B., Griffiths, C., & Montgomery, N. (1998). Development of text structure knowledge as assessed by spoken and signed retellings of a deaf second-grade student. American Annals of the Deaf, 143, 337-346.

Maiorana-Basas, M. (2013). Content area literacy practices in the D/HH classroom: A case study of a middle school social studies teacher. Poster session: Association of College Educators-Deaf/Hard of Hearing National Conference, Santa Fe, NM.

Marmolejo-Ramos, F., Juan, M., Gygax, P., Madden, C. J., & Rosa, S. (2009). Reading between the lines: The activation of background knowledge during text comprehension. Pragmatics & Cognition, 17(1). doi: 10.1075/p&c.17.1.03mar

Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10(3), 439-462. Retrieved July 27, 2014, from http://www.nova.edu/ssss/QR/QR10-4/marques.pdf

Marschark, M., & Hauser, P. C. (2008). Deaf cognition: Foundations and outcomes. New York: Oxford University Press.

Marschark, M., Lang, H. G., & Albertini, J. A. (2002). Educating deaf students: From research to practice. New York: Oxford University Press.

Marshall, P. (2014). Guided reading, a snapshot. K12 Reader: Reading Instruction Resources for Teachers and Parents. Retrieved June 30, 2014, from http://www.k12reader.com/guided-reading-a-snapshot/

Martin, D. S. (2006). The social studies curriculum. In Deaf learners: Developments in curriculum and instruction (pp. 67-74). Washington, DC: Gallaudet University Press.

Martin, D. (n.d.). Deaf learners and successful cognitive achievement. Reaching every learner: Differentiating instruction in theory and practice. LEARN NC. Retrieved September 22, 2014, from http://www.learnnc.org/lp/editions/every-learner/6393

McMackin, M. C., & Lawrence, S. (2001). Investigating inferences: Constructing meaning from expository texts. Reading Horizons, 42(2), 117-137.
McMillan, J., & Schumacher, S. (2001). Research in education (5th ed.). New York: Longman.

Merlin, J. (2005). Bayer facts of science education XI: 2005 [Electronic version]. Retrieved from http://www.bayerus.com/msms/news/facts.cfm?mode=detail&id=survey05

Meyer, B. J. F., & Ray, M. N. (2011). Structure strategy interventions: Increasing reading comprehension of expository text. International Electronic Journal of Elementary Education, 4(1), 127-152. Retrieved from http://www.iejee.com/4_1_2011/8_IEJEE_4_1_Meyer_Ray.pdf

Moores, D. F. (2001). Educating the deaf: Psychology, principles and practices (5th ed.). Boston: Houghton Mifflin.

Morrell, E. (2004). Becoming critical researchers: Literacy and empowerment for urban youth. New York: Peter Lang.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 1-19.

Moss, B. (2005). Making a case and a place for effective content area literacy instruction in the elementary grades. The Reading Teacher, 59(1), 46-55. doi: 10.1598/RT.59.1.5

Nagy, W., & Townsend, D. (2012). Words as tools: Learning academic vocabulary as language acquisition. Reading Research Quarterly, 47(1), 91-108. Retrieved from http://onlinelibrary.wiley.com/doi/10.1002/RRQ.011/full

National Council for the Social Studies. (2007). Social studies in the era of No Child Left Behind: A position statement of the National Council for the Social Studies (NCSS Position Statement). Social Education, 71(5), 284.

National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (NIH Publication No. 00-4754). Washington, DC: U.S. Government Printing Office.

Negin, G. A. (1987). The effects of syntactic segmentation on the reading comprehension of hearing impaired students. Reading Psychology, 8(1), 23-31.

Neufeld, P. (2005). Comprehension instruction in content area classes. The Reading Teacher, 59(4), 302-312.

Nilsson, N. L. (2008). A critical analysis of eight informal reading inventories. The Reading Teacher, 61(7), 526-536. doi: 10.1598/RT.61.7.2

No Child Left Behind Act of 2001, P.L. 107-110, 20 U.S.C. (2002).

Nolan, T. E. (1991). Self-questioning and prediction: Combining metacognitive strategies. Journal of Reading, 35, 132-138.

Nover, S. M., Andrews, J. F., Baker, S., Everhart, V. S., & Bradford, V. S. (2002). Star Schools' USDLC engaged learning project No. 5: ASL/English bilingual staff development project in deaf education; staff development in ASL/English bilingual instruction for deaf students: Evaluation and impact study, final report 1997-2002. Santa Fe, NM: New Mexico School for the Deaf.

Nunes, T., & Moreno, C. (2002). An intervention program for promoting deaf pupils' achievement in mathematics. Journal of Deaf Studies and Deaf Education, 7(2), 120-133.

Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven, CT: Yale University Press.

O'Brien, D. G., & Stewart, R. A. (1990). Preservice teachers' perspectives on why every teacher is not a teacher of reading: A qualitative analysis. Journal of Reading Behavior, 22, 101-129.
O'Brien, D. G., Stewart, R. A., & Moje, E. (1995). Why content area literacy is difficult to infuse into the secondary school: Complexities of curriculum, pedagogy, and school culture. Reading Research Quarterly, 30, 442-463.

Peterson, C. L., Caverly, D. C., Nicholson, S. A., O'Neal, S., & Cusenbary, S. (2000). Building reading proficiency at the secondary school level: A guide to resources. San Marcos, TX: Southwest Texas State University and the Southwest Educational Development Laboratory.

Pittman, A. L. (2008). Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidth. Journal of Speech, Language, and Hearing Research, 51, 785-797.

Pressley, M. (2002). Comprehension strategies instruction: A turn-of-the-century status report. In C. C. Block & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp. 11-27). New York: Guilford.

Paul, P. V., & Jackson, D. W. (1993). Toward a psychology of deafness: Theoretical and empirical perspectives. Boston: Allyn & Bacon.

Pintner, R., & Patterson, D. (1916). A measurement of the language ability of deaf children. Psychological Review, 23, 413-436.

Queen, J. A., Algozzine, R. F., & Eaddy, M. A. (1997). The road we traveled: Scheduling in the 4 x 4 block. NASSP Bulletin, 81, 88-99.

Queen, J. A. (2000). Block scheduling revisited. Phi Delta Kappan, 82, 214-222. doi: 10.1177/003172170008200307

Rapin, I. (1986). Helping deaf children acquire language: Lessons from the past. International Journal of Paediatric Otorhinolaryngology, 11, 213-223.

Renaissance Learning. (2013). STAR reading: Technical manual. Wisconsin Rapids, WI: Author.

Robertson, L. (2009). Literacy and deafness: Listening and spoken language. San Diego, CA: Plural Publishing, Inc.

Robinson, W. P., & Tajfel, H. (1996). Social groups and identities: Developing the legacy of Henri Tajfel. Oxford: Routledge.

Roe, B., & Burns, P. (2010). Informal reading inventory: Pre-primer to twelfth grade (8th ed.). Belmont, CA: Cengage Learning.

Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 35-58). Hillsdale, NJ: Erlbaum.

Sanacore, J., & Palumbo, A. (2009). Understanding the fourth-grade slump: Our point of view. The Educational Forum, 73, 67-74.

Sarachan-Diely, A. (1985). Written narrative of deaf and hearing students: Story recall and inference. Journal of Speech and Hearing Research, 28, 151-159.

Schirmer, B. R. (1993). Constructing meaning from narrative text: Cognitive processes of deaf children. American Annals of the Deaf, 138, 46-54.

Schirmer, B. R. (1997). Boosting reading success: Language, literacy and content area instruction for deaf. Teaching Exceptional Children, 30(1), 52-55.

Schirmer, B. R. (2003). Using verbal protocols to identify the reading strategies of students who are deaf: Do the conclusions of the National Reading Panel apply? Review of Educational Research, 75(1), 83-117.

Schirmer, B. R., & Bond, W. L. (1990). Enhancing the hearing impaired child's knowledge of story structure to improve comprehension of narrative text. Reading Improvement, 27, 242-254.

Schirmer, B. R., & Williams, C. (2011). Approaches to reading instruction. In The Oxford handbook of deaf studies, language, and education (2nd ed., Vol. 1, pp. 115-129). New York: Oxford University Press.

Schirmer, B. R., & Winter, C. R. (1993). Use of cognitive schema by children who are deaf for comprehending narrative text. Reading Improvement, 30(1), 26-34.
Scholastic. (2014). What is guided reading? Retrieved June 30, 2014, from http://teacher.scholastic.com/products/guidedreading/pdfs/whatis.pdf

Serrano Pau, C. (1995). The deaf child and solving problems of arithmetic: The importance of comprehensive reading. American Annals of the Deaf, 140, 287-291.

Shanahan, T. (2012). [Web log message]. Retrieved from http://www.shanahanonliteracy.com/2012/01/disciplinary-literacy-is-not-new-name.html

Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking content area literacy. Harvard Educational Review, 78(1), 40-59.

Singleton, J. L., & Supalla, S. (2003). Assessing children's proficiency in natural signed languages. In The Oxford handbook of deaf studies, language, and education (pp. 289-302). New York: Oxford University Press.

Skrobarcek, S. A., Chang, H. M., Thompson, C., Johnson, J., Atteberry, R., Westbrook, R., & Manus, A. (1997). Collaboration for instructional improvement: Analyzing the academic impact of a block scheduling plan. NASSP Bulletin, 81(589), 104-111. doi: 10.1177/019263659708158915

Skutnabb-Kangas, T. (1990). Language, literacy and minorities. London, England: Minority Rights Group.

Stahl, K. A. D. (2008). The effects of three instructional methods on the reading comprehension and content acquisition of novice readers. Journal of Literacy Research, 40(3), 359-393.

Stahl, S., & Nagy, W. (2006). Teaching word meanings. Mahwah, NJ: Erlbaum.

Strangman, N., & Hall, T. E. (2004). Background knowledge. Wakefield, MA: National Center on Accessing the General Curriculum.

Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9(4).

Squire, J. R. (1983). Composing and comprehending: Two sides of the same basic process. Language Arts, 60, 581-589.

Stake, R. (2000). Case studies. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 435-454). Thousand Oaks, CA: Sage Publications.

Stewart, D. A., & Kluwin, T. N. (2001). Teaching deaf and hard of hearing students: Content, strategies and curriculum. Boston: Allyn & Bacon.

Strassman, B. K. (1997). Metacognition and reading in children who are deaf: A review of research. Journal of Deaf Studies and Deaf Education, 2(3), 140-149.

TechSmith Corporation. (2011). Camtasia (Version 1.2.2) [Software]. Available from http://www.techsmith.com/camtasia-features.html

TechSmith Corporation. (2014). Screencast Pro [Software]. Available from http://www.techsmith.com/camtasia-features.html

Thornton, S. J. (2001). Educating the educators: Rethinking subject matter and methods. Theory Into Practice, 40(1), 72-78.

Traxler, C. B. (2000). Measuring up to performance standards in reading and mathematics: Achievement of selected deaf and hard of hearing students in the national norming of the 9th Edition Stanford Achievement Test. Journal of Deaf Studies and Deaf Education, 5, 337-348.

Trezek, B. J., Wang, Y., & Paul, P. V. (2010). Reading and deafness: Theory, research, and practice. Canada: Delmar Cengage Learning.

Trezek, B. J., Wang, Y., & Paul, P. V. (2011). Processes and components of reading. In The Oxford handbook of deaf studies, language, and education (2nd ed., Vol. 1, pp. 99-114). New York: Oxford University Press.

Trybus, R., & Karchmer, M. (1977). School achievement scores of hearing impaired children: National data on achievement status and growth patterns. American Annals of the Deaf, 122(2), 62-69.
Vacca, R. T., & Vacca, J. L. (2010). Content area reading: Literacy and learning across the curriculum (10th ed.). Boston: Allyn & Bacon.

Valli, C. (2001). Basic syntax types. In Linguistics of American Sign Language: An introduction (5th ed.). Washington, DC: Gallaudet University Press.

Valli, C., Lucas, C., & Mulrooney, K. J. (2005). Linguistics of American Sign Language. Washington, DC: Gallaudet University Press.

Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Walters, J. (2006). Doing qualitative research: A practical handbook. Adult Education Quarterly, 56(2), 166-167. doi: 10.1177/0741713605283440

Wechsler, D. (2009). Wechsler Individual Achievement Test (3rd ed.). San Antonio, TX: Pearson.

Weisberg, R. (1988). 1980s: A change in focus of reading comprehension research: A review of reading/learning disabilities research based on an interactive model of reading. Learning Disability Quarterly, 11, 149-159.

White, R. (1999). A primer on pedagogy: Basic methodology for the beginning social studies teacher. Southern Social Studies Journal, 25(1), 2-11.

Williams, C. (2012). Promoting vocabulary learning in young children who are d/Deaf and hard of hearing: Translating research into practice. American Annals of the Deaf, 156(5), 501-508.

Wondershare Software Co., Ltd. (2013). Wondershare (Version 4.4) [Software]. Available from http://www.wondershare.net/mac-video-converter-ultimate/index.html

Woolsey, M. L., Herring, T. J., & Satterfield, S. T. (2009). Social studies instruction in signing programs for deaf and hard of hearing students: An ecobehavioral assessment. American Annals of the Deaf, 154(4), 400-412.

Yin, R. (1994). Case study research: Design and methods (2nd ed.). Thousand Oaks, CA: Sage.

Yopp, H. K., Yopp, R. H., & Bishop, A. (2009). Vocabulary instruction for academic success. Huntington Beach, CA: Shell Education.

Yoshinaga-Itano, C., & Gravel, J. S. (2001). The evidence for universal hearing screening. American Journal of Audiology, 10(2), 62-63.